| id | title | categories | abstract |
|---|---|---|---|
2501.14379
|
ECTIL: Label-efficient Computational Tumour Infiltrating Lymphocyte
(TIL) assessment in breast cancer: Multicentre validation in 2,340 patients
with breast cancer
|
eess.IV cs.AI cs.CV
|
The level of tumour-infiltrating lymphocytes (TILs) is a prognostic factor
for patients with (triple-negative) breast cancer (BC). Computational TIL
assessment (CTA) has the potential to assist pathologists in this
labour-intensive task, but current CTA models rely heavily on many detailed
annotations. We propose and validate a fundamentally simpler deep learning
based CTA that can be trained in only ten minutes on hundredfold fewer
pathologist annotations. We collected whole slide images (WSIs) with TILs
scores and clinical data of 2,340 patients with BC from six cohorts including
three randomised clinical trials. Morphological features were extracted from
the WSIs using a pathology foundation model. Our
label-efficient Computational stromal TIL assessment model (ECTIL) directly
regresses the TILs score from these features. ECTIL trained on only a few
hundred samples (ECTIL-TCGA) showed concordance with the pathologist over five
heterogeneous external cohorts (r=0.54-0.74, AUROC=0.80-0.94). Training on all
slides of five cohorts (ECTIL-combined) improved results on a held-out test set
(r=0.69, AUROC=0.85). Multivariable Cox regression analyses indicated that
every 10% increase of ECTIL scores was associated with improved overall
survival independent of clinicopathological variables (HR 0.86, p<0.01),
similar to the pathologist score (HR 0.87, p<0.001). We demonstrate that ECTIL
is highly concordant with an expert pathologist and obtains a similar hazard
ratio. ECTIL has a fundamentally simpler design than existing methods and can
be trained on orders of magnitude fewer annotations. Such a CTA may be used to
pre-screen patients for, e.g., immunotherapy clinical trial inclusion, or as a
tool to assist clinicians in the diagnostic work-up of patients with BC. Our
model is available under an open source licence
(https://github.com/nki-ai/ectil).
|
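ECTIL's core idea, directly regressing a slide-level TILs score from foundation-model features, can be sketched with a closed-form ridge regressor. Everything below (the feature dimension, the synthetic scores, and the `fit_ridge`/`predict_tils` helpers) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def fit_ridge(features, scores, lam=1.0):
    """Closed-form ridge regression with a bias column:
    w = (X^T X + lam*I)^{-1} X^T y. A stand-in for ECTIL's
    regression head; the real model is not reproduced here."""
    X = np.column_stack([np.ones(len(features)),
                         np.asarray(features, dtype=float)])
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ np.asarray(scores, dtype=float))

def predict_tils(features, w):
    X = np.column_stack([np.ones(len(features)),
                         np.asarray(features, dtype=float)])
    # Clip predictions to the valid TILs range [0, 100].
    return np.clip(X @ w, 0.0, 100.0)

# Hypothetical slide-level embeddings and synthetic TILs scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
w_true = rng.normal(size=16)
y = np.clip(5.0 * (X @ w_true) + 50.0, 0.0, 100.0)
pred = predict_tils(X, fit_ridge(X, y, lam=0.1))
```

On this synthetic data the fitted scores track the targets closely, which is all the sketch is meant to show: the regression itself is a few lines once the features exist.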
2501.14390
|
Distinguishing Parkinson's Patients Using Voice-Based Feature Extraction
and Classification
|
cs.LG
|
Parkinson's disease (PD) is a progressive neurodegenerative disorder that
impacts motor functions and speech characteristics. This study focuses on
differentiating individuals with Parkinson's disease from healthy controls
through the extraction and classification of speech features. Patients were
further divided into two groups: Med On (patients on medication) and Med Off
(patients off medication). The dataset consisted
of patients and healthy individuals who read a predefined text using the H1N
Zoom microphone in a suitable recording environment at Fırat University
Neurology Department. Speech recordings from PD patients and healthy controls
were analyzed, and 19 key features were extracted, including jitter, luminance,
zero-crossing rate (ZCR), root mean square (RMS) energy, entropy, skewness, and
kurtosis. These features were visualized in graphs and statistically evaluated
to identify distinctive patterns in PD patients. Using MATLAB's Classification
Learner toolbox, several machine learning classification algorithms were
applied to classify the groups, achieving significant accuracy rates. The
accuracy of our 3-layer artificial neural network architecture was also
compared with classical machine learning algorithms. This study highlights the
potential of noninvasive voice analysis combined with machine learning for
early detection and monitoring of PD patients. Future research can improve
diagnostic accuracy by optimizing feature selection and exploring advanced
classification techniques.
|
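Two of the listed speech features, zero-crossing rate (ZCR) and RMS energy, are straightforward to compute. A minimal sketch follows; the sample rate and test tone are hypothetical, and the remaining features (jitter, entropy, skewness, kurtosis, ...) are not reproduced:

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose sign changes."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

def rms_energy(x):
    """Root-mean-square energy of the signal."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

# A pure tone has predictable values: two zero crossings per period,
# and an RMS of 1/sqrt(2) for a unit-amplitude sine.
sr = 8000                             # hypothetical sample rate
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)    # 100 Hz sine, 1 second
```

For the 100 Hz tone above, ZCR comes out near 200/8000 = 0.025 and RMS near 0.707, matching the analytic values.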
2501.14394
|
Reinforcement Learning for Efficient Returns Management
|
cs.LG
|
In retail warehouses, returned products are typically placed in an
intermediate storage until a decision regarding further shipment to stores is
made. The longer products are held in storage, the higher the inefficiency and
costs of the returns management process, since enough storage area has to be
provided and maintained while the products are not placed for sale. To reduce
the average product storage time, we consider an alternative solution where
reallocation decisions for products can be made instantly upon their arrival in
the warehouse, allowing only a limited number of products to still be stored
simultaneously. We transfer the problem to an online multiple knapsack problem
and propose a novel reinforcement learning approach to pack the items
(products) into the knapsacks (stores) such that the overall value (expected
revenue) is maximized. Empirical evaluations on simulated data demonstrate
that, compared to the usual offline decision procedure, our approach comes with
a performance gap of only 3% while significantly reducing the average storage
time of a product by 96%.
|
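The online multiple-knapsack framing can be illustrated with a simple threshold heuristic that decides, on arrival, whether to ship an item to a store or hold it in intermediate storage. This is a hedged stand-in for the paper's learned RL policy, with invented item values and capacities:

```python
def assign_online(items, capacities, threshold=0.5):
    """Greedy online policy: place each arriving item (value, weight)
    into the store (knapsack) with the most remaining capacity, but
    only if its value density clears a threshold; otherwise keep it
    in intermediate storage. A heuristic illustration only -- the
    paper's RL approach learns this decision instead.
    """
    remaining = list(capacities)
    decisions = []                      # knapsack index, or None (stored)
    for value, weight in items:
        density = value / weight
        best = max(range(len(remaining)), key=lambda k: remaining[k])
        if density >= threshold and remaining[best] >= weight:
            remaining[best] -= weight
            decisions.append(best)
        else:
            decisions.append(None)      # held in intermediate storage
    return decisions, remaining
```

An RL policy replaces the fixed `threshold` rule with a learned value estimate, which is what lets it trade off expected revenue against storage time.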
2501.14399
|
Handling Heterophily in Recommender Systems with Wavelet Hypergraph
Diffusion
|
cs.IR cs.AI cs.DB cs.LG cs.SI
|
Recommender systems are pivotal in delivering personalised user experiences
across various domains. However, capturing the heterophily patterns and the
multi-dimensional nature of user-item interactions poses significant
challenges. To address this, we introduce FWHDNN (Fusion-based Wavelet
Hypergraph Diffusion Neural Networks), an innovative framework aimed at
advancing representation learning in hypergraph-based recommendation tasks. The
model incorporates three key components: (1) a cross-difference relation
encoder leveraging heterophily-aware hypergraph diffusion to adapt
message-passing for diverse class labels, (2) a multi-level cluster-wise
encoder employing wavelet transform-based hypergraph neural network layers to
capture multi-scale topological relationships, and (3) an integrated
multi-modal fusion mechanism that combines structural and textual information
through intermediate and late-fusion strategies. Extensive experiments on
real-world datasets demonstrate that FWHDNN surpasses state-of-the-art methods
in accuracy, robustness, and scalability in capturing high-order
interconnections between users and items.
|
2501.14400
|
SKIL: Semantic Keypoint Imitation Learning for Generalizable
Data-efficient Manipulation
|
cs.RO cs.AI
|
Real-world tasks such as garment manipulation and table rearrangement demand
robots to perform generalizable, highly precise, and long-horizon actions.
Although imitation learning has proven to be an effective approach for teaching
robots new skills, large amounts of expert demonstration data are still
indispensable for these complex tasks, resulting in high sample complexity and
costly data collection. To address this, we propose Semantic Keypoint Imitation
Learning (SKIL), a framework that automatically obtains semantic keypoints
with the help of vision foundation models and forms descriptors of these
keypoints, enabling efficient imitation learning of complex robotic tasks
with significantly lower sample complexity. In real world experiments, SKIL
doubles the performance of baseline methods in tasks such as picking a cup or
mouse, while demonstrating exceptional robustness to variations in objects,
environmental changes, and distractors. For long-horizon tasks like hanging a
towel on a rack, where previous methods fail completely, SKIL achieves a mean
success rate of 70% with as few as 30 demonstrations. Furthermore, SKIL
naturally supports cross-embodiment learning due to its semantic keypoint
abstraction; our experiments demonstrate that even human videos bring
considerable improvement to learning performance. All these results
demonstrate SKIL's success in achieving data-efficient, generalizable
robotic learning. Visualizations and code are available at:
https://skil-robotics.github.io/SKIL-robotics/.
|
2501.14401
|
CVOCSemRPL: Class-Variance Optimized Clustering, Semantic Information
Injection and Restricted Pseudo Labeling based Improved Semi-Supervised
Few-Shot Learning
|
cs.CV
|
Few-shot learning has been extensively explored to address problems where the
amount of labeled samples is very limited for some classes. In the
semi-supervised few-shot learning setting, substantial quantities of unlabeled
samples are available. Such unlabeled samples are generally cheaper to obtain
and can be used to improve the few-shot learning performance of the model. Some
of the recent methods for this setting rely on clustering to generate
pseudo-labels for the unlabeled samples. Since the quality of the
representation learned by the model heavily influences the effectiveness of
clustering, this might also lead to incorrect labeling of the unlabeled samples
and consequently lead to a drop in the few-shot learning performance. We
propose an approach for semi-supervised few-shot learning that performs a
class-variance optimized clustering in order to improve the effectiveness of
clustering the labeled and unlabeled samples in this setting. It also optimizes
the clustering-based pseudo-labeling process using a restricted pseudo-labeling
approach and performs semantic information injection in order to improve the
semi-supervised few-shot learning performance of the model. We experimentally
demonstrate that our proposed approach significantly outperforms recent
state-of-the-art methods on the benchmark datasets.
|
2501.14404
|
Kolmogorov Arnold Neural Interpolator for Downscaling and Correcting
Meteorological Fields from In-Situ Observations
|
cs.CV
|
Obtaining accurate weather forecasts at station locations is a critical
challenge due to systematic biases arising from the mismatch between
multi-scale, continuous atmospheric characteristics and their discrete, gridded
representations. Previous works have primarily focused on modeling gridded
meteorological data, inherently neglecting the off-grid, continuous nature of
atmospheric states and leaving such biases unresolved. To address this, we
propose the Kolmogorov Arnold Neural Interpolator (KANI), a novel framework
that redefines meteorological field representation as continuous neural
functions derived from discretized grids. Grounded in the Kolmogorov Arnold
theorem, KANI captures the inherent continuity of atmospheric states and
leverages sparse in-situ observations to correct these biases systematically.
Furthermore, KANI introduces an innovative zero-shot downscaling capability,
guided by high-resolution topographic textures without requiring
high-resolution meteorological fields for supervision. Experimental results
across three sub-regions of the continental United States indicate that KANI
achieves an accuracy improvement of 40.28% for temperature and 67.41% for wind
speed, a substantial gain over traditional interpolation methods. KANI thus
enables continuous representation of meteorological variables through neural
networks, transcending the limitations of conventional
grid-based representations.
|
2501.14406
|
Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning
of Language Models
|
cs.DC cs.AI cs.LG cs.NI
|
Pre-trained Language Models (PLMs) have demonstrated their superiority and
versatility in modern Natural Language Processing (NLP), effectively adapting
to various downstream tasks through further fine-tuning. Federated
Parameter-Efficient Fine-Tuning (FedPEFT) has emerged as a promising solution
to address privacy and efficiency challenges in distributed training for PLMs
on mobile devices. However, our measurements reveal two key limitations of
FedPEFT: heterogeneous data leads to significant performance degradation, and a
fixed parameter configuration results in communication inefficiency. To
overcome these limitations, we propose FedARA, a novel Federated Adaptive Rank
Allocation for parameter-efficient fine-tuning of language models.
Specifically, FedARA employs truncated singular value decomposition (SVD)
adaptation to enhance flexibility and expressiveness, significantly mitigating
the adverse effects of data heterogeneity. Subsequently, it utilizes dynamic
rank allocation to progressively identify critical ranks, effectively improving
communication efficiency. Lastly, it leverages rank-based module pruning to
remove inactive modules, steadily reducing local training time and peak memory
usage in each round. Extensive experiments show that FedARA consistently
outperforms weak baselines by an average of 8.49% and strong baselines by
6.95% across various datasets under data heterogeneity while significantly
improving communication efficiency by 2.40x. Moreover, experiments on
AGX Orin, Orin Nano and Raspberry Pi 5 devices demonstrate substantial
decreases in total training time and energy consumption by up to 48.90% and
46.95%, respectively.
|
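The truncated-SVD adaptation at FedARA's core can be illustrated by compressing a weight update to rank r. The matrix sizes and rank below are hypothetical, and the federated aggregation, dynamic rank schedule, and module pruning are not reproduced:

```python
import numpy as np

def truncate_rank(delta_w, r):
    """Compress a weight update to rank r via truncated SVD. The
    returned factors multiply to the best rank-r approximation of
    delta_w (Eckart-Young), so only r*(out+in) numbers need to be
    communicated instead of out*in.
    """
    U, s, Vt = np.linalg.svd(np.asarray(delta_w, dtype=float),
                             full_matrices=False)
    A = U[:, :r] * s[:r]        # shape (out, r)
    B = Vt[:r, :]               # shape (r, in)
    return A, B

rng = np.random.default_rng(1)
# A hypothetical update that happens to be exactly rank 2.
dW = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 6))
A, B = truncate_rank(dW, r=2)
```

Because the example update is exactly rank 2, the factors reconstruct it perfectly; for a full-rank update the same call gives the closest rank-r approximation instead.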
2501.14413
|
Context-CrackNet: A Context-Aware Framework for Precise Segmentation of
Tiny Cracks in Pavement Images
|
cs.CV
|
The accurate detection and segmentation of pavement distresses, particularly
tiny and small cracks, are critical for early intervention and preventive
maintenance in transportation infrastructure. Traditional manual inspection
methods are labor-intensive and inconsistent, while existing deep learning
models struggle with fine-grained segmentation and computational efficiency. To
address these challenges, this study proposes Context-CrackNet, a novel
encoder-decoder architecture featuring the Region-Focused Enhancement Module
(RFEM) and Context-Aware Global Module (CAGM). These innovations enhance the
model's ability to capture fine-grained local details and global contextual
dependencies, respectively. Context-CrackNet was rigorously evaluated on ten
publicly available crack segmentation datasets, covering diverse pavement
distress scenarios. The model consistently outperformed 9 state-of-the-art
segmentation frameworks, achieving superior performance metrics such as mIoU
and Dice score, while maintaining competitive inference efficiency. Ablation
studies confirmed the complementary roles of RFEM and CAGM, with notable
improvements in mIoU and Dice score when both modules were integrated.
Additionally, the model's balance of precision and computational efficiency
highlights its potential for real-time deployment in large-scale pavement
monitoring systems.
|
2501.14414
|
SoK: What Makes Private Learning Unfair?
|
cs.LG cs.CR
|
Differential privacy has emerged as the most studied framework for
privacy-preserving machine learning. However, recent studies show that
enforcing differential privacy guarantees can not only significantly degrade
the utility of the model, but also amplify existing disparities in its
predictive performance across demographic groups. Although there is extensive
research on the identification of factors that contribute to this phenomenon,
we still lack a complete understanding of the mechanisms through which
differential privacy exacerbates disparities. The literature on this problem is
muddled by varying definitions of fairness, differential privacy mechanisms,
and inconsistent experimental settings, often leading to seemingly
contradictory results.
This survey provides the first comprehensive overview of the factors that
contribute to the disparate effect of training models with differential privacy
guarantees. We discuss their impact and analyze their causal role in such a
disparate effect. Our analysis is guided by a taxonomy that categorizes these
factors by their position within the machine learning pipeline, allowing us to
draw conclusions about their interaction and the feasibility of potential
mitigation strategies. We find that factors related to the training dataset and
the underlying distribution play a decisive role in the occurrence of disparate
impact, highlighting the need for research on these factors to address the
issue.
|
2501.14426
|
CENTS: Generating synthetic electricity consumption time series for rare
and unseen scenarios
|
cs.LG
|
Recent breakthroughs in large-scale generative modeling have demonstrated the
potential of foundation models in domains such as natural language, computer
vision, and protein structure prediction. However, their application in the
energy and smart grid sector remains limited due to the scarcity and
heterogeneity of high-quality data. In this work, we propose a method for
creating high-fidelity electricity consumption time series data for rare and
unseen context variables (e.g. location, building type, photovoltaics). Our
approach, Context Encoding and Normalizing Time Series Generation, or CENTS,
includes three key innovations: (i) A context normalization approach that
enables inverse transformation for time series context variables unseen during
training, (ii) a novel context encoder to condition any state-of-the-art
time-series generator on arbitrary numbers and combinations of context
variables, and (iii) a framework for training this context encoder jointly with a
time-series generator using an auxiliary context classification loss designed
to increase expressivity of context embeddings and improve model performance.
We further provide a comprehensive overview of different evaluation metrics for
generative time series models. Our results highlight the efficacy of the
proposed method in generating realistic household-level electricity consumption
data, paving the way for training larger foundation models in the energy domain
on synthetic as well as real-world data.
|
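The context-normalization ingredient (i) can be sketched as per-context standardization with an exact inverse transform. The fallback to global statistics for contexts unseen during fitting is an assumption made here for illustration; CENTS instead handles unseen context variables through its learned encoder:

```python
import numpy as np

class ContextNormalizer:
    """Standardize each series by per-context statistics and invert
    the transform later. Unseen contexts fall back to global
    statistics in this sketch (an illustrative simplification).
    """
    def fit(self, series_by_context):
        self.stats = {c: (float(np.mean(x)), float(np.std(x)) + 1e-12)
                      for c, x in series_by_context.items()}
        allx = np.concatenate([np.asarray(x, dtype=float)
                               for x in series_by_context.values()])
        self.global_stats = (float(np.mean(allx)),
                             float(np.std(allx)) + 1e-12)
        return self

    def transform(self, x, context):
        mu, sd = self.stats.get(context, self.global_stats)
        return (np.asarray(x, dtype=float) - mu) / sd

    def inverse_transform(self, z, context):
        mu, sd = self.stats.get(context, self.global_stats)
        return np.asarray(z, dtype=float) * sd + mu
```

The round trip `inverse_transform(transform(x, c), c)` recovers `x` exactly, which is the property the paper needs so that generated normalized series can be mapped back to physical consumption values.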
2501.14427
|
GraphSOS: Graph Sampling and Order Selection to Help LLMs Understand
Graphs Better
|
cs.LG
|
The success of Large Language Models (LLMs) in various domains has led
researchers to apply them to graph-related problems by converting graph data
into natural language text. However, unlike graph data, natural language
inherently has sequential order. We observe a counter-intuitive fact that when
the order of nodes or edges in the natural language description of a graph is
shuffled, despite describing the same graph, model performance fluctuates
between high performance and random guessing. Additionally, due to LLMs'
limited input context length, current methods typically randomly sample
neighbors of target nodes as representatives of their neighborhood, which may
not always be effective for accurate reasoning. To address these gaps, we
introduce GraphSOS (Graph Sampling and Order Selection). This novel model
framework features an Order Selector Module to ensure proper serialization
order of the graph and a Subgraph Sampling Module to sample subgraphs with
better structure for better reasoning. Furthermore, we propose Graph CoT
obtained through distillation, and enhance LLMs' reasoning and zero-shot
learning capabilities for graph tasks through instruction tuning. Experiments
on multiple datasets for node classification and graph question-answering
demonstrate that GraphSOS improves LLMs' performance and generalization ability
on graph tasks.
|
2501.14430
|
Statistical Verification of Linear Classifiers
|
stat.ML cs.LG math.PR math.ST stat.AP stat.TH
|
We propose a homogeneity test closely related to the concept of linear
separability between two samples. Using the test one can answer the question
whether a linear classifier is merely "random" or effectively captures
differences between two classes. We focus on establishing upper bounds for the
test's p-value when applied to two-dimensional samples. Specifically,
for normally distributed samples we experimentally demonstrate that the upper
bound is highly accurate. Using this bound, we evaluate classifiers designed to
detect ER-positive breast cancer recurrence based on gene pair expression. Our
findings confirm the significance of the IGFBP6 and ELOVL5 genes in this process.
|
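A generic way to check whether a linear classifier is merely "random" is a label-permutation test. The sketch below uses a nearest-centroid linear rule and Monte-Carlo permutations; it is not the paper's analytic upper bound on the p-value, and the data are synthetic:

```python
import numpy as np

def nearest_centroid_accuracy(X, y):
    """Training accuracy of a linear nearest-centroid rule."""
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X - c1, axis=1)
            < np.linalg.norm(X - c0, axis=1)).astype(int)
    return float(np.mean(pred == y))

def permutation_p_value(X, y, n_perm=500, seed=0):
    """Monte-Carlo p-value: how often does a random label shuffle
    match or beat the observed accuracy? Small values indicate the
    classifier captures a real difference between the classes.
    """
    rng = np.random.default_rng(seed)
    observed = nearest_centroid_accuracy(X, y)
    hits = sum(nearest_centroid_accuracy(X, rng.permutation(y)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```

On two well-separated two-dimensional samples the observed accuracy is near 1 and almost no shuffle reaches it, so the p-value is small; on homogeneous samples shuffles match the observed accuracy often and the p-value is large.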
2501.14431
|
Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes
Domains
|
cs.CL cs.LG
|
Large Language Models (LLMs) are widely applied to downstream domains.
However, current LLMs for high-stakes domain tasks, such as financial
investment and legal QA, typically generate brief answers without reasoning
processes and explanations. This limits users' confidence in making decisions
based on their responses. While original CoT shows promise, it lacks
self-correction mechanisms during reasoning. This work introduces Domaino1s,
which enhances LLMs' reasoning capabilities on domain tasks through supervised
fine-tuning and tree search. We construct CoT-stock-2k and CoT-legal-2k
datasets for fine-tuning models that activate domain-specific reasoning steps
based on their judgment. Additionally, we propose Selective Tree Exploration to
spontaneously explore solution spaces and sample optimal reasoning paths to
improve performance. We also introduce PROOF-Score, a new metric for evaluating
domain models' explainability, complementing traditional accuracy metrics with
richer assessment dimensions. Extensive experiments on stock investment
recommendation and legal reasoning QA tasks demonstrate Domaino1s's leading
performance and explainability. Our code is available at
https://anonymous.4open.science/r/Domaino1s-006F/.
|
2501.14432
|
CAMEO: Autocorrelation-Preserving Line Simplification for Lossy Time
Series Compression
|
cs.DB cs.IR cs.IT math.IT
|
Time series data from a variety of sensors and IoT devices need effective
compression to reduce storage and I/O bandwidth requirements. While most time
series databases and systems rely on lossless compression, lossy techniques
offer even greater space-saving with a small loss in precision. However, the
unknown impact on downstream analytics applications requires a semi-manual
trial-and-error exploration. We initiate work on lossy compression that
provides guarantees on complex statistical features (which are strongly
correlated with the accuracy of the downstream analytics). Specifically, we
propose a new lossy compression method that provides guarantees on the
autocorrelation and partial-autocorrelation functions (ACF/PACF) of a time
series. Our method leverages line simplification techniques as well as
incremental maintenance of aggregates, blocking, and parallelization strategies
for effective and efficient compression. The results show that our method
improves compression ratios by 2x on average and up to 54x on selected
datasets, compared to previous lossy and lossless compression methods.
Moreover, we maintain -- and sometimes even improve -- the forecasting accuracy
by preserving the autocorrelation properties of the time series. Our framework
is extensible to multivariate time series and other statistical features of the
time series.
|
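The statistic CAMEO guarantees, the autocorrelation function, is easy to compute directly. A minimal sketch for comparing an original series against a lossy reconstruction follows; the line simplification and guarantee machinery themselves are not reproduced:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])

def max_acf_deviation(original, reconstructed, max_lag=20):
    """Worst-case ACF error between a series and its lossy
    reconstruction -- the kind of quantity CAMEO bounds."""
    return float(np.max(np.abs(acf(original, max_lag)
                               - acf(reconstructed, max_lag))))
```

A compressor with an ACF guarantee keeps `max_acf_deviation` below a user-set bound; for a lossless round trip the deviation is exactly zero.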
2501.14434
|
Remining Hard Negatives for Generative Pseudo Labeled Domain Adaptation
|
cs.IR cs.LG
|
Dense retrievers have demonstrated significant potential for neural
information retrieval; however, they exhibit a lack of robustness to domain
shifts, thereby limiting their efficacy in zero-shot settings across diverse
domains. A state-of-the-art domain adaptation technique is Generative Pseudo
Labeling (GPL). GPL uses synthetic query generation and initially mined hard
negatives to distill knowledge from a cross-encoder to dense retrievers in the
target domain. In this paper, we analyze the documents retrieved by the
domain-adapted model and discover that these are more relevant to the target
queries than those of the non-domain-adapted model. We then propose refreshing
the hard-negative index during the knowledge distillation phase to mine better
hard negatives. Our remining R-GPL approach boosts ranking performance in 13/14
BEIR datasets and 9/12 LoTTE datasets. Our contributions are (i) analyzing hard
negatives returned by domain-adapted and non-domain-adapted models and (ii)
applying the GPL training with and without hard-negative re-mining in LoTTE and
BEIR datasets.
|
2501.14438
|
Data-efficient Performance Modeling via Pre-training
|
cs.PL cs.DC cs.LG
|
Performance models are essential for automatic code optimization, enabling
compilers to predict the effects of code transformations on performance and
guide search for optimal transformations. Building state-of-the-art performance
models with deep learning, however, requires vast labeled datasets of random
programs -- an expensive and time-consuming process, stretching over months.
This paper introduces a self-supervised pre-training scheme with autoencoders
to reduce the need for labeled data. By pre-training on a large dataset of
random programs, the autoencoder learns representations of code and
transformations, which are then used to embed programs for the performance
model. Implemented in the Tiramisu autoscheduler, our approach improves model
accuracy with less data. For example, to achieve a MAPE of 20.72%, the original
model requires 18 million data points, whereas our method achieves a similar
MAPE of 22.44% with only 3.6 million data points, reducing data requirements by
5x.
|
2501.14439
|
Optimizing Human Pose Estimation Through Focused Human and Joint Regions
|
cs.CV
|
Human pose estimation has given rise to a broad spectrum of novel and
compelling applications, including action recognition, sports analysis, as well
as surveillance. However, accurate video pose estimation remains an open
challenge. One aspect that has been overlooked so far is that existing methods
learn motion clues from all pixels rather than focusing on the target human
body, making them easily misled and disrupted by unimportant information such
as background changes or movements of other people. Additionally, while the
current Transformer-based pose estimation methods have demonstrated impressive
performance with global modeling, they struggle with local context perception
and precise positional identification. In this paper, we try to tackle these
challenges from three aspects: (1) We propose a bilayer Human-Keypoint Mask
module that performs coarse-to-fine visual token refinement, which gradually
zooms in on the target human body and keypoints while masking out unimportant
figure regions. (2) We further introduce a novel deformable cross attention
mechanism and a bidirectional separation strategy to adaptively aggregate
spatial and temporal motion clues from constrained surrounding contexts. (3) We
mathematically formulate the deformable cross attention, constraining the
model to focus solely on regions centered on the target person's body.
Empirically, our method achieves state-of-the-art performance on three
large-scale benchmark datasets. A remarkable highlight is that our method
achieves an 84.8 mean Average Precision (mAP) on the challenging wrist joint,
which significantly outperforms the 81.5 mAP achieved by the current
state-of-the-art method on the PoseTrack2017 dataset.
|
2501.14440
|
Convergence of gradient based training for linear Graph Neural Networks
|
cs.LG cs.DM cs.NA cs.SI math.NA
|
Graph Neural Networks (GNNs) are powerful tools for addressing learning
problems on graph structures, with a wide range of applications in molecular
biology and social networks. However, the theoretical foundations underlying
their empirical performance are not well understood. In this article, we
examine the convergence of gradient dynamics in the training of linear GNNs.
Specifically, we prove that the gradient flow training of a linear GNN with
mean squared loss converges to the global minimum at an exponential rate. The
convergence rate depends explicitly on the initial weights and the graph shift
operator, which we validate on synthetic datasets from well-known graph models
and real-world datasets. Furthermore, we discuss the gradient flow that
minimizes the total weights at the global minimum. In addition to the gradient
flow, we study the convergence of linear GNNs under gradient descent training,
an iterative scheme viewed as a discretization of gradient flow.
|
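The gradient-flow setting can be made concrete for a one-layer linear GNN. The notation below is illustrative (the paper's exact model and symbols may differ): with graph shift operator S, features X, targets Y, and weights W, the mean-squared loss and its gradient flow are

```latex
% Illustrative formalization; S = graph shift operator, X = features,
% Y = targets, W = trainable weights of a one-layer linear GNN.
L(W) = \tfrac{1}{2}\,\lVert S X W - Y \rVert_F^2 ,
\qquad
\dot{W}(t) = -\nabla L(W(t)) = -(S X)^{\top}\bigl(S X\, W(t) - Y\bigr).
% If SX has full column rank with smallest singular value
% \sigma_{\min} > 0, the loss contracts exponentially:
L(W(t)) - L^{\star} \;\le\; e^{-2\sigma_{\min}^{2} t}\,
\bigl(L(W(0)) - L^{\star}\bigr).
```

The dependence of the contraction rate on the singular values of SX is what makes the convergence rate depend explicitly on the graph shift operator, as the abstract states.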
2501.14441
|
Impact of Batch Normalization on Convolutional Network Representations
|
cs.LG
|
Batch normalization (BatchNorm) is a popular layer normalization technique
used when training deep neural networks. It has been shown to enhance the
training speed and accuracy of deep learning models. However, the mechanism by
which BatchNorm achieves these benefits is an active area of research, and
different perspectives have been proposed. In this paper, we investigate the
effect of BatchNorm on the resulting hidden representations, that is, the
vectors of activation values formed as samples are processed at each hidden
layer. Specifically, we consider the sparsity of these representations, as well
as their implicit clustering -- the creation of groups of representations that
are similar to some extent. We contrast image classification models trained
with and without batch normalization and observe consistent differences.
These findings suggest that BatchNorm's effect on representational
sparsity is not a significant factor affecting generalization, while the
representations of models trained with BatchNorm tend to show more advantageous
clustering characteristics.
|
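The two quantities compared in the paper, activation sparsity with and without BatchNorm, can be sketched in a few lines. The layer size and the synthetic, mean-shifted pre-activations below are hypothetical:

```python
import numpy as np

def batch_norm(h, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard BatchNorm forward pass (rows = samples in the batch)."""
    mu = h.mean(axis=0)
    var = h.var(axis=0)
    return gamma * (h - mu) / np.sqrt(var + eps) + beta

def relu_sparsity(h):
    """Fraction of activations that are exactly zero after ReLU --
    one notion of the representational sparsity studied here."""
    return float(np.mean(np.maximum(h, 0.0) == 0.0))

rng = np.random.default_rng(0)
pre = rng.normal(loc=2.0, size=(256, 64))   # shifted pre-activations
# Without normalization most units stay positive; BatchNorm recenters
# them, so roughly half the ReLU outputs become exactly zero.
s_plain = relu_sparsity(pre)
s_bn = relu_sparsity(batch_norm(pre))
```

On this toy input the normalized layer is far sparser after ReLU, illustrating why normalization placement can change the sparsity statistics the paper measures.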
2501.14442
|
New scenarios and trends in non-traditional laboratories from 2000 to
2020
|
cs.CY cs.SY eess.SY
|
For educational institutions in STEM areas, the provision of practical
learning scenarios is, traditionally, a major concern. In the 21st century, the
explosion of ICTs, as well as the universalization of low-cost hardware, have
allowed the proliferation of technical solutions for any field; in the case of
experimentation, encouraging the emergence and proliferation of non-traditional
experimentation platforms. This movement has resulted in enriched practical
environments, with wider adaptability for both students and teachers. In this
paper, the evolution of scholarly production has been analyzed at the global
level from 2000 to 2020. Current and emerging experimentation scenarios have
been identified, specifying the scope and boundaries between them.
|
2501.14443
|
Learning more with the same effort: how randomization improves the
robustness of a robotic deep reinforcement learning agent
|
cs.RO cs.AI
|
The industrial application of Deep Reinforcement Learning (DRL) is frequently
slowed down because of the inability to generate the experience required to
train the models. Collecting data often involves considerable time and economic
effort that is unaffordable in most cases. Fortunately, devices like robots can
be trained with synthetic experience thanks to virtual environments. With this
approach, the sample efficiency problems of artificial agents are mitigated,
but another issue arises: the need for efficiently transferring the synthetic
experience into the real world (sim-to-real).
This paper analyzes the robustness of a state-of-the-art sim-to-real
technique known as progressive neural networks (PNNs) and studies how adding
diversity to the synthetic experience can complement it. To better understand
the drivers that lead to a lack of robustness, the robotic agent is still
tested in a virtual environment to ensure total control on the divergence
between the simulated and real models.
The results show that a PNN-like agent exhibits a substantial decrease in its
robustness at the beginning of the real training phase. Randomizing certain
variables during simulation-based training significantly mitigates this issue.
On average, the increase in the model's accuracy is around 25% when diversity
is introduced in the training process. This improvement can be translated into
a decrease in the required real experience for the same final robustness
performance. Notwithstanding, adding real experience to agents should still be
beneficial regardless of the quality of the virtual experience fed into the
agent.
|
2501.14451
|
MARL-OT: Multi-Agent Reinforcement Learning Guided Online Fuzzing to
Detect Safety Violation in Autonomous Driving Systems
|
cs.SE cs.RO
|
Autonomous Driving Systems (ADSs) are safety-critical, as real-world safety
violations can result in significant losses. Rigorous testing is essential
before deployment, with simulation testing playing a key role. However, ADSs
are typically complex, consisting of multiple modules such as perception and
planning, or well-trained end-to-end autonomous driving systems. Offline
methods, such as genetic algorithms (GAs), can only generate predefined
trajectories for dynamic objects and, owing to their evolutionary nature,
struggle to trigger safety violations for ADSs rapidly and efficiently across
different scenarios. Online methods, such as single-agent reinforcement
learning (RL), can
quickly adjust the dynamics' trajectory online to adapt to different scenarios,
but they struggle to capture complex corner cases of ADS arising from the
intricate interplay among multiple vehicles. Multi-agent reinforcement learning
(MARL) excels at cooperative tasks but faces its own challenges, particularly
with convergence. This paper introduces
MARL-OT, a scalable framework that leverages MARL to detect safety violations
of ADS resulting from surrounding vehicles' cooperation. MARL-OT employs MARL
for high-level guidance, triggering various dangerous scenarios for the
rule-based online fuzzer to explore potential safety violations of ADS, thereby
generating dynamic, realistic safety violation scenarios. Our approach improves
the detected safety violation rate by up to 136.2% compared to the
state-of-the-art (SOTA) testing technique.
|
2501.14452
|
On the Rate-Exponent Region of Integrated Sensing and Communications
With Variable-Length Coding
|
cs.IT eess.SP math.IT
|
This paper considers the achievable rate-exponent region of integrated
sensing and communication systems in the presence of variable-length coding
with feedback. This scheme is fundamentally different from earlier studies, as
the coding methods that utilize feedback impose different constraints on the
codewords. The focus herein is specifically on the Gaussian channel, where
three achievable regions are analytically derived and numerically evaluated. In
contrast to a setting without feedback, we show that a trade-off exists between
the operations of sensing and communications.
|
2501.14453
|
Optimal Strategies for Federated Learning Maintaining Client Privacy
|
cs.LG
|
Federated Learning (FL) emerged as a learning method to enable the server to
train models over data distributed among various clients. These clients are
protective about their data being leaked to the server, any other client, or an
external adversary, and hence, locally train the model and share it with the
server rather than sharing the data. The introduction of sophisticated
inferencing attacks enabled the leakage of information about data through
access to model parameters. To tackle this challenge, privacy-preserving
federated learning aims to achieve differential privacy through learning
algorithms like DP-SGD. However, such methods involve adding noise to the
model, data, or gradients, reducing the model's performance.
This work provides a theoretical analysis of the tradeoff between model
performance and communication complexity of the FL system. We formally prove
that training for one local epoch per global round of training gives optimal
performance while preserving the same privacy budget. We also investigate how
the utility (tied to privacy) of FL models changes with the number of clients
and argue that, when clients train using DP-SGD, utility improves with more
clients under the same privacy budget. We
validate our findings through experiments on real-world datasets. The results
from this paper aim to improve the performance of privacy-preserving federated
learning systems.
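The DP-SGD mechanism the abstract builds on clips each per-example gradient and adds Gaussian noise before the update. A minimal sketch for linear regression follows; the hyperparameters and the fixed-seed generator are illustrative choices, not the paper's setup:

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.0, rng=None):
    """One DP-SGD step for squared-error linear regression (illustrative):
    per-example gradients are clipped to L2 norm `clip`, summed, and
    perturbed with Gaussian noise of scale `noise_mult * clip`."""
    rng = rng or np.random.default_rng(0)  # fixed seed for reproducibility
    grads = []
    for xi, yi in zip(X, y):
        g = 2 * (w @ xi - yi) * xi                    # per-example gradient
        norm = np.linalg.norm(g)
        grads.append(g / max(1.0, norm / clip))       # clip L2 norm to `clip`
    g_sum = np.sum(grads, axis=0)
    g_sum += rng.normal(0.0, noise_mult * clip, size=w.shape)  # Gaussian noise
    return w - lr * g_sum / len(X)
```

Repeating this step for exactly one local epoch per global round is the schedule the paper argues is optimal under a fixed privacy budget.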
|
2501.14455
|
Triple Path Enhanced Neural Architecture Search for Multimodal Fake News
Detection
|
cs.CV
|
Multimodal fake news detection has become one of the most crucial issues on
social media platforms. Although existing methods have achieved advanced
performance, two main challenges persist: (1) Under-performed multimodal news
information fusion due to model architecture solidification, and (2) weak
generalization ability on partial-modality contained fake news. To meet these
challenges, we propose a novel and flexible triple path enhanced neural
architecture search model MUSE. MUSE includes two dynamic paths for detecting
partial-modality contained fake news and a static path for exploiting potential
multimodal correlations. Experimental results show that MUSE achieves stable
performance improvement over the baselines.
|
2501.14457
|
Understanding and Mitigating Gender Bias in LLMs via Interpretable
Neuron Editing
|
cs.CL
|
Large language models (LLMs) often exhibit gender bias, posing challenges for
their safe deployment. Existing methods to mitigate bias lack a comprehensive
understanding of its mechanisms or compromise the model's core capabilities. To
address these issues, we propose the CommonWords dataset to systematically
evaluate gender bias in LLMs. Our analysis reveals pervasive bias across models
and identifies specific neuron circuits, including gender neurons and general
neurons, responsible for this behavior. Notably, editing even a small number of
general neurons can disrupt the model's overall capabilities due to
hierarchical neuron interactions. Based on these insights, we propose an
interpretable neuron editing method that combines logit-based and causal-based
strategies to selectively target biased neurons. Experiments on five LLMs
demonstrate that our method effectively reduces gender bias while preserving
the model's original capabilities, outperforming existing fine-tuning and
editing approaches. Our findings contribute a novel dataset, a detailed
analysis of bias mechanisms, and a practical solution for mitigating gender
bias in LLMs.
|
2501.14458
|
A Survey of Optimization Methods for Training DL Models: Theoretical
Perspective on Convergence and Generalization
|
cs.LG cs.DC math.OC
|
As data sets grow in size and complexity, it is becoming more difficult to
pull useful features from them using hand-crafted feature extractors. For this
reason, deep learning (DL) frameworks are now widely popular. The Holy Grail of
DL and one of the most mysterious challenges in all of modern ML is to develop
a fundamental understanding of DL optimization and generalization. While
numerous optimization techniques have been introduced in the literature to
navigate the exploration of the highly non-convex DL optimization landscape,
many survey papers reviewing them primarily focus on summarizing these
methodologies, often overlooking the critical theoretical analyses of these
methods. In this paper, we provide an extensive summary of the theoretical
foundations of optimization methods in DL, including presenting various
methodologies, their convergence analyses, and generalization abilities. This
paper not only includes theoretical analysis of popular generic gradient-based
first-order and second-order methods, but it also covers the analysis of the
optimization techniques adapting to the properties of the DL loss landscape and
explicitly encouraging the discovery of well-generalizing optimal points.
Additionally, we extend our discussion to distributed optimization methods that
facilitate parallel computations, including both centralized and decentralized
approaches. We provide both convex and non-convex analysis for the optimization
algorithms considered in this survey paper. Finally, this paper aims to serve
as a comprehensive theoretical handbook on optimization methods for DL,
offering insights and understanding to both novice and seasoned researchers in
the field.
|
2501.14459
|
Interpretability Analysis of Domain Adapted Dense Retrievers
|
cs.IR cs.AI
|
Dense retrievers have demonstrated significant potential for neural
information retrieval; however, they exhibit a lack of robustness to domain
shifts, thereby limiting their efficacy in zero-shot settings across diverse
domains. Previous research has investigated unsupervised domain adaptation
techniques to adapt dense retrievers to target domains. However, these studies
have not focused on explainability analysis to understand how such adaptations
alter the model's behavior. In this paper, we propose utilizing the integrated
gradients framework to develop an interpretability method that provides both
instance-based and ranking-based explanations for dense retrievers. To generate
these explanations, we introduce a novel baseline that reveals both query and
document attributions. This method is used to analyze the effects of domain
adaptation on input attributions for query and document tokens across two
datasets: the financial question answering dataset (FIQA) and the biomedical
information retrieval dataset (TREC-COVID). Our visualizations reveal that
domain-adapted models focus more on in-domain terminology compared to
non-adapted models, exemplified by terms such as "hedge," "gold," "corona," and
"disease." This research addresses how unsupervised domain adaptation
techniques influence the behavior of dense retrievers when adapted to new
domains. Additionally, we demonstrate that integrated gradients are a viable
choice for explaining and analyzing the internal mechanisms of these opaque
neural models.
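Integrated gradients approximates a path integral of model gradients from a baseline to the input. A minimal sketch on a toy model is below; the paper's retrieval-specific query/document baseline is replaced here by an all-zeros baseline, and the quadratic "model" is ours:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate integrated gradients along the straight line from
    `baseline` to `x` with a midpoint Riemann sum. `grad_fn(x)` returns
    the model's gradient at x."""
    alphas = (np.arange(steps) + 0.5) / steps   # midpoints of [0, 1]
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy "model" f(x) = sum(x**2), whose gradient is 2x.
f = lambda x: np.sum(x ** 2)
grad = lambda x: 2 * x
x = np.array([1.0, 2.0])
base = np.zeros(2)
attr = integrated_gradients(grad, x, base)
# Completeness axiom: attributions sum to f(x) - f(baseline).
```

The completeness property is what makes the resulting token attributions interpretable as each token's share of the retrieval score.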
|
2501.14460
|
MLMC: Interactive multi-label multi-classifier evaluation without
confusion matrices
|
cs.LG
|
Machine learning-based classifiers are commonly evaluated by metrics like
accuracy, but deeper analysis is required to understand their strengths and
weaknesses. MLMC is a visual exploration tool that tackles the challenge of
multi-label classifier comparison and evaluation. It offers a scalable
alternative to confusion matrices, which are commonly used for such tasks but
do not scale well with a large number of classes or labels. Additionally, MLMC
allows users to view classifier performance from an instance perspective, a
label perspective, and a classifier perspective. Our user study shows that the
techniques implemented by MLMC allow for a powerful multi-label classifier
evaluation while preserving user friendliness.
|
2501.14466
|
On Correlating Factors for Domain Adaptation Performance
|
cs.IR stat.AP
|
Dense retrievers have demonstrated significant potential for neural
information retrieval; however, they lack robustness to domain shifts, limiting
their efficacy in zero-shot settings across diverse domains. In this paper, we
set out to analyze the possible factors that lead to successful domain
adaptation of dense retrievers. We include domain similarity proxies between
generated queries to test and source domains. Furthermore, we conduct a case
study comparing two powerful domain adaptation techniques. We find that
generated query type distribution is an important factor, and generating
queries that share a similar domain to the test documents improves the
performance of domain adaptation methods. This study further emphasizes the
importance of domain-tailored generated queries.
|
2501.14469
|
Pesti-Gen: Unleashing a Generative Molecule Approach for Toxicity Aware
Pesticide Design
|
cs.LG cs.AI q-bio.BM q-bio.MN
|
Global climate change has reduced crop resilience and pesticide efficacy,
making reliance on synthetic pesticides inevitable, even though their
widespread use poses significant health and environmental risks. While these
pesticides remain a key tool in pest management, previous machine-learning
applications in pesticide and agriculture have focused on classification or
regression, leaving the fundamental challenge of generating new molecular
structures or designing novel candidates unaddressed. In this paper, we propose
Pesti-Gen, a novel generative model based on variational auto-encoders,
designed to create pesticide candidates with optimized properties for the first
time. Specifically, Pesti-Gen leverages a two-stage learning process: an
initial pre-training phase that captures a generalized chemical structure
representation, followed by a fine-tuning stage that incorporates
toxicity-specific information. The model simultaneously optimizes over multiple
toxicity metrics, such as (1) livestock toxicity and (2) aquatic toxicity, to
generate environmentally friendly pesticide candidates. Notably, Pesti-Gen
achieves approximately 68\% structural validity in generating new molecular
structures, demonstrating the model's effectiveness in producing optimized and
feasible pesticide candidates, thereby opening a path toward safer and more
sustainable pest management solutions.
|
2501.14473
|
XFSC: A Catalogue of Trustable Semantic Metadata for Data Services and
Providers
|
cs.DB
|
In dataspaces, federation services facilitate key functions such as enabling
participating organizations to establish mutual trust and assisting them in
discovering data and services available for consumption. Discovery is enabled
by a catalogue, where participants publish metadata describing themselves and
their data and service offerings as Verifiable Presentations (VPs), such that
other participants may query them. This paper presents the Eclipse Cross
Federation Services Components (XFSC) Catalogue, which originated as a
catalogue reference implementation for the Gaia-X federated cloud service
architecture but is generally applicable wherever metadata must be trustable.
This implementation provides basic lifecycle management for
DCAT-style metadata records and schemas. It validates submitted VPs for their
cryptographic integrity and trustability, and for their conformance to an
extensible collection of semantic schemas. The claims in the latest versions of
valid VP submissions are extracted into a searchable graph database. The
implementation scales to large numbers of records and is secure by design.
Filling the catalogue with content in a maintainable way requires bindings
towards where data and service offerings are coming from: connectors that
expose resources hosted in an organization's IT infrastructure towards the
dataspace. We demonstrate the integration of our catalogue with the widely used
Eclipse Dataspace Components Connector, enabling real-world use cases of the
German Culture Dataspace. In addition, we discuss potential extensions and
upcoming integrations of the catalogue.
|
2501.14474
|
The Pseudo-Dimension of Contracts
|
cs.GT cs.AI cs.LG econ.TH
|
Algorithmic contract design studies scenarios where a principal incentivizes
an agent to exert effort on her behalf. In this work, we focus on settings
where the agent's type is drawn from an unknown distribution, and formalize an
offline learning framework for learning near-optimal contracts from sample
agent types. A central tool in our analysis is the notion of pseudo-dimension
from statistical learning theory. Beyond its role in establishing upper bounds
on the sample complexity, pseudo-dimension measures the intrinsic complexity of
a class of contracts, offering a new perspective on the tradeoffs between
simplicity and optimality in contract design. Our main results provide
essentially optimal tradeoffs between pseudo-dimension and representation error
(defined as the loss in principal's utility) with respect to linear and bounded
contracts. Using these tradeoffs, we derive sample- and time-efficient learning
algorithms, and demonstrate their near-optimality by providing almost matching
lower bounds on the sample complexity. Conversely, for unbounded contracts, we
prove an impossibility result showing that no learning algorithm exists.
Finally, we extend our techniques in three important ways. First, we provide
refined pseudo-dimension and sample complexity guarantees for the combinatorial
actions model, revealing a novel connection between the number of critical
values and sample complexity. Second, we extend our results to menus of
contracts, showing that their pseudo-dimension scales linearly with the menu
size. Third, we adapt our algorithms to the online learning setting, where we
show that a polynomial number of type samples suffices to learn near-optimal
bounded contracts. Combined with prior work, this establishes a formal
separation between expert advice and bandit feedback for this setting.
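A linear contract pays the agent a fixed share alpha of the realized reward. As a toy sketch of learning a near-optimal alpha from sampled agent types (the type model, action sets, and grid search are our illustrative choices, not the paper's algorithm):

```python
import numpy as np

def best_linear_contract(types, grid=np.linspace(0, 1, 101)):
    """Grid-search the share `alpha` of a linear contract against sampled
    agent types. Each type is a list of (cost, success_prob) actions with
    unit reward on success; this toy model is ours, not the paper's."""
    def principal_utility(alpha):
        total = 0.0
        for actions in types:
            # The agent best-responds: maximize alpha * p - c over actions.
            c, p = max(actions, key=lambda a: alpha * a[1] - a[0])
            total += (1 - alpha) * p          # principal keeps the rest
        return total / len(types)
    return max(grid, key=principal_utility)

# Two sampled types, each with a "shirk" and an "effort" action.
types = [[(0.0, 0.1), (0.2, 0.8)], [(0.0, 0.1), (0.5, 0.9)]]
alpha_star = best_linear_contract(types)
```

The pseudo-dimension bounds in the paper control how many such sampled types are needed before the empirically best alpha generalizes to the true type distribution.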
|
2501.14476
|
Avoiding Overfitting in Variable-Order Markov Models: a Cross-Validation
Approach
|
physics.soc-ph cs.SI econ.GN q-fin.EC
|
Higher-order Markov chain models are widely used to represent agent
transitions in dynamic systems, such as passengers in transport networks. They
capture transitions in complex systems by considering not only the current
state but also the path of previously visited states. For example, the
likelihood of train passengers traveling from Paris (current state) to Rome
could increase significantly if their journey originated in Italy (prior
state). Although this approach provides a more faithful representation of the
system than first-order models, we find that commonly used methods, which rely
on Kullback-Leibler divergence, frequently overfit the data, mistaking
fluctuations for higher-order dependencies and undermining forecasts and
resource allocation. Here, we introduce DIVOP (Detection of Informative
Variable-Order Paths), an algorithm that employs cross-validation to robustly
distinguish meaningful higher-order dependencies from noise. In both synthetic
and real-world datasets, DIVOP outperforms two state-of-the-art algorithms by
achieving higher precision, recall, and sparser representations of the
underlying dynamics. When applied to global corporate ownership data, DIVOP
reveals that tax havens appear in 82\% of all significant higher-order
dependencies, underscoring their outsized influence in corporate networks. By
mitigating overfitting, DIVOP enables more reliable multi-step predictions and
decision-making, paving the way toward deeper insights into the hidden
structures that drive modern interconnected systems.
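The core idea, distinguishing informative higher-order contexts from noise via held-out likelihood, can be sketched as follows. This is a simplified stand-in for DIVOP, not its actual statistic, and for brevity it scores the training sequences themselves rather than proper cross-validation folds:

```python
import math
from collections import Counter, defaultdict

def avg_heldout_loglik(train, test, order, alpha=1.0):
    """Average log-likelihood per predicted symbol under an order-k
    Markov model with additive smoothing (smoothing constant `alpha`)."""
    counts = defaultdict(Counter)
    states = {s for seq in train for s in seq}
    for seq in train:
        for i in range(order, len(seq)):
            counts[tuple(seq[i - order:i])][seq[i]] += 1
    ll, n = 0.0, 0
    for seq in test:
        for i in range(order, len(seq)):
            c = counts[tuple(seq[i - order:i])]
            ll += math.log((c[seq[i]] + alpha)
                           / (sum(c.values()) + alpha * len(states)))
            n += 1
    return ll / n

# Toy journeys where the stop after 'A' depends on where the trip started.
paths = [list("XAY")] * 5 + [list("ZAW")] * 5
first = avg_heldout_loglik(paths, paths, order=1)
second = avg_heldout_loglik(paths, paths, order=2)
# Here the second-order context is genuinely informative, so its likelihood
# is higher; on pure noise the comparison favors the first-order model.
```

Keeping a higher-order path only when it survives this kind of out-of-sample comparison is what prevents fluctuations from being mistaken for dependencies.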
|
2501.14483
|
Registration of Longitudinal Liver Examinations for Tumor Progress
Assessment
|
eess.IV cs.AI cs.CV physics.med-ph
|
Assessing cancer progression in liver CT scans is a clinical challenge,
requiring a comparison of scans at different times for the same patient.
Practitioners must identify existing tumors, compare them with prior exams,
identify new tumors, and evaluate overall disease evolution. This process is
particularly complex in liver examinations due to misalignment between exams
caused by several factors. Indeed, longitudinal liver examinations can undergo
different non-pathological and pathological changes due to non-rigid
deformations, the appearance or disappearance of pathologies, and other
variations. In such cases, existing registration approaches, mainly based on
intrinsic features, may distort tumor regions, biasing the tumor progression
evaluation step and the corresponding diagnosis. This work proposes a
registration method based only on geometrical and anatomical information from
liver segmentation, aimed at aligning longitudinal liver images for aided
diagnosis. The proposed method is trained and tested on longitudinal liver CT
scans, with 317 patients for training and 53 for testing. Our experimental
results support our claims by showing that our method is better than other
registration techniques by providing a smoother deformation while preserving
the tumor burden (total volume of tissues considered as tumor) within the
volume. Qualitative results emphasize the importance of smooth deformations in
preserving tumor appearance.
|
2501.14484
|
$SpikePack$: Enhanced Information Flow in Spiking Neural Networks with
High Hardware Compatibility
|
cs.NE
|
Spiking Neural Networks (SNNs) hold promise for energy-efficient,
biologically inspired computing. We identify substantial information loss
during spike transmission, linked to temporal dependencies in the traditional
Leaky Integrate-and-Fire (LIF) neuron, a key factor potentially limiting SNN
performance. Existing SNN architectures also underutilize modern GPUs,
constrained by single-bit spike storage and isolated weight-spike operations
that restrict computational efficiency. We introduce ${SpikePack}$, a neuron
model designed to reduce transmission loss while preserving essential features
like membrane potential reset and leaky integration. ${SpikePack}$ achieves
constant $\mathcal{O}(1)$ time and space complexity, enabling efficient
parallel processing on GPUs and also supporting serial inference on existing
SNN hardware accelerators. Compatible with standard Artificial Neural Network
(ANN) architectures, ${SpikePack}$ facilitates near-lossless ANN-to-SNN
conversion across various networks. Experimental results on tasks such as image
classification, detection, and segmentation show ${SpikePack}$ achieves
significant gains in accuracy and efficiency for both directly trained and
converted SNNs over state-of-the-art models. Tests on FPGA-based platforms
further confirm cross-platform flexibility, delivering high performance and
enhanced sparsity. By enhancing information flow and rethinking SNN-ANN
integration, ${SpikePack}$ advances efficient SNN deployment across diverse
hardware platforms.
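The serial temporal dependency the abstract attributes to LIF neurons is visible in the textbook update below, where each step needs the previous membrane potential. This is the generic LIF baseline, not SpikePack itself:

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Standard Leaky Integrate-and-Fire dynamics over T timesteps.
    Each step depends on the previous membrane potential `v`, which is
    the serial bottleneck SpikePack is designed to remove."""
    v = np.zeros_like(inputs[0])
    spikes = []
    for x in inputs:                      # serial loop over time
        v = v + (x - v) / tau             # leaky integration toward input
        s = (v >= v_th).astype(float)     # fire when the threshold is crossed
        v = v * (1.0 - s)                 # hard reset after a spike
        spikes.append(s)
    return np.stack(spikes)
```

Because `v` at step t is a function of `v` at step t-1, the loop cannot be parallelized across time, motivating the O(1)-complexity reformulation the paper proposes.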
|
2501.14486
|
Visual-Lidar Map Alignment for Infrastructure Inspections
|
cs.RO
|
Routine and repetitive infrastructure inspections present safety, efficiency,
and consistency challenges as they are performed manually, often in challenging
or hazardous environments. They can also introduce subjectivity and errors into
the process, resulting in undesirable outcomes. Simultaneous localization and
mapping (SLAM) presents an opportunity to generate high-quality 3D maps that
can be used to extract accurate and objective inspection data. Yet, many SLAM
algorithms are limited in their ability to align 3D maps from repeated
inspections in GPS-denied settings automatically. This limitation hinders
practical long-term asset health assessments by requiring tedious manual
alignment for data association across scans from previous inspections. This
paper introduces a versatile map alignment algorithm leveraging both visual and
lidar data for improved place recognition robustness and presents an
infrastructure-focused dataset tailored for consecutive inspections. By
detaching map alignment from SLAM, our approach enhances infrastructure
inspection pipelines, supports monitoring asset degradation over time, and
invigorates SLAM research by permitting exploration beyond existing
multi-session SLAM algorithms.
|
2501.14488
|
Breaking the Pre-Planning Barrier: Real-Time Adaptive Coordination of
Mission and Charging UAVs Using Graph Reinforcement Learning
|
cs.MA
|
Unmanned Aerial Vehicles (UAVs) are pivotal in applications such as search
and rescue and environmental monitoring, excelling in intelligent perception
tasks. However, their limited battery capacity hinders long-duration and
long-distance missions. Charging UAVs (CUAVs) offers a potential solution by
recharging mission UAVs (MUAVs), but existing methods rely on impractical
pre-planned routes, failing to enable organic cooperation and limiting mission
efficiency. We introduce a novel multi-agent deep reinforcement learning model
named \textbf{H}eterogeneous \textbf{G}raph \textbf{A}ttention
\textbf{M}ulti-agent Deep Deterministic Policy Gradient (HGAM), designed to
dynamically coordinate MUAVs and CUAVs. This approach maximizes data
collection, geographical fairness, and energy efficiency by allowing UAVs to
adapt their routes in real-time to current task demands and environmental
conditions without pre-planning. Our model uses heterogeneous graph attention
networks (GATs) to represent heterogeneous agents and facilitate efficient
information exchange. It operates within an actor-critic framework. Simulation
results show that our model significantly improves cooperation among
heterogeneous UAVs, outperforming existing methods in several metrics,
including data collection rate and charging efficiency.
|
2501.14490
|
Channel-wise Parallelizable Spiking Neuron with Multiplication-free
Dynamics and Large Temporal Receptive Fields
|
cs.NE
|
Spiking Neural Networks (SNNs) are distinguished from Artificial Neural
Networks (ANNs) for their sophisticated neuronal dynamics and sparse binary
activations (spikes) inspired by the biological neural system. Traditional
neuron models use iterative step-by-step dynamics, resulting in serial
computation and slow training speed of SNNs. Recently, parallelizable spiking
neuron models have been proposed to fully utilize the massive parallel
computing ability of graphics processing units to accelerate the training of
SNNs. However, existing parallelizable spiking neuron models involve dense
floating-point operations and can only achieve strong long-term dependency
learning with a large neuron order, at the cost of huge computation and memory.
To solve the dilemma of performance and costs, we propose the mul-free
channel-wise Parallel Spiking Neuron, which is hardware-friendly and suitable
for SNNs' resource-restricted application scenarios. The proposed neuron
imports the channel-wise convolution to enhance the learning ability, induces
the sawtooth dilations to reduce the neuron order, and employs the bit shift
operation to avoid multiplications. The design and implementation of the
acceleration methods are discussed in detail. Our methods are validated on the
neuromorphic Spiking Heidelberg Digits audio dataset, sequential CIFAR images,
and the neuromorphic DVS-Lip vision dataset, achieving the best accuracy among SNNs.
Training speed results demonstrate the effectiveness of our acceleration
methods, providing a practical reference for future research.
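The bit-shift trick for multiplication-free leaky dynamics can be sketched as follows. This is our illustration of the general idea only; the paper's neuron additionally uses channel-wise convolutions and sawtooth dilations:

```python
def shift_decay_step(v, x, k=2, v_th=64):
    """Integer leaky integration without multiplications: the leak
    v -> v - (v >> k) approximates multiplying by (1 - 2**-k), so the
    whole step uses only shifts, additions, and comparisons."""
    v = v - (v >> k) + x      # leak via arithmetic right shift, then integrate
    s = 1 if v >= v_th else 0
    v = 0 if s else v         # hard reset on spike
    return v, s

v, spikes = 0, []
for _ in range(8):            # constant integer input current of 20
    v, s = shift_decay_step(v, 20)
    spikes.append(s)
```

Replacing the multiply in the leak term with a shift is what makes such a neuron cheap on resource-restricted hardware, at the price of restricting the decay factor to values of the form 1 - 2^-k.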
|
2501.14491
|
Analyzing the Effect of Linguistic Similarity on Cross-Lingual Transfer:
Tasks and Experimental Setups Matter
|
cs.CL
|
Cross-lingual transfer is a popular approach to increase the amount of
training data for NLP tasks in a low-resource context. However, the best
strategy to decide which cross-lingual data to include is unclear. Prior
research often focuses on a small set of languages from a few language families
and/or a single task. It is still an open question how these findings extend to
a wider variety of languages and tasks. In this work, we analyze cross-lingual
transfer for 266 languages from a wide variety of language families. Moreover,
we include three popular NLP tasks: POS tagging, dependency parsing, and topic
classification. Our findings indicate that the effect of linguistic similarity
on transfer performance depends on a range of factors: the NLP task, the (mono-
or multilingual) input representations, and the definition of linguistic
similarity.
|
2501.14492
|
RealCritic: Towards Effectiveness-Driven Evaluation of Language Model
Critiques
|
cs.CL cs.AI cs.LG
|
Critiques are important for enhancing the performance of Large Language
Models (LLMs), enabling both self-improvement and constructive feedback for
others by identifying flaws and suggesting improvements. However, evaluating
the critique capabilities of LLMs presents a significant challenge due to the
open-ended nature of the task. In this work, we introduce a new benchmark
designed to assess the critique capabilities of LLMs. Unlike existing
benchmarks, which typically function in an open-loop fashion, our approach
employs a closed-loop methodology that evaluates the quality of corrections
generated from critiques. Moreover, the benchmark incorporates features such as
self-critique, cross-critique, and iterative critique, which are crucial for
distinguishing the abilities of advanced reasoning models from more classical
ones. We implement this benchmark using eight challenging reasoning tasks and
obtain several interesting findings. First, despite demonstrating comparable
performance in direct chain-of-thought generation, classical LLMs significantly
lag behind the advanced reasoning-based model o1-mini across all critique
scenarios. Second, in self-critique and iterative critique settings, classical
LLMs may even underperform relative to their baseline capabilities. We hope
that this benchmark will serve as a valuable resource to guide future
advancements. The code and data are available at
\url{https://github.com/tangzhy/RealCritic}.
|
2501.14495
|
BILLNET: A Binarized Conv3D-LSTM Network with Logic-gated residual
architecture for hardware-efficient video inference
|
cs.CV cs.AR
|
Long Short-Term Memory (LSTM) and 3D convolution (Conv3D) show impressive
results for many video-based applications but require large memory and
intensive computing. Motivated by recent works on hardware-algorithmic
co-design towards efficient inference, we propose a compact binarized
Conv3D-LSTM model architecture called BILLNET, compatible with a highly
resource-constrained hardware. Firstly, BILLNET factorizes the costly standard
Conv3D into two pointwise convolutions with a grouped convolution in between.
Secondly, BILLNET enables binarized weights and activations via a
MUX-OR-gated residual architecture. Finally, to efficiently train BILLNET, we
propose a multi-stage training strategy that enables full quantization of the
LSTM layers.
Results on Jester dataset show that our method can obtain high accuracy with
extremely low memory and computational budgets compared to existing Conv3D
resource-efficient models.
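The savings from the pointwise-grouped-pointwise factorization can be checked by counting weights. The channel counts and group number below are illustrative choices, not BILLNET's actual configuration:

```python
def conv3d_params(c_in, c_out, k=3, groups=1):
    """Weight count of a Conv3D layer with a cubic kernel of size k
    (bias omitted); grouped convolutions divide the input channels."""
    return (c_in // groups) * c_out * k ** 3

def factorized_params(c_in, c_out, k=3, groups=4):
    """Pointwise (1x1x1) -> grouped kxkxk -> pointwise (1x1x1),
    keeping c_out channels throughout; an illustrative instance of the
    factorization BILLNET describes."""
    return (conv3d_params(c_in, c_out, k=1)
            + conv3d_params(c_out, c_out, k=k, groups=groups)
            + conv3d_params(c_out, c_out, k=1))

standard = conv3d_params(64, 64)      # 64 * 64 * 27 = 110,592 weights
factored = factorized_params(64, 64)  # 4,096 + 27,648 + 4,096 = 35,840
```

Confining the expensive cubic kernel to a grouped convolution is where most of the reduction comes from; the two pointwise layers restore cross-channel mixing cheaply.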
|
2501.14496
|
A Note on Implementation Errors in Recent Adaptive Attacks Against
Multi-Resolution Self-Ensembles
|
cs.CR cs.CV cs.LG
|
This note documents an implementation issue in recent adaptive attacks (Zhang
et al. [2024]) against the multi-resolution self-ensemble defense (Fort and
Lakshminarayanan [2024]). The implementation allowed adversarial perturbations
to exceed the standard $L_\infty = 8/255$ bound by up to a factor of
20$\times$, reaching magnitudes of up to $L_\infty = 160/255$. When attacks are
properly constrained within the intended bounds, the defense maintains
non-trivial robustness. Beyond highlighting the importance of careful
validation in adversarial machine learning research, our analysis reveals an
intriguing finding: properly bounded adaptive attacks against strong
multi-resolution self-ensembles often align with human perception, suggesting
the need to reconsider how we measure adversarial robustness.
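The missing constraint the note identifies amounts to projecting the perturbation back into the intended $L_\infty$ ball after each attack step. A generic sketch of that projection (not the attack's actual code):

```python
import numpy as np

def project_linf(x_adv, x_clean, eps=8 / 255):
    """Project an adversarial example into the L_inf ball of radius `eps`
    around the clean input, then into the valid pixel range [0, 1]."""
    delta = np.clip(x_adv - x_clean, -eps, eps)   # enforce the budget
    return np.clip(x_clean + delta, 0.0, 1.0)     # keep pixels valid

x = np.array([0.5, 0.2, 0.9])
x_bad = np.array([0.9, 0.0, 1.2])   # violates the 8/255 budget
x_ok = project_linf(x_bad, x)       # now within the intended bound
```

Omitting this step is what let the perturbations in the analyzed attacks grow up to 20 times larger than the stated bound.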
|
2501.14497
|
Evaluating and Improving Graph to Text Generation with Large Language
Models
|
cs.CL
|
Large language models (LLMs) have demonstrated immense potential across
various tasks. However, research for exploring and improving the capabilities
of LLMs in interpreting graph structures remains limited. To address this gap,
we conduct a comprehensive evaluation of prompting current open-source LLMs on
graph-to-text generation tasks. Although we explored the optimal prompting
strategies and proposed a novel and effective diversity-difficulty-based
few-shot sample selection method, we found that the improvements from
tuning-free approaches were incremental, as LLMs struggle with planning on
complex graphs, particularly those with a larger number of triplets. To further
improve LLMs in planning with graph sequences and grounding in truth, we
introduce a new graph-to-text dataset, PlanGTG, annotated with two sub-tasks:
reordering and attribution. Through extensive automatic and human evaluations,
we demonstrate significant improvements in the quality of generated text from
both few-shot learning and fine-tuning perspectives using the PlanGTG dataset.
Our study paves the way for new research directions in graph-to-text
generation. The PlanGTG dataset is available at https://github.com/probe2/kg_text.
|
2501.14499
|
Automated Assignment Grading with Large Language Models: Insights From a
Bioinformatics Course
|
cs.LG cs.CY
|
Providing students with individualized feedback through assignments is a
cornerstone of education that supports their learning and development. Studies
have shown that timely, high-quality feedback plays a critical role in
improving learning outcomes. However, providing personalized feedback on a
large scale in classes with large numbers of students is often impractical due
to the significant time and effort required. Recent advances in natural
language processing and large language models (LLMs) offer a promising solution
by enabling the efficient delivery of personalized feedback. These technologies
can reduce the workload of course staff while improving student satisfaction
and learning outcomes. Their successful implementation, however, requires
thorough evaluation and validation in real classrooms. We present the results
of a practical evaluation of LLM-based graders for written assignments in the
2024/25 iteration of the Introduction to Bioinformatics course at the
University of Ljubljana. Over the course of the semester, more than 100
students answered 36 text-based questions, most of which were automatically
graded using LLMs. In a blind study, students received feedback from both LLMs
and human teaching assistants without knowing the source, and later rated the
quality of the feedback. We conducted a systematic evaluation of six commercial
and open-source LLMs and compared their grading performance with human teaching
assistants. Our results show that with well-designed prompts, LLMs can achieve
grading accuracy and feedback quality comparable to human graders. Our results
also suggest that open-source LLMs perform as well as commercial LLMs, allowing
schools to implement their own grading systems while maintaining privacy.
|
2501.14502
|
LiDAR-Based Vehicle Detection and Tracking for Autonomous Racing
|
cs.RO cs.CV
|
Autonomous racing provides a controlled environment for testing the software
and hardware of autonomous vehicles operating at their performance limits.
Competitive interactions between multiple autonomous racecars, however, introduce
challenging and potentially dangerous scenarios. Accurate and consistent
vehicle detection and tracking is crucial for overtaking maneuvers, and
low-latency sensor processing is essential to respond quickly to hazardous
situations. This paper presents the LiDAR-based perception algorithms deployed
on Team PoliMOVE's autonomous racecar, which won multiple competitions in the
Indy Autonomous Challenge series. Our Vehicle Detection and Tracking pipeline
is composed of a novel fast Point Cloud Segmentation technique and a specific
Vehicle Pose Estimation methodology, together with a variable-step Multi-Target
Tracking algorithm. Experimental results demonstrate the algorithm's
performance, robustness, computational efficiency, and suitability for
autonomous racing applications, enabling fully autonomous overtaking maneuvers
at velocities exceeding 275 km/h.
|
2501.14503
|
Benchmarking global optimization techniques for unmanned aerial vehicle
path planning
|
cs.NE cs.RO math.OC
|
The Unmanned Aerial Vehicle (UAV) path planning problem is a complex
optimization problem in the field of robotics. In this paper, we investigate
the possible utilization of this problem in benchmarking global optimization
methods. We devise a problem instance generator and pick 56 representative
instances, which we compare to established benchmarking suites through
Exploratory Landscape Analysis to show their uniqueness. For the computational
comparison, we select twelve well-performing global optimization techniques
from both subfields of stochastic algorithms (evolutionary computation methods)
and deterministic algorithms (Dividing RECTangles, or DIRECT-type methods). The
experiments were conducted in settings with varying dimensionality and
computational budgets. The results were analyzed through several criteria
(number of best-found solutions, mean relative error, Friedman ranks) and
utilized established statistical tests. The best-ranking methods for the UAV
problems were almost universally the top-performing evolutionary techniques
from recent competitions on numerical optimization at the Institute of
Electrical and Electronics Engineers Congress on Evolutionary Computation.
Lastly, we discuss the variable-dimension characteristics of the studied UAV
problems, which remain largely under-investigated.
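For readers unfamiliar with the Friedman-rank criterion mentioned above, it amounts to ranking the algorithms on each problem instance and then averaging the per-instance ranks. A minimal sketch with made-up error values (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def mean_ranks(errors):
    """errors: (instances x algorithms) matrix of objective errors.
    Returns the mean Friedman-style rank per algorithm (1 = best),
    assuming no ties within an instance."""
    order = np.argsort(errors, axis=1)              # best-first per instance
    ranks = np.empty_like(order)
    rows = np.arange(errors.shape[0])[:, None]
    ranks[rows, order] = np.arange(1, errors.shape[1] + 1)
    return ranks.mean(axis=0)

# Three instances, three algorithms (illustrative errors).
errors = np.array([[0.1, 0.3, 0.2],
                   [0.5, 0.4, 0.6],
                   [0.2, 0.1, 0.3]])
print(mean_ranks(errors))
```

In the paper's setting, a Friedman test on ranks like these would then check whether the differences between solvers are statistically significant.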
|
2501.14506
|
WanJuanSiLu: A High-Quality Open-Source Webtext Dataset for Low-Resource
Languages
|
cs.CL
|
This paper introduces the open-source dataset WanJuanSiLu, designed to
provide high-quality training corpora for low-resource languages, thereby
advancing the research and development of multilingual models. To achieve this,
we have developed a systematic data processing framework tailored for
low-resource languages. This framework encompasses key stages such as data
extraction, corpus cleaning, content deduplication, security filtering, quality
evaluation, and theme classification. Through the implementation of this
framework, we have significantly improved both the quality and security of the
dataset, while maintaining its linguistic diversity. As of now, data for all
five languages have been fully open-sourced. The dataset can be accessed at
https://opendatalab.com/applyMultilingualCorpus, and the GitHub repository is
available at https://github.com/opendatalab/WanJuan3.0
|
2501.14510
|
Deep-BrownConrady: Prediction of Camera Calibration and Distortion
Parameters Using Deep Learning and Synthetic Data
|
cs.CV cs.LG
|
This research addresses the challenge of camera calibration and distortion
parameter prediction from a single image using deep learning models. The main
contributions of this work are: (1) demonstrating that a deep learning model,
trained on a mix of real and synthetic images, can accurately predict camera
and lens parameters from a single image, and (2) developing a comprehensive
synthetic dataset using the AILiveSim simulation platform. This dataset
includes variations in focal length and lens distortion parameters, providing a
robust foundation for model training and testing. The training process
predominantly relied on these synthetic images, complemented by a small subset
of real images, to explore how well models trained on synthetic data can
perform calibration tasks on real-world images. Traditional calibration methods
require multiple images of a calibration object from various orientations,
which is often not feasible due to the lack of such images in publicly
available datasets. A deep learning network based on the ResNet architecture
was trained on this synthetic dataset to predict camera calibration parameters
following the Brown-Conrady lens model. The ResNet architecture, adapted for
regression tasks, is capable of predicting continuous values essential for
accurate camera calibration in applications such as autonomous driving,
robotics, and augmented reality.
Keywords: Camera calibration, distortion, synthetic data, deep learning,
residual networks (ResNet), AILiveSim, horizontal field-of-view, principal
point, Brown-Conrady Model.
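The Brown-Conrady model whose parameters the network regresses has a standard closed form (radial plus tangential terms, in the common OpenCV-style convention); a minimal sketch with illustrative coefficients:

```python
def brown_conrady_distort(x, y, k1, k2, p1, p2, k3=0.0):
    """Apply Brown-Conrady lens distortion to normalized image
    coordinates (x, y): k1..k3 radial, p1/p2 tangential coefficients."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero, coordinates pass through unchanged.
print(brown_conrady_distort(0.3, -0.2, 0, 0, 0, 0))  # (0.3, -0.2)
```

Predicting the coefficients k1, k2, p1, p2 as continuous regression targets is what motivates adapting ResNet's head for regression rather than classification.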
|
2501.14513
|
ABPT: Amended Backpropagation through Time with Partially Differentiable
Rewards
|
cs.RO cs.AI cs.LG
|
Using the exact gradients of the rewards to directly optimize policy
parameters via backpropagation-through-time (BPTT) enables high training
performance for quadrotor tasks. However, designing a fully differentiable
reward architecture is often challenging. Partially differentiable rewards will
result in biased gradient propagation that degrades training performance. To
overcome this limitation, we propose Amended Backpropagation-through-Time
(ABPT), a novel approach that mitigates gradient bias while preserving the
training efficiency of BPTT. ABPT combines 0-step and N-step returns,
effectively reducing the bias by leveraging value gradients from the learned
Q-value function. Additionally, it adopts entropy regularization and state
initialization mechanisms to encourage exploration during training. We evaluate
ABPT on four representative quadrotor flight tasks. Experimental results
demonstrate that ABPT converges significantly faster and achieves higher
ultimate rewards than existing learning algorithms, particularly in tasks
involving partially differentiable rewards.
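The combination of 0-step and N-step returns can be illustrated with a toy calculation; this sketch is my own simplified blending (the weighting scheme and names are assumptions, not ABPT's exact formulation):

```python
import numpy as np

def mixed_return(rewards, values, gamma=0.99, n=5, alpha=0.5):
    """Blend a 0-step return (pure bootstrapped value) with an N-step
    return (discounted rewards plus a bootstrapped tail value).

    rewards: r_0..r_{T-1};  values: V(s_0)..V(s_T) from a learned critic.
    """
    g0 = values[0]                                   # 0-step return
    n = min(n, len(rewards))
    gn = sum(gamma**k * rewards[k] for k in range(n)) + gamma**n * values[n]
    return alpha * g0 + (1 - alpha) * gn

rewards = np.array([1.0, 1.0, 1.0])
values = np.array([10.0, 9.0, 8.0, 7.0])
print(mixed_return(rewards, values, gamma=1.0, n=3, alpha=0.5))  # 10.0
```

The 0-step term relies only on the learned Q/value function (unbiased by non-differentiable reward terms), while the N-step term keeps BPTT's low-variance reward gradients; blending the two is what mitigates the bias from partially differentiable rewards.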
|
2501.14514
|
PARASIDE: An Automatic Paranasal Sinus Segmentation and Structure
Analysis Tool for MRI
|
cs.CV cs.LG
|
Chronic rhinosinusitis (CRS) is a common and persistent sinus inflammation
that affects 5-12\% of the general population. It significantly impacts
quality of life and is often difficult to assess due to its subjective nature
in clinical evaluation. We introduce PARASIDE, an automatic tool for segmenting
air and soft tissue volumes of the structures of the sinus maxillaris,
frontalis, sphenoidalis and ethmoidalis in T1 MRI. By utilizing that
segmentation, we can quantify feature relations that have been observed only
manually and subjectively before. We performed an exemplary study and showed
both volume and intensity relations between structures and radiology reports.
While the soft tissue segmentation is good, the automated annotations of the
air volumes are excellent. The average intensity over air structures is
consistently below that of the soft tissues, with close-to-perfect separability.
Healthy subjects exhibit lower soft tissue volumes and lower intensities. Our
developed system is the first automated whole-nasal segmentation of 16
structures, and is capable of calculating medically relevant features such as
the Lund-Mackay score.
|
2501.14520
|
Scene Understanding Enabled Semantic Communication with Open Channel
Coding
|
eess.SP cs.CV
|
As communication systems transition from symbol transmission to conveying
meaningful information, sixth-generation (6G) networks emphasize semantic
communication. This approach prioritizes high-level semantic information,
improving robustness and reducing redundancy across modalities like text,
speech, and images. However, traditional semantic communication faces
limitations, including static coding strategies, poor generalization, and
reliance on task-specific knowledge bases that hinder adaptability. To overcome
these challenges, we propose a novel system combining scene understanding,
Large Language Models (LLMs), and open channel coding, named \textbf{OpenSC}.
Traditional systems rely on fixed domain-specific knowledge bases, limiting
their ability to generalize. Our open channel coding approach leverages shared,
publicly available knowledge, enabling flexible, adaptive encoding. This
dynamic system reduces reliance on static task-specific data, enhancing
adaptability across diverse tasks and environments. Additionally, we use scene
graphs for structured semantic encoding, capturing object relationships and
context to improve tasks like Visual Question Answering (VQA). Our approach
selectively encodes key semantic elements, minimizing redundancy and improving
transmission efficiency. Experimental results show significant improvements in
both semantic understanding and efficiency, advancing the potential of
adaptive, generalizable semantic communication in 6G networks.
|
2501.14522
|
Information Age and Correctness for Energy Harvesting Devices with
Random Access
|
cs.IT math.IT
|
We study a large network of energy-harvesting devices that monitor two-state
Markov processes and send status updates to a gateway using the slotted ALOHA
protocol without feedback. We let the devices adjust their transmission
probabilities according to their process state transitions and current battery
levels. Using a Markovian framework, we analyze the average value of a generic
state-dependent penalty function that grows whenever there is a state
estimation error. The age of incorrect information (AoII) is an example of such
a penalty function. We propose an accurate and easy-to-compute approximation for
the average penalty. Numerical results demonstrate the benefits of optimizing
the transmission probabilities to minimize the average penalty. The
average-AoII-minimizing strategy can be highly suboptimal in terms of average
penalty when one of the process states is critical, i.e., entails a high
penalty if wrongly estimated. Furthermore, minimizing the average penalty does
not guarantee a low probability of misdetecting a critical state period.
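To make the penalty concrete: the age of incorrect information grows each slot the gateway's estimate is wrong and resets when it is correct. A toy trajectory-level computation (my own illustration, not the paper's Markovian analysis):

```python
def aoii_trajectory(true_states, est_states):
    """AoII per slot: grows by 1 while the estimate is wrong,
    resets to 0 whenever it matches the true state."""
    age, ages = 0, []
    for s, s_hat in zip(true_states, est_states):
        age = 0 if s == s_hat else age + 1
        ages.append(age)
    return ages

true_s = [0, 0, 1, 1, 1, 0]   # monitored two-state process
est_s  = [0, 1, 1, 0, 0, 0]   # gateway's estimate
ages = aoii_trajectory(true_s, est_s)
print(ages)  # [0, 1, 0, 1, 2, 0]
```

A state-dependent penalty generalizes this by weighting the age according to which state is being misestimated, e.g. a larger weight when the critical state is missed.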
|
2501.14524
|
Training-Free Style and Content Transfer by Leveraging U-Net Skip
Connections in Stable Diffusion 2.*
|
cs.CV
|
Despite significant recent advances in image generation with diffusion
models, their internal latent representations remain poorly understood.
Existing works focus on the bottleneck layer (h-space) of Stable Diffusion's
U-Net or leverage the cross-attention, self-attention, or decoding layers. Our
model, SkipInject, takes advantage of U-Net's skip connections. We conduct
thorough analyses on the role of the skip connections and find that the
residual connections passed by the third encoder block carry most of the
spatial information of the reconstructed image, splitting the content from the
style. We show that injecting the representations from this block can be used
for text-based editing, precise modifications, and style transfer. We compare
our method with state-of-the-art style transfer and image editing methods and
demonstrate that it achieves the best trade-off between content alignment and
structural preservation.
|
2501.14526
|
Robustified Time-optimal Point-to-point Motion Planning and Control
under Uncertainty
|
cs.RO cs.SY eess.SY
|
This paper proposes a novel approach to formulate time-optimal point-to-point
motion planning and control under uncertainty. The approach defines a
robustified two-stage Optimal Control Problem (OCP), in which stage 1, with a
fixed time grid, is seamlessly stitched with stage 2, which features a variable
time grid. Stage 1 optimizes not only the nominal trajectory, but also feedback
gains and corresponding state covariances, which robustify constraints in both
stages. The outcome is a minimized uncertainty in stage 1 and a minimized total
motion time for stage 2, both contributing to the time optimality and safety of
the total motion. A timely replanning strategy is employed to handle changes in
constraints and maintain feasibility, while a tailored iterative algorithm is
proposed for efficient, real-time OCP execution.
|
2501.14528
|
Idiom Detection in Sorani Kurdish Texts
|
cs.CL
|
Idiom detection using Natural Language Processing (NLP) is the computerized
process of recognizing figurative expressions within a text that convey
meanings beyond the literal interpretation of the words. While idiom detection
has seen significant progress across various languages, the Kurdish language
faces a considerable research gap in this area despite the importance of idioms
in tasks like machine translation and sentiment analysis. This study addresses
idiom detection in Sorani Kurdish by approaching it as a text classification
task using deep learning techniques. To tackle this, we developed a dataset
containing 10,580 sentences embedding 101 Sorani Kurdish idioms across diverse
contexts. Using this dataset, we developed and evaluated three deep learning
models: KuBERT-based transformer sequence classification, a Recurrent
Convolutional Neural Network (RCNN), and a BiLSTM model with an attention
mechanism. The evaluations revealed that the transformer model, the fine-tuned
BERT, consistently outperformed the others, achieving nearly 99% accuracy while
the RCNN achieved 96.5% and the BiLSTM 80%. These results highlight the
effectiveness of Transformer-based architectures in low-resource languages like
Kurdish. This research provides a dataset, three optimized models, and insights
into idiom detection, laying a foundation for advancing Kurdish NLP.
|
2501.14531
|
On Hardening DNNs against Noisy Computations
|
cs.LG
|
The success of deep learning has sparked significant interest in designing
computer hardware optimized for the high computational demands of neural
network inference. As further miniaturization of digital CMOS processors
becomes increasingly challenging, alternative computing paradigms, such as
analog computing, are gaining consideration. Particularly for compute-intensive
tasks such as matrix multiplication, analog computing presents a promising
alternative due to its potential for significantly higher energy efficiency
compared to conventional digital technology. However, analog computations are
inherently noisy, which makes it challenging to maintain high accuracy on deep
neural networks. This work investigates the effectiveness of training neural
networks with quantization to increase the robustness against noise.
Experimental results across various network architectures show that
quantization-aware training with constant scaling factors enhances robustness.
We compare these methods with noisy training, which incorporates a noise
injection during training that mimics the noise encountered during inference.
While both methods increase tolerance to noise, noisy training emerges
as the superior approach for achieving robust neural network performance,
especially in complex neural architectures.
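Noisy training, as described, injects inference-like perturbations during training so the learned weights become tolerant to them. A toy sketch assuming additive Gaussian weight noise (the noise model and magnitudes are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, W, noise_std=0.05):
    """One linear layer with additive Gaussian weight noise, mimicking
    the perturbations of an analog compute substrate (toy model)."""
    W_noisy = W + rng.normal(0.0, noise_std, size=W.shape)
    return x @ W_noisy

x = np.ones((1, 4))
W = np.full((4, 2), 0.25)
y_clean = x @ W                    # [[1.0, 1.0]]
y_noisy = noisy_forward(x, W, noise_std=0.1)
print(y_clean, y_noisy)
```

During noisy training the loss is computed on the perturbed forward pass, so gradients push the network toward parameters whose outputs are insensitive to the injected noise.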
|
2501.14533
|
CheapNVS: Real-Time On-Device Narrow-Baseline Novel View Synthesis
|
cs.CV
|
Single-view novel view synthesis (NVS) is a notorious problem due to its
ill-posed nature, and often requires large, computationally expensive
approaches to produce tangible results. In this paper, we propose CheapNVS: a
fully end-to-end approach for narrow baseline single-view NVS based on a novel,
efficient multiple encoder/decoder design trained in a multi-stage fashion.
CheapNVS first approximates the laborious 3D image warping with lightweight
learnable modules that are conditioned on the camera pose embeddings of the
target view, and then performs inpainting on the occluded regions in parallel
to achieve significant performance gains. Once trained on a subset of the Open
Images dataset, CheapNVS outperforms the state-of-the-art despite being 10
times faster and consuming 6% less memory. Furthermore, CheapNVS runs
comfortably in real-time on mobile devices, reaching over 30 FPS on a Samsung
Tab 9+.
|
2501.14534
|
Trick-GS: A Balanced Bag of Tricks for Efficient Gaussian Splatting
|
cs.CV
|
Gaussian splatting (GS) for 3D reconstruction has become quite popular due to
its fast training and inference speeds and high-quality reconstructions. However,
GS-based reconstructions generally consist of millions of Gaussians, which
makes them hard to use on computationally constrained devices such as
smartphones. In this paper, we first propose a principled analysis of advances
in efficient GS methods. Then, we propose Trick-GS, which is a careful
combination of several strategies including (1) progressive training with
resolution, noise and Gaussian scales, (2) learning to prune and mask
primitives and SH bands by their significance, and (3) accelerated GS training
framework. Trick-GS takes a large step towards resource-constrained GS, where
faster run-times, smaller models, and faster convergence are of paramount
concern. Our results on three datasets show that Trick-GS achieves up to 2x
faster training, 40x smaller disk size and 2x faster rendering speed compared
to vanilla GS, while having comparable accuracy.
|
2501.14535
|
Rethinking Encoder-Decoder Flow Through Shared Structures
|
cs.CV cs.LG
|
Dense prediction tasks have enjoyed growing complexity in encoder
architectures; decoders, however, have remained largely the same, relying on
individual blocks that decode intermediate feature maps sequentially. We introduce
banks, shared structures that are used by each decoding block to provide
additional context in the decoding process. These structures, through applying
them via resampling and feature fusion, improve performance on depth estimation
for state-of-the-art transformer-based architectures on natural and synthetic
images whilst training on large-scale datasets.
|
2501.14539
|
A Recurrent Spiking Network with Hierarchical Intrinsic Excitability
Modulation for Schema Learning
|
cs.NE cs.LG
|
Schema, a form of structured knowledge that promotes transfer learning, is
attracting growing attention in both neuroscience and artificial intelligence
(AI). Current schema research in neural computation is largely constrained to a
single behavioral paradigm and relies heavily on recurrent neural networks
(RNNs), which lack neural plausibility and biological interpretability. To
address these limitations, this work first constructs a generalized behavioral
paradigm framework for schema learning and introduces three novel cognitive
tasks, thus supporting a comprehensive schema exploration. Second, we propose a
new model using recurrent spiking neural networks with hierarchical intrinsic
excitability modulation (HM-RSNNs). The top level of the model selects
excitability properties for task-specific demands, while the bottom level
fine-tunes these properties for intra-task problems. Finally, extensive
visualization analyses of HM-RSNNs are conducted to showcase their
computational advantages, track the intrinsic excitability evolution during
schema learning, and examine neural coordination differences across tasks.
Biologically inspired lesion studies further uncover task-specific
distributions of intrinsic excitability within schemas. Experimental results
show that HM-RSNNs significantly outperform RSNN baselines across all tasks and
exceed RNNs in three novel cognitive tasks. Additionally, HM-RSNNs offer deeper
insights into neural dynamics underlying schema learning.
|
2501.14540
|
VERUS-LM: a Versatile Framework for Combining LLMs with Symbolic
Reasoning
|
cs.AI
|
A recent approach to neurosymbolic reasoning is to explicitly combine the
strengths of large language models (LLMs) and symbolic solvers to tackle
complex reasoning tasks. However, current approaches face significant
limitations, including poor generalizability due to task-specific prompts,
inefficiencies caused by the lack of separation between knowledge and queries,
and restricted inferential capabilities. These shortcomings hinder their
scalability and applicability across diverse domains. In this paper, we
introduce VERUS-LM, a novel framework designed to address these challenges.
VERUS-LM employs a generic prompting mechanism, clearly separates domain
knowledge from queries, and supports a wide range of different logical
reasoning tasks. This framework enhances adaptability, reduces computational
cost, and allows for richer forms of reasoning, such as optimization and
constraint satisfaction. We show that our approach succeeds in diverse
reasoning on a novel dataset, markedly outperforming LLMs. Additionally, our
system achieves competitive results on common reasoning benchmarks when
compared to other state-of-the-art approaches, and significantly surpasses them
on the difficult AR-LSAT dataset. By pushing the boundaries of hybrid
reasoning, VERUS-LM represents a significant step towards more versatile
neurosymbolic AI systems.
|
2501.14543
|
Reducing Action Space for Deep Reinforcement Learning via Causal Effect
Estimation
|
cs.LG
|
Intelligent decision-making within large and redundant action spaces remains
challenging in deep reinforcement learning. Considering similar but ineffective
actions at each step can lead to repetitive and unproductive trials. Existing
methods attempt to improve agent exploration by reducing or penalizing
redundant actions, yet they fail to provide quantitative and reliable evidence
to determine redundancy. In this paper, we propose a method to improve
exploration efficiency by estimating the causal effects of actions. Unlike
prior methods, our approach offers quantitative results regarding the causality
of actions for one-step transitions. We first pre-train an inverse dynamics
model to serve as prior knowledge of the environment. Subsequently, we classify
actions across the entire action space at each time step and estimate the
causal effect of each action to suppress redundant actions during exploration.
We provide a theoretical analysis to demonstrate the effectiveness of our
method and present empirical results from simulations in environments with
redundant actions to evaluate its performance. Our implementation is available
at https://github.com/agi-brain/cee.git.
|
2501.14544
|
Distributed Conformal Prediction via Message Passing
|
cs.LG cs.AI stat.ML
|
Post-hoc calibration of pre-trained models is critical for ensuring reliable
inference, especially in safety-critical domains such as healthcare. Conformal
Prediction (CP) offers a robust post-hoc calibration framework, providing
distribution-free statistical coverage guarantees for prediction sets by
leveraging held-out datasets. In this work, we address a decentralized setting
where each device has limited calibration data and can communicate only with
its neighbors over an arbitrary graph topology. We propose two
message-passing-based approaches for achieving reliable inference via CP:
quantile-based distributed conformal prediction (Q-DCP) and histogram-based
distributed conformal prediction (H-DCP). Q-DCP employs distributed quantile
regression enhanced with tailored smoothing and regularization terms to
accelerate convergence, while H-DCP uses a consensus-based histogram estimation
approach. Through extensive experiments, we investigate the trade-offs between
hyperparameter tuning requirements, communication overhead, coverage
guarantees, and prediction set sizes across different network topologies.
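For orientation, the centralized split conformal baseline that Q-DCP and H-DCP generalize reduces to a single corrected quantile of held-out nonconformity scores; a standard sketch (not the paper's distributed algorithms):

```python
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    """(1 - alpha) quantile of calibration scores with the standard
    finite-sample correction used in split conformal prediction."""
    n = len(cal_scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q_level, method="higher")

# Nonconformity scores, e.g. absolute residuals on held-out data.
cal_scores = np.array([0.1, 0.5, 0.2, 0.8, 0.3, 0.4, 0.6, 0.7, 0.9])
q = conformal_quantile(cal_scores, alpha=0.2)
y_hat = 2.0                        # point prediction for a new input
interval = (y_hat - q, y_hat + q)  # marginal 80% coverage guarantee
print(q, interval)
```

Q-DCP and H-DCP address the harder problem of approximating this quantile when the calibration scores are scattered across devices that can only exchange messages with their graph neighbors.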
|
2501.14546
|
Leveraging ChatGPT's Multimodal Vision Capabilities to Rank Satellite
Images by Poverty Level: Advancing Tools for Social Science Research
|
cs.CV cs.AI
|
This paper investigates the novel application of Large Language Models (LLMs)
with vision capabilities to analyze satellite imagery for village-level poverty
prediction. Although LLMs were originally designed for natural language
understanding, their adaptability to multimodal tasks, including geospatial
analysis, has opened new frontiers in data-driven research. By leveraging
advancements in vision-enabled LLMs, we assess their ability to provide
interpretable, scalable, and reliable insights into human poverty from
satellite images. Using a pairwise comparison approach, we demonstrate that
ChatGPT can rank satellite images based on poverty levels with accuracy
comparable to domain experts. These findings highlight both the promise and the
limitations of LLMs in socioeconomic research, providing a foundation for their
integration into poverty assessment workflows. This study contributes to the
ongoing exploration of unconventional data sources for welfare analysis and
opens pathways for cost-effective, large-scale poverty monitoring.
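A pairwise-comparison judge can be aggregated into a ranking in several ways; this minimal sketch counts "poorer" wins across all pairs (the aggregation rule and toy judge are my own assumptions, not the paper's protocol):

```python
from itertools import combinations

def rank_from_pairwise(items, poorer):
    """Rank items by pairwise 'wins'. `poorer(a, b)` is a judge (e.g. a
    vision LLM shown two satellite images) returning the poorer-looking
    item; most wins = ranked poorest."""
    wins = {it: 0 for it in items}
    for a, b in combinations(items, 2):
        wins[poorer(a, b)] += 1
    return sorted(items, key=lambda it: wins[it], reverse=True)

# Toy judge backed by a hidden wealth score (lower = poorer).
wealth = {"v1": 3.0, "v2": 1.0, "v3": 2.0}
ranking = rank_from_pairwise(list(wealth), lambda a, b: min(a, b, key=wealth.get))
print(ranking)  # ['v2', 'v3', 'v1']
```

With n villages this needs O(n^2) judge calls; more sample-efficient aggregations (e.g. Bradley-Terry fitting on a subset of pairs) follow the same pattern.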
|
2501.14548
|
Large-scale and Fine-grained Vision-language Pre-training for Enhanced
CT Image Understanding
|
cs.CV
|
Artificial intelligence (AI) shows great potential in assisting radiologists
to improve the efficiency and accuracy of medical image interpretation and
diagnosis. However, a versatile AI model requires large-scale data and
comprehensive annotations, which are often impractical in medical settings.
Recent studies leverage radiology reports as a naturally high-quality
supervision for medical images, using contrastive language-image pre-training
(CLIP) to develop language-informed models for radiological image
interpretation. Nonetheless, these approaches typically contrast entire images
with reports, neglecting the local associations between imaging regions and
report sentences, which may undermine model performance and interpretability.
In this paper, we propose a fine-grained vision-language model (fVLM) for
anatomy-level CT image interpretation. Specifically, we explicitly match
anatomical regions of CT images with corresponding descriptions in radiology
reports and perform contrastive pre-training for each anatomy individually.
Fine-grained alignment, however, faces considerable false-negative challenges,
mainly from the abundance of anatomy-level healthy samples and similarly
diseased abnormalities. To tackle this issue, we propose identifying false
negatives of both normal and abnormal samples and calibrating contrastive
learning from patient-level to disease-aware pairing. We curated the largest CT
dataset to date, comprising imaging and report data from 69,086 patients, and
conducted a comprehensive evaluation of 54 major and important disease
diagnosis tasks across 15 main anatomies. Experimental results demonstrate the
substantial potential of fVLM in versatile medical image interpretation. In the
zero-shot classification task, we achieved an average AUC of 81.3% on 54
diagnosis tasks, surpassing CLIP and supervised methods by 12.9% and 8.0%,
respectively.
|
2501.14551
|
Fairness of Deep Ensembles: On the interplay between per-group task
difficulty and under-representation
|
cs.LG
|
Ensembling is commonly regarded as an effective way to improve the general
performance of models in machine learning, while also increasing the robustness
of predictions. When it comes to algorithmic fairness, heterogeneous ensembles,
composed of multiple model types, have been employed to mitigate biases in
terms of demographic attributes such as sex, age or ethnicity. Moreover, recent
work has shown how in multi-class problems even simple homogeneous ensembles
may favor performance of the worst-performing target classes. While homogeneous
ensembles are simpler to implement in practice, it is not yet clear whether
their benefits translate to groups defined not in terms of their target class,
but in terms of demographic or protected attributes, hence improving fairness.
In this work we show how this simple and straightforward method is indeed able
to mitigate disparities, particularly benefiting under-performing subgroups.
Interestingly, this can be achieved without sacrificing overall performance,
which is a common trade-off observed in bias mitigation strategies. Moreover,
we analyzed the interplay between two factors which may result in biases:
sub-group under-representation and the inherent difficulty of the task for each
group. These results revealed that, contrary to popular assumptions, having
balanced datasets may be suboptimal if the task difficulty varies between
subgroups. Indeed, we found that a perfectly balanced dataset may hurt both the
overall performance and the gap between groups. This highlights the importance
of considering the interaction between multiple forces at play in fairness.
|
2501.14557
|
Optimizing Grasping Precision for Industrial Pick-and-Place Tasks
Through a Novel Visual Servoing Approach
|
cs.RO
|
The integration of robotic arm manipulators into industrial manufacturing
lines has become common, thanks to their efficiency and effectiveness in
executing specific tasks. With advancements in camera technology, visual
sensors and perception systems have been incorporated to address more complex
operations. This study introduces a novel visual servoing control system
designed for robotic operations in challenging environments, where accurate
object pose estimation is hindered by factors such as vibrations, tool path
deviations, and machining marks. To overcome these obstacles, our solution
focuses on enhancing the accuracy of picking and placing tasks, ensuring
reliable performance across various scenarios. This is accomplished by a novel
visual servoing method based on the integration of two complementary
methodologies: a technique for object localization and a separate approach for
precise control through visual feedback, leveraging their strengths to address
the challenges posed by the industrial context and thereby improving overall
grasping accuracy. Our method employs feedback from perception sensors to adjust
the control loop efficiently, enabling the robotic system to adeptly pick and
place objects. We have introduced a controller capable of seamlessly managing
the detection and manipulation of various shapes and types of objects within an
industrial context, addressing numerous challenges that arise in such
environments.
|
2501.14568
|
Hybrid Quantum-Classical Multi-Agent Pathfinding
|
cs.AI quant-ph
|
Multi-Agent Path Finding (MAPF) focuses on determining conflict-free paths
for multiple agents navigating through a shared space to reach specified goal
locations. This problem becomes computationally challenging, particularly when
handling large numbers of agents, as frequently encountered in practical
applications like coordinating autonomous vehicles. Quantum computing (QC) is a
promising candidate for overcoming such limits. However, current quantum
hardware is still in its infancy and thus limited in terms of computing power
and error robustness. In this work, we present the first optimal hybrid
quantum-classical MAPF algorithm, which is based on branch-and-cut-and-price. QC
is integrated by iteratively solving QUBO problems, based on conflict graphs.
Experiments on actual quantum hardware and results on benchmark data suggest
that our approach dominates previous QUBO formulations and baseline MAPF
solvers.
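The QUBO-from-conflict-graph step can be sketched generically. The following is an illustrative set-partitioning encoding, not the paper's exact formulation; the function names, the penalty weight, and the toy instance are all assumptions:

```python
import itertools

def mapf_qubo(agent_paths, costs, conflicts, penalty=10.0):
    """Illustrative QUBO for path selection: binary variable x_i picks
    candidate path i; a one-hot penalty forces one path per agent and a
    pairwise penalty discourages selecting two conflicting paths."""
    n = len(costs)
    Q = {(i, i): float(costs[i]) for i in range(n)}
    for paths in agent_paths:  # one-hot: penalty * (1 - sum x)^2, constant dropped
        for i in paths:
            Q[(i, i)] -= penalty
        for i, j in itertools.combinations(sorted(paths), 2):
            Q[(i, j)] = Q.get((i, j), 0.0) + 2 * penalty
    for i, j in conflicts:  # edges of the conflict graph
        key = (min(i, j), max(i, j))
        Q[key] = Q.get(key, 0.0) + penalty
    return Q

def solve_qubo_brute_force(Q, n):
    """Classical stand-in for the quantum solver: enumerate all assignments."""
    energy = lambda x: sum(v * x[i] * x[j] for (i, j), v in Q.items())
    return min(itertools.product([0, 1], repeat=n), key=energy)
```

For two agents with candidate paths {0, 1} and {2, 3}, costs [1, 2, 1, 3], and a conflict between paths 0 and 2, the minimizer selects paths 1 and 2: a conflict-free, one-path-per-agent assignment.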
|
2501.14570
|
coverforest: Conformal Predictions with Random Forest in Python
|
stat.ML cs.LG stat.CO
|
Conformal prediction provides a framework for uncertainty quantification,
specifically in the forms of prediction intervals and sets with
distribution-free guaranteed coverage. While recent cross-conformal techniques
such as CV+ and Jackknife+-after-bootstrap achieve better data efficiency than
traditional split conformal methods, they incur substantial computational costs
due to required pairwise comparisons between training and test samples'
out-of-bag scores. Observing that these methods naturally extend from ensemble
models, particularly random forests, we leverage existing optimized random
forest implementations to enable efficient cross-conformal predictions.
We present coverforest, a Python package that implements efficient conformal
prediction methods specifically optimized for random forests. coverforest
supports both regression and classification tasks through various conformal
prediction methods, including split conformal, CV+, Jackknife+-after-bootstrap,
and adaptive prediction sets. Our package leverages parallel computing and
Cython optimizations to speed up out-of-bag calculations. Our experiments
demonstrate that coverforest's predictions achieve the desired level of
coverage. In addition, its training and prediction times can be faster than an
existing implementation by 2--9 times. The source code for coverforest is
hosted on GitHub at https://github.com/donlapark/coverforest.
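coverforest's own API is not reproduced here; as a minimal sketch of the split-conformal idea it builds on (the function name, toy predictor, and alpha are illustrative assumptions):

```python
import math

def split_conformal_interval(predict, X_cal, y_cal, x_new, alpha=0.2):
    """Split conformal regression: calibrate absolute residuals on held-out
    data, then widen the point prediction by their (1 - alpha) quantile."""
    scores = sorted(abs(y - predict(x)) for x, y in zip(X_cal, y_cal))
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))  # finite-sample corrected rank
    q = scores[min(k, n) - 1]
    pred = predict(x_new)
    return pred - q, pred + q
```

Cross-conformal methods such as CV+ replace the single calibration split with out-of-bag residuals from the forest's own bootstrap, which is what makes the pairwise out-of-bag comparisons expensive.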
|
2501.14573
|
A Transferable Physics-Informed Framework for Battery Degradation
Diagnosis, Knee-Onset Detection and Knee Prediction
|
eess.SY cs.SY
|
The techno-economic and safety concerns of battery capacity knee occurrence
call for developing online knee detection and prediction methods as an advanced
battery management system (BMS) function. To address this, a transferable
physics-informed framework that consists of a histogram-based feature
engineering method, a hybrid physics-informed model, and a fine-tuning
strategy, is proposed for online battery degradation diagnosis and knee-onset
detection. The hybrid model is first developed and evaluated using a
scenario-aware pipeline in protocol cycling scenarios and then fine-tuned to
create a local model deployed in a dynamic cycling scenario. A 2D
histogram-based feature set is found to be the best choice in both source and
target scenarios. The fine-tuning strategy is proven to be effective in
improving battery degradation mode estimation and degradation phase detection
performance in the target scenario. Moreover, a strong linear correlation was
found between the identified knee-onset and knee points. As a result, advanced
BMS functions, such as online degradation diagnosis and prognosis, online
knee-onset detection and knee prediction, aging-aware battery classification,
and second-life repurposing, can be enabled through a battery performance
digital twin in the cloud.
|
2501.14576
|
Dynamic Operation and Control of a Multi-Stack Alkaline Water
Electrolysis System with Shared Gas Separators and Lye Circulation: A
Model-Based Study
|
math.OC cs.SY eess.SY
|
An emerging approach for large-scale hydrogen production using renewable
energy is to integrate multiple alkaline water electrolysis (AWE) stacks into a
single balance of plant (BoP) system, sharing components such as gas-lye
separation and lye circulation. This configuration, termed the $N$-in-1 AWE
system, packs $N$ stacks into a modular system, reducing land requirements, the
complexity of plant topology, and overall capital costs. However, the coupling
of these stacks through the shared BoP introduces challenges in dynamic
operation under varying energy inputs, making their performance unclear
compared to traditional 1-in-1 systems. To address this, we develop a
state-space model of the $N$-in-1 AWE system, capturing the dynamic behaviors
of lye circulation, temperature, and HTO impurity, and their impact on energy
conversion efficiency. We then propose a nonlinear model predictive controller
(NMPC) to jointly optimize inter-stack electrolytic current distribution,
lye flow, and cooling, enabling the system to dynamically track varying load
commands while maximizing efficiency, stabilizing temperature, and limiting HTO
impurity accumulation. Simulation studies on a 4,000 Nm$^3$/h-rated 4-in-1
system verify the proposed controller under dynamic operation. Comparison with
4 independent 1-in-1 systems reveals that, with proper control, the $N$-in-1
configuration offers comparable flexibility in accommodating real-world wind
power inputs. The average differences in the root-mean-square errors (RMSEs)
for load-tracking and stack temperature stabilization, and in specific energy
consumption, are below 0.014 MW, 2.356 K, and 0.003 kWh/Nm$^3$, respectively.
|
2501.14577
|
ZETA: Leveraging Z-order Curves for Efficient Top-k Attention
|
cs.LG cs.AI
|
Over recent years, the Transformer has become a fundamental building block
for sequence modeling architectures. Yet at its core is the use of
self-attention, whose memory and computational cost grow quadratically with the
sequence length $N$, rendering it prohibitively expensive for long sequences. A
promising approach is top-$k$ attention, which selects only the $k$ most
relevant tokens and achieves performance comparable to vanilla self-attention
while significantly reducing space and computational demands. However, causal
masks require the current query token to only attend to past tokens, preventing
the existing top-$k$ attention method from efficiently searching for the most
relevant tokens in parallel, thereby limiting training efficiency. In this
work, we propose ZETA, leveraging \textbf{Z}-Order Curves for
\textbf{E}fficient \textbf{T}op-$k$ \textbf{A}ttention, to enable parallel
querying of past tokens for entire sequences in $\mathcal{O}(N \log N)$ space
and time complexity. We first theoretically show that the
choice of key and query dimensions involves a trade-off between the curse of
dimensionality and the preservation of relative distances after projection. In
light of this insight, we propose reducing the dimensionality of keys and
queries in contrast to values and further leverage $Z$-order curves to map
low-dimensional keys and queries into \emph{one}-dimensional space, which
permits parallel sorting, thereby largely improving the efficiency for top-$k$
token selection. Experimental results demonstrate that ZETA matches the
performance of standard attention on the synthetic \textsc{Multi-Query
Associative Recall} task and outperforms attention and its variants on
\textsc{Long Range Arena} and \textsc{WikiText-103} language modeling.
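The Z-order mapping itself is simple to illustrate: interleave the bits of quantized low-dimensional coordinates into a single integer whose sort order roughly preserves spatial locality. The 2D case, the 16-bit quantization, and the function name below are assumptions; ZETA's actual projection and quantization are not shown:

```python
def morton_code_2d(x, y, bits=16):
    """Interleave the bits of two non-negative integer coordinates into one
    Z-order (Morton) code: x occupies the even bit positions, y the odd
    ones, so that sorting by the code groups nearby 2D points together."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code
```

Because the codes are one-dimensional integers, the top-$k$ nearest keys for every query can be found with an ordinary parallel sort rather than a per-token search.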
|
2501.14579
|
Knowledge Graphs Construction from Criminal Court Appeals: Insights from
the French Cassation Court
|
cs.IR
|
Despite growing interest, accurately and reliably representing unstructured
data, such as court decisions, in a structured form remains a challenge.
Recent advancements in generative AI applied to language modeling have enabled
the transformation of text into knowledge graphs, unlocking new opportunities for
analysis and modeling. This paper presents a framework for constructing
knowledge graphs from appeals to the French Cassation Court. The framework
includes a domain-specific ontology and a derived dataset, offering a
foundation for structured legal data representation and analysis.
|
2501.14586
|
A sub-structuring approach for model reduction of frictionally clamped
thin-walled structures
|
eess.SY cs.SY
|
Thin-walled structures clamped by friction joints, such as aircraft skin
panels, are exposed to bending-stretching coupling and frictional contact. We
propose an original sub-structuring approach, where the system is divided into
thin-walled and support regions, so that geometrically nonlinear behavior is
relevant only in the former, and nonlinear contact behavior only in the latter.
This makes it possible to derive reduced component models, in principle, with available
techniques. The Hurty-/Craig-Bampton method, combined with an interface
reduction relying on an orthogonal polynomial series, is used to construct the
reduction basis for each component. To model geometrically nonlinear behavior,
implicit condensation is used, where an original, engineering-oriented
proposition is made for the delicate scaling of the static load cases required
to estimate the coefficients of the nonlinear terms. The proposed method is
validated and its computational performance is assessed for the example of a
plate with frictional clamping, using finite element analysis as reference. The
numerical results shed light on an interesting mutual interaction: the extent
of geometric hardening is limited by the reduced boundary stiffness when more
sliding occurs in the clamping. On the other hand, the frictional dissipation
is increased by the tangential loading induced by membrane stretching.
|
2501.14587
|
Visual Localization via Semantic Structures in Autonomous Photovoltaic
Power Plant Inspection
|
cs.CV cs.RO
|
Inspection systems utilizing unmanned aerial vehicles (UAVs) equipped with
thermal cameras are increasingly popular for the maintenance of photovoltaic
(PV) power plants. However, automation of the inspection task is a challenging
problem as it requires precise navigation to capture images from optimal
distances and viewing angles.
This paper presents a novel localization pipeline that directly integrates PV
module detection with UAV navigation, allowing precise positioning during
inspection. Detections are used to identify the power plant structures in the
image and associate these with the power plant model. We define visually
recognizable anchor points for the initial association and use object tracking
to discern global associations. We present three distinct methods for visual
segmentation of PV modules based on traditional computer vision, deep learning,
and their fusion, and we evaluate their performance in relation to the proposed
localization pipeline.
The presented methods were verified and evaluated using custom aerial
inspection data sets, demonstrating their robustness and applicability for
real-time navigation. Additionally, we evaluate the influence of the power
plant model's precision on the localization methods.
|
2501.14588
|
Data Assetization via Resources-decoupled Federated Learning
|
cs.LG
|
With the development of the digital economy, data is increasingly recognized
as an essential resource for both work and life. However, due to privacy
concerns, data owners tend to maximize the value of data through the
circulation of information rather than direct data transfer. Federated learning
(FL) provides an effective approach to collaboratively training models while
preserving privacy. However, as model parameters and training data grow, there
are not only real differences in data resources between different data owners,
but also mismatches between data and computing resources. These challenges lead
to inadequate collaboration among data owners, compute centers, and model
owners, reducing the global utility of the three parties and the effectiveness
of data assetization. In this work, we first propose a framework for
resource-decoupled FL involving three parties. Then, we design a Tripartite
Stackelberg Model and theoretically analyze the Stackelberg-Nash equilibrium
(SNE) for participants to optimize global utility. Next, we propose the
Quality-aware Dynamic Resources-decoupled FL algorithm (QD-RDFL), in which we
derive and solve the optimal strategies of all parties to achieve SNE using
backward induction. We also design a dynamic optimization mechanism to improve
the optimal strategy profile by evaluating the contribution of data quality
from data owners to the global model during real training. Finally, our
extensive experiments demonstrate that our method effectively encourages the
linkage of the three parties involved, maximizing the global utility and value
of data assets.
|
2501.14592
|
Improved Vessel Segmentation with Symmetric Rotation-Equivariant U-Net
|
eess.IV cs.CV cs.LG
|
Automated segmentation plays a pivotal role in medical image analysis and
computer-assisted interventions. Despite the promising performance of existing
methods based on convolutional neural networks (CNNs), they neglect useful
equivariant properties for images, such as rotational and reflection
equivariance. This limitation can decrease performance and lead to inconsistent
predictions, especially in applications like vessel segmentation where explicit
orientation is absent. While existing equivariant learning approaches attempt
to mitigate these issues, they substantially increase learning cost, model
size, or both. To overcome these challenges, we propose a novel application of
an efficient symmetric rotation-equivariant (SRE) convolutional (SRE-Conv)
kernel implementation to the U-Net architecture, to learn rotation and
reflection-equivariant features, while also reducing the model size
dramatically. We validate the effectiveness of our method through improved
segmentation performance on retina vessel fundus imaging. Our proposed SRE
U-Net not only significantly surpasses standard U-Net in handling rotated
images, but also outperforms existing equivariant learning methods and does so
with a reduced number of trainable parameters and smaller memory cost. The code
is available at https://github.com/OnofreyLab/sre_conv_segm_isbi2025.
|
2501.14593
|
Geometric Mean Improves Loss For Few-Shot Learning
|
cs.CV
|
Few-shot learning (FSL) is a challenging task in machine learning, demanding
a model to render discriminative classification by using only a few labeled
samples. In the FSL literature, deep models are trained via metric learning to
provide a metric in a feature space that generalizes well to classifying
samples of novel classes; in that space, even a small number of labeled
training examples can construct an effective classifier. In
this paper, we propose a novel FSL loss based on \emph{geometric mean} to embed
a discriminative metric into deep features. In contrast to other losses, such
as those using the arithmetic mean in softmax-based formulations, the proposed
method leverages the geometric mean to aggregate pair-wise relationships among
samples, enhancing the discriminative metric across class categories. The
proposed loss is not only simple in form but also thoroughly analyzed
theoretically, revealing characteristics favorable for learning a feature
metric in FSL. In the experiments on few-shot image
classification tasks, the method produces competitive performance in comparison
to the other losses.
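The paper's exact loss is not reproduced here, but the core contrast is easy to demonstrate with toy values: the geometric mean of pairwise probabilities (equivalently, the arithmetic mean of negative log-probabilities) is dragged down by any single poorly matched pair, whereas the arithmetic mean is not:

```python
import math

def arith_mean(ps):
    return sum(ps) / len(ps)

def geo_mean(ps):
    """Geometric mean, computed in log space for numerical stability."""
    return math.exp(sum(math.log(p) for p in ps) / len(ps))

def geo_mean_loss(pairwise_probs):
    """-log of the geometric mean = average negative log-likelihood over
    pairs, so one near-zero pairwise probability dominates the loss."""
    return -math.log(geo_mean(pairwise_probs))
```

This sensitivity to the worst-matched pair is one intuition for why geometric-mean aggregation can sharpen class separation relative to arithmetic-mean aggregation.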
|
2501.14600
|
On the Homophily of Heterogeneous Graphs: Understanding and Unleashing
|
cs.SI
|
Homophily, the tendency of similar nodes to connect, is a fundamental
phenomenon in network science and a critical factor in the performance of graph
neural networks (GNNs). While existing studies primarily explore homophily in
homogeneous graphs, where nodes share the same type, real-world networks are
often more accurately modeled as heterogeneous graphs (HGs) with diverse node
types and intricate cross-type interactions. This structural diversity
complicates the analysis of homophily, as traditional homophily metrics fail to
account for distinct label spaces across node types. To address this
limitation, we introduce the Cross-Type Homophily Ratio, a novel metric that
quantifies homophily based on the similarity of target information across
different node types. Furthermore, we propose Cross-Type Homophily-guided
Heterogeneous Graph Pruning, a method designed to selectively remove
low-homophily cross-type edges, thereby enhancing the Cross-Type Homophily Ratio
and boosting the performance of heterogeneous graph neural networks (HGNNs).
Extensive experiments on five real-world HG datasets validate the effectiveness
of our approach, which delivers up to 13.36% average relative performance
improvement for HGNNs, offering a fresh perspective on cross-type homophily in
heterogeneous graph learning.
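A simplified version of the metric and the pruning step can be sketched as follows. The paper measures similarity of "target information" across types; here a generic per-edge similarity function and a fixed threshold stand in for that, and all names are assumptions:

```python
def cross_type_homophily_ratio(edges, node_type, sim, threshold=0.5):
    """Fraction of cross-type edges whose endpoints are 'similar enough'
    (same-type edges are ignored by this metric)."""
    cross = [(u, v) for u, v in edges if node_type[u] != node_type[v]]
    if not cross:
        return 1.0
    return sum(sim(u, v) >= threshold for u, v in cross) / len(cross)

def prune_cross_type_edges(edges, node_type, sim, threshold=0.5):
    """Keep all same-type edges; drop cross-type edges with low similarity,
    which raises the cross-type homophily ratio of the remaining graph."""
    return [(u, v) for u, v in edges
            if node_type[u] == node_type[v] or sim(u, v) >= threshold]
```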
|
2501.14603
|
Age and Power Minimization via Meta-Deep Reinforcement Learning in UAV
Networks
|
cs.LG cs.AI
|
Age-of-information (AoI) and transmission power are crucial performance
metrics in low-energy wireless networks, where information freshness is of
paramount importance. This study examines a power-limited internet of things
(IoT) network supported by a flying unmanned aerial vehicle (UAV) that collects
data. Our aim is to optimize the UAV flight trajectory and scheduling policy to
minimize a varying combination of AoI and transmission power. To tackle this
variation, this paper proposes a meta-deep reinforcement learning (RL) approach
that integrates deep Q-networks (DQNs) with model-agnostic meta-learning
(MAML). DQNs determine optimal UAV decisions, while MAML enables scalability
across varying objective functions. Numerical results indicate that the
proposed algorithm converges faster and adapts to new objectives more
effectively than traditional deep RL methods, achieving minimal AoI and
transmission power overall.
|
2501.14604
|
Inverse Evolution Data Augmentation for Neural PDE Solvers
|
cs.LG
|
Neural networks have emerged as promising tools for solving partial
differential equations (PDEs), particularly through the application of neural
operators. Training neural operators typically requires a large amount of
training data to ensure accuracy and generalization. In this paper, we propose
a novel data augmentation method specifically designed for training neural
operators on evolution equations. Our approach utilizes insights from inverse
processes of these equations to efficiently generate data from random
initialization that are combined with original data. To further enhance the
accuracy of the augmented data, we introduce high-order inverse evolution
schemes. These schemes consist of only a few explicit computation steps, yet
the resulting data pairs can be proven to satisfy the corresponding implicit
numerical schemes. In contrast to traditional PDE solvers that require small
time steps or implicit schemes to guarantee accuracy, our data augmentation
method employs explicit schemes with relatively large time steps, thereby
significantly reducing computational costs. Accuracy and efficacy experiments
confirm the effectiveness of our approach. Additionally, we validate our
approach through experiments with the Fourier Neural Operator and UNet on three
common evolution equations: Burgers' equation, the Allen-Cahn equation,
and the Navier-Stokes equations. The results demonstrate a significant
improvement in the performance and robustness of the Fourier Neural Operator
when coupled with our inverse evolution data augmentation method.
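The construction is easy to sketch for the 1D heat equation: starting from a random "future" state, one explicit inverse step produces a data pair that exactly satisfies the implicit backward-Euler scheme. The choice of equation, the periodic boundaries, and the step sizes are illustrative assumptions; the paper's higher-order schemes are not shown:

```python
def laplacian_1d(u, dx):
    """Second-order central difference with periodic boundaries."""
    n = len(u)
    return [(u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n]) / dx**2
            for i in range(n)]

def inverse_evolution_pair(u_next, dt, dx):
    """One explicit inverse step of u_t = u_xx: given u_next, compute
    u_prev = u_next - dt * Lap(u_next). The pair then satisfies the
    implicit scheme (u_next - u_prev)/dt = Lap(u_next) by construction."""
    lap = laplacian_1d(u_next, dx)
    u_prev = [x - dt * l for x, l in zip(u_next, lap)]
    return u_prev, u_next
```

Because the expensive implicit solve is never performed, pairs generated this way can be mixed into the neural-operator training set at negligible cost.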
|
2501.14605
|
3DLabelProp: Geometric-Driven Domain Generalization for LiDAR Semantic
Segmentation in Autonomous Driving
|
cs.CV
|
Domain generalization aims to find ways for deep learning models to maintain
their performance despite significant domain shifts between training and
inference datasets. This is particularly important for models that need to be
robust or are costly to train. LiDAR perception in autonomous driving is
impacted by both of these concerns, leading to the emergence of various
approaches. This work addresses the challenge by proposing a geometry-based
approach, leveraging the sequential structure of LiDAR sensors, which sets it
apart from the learning-based methods commonly found in the literature. The
proposed method, called 3DLabelProp, is applied to the task of LiDAR Semantic
Segmentation (LSS). Through extensive experimentation on seven datasets, it is
demonstrated to be a state-of-the-art approach, outperforming both naive and
other domain generalization methods.
|
2501.14607
|
ReferDINO: Referring Video Object Segmentation with Visual Grounding
Foundations
|
cs.CV
|
Referring video object segmentation (RVOS) aims to segment target objects
throughout a video based on a text description. Despite notable progress in
recent years, current RVOS models still struggle to handle complicated object
descriptions due to their limited video-language understanding. To address this
limitation, we present \textbf{ReferDINO}, an end-to-end RVOS model that
inherits strong vision-language understanding from the pretrained visual
grounding foundation models, and is further endowed with effective temporal
understanding and object segmentation capabilities. In ReferDINO, we contribute
three technical innovations for effectively adapting the foundation models to
RVOS: 1) an object-consistent temporal enhancer that capitalizes on the
pretrained object-text representations to enhance temporal understanding and
object consistency; 2) a grounding-guided deformable mask decoder that
integrates text and grounding conditions to generate accurate object masks; 3)
a confidence-aware query pruning strategy that significantly improves the
object decoding efficiency without compromising performance. We conduct
extensive experiments on five public RVOS benchmarks to demonstrate that our
proposed ReferDINO outperforms state-of-the-art methods significantly. Project
page: \url{https://isee-laboratory.github.io/ReferDINO}
|
2501.14610
|
Leveraging Spatial Cues from Cochlear Implant Microphones to Efficiently
Enhance Speech Separation in Real-World Listening Scenes
|
cs.SD cs.AI eess.AS
|
Speech separation approaches for single-channel, dry speech mixtures have
significantly improved. However, real-world spatial and reverberant acoustic
environments remain challenging, limiting the effectiveness of these approaches
for assistive hearing devices like cochlear implants (CIs). To address this, we
quantify the impact of real-world acoustic scenes on speech separation and
explore how spatial cues can enhance separation quality efficiently. We analyze
performance based on implicit spatial cues (inherent in the acoustic input and
learned by the model) and explicit spatial cues (manually calculated spatial
features added as auxiliary inputs). Our findings show that spatial cues (both
implicit and explicit) improve separation for mixtures with spatially separated
and nearby talkers. Furthermore, spatial cues enhance separation when spectral
cues are ambiguous, such as when voices are similar. Explicit spatial cues are
particularly beneficial when implicit spatial cues are weak. For instance,
single CI microphone recordings provide weaker implicit spatial cues than
bilateral CIs, but even single CIs benefit from explicit cues. These results
emphasize the importance of training models on real-world data to improve
generalizability in everyday listening scenarios. Additionally, our statistical
analyses offer insights into how data properties influence model performance,
supporting the development of efficient speech separation approaches for CIs
and other assistive devices in real-world settings.
|
2501.14615
|
Single-neuron deep generative model uncovers underlying physics of
neuronal activity in Ca imaging data
|
q-bio.NC cs.LG
|
Calcium imaging has become a powerful alternative to electrophysiology for
studying neuronal activity, offering spatial resolution and the ability to
measure large populations of neurons in a minimally invasive manner. This
technique has broad applications in neuroscience, neuroengineering, and
medicine, enabling researchers to explore the relationship between neuron
location and activity. Recent advancements in deep generative models (DGMs)
have facilitated the modeling of neuronal population dynamics, uncovering
latent representations that provide insights into behavior prediction and
neuronal variance. However, these models often rely on spike inference
algorithms and primarily focus on population-level dynamics, limiting their
applicability for single-neuron analyses. To address this gap, we propose a
novel framework for single-neuron representation learning using autoregressive
variational autoencoders (AVAEs). Our approach embeds individual neurons'
spatiotemporal signals into a reduced-dimensional space without the need for
spike inference algorithms. The AVAE outperforms traditional linear methods by
generating more informative and discriminative latent representations,
improving tasks such as visualization, clustering, and the understanding of
neuronal activity. Additionally, the reconstruction performance of the AVAE
outperforms the state of the art, demonstrating its ability to accurately
recover the original fluorescence signal from the learned representation. Using
realistic simulations, we show that our model captures underlying physical
properties and connectivity patterns, enabling it to distinguish between
different firing and connectivity types. These findings position the AVAE as a
versatile and powerful tool for advancing single-neuron analysis and lay the
groundwork for future integration of multimodal single-cell datasets in
neuroscience.
|
2501.14616
|
QuIP: Experimental design for expensive simulators with many Qualitative
factors via Integer Programming
|
stat.AP cs.RO
|
The need to explore and/or optimize expensive simulators with many
qualitative factors arises in broad scientific and engineering problems. Our
motivating application lies in path planning - the exploration of feasible
paths for navigation, which plays an important role in robotics, surgical
planning and assembly planning. Here, the feasibility of a path is evaluated
via expensive virtual experiments, and its parameter space is typically
discrete and high-dimensional. A carefully selected experimental design is thus
essential for timely decision-making. We propose here a novel framework, called
QuIP, for experimental design of Qualitative factors via Integer Programming
under a Gaussian process surrogate model with an exchangeable covariance
function. For initial design, we show that its asymptotic D-optimal design can
be formulated as a variant of the well-known assignment problem in operations
research, which can be efficiently solved to global optimality using
state-of-the-art integer programming solvers. For sequential design
(specifically, for active learning or black-box optimization), we show that its
design criterion can similarly be formulated as an assignment problem, thus
enabling efficient and reliable optimization with existing solvers. We then
demonstrate the effectiveness of QuIP over existing methods in a suite of path
planning experiments and an application to rover trajectory optimization.
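The reduction to an assignment problem can be illustrated generically: given a cost matrix, pick one column per row with each column used at most once, minimizing total cost. The brute-force solver below is a stand-in for the integer-programming solvers the paper uses, and the cost matrix is a toy, not a D-optimality criterion:

```python
import itertools

def solve_assignment(cost):
    """Brute-force the assignment problem for a small square cost matrix:
    enumerate permutations (row i -> column perm[i]) and keep the cheapest."""
    n = len(cost)
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))
```

Modern integer-programming solvers handle the same structure at scales where enumeration is hopeless, which is what makes the formulation practical for design criteria.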
|
2501.14617
|
Funzac at CoMeDi Shared Task: Modeling Annotator Disagreement from
Word-In-Context Perspectives
|
cs.CL
|
In this work, we evaluate annotator disagreement in Word-in-Context (WiC)
tasks, exploring the relationship between contextual meaning and disagreement as
part of the CoMeDi shared task competition. While prior studies have modeled
disagreement by analyzing annotator attributes with single-sentence inputs,
this shared task incorporates WiC to bridge the gap between sentence-level
semantic representation and annotator judgment variability. We describe three
different methods that we developed for the shared task, including a feature
enrichment approach that combines concatenation, element-wise differences,
products, and cosine similarity, Euclidean and Manhattan distances to extend
contextual embedding representations, a transformation by Adapter blocks to
obtain task-specific representations of contextual embeddings, and classifiers
of varying complexities, including ensembles. The comparison of our methods
demonstrates improved performance for methods that include enriched and
task-specific features. While the performance of our method falls short in
comparison to the best system in subtask 1 (OGWiC), it is competitive to the
official evaluation results in subtask 2 (DisWiC).
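The feature-enrichment step described above is straightforward to sketch in pure Python, with embedding vectors as lists (the function name and toy dimensions are assumptions):

```python
import math

def enrich(u, v):
    """Pairwise feature enrichment for two contextual embeddings: the
    concatenation, element-wise difference and product, plus cosine
    similarity and Euclidean/Manhattan distance scalars."""
    diff = [a - b for a, b in zip(u, v)]
    prod = [a * b for a, b in zip(u, v)]
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    cos = sum(prod) / (nu * nv) if nu and nv else 0.0
    euclid = math.sqrt(sum(d * d for d in diff))
    manhattan = sum(abs(d) for d in diff)
    return u + v + diff + prod + [cos, euclid, manhattan]
```

For d-dimensional inputs this yields a 4d + 3 feature vector, which is then fed to the downstream classifiers or Adapter blocks.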
|
2501.14620
|
Strong Converse Exponent for Remote Lossy Source Coding
|
cs.IT math.IT
|
Past works on remote lossy source coding studied the rate under average
distortion and the error exponent of excess distortion probability. In this
work, we look into how fast the excess distortion probability converges to 1 at
small rates, also known as exponential strong converse. We characterize its
exponent by establishing matched upper and lower bounds. From the exponent, we
also recover two previous results on lossy source coding and biometric
authentication.
|
2501.14622
|
ACT-JEPA: Joint-Embedding Predictive Architecture Improves Policy
Representation Learning
|
cs.LG cs.AI
|
Learning efficient representations for decision-making policies is a
challenge in imitation learning (IL). Current IL methods require expert
demonstrations, which are expensive to collect. Consequently, they often have
underdeveloped world models. Self-supervised learning (SSL) offers an
alternative by allowing models to learn from diverse, unlabeled data, including
failures. However, SSL methods often operate in raw input space, making them
inefficient. In this work, we propose ACT-JEPA, a novel architecture that
integrates IL and SSL to enhance policy representations. We train a policy to
predict (1) action sequences and (2) abstract observation sequences. The first
objective uses action chunking to improve action prediction and reduce
compounding errors. The second objective extends this idea of chunking by
predicting abstract observation sequences. We utilize Joint-Embedding
Predictive Architecture to predict in abstract representation space, allowing
the model to filter out irrelevant details, improve efficiency, and develop a
robust world model. Our experiments show that ACT-JEPA improves the quality of
representations by learning temporal environment dynamics. Additionally, the
model's ability to predict abstract observation sequences results in
representations that effectively generalize to action sequence prediction.
ACT-JEPA performs on par with established baselines across a range of
decision-making tasks.
|
2501.14625
|
Accelerated Preference Elicitation with LLM-Based Proxies
|
cs.GT cs.LG
|
Bidders in combinatorial auctions face significant challenges when describing
their preferences to an auctioneer. Classical work on preference elicitation
focuses on query-based techniques inspired by proper learning--often via
proxies that interface between bidders and an auction mechanism--to
incrementally learn bidder preferences as needed to compute efficient
allocations. Although such elicitation mechanisms enjoy theoretical query
efficiency, the amount of communication required may still be too cognitively
taxing in practice.
We propose a family of efficient LLM-based proxy designs for eliciting
preferences from bidders using natural language. Our proposed mechanism
combines LLM pipelines and DNF-proper-learning techniques to quickly
approximate preferences when communication is limited. To validate our
approach, we create a testing sandbox for elicitation mechanisms that
communicate in natural language. In our experiments, our most promising LLM
proxy design reaches approximately efficient outcomes with five times fewer
queries than classical proper-learning-based elicitation mechanisms.
|
2501.14630
|
Extracting Problem Structure with LLMs for Optimized SAT Local Search
|
cs.AI
|
Local search preprocessing makes Conflict-Driven Clause Learning (CDCL)
solvers faster by providing high-quality starting points, and modern SAT solvers
have incorporated this technique into their preprocessing steps. However, these
tools rely on basic strategies that miss the structural patterns in problems.
We present a method that applies Large Language Models (LLMs) to analyze
Python-based encoding code. This reveals hidden structural patterns in how
problems convert into SAT. Our method automatically generates specialized local
search algorithms that find these patterns and use them to create strong
initial assignments. This works for any problem instance from the same encoding
type. Our tests show encouraging results, achieving faster solving times
compared to baseline preprocessing systems.
|
2501.14633
|
Channel Independent Precoder for OFDM-based Systems over Fading Channels
|
cs.IT eess.SP math.IT
|
In this paper, we propose a channel-independent precoder for orthogonal
frequency division multiplexing (OFDM) systems over fading channels. The design
of the precoder is based on the information redistribution of the input
modulated symbols amongst the output precoded symbols. The proposed precoder
decreases the variance of the instantaneous noise power at the receiver
produced by the channel variability. The employment of an interleaver together
with a precoding matrix whose size does not depend on the number of data
carriers in an OFDM symbol allows different configurations of time-frequency
diversity, which can be easily adapted to the channel conditions. The precoder
is evaluated with a modified Zero Forcing (ZF) equalizer whose maximum gain is
constrained by means of a clipping factor. Thus, the clipping factor limits the
noise power transfer in the receiver deprecoding block in low SNR conditions.
|
2501.14634
|
Recommending Actionable Strategies: A Semantic Approach to Integrating
Analytical Frameworks with Decision Heuristics
|
cs.AI
|
We present a novel approach for recommending actionable strategies by
integrating strategic frameworks with decision heuristics through semantic
analysis. While strategy frameworks provide systematic models for assessment
and planning, and decision heuristics encode experiential knowledge, these
traditions have historically remained separate. Our methodology bridges this
gap using advanced natural language processing (NLP), demonstrated through
integrating frameworks like the 6C model with the Thirty-Six Stratagems. The
approach employs vector space representations and semantic similarity
calculations to map framework parameters to heuristic patterns, supported by a
computational architecture that combines deep semantic processing with
constrained use of Large Language Models. By processing both primary content
and secondary elements (diagrams, matrices) as complementary linguistic
representations, we demonstrate effectiveness through corporate strategy case
studies. The methodology generalizes to various analytical frameworks and
heuristic sets, culminating in a plug-and-play architecture for generating
recommender systems that enable cohesive integration of strategic frameworks
and decision heuristics into actionable guidance.
|
2501.14635
|
Optimal Transport Barycenter via Nonconvex-Concave Minimax Optimization
|
stat.ML cs.LG
|
The optimal transport barycenter (a.k.a. Wasserstein barycenter) is a
fundamental notion of averaging that extends from the Euclidean space to the
Wasserstein space of probability distributions. Computation of the
unregularized barycenter for discretized probability distributions on point
clouds is a challenging task when the domain dimension $d > 1$. Most practical
algorithms for approximating the barycenter problem are based on entropic
regularization. In this paper, we introduce a nearly linear time $O(m \log{m})$
and linear space complexity $O(m)$ primal-dual algorithm, the
Wasserstein-Descent $\dot{\mathbb{H}}^1$-Ascent (WDHA) algorithm, for computing
the exact barycenter when the input probability density functions are
discretized on an $m$-point grid. The key success of the WDHA algorithm hinges
on alternating between two different yet closely related Wasserstein and
Sobolev optimization geometries for the primal barycenter and dual Kantorovich
potential subproblems. Under reasonable assumptions, we establish the
convergence rate and iteration complexity of WDHA to its stationary point when
the step size is appropriately chosen. Superior computational efficacy,
scalability, and accuracy over the existing Sinkhorn-type algorithms are
demonstrated on high-resolution (e.g., $1024 \times 1024$ images) 2D synthetic
and real data.
|
2501.14636
|
A Paired Autoencoder Framework for Inverse Problems via Bayes Risk
Minimization
|
cs.LG cs.NA math.NA
|
In this work, we describe a new data-driven approach for inverse problems
that exploits technologies from machine learning, in particular autoencoder
network structures. We consider a paired autoencoder framework, where two
autoencoders are used to efficiently represent the input and target spaces
separately and optimal mappings are learned between latent spaces, thus
enabling forward and inverse surrogate mappings. We focus on interpretations
using Bayes risk and empirical Bayes risk minimization, and we provide various
theoretical results and connections to existing works on low-rank matrix
approximations. Similar to end-to-end approaches, our paired approach creates a
surrogate model for forward propagation and regularized inversion. However, our
approach outperforms existing approaches in scenarios where training data for
unsupervised learning are readily available but training pairs for supervised
learning are scarce. Furthermore, we show that cheaply computable evaluation
metrics are available through this framework and can be used to predict whether
the solution for a new sample is likely to be predicted well.
|
2501.14637
|
The Paradox of Intervention: Resilience in Adaptive Multi-Role
Coordination Networks
|
physics.soc-ph cs.SI
|
Complex adaptive networks exhibit remarkable resilience, driven by the
dynamic interplay of structure (interactions) and function (state). While
static-network analyses offer valuable insights, understanding how structure
and function co-evolve under external interventions is critical for explaining
system-level adaptation. Using a unique dataset of clandestine criminal
networks, we combine empirical observations with computational modeling to test
the impact of various interventions on network adaptation. Our analysis
examines how networks with specialized roles adapt and form emergent structures
to optimize cost-benefit trade-offs. We find that emergent sparsely connected
networks exhibit greater resilience, revealing a security-efficiency trade-off.
Notably, interventions can trigger a "criminal opacity amplification" effect,
where criminal activity increases despite reduced network visibility. While
node isolation fragments networks, it strengthens remaining active ties. In
contrast, deactivating nodes (analogous to social reintegration) can
unintentionally boost criminal coordination, increasing activity or
connectivity. Failed interventions often lead to temporary functional surges
before reverting to baseline. Surprisingly, stimulating connectivity
destabilizes networks. Effective interventions require precise calibration to
node roles, connection types, and external conditions. These findings challenge
conventional assumptions about connectivity and intervention efficacy in
complex adaptive systems across diverse domains.
|
2501.14641
|
Towards Scalable Topological Regularizers
|
cs.LG math.AT
|
Latent space matching, which consists of matching distributions of features
in latent space, is a crucial component for tasks such as adversarial attacks
and defenses, domain adaptation, and generative modelling. Metrics for
probability measures, such as Wasserstein and maximum mean discrepancy, are
commonly used to quantify the differences between such distributions. However,
these are often costly to compute, or do not appropriately take the geometric
and topological features of the distributions into consideration. Persistent
homology is a tool from topological data analysis which quantifies the
multi-scale topological structure of point clouds, and has recently been used
as a topological regularizer in learning tasks. However, computation costs
preclude larger scale computations, and discontinuities in the gradient lead to
unstable training behavior such as in adversarial tasks. We propose the use of
principal persistence measures, based on computing the persistent homology of a
large number of small subsamples, as a topological regularizer. We provide a
parallelized GPU implementation of this regularizer, and prove that gradients
are continuous for smooth densities. Furthermore, we demonstrate the efficacy
of this regularizer on shape matching, image generation, and semi-supervised
learning tasks, opening the door towards a scalable regularizer for topological
features.
|
2501.14644
|
Whisper D-SGD: Correlated Noise Across Agents for Differentially Private
Decentralized Learning
|
cs.LG cs.AI cs.CR cs.DC
|
Decentralized learning enables distributed agents to train a shared machine
learning model through local computation and peer-to-peer communication.
Although each agent retains its dataset locally, the communication of local
models can still expose private information to adversaries. To mitigate these
threats, local differential privacy (LDP) injects independent noise per agent,
but it suffers a larger utility gap than central differential privacy (CDP). We
introduce Whisper D-SGD, a novel covariance-based approach that generates
correlated privacy noise across agents, unifying several state-of-the-art
methods as special cases. By leveraging network topology and mixing weights,
Whisper D-SGD optimizes the noise covariance to achieve network-wide noise
cancellation. Experimental results show that Whisper D-SGD cancels more noise
than existing pairwise-correlation schemes, substantially narrowing the CDP-LDP
gap and improving model performance under the same privacy guarantees.
|