| id | title | categories | abstract |
|---|---|---|---|
2501.10879
|
A Benchmark of French ASR Systems Based on Error Severity
|
cs.CL
|
Automatic Speech Recognition (ASR) transcription errors are commonly assessed
using metrics that compare them with a reference transcription, such as Word
Error Rate (WER), which measures spelling deviations from the reference, or
semantic score-based metrics. However, these approaches often overlook what is
understandable to humans when interpreting transcription errors. To address
this limitation, a new evaluation is proposed that categorizes errors into four
levels of severity, further divided into subtypes, based on objective
linguistic criteria, contextual patterns, and the use of content words as the
unit of analysis. This metric is applied to a benchmark of 10 state-of-the-art
ASR systems for the French language, encompassing both HMM-based and end-to-end
models. Our findings reveal the strengths and weaknesses of each system,
identifying those that provide the most comfortable reading experience for
users.
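
For context, the spelling-based WER baseline that the proposed severity metric is contrasted with reduces to a word-level edit distance. A minimal sketch (standard WER only, not the severity categorization):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: (substitutions + deletions + insertions) / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / max(len(ref), 1)

print(wer("le chat dort", "le chien dort"))  # 0.333... (1 substitution / 3 words)
```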
|
2501.10884
|
Fixed Point Computation: Beating Brute Force with Smoothed Analysis
|
cs.GT cs.DS cs.LG
|
We propose a new algorithm that finds an $\varepsilon$-approximate fixed
point of a smooth function from the $n$-dimensional $\ell_2$ unit ball to
itself. We use the general framework of finding approximate solutions to a
variational inequality, a problem that subsumes fixed point computation and the
computation of a Nash Equilibrium. The algorithm's runtime is bounded by
$e^{O(n)}/\varepsilon$, under the smoothed-analysis framework. This is the
first known algorithm at this level of generality whose runtime is faster than
$(1/\varepsilon)^{O(n)}$, which is a time that suffices for an exhaustive
search. We complement this result with a lower bound of $e^{\Omega(n)}$ on the
query complexity for finding an $O(1)$-approximate fixed point on the unit
ball, which holds even in the smoothed-analysis model, yet without the
assumption that the function is smooth. Existing lower bounds are only known
for the hypercube, and adapting them to the ball does not give non-trivial
results even for finding $O(1/\sqrt{n})$-approximate fixed points.
|
2501.10885
|
CEReBrO: Compact Encoder for Representations of Brain Oscillations Using
Efficient Alternating Attention
|
cs.LG
|
Electroencephalography (EEG) is a crucial tool for studying brain activity.
Recently, self-supervised learning methods leveraging large unlabeled datasets
have emerged as a potential solution to the scarcity of widely available
annotated EEG data. However, current methods suffer from at least one of the
following limitations: i) sub-optimal EEG signal modeling, ii) model sizes in
the hundreds of millions of trainable parameters, and iii) reliance on private
datasets and/or inconsistent public benchmarks, hindering reproducibility. To
address these challenges, we introduce a Compact Encoder for Representations of
Brain Oscillations using alternating attention (CEReBrO), a new small EEG
foundation model. Our tokenization scheme represents EEG signals at a
per-channel patch granularity. We propose an alternating attention mechanism
that jointly models intra-channel temporal dynamics and inter-channel spatial
correlations, achieving 2x speed improvement with 6x less memory required
compared to standard self-attention. We present several model sizes ranging
from 3.6 million to 85 million parameters. Pre-trained on over 20,000 hours of
publicly available scalp EEG recordings with diverse channel configurations,
our models set new benchmarks in emotion detection and seizure detection tasks,
with competitive performance in anomaly classification and gait prediction.
This validates our models' effectiveness and efficiency.
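
The alternating-attention idea can be pictured as follows; this is an illustrative PyTorch sketch under an assumed (batch, channels, patches, dim) token layout, not the authors' implementation:

```python
import torch
import torch.nn as nn

class AlternatingAttention(nn.Module):
    """Alternate temporal (within-channel) and spatial (across-channel) attention.

    Assumed input shape: (batch, channels, patches, dim). Illustrative only.
    """
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, p, d = x.shape
        # Temporal attention: each channel attends over its own patches.
        t = x.reshape(b * c, p, d)
        t, _ = self.temporal(t, t, t)
        x = t.reshape(b, c, p, d)
        # Spatial attention: each time patch attends across channels.
        s = x.permute(0, 2, 1, 3).reshape(b * p, c, d)
        s, _ = self.spatial(s, s, s)
        return s.reshape(b, p, c, d).permute(0, 2, 1, 3)
```

Attending over p patches per channel and then c channels per patch costs O(p^2 + c^2) per token group rather than O((cp)^2), which is where the claimed speed and memory savings over full self-attention come from.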
|
2501.10891
|
OpenEarthMap-SAR: A Benchmark Synthetic Aperture Radar Dataset for
Global High-Resolution Land Cover Mapping
|
eess.IV cs.AI cs.CV eess.SP
|
High-resolution land cover mapping plays a crucial role in addressing a wide
range of global challenges, including urban planning, environmental monitoring,
disaster response, and sustainable development. However, creating accurate,
large-scale land cover datasets remains a significant challenge due to the
inherent complexities of geospatial data, such as diverse terrain, varying
sensor modalities, and atmospheric conditions. Synthetic Aperture Radar (SAR)
imagery, with its ability to penetrate clouds and capture data in all-weather,
day-and-night conditions, offers unique advantages for land cover mapping.
Despite these strengths, the lack of benchmark datasets tailored for SAR
imagery has limited the development of robust models specifically designed for
this data modality. To bridge this gap and facilitate advancements in SAR-based
geospatial analysis, we introduce OpenEarthMap-SAR, a benchmark SAR dataset
for global high-resolution land cover mapping. OpenEarthMap-SAR consists of 1.5
million segments of 5033 aerial and satellite images of size
1024$\times$1024 pixels, covering 35 regions from Japan, France, and the USA,
with partially manually annotated and fully pseudo-labeled 8-class land cover
labels at a ground sampling distance of 0.15--0.5 m. We evaluate the
performance of state-of-the-art methods for semantic segmentation and present
challenging problem settings suitable for further technical development. The
dataset also serves as the official dataset for IEEE GRSS Data Fusion Contest
Track I. The
dataset has been made publicly available at
https://zenodo.org/records/14622048.
|
2501.10893
|
Learn-by-interact: A Data-Centric Framework for Self-Adaptive Agents in
Realistic Environments
|
cs.LG cs.AI
|
Autonomous agents powered by large language models (LLMs) have the potential
to enhance human capabilities, assisting with digital tasks from sending emails
to performing data analysis. The abilities of existing LLMs at such tasks are
often hindered by the lack of high-quality agent data from the corresponding
environments they interact with. We propose Learn-by-interact, a data-centric
framework to adapt LLM agents to any given environment without human
annotations. Learn-by-interact synthesizes trajectories of agent-environment
interactions based on documentation, and constructs instructions by
summarizing or abstracting the interaction histories, a process called backward
construction. We assess the quality of our synthetic data by using them in both
training-based scenarios and training-free in-context learning (ICL), where we
craft innovative retrieval approaches optimized for agents. Extensive
experiments on SWE-bench, WebArena, OSWorld and Spider2-V spanning across
realistic coding, web, and desktop environments show the effectiveness of
Learn-by-interact in various downstream agentic tasks -- baseline results are
improved by up to 12.2\% for ICL with Claude-3.5 and 19.5\% for training with
Codestral-22B. We further demonstrate the critical role of backward
construction, which provides up to 14.0\% improvement for training. Our
ablation studies demonstrate the efficiency provided by our synthesized data in
ICL and the superiority of our retrieval pipeline over alternative approaches
like conventional retrieval-augmented generation (RAG). We expect that
Learn-by-interact will serve as a foundation for agent data synthesis as LLMs
are increasingly deployed in real-world environments.
|
2501.10895
|
Classical and Deep Reinforcement Learning Inventory Control Policies for
Pharmaceutical Supply Chains with Perishability and Non-Stationarity
|
cs.AI cs.LG math.OC
|
We study inventory control policies for pharmaceutical supply chains,
addressing challenges such as perishability, yield uncertainty, and
non-stationary demand, combined with batching constraints, lead times, and lost
sales. Collaborating with Bristol-Myers Squibb (BMS), we develop a realistic
case study incorporating these factors and benchmark three
policies--order-up-to (OUT), projected inventory level (PIL), and deep
reinforcement learning (DRL) using the proximal policy optimization (PPO)
algorithm--against a BMS baseline based on human expertise. We derive and
validate bounds-based procedures for optimizing OUT and PIL policy parameters
and propose a methodology for estimating projected inventory levels, which are
also integrated into the DRL policy with demand forecasts to improve
decision-making under non-stationarity. Compared to a human-driven policy,
which avoids lost sales through higher holding costs, all three implemented
policies achieve lower average costs but exhibit greater cost variability.
While PIL demonstrates robust and consistent performance, OUT struggles under
high lost sales costs, and PPO excels in complex and variable scenarios but
requires significant computational effort. The findings suggest that while DRL
shows potential, it does not outperform classical policies in all numerical
experiments, highlighting 1) the need to integrate diverse policies to manage
pharmaceutical challenges effectively, based on the current state-of-the-art,
and 2) that practical problems in this domain seem to lack a single policy
class that yields universally acceptable performance.
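
For intuition, once its order-up-to level S is fixed, the OUT policy reduces to a one-line replenishment rule; a minimal sketch that ignores the batching, perishability, and lead-time details handled in the paper:

```python
def out_order_quantity(on_hand: int, in_transit: int, S: int) -> int:
    """Order-up-to (OUT) policy: raise the inventory position back to level S."""
    inventory_position = on_hand + in_transit
    return max(0, S - inventory_position)

print(out_order_quantity(on_hand=40, in_transit=20, S=100))  # orders 40 units
```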
|
2501.10896
|
Robust Joint Message and State Transmission under Arbitrarily Varying
Jamming
|
cs.IT math.IT
|
Joint message and state transmission under arbitrarily varying jamming attack
is investigated. An inner bound of the robust capacity-distortion region is
provided, which includes the worst-case communication rate and the worst-case
estimation rate. The bound is optimal for the joint message and lossless state
communication.
|
2501.10897
|
Unfolding Tensors to Identify the Graph in Discrete Latent Bipartite
Graphical Models
|
math.ST cs.LG stat.TH
|
We use a tensor unfolding technique to prove a new identifiability result for
discrete bipartite graphical models, which have a bipartite graph between an
observed and a latent layer. This model family includes popular models such as
Noisy-Or Bayesian networks for medical diagnosis and Restricted Boltzmann
Machines in machine learning. These models are also building blocks for deep
generative models. Our result on identifying the graph structure enjoys the
following nice properties. First, our identifiability proof is constructive, in
which we innovatively unfold the population tensor under the model into
matrices and inspect the rank properties of the resulting matrices to uncover
the graph. This proof itself gives a population-level structure learning
algorithm that outputs both the number of latent variables and the bipartite
graph. Second, we allow various forms of nonlinear dependence among the
variables, unlike many continuous latent variable graphical models that rely on
linearity to show identifiability. Third, our identifiability condition is
interpretable, only requiring each latent variable to connect to at least two
"pure" observed variables in the bipartite graph. The new result not only
brings novel advances in algebraic statistics, but also has useful implications
for these models' trustworthy applications in scientific disciplines and
interpretable machine learning.
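
The mechanical core of the proof, unfolding a tensor into matrices and inspecting their ranks, can be illustrated in NumPy (a toy sketch, not the paper's full structure-learning algorithm):

```python
import numpy as np

def unfold(tensor: np.ndarray, mode: int) -> np.ndarray:
    """Mode-k unfolding: move axis `mode` first, flatten the rest into columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Toy 3-way probability tensor over three binary observed variables.
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
for mode in range(3):
    M = unfold(T, mode)
    print(mode, M.shape, np.linalg.matrix_rank(M))
```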
|
2501.10900
|
A Generative Security Application Engineering Curriculum
|
cs.CY cs.AI
|
Generative AI and large language models (LLMs) are transforming security by
automating many tasks previously performed manually. With such automation changing
the practice of security as we know it, it is imperative that we prepare future
students for the technology landscape they will ultimately face. Towards this
end, we describe an initial curriculum and course that attempts to show
students how to apply generative AI in order to solve problems in security. By
refocusing security education and training on aspects uniquely suited for
humans and showing students how to leverage automation for the rest, we believe
we can better align security education practices with generative AI as it
evolves.
|
2501.10901
|
ARD-VAE: A Statistical Formulation to Find the Relevant Latent
Dimensions of Variational Autoencoders
|
cs.LG
|
The variational autoencoder (VAE) is a popular deep latent-variable model
(DLVM) due to its simple yet effective formulation for modeling the data
distribution. Moreover, optimizing the VAE objective function is more
manageable than other DLVMs. The bottleneck dimension of the VAE is a crucial
design choice, and it has strong ramifications for the model's performance,
such as finding the hidden explanatory factors of a dataset using the
representations learned by the VAE. However, the size of the latent dimension
of the VAE is often treated as a hyperparameter estimated empirically through
trial and error. To this end, we propose a statistical formulation to discover
the relevant latent factors required for modeling a dataset. In this work, we
use a hierarchical prior in the latent space that estimates the variance of the
latent axes using the encoded data, which identifies the relevant latent
dimensions. For this, we replace the fixed prior in the VAE objective function
with a hierarchical prior, keeping the remainder of the formulation unchanged.
We call the proposed method the automatic relevancy detection in the
variational autoencoder (ARD-VAE). We demonstrate the efficacy of the ARD-VAE
on multiple benchmark datasets in finding the relevant latent dimensions and
their effect on different evaluation metrics, such as FID score and
disentanglement analysis.
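
A rough sketch of the central change, replacing the fixed N(0, I) prior with a hierarchical prior whose per-axis variances are estimated from the encoded data (the paper's exact estimator and objective may differ):

```python
import torch

def ard_kl(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL(q(z|x) || N(0, diag(tau))), with the per-axis prior variance tau
    estimated from the encoded batch (a simplified stand-in for the paper's
    hierarchical prior)."""
    tau = (mu.pow(2) + logvar.exp()).mean(dim=0).detach().clamp_min(1e-6)
    kl = 0.5 * (logvar.exp() / tau + mu.pow(2) / tau - 1.0 + tau.log() - logvar)
    # Axes whose tau collapses toward zero carry little information and are
    # candidates for pruning as irrelevant latent dimensions.
    return kl.sum(dim=1).mean()
```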
|
2501.10905
|
A Remote Sensing Image Change Detection Method Integrating Layer
Exchange and Channel-Spatial Differences
|
cs.CV
|
Change detection in remote sensing imagery is a critical technique for Earth
observation, primarily focusing on pixel-level segmentation of change regions
between bi-temporal images. The essence of pixel-level change detection lies in
determining whether corresponding pixels in bi-temporal images have changed. In
deep learning, the spatial and channel dimensions of feature maps represent
different information from the original images. In this study, we found that in
change detection tasks, difference information can be computed not only from
the spatial dimension of bi-temporal features but also from the channel
dimension. Therefore, we designed the Channel-Spatial Difference Weighting
(CSDW) module as an aggregation-distribution mechanism for bi-temporal features
in change detection. This module enhances the sensitivity of the change
detection model to difference features. Additionally, bi-temporal images share
the same geographic location and exhibit strong inter-image correlations. To
construct the correlation between bi-temporal images, we designed a decoding
structure based on the Layer-Exchange (LE) method to enhance the interaction of
bi-temporal features. Comprehensive experiments on the CLCD, PX-CLCD, LEVIR-CD,
and S2Looking datasets demonstrate that the proposed LENet model significantly
improves change detection performance. The code and pre-trained models will be
available at: https://github.com/dyzy41/lenet.
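
The Layer-Exchange idea of letting the two temporal branches interact can be sketched as a channel swap between bi-temporal feature maps (an illustrative reconstruction; the released code is authoritative):

```python
import torch

def layer_exchange(f1: torch.Tensor, f2: torch.Tensor, ratio: float = 0.5):
    """Swap the first `ratio` fraction of channels between bi-temporal features.

    f1, f2: (batch, channels, h, w) feature maps from the two acquisition dates.
    """
    k = int(f1.shape[1] * ratio)
    g1 = torch.cat([f2[:, :k], f1[:, k:]], dim=1)
    g2 = torch.cat([f1[:, :k], f2[:, k:]], dim=1)
    return g1, g2
```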
|
2501.10906
|
Explainable Adversarial Attacks on Coarse-to-Fine Classifiers
|
cs.CV cs.CR cs.LG
|
Traditional adversarial attacks typically aim to alter the predicted labels
of input images by generating perturbations that are imperceptible to the human
eye. However, these approaches often lack explainability. Moreover, most
existing work on adversarial attacks focuses on single-stage classifiers, but
multi-stage classifiers are largely unexplored. In this paper, we introduce
instance-based adversarial attacks for multi-stage classifiers, leveraging
Layer-wise Relevance Propagation (LRP), which assigns relevance scores to
pixels based on their influence on classification outcomes. Our approach
generates explainable adversarial perturbations by utilizing LRP to identify
and target key features critical for both coarse and fine-grained
classifications. Unlike conventional attacks, our method not only induces
misclassification but also enhances the interpretability of the model's
behavior across classification stages, as demonstrated by experimental results.
|
2501.10909
|
Fine-Grained Appropriate Reliance: Human-AI Collaboration with a
Multi-Step Transparent Decision Workflow for Complex Task Decomposition
|
cs.AI cs.HC
|
In recent years, the rapid development of AI systems has brought about the
benefits of intelligent services but also concerns about security and
reliability. By fostering appropriate user reliance on an AI system, both
complementary team performance and reduced human workload can be achieved.
Previous empirical studies have extensively analyzed the impact of factors
spanning task, system, and human behavior on user trust and appropriate
reliance in the context of one-step decision making. However, user reliance on
AI systems in tasks with complex semantics that require multi-step workflows
remains under-explored. Inspired by recent work on task decomposition with
large language models, we propose to investigate the impact of a novel
Multi-Step Transparent (MST) decision workflow on user reliance behaviors. We
conducted an empirical study (N = 233) of AI-assisted decision making in
composite fact-checking tasks (i.e., fact-checking tasks that entail multiple
sub-fact verification steps). Our findings demonstrate that human-AI
collaboration with an MST decision workflow can outperform one-step
collaboration in specific contexts (e.g., when advice from an AI system is
misleading). Further analysis of the appropriate reliance at fine-grained
levels indicates that an MST decision workflow can be effective when users
demonstrate a relatively high consideration of the intermediate steps. Our work
highlights that there is no one-size-fits-all decision workflow that can help
obtain optimal human-AI collaboration. Our insights help deepen the
understanding of the role of decision workflows in facilitating appropriate
reliance. We synthesize important implications for designing effective means to
facilitate appropriate reliance on AI systems in composite tasks, positioning
opportunities for the human-centered AI and broader HCI communities.
|
2501.10910
|
DeepIFSAC: Deep Imputation of Missing Values Using Feature and Sample
Attention within Contrastive Framework
|
cs.LG stat.ML
|
Missing values of varying patterns and rates in real-world tabular data pose
a significant challenge in developing reliable data-driven models. Existing
missing value imputation methods use statistical and traditional machine
learning techniques and are ineffective when the missing rate is high and the
values are not missing at random.
This paper explores row and column attention in tabular data as between-feature
and between-sample attention in a novel framework to reconstruct missing
values. The proposed method uses CutMix data augmentation within a
contrastive learning framework to reduce the uncertainty of missing value
estimation. The performance and generalizability of trained imputation models
are evaluated on set-aside test data folds with missing values. The proposed
framework outperforms nine state-of-the-art imputation methods across several
missing value types and rates (10\%-50\%) on a diverse selection of twelve
tabular data sets. We evaluate the quality of imputed data using real-world
electronic health records with missing values, demonstrating our proposed
framework's superiority to state-of-the-art statistical, machine learning, and
deep imputation methods. This paper highlights the heterogeneity of tabular
data sets to recommend imputation methods based on missing value types and data
characteristics.
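
A tabular analogue of CutMix, the augmentation used here to create corrupted views for contrastive learning, might look like this (a hypothetical sketch, not the paper's exact implementation):

```python
import numpy as np

def tabular_cutmix(X: np.ndarray, p: float = 0.3, seed: int = 0) -> np.ndarray:
    """Replace a random fraction p of each row's features with values
    taken from another randomly chosen row (CutMix for tables)."""
    rng = np.random.default_rng(seed)
    X_aug = X.copy()
    n, d = X.shape
    donors = rng.integers(0, n, size=n)   # donor row for each sample
    mask = rng.random((n, d)) < p         # which cells to overwrite
    X_aug[mask] = X[donors][mask]
    return X_aug
```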
|
2501.10913
|
Know "No" Better: A Data-Driven Approach for Enhancing Negation
Awareness in CLIP
|
cs.CV cs.CL
|
While CLIP has significantly advanced multimodal understanding by bridging
vision and language, the inability to grasp negation - such as failing to
differentiate concepts like "parking" from "no parking" - poses substantial
challenges. By analyzing the data used in the public CLIP model's pre-training,
we posit this limitation stems from a lack of negation-inclusive data. To
address this, we introduce data generation pipelines that employ a large
language model (LLM) and a multimodal LLM to produce negation-inclusive
captions. Fine-tuning CLIP with data generated from our pipelines, we develop
NegationCLIP, which enhances negation awareness while preserving generality.
Moreover, to enable a comprehensive evaluation of negation understanding, we
propose NegRefCOCOg, a benchmark tailored to test VLMs'
ability to interpret negation across diverse expressions and positions within a
sentence. Experiments on various CLIP architectures validate the effectiveness
of our data generation pipelines in enhancing CLIP's ability to perceive
negation accurately. Additionally, NegationCLIP's enhanced negation awareness
has practical applications across various multimodal tasks, demonstrated by
performance gains in text-to-image generation and referring image segmentation.
|
2501.10914
|
Green Video Camouflaged Object Detection
|
cs.CV
|
Camouflaged object detection (COD) aims to distinguish hidden objects
embedded in an environment that closely resembles them. Conventional
video-based COD (VCOD) methods explicitly extract motion cues or employ complex
deep learning networks to handle the temporal information, which is limited by
high complexity and unstable performance. In this work, we propose a green VCOD
method named GreenVCOD. Built upon a green ICOD method, GreenVCOD uses long-
and short-term temporal neighborhoods (TN) to capture joint spatial/temporal
context information for decision refinement. Experimental results show that
GreenVCOD offers competitive performance compared to state-of-the-art VCOD
benchmarks.
|
2501.10915
|
LegalGuardian: A Privacy-Preserving Framework for Secure Integration of
Large Language Models in Legal Practice
|
cs.CL cs.CR cs.IR
|
Large Language Models (LLMs) hold promise for advancing legal practice by
automating complex tasks and improving access to justice. However, their
adoption is limited by concerns over client confidentiality, especially when
lawyers include sensitive Personally Identifiable Information (PII) in prompts,
risking unauthorized data exposure. To mitigate this, we introduce
LegalGuardian, a lightweight, privacy-preserving framework tailored for lawyers
using LLM-based tools. LegalGuardian employs Named Entity Recognition (NER)
techniques and local LLMs to mask and unmask confidential PII within prompts,
safeguarding sensitive data before any external interaction. We detail its
development and assess its effectiveness using a synthetic prompt library in
immigration law scenarios. Comparing traditional NER models with a one-shot
prompted local LLM, we find that LegalGuardian achieves an F1-score of 93% with
GLiNER and 97% with Qwen2.5-14B in PII detection. Semantic similarity analysis
confirms that the framework maintains high fidelity in outputs, ensuring robust
utility of LLM-based tools. Our findings indicate that legal professionals can
harness advanced AI technologies without compromising client confidentiality or
the quality of legal documents.
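
The mask/unmask cycle at the core of such a framework can be illustrated with a toy placeholder map; entity detection is stubbed out here (the paper uses GLiNER or a one-shot prompted local LLM for that step):

```python
def mask_pii(prompt: str, entities: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace detected PII spans with placeholders; return the reverse map."""
    reverse = {}
    for i, (span, label) in enumerate(entities.items()):
        placeholder = f"[{label}_{i}]"
        prompt = prompt.replace(span, placeholder)
        reverse[placeholder] = span
    return prompt, reverse

def unmask_pii(response: str, reverse: dict[str, str]) -> str:
    """Restore the original PII spans in the external model's response."""
    for placeholder, span in reverse.items():
        response = response.replace(placeholder, span)
    return response

masked, rev = mask_pii("Draft a motion for Jane Doe.", {"Jane Doe": "PERSON"})
print(masked)                   # Draft a motion for [PERSON_0].
print(unmask_pii(masked, rev))  # Draft a motion for Jane Doe.
```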
|
2501.10917
|
Decomposing and Fusing Intra- and Inter-Sensor Spatio-Temporal Signal
for Multi-Sensor Wearable Human Activity Recognition
|
cs.CV cs.AI cs.HC
|
Wearable Human Activity Recognition (WHAR) is a prominent research area
within ubiquitous computing. Multi-sensor synchronous measurement has proven to
be more effective for WHAR than using a single sensor. However, existing WHAR
methods use shared convolutional kernels for indiscriminate temporal feature
extraction across each sensor variable, which fails to effectively capture
spatio-temporal relationships of intra-sensor and inter-sensor variables. We
propose the DecomposeWHAR model consisting of a decomposition phase and a
fusion phase to better model the relationships between modality variables. The
decomposition creates high-dimensional representations of each intra-sensor
variable through an improved depthwise separable convolution to capture local
temporal features while preserving their unique characteristics. The fusion
phase begins by capturing relationships between intra-sensor variables and
fusing their features at both the channel and variable levels. Long-range
temporal dependencies are modeled using the State Space Model (SSM), and later
cross-sensor interactions are dynamically captured through a self-attention
mechanism, highlighting inter-sensor spatial correlations. Our model
demonstrates superior performance on three widely used WHAR datasets,
significantly outperforming state-of-the-art models while maintaining
acceptable computational efficiency. Our codes and supplementary materials are
available at https://github.com/Anakin2555/DecomposeWHAR.
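
The depthwise separable convolution underlying the decomposition phase follows the standard factorization below (a generic PyTorch sketch, not the paper's improved variant):

```python
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    """Per-channel temporal convolution followed by a pointwise channel mix."""
    def __init__(self, channels: int, out_channels: int, kernel_size: int = 5):
        super().__init__()
        # groups=channels: each sensor variable is convolved independently.
        self.depthwise = nn.Conv1d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv1d(channels, out_channels, kernel_size=1)

    def forward(self, x):  # x: (batch, channels, time)
        return self.pointwise(self.depthwise(x))
```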
|
2501.10920
|
Data Enrichment Opportunities for Distribution Grid Cable Networks using
Variational Autoencoders
|
cs.LG cs.SY eess.SY
|
Electricity distribution cable networks suffer from incomplete and unbalanced
data, hindering the effectiveness of machine learning models for predictive
maintenance and reliability evaluation. Features such as the installation date
of the cables are frequently missing. To address data scarcity, this study
investigates the application of Variational Autoencoders (VAEs) for data
enrichment, synthetic data generation, imbalanced data handling, and outlier
detection. Based on a proof-of-concept case study for Denmark, targeting the
imputation of missing age information in cable network asset registers, the
analysis underlines the potential of generative models to support data-driven
maintenance. However, the study also highlights several areas for improvement,
including enhanced feature importance analysis, incorporating network
characteristics and external features, and handling biases in missing data.
Future initiatives should expand the application of VAEs by incorporating
semi-supervised learning, advanced sampling techniques, and additional
distribution grid elements, including low-voltage networks, into the analysis.
|
2501.10924
|
Adaptive Target Localization under Uncertainty using Multi-Agent Deep
Reinforcement Learning with Knowledge Transfer
|
cs.LG cs.AI cs.RO
|
Target localization is a critical task in sensitive applications, where
multiple sensing agents communicate and collaborate to identify the target
location based on sensor readings. Existing approaches investigated the use of
Multi-Agent Deep Reinforcement Learning (MADRL) to tackle target localization.
Nevertheless, these methods do not consider practical uncertainties, like false
alarms when the target does not exist or when it is unreachable due to
environmental complexities. To address these drawbacks, this work proposes a
novel MADRL-based method for target localization in uncertain environments. The
proposed MADRL method employs Proximal Policy Optimization to optimize the
decision-making of sensing agents, which is represented in the form of an
actor-critic structure using Convolutional Neural Networks. The observations of
the agents are designed in an optimized manner to capture essential information
in the environment, and a team-based reward function is proposed to produce
cooperative agents. The MADRL method covers three action dimensionalities that
control the agents' mobility to search the area for the target, detect its
existence, and determine its reachability. Using the concept of Transfer
Learning, a Deep Learning model builds on the knowledge from the MADRL model to
accurately estimate the target location if it is unreachable, resulting in
shared representations between the models for faster learning and lower
computational complexity. Collectively, the final combined model is capable of
searching for the target, determining its existence and reachability, and
estimating its location accurately. The proposed method is tested using a
radioactive target localization environment and benchmarked against existing
methods, showing its efficacy.
|
2501.10926
|
A Semantic Approach to Successive Interference Cancellation for Multiple
Access Networks
|
cs.IT math.IT
|
Differing from the conventional communication system paradigm that models
the information source as a sequence of (i.i.d. or stationary) random variables,
the semantic approach aims at extracting and sending the high-level features of
the content deeply contained in the source, thereby breaking the performance
limits set by statistical information theory. As a pioneering work in this
area, the deep learning-enabled semantic communication (DeepSC) constitutes a
novel algorithmic framework based on the transformer--which is a deep learning
tool widely used to process text numerically. The main goal of this work is to
extend the DeepSC approach from the point-to-point link to the multi-user
multiple access channel (MAC). The inter-user interference has long been
identified as the bottleneck of the MAC. In the classic information theory, the
successive interference cancellation (SIC) scheme is a common way to mitigate
interference and achieve the channel capacity. Our main contribution is to
incorporate the SIC scheme into the DeepSC. As opposed to the traditional SIC
that removes interference in the digital symbol domain, the proposed semantic
SIC works in the domain of the semantic word embedding vectors. Furthermore, to
enhance the training efficiency, we propose a pretraining scheme and a partial
retraining scheme that quickly adjust the neural network parameters when new
users are added to the MAC. We also modify the existing loss function to
facilitate training. Finally, we present numerical experiments to demonstrate
the advantage of the proposed semantic approach as compared to the existing
benchmark methods.
|
2501.10928
|
Generative Physical AI in Vision: A Survey
|
cs.CV cs.AI
|
Generative Artificial Intelligence (AI) has rapidly advanced the field of
computer vision by enabling machines to create and interpret visual data with
unprecedented sophistication. This transformation builds upon a foundation of
generative models to produce realistic images, videos, and 3D or 4D content.
Traditionally, generative models primarily focus on visual fidelity while often
neglecting the physical plausibility of generated content. This gap limits
their effectiveness in applications requiring adherence to real-world physical
laws, such as robotics, autonomous systems, and scientific simulations. As
generative AI evolves to increasingly integrate physical realism and dynamic
simulation, its potential to function as a "world simulator" expands, enabling
the modeling of interactions governed by physics and bridging the divide
between virtual and physical realities. This survey systematically reviews this
emerging field of physics-aware generative AI in computer vision, categorizing
methods based on how they incorporate physical knowledge, either through
explicit simulation or implicit learning. We analyze key paradigms, discuss
evaluation protocols, and identify future research directions. By offering a
comprehensive overview, this survey aims to guide future developments in
physically grounded generation for vision. The reviewed papers are summarized
at https://github.com/BestJunYu/Awesome-Physics-aware-Generation.
|
2501.10929
|
Issues with Neural Tangent Kernel Approach to Neural Networks
|
stat.ML cs.LG
|
Neural tangent kernels (NTKs) have been proposed to study the behavior of
trained neural networks from the perspective of Gaussian processes. An
important result in this body of work is the theorem of equivalence between a
trained neural network and kernel regression with the corresponding NTK. This
theorem allows for an interpretation of neural networks as special cases of
kernel regression. However, does this theorem of equivalence hold in practice?
In this paper, we revisit the derivation of the NTK rigorously and conduct
numerical experiments to evaluate this equivalence theorem. We observe that
adding a layer to a neural network and the corresponding updated NTK do not
yield matching changes in the predictor error. Furthermore, we observe that
kernel regression with a Gaussian process kernel in the literature that does
not account for neural network training produces prediction errors very close
to that of kernel regression with NTKs. These observations suggest the
equivalence theorem does not hold well in practice and put into question
whether neural tangent kernels adequately address the training process of
neural networks.
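
The empirical NTK under test is the Gram matrix of parameter gradients; a small PyTorch sketch for a scalar-output network:

```python
import torch

def empirical_ntk(model, x1, x2):
    """K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)> for a scalar-output model."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x):
        grads = torch.autograd.grad(model(x).sum(), params)
        return torch.cat([g.reshape(-1) for g in grads])

    return flat_grad(x1) @ flat_grad(x2)

net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
x_a, x_b = torch.randn(1, 3), torch.randn(1, 3)
print(empirical_ntk(net, x_a, x_b).item())
```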
|
2501.10933
|
BeST -- A Novel Source Selection Metric for Transfer Learning
|
cs.LG stat.ML
|
One of the most fundamental, and yet relatively less explored, goals in
transfer learning is the efficient means of selecting top candidates from a
large number of previously trained models (optimized for various "source"
tasks) that would perform the best for a new "target" task with a limited
amount of data. In this paper, we pursue this goal by developing a novel
task-similarity metric (BeST) and an associated method that consistently
performs well in identifying the most transferable source(s) for a given task.
In particular, our design employs an innovative quantization-level optimization
procedure in the context of classification tasks that yields a measure of
similarity between a source model and the given target data. The procedure uses
a concept similar to early stopping (usually implemented to train deep neural
networks (DNNs) to ensure generalization) to derive a function that
approximates the transfer learning mapping without training. The advantage of
our metric is that it can be quickly computed to identify the top candidate(s)
for a given target task before a computationally intensive transfer operation
(typically using DNNs) can be implemented between the selected source and the
target task. As such, our metric can provide significant computational savings
for transfer learning from a selection of a large number of possible source
models. Through extensive experimental evaluations, we establish that our
metric performs well over different datasets and varying numbers of data
samples.
|
2501.10935
|
TSVC: Tripartite Learning with Semantic Variation Consistency for Robust
Image-Text Retrieval
|
cs.CV cs.AI
|
Cross-modal retrieval maps data across different modalities via semantic
relevance. Existing approaches implicitly assume that data pairs are
well-aligned and ignore the widely existing annotation noise, i.e., noisy
correspondence (NC). Consequently, it inevitably causes performance
degradation. Despite attempts that employ the co-teaching paradigm with
identical architectures to provide distinct data perspectives, the differences
between these architectures stem primarily from random initialization.
Thus, the model becomes increasingly homogeneous along with the training
process. Consequently, the additional information brought by this paradigm is
severely limited. To resolve this problem, we introduce Tripartite
learning with Semantic Variation Consistency (TSVC) for robust image-text
retrieval. We design a tripartite cooperative learning mechanism comprising a
Coordinator, a Master, and an Assistant model. The Coordinator distributes
data, and the Assistant model supports the Master model's noisy label
prediction with diverse data. Moreover, we introduce a soft label estimation
method based on mutual information variation, which quantifies the noise in new
samples and assigns corresponding soft labels. We also present a new loss
function to enhance robustness and optimize training effectiveness. Extensive
experiments on three widely used datasets demonstrate that, even at increasing
noise ratios, TSVC exhibits significant advantages in retrieval accuracy and
maintains stable training performance.
|
2501.10937
|
Leveraging Chain of Thought towards Empathetic Spoken Dialogue without
Corresponding Question-Answering Data
|
cs.CL cs.SD eess.AS
|
Empathetic dialogue is crucial for natural human-computer interaction,
allowing the dialogue system to respond in a more personalized and emotionally
aware manner, improving user satisfaction and engagement. The emergence of
large language models (LLMs) has revolutionized dialogue generation by
harnessing their powerful capabilities and has shown potential in multimodal
domains. Many studies have integrated speech with text-based LLMs to take a
spoken question as input and output a text response. However, the lack of spoken
question-answering datasets that include speech style information for supervised
fine-tuning (SFT) limits the performance of these systems. As a result, while
these systems excel at understanding speech content, they often struggle to
generate empathetic responses. In response, we propose a novel approach that
circumvents the need for question-answering data, called Listen, Perceive, and
Express (LPE). Our method employs a two-stage training process, initially
guiding the LLM to listen to the content and perceive the emotional aspects of
speech. Subsequently, we utilize Chain-of-Thought (CoT) prompting to unlock the
model's potential for expressing empathetic responses based on listened spoken
content and perceived emotional cues. Experiments demonstrate the
effectiveness of the proposed method. To our knowledge, this is the first attempt
to leverage CoT for speech-based dialogue.
|
2501.10938
|
Blockchain-assisted Demonstration Cloning for Multi-Agent Deep
Reinforcement Learning
|
cs.LG cs.AI
|
Multi-Agent Deep Reinforcement Learning (MDRL) is a promising research area
in which agents learn complex behaviors in cooperative or competitive
environments. However, MDRL comes with several challenges that hinder its
usability, including sample efficiency, the curse of dimensionality, and
environment exploration. Recent works proposing Federated Reinforcement
Learning (FRL) to tackle these issues suffer from problems related to model
restrictions and maliciousness. Other proposals using reward shaping require
considerable engineering and could lead to local optima. In this paper, we
propose a novel Blockchain-assisted Multi-Expert Demonstration Cloning (MEDC)
framework for MDRL. The proposed method utilizes expert demonstrations in
guiding the learning of new MDRL agents, by suggesting exploration actions in
the environment. A model sharing framework on Blockchain is designed to allow
users to share their trained models, which can be allocated as expert models to
requesting users to aid in training MDRL systems. A Consortium Blockchain is
adopted to enable traceable and autonomous execution without the need for a
single trusted entity. Smart Contracts are designed to manage user and model
allocation, with models shared using IPFS. The proposed framework is tested on
several applications, and is benchmarked against existing methods in FRL,
Reward Shaping, and Imitation Learning-assisted RL. The results show the
outperformance of the proposed framework in terms of learning speed and
resiliency to faulty and malicious models.
|
2501.10940
|
Influence- and Interest-based Worker Recruitment in Crowdsourcing using
Online Social Networks
|
cs.SI
|
Worker recruitment remains a significant issue in Mobile Crowdsourcing
(MCS), where the aim is to recruit a group of workers that maximizes the
expected Quality of Service (QoS). Current recruitment systems assume that a
pre-defined pool of workers is available. However, this assumption is not
always true, especially in cold-start situations, where a new MCS task has just
been released. Additionally, studies show that up to 96\% of the available
candidates are usually not willing to perform the assigned tasks. To tackle
these issues, recent works use Online Social Networks (OSNs) and Influence
Maximization (IM) to advertise the desired MCS tasks through influencers,
aiming to build larger pools. However, these works suffer from several
limitations, such as 1) the lack of group-based selection methods when choosing
influencers, 2) the lack of a well-defined worker recruitment process following
IM, and 3) the non-dynamicity of the recruitment process, where the workers who
refuse to perform the task are not substituted. In this paper, an Influence-
and Interest-based Worker Recruitment System (IIWRS), using OSNs, is proposed.
The proposed system has two main components: 1) an MCS-, group-, and
interest-based IM approach, using a Genetic Algorithm, to select a set of
influencers from the network to advertise the MCS tasks, and 2) a dynamic
worker recruitment process which considers the social attributes of workers,
and is able to substitute those who decline to perform the assigned
tasks. Empirical studies are performed using real-life datasets, while
comparing IIWRS with existing benchmarks.
|
2501.10943
|
InsQABench: Benchmarking Chinese Insurance Domain Question Answering
with Large Language Models
|
cs.CL cs.AI
|
The application of large language models (LLMs) has achieved remarkable
success in various fields, but their effectiveness in specialized domains like
the Chinese insurance industry remains underexplored. The complexity of
insurance knowledge, encompassing specialized terminology and diverse data
types, poses significant challenges for both models and users. To address this,
we introduce InsQABench, a benchmark dataset for the Chinese insurance sector,
structured into three categories: Insurance Commonsense Knowledge, Insurance
Structured Database, and Insurance Unstructured Documents, reflecting
real-world insurance question-answering tasks. We also propose two methods,
SQL-ReAct and RAG-ReAct, to tackle challenges in structured and unstructured
data tasks. Evaluations show that while LLMs struggle with domain-specific
terminology and nuanced clause texts, fine-tuning on InsQABench significantly
improves performance. Our benchmark establishes a solid foundation for
advancing LLM applications in the insurance domain, with data and code
available at https://github.com/HaileyFamo/InsQABench.git.
|
2501.10945
|
Gradient-Based Multi-Objective Deep Learning: Algorithms, Theories,
Applications, and Beyond
|
cs.LG stat.ML
|
Multi-objective optimization (MOO) in deep learning aims to simultaneously
optimize multiple conflicting objectives, a challenge frequently encountered in
areas like multi-task learning and multi-criteria learning. Recent advancements
in gradient-based MOO methods have enabled the discovery of diverse types of
solutions, ranging from a single balanced solution to finite or even infinite
Pareto sets, tailored to user needs. These developments have broad applications
across domains such as reinforcement learning, computer vision, recommendation
systems, and large language models. This survey provides the first
comprehensive review of gradient-based MOO in deep learning, covering
algorithms, theories, and practical applications. By unifying various
approaches and identifying critical challenges, it serves as a foundational
resource for driving innovation in this evolving field. A comprehensive list of
MOO algorithms in deep learning is available at
\url{https://github.com/Baijiong-Lin/Awesome-Multi-Objective-Deep-Learning}.
|
2501.10950
|
Factor Graph-Based Active SLAM for Spacecraft Proximity Operations
|
cs.RO
|
We investigate a scenario where a chaser spacecraft or satellite equipped
with a monocular camera navigates in close proximity to a target spacecraft.
The satellite's primary objective is to construct a representation of the
operational environment and localize itself within it, utilizing the available
image data. We frame the joint task of state trajectory and map estimation as
an instance of smoothing-based simultaneous localization and mapping (SLAM),
where the underlying structure of the problem is represented as a factor graph.
Rather than considering estimation and planning as separate tasks, we propose
to control the camera observations to actively reduce the uncertainty of the
estimation variables, the spacecraft state, and the map landmarks. This is
accomplished by adopting an information-theoretic metric to reason about the
impact of candidate actions on the evolution of the belief state. Numerical
simulations indicate that the proposed method successfully captures the
interplay between planning and estimation, hence yielding reduced uncertainty
and higher accuracy when compared to commonly adopted passive sensing
strategies.
|
2501.10953
|
Channel Coding for Gaussian Channels with Mean and Variance Constraints
|
cs.IT math.IT
|
We consider channel coding for Gaussian channels with the recently introduced
mean and variance cost constraints. Through matching converse and achievability
bounds, we characterize the optimal first- and second-order performance. The
main technical contribution of this paper is an achievability scheme which uses
random codewords drawn from a mixture of three uniform distributions on
$(n-1)$-spheres of radii $R_1, R_2$ and $R_3$, where $R_i = O(\sqrt{n})$ and
$|R_i - R_j| = O(1)$. To analyze such a mixture distribution, we prove a lemma
giving a uniform $O(\log n)$ bound, which holds with high probability, on the
log ratio of the output distributions $Q_i^{cc}$ and $Q_j^{cc}$, where
$Q_i^{cc}$ is induced by a random channel input uniformly distributed on an
$(n-1)$-sphere of radius $R_i$. To facilitate the application of the usual
central limit theorem, we also give a uniform $O(\log n)$ bound, which holds
with high probability, on the log ratio of the output distributions $Q_i^{cc}$
and $Q^*_i$, where $Q_i^*$ is induced by a random channel input with i.i.d.
components.
|
2501.10956
|
Multimodal Techniques for Malware Classification
|
cs.CR cs.LG
|
The threat of malware is a serious concern for computer networks and systems,
highlighting the need for accurate classification techniques. In this research,
we experiment with multimodal machine learning approaches for malware
classification, based on the structured nature of the Windows Portable
Executable (PE) file format. Specifically, we train Support Vector Machine
(SVM), Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN)
models on features extracted from PE headers, we train these same models on
features extracted from the other sections of PE files, and train each model on
features extracted from the entire PE file. We then train SVM models on each of
the nine header-section combinations of these baseline models, using the
output layer probabilities of the component models as feature vectors. We
compare the baseline cases to these multimodal combinations. In our
experiments, we find that the best of the multimodal models outperforms the
best of the baseline cases, indicating that it can be advantageous to train
separate models on distinct parts of Windows PE files.
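
The multimodal combination is a stacking scheme: component models' output probabilities become the feature vector for a second-stage SVM. A minimal scikit-learn sketch with synthetic stand-in features (the paper's components are SVM, LSTM, and CNN models over PE-derived features):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_header = rng.normal(size=(200, 10))   # stand-in header features
X_body = rng.normal(size=(200, 20))     # stand-in section features
y = rng.integers(0, 2, size=200)        # benign / malware labels

# Component models, one per PE region (stand-ins for the SVM/LSTM/CNN baselines).
m_header = SVC(probability=True).fit(X_header, y)
m_body = SVC(probability=True).fit(X_body, y)

# Second-stage SVM trained on concatenated output probabilities.
# (A careful implementation would use held-out predictions to avoid leakage.)
stack = np.hstack([m_header.predict_proba(X_header), m_body.predict_proba(X_body)])
stacker = SVC().fit(stack, y)
print(stacker.score(stack, y))
```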
|
2501.10957
|
MARIO: A Mixed Annotation Framework For Polyp Segmentation
|
cs.CV cs.AI
|
Existing polyp segmentation models are limited by high labeling costs and the
small size of datasets. Additionally, vast polyp datasets remain underutilized
because these models typically rely on a single type of annotation. To address
this dilemma, we introduce MARIO, a mixed supervision model designed to
accommodate various annotation types, significantly expanding the range of
usable data. MARIO learns from underutilized datasets by incorporating five
forms of supervision: pixel-level, box-level, polygon-level, scribble-level, and
point-level. Each form of supervision is associated with a tailored loss that
effectively leverages the supervision labels while minimizing the noise. This
allows MARIO to move beyond the constraints of relying on a single annotation
type. Furthermore, MARIO primarily utilizes datasets with weak and cheap
annotations, reducing the dependence on large-scale, fully annotated ones.
Experimental results across five benchmark datasets demonstrate that MARIO
consistently outperforms existing methods, highlighting its efficacy in
balancing trade-offs between different forms of supervision and maximizing
polyp segmentation performance.
|
2501.10958
|
Rethinking Early-Fusion Strategies for Improved Multimodal Image
Segmentation
|
cs.CV
|
RGB and thermal image fusion has great potential to improve semantic
segmentation in low-illumination conditions. Existing methods
typically employ a two-branch encoder framework for multimodal feature
extraction and design complicated feature fusion strategies to achieve feature
extraction and fusion for multimodal semantic segmentation. However, these
methods require massive parameter updates and computational effort during the
feature extraction and fusion. To address this issue, we propose a novel
multimodal fusion network (EFNet) based on an early fusion strategy and a
simple but effective feature clustering for training efficient RGB-T semantic
segmentation. In addition, we also propose a lightweight and efficient
multi-scale feature aggregation decoder based on Euclidean distance. We
validate the effectiveness of our method on different datasets and outperform
previous state-of-the-art methods with fewer parameters and less computation.
|
2501.10963
|
Open FinLLM Leaderboard: Towards Financial AI Readiness
|
cs.CE
|
Financial large language models (FinLLMs) with multimodal capabilities are
envisioned to revolutionize applications across business, finance, accounting,
and auditing. However, real-world adoption requires robust benchmarks of
FinLLMs' and agents' performance. Maintaining an open leaderboard of models is
crucial for encouraging innovative adoption and improving model effectiveness.
In collaboration with the Linux Foundation and Hugging Face, we create an open
FinLLM leaderboard, which serves as an open platform for assessing and
comparing LLMs' performance on a wide spectrum of financial tasks. By
democratizing access to advanced AI tools and financial knowledge, a chatbot
or agent may enhance the analytical capabilities of the general public to a
professional level within a few months of usage. This open leaderboard welcomes
contributions from academia, open-source community, industry, and stakeholders.
In particular, we encourage contributions of new datasets, tasks, and models
for continual update. Through fostering a collaborative and open ecosystem, we
seek to ensure the long-term sustainability and relevance of LLMs and agents as
they evolve with the financial sector's needs.
|
2501.10966
|
DC-PCN: Point Cloud Completion Network with Dual-Codebook Guided
Quantization
|
cs.CV cs.AI
|
Point cloud completion aims to reconstruct complete 3D shapes from partial 3D
point clouds. With advancements in deep learning techniques, various methods
for point cloud completion have been developed. Despite achieving encouraging
results, a significant issue remains: these methods often overlook the
variability in point clouds sampled from a single 3D object surface. This
variability can lead to ambiguity and hinder the achievement of more precise
completion results. Therefore, in this study, we introduce a novel point cloud
completion network, namely Dual-Codebook Point Completion Network (DC-PCN),
following an encoder-decoder pipeline. The primary objective of DC-PCN is to
formulate a singular representation of sampled point clouds originating from
the same 3D surface. DC-PCN introduces a dual-codebook design to quantize
point-cloud representations from a multilevel perspective. It consists of an
encoder-codebook and a decoder-codebook, designed to capture distinct point
cloud patterns at shallow and deep levels. Additionally, to enhance the
information flow between these two codebooks, we devise an information exchange
mechanism. This approach ensures that crucial features and patterns from both
shallow and deep levels are effectively utilized for completion. Extensive
experiments on the PCN, ShapeNet\_Part, and ShapeNet34 datasets demonstrate the
state-of-the-art performance of our method.
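
Each codebook quantizes features by nearest-neighbor lookup; a minimal sketch of that operation (gradient handling, e.g. a straight-through estimator, is omitted):

```python
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each feature vector in z (n, d) to its nearest codebook entry (k, d)."""
    idx = torch.cdist(z, codebook).argmin(dim=1)
    return codebook[idx]

codebook = torch.randn(512, 64)     # k entries of dimension d
z = torch.randn(128, 64)            # encoder features
print(quantize(z, codebook).shape)  # torch.Size([128, 64])
```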
|
2501.10967
|
Advancing General Multimodal Capability of Vision-language Models with
Pyramid-descent Visual Position Encoding
|
cs.CV cs.AI cs.CL
|
Vision-language Models (VLMs) have shown remarkable capabilities in advancing
general artificial intelligence, yet the irrational encoding of visual
positions continues to inhibit the models' comprehensive perception
performance across different levels of granularity. In this work, we propose
Pyramid-descent Visual Position Encoding (PyPE), a novel approach designed to
enhance the perception of visual tokens within VLMs. By assigning visual
position indexes from the periphery to the center and expanding the central
receptive field incrementally, PyPE addresses the limitations of traditional
raster-scan methods and mitigates the long-term decay effects induced by Rotary
Position Embedding (RoPE). Our method reduces the relative distance between
interrelated visual elements and instruction tokens, promoting a more rational
allocation of attention weights, allowing for a multi-granularity perception
of visual elements, and countering the over-reliance on anchor tokens. Extensive
experimental evaluations demonstrate that PyPE consistently improves the
general capabilities of VLMs across various sizes. Code is available at
https://github.com/SakuraTroyChen/PyPE.
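
Assigning position indexes from the periphery toward the center can be illustrated on a square token grid (a toy reconstruction; see the released code for the actual scheme):

```python
import numpy as np

def periphery_to_center_index(n: int) -> np.ndarray:
    """Ring index for an n x n grid: 0 on the border, growing toward the center."""
    i, j = np.indices((n, n))
    return np.minimum.reduce([i, j, n - 1 - i, n - 1 - j])

print(periphery_to_center_index(5))
# [[0 0 0 0 0]
#  [0 1 1 1 0]
#  [0 1 2 1 0]
#  [0 1 1 1 0]
#  [0 0 0 0 0]]
```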
|
2501.10969
|
AI Based Font Pair Suggestion Modelling For Graphic Design
|
cs.CV cs.CL
|
One of the key challenges of AI generated designs in Microsoft Designer is
selecting the most contextually relevant and novel fonts for the design
suggestions. Previous efforts involved manually mapping design intent to fonts.
Though high quality, this method does not scale to a large number of
fonts (3000+) and the numerous user intents in graphic design. In this work we
create font visual embeddings, a font stroke width algorithm, a font category
to font mapping dataset, an LLM-based category utilization description and a
lightweight, low latency knowledge-distilled mini language model (Mini LM V2)
to recommend multiple pairs of contextual heading and subheading fonts for
beautiful and intuitive designs. We also utilize a weighted scoring mechanism,
nearest neighbor approach and stratified sampling to rank the font pairs and
bring novelty to the predictions.
|
2501.10970
|
The Alternative Annotator Test for LLM-as-a-Judge: How to Statistically
Justify Replacing Human Annotators with LLMs
|
cs.CL cs.AI cs.HC
|
The "LLM-as-a-judge" paradigm employs Large Language Models (LLMs) as
annotators and evaluators in tasks traditionally performed by humans. LLM
annotations are widely used, not only in NLP research but also in fields like
medicine, psychology, and social science. Despite their role in shaping study
results and insights, there is no standard or rigorous procedure to determine
whether LLMs can replace human annotators. In this paper, we propose a novel
statistical procedure -- the Alternative Annotator Test (alt-test) -- that
requires only a modest subset of annotated examples to justify using LLM
annotations. Additionally, we introduce a versatile and interpretable measure
for comparing LLM judges. To demonstrate our procedure, we curated a diverse
collection of ten datasets, consisting of language and vision-language tasks,
and conducted experiments with six LLMs and four prompting techniques. Our
results show that LLMs can sometimes replace humans, with closed-source LLMs
(such as GPT-4o) outperforming open-source LLMs, and that prompting techniques
yield judges of varying quality. We hope this study encourages more rigorous
and reliable practices.
|
2501.10974
|
Sequential Change Detection for Learning in Piecewise Stationary Bandit
Environments
|
cs.IT cs.SY eess.SY math.IT stat.OT
|
A finite-horizon variant of the quickest change detection problem is
investigated, which is motivated by a change detection problem that arises in
piecewise stationary bandits. The goal is to minimize the \emph{latency}, which
is the smallest threshold such that the probability that the detection delay
exceeds the threshold is below a desired low level, while controlling the false
alarm probability to a desired low level. When the pre- and post-change
distributions are unknown, two tests are proposed as candidate solutions. These
tests are shown to attain order optimality in terms of the horizon.
Furthermore, the growth in their latencies with respect to the false alarm
probability and late detection probability satisfies a property that is
desirable in regret analysis for piecewise stationary bandits. Numerical
results are provided to validate the theoretical performance results.
|
2501.10977
|
SMARTe-VR: Student Monitoring and Adaptive Response Technology for
e-learning in Virtual Reality
|
cs.HC cs.CV
|
This work introduces SMARTe-VR, a platform for student monitoring in an
immersive virtual reality environment designed for online education. SMARTe-VR
aims to gather data for adaptive learning, focusing on facial biometrics
and learning metadata. The platform allows instructors to create tailored
learning sessions with video lectures, featuring an interface with an Auto QA
system to evaluate understanding, interaction tools (e.g., textbook
highlighting and lecture tagging), and real-time feedback. Additionally, we
release a dataset containing 5 research challenges with data from 10 users in
VR-based TOEIC sessions. This dataset, spanning over 25 hours, includes facial
features, learning metadata, 450 responses, question difficulty levels, concept
tags, and understanding labels. Alongside the database, we present preliminary
experiments using Item Response Theory models, adapted for understanding
detection using facial features. Two architectures were explored: a Temporal
Convolutional Network for local features and a Multilayer Perceptron for global
features.
|
2501.10979
|
Control LLM: Controlled Evolution for Intelligence Retention in LLM
|
cs.LG
|
Large Language Models (LLMs) demand significant computational resources,
making it essential to enhance their capabilities without retraining from
scratch. A key challenge in this domain is \textit{catastrophic forgetting}
(CF), which hampers performance during Continuous Pre-training (CPT) and
Continuous Supervised Fine-Tuning (CSFT). We propose \textbf{Control LLM}, a
novel approach that leverages parallel pre-trained and expanded transformer
blocks, aligning their hidden states through interpolation strategies. This
method effectively preserves performance on existing tasks while seamlessly
integrating new knowledge.
Extensive experiments demonstrate the effectiveness of Control LLM in both
CPT and CSFT. On Llama3.1-8B-Instruct, it achieves significant improvements in
mathematical reasoning ($+14.4\%$ on Math-Hard) and coding performance ($+10\%$
on MBPP-PLUS). On Llama3.1-8B, it enhances multilingual capabilities ($+10.6\%$
on C-Eval, $+6.8\%$ on CMMLU, and $+30.2\%$ on CMMLU-0shot-CoT). It surpasses
existing methods and achieves SOTA among open-source models tuned from the same
base model, using substantially less data and compute. Crucially, these gains
are realized while preserving strong original capabilities, with minimal
degradation ($<4.3\%$ on MMLU) compared to $>35\%$ in open-source Math
and Coding models. This approach has been successfully deployed in LinkedIn's
GenAI-powered job seeker and Ads unit products.
To support further research, we release the training and evaluation code
(https://github.com/linkedin/ControlLLM) along with models trained on public
datasets (https://huggingface.co/ControlLLM) to the community.
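A minimal sketch of the hidden-state interpolation idea, assuming a frozen
pre-trained transformer block running in parallel with a trainable expanded
block; the fixed blending weight alpha and the module interfaces are
illustrative, not the paper's exact strategy.

import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, pretrained_block, expanded_block, alpha=0.5):
        super().__init__()
        self.pretrained = pretrained_block
        for p in self.pretrained.parameters():
            p.requires_grad = False      # retain original capabilities
        self.expanded = expanded_block   # learns the new knowledge
        self.alpha = alpha

    def forward(self, h):
        h_old = self.pretrained(h)
        h_new = self.expanded(h)
        # Interpolation keeps the new representations aligned with the old
        # ones, mitigating catastrophic forgetting during CPT/CSFT.
        return self.alpha * h_old + (1 - self.alpha) * h_new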
|
2501.10980
|
An analysis of the combination of feature selection and machine learning
methods for an accurate and timely detection of lung cancer
|
cs.LG
|
One of the deadliest cancers, lung cancer necessitates an early and precise
diagnosis: patients whose cancer is identified early have a much better chance
of recovery. This review looks at how to diagnose
lung cancer using sophisticated machine learning techniques like Random Forest
(RF) and Support Vector Machine (SVM). The Chi-squared test is one feature
selection strategy that has been successfully applied to find related features
and enhance model performance. The findings demonstrate that these techniques
can improve detection efficiency and accuracy while also assisting in runtime
reduction. This study produces recommendations for further research as well as
ideas to enhance diagnostic techniques. In order to improve healthcare and
create automated methods for detecting lung cancer, this research is a critical
first step.
|
2501.10984
|
Self-CephaloNet: A Two-stage Novel Framework using Operational Neural
Network for Cephalometric Analysis
|
cs.CV math.OC
|
Cephalometric analysis is essential for the diagnosis and treatment planning
of orthodontics. In lateral cephalograms, however, the manual detection of
anatomical landmarks is a time-consuming procedure. Deep learning solutions
hold the potential to address the time constraints associated with certain
tasks; however, concerns regarding their performance have been observed. To
address this critical issue, we proposed an end-to-end cascaded deep learning
framework (Self-CephaloNet) for the task, which demonstrated benchmark
performance over the ISBI 2015 dataset in predicting 19 dental landmarks. Due
to their adaptive nodal capabilities, Self-ONNs (self-operational neural
networks) demonstrate superior learning performance for complex feature spaces
over conventional convolutional neural networks. To leverage this attribute, we
introduced a novel self-bottleneck into the HRNetV2 (High Resolution Network)
backbone, which has exhibited benchmark performance on the ISBI 2015 dataset
for the dental landmark detection task. Our first-stage results surpassed
previous studies, showcasing the efficacy of our singular end-to-end deep
learning model, which achieved a remarkable 70.95% success rate in detecting
cephalometric landmarks within a 2mm range for the Test1 and Test2 datasets.
Moreover, the second stage significantly improved overall performance, yielding
an impressive 82.25% average success rate for the datasets above within the
same 2mm distance. Furthermore, external validation was conducted using the PKU
cephalogram dataset. Our model demonstrated a commendable success rate of
75.95% within the 2mm range.
|
2501.10985
|
GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models
|
cs.LG cs.CR
|
Graph neural networks (GNNs) have exhibited superior performance in various
classification tasks on graph-structured data. However, they encounter the
potential vulnerability from the link stealing attacks, which can infer the
presence of a link between two nodes via measuring the similarity of its
incident nodes' prediction vectors produced by a GNN model. Such attacks pose
severe security and privacy threats to the training graph used in GNN models.
In this work, we propose a novel solution, called Graph Link Disguise (GRID),
to defend against link stealing attacks with the formal guarantee of GNN model
utility for retaining prediction accuracy. The key idea of GRID is to add
carefully crafted noises to the nodes' prediction vectors for disguising
adjacent nodes as n-hop indirect neighboring nodes. We take into account the
graph topology and add noises only to a subset of nodes (called core nodes)
covering all links, which prevents the noises from canceling each other out and
has the further advantages of reducing both the distortion loss and the
computation cost. Our crafted noises can ensure 1) the noisy prediction vectors of any two
adjacent nodes have their similarity level like that of two non-adjacent nodes
and 2) the model prediction is unchanged to ensure zero utility loss. Extensive
experiments on five datasets are conducted to show the effectiveness of our
proposed GRID solution against different representative link-stealing attacks
under transductive settings and inductive settings respectively, as well as two
influence-based attacks. Meanwhile, it achieves a much better privacy-utility
trade-off than existing methods when extended to GNNs.
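A toy sketch of the disguising step, under the simplifying assumption that noise
is accepted only when it leaves the predicted class unchanged; the noise scale
and the choice of core nodes are placeholders for the paper's crafted
construction.

import numpy as np

def disguise_predictions(probs, core_nodes, scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    noisy = probs.copy()
    for v in core_nodes:
        cand = probs[v] + rng.normal(0.0, scale, size=probs.shape[1])
        cand = np.clip(cand, 1e-6, None)
        cand /= cand.sum()                      # keep it a distribution
        if np.argmax(cand) == np.argmax(probs[v]):
            noisy[v] = cand                     # label unchanged: zero utility loss
    return noisy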
|
2501.10990
|
Societal citations undermine the function of the science reward system
|
cs.DL cs.SI physics.soc-ph
|
Citations in the scientific literature system do not simply reflect
relationships between knowledge but are influenced by non-objective and
societal factors. Citation bias, irresponsible citation, and citation
manipulation are widespread and have become a serious and growing problem.
However, it has been difficult to assess the consequences of mixing societal
factors into the literature system because there was no observable literature
system unmixed with societal factors for comparison. In this paper, we
construct a mathematical theorem network, representing a logic-based and
objective knowledge system, to address this problem. By comparing the
mathematical theorem network and the scientific citation networks, we find that
these two types of networks are significantly different in their structure and
function. In particular, the reward function in citation networks is impaired:
The scientific citation network fails to provide more recognition for more
disruptive results, while the mathematical theorem network does. We
develop a network generation model that can create two types of
links (logical and societal) to account for these
differences. The model parameter $q$, which we call the human influence factor,
can control the number of societal links and thus regulate the degree of mixing
of societal factors in the networks. Under this design, the model successfully
reproduces the differences among real networks. These results suggest that the
presence of societal factors undermines the function of the scientific reward
system. To improve the status quo, we advocate for reforming the reference list
format in papers, urging journals to require authors to separately disclose
logical references and social references.
|
2501.10991
|
Front Hair Styling Robot System Using Path Planning for Root-Centric
Strand Adjustment
|
cs.RO
|
Hair styling is a crucial aspect of personal grooming, significantly
influenced by the appearance of front hair. While brushing is commonly used
both to detangle hair and for styling purposes, existing research primarily
focuses on robotic systems for detangling hair, with limited exploration into
robotic hair styling. This research presents a novel robotic system designed to
automatically adjust front hairstyles, with an emphasis on path planning for
root-centric strand adjustment. The system utilizes images to compare the
current hair state with the desired target state through an orientation map of
hair strands. By concentrating on the differences in hair orientation and
specifically targeting adjustments at the root of each strand, the system
performs detailed styling tasks. The path planning approach ensures effective
alignment of the hairstyle with the target, and a closed-loop mechanism refines
these adjustments to accurately evolve the hairstyle towards the desired
outcome. Experimental results demonstrate that the proposed system achieves a
high degree of similarity and consistency in front hair styling, showing
promising results for automated, precise hairstyle adjustments.
|
2501.11002
|
pMixFed: Efficient Personalized Federated Learning through Adaptive
Layer-Wise Mixup
|
cs.LG cs.DC
|
Traditional Federated Learning (FL) methods encounter significant challenges
when dealing with heterogeneous data and providing personalized solutions for
non-IID scenarios. Personalized Federated Learning (PFL) approaches aim to
address these issues by balancing generalization and personalization, often
through parameter decoupling or partial models that freeze some neural network
layers for personalization while aggregating other layers globally. However,
existing methods still face challenges of global-local model discrepancy,
client drift, and catastrophic forgetting, which degrade model accuracy. To
overcome these limitations, we propose pMixFed, a dynamic, layer-wise PFL
approach that integrates mixup between shared global and personalized local
models. Our method introduces an adaptive strategy for partitioning between
personalized and shared layers, a gradual transition of personalization degree
to enhance local client adaptation, improved generalization across clients, and
a novel aggregation mechanism to mitigate catastrophic forgetting. Extensive
experiments demonstrate that pMixFed outperforms state-of-the-art PFL methods,
showing faster model training, increased robustness, and improved handling of
data heterogeneity under different heterogeneous settings.
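A minimal sketch of layer-wise mixup between the shared global model and a
client's personalized model; the per-layer coefficient schedule (more global
weight in shallow layers, more personalization in deep ones) is an illustrative
assumption, and both models are assumed to share an architecture.

import torch

@torch.no_grad()
def layerwise_mixup(global_model, local_model, depth_to_lam):
    # depth_to_lam maps a parameter name to a mixing coefficient in [0, 1].
    for (name, g), (_, l) in zip(global_model.named_parameters(),
                                 local_model.named_parameters()):
        lam = depth_to_lam(name)          # e.g. 0.9 for early layers
        l.copy_(lam * g + (1 - lam) * l)  # mix into the client model in place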
|
2501.11003
|
Building low-resource African language corpora: A case study of
Kidawida, Kalenjin and Dholuo
|
cs.CL
|
Natural Language Processing is a crucial frontier in artificial intelligence,
with broad applications in many areas, including public health, agriculture,
education, and commerce. However, due to the lack of substantial linguistic
resources, many African languages remain underrepresented in this digital
transformation. This paper presents a case study on the development of
linguistic corpora for three under-resourced Kenyan languages, Kidaw'ida,
Kalenjin, and Dholuo, with the aim of advancing natural language processing and
linguistic research in African communities. Our project, which lasted one year,
employed a selective crowd-sourcing methodology to collect text and speech data
from native speakers of these languages. Data collection involved (1) recording
conversations and translation of the resulting text into Kiswahili, thereby
creating parallel corpora, and (2) reading and recording written texts to
generate speech corpora. We made these resources freely accessible via
open-research platforms, namely Zenodo for the parallel text corpora and
Mozilla Common Voice for the speech datasets, thus facilitating ongoing
contributions and access for developers to train models and develop Natural
Language Processing applications. The project demonstrates how grassroots
efforts in corpus building can support the inclusion of African languages in
artificial intelligence innovations. In addition to filling resource gaps,
these corpora are vital in promoting linguistic diversity and empowering local
communities by enabling Natural Language Processing applications tailored to
their needs. As African countries like Kenya increasingly embrace digital
transformation, developing indigenous language resources becomes essential for
inclusive growth. We encourage continued collaboration from native speakers and
developers to expand and utilize these corpora.
|
2501.11006
|
GREEN-CODE: Optimizing Energy Efficiency in Large Language Models for
Code Generation
|
cs.DC cs.AI cs.PF cs.SE
|
Large Language Models (LLMs) are becoming integral to daily life, showcasing
their vast potential across various Natural Language Processing (NLP) tasks.
Beyond NLP, LLMs are increasingly used in software development tasks, such as
code completion, modification, bug fixing, and code translation. Software
engineers widely use tools like GitHub Copilot and Amazon Q, streamlining
workflows and automating tasks with high accuracy. While the resource and
energy intensity of LLM training is often highlighted, inference can be even
more resource-intensive over time, as it is a continuous process with a high
number of invocations. Therefore, developing resource-efficient alternatives
for LLM inference is crucial for sustainability. This work proposes GREEN-CODE,
a framework for energy-aware code generation in LLMs. GREEN-CODE performs
dynamic early exit during LLM inference. We train a Reinforcement Learning (RL)
agent that learns to balance the trade-offs between accuracy, latency, and
energy consumption. Our approach is evaluated on two open-source LLMs, Llama
3.2 3B and OPT 2.7B, using the JavaCorpus and PY150 datasets. Results show that
our method reduces energy consumption by 23-50% on average for code
generation tasks without significantly affecting accuracy.
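The paper learns the exit decision with an RL agent; the sketch below
substitutes a simple per-layer entropy threshold to illustrate what dynamic
early exit during decoding looks like. The shared output head and the threshold
value are assumptions.

import torch

def early_exit_logits(hidden_states, lm_head, entropy_threshold=2.0):
    # hidden_states: per-layer hidden states for the current token position.
    for layer_idx, h in enumerate(hidden_states):
        logits = lm_head(h)               # shared head reused at every exit
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
        if entropy.item() < entropy_threshold:
            return logits, layer_idx      # confident: skip remaining layers
    return logits, len(hidden_states) - 1 # fall through to the final layer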
|
2501.11007
|
HFGCN:Hypergraph Fusion Graph Convolutional Networks for Skeleton-Based
Action Recognition
|
cs.CV cs.LG
|
In recent years, action recognition has received much attention and wide
application due to its important role in video understanding. Most research on
action recognition has focused on improving performance via various deep
learning methods rather than on the classification of skeleton points.
Topological modeling between skeleton points and body parts has seldom been
considered. Although some studies have used a data-driven approach to
classify the topology of the skeleton point, the nature of the skeleton point
in terms of kinematics has not been taken into consideration. Therefore, in
this paper, we draw on the theory of kinematics to adapt the topological
relations of the skeleton point and propose a topological relation
classification based on body parts and distance from the core of the body. To
synthesize these topological relations for action recognition, we propose a
novel Hypergraph Fusion Graph Convolutional Network (HFGCN). In particular, the
proposed model is able to focus on the human skeleton points and the different
body parts simultaneously, and thus construct the topology, which noticeably
improves recognition accuracy. We use a hypergraph to represent the
categorical relationships of these skeleton points and incorporate the
hypergraph into a graph convolution network to model the higher-order
relationships among the skeleton points and enhance the feature representation
of the network. In addition, our proposed hypergraph attention module and
hypergraph graph convolution module optimize topology modeling in temporal and
channel dimensions, respectively, to further enhance the feature representation
of the network. We conducted extensive experiments on three widely used
datasets. The results validate that our proposed method can achieve the best
performance when compared with the state-of-the-art skeleton-based methods.
|
2501.11009
|
Efficient Reconciliation of Continuous Variable Quantum Key Distribution
with Multiplicatively Repeated Non-Binary LDPC Codes
|
quant-ph cs.IT math.IT
|
Continuous variable quantum key distribution bears the promise of simple
quantum key distribution directly compatible with commercial off-the-shelf
equipment. However, for a long time its performance was hindered by the absence
of good classical postprocessing capable of distilling secret keys in the noisy
regime. Advanced coding solutions in the past years have partially addressed
this problem enabling record transmission distances of up to 165 km, and 206 km
over ultra-low loss fiber. In this paper, we show that a very simple coding
solution with a single code is sufficient to extract keys at all noise levels.
This solution has performance competitive with prior results for all levels of
noise, and we show that non-zero keys can be distilled up to a record distance
of 192 km assuming the standard loss of a single-mode optical fiber, and 240 km
over ultra-low loss fibers. Low-rate codes are constructed using
multiplicatively repeated non-binary low-density parity-check codes over a
finite field of characteristic two. This construction only makes use of a
(2,k)-regular non-binary low-density parity-check code as mother code, such
that code design is in fact not required, thus trivializing the code
construction procedure. The construction is also inherently rate-adaptive,
thereby allowing codes of any rate to be created easily. Rate-adaptive codes are of
special interest for the efficient reconciliation of errors over time or
arbitrary varying channels, as is the case with quantum key distribution. In
short, these codes are highly efficient when reconciling errors over a very
noisy communication channel, and perform well even for short block-length
codes. Finally, the proposed solution is known to be easily amenable to
hardware implementations, thus addressing also the requirements for practical
reconciliation in continuous variable quantum key distribution.
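A sketch of multiplicative repetition over GF(2^q): each mother-codeword symbol
is repeated t times, with each copy multiplied by a random non-zero field
element, lowering the rate from R to R/t. It relies on the third-party galois
package, and the field size and parameters are illustrative.

import numpy as np
import galois

GF = galois.GF(2**4)                      # a field of characteristic two

def multiplicative_repeat(codeword, t, seed=0):
    rng = np.random.default_rng(seed)
    # One random non-zero multiplier per copy of each symbol.
    mults = GF(rng.integers(1, GF.order, size=(t, len(codeword))))
    return (mults * GF(codeword)).reshape(-1)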
|
2501.11012
|
GenAI Content Detection Task 1: English and Multilingual
Machine-Generated Text Detection: AI vs. Human
|
cs.CL
|
We present the GenAI Content Detection Task~1 -- a shared task on binary
machine generated text detection, conducted as a part of the GenAI workshop at
COLING 2025. The task consists of two subtasks: Monolingual (English) and
Multilingual. The shared task attracted many participants: 36 teams made
official submissions to the Monolingual subtask during the test phase and 26
teams to the Multilingual subtask. We provide a comprehensive overview of the data,
a summary of the results -- including system rankings and performance scores --
detailed descriptions of the participating systems, and an in-depth analysis of
submissions.
https://github.com/mbzuai-nlp/COLING-2025-Workshop-on-MGT-Detection-Task1
|
2501.11014
|
Transfer Learning Strategies for Pathological Foundation Models: A
Systematic Evaluation in Brain Tumor Classification
|
eess.IV cs.CV
|
Foundation models pretrained on large-scale pathology datasets have shown
promising results across various diagnostic tasks. Here, we present a
systematic evaluation of transfer learning strategies for brain tumor
classification using these models. We analyzed 252 cases comprising five major
tumor types: glioblastoma, astrocytoma, oligodendroglioma, primary central
nervous system lymphoma, and metastatic tumors. Comparing state-of-the-art
foundation models with conventional approaches, we found that foundation models
demonstrated robust classification performance with as few as 10 patches per
case, challenging the traditional assumption that extensive per-case image
sampling is necessary. Furthermore, our evaluation revealed that simple
transfer learning strategies like linear probing were sufficient, while
fine-tuning often degraded model performance. These findings suggest a paradigm
shift from extensive data collection to efficient utilization of pretrained
features, providing practical implications for implementing AI-assisted
diagnosis in clinical pathology.
|
2501.11015
|
Wireless Control over Edge Networks: Joint User Association and
Communication-Computation Co-Design
|
cs.IT math.IT
|
This paper studies a wireless networked control system with multiple base
stations (BSs) cooperatively coordinating the wireless control of a number of
subsystems each consisting of a plant, a sensor, and an actuator. In this
system, each sensor first offloads the sensing data to its associated BS, which
then employs mobile edge computing (MEC) to process the data and sends the
command signals back to the actuator for remote control. We consider the
time-division-multiple-access (TDMA) service protocol among different BSs to
facilitate the cascaded communication and computation process, in which
different BSs implement the uplink data collection and downlink command
broadcasting over orthogonal time slots. We also employ the massive
multiple-input multiple-output (MIMO) at BSs, based on which each BS serves its
associated sensors or actuators over the same time-frequency resources via
spatial multiplexing. Under this setup, we jointly design the association
between BSs and sensors/actuators as well as the joint communication and
computation resource allocation, with the objective of minimizing the
closed-loop control latency of the multiple subsystems while ensuring their
control stability. The optimization takes into account the transmission
uncertainty caused by both the hyper-reliable and low-latency communications
(HRLLC) and the inter-user interference, as well as the communication and
computation resource constraints at distributed nodes. To solve the challenging
non-convex joint optimization problem, we develop an efficient algorithm by
employing the techniques of alternating optimization and successive convex
approximation (SCA). Numerical results show that the proposed joint
BS-sensor/actuator association and resource allocation design significantly
outperforms other heuristic schemes and frequency-division-multiple-access
(FDMA) counterpart.
|
2501.11020
|
Car-GS: Addressing Reflective and Transparent Surface Challenges in 3D
Car Reconstruction
|
cs.CV
|
3D car modeling is crucial for applications in autonomous driving systems,
virtual and augmented reality, and gaming. However, due to the distinctive
properties of cars, such as highly reflective and transparent surface
materials, existing methods often struggle to achieve accurate 3D car
reconstruction. To address these limitations, we propose Car-GS, a novel
approach designed to mitigate the effects of specular highlights and the
coupling of RGB and geometry in 3D geometric and shading reconstruction (3DGS).
Our method incorporates three key innovations: First, we introduce
view-dependent Gaussian primitives to effectively model surface reflections.
Second, we identify the limitations of using a shared opacity parameter for
both image rendering and geometric attributes when modeling transparent
objects. To overcome this, we assign a learnable geometry-specific opacity to
each 2D Gaussian primitive, dedicated solely to rendering depth and normals.
Third, we observe that reconstruction errors are most prominent when the camera
view is nearly orthogonal to glass surfaces. To address this issue, we develop
a quality-aware supervision module that adaptively leverages normal priors from
a pre-trained large-scale normal model. Experimental results demonstrate that
Car-GS achieves precise reconstruction of car surfaces and significantly
outperforms prior methods. The project page is available at
https://lcc815.github.io/Car-GS.
|
2501.11023
|
Investigating the Impact of Language-Adaptive Fine-Tuning on Sentiment
Analysis in Hausa Language Using AfriBERTa
|
cs.CL
|
Sentiment analysis (SA) plays a vital role in Natural Language Processing
(NLP) by identifying sentiments expressed in text. Although significant
advances have been made in SA for widely spoken languages, low-resource
languages such as Hausa face unique challenges, primarily due to a lack of
digital resources. This study investigates the effectiveness of
Language-Adaptive Fine-Tuning (LAFT) to improve SA performance in Hausa. We
first curate a diverse, unlabeled corpus to expand the model's linguistic
capabilities, followed by applying LAFT to adapt AfriBERTa specifically to the
nuances of the Hausa language. The adapted model is then fine-tuned on the
labeled NaijaSenti sentiment dataset to evaluate its performance. Our findings
demonstrate that LAFT gives modest improvements, which may be attributed to the
use of formal Hausa text rather than informal social media data. Nevertheless,
the pre-trained AfriBERTa model significantly outperformed models not
specifically trained on Hausa, highlighting the importance of using pre-trained
models in low-resource contexts. This research emphasizes the necessity for
diverse data sources to advance NLP applications for low-resource African
languages. We published the code and the dataset to encourage further research
and facilitate reproducibility in low-resource NLP here:
https://github.com/Sani-Abdullahi-Sani/Natural-Language-Processing/blob/main/Sentiment%20Analysis%20for%20Low%20Resource%20African%20Languages
|
2501.11024
|
Laplacian Eigenvector Centrality
|
cs.SI cs.GT physics.soc-ph
|
Networks significantly influence social, economic, and organizational
outcomes, with centrality measures serving as crucial tools to capture the
importance of individual nodes. This paper introduces Laplacian Eigenvector
Centrality (LEC), a novel framework for network analysis based on spectral
graph theory and the eigendecomposition of the Laplacian matrix. A distinctive
feature of LEC is its adjustable parameter, the LEC order, which enables
researchers to control and assess the scope of centrality measurement using the
Laplacian spectrum. Using random graph models, LEC demonstrates robustness and
scalability across diverse network structures. We connect LEC to equilibrium
responses to external shocks in an economic model, showing how LEC quantifies
agents' roles in attenuating shocks and facilitating coordinated responses
through quadratic optimization. Finally, we apply LEC to the study of
microfinance diffusion, illustrating how it complements classical centrality
measures, such as eigenvector and Katz-Bonacich centralities, by capturing
distinctive aspects of node positions within the network.
|
2501.11030
|
Tracking Mouse from Incomplete Body-Part Observations and Deep-Learned
Deformable-Mouse Model Motion-Track Constraint for Behavior Analysis
|
cs.CV
|
Tracking mouse body parts in video is often incomplete due to occlusions,
which impedes subsequent action and behavior analysis. In this
conceptual work, videos from several perspectives are integrated via global
exterior camera orientation; body part positions are estimated by 3D
triangulation and bundle adjustment. Consistency of overall 3D track
reconstruction is achieved by introduction of a 3D mouse model, deep-learned
body part movements, and global motion-track smoothness constraint. The
resulting 3D body and body part track estimates are substantially more complete
than the original single-frame-based body part detection, therefore allowing
improved animal behavior analysis.
|
2501.11031
|
AdaptiveLog: An Adaptive Log Analysis Framework with the Collaboration
of Large and Small Language Model
|
cs.SE cs.AI cs.CL
|
Automated log analysis is crucial to ensure high availability and reliability
of complex systems. The advent of LLMs in NLP has ushered in a new era of
language model-driven automated log analysis, garnering significant interest.
Within this field, two primary paradigms based on language models for log
analysis have become prominent. Small Language Models (SLMs) follow the
pre-train and fine-tune paradigm, focusing on the specific log analysis task
through fine-tuning on supervised datasets. On the other hand, LLMs, following
the in-context learning paradigm, analyze logs from a few examples provided in
prompt contexts, without updating parameters. Despite their respective
strengths, we notice that SLMs are more cost-effective but less powerful,
whereas LLMs with large parameters are highly powerful but expensive and
inefficient. To trade off between the performance and inference costs of both
models in automated log analysis, this paper introduces an adaptive log
analysis framework known as AdaptiveLog, which effectively reduces the costs
associated with LLM while ensuring superior results. This framework
collaborates an LLM and a small language model, strategically allocating the
LLM to tackle complex logs while delegating simpler logs to the SLM.
Specifically, to efficiently query the LLM, we propose an adaptive selection
strategy based on the uncertainty estimation of the SLM, where the LLM is
invoked only when the SLM is uncertain. In addition, to enhance the reasoning
ability of the LLM in log analysis tasks, we propose a novel prompt strategy by
retrieving similar error-prone cases as the reference, enabling the model to
leverage past error experiences and learn solutions from these cases. Extensive
experiments demonstrate that AdaptiveLog achieves state-of-the-art results
across different tasks, elevating the overall accuracy of log analysis while
maintaining cost efficiency.
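A compact sketch of the adaptive routing idea: query the expensive LLM only when
the small model is uncertain. Plain softmax entropy stands in for the paper's
uncertainty estimation, and the slm/llm interfaces are assumed.

import math

def analyze_log(log, slm, llm, entropy_threshold=0.5):
    probs = slm.predict_proba(log)        # cheap small model goes first
    entropy = -sum(p * math.log(p + 1e-9) for p in probs)
    if entropy < entropy_threshold:       # SLM is confident enough
        return max(range(len(probs)), key=probs.__getitem__)
    # Uncertain: escalate to the LLM, e.g. with retrieved error-prone
    # cases included in the prompt as the paper proposes.
    return llm.analyze(log)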
|
2501.11034
|
Generative Retrieval for Book search
|
cs.IR
|
In book search, relevant book information should be returned in response to a
query. Books contain complex, multi-faceted information such as metadata,
outlines, and main text, where the outline provides hierarchical information
between chapters and sections. Generative retrieval (GR) is a new retrieval
paradigm that consolidates corpus information into a single model to generate
identifiers of documents that are relevant to a given query. How can GR be
applied to book search? Directly applying GR to book search is a challenge due
to the unique characteristics of book search: The model needs to retain the
complex, multi-faceted information of the book, which increases the demand for
labeled data. Splitting book information and treating it as a collection of
separate segments for learning might result in a loss of hierarchical
information. We propose an effective Generative retrieval framework for Book
Search (GBS) that features two main components: data augmentation and
outline-oriented book encoding. For data augmentation, GBS constructs multiple
query-book pairs for training; it constructs multiple book identifiers based on
the outline and various forms of book content, and it simulates real book
retrieval scenarios with varied pseudo-queries. This includes coverage-promoting book
identifier augmentation, allowing the model to learn to index effectively, and
diversity-enhanced query augmentation, allowing the model to learn to retrieve
effectively. Outline-oriented book encoding improves length extrapolation
through bi-level positional encoding and retentive attention mechanisms to
maintain context over long sequences. Experiments on a proprietary Baidu
dataset demonstrate that GBS outperforms strong baselines, achieving a 9.8\%
improvement in MRR@20 over the state-of-the-art RIPOR method...
|
2501.11035
|
From Arabic Text to Puzzles: LLM-Driven Development of Arabic
Educational Crosswords
|
cs.CL
|
We present an Arabic crossword puzzle generator that works from a given text
and utilizes advanced language models such as GPT-4-Turbo, GPT-3.5-Turbo and
Llama3-8B-Instruct. Developed specifically for educational purposes, this
innovative generator leverages a meticulously compiled dataset named
Arabic-Clue-Instruct with over 50,000 entries encompassing text, answers,
clues, and categories. This dataset is intricately designed to aid in the
generation of pertinent clues linked to specific texts and keywords within
defined categories. This project addresses the scarcity of advanced educational
tools tailored for the Arabic language, promoting enhanced language learning
and cognitive development. By providing a culturally and linguistically
relevant tool, our objective is to make learning more engaging and effective
through gamification and interactivity. Integrating state-of-the-art artificial
intelligence with contemporary learning methodologies, this tool can generate
crossword puzzles from any given educational text, thereby facilitating an
interactive and enjoyable learning experience. This tool not only advances
educational paradigms but also sets a new standard in interactive and cognitive
learning technologies. The model and dataset are publicly available.
|
2501.11036
|
LF-Steering: Latent Feature Activation Steering for Enhancing Semantic
Consistency in Large Language Models
|
cs.CL
|
Large Language Models (LLMs) often generate inconsistent responses when
prompted with semantically equivalent paraphrased inputs. Recently, activation
steering, a technique that modulates LLMs' behaviours by adjusting their latent
representations during inference time, has been explored to improve the
semantic consistency of LLMs. However, these methods typically operate at the
model component level, such as layer hidden states or attention head outputs.
They face a challenge due to the ``polysemanticity issue'', where the model
components of LLMs typically encode multiple entangled features, making precise
steering difficult. To address this challenge, we drill down to feature-level
representations and propose LF-Steering, a novel activation steering approach
to precisely identify latent feature representations responsible for semantic
inconsistency. More specifically, our method maps the hidden states of the
relevant transformer layer into a sparsely activated, high-dimensional feature
space based on a sparse autoencoder (SAE), ensuring model steering based on
decoupled feature representations with minimal interference. Comprehensive
experiments on NLU and NLG datasets demonstrate the effectiveness of our method
in enhancing semantic consistency, resulting in significant performance gains
for various NLU and NLG tasks.
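A minimal sketch of SAE-based steering, assuming the sparse autoencoder and the
indices of consistency-related features are given: encode the hidden state into
the feature space, nudge only the target features, and decode back.

import torch

def steer_hidden_state(h, sae_enc, sae_dec, feature_ids, delta=2.0):
    # h: hidden state of the relevant transformer layer.
    f = torch.relu(sae_enc(h))            # sparse, high-dimensional features
    f[feature_ids] += delta               # steer only the decoupled features
    return sae_dec(f)                     # map back to the residual stream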
|
2501.11039
|
Beyond Any-Shot Adaptation: Predicting Optimization Outcome for
Robustness Gains without Extra Pay
|
cs.LG
|
The foundation model enables general-purpose problem-solving and enjoys
desirable rapid adaptation due to its adopted cross-task generalization
paradigms, e.g., pretraining, meta-training, and finetuning. Recent advances in
these paradigms show the crucial role of challenging tasks' prioritized
sampling in enhancing adaptation robustness. However, ranking task difficulties
requires evaluating massive numbers of task queries, making it computation- and
annotation-intensive and typically unaffordable in practice. This work underscores
the criticality of both adaptation robustness and learning efficiency,
especially in scenarios where tasks are risky or costly to evaluate, e.g.,
policy evaluations in Markov decision processes (MDPs) or inference with large
models. To this end, we present Model Predictive Task Sampling (MPTS) to
establish connections between the task space and adaptation risk landscape to
form a theoretical guideline in robust active task sampling. MPTS characterizes
the task episodic information with a generative model and directly predicts
task-specific adaptation risk values from posterior inference. The developed
risk learner can amortize expensive evaluation and provably approximately rank
task difficulties in the pursuit of task robust adaptation. MPTS can be
seamlessly integrated into zero-shot, few-shot, and many-shot learning
paradigms. Extensive experiments exhibit the superiority of the proposed
framework, remarkably increasing task adaptation robustness while retaining
learning efficiency in contrast to existing
state-of-the-art (SOTA) methods. The code is available at the project site
https://github.com/thu-rllab/MPTS.
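A simplified sketch of the predictive sampling loop: fit a cheap surrogate from
task descriptors to observed adaptation risk, then rank unseen tasks by
predicted risk instead of evaluating them. The GP surrogate and the uncertainty
bonus are illustrative choices, not the paper's generative risk learner.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def select_risky_tasks(seen_tasks, seen_risks, candidate_tasks, k=8):
    risk_learner = GaussianProcessRegressor().fit(seen_tasks, seen_risks)
    mean, std = risk_learner.predict(candidate_tasks, return_std=True)
    scores = mean + 0.1 * std             # prefer hard and uncertain tasks
    return np.argsort(-scores)[:k]        # indices of tasks to sample next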
|
2501.11041
|
Enhancing Semantic Consistency of Large Language Models through Model
Editing: An Interpretability-Oriented Approach
|
cs.CL
|
A Large Language Model (LLM) tends to generate inconsistent and sometimes
contradictory outputs when presented with a prompt that has equivalent
semantics but is expressed differently from the original prompt. To achieve
semantic consistency of an LLM, one of the key approaches is to finetune the
model with prompt-output pairs with semantically equivalent meanings. Despite
its effectiveness, a data-driven finetuning method incurs substantial
computation costs in data preparation and model optimization. In this regime,
an LLM is treated as a ``black box'', restricting our ability to gain deeper
insights into its internal mechanism. In this paper, we are motivated to
enhance the semantic consistency of LLMs through a more interpretable method,
namely model editing. We first identify the model components
(i.e., attention heads) that have a key impact on the semantic consistency of
an LLM. We subsequently inject biases into the output of these model components
along the semantic-consistency activation direction. It is noteworthy that
these modifications are cost-effective, without reliance on mass manipulations
of the original model parameters. Through comprehensive experiments on the
constructed NLU and open-source NLG datasets, our method demonstrates
significant improvements in the semantic consistency and task performance of
LLMs. Additionally, our method exhibits promising generalization capabilities
by performing well on tasks beyond the primary tasks.
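A sketch of the editing step, assuming a Llama-style module layout and a
precomputed semantic-consistency direction: a forward hook adds a bias along
that direction to the outputs of the selected attention heads, without touching
the original weights.

import torch

def add_consistency_bias(model, layer, head_ids, direction, strength=1.0):
    d_head = direction.shape[-1]
    def hook(module, inputs, output):
        out = output[0] if isinstance(output, tuple) else output
        b, s, _ = out.shape               # (batch, seq, n_heads * d_head)
        out = out.view(b, s, -1, d_head)
        out[:, :, head_ids] += strength * direction
        out = out.view(b, s, -1)
        return (out, *output[1:]) if isinstance(output, tuple) else out
    # The module path below assumes a Llama-style architecture.
    return model.model.layers[layer].self_attn.register_forward_hook(hook)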
|
2501.11043
|
BF-STVSR: B-Splines and Fourier-Best Friends for High Fidelity
Spatial-Temporal Video Super-Resolution
|
cs.CV cs.AI
|
Enhancing low-resolution, low-frame-rate videos to high-resolution,
high-frame-rate quality is essential for a seamless user experience, motivating
advancements in Continuous Spatial-Temporal Video Super Resolution (C-STVSR).
While prior methods employ Implicit Neural Representation (INR) for continuous
encoding, they often struggle to capture the complexity of video data, relying
on simple coordinate concatenation and pre-trained optical flow network for
motion representation. Interestingly, we find that adding position encoding,
contrary to common observations, does not improve performance and can even degrade it.
This issue becomes particularly pronounced when combined with pre-trained
optical flow networks, which can limit the model's flexibility. To address
these issues, we propose BF-STVSR, a C-STVSR framework with two key modules
tailored to better represent spatial and temporal characteristics of video: 1)
B-spline Mapper for smooth temporal interpolation, and 2) Fourier Mapper for
capturing dominant spatial frequencies. Our approach achieves state-of-the-art
PSNR and SSIM performance, showing enhanced spatial details and natural
temporal consistency.
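As an illustration of the second module, a Fourier feature mapper of the usual
random-frequency form is sketched below; the frequency count and scale are
assumptions, and the paper's exact mapper may differ.

import torch

class FourierMapper(torch.nn.Module):
    def __init__(self, in_dim=2, n_freqs=128, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_freqs) * scale)

    def forward(self, coords):            # coords: (..., in_dim) in [0, 1]
        proj = 2 * torch.pi * coords @ self.B
        # sin/cos features let an MLP represent high spatial frequencies.
        return torch.cat([proj.sin(), proj.cos()], dim=-1)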
|
2501.11053
|
Learning with Open-world Noisy Data via Class-independent Margin in Dual
Representation Space
|
cs.LG cs.CV
|
Learning with Noisy Labels (LNL) aims to improve the model generalization
when facing data with noisy labels, and existing methods generally assume that
noisy labels come from known classes, called closed-set noise. However, in
real-world scenarios, noisy labels from similar unknown classes, i.e., open-set
noise, may occur during the training and inference stages. Such open-world noisy
labels may significantly impact the performance of LNL methods. In this study,
we propose a novel dual-space joint learning method to robustly handle the
open-world noise. To mitigate model overfitting on closed-set and open-set
noises, a dual representation space is constructed by two networks. One is a
projection network that learns shared representations in the prototype space,
while the other is a One-Vs-All (OVA) network that makes predictions using
unique semantic representations in the class-independent space. Then, bi-level
contrastive learning and consistency regularization are introduced in two
spaces to enhance the detection capability for data with unknown classes. To
benefit from the memorization effects across different types of samples,
class-independent margin criteria are designed for sample identification, which
selects clean samples, weights closed-set noise, and filters open-set noise
effectively. Extensive experiments demonstrate that our method outperforms the
state-of-the-art methods and achieves an average accuracy improvement of 4.55\%
and an AUROC improvement of 6.17\% on CIFAR80N.
|
2501.11054
|
Temporal Analysis of Adversarial Attacks in Federated Learning
|
cs.LG cs.CR
|
In this paper, we experimentally analyze the robustness of selected Federated
Learning (FL) systems in the presence of adversarial clients. We find that
temporal attacks significantly affect model performance in the FL models
tested, especially when the adversaries are active throughout or during the
later rounds. We consider a variety of classic learning models, including
Multinomial Logistic Regression (MLR), Random Forest, XGBoost, Support Vector
Classifier (SVC), as well as various Neural Network models including Multilayer
Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network
(RNN), and Long Short-Term Memory (LSTM). Our results highlight the
effectiveness of temporal attacks and the need to develop strategies to make
the FL process more robust against such attacks. We also briefly consider the
effectiveness of defense mechanisms, including outlier detection in the
aggregation algorithm.
|
2501.11057
|
Machine Learning Surrogates for Optimizing Transportation Policies with
Agent-Based Models
|
cs.CE
|
Rapid urbanization and growing urban populations worldwide present
significant challenges for cities, including increased traffic congestion and
air pollution. Effective strategies are needed to manage traffic volumes and
reduce emissions. In practice, traditional traffic flow simulations are used to
test those strategies. However, high computational intensity usually limits
their applicability for investigating a multitude of different scenarios to
identify the best policies. This paper presents a first approach to using Graph
Neural Networks (GNN) as surrogates for large-scale agent-based simulation
models. In a case study using the MATSim model of Paris, the GNN effectively
learned the impacts of capacity reduction policies on citywide traffic flow.
Performance analysis across various road types and scenarios revealed that the
GNN could accurately capture policy-induced effects on edge-based traffic
volumes, particularly on roads directly affected by the policies and those with
higher traffic volumes.
|
2501.11063
|
Enhancing Sample Utilization in Noise-Robust Deep Metric Learning With
Subgroup-Based Positive-Pair Selection
|
cs.CV
|
The existence of noisy labels in real-world data negatively impacts the
performance of deep learning models. Although much research effort has been
devoted to improving the robustness towards noisy labels in classification
tasks, the problem of noisy labels in deep metric learning (DML) remains
under-explored. Existing noisy label learning methods designed for DML mainly
discard suspicious noisy samples, resulting in a waste of the training data. To
address this issue, we propose a noise-robust DML framework with SubGroup-based
Positive-pair Selection (SGPS), which constructs reliable positive pairs for
noisy samples to enhance the sample utilization. Specifically, SGPS first
effectively identifies clean and noisy samples by a probability-based clean
sample selection strategy. To further utilize the remaining noisy samples, we
discover their potential similar samples based on the subgroup information
given by a subgroup generation module and then aggregate them into informative
positive prototypes for each noisy sample via a positive prototype generation
module. Afterward, a new contrastive loss is tailored for the noisy samples
with their selected positive pairs. SGPS can be easily integrated into the
training process of existing pair-wise DML tasks, like image retrieval and face
recognition. Extensive experiments on multiple synthetic and real-world
large-scale label noise datasets demonstrate the effectiveness of our proposed
method. Without any bells and whistles, our SGPS framework outperforms the
state-of-the-art noisy label DML methods. Code is available at
\url{https://github.com/smuelpeng/SGPS-NoiseFreeDML}.
|
2501.11065
|
Enhancing Neural Spoken Language Recognition: An Exploration with
Multilingual Datasets
|
cs.SD cs.AI cs.LG eess.AS
|
In this research, we advanced a spoken language recognition system, moving
beyond traditional feature vector-based models. Our improvements focused on
effectively capturing language characteristics over extended periods using a
specialized pooling layer. We utilized a broad range of data from Common-Voice,
targeting ten languages across the Indo-European, Semitic, and East Asian families.
The major innovation involved optimizing the architecture of Time Delay Neural
Networks. We introduced additional layers and restructured these networks into
a funnel shape, enhancing their ability to process complex linguistic patterns.
A rigorous grid search determined the optimal settings for these networks,
significantly boosting their efficiency in language pattern recognition from
audio samples. The model underwent extensive training, including a phase with
augmented data, to refine its capabilities. The culmination of these efforts is
a highly accurate system, achieving a 97\% accuracy rate in language
recognition. This advancement represents a notable contribution to artificial
intelligence, specifically in improving the accuracy and efficiency of language
processing systems, a critical aspect in the engineering of advanced speech
recognition technologies.
|
2501.11067
|
IntellAgent: A Multi-Agent Framework for Evaluating Conversational AI
Systems
|
cs.CL cs.AI cs.LG
|
Large Language Models (LLMs) are transforming artificial intelligence,
evolving into task-oriented systems capable of autonomous planning and
execution. One of the primary applications of LLMs is conversational AI
systems, which must navigate multi-turn dialogues, integrate domain-specific
APIs, and adhere to strict policy constraints. However, evaluating these agents
remains a significant challenge, as traditional methods fail to capture the
complexity and variability of real-world interactions. We introduce
IntellAgent, a scalable, open-source multi-agent framework designed to evaluate
conversational AI systems comprehensively. IntellAgent automates the creation
of diverse, synthetic benchmarks by combining policy-driven graph modeling,
realistic event generation, and interactive user-agent simulations. This
innovative approach provides fine-grained diagnostics, addressing the
limitations of static and manually curated benchmarks with coarse-grained
metrics. IntellAgent represents a paradigm shift in evaluating conversational
AI. By simulating realistic, multi-policy scenarios across varying levels of
complexity, IntellAgent captures the nuanced interplay of agent capabilities
and policy constraints. Unlike traditional methods, it employs a graph-based
policy model to represent relationships, likelihoods, and complexities of
policy interactions, enabling highly detailed diagnostics. IntellAgent also
identifies critical performance gaps, offering actionable insights for targeted
optimization. Its modular, open-source design supports seamless integration of
new domains, policies, and APIs, fostering reproducibility and community
collaboration. Our findings demonstrate that IntellAgent serves as an effective
framework for advancing conversational AI by addressing challenges in bridging
research and deployment. The framework is available at
https://github.com/plurai-ai/intellagent
|
2501.11069
|
Refinement Module based on Parse Graph of Feature Map for Human Pose
Estimation
|
cs.CV
|
The human brain forms parse graphs of the human body that help humans perform
human pose estimation (HPE). Such a graph contains a hierarchical
structure, like a tree structure, and context relations among nodes. Many
researchers predefine the parse graph of body structure to design HPE
frameworks. However, these frameworks struggle to adapt to instances that
deviate from the predefined parse graph and are often parameter-heavy. Unlike
them, we view the feature map holistically, much like the human body. It can be
optimized using parse graphs, where each node's feature is an implicit
expression rather than a fixed one. This allows it to adapt to more instances,
unconstrained by rigid structural features. In this paper, we design the
Refinement Module based on the Parse Graph of feature map (RMPG), which
includes two stages: top-down decomposition and bottom-up combination. In the
first stage, the feature map is decomposed into multiple sub-feature maps along
the channel. In the second stage, the context relations of sub-feature maps are
calculated to obtain their respective context information and the sub-feature
maps with context information are concatenated along channels to obtain the
refined feature map. Additionally, we design a hierarchical network with fewer
parameters using multiple RMPG modules for HPE according to the parse graph of
body structure, some of which are supervised to obtain context relations among
body parts. Our network achieves excellent results on multiple mainstream human
pose datasets. More importantly, the effectiveness of RMPG is proven on
different methods. The code of RMPG will be released.
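A minimal sketch of the two RMPG stages, with a 1x1 convolution standing in for
the context-relation computation; channel counts and the context block are
illustrative placeholders.

import torch
import torch.nn as nn

class RMPGSketch(nn.Module):
    def __init__(self, channels, n_parts=4):
        super().__init__()
        assert channels % n_parts == 0
        c = channels // n_parts
        self.n_parts = n_parts
        self.context = nn.ModuleList(nn.Conv2d(c, c, 1) for _ in range(n_parts))

    def forward(self, x):
        parts = torch.chunk(x, self.n_parts, dim=1)   # top-down decomposition
        refined = [ctx(p) for ctx, p in zip(self.context, parts)]
        return torch.cat(refined, dim=1)              # bottom-up combination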
|
2501.11079
|
Federated Deep Reinforcement Learning for Energy Efficient
Multi-Functional RIS-Assisted Low-Earth Orbit Networks
|
cs.LG cs.AI eess.SP
|
In this paper, a novel network architecture that deploys the multi-functional
reconfigurable intelligent surface (MF-RIS) in low-Earth orbit (LEO) is
proposed. Unlike traditional RIS with only signal reflection capability, the
MF-RIS can reflect, refract, and amplify signals, as well as harvest energy
from wireless signals. Given the high energy demands in shadow regions where
solar energy is unavailable, MF-RIS is deployed in LEO to enhance signal
coverage and improve energy efficiency (EE). Building on this, we formulate a
long-term EE optimization problem by determining the optimal parameters for
MF-RIS configurations, including amplification and phase-shifts, energy
harvesting ratios, and LEO transmit beamforming. To address the complex
non-convex and non-linear problem, a federated learning enhanced multi-agent
deep deterministic policy gradient (FEMAD) scheme is designed. The multi-agent
DDPG of each agent provides the optimal action policy from its interactions
with the environment, whereas federated learning enables hidden-information
exchange among the agents. In numerical results, we can observe significant
EE improvements compared to the other benchmarks, including centralized deep
reinforcement learning as well as distributed multi-agent deep deterministic
policy gradient (DDPG). Additionally, the proposed LEO-MF-RIS architecture has
demonstrated its effectiveness, achieving the highest EE performance compared
to the scenarios of fixed/no energy harvesting in MF-RIS, traditional
reflection-only RIS, and deployment without RISs/MF-RISs.
|
2501.11084
|
B-Call: Integrating Ideological Position and Political Cohesion in
Legislative Voting Models
|
cs.SI stat.AP
|
This paper combines two significant areas of political science research:
measuring individual ideological position and cohesion. Although both
approaches help analyze legislative behaviors, no unified model currently
integrates these dimensions. To fill this gap, the paper proposes a methodology
called B-Call that combines ideological positioning with voting cohesion,
treating votes as random variables. The model is empirically validated using
roll-call data from the legislatures of the United States, Brazil, and Chile, which
represent diverse legislative dynamics. The analysis aims to capture the
complexities of voting and legislative behaviors, resulting in a
two-dimensional indicator. This study addresses gaps in current legislative
voting models, particularly in contexts with limited party control.
|
2501.11086
|
Can LLM Generate Regression Tests for Software Commits?
|
cs.SE cs.AI
|
Large Language Models (LLMs) have shown tremendous promise in automated
software engineering. In this paper, we investigate the opportunities of LLMs
for automatic regression test generation for programs that take highly
structured, human-readable inputs, such as XML parsers or JavaScript
interpreters. Concretely, we explore the following regression test generation
scenarios for such programs that have so far been difficult to test
automatically in the absence of corresponding input grammars:
$\bullet$ Bug finding. Given a code change (e.g., a commit or pull request),
our LLM-based approach generates a test case with the objective of revealing
any bugs that might be introduced if that change is applied.
$\bullet$ Patch testing. Given a patch, our LLM-based approach generates a
test case that fails before but passes after the patch. This test can be added
to the regression test suite to catch similar bugs in the future.
We implement Cleverest, a feedback-directed, zero-shot LLM-based regression
test generation technique, and evaluate its effectiveness on 22 commits to
three subject programs: Mujs, Libxml2, and Poppler. For programs using more
human-readable file formats, like XML or JavaScript, we found Cleverest
performed very well. It generated easy-to-understand bug-revealing or
bug-reproduction test cases for the majority of commits in just under three
minutes -- even when only the code diff or commit message (unless it was too
vague) was given. For programs with more compact file formats, like PDF, as
expected, it struggled to generate effective test cases. However, the
LLM-supplied test cases are not very far from becoming effective (e.g., when
used as a seed by a greybox fuzzer or as a starting point by the developer).
|
2501.11087
|
Leveraging counterfactual concepts for debugging and improving CNN model
performance
|
cs.CV cs.AI
|
Counterfactual explanation methods have recently received significant
attention for explaining CNN-based image classifiers due to their ability to
provide easily understandable explanations that align more closely with human
reasoning. However, limited attention has been given to utilizing
explainability methods to improve model performance. In this paper, we propose
to leverage counterfactual concepts aiming to enhance the performance of CNN
models in image classification tasks. Our proposed approach utilizes
counterfactual reasoning to identify crucial filters used in the
decision-making process. Following this, we perform model retraining through
the design of a novel methodology and loss functions that encourage the
activation of class-relevant important filters and discourage the activation of
irrelevant filters for each class. This process effectively minimizes the
deviation of activation patterns of local predictions and the global activation
patterns of their respective inferred classes. By incorporating counterfactual
explanations, we validate unseen model predictions and identify
misclassifications. The proposed methodology provides insights into potential
weaknesses and biases in the model's learning process, enabling targeted
improvements and enhanced performance. Experimental results on publicly
available datasets have demonstrated an improvement of 1-2\%, validating the
effectiveness of the approach.
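A minimal sketch of how such a retraining loss term might look in PyTorch; the per-filter activation summary, the mask of class-relevant filters, and the weighting are hypothetical names for illustration, not the paper's exact loss functions.

```python
# Sketch: encourage filters marked relevant for the label's class and
# discourage the rest, added to the usual cross-entropy during retraining.
import torch
import torch.nn.functional as F

def filter_regularizer(activations, labels, relevant_mask, weight=0.1):
    """activations: (B, C) mean activation per filter of the target layer.
    relevant_mask: (num_classes, C) binary mask of class-relevant filters,
    e.g. identified via counterfactual probing."""
    mask = relevant_mask[labels].float()            # (B, C)
    encourage = -(activations * mask).mean()        # reward relevant filters
    discourage = (activations * (1 - mask)).mean()  # penalize irrelevant ones
    return weight * (encourage + discourage)

# Used alongside the usual objective, e.g.:
#   loss = F.cross_entropy(logits, y) + filter_regularizer(acts, y, mask)
```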
|
2501.11088
|
Multi-LiCa: A Motion and Targetless Multi LiDAR-to-LiDAR Calibration
Framework
|
cs.RO
|
Today's autonomous vehicles rely on a multitude of sensors to perceive their
environment. To improve perception or create redundancy, the sensors'
alignment relative to each other must be known. With Multi-LiCa, we present a
novel approach for this alignment, i.e., calibration. We present an automatic
motion- and targetless approach for the extrinsic multi LiDAR-to-LiDAR
calibration without the need for additional sensor modalities or an initial
transformation input. We propose a two-step process with feature-based matching
for the coarse alignment and a GICP-based fine registration in combination with
a cost-based matching strategy. Our approach can be applied to any number of
sensors and positions if there is a partial overlap between the field of view
of single sensors. We show that our pipeline is better generalized to different
sensor setups and scenarios and is on par or better in calibration accuracy
than existing approaches. The presented framework is integrated in ROS 2 but
can also be used as a standalone application. To build upon our work, our
source code is available at: https://github.com/TUMFTM/Multi_LiCa.
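A generic two-step coarse-then-fine LiDAR-to-LiDAR alignment sketch with Open3D (a recent version is assumed); it mirrors the idea above, using FPFH features with RANSAC for the coarse step and GICP for refinement, but it is not the Multi-LiCa implementation.

```python
# Sketch: feature-based global registration, then GICP fine registration.
import open3d as o3d

def align(source, target, voxel=0.5):
    src, tgt = source.voxel_down_sample(voxel), target.voxel_down_sample(voxel)
    for pc in (src, tgt):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    feat = lambda pc: o3d.pipelines.registration.compute_fpfh_feature(
        pc, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    # Coarse: feature matching needs no initial transformation input.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, feat(src), feat(tgt), True, 3 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * voxel)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Fine: GICP refinement seeded with the coarse estimate.
    fine = o3d.pipelines.registration.registration_generalized_icp(
        src, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationForGeneralizedICP())
    return fine.transformation
```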
|
2501.11090
|
Dynamic semantic networks for exploration of creative thinking
|
cs.CL
|
Human creativity originates from brain cortical networks that are specialized
in idea generation, processing, and evaluation. The concurrent verbalization of
our inner thoughts during the execution of a design task enables the use of
dynamic semantic networks as a tool for investigating, evaluating, and
monitoring creative thought. The primary advantage of using lexical databases
such as WordNet for reproducible information-theoretic quantification of
convergence or divergence of design ideas in creative problem solving is the
simultaneous handling of both words and meanings, which enables interpretation
of the constructed dynamic semantic networks in terms of underlying
functionally active brain cortical regions involved in concept comprehension
and production. In this study, the quantitative dynamics of semantic measures
computed with a moving time window is investigated empirically in the DTRS10
dataset with design review conversations and detected divergent thinking is
shown to predict success of design ideas. Thus, dynamic semantic networks
present an opportunity for real-time computer-assisted detection of critical
events during creative problem solving, with the goal of employing this
knowledge to artificially augment human creativity.
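A minimal sketch, using NLTK's WordNet, of a moving-window divergence measure over the nouns of a verbalization transcript; the window size and the choice of path similarity over first synsets are illustrative assumptions, not the paper's exact semantic measures.

```python
# Sketch: average pairwise WordNet dissimilarity of nouns per sliding window.
from itertools import combinations
from nltk.corpus import wordnet as wn

def window_divergence(nouns, window=10):
    scores = []
    for i in range(len(nouns) - window + 1):
        pairs, total = 0, 0.0
        for w1, w2 in combinations(nouns[i:i + window], 2):
            s1, s2 = wn.synsets(w1, pos=wn.NOUN), wn.synsets(w2, pos=wn.NOUN)
            if s1 and s2:
                sim = s1[0].path_similarity(s2[0]) or 0.0
                total += 1.0 - sim
                pairs += 1
        scores.append(total / pairs if pairs else 0.0)
    return scores  # higher values suggest more divergent (less related) ideas
```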
|
2501.11094
|
Enhanced Suicidal Ideation Detection from Social Media Using a
CNN-BiLSTM Hybrid Model
|
cs.CL cs.AI cs.CY
|
Suicidal ideation detection is crucial for preventing suicides, a leading
cause of death worldwide. Many individuals express suicidal thoughts on social
media, offering a vital opportunity for early detection through advanced
machine learning techniques. The identification of suicidal ideation in social
media text is improved by utilising a hybrid framework that integrates
Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory
(BiLSTM), augmented with an attention mechanism. To enhance the interpretability
of the model's predictions, Explainable AI (XAI) methods are incorporated, with
a particular focus on SHapley Additive exPlanations (SHAP). Initially, the
model reached an accuracy of 92.81%. By applying fine-tuning and early stopping
techniques, the accuracy improved to 94.29%. The
SHAP analysis revealed key features influencing the model's predictions, such
as terms related to mental health struggles. This level of transparency boosts
the model's credibility while helping mental health professionals understand
and trust the predictions. This work highlights the potential for improving the
accuracy and interpretability of detecting suicidal tendencies, making a
valuable contribution to the progress of mental health monitoring systems. It
emphasizes the significance of blending powerful machine learning methods with
explainability to develop reliable and impactful mental health solutions.
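A minimal Keras sketch of a CNN-BiLSTM classifier with attention; the layer sizes and the self-attention-plus-pooling head are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: Conv1D for local n-gram features, BiLSTM for bidirectional context,
# self-attention over time steps, sigmoid output for ideation probability.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(vocab_size=20000, seq_len=200):
    inp = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, 128)(inp)
    x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Attention()([x, x])           # self-attention over time steps
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```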
|
2501.11096
|
Reproducibility review of "Why Not Other Classes": Towards
Class-Contrastive Back-Propagation Explanations
|
cs.CV cs.LG
|
"Why Not Other Classes?": Towards Class-Contrastive Back-Propagation
Explanations (Wang & Wang, 2022) provides a method for contrastively explaining
why a certain class in a neural network image classifier is chosen above
others. This method consists of using back-propagation-based explanation
methods from after the softmax layer rather than before. Our work consists of
reproducing the work in the original paper. We also provide extensions to the
paper by evaluating the method on XGradCAM, FullGrad, and Vision Transformers
to assess its generalization capabilities. The reproductions show results
similar to those of the original paper, the only difference being the
visualization of the heatmaps, which could not be reproduced to look similar.
Generalization appears good overall, with implementations working for Vision
Transformers and alternative back-propagation methods. We also show that the
original paper suffers from issues such as a lack of detail in the method and
an erroneous equation, which make reproducibility difficult. To remedy this, we
provide an open-source repository containing all code used for this project.
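A minimal PyTorch sketch of the core idea: start back-propagation after the softmax (from the log-probability) rather than before it (from the logit); variable names are ours, not the original code's.

```python
# Sketch: conventional vs. class-contrastive back-propagation starting point.
import torch
import torch.nn.functional as F

def saliency(model, x, target, contrastive=True):
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    if contrastive:
        score = F.log_softmax(logits, dim=1)[0, target]  # post-softmax start
    else:
        score = logits[0, target]                        # pre-softmax start
    score.backward()
    return x.grad
```

Because the log-probability equals the target logit minus the log-sum-exp of all logits, its gradient subtracts a probability-weighted combination of the other classes' gradients, which is exactly the contrastive effect the method aims for.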
|
2501.11097
|
Unit Region Encoding: A Unified and Compact Geometry-aware
Representation for Floorplan Applications
|
cs.CV
|
We present the Unit Region Encoding of floorplans, which is a unified and
compact geometry-aware encoding representation for various applications,
ranging from interior space planning, floorplan metric learning to floorplan
generation tasks. Floorplans are represented as latent encodings on a
boundary-adaptive unit-region partition derived by clustering the proposed
geometry-aware density map. The latent encodings are extracted by a
trained network (URE-Net) from the input dense density map and other available
semantic maps. Compared to over-segmented rasterized images and room-level
graph structures, our representation can be flexibly adapted to different
applications through the sliced unit regions while achieving higher accuracy
and better visual quality. We conduct a variety of
experiments and compare to the state-of-the-art methods on the aforementioned
applications to validate the superiority of our representation, as well as
extensive ablation studies to demonstrate the effect of our slicing choices.
|
2501.11102
|
RDG-GS: Relative Depth Guidance with Gaussian Splatting for Real-time
Sparse-View 3D Rendering
|
cs.CV
|
Efficiently synthesizing novel views from sparse inputs while maintaining
accuracy remains a critical challenge in 3D reconstruction. While advanced
techniques like radiance fields and 3D Gaussian Splatting achieve rendering
quality and impressive efficiency with dense view inputs, they suffer from
significant geometric reconstruction errors when applied to sparse input views.
Moreover, although recent methods leverage monocular depth estimation to
enhance geometric learning, their dependence on single-view estimated depth
often leads to view inconsistency issues across different viewpoints.
Consequently, this reliance on absolute depth can introduce inaccuracies in
geometric information, ultimately compromising the quality of scene
reconstruction with Gaussian splats. In this paper, we present RDG-GS, a novel
sparse-view 3D rendering framework with Relative Depth Guidance based on 3D
Gaussian Splatting. The core innovation lies in utilizing relative depth
guidance to refine the Gaussian field, steering it towards view-consistent
spatial geometric representations, thereby enabling the reconstruction of
accurate geometric structures and capturing intricate textures. First, we
devise refined depth priors to rectify the coarsely estimated depth and to
inject global and fine-grained scene information into the regular Gaussians.
Building on this, to address spatial geometric inaccuracies from absolute
depth, we propose relative depth guidance that optimizes the similarity between
spatially correlated patches of depth and images. Additionally, we directly
handle sparse areas that are difficult to converge through adaptive sampling
for quick densification. Across extensive experiments on Mip-NeRF360, LLFF, DTU,
and Blender, RDG-GS demonstrates state-of-the-art rendering quality and
efficiency, making a significant advancement for real-world application.
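A minimal sketch of what a patch-wise relative-depth loss can look like: penalizing low Pearson correlation between rendered and monocular depth patches supervises only relative structure, since correlation is invariant to per-patch affine shifts. This is our reading of the idea above, not the released RDG-GS code.

```python
# Sketch: patch-wise Pearson correlation loss between two depth maps.
import torch
import torch.nn.functional as F

def relative_depth_loss(rendered, mono, patch=16):
    """rendered, mono: (B, 1, H, W) depth maps."""
    def patches(d):
        p = F.unfold(d, kernel_size=patch, stride=patch)  # (B, patch*patch, N)
        return p - p.mean(dim=1, keepdim=True)            # zero-mean per patch
    r, m = patches(rendered), patches(mono)
    corr = (r * m).sum(1) / (r.norm(dim=1) * m.norm(dim=1) + 1e-8)
    return (1.0 - corr).mean()  # 0 when every patch is perfectly correlated
```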
|
2501.11107
|
ChaosEater: Fully Automating Chaos Engineering with Large Language
Models
|
cs.SE cs.AI cs.CL cs.DC cs.NI
|
Chaos Engineering (CE) is an engineering technique aimed at improving the
resiliency of distributed systems. It involves artificially injecting specific
failures into a distributed system and observing its behavior in response.
Based on the observation, the system can be proactively improved to handle
those failures. Recent CE tools realize the automated execution of predefined
CE experiments. However, defining these experiments and reconfiguring the
system after the experiments still remain manual. To reduce the costs of the
manual operations, we propose \textsc{ChaosEater}, a \textit{system} for
automating the entire CE operations with Large Language Models (LLMs). It
pre-defines the general flow according to the systematic CE cycle and assigns
subdivided operations within the flow to LLMs. We assume systems based on
Infrastructure as Code (IaC), wherein the system configurations and artificial
failures are managed through code. Hence, the LLMs' operations in our
\textit{system} correspond to software engineering tasks, including requirement
definition, code generation and debugging, and testing. We validate our
\textit{system} through case studies on both small and large systems. The
results demonstrate that our \textit{system} significantly reduces both time
and monetary costs while completing reasonable single CE cycles.
|
2501.11109
|
Estimation Error: Distribution and Pointwise Limits
|
cs.IT math.IT
|
In this paper, we examine the distribution and convergence properties of the
estimation error $W = X - \hat{X}(Y)$, where $\hat{X}(Y)$ is the Bayesian
estimator of a random variable $X$ from a noisy observation $Y = X +\sigma Z$
where $\sigma$ is the parameter indicating the strength of noise $Z$. Using the
conditional expectation framework (that is, $\hat{X}(Y)$ is the conditional
mean), we define the normalized error $\mathcal{E}_\sigma = \frac{W}{\sigma}$
and explore its properties.
Specifically, in the first part of the paper, we characterize the probability
density function of $W$ and $\mathcal{E}_\sigma$. Along the way, we also find
conditions for the existence of the inverse functions for the conditional
expectations. In the second part, we study pointwise (i.e., almost sure)
convergence of $\mathcal{E}_\sigma$ under various assumptions about the noise
and the underlying distributions. Our results extend some of the previous
limits of $\mathcal{E}_\sigma$ studied under the $L^2$ convergence, known as
the \emph{mmse dimension}, to the pointwise case.
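For concreteness, a standard Gaussian example (our illustration, not taken from the paper) shows how $\mathcal{E}_\sigma$ behaves in this framework:

```latex
% X, Z ~ N(0,1) independent, Y = X + sigma Z:
\[
\hat{X}(Y) = \mathbb{E}[X \mid Y] = \frac{Y}{1+\sigma^2},
\qquad
W = X - \hat{X}(Y) = \frac{\sigma^2 X - \sigma Z}{1+\sigma^2},
\qquad
\mathcal{E}_\sigma = \frac{W}{\sigma} = \frac{\sigma X - Z}{1+\sigma^2},
\]
```

so $\mathcal{E}_\sigma \to -Z$ almost surely as $\sigma \to 0$: the normalized error converges pointwise to the (negated) noise itself.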
|
2501.11110
|
Chain-of-Reasoning: Towards Unified Mathematical Reasoning in Large
Language Models via a Multi-Paradigm Perspective
|
cs.CL
|
Large Language Models (LLMs) have made notable progress in mathematical
reasoning, yet they often rely on single-paradigm reasoning that limits their
effectiveness across diverse tasks. In this paper, we introduce
Chain-of-Reasoning (CoR), a novel unified framework that integrates multiple
reasoning paradigms--Natural Language Reasoning (NLR), Algorithmic Reasoning
(AR), and Symbolic Reasoning (SR)--to enable synergistic collaboration. CoR
generates multiple potential answers using different reasoning paradigms and
synthesizes them into a coherent final solution. We propose a Progressive
Paradigm Training (PPT) strategy that allows models to progressively master
these paradigms, culminating in the development of CoR-Math-7B. Experimental
results demonstrate that CoR-Math-7B significantly outperforms current SOTA
models, achieving up to a 41.0% absolute improvement over GPT-4 in theorem
proving tasks and a 7.9% improvement over RL-based methods in arithmetic tasks.
These results showcase the enhanced comprehensive mathematical ability of our
model, which achieves significant performance gains on specific tasks and
enables zero-shot generalization across tasks.
|
2501.11111
|
OpenLiDARMap: Zero-Drift Point Cloud Mapping using Map Priors
|
cs.RO
|
Accurate localization is a critical component of mobile autonomous systems,
especially in Global Navigation Satellite Systems (GNSS)-denied environments
where traditional methods fail. In such scenarios, environmental sensing is
essential for reliable operation. However, approaches such as LiDAR odometry
and Simultaneous Localization and Mapping (SLAM) suffer from drift over long
distances, especially in the absence of loop closures. Map-based localization
offers a robust alternative, but the challenge lies in creating and
georeferencing maps without GNSS support. To address this issue, we propose a
method for creating georeferenced maps without GNSS by using publicly available
data, such as building footprints and surface models derived from sparse aerial
scans. Our approach integrates these data with onboard LiDAR scans to produce
dense, accurate, georeferenced 3D point cloud maps. By combining an Iterative
Closest Point (ICP) scan-to-scan and scan-to-map matching strategy, we achieve
high local consistency without suffering from long-term drift. Thus, we
eliminate the reliance on GNSS for the creation of georeferenced maps. The
results demonstrate that LiDAR-only mapping can produce accurate georeferenced
point cloud maps when augmented with existing map priors.
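A minimal scan-to-map ICP sketch with Open3D; the point-to-plane estimation, thresholds, and accumulation scheme are illustrative choices, not the OpenLiDARMap implementation (which combines scan-to-scan and scan-to-map matching).

```python
# Sketch: register each scan against a georeferenced prior map, seeding ICP
# with the previous pose, and accumulate the aligned points into the map.
import numpy as np
import open3d as o3d

def icp(source, target, init, threshold=1.0):
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation

def map_scans(scans, prior_map):
    """prior_map: point cloud from building footprints / aerial surface models;
    scans: onboard LiDAR point clouds in acquisition order."""
    cloud_map = prior_map
    cloud_map.estimate_normals()
    pose = np.eye(4)
    for scan in scans:
        pose = icp(scan, cloud_map, pose)   # seed with the previous pose
        cloud_map += scan.transform(pose)   # accumulate georeferenced points
        cloud_map.estimate_normals()
    return cloud_map
```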
|
2501.11112
|
A Novel Pearson Correlation-Based Merging Algorithm for Robust
Distributed Machine Learning with Heterogeneous Data
|
cs.LG
|
Federated learning faces significant challenges in scenarios with
heterogeneous data distributions and adverse network conditions, such as
delays, packet loss, and data poisoning attacks. This paper proposes a novel
method based on the SCAFFOLD algorithm to improve the quality of local updates
and enhance the robustness of the global model. The key idea is to form
intermediary nodes by merging local models with high similarity, using the
Pearson correlation coefficient as a similarity measure. The proposed merging
algorithm reduces the number of local nodes while maintaining the accuracy of
the global model, effectively addressing communication overhead and bandwidth
consumption. Experimental results on the MNIST dataset under simulated
federated learning scenarios demonstrate the method's effectiveness. After 10
rounds of training using a CNN model, the proposed approach achieved accuracies
of 0.82, 0.73, and 0.66 under normal conditions, packet loss, and data poisoning
attacks, respectively, outperforming the baseline SCAFFOLD algorithm. These
results highlight the potential of the proposed method to improve efficiency
and resilience in federated learning systems.
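A minimal sketch of the merging idea: flatten each client's model update, compute pairwise Pearson correlations, and average clients whose updates are highly similar into intermediary nodes. The threshold and the greedy grouping are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch: group clients by Pearson similarity, average each group's update.
import numpy as np

def merge_similar_clients(updates, threshold=0.9):
    """updates: list of 1-D numpy arrays (flattened model weights/updates)."""
    corr = np.corrcoef(np.stack(updates))  # pairwise Pearson coefficients
    unassigned, groups = set(range(len(updates))), []
    while unassigned:
        i = unassigned.pop()
        group = [i] + [j for j in list(unassigned) if corr[i, j] >= threshold]
        unassigned -= set(group)
        groups.append(group)
    # One intermediary node per group: the average of its members' updates.
    return [np.mean([updates[j] for j in g], axis=0) for g in groups]
```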
|
2501.11114
|
Clinical trial cohort selection using Large Language Models on n2c2
Challenges
|
cs.CL cs.AI
|
Clinical trials are a critical process in the medical field for introducing
new treatments and innovations. However, cohort selection for clinical trials
is a time-consuming process that often requires manual review of patient text
records for specific keywords. Though there have been studies on standardizing
the information across the various platforms, Natural Language Processing (NLP)
tools remain crucial for spotting eligibility criteria in textual reports.
Recently, pre-trained large language models (LLMs) have gained popularity for
various NLP tasks due to their ability to acquire a nuanced understanding of
text. In this paper, we study the performance of large language models on
clinical trial cohort selection and leverage the n2c2 challenges to benchmark
their performance. Our results are promising with regard to the incorporation
of LLMs for simple cohort selection tasks, but also highlight the difficulties
encountered by these models as soon as fine-grained knowledge and reasoning are
required.
|
2501.11120
|
Tell me about yourself: LLMs are aware of their learned behaviors
|
cs.CL cs.AI cs.CR cs.LG
|
We study behavioral self-awareness -- an LLM's ability to articulate its
behaviors without requiring in-context examples. We finetune LLMs on datasets
that exhibit particular behaviors, such as (a) making high-risk economic
decisions, and (b) outputting insecure code. Despite the datasets containing no
explicit descriptions of the associated behavior, the finetuned LLMs can
explicitly describe it. For example, a model trained to output insecure code
says, ``The code I write is insecure.'' Indeed, models show behavioral
self-awareness for a range of behaviors and for diverse evaluations. Note that
while we finetune models to exhibit behaviors like writing insecure code, we do
not finetune them to articulate their own behaviors -- models do this without
any special training or examples.
Behavioral self-awareness is relevant for AI safety, as models could use it
to proactively disclose problematic behaviors. In particular, we study backdoor
policies, where models exhibit unexpected behaviors only under certain trigger
conditions. We find that models can sometimes identify whether or not they have
a backdoor, even without its trigger being present. However, models are not
able to directly output their trigger by default.
Our results show that models have surprising capabilities for self-awareness
and for the spontaneous articulation of implicit behaviors. Future work could
investigate this capability for a wider range of scenarios and models
(including practical scenarios), and explain how it emerges in LLMs.
|
2501.11122
|
Optimal Functional $2^{s-1}$-Batch Codes: Exploring New Sufficient
Conditions
|
cs.IT math.IT
|
A functional $k$-batch code of dimension $s$ consists of $n$ servers storing
linear combinations of $s$ linearly independent information bits. These codes
are designed to recover any multiset of $k$ requests, each being a linear
combination of the information bits, by $k$ disjoint subsets of servers. A
recent conjecture suggests that for any set of $k = 2^{s-1}$ requests, the
optimal solution requires $2^s-1$ servers. This paper shows that the problem of
functional $k$-batch codes is equivalent to several other problems. Using these
equivalences, we derive sufficient conditions that improve understanding of the
problem and enhance the ability to find the optimal solution.
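For intuition, a small worked example of our own for $s = 2$ (so $k = 2^{s-1} = 2$) matches the conjectured bound of $2^s - 1 = 3$ servers:

```latex
% Three servers store x_1, x_2, and x_1 + x_2. Any multiset of two requests
% is served by disjoint server subsets, e.g. for the multiset {x_1, x_1}:
\[
x_1 \ \text{from server } 1,
\qquad
x_1 = x_2 + (x_1 + x_2) \ \text{from servers } 2 \text{ and } 3 .
\]
```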
|
2501.11123
|
Assessing Semantic Annotation Activities with Formal Concept Analysis
|
cs.CL
|
This paper describes an approach to assessing semantic annotation activities
based on formal concept analysis (FCA). In this approach, annotators use
taxonomical ontologies created by domain experts to annotate digital resources.
Then, using FCA, domain experts are provided with concept lattices that
graphically display how their ontologies were used during the semantic
annotation process. In consequence, they can advise annotators on how to better
use the ontologies, as well as how to refine them to better suit the needs of
the semantic annotators. To illustrate the approach, we describe its
implementation in @note, a Rich Internet Application (RIA) for the
collaborative annotation of digitized literary texts, we exemplify its use with
a case study, and we provide some evaluation results using the method.
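A minimal sketch of the FCA machinery on a toy annotation context; the objects, attributes, and closure-based enumeration are our illustration and are unrelated to @note's actual data.

```python
# Sketch: enumerate all formal concepts (extent, intent) of a small context
# of annotated passages (objects) and ontology terms (attributes).
from itertools import combinations

context = {
    "passage1": {"Character", "Hero"},
    "passage2": {"Character", "Villain"},
    "passage3": {"Place"},
}
attributes = set().union(*context.values())

def intent(objs):  # common attributes of a set of objects
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def extent(attrs):  # objects possessing all the given attributes
    return {o for o, a in context.items() if attrs <= a}

# Every concept extent is the closure of some object subset, so closing all
# subsets enumerates the full concept lattice of this toy context.
concepts = set()
objs = list(context)
for r in range(len(objs) + 1):
    for combo in combinations(objs, r):
        e = extent(intent(set(combo)))
        concepts.add((frozenset(e), frozenset(intent(e))))
for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), sorted(i))
```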
|
2501.11124
|
Rethinking Pseudo-Label Guided Learning for Weakly Supervised Temporal
Action Localization from the Perspective of Noise Correction
|
cs.CV
|
Pseudo-label learning methods have been widely applied in weakly-supervised
temporal action localization. Existing works directly utilize weakly-supervised
base model to generate instance-level pseudo-labels for training the
fully-supervised detection head. We argue that the noise in pseudo-labels would
interfere with the learning of the fully-supervised detection head, leading to
significant performance degradation. Issues with noisy labels include: (1)
inaccurate boundary localization; (2) undetected short action clips; (3)
multiple adjacent segments incorrectly detected as one segment. To target these
issues, we introduce a two-stage noisy label learning strategy to harness every
potential useful signal in noisy labels. First, we propose a frame-level
pseudo-label generation model with a context-aware denoising algorithm to
refine the boundaries. Second, we introduce an online-revised teacher-student
framework with a missing instance compensation module and an ambiguous instance
correction module to solve the short-action-missing and many-to-one problems.
Besides, we apply a high-quality pseudo-label mining loss in our online-revised
teacher-student framework to add different weights to the noisy labels to train
more effectively. Our model greatly outperforms the previous state-of-the-art
method in both detection accuracy and inference speed on the THUMOS14 and
ActivityNet v1.2 benchmarks.
|
2501.11126
|
SIC-free Multicast Scheduling for Multi-antenna Coded Caching
|
cs.IT math.IT
|
Multi-antenna coded caching (CC) with multicast beamforming typically relies
on a complex successive interference cancellation (SIC) structure to decode a
superposition of multiple streams received by each user. Signal-level CC
schemes require the regeneration and cancellation of interfering signals at the
physical layer of each receiver, which complicates practical implementations.
To address this, we propose a bit-level multicast scheduling scheme enabling
linear, SIC-free decoding of parallel streams by repeatedly transmitting data
terms with linearly independent coefficients. Two reference strategies and a
novel sparse strategy are considered for constructing the coefficient matrix.
The reference cases include the random strategy, which lacks control over
matrix construction, and the equal-distant strategy, which balances users'
interference and data terms equally. In contrast, the sparse strategy minimizes
the number of multicast streams transmitted in parallel during each interval.
This approach simplifies both the decoding process and the beamforming design
by decoupling the desired data terms for each user and reducing the number of
SINR constraints, respectively. To further enhance the symmetric rate, a
successive projection algorithm is applied to exploit channel properties and
optimize user ordering. With the coefficient matrix and optimized user ordering
in place, multicast beamformers are devised to aggregate desired data from
relevant multicast streams. Numerical simulations validate the effectiveness of
the sparse strategy and user scheduling, demonstrating significant gains in
symmetric rate.
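A toy illustration of our own for the linear, SIC-free decoding principle: if a user's two desired data terms $d_1, d_2$ arrive over two intervals as superpositions with linearly independent coefficients,

```latex
\[
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
=
\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}
\begin{pmatrix} d_1 \\ d_2 \end{pmatrix}
\quad\Longrightarrow\quad
d_1 = 2y_1 - y_2, \qquad d_2 = y_2 - y_1,
\]
```

then the user recovers both terms by inverting the coefficient matrix, with no successive cancellation at the signal level.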
|
2501.11127
|
A Regularized Online Newton Method for Stochastic Convex Bandits with
Linear Vanishing Noise
|
math.OC cs.LG stat.ML
|
We study a stochastic convex bandit problem where the subgaussian noise
parameter is assumed to decrease linearly as the learner selects actions closer
and closer to the minimizer of the convex loss function. Accordingly, we
propose a Regularized Online Newton Method (RONM) for solving the problem,
based on the Online Newton Method (ONM) of arXiv:2406.06506. Our RONM reaches a
polylogarithmic regret in the time horizon $n$ when the loss function grows
quadratically in the constraint set, which recovers the results of
arXiv:2402.12042 in linear bandits. Our analyses rely on the growth rate of the
precision matrix $\Sigma_t^{-1}$ in ONM, and we find that its linear growth
resolves the question exactly. These analyses also help us obtain better convergence
rates when the loss function grows faster. We also study and analyze two new
bandit models: stochastic convex bandits with noise scaled to a subgaussian
parameter function and convex bandits with stochastic multiplicative noise.
|
2501.11128
|
A Collection of Question Answering Datasets for Norwegian
|
cs.CL cs.AI
|
This paper introduces a new suite of question answering datasets for
Norwegian: NorOpenBookQA, NorCommonSenseQA, NorTruthfulQA, and NRK-Quiz-QA. The
data covers a wide range of skills and knowledge domains, including world
knowledge, commonsense reasoning, truthfulness, and knowledge about Norway.
Covering both of the written standards of Norwegian - Bokm{\aa}l and Nynorsk -
our datasets comprise over 10k question-answer pairs, created by native
speakers. We detail our dataset creation approach and present the results of
evaluating 11 language models (LMs) in zero- and few-shot regimes. Most LMs
perform better in Bokm{\aa}l than Nynorsk, struggle most with commonsense
reasoning, and are often untruthful in generating answers to questions. All our
datasets and annotation materials are publicly available.
|
2501.11129
|
Optimal Binary Variable-Length Codes with a Bounded Number of 1's per
Codeword: Design, Analysis, and Applications
|
cs.IT cs.DS math.IT
|
In this paper, we consider the problem of constructing optimal average-length
binary codes under the constraint that each codeword must contain at most $D$
ones, where $D$ is a given input parameter. We provide an $O(n^2D)$-time
complexity algorithm for the construction of such codes, where $n$ is the
number of codewords. We also describe several scenarios where the need to
design these kinds of codes naturally arises. Our algorithms allow us to
construct both optimal average-length prefix binary codes and optimal
average-length alphabetic binary codes. In the former case, our $O(n^2D)$-time
algorithm substantially improves on the previously known $O(n^{2+D})$-time
complexity algorithm for the same problem. We also provide a Kraft-like
inequality for the existence of (optimal) variable-length binary codes, subject
to the above-described constraint on the number of 1's in each codeword.
|
2501.11130
|
Efficient and accurate simulation of the Smith-Zener pinning mechanism
during grain growth using a front-tracking numerical framework
|
cs.CE
|
This study proposes a new full-field approach for modeling grain boundary
pinning by second phase particles in two-dimensional polycrystals. These
particles are of great importance during thermomechanical treatments, as they
cause deviations from the microstructural evolution that the alloy would
undergo in their absence. This phenomenon, well known as Smith-Zener
pinning, is widely used by metallurgists to control the grain size during the
metal forming process of many alloys. Predictive tools are then needed to
accurately model this phenomenon. This article introduces a new methodology for
the simulation of microstructural evolutions subjected to the presence of
second phase particles. The methodology employs a Lagrangian 2D front-tracking
methodology, while the particles are modeled using discretized circular shapes
or pinning nodes. The evolution of the particles can also be modeled, using a
constant particle-shrinkage velocity. This approach has the advantages of
improving on the limited description of the phenomenon in vertex approaches,
of being usable for a wide range of second-phase particle sizes, and of
improving calculation times compared to front-capturing approaches.
|
2501.11131
|
Spatio-temporal characterisation of underwater noise through semantic
trajectories
|
stat.AP cs.DB
|
Underwater noise pollution from human activities, particularly shipping, has
been recognised as a serious threat to marine life. The sound generated by
vessels can have various adverse effects on fish and aquatic ecosystems in
general. In this setting, the estimation and analysis of the underwater noise
produced by vessels is an important challenge for the preservation of the
marine environment. In this paper we propose a model for the spatio-temporal
characterisation of the underwater noise generated by vessels. The approach is
based on the reconstruction of the vessels' trajectories from Automatic
Identification System (AIS) data and on their deployment in a spatio-temporal
database. Trajectories are enriched with semantic information like the acoustic
characteristics of the vessels' engines or the activity performed by the
vessels. We define a model for underwater noise propagation and use the
trajectories' information to infer how noise propagates in the area of
interest. We develop our approach for a case study of fishery activities
in the Northern Adriatic Sea, an area of the Mediterranean Sea that is well
known to be highly exploited. We implement our approach using MobilityDB, an
open source geospatial trajectory data management and analysis platform, which
offers spatio-temporal operators and indexes improving the efficiency of our
system. We use this platform to conduct various analyses of the underwater
noise generated in the Northern Adriatic Sea, aiming at estimating the impact
of fishing activities on underwater noise pollution and at demonstrating the
flexibility and expressiveness of our approach.
|