| id | title | categories | abstract |
|---|---|---|---|
2501.11924
|
Make Full Use of Testing Information: An Integrated Accelerated Testing
and Evaluation Method for Autonomous Driving Systems
|
cs.AI
|
Testing and evaluation are an important step before the large-scale
application of autonomous driving systems (ADSs). Based on the three-level
scenario abstraction theory, testing can be performed within a logical
scenario, followed by an evaluation stage that takes as input the testing
results of each concrete scenario generated from the logical parameter space.
This process produces abundant testing information that is beneficial for
comprehensive and accurate evaluations. To make full use of this information,
this paper proposes an Integrated accelerated Testing and Evaluation Method
(ITEM). Building on a Monte Carlo Tree Search (MCTS) paradigm and the dual
surrogates testing framework proposed in our previous work, this paper carries
the intermediate information generated during the testing stage (i.e., the
tree structure, including the affiliation of each historical sampled point
with the subspaces and the parent-child relationships between subspaces) into
the evaluation stage to achieve accurate hazardous domain identification.
Moreover, to better serve this purpose, the UCB calculation is improved so
that the search algorithm focuses more on the hazardous domain boundaries.
Further, a stopping condition is constructed based on the convergence of the
search algorithm. Ablation and comparative experiments verify the
effectiveness of the improvements and the superiority of the proposed method.
The experimental results show that ITEM identifies hazardous domains well in
both low- and high-dimensional cases, regardless of the shape of the hazardous
domains, indicating its generality and potential for the safety evaluation of
ADSs.
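The improved UCB calculation mentioned above builds on the standard UCB1 score used for node selection in MCTS. A minimal sketch of that baseline score (the boundary-focused modification is not specified in the abstract, so only the standard form is shown):

```python
import math

def ucb1(total_reward, visits, parent_visits, c=math.sqrt(2)):
    """Standard UCB1 score used to choose which subspace node to expand next."""
    if visits == 0:
        return float("inf")  # always explore unvisited subspaces first
    exploitation = total_reward / visits
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    return exploitation + exploration
```

A boundary-focused variant would add a term rewarding nodes whose samples straddle the hazardous/safe boundary; the exploration constant `c` here is the common default, not a value from the paper.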
|
2501.11926
|
Multi-Modal Variable-Rate CSI Reconstruction for FDD Massive MIMO
Systems
|
cs.IT eess.SP math.IT
|
In frequency division duplex (FDD) systems, acquiring channel state
information (CSI) at the base station (BS) traditionally relies on limited
feedback from mobile terminals (MTs). However, the accuracy of channel
reconstruction from feedback CSI is inherently constrained by the
rate-distortion trade-off. To overcome this limitation, we propose a
multi-modal channel reconstruction framework that leverages auxiliary data,
such as RGB images or uplink CSI, collected at the BS. By integrating
contextual information from these modalities, the framework mitigates CSI
distortions caused by noise, compression, and quantization. At its core, the
framework utilizes an autoencoder network capable of generating variable-length
CSI, tailored for rate-adaptive multi-modal channel reconstruction. By
augmenting the foundational autoencoder network using a transfer learning-based
multi-modal fusion strategy, we enable accurate channel reconstruction in both
single-modal and multi-modal scenarios. To train and evaluate the network under
diverse and realistic wireless conditions, we construct a synthetic dataset
that pairs wireless channel data with sensor data through 3D modeling and ray
tracing. Simulation results demonstrate that the proposed framework achieves
near-optimal beamforming gains in 5G New Radio (5G NR)-compliant scenarios,
highlighting the potential of sensor data integration to improve CSI
reconstruction accuracy.
|
2501.11927
|
A Lightweight and Interpretable Deepfakes Detection Framework
|
cs.CV cs.AI
|
The recent realistic creation and dissemination of so-called deepfakes poses
a serious threat to social life, civil order, and the law. Celebrity
defamation, election manipulation, and deepfakes presented as evidence in
courts of law are a few potential consequences. The availability of
open-source trained models based on modern frameworks such as PyTorch and
TensorFlow, video manipulation apps such as FaceApp and REFACE, and
inexpensive computing infrastructure has eased the creation of deepfakes. Most
existing detectors focus on detecting either face-swap, lip-sync, or
puppet-master deepfakes, while a unified framework to detect all three types
has hardly been explored. This paper presents a unified framework that
exploits a proposed feature fusion of hybrid facial landmarks and our novel
heart-rate features to detect all types of deepfakes. We propose novel
heart-rate features and fuse them with the facial landmark features to better
capture the facial artifacts of fake videos and the natural variations present
in original videos. We use these features to train a lightweight XGBoost
classifier to distinguish deepfake from bona fide videos. We evaluated the
framework on the World Leaders Dataset (WLDR), which contains all three types
of deepfakes. Experimental results show that the proposed framework offers
superior detection performance over comparative deepfake detection methods. A
performance comparison against LSTM-FCN, a representative deep learning model,
shows that the proposed model achieves comparable results while being more
interpretable.
|
2501.11929
|
ALoFTRAG: Automatic Local Fine Tuning for Retrieval Augmented Generation
|
cs.LG
|
Retrieval Augmented Generation (RAG) systems have been shown to improve the
accuracy of Large Language Model (LLM) outputs. However, they often achieve
low accuracy when applied to new data domains.
We introduce the Automatic Local Fine Tuning of Retrieval Augmented
Generation models (ALoFTRAG) framework, designed to improve the accuracy of RAG
systems on a given domain by training LLMs without manually labeled data or
using larger teacher models.
By generating and filtering synthetic training data and performing LoRA
fine-tuning, ALoFTRAG improves citation and answer accuracy across 20 datasets
in 26 languages by, on average, 8.3% and 3.0% respectively.
Our results demonstrate that ALoFTRAG offers a practical, cost-effective, and
data-secure solution for improving RAG accuracy, making it particularly
applicable to sensitive domains such as healthcare and finance.
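The generate-and-filter step of such a pipeline can be sketched as follows; `generate_qa` and `answer` are hypothetical callables standing in for local LLM calls, and the exact consistency filter is an illustrative assumption, not ALoFTRAG's actual criterion:

```python
def filter_synthetic_pairs(docs, generate_qa, answer):
    """Keep only synthetic (question, answer) pairs that the model can
    re-derive from the source passage: a simple consistency filter."""
    kept = []
    for doc in docs:
        question, reference = generate_qa(doc)      # synthesize a QA pair
        prediction = answer(question, context=doc)  # re-answer with context
        if prediction.strip().lower() == reference.strip().lower():
            kept.append((doc, question, reference))  # consistent -> keep
    return kept
```

The surviving triples would then serve as LoRA fine-tuning data, with the document acting as the retrieved context during training.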
|
2501.11930
|
Nocturnal eye inspired liquid to gas phase change soft actuator with
Laser-Induced-Graphene: enhanced environmental light harvesting and
photothermal conversion
|
cs.RO
|
Robotic systems' mobility is constrained by power sources and wiring. While
pneumatic actuators remain tethered to air supplies, we developed a new
actuator utilizing light energy. Inspired by nocturnal animals' eyes, we
designed a bilayer soft actuator incorporating Laser-Induced Graphene (LIG) on
the inner surface of a silicone layer. This design maintains silicone's
transparency and flexibility while achieving 54% faster response time compared
to conventional actuators through enhanced photothermal conversion.
|
2501.11931
|
Construction of Simultaneously Good Polar Codes and Polar Lattices
|
cs.IT math.IT
|
In this work, we investigate the simultaneous goodness of polar codes and
polar lattices. Simultaneous goodness means that a lattice or code is optimal
for both channel coding and source coding at the same time. The existence of
such lattices was previously proven using random lattice ensembles; our work
provides an explicit construction based on the polarization technique.
|
2501.11935
|
Web vs. LLMs: An Empirical Study of Learning Behaviors of CS2 Students
|
cs.HC cs.AI
|
LLMs such as ChatGPT have been widely adopted by students in higher education
as tools for learning programming and related concepts. However, it remains
unclear how effectively students learn with LLMs and what strategies they use.
Since the majority of students' experience with online self-learning has come
through search engines such as Google, evaluating AI tools in this context can
help address these gaps. In this mixed-methods research, we conducted an
exploratory within-subjects study of how CS2 students learn programming
concepts using both LLMs and traditional online methods, such as educational
websites and videos, to examine how students approach learning within and
across both scenarios. We discovered that students found it easier to learn a
more difficult concept using traditional methods than using ChatGPT. We also
found that students ask fewer follow-up questions and use more keyword-based
queries with search engines, whereas their prompts to LLMs tend to ask
explicitly for information.
|
2501.11937
|
MeshONet: A Generalizable and Efficient Operator Learning Method for
Structured Mesh Generation
|
cs.LG cs.AI
|
Mesh generation plays a crucial role in scientific computing. Traditional
mesh generation methods, such as TFI and PDE-based methods, often struggle to
achieve a balance between efficiency and mesh quality. To address this
challenge, physics-informed intelligent learning methods have recently emerged,
significantly improving generation efficiency while maintaining high mesh
quality. However, physics-informed methods fail to generalize when applied to
previously unseen geometries, as even small changes in the boundary shape
necessitate burdensome retraining to adapt to new geometric variations. In this
paper, we introduce MeshONet, the first generalizable intelligent learning
method for structured mesh generation. The method transforms the mesh
generation task into an operator learning problem with multiple input and
solution functions. To effectively overcome the multivariable mapping
restriction of operator learning methods, we propose a dual-branch,
shared-trunk architecture to approximate the mapping between function spaces
based on input-output pairs. Experimental results show that MeshONet achieves a
speedup of up to four orders of magnitude in generation efficiency over
traditional methods. It also enables generalization to different geometries
without retraining, greatly enhancing the practicality of intelligent methods.
|
2501.11938
|
Navigating Robot Swarm Through a Virtual Tube with Flow-Adaptive
Distribution Control
|
cs.RO cs.SY eess.SY
|
With the rapid development of robot swarm technology and its diverse
applications, navigating robot swarms through complex environments has emerged
as a critical research direction. To ensure safe navigation and avoid
potential collisions with obstacles, the concept of virtual tubes has been
introduced to define safe, navigable regions. However, current control methods
for virtual tubes face congestion issues, particularly in narrow tubes with
low throughput. To address these challenges, we first introduce the concepts
of virtual tube area and flow capacity, and develop a new evolution model for
the spatial density function. Next, we propose a novel control method that
combines a modified artificial potential field (APF) for swarm navigation
with density feedback control for distribution regulation, under which a
saturated velocity command is designed. We then generate a global velocity
field that not only ensures collision-free navigation through the virtual
tube, but also achieves local input-to-state stability (LISS) for density
tracking errors, both of which are rigorously proven. Finally, numerical
simulations and realistic applications validate the effectiveness and
advantages of the proposed method in managing robot swarms within narrow
virtual tubes.
|
2501.11945
|
Learning to Hop for a Single-Legged Robot with Parallel Mechanism
|
cs.RO
|
This work applies reinforcement learning to improve the performance of a
highly dynamic hopping system with a parallel mechanism. Unlike serial
mechanisms, parallel mechanisms cannot be accurately simulated due to the
complexity of their kinematic constraints and closed-loop structures.
Moreover, learning to hop suffers from a prolonged aerial phase and sparse
rewards. To address these challenges, we propose a learning framework that
encodes long-history feedback to account for the under-actuation brought about
by the prolonged aerial phase. The framework also introduces a simplified
serial configuration of the parallel design to avoid directly simulating the
parallel structure during training, together with a torque-level conversion
that handles the parallel-serial mapping and mitigates the sim-to-real gap.
Simulation and hardware experiments validate this framework.
|
2501.11949
|
GLAM: Global-Local Variation Awareness in Mamba-based World Model
|
cs.LG
|
Mimicking the real interaction trajectory in the inference of the world model
has been shown to improve the sample efficiency of model-based reinforcement
learning (MBRL) algorithms. Many methods directly use known state sequences for
reasoning. However, this approach fails to enhance the quality of reasoning by
capturing the subtle variation between states. Much like how humans infer
trends in event development from this variation, in this work, we introduce
Global-Local variation Awareness Mamba-based world model (GLAM) that improves
reasoning quality by perceiving and predicting variation between states. GLAM
comprises two Mamba-based parallel reasoning modules, GMamba and LMamba, which
focus on perceiving variation from global and local perspectives, respectively,
during the reasoning process. GMamba focuses on identifying patterns of
variation between states in the input sequence and leverages these patterns to
enhance the prediction of future state variation. LMamba emphasizes reasoning
about unknown information, such as rewards, termination signals, and visual
representations, by perceiving variation in adjacent states. By integrating the
strengths of the two modules, GLAM accounts for higher-value variation in
environmental changes, providing the agent with more efficient
imagination-based training. We demonstrate that our method outperforms existing
methods in normalized human scores on the Atari 100k benchmark.
|
2501.11951
|
HERITAGE: An End-to-End Web Platform for Processing Korean Historical
Documents in Hanja
|
cs.CL
|
While Korean historical documents are invaluable cultural heritage,
understanding those documents requires in-depth Hanja expertise. Hanja is an
ancient language used in Korea before the 20th century, whose characters were
borrowed from old Chinese but had evolved in Korea for centuries. Modern
Koreans and Chinese cannot understand Korean historical documents without
substantial additional help, and while previous efforts have produced some
Korean and English translations, this requires in-depth expertise, and so most
of the documents are not translated into any modern language. To address this
gap, we present HERITAGE, the first open-source Hanja NLP toolkit to assist in
understanding and translating the unexplored Korean historical documents
written in Hanja. HERITAGE is a web-based platform providing model predictions
of three critical tasks in historical document understanding via Hanja language
models: punctuation restoration, named entity recognition, and machine
translation (MT). HERITAGE also provides an interactive glossary that gives
the character-level reading of Hanja characters in modern Korean, as well as
character-level English definitions. HERITAGE serves two purposes. First,
anyone interested in these documents can get a general understanding from the
model predictions and the interactive glossary, especially the MT outputs in
Korean and English. Second, since the model outputs are not perfect, Hanja
experts can revise them to produce better annotations and translations. This
would boost translation efficiency and could eventually lead to most of the
historical documents being translated into modern languages, lowering the
barrier to these unexplored Korean historical documents.
|
2501.11953
|
Proverbs Run in Pairs: Evaluating Proverb Translation Capability of
Large Language Model
|
cs.CL
|
Despite the remarkable performance of modern systems, machine translation
(MT) research remains underexplored in translating cultural elements of
language, such as idioms, proverbs, and colloquial expressions. This paper
investigates
the capability of state-of-the-art neural machine translation (NMT) and large
language models (LLMs) in translating proverbs, which are deeply rooted in
cultural contexts. We construct a translation dataset of standalone proverbs
and proverbs in conversation for four language pairs. Our experiments show that
the studied models can achieve good translation between languages with similar
cultural backgrounds, and LLMs generally outperform NMT models in proverb
translation. Furthermore, we find that current automatic evaluation metrics
such as BLEU, CHRF++ and COMET are inadequate for reliably assessing the
quality of proverb translation, highlighting the need for more culturally aware
evaluation metrics.
|
2501.11959
|
Noise-Resilient Point-wise Anomaly Detection in Time Series Using Weak
Segment Labels
|
cs.LG
|
Detecting anomalies in temporal data has gained significant attention across
various real-world applications, aiming to identify unusual events and mitigate
potential hazards. In practice, situations often involve a mix of segment-level
labels (detected abnormal events with segments of time points) and unlabeled
data (undetected events), while the ideal algorithmic outcome should be
point-level predictions. This large gap in label information between the
training data and the prediction targets makes the task challenging. In this study, we
formulate the above imperfect information as noisy labels and propose
NRdetector, a noise-resilient framework that incorporates confidence-based
sample selection, robust segment-level learning, and data-centric point-level
detection for multivariate time series anomaly detection. Particularly, to
bridge the information gap between noisy segment-level labels and missing
point-level labels, we develop a novel loss function that can effectively
mitigate the label noise and consider the temporal features. It encourages the
smoothness of consecutive points and the separability of points from segments
with different labels. Extensive experiments on real-world multivariate time
series datasets with 11 different evaluation metrics demonstrate that
NRdetector consistently achieves robust results across multiple real-world
datasets, outperforming various baselines adapted to operate in our setting.
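The loss described above, which encourages smoothness of consecutive points and separability of points from differently labeled segments, can be sketched in pure Python; the weighting, margin, and exact terms here are illustrative assumptions, not NRdetector's actual loss:

```python
def smooth_separate_loss(scores, labels, margin=1.0, alpha=0.5):
    """scores: per-point anomaly scores; labels: segment label per point
    (1 = abnormal segment, 0 = normal/unlabeled).
    Combines a temporal-smoothness term with a hinge separability term."""
    # smoothness: penalize jumps between consecutive points
    smooth = sum((scores[i] - scores[i - 1]) ** 2 for i in range(1, len(scores)))
    # separability: abnormal-segment points should exceed normal ones by a margin
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    sep = sum(max(0.0, margin - (p - n)) for p in pos for n in neg)
    return alpha * smooth + (1 - alpha) * sep
```

In a real training loop this would be a differentiable tensor computation; the plain-Python form only shows the shape of the two terms.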
|
2501.11960
|
TAD-Bench: A Comprehensive Benchmark for Embedding-Based Text Anomaly
Detection
|
cs.CL cs.AI
|
Text anomaly detection is crucial for identifying spam, misinformation, and
offensive language in natural language processing tasks. Despite the growing
adoption of embedding-based methods, their effectiveness and generalizability
across diverse application scenarios remain under-explored. To address this, we
present TAD-Bench, a comprehensive benchmark designed to systematically
evaluate embedding-based approaches for text anomaly detection. TAD-Bench
integrates multiple datasets spanning different domains, combining
state-of-the-art embeddings from large language models with a variety of
anomaly detection algorithms. Through extensive experiments, we analyze the
interplay between embeddings and detection methods, uncovering their strengths,
weaknesses, and applicability to different tasks. These findings offer new
perspectives on building more robust, efficient, and generalizable anomaly
detection systems for real-world applications.
|
2501.11963
|
A Contrastive Framework with User, Item and Review Alignment for
Recommendation
|
cs.IR
|
Learning effective latent representations for users and items is the
cornerstone of recommender systems. Traditional approaches rely on user-item
interaction data to map users and items into a shared latent space, but the
sparsity of interactions often poses challenges. While leveraging user reviews
could mitigate this sparsity, existing review-aware recommendation models often
exhibit two key limitations. First, they typically rely on reviews as
additional features, but reviews are not universal, with many users and items
lacking them. Second, such approaches do not integrate reviews into the
user-item space, leading to potential divergence or inconsistency among user,
item, and review representations. To overcome these limitations, our work
introduces a Review-centric Contrastive Alignment Framework for Recommendation
(ReCAFR), which incorporates reviews into the core learning process, ensuring
alignment among user, item, and review representations within a unified space.
Specifically, we leverage two self-supervised contrastive strategies that not
only exploit review-based augmentation to alleviate sparsity, but also align
the tripartite representations to enhance robustness. Empirical studies on
public benchmark datasets demonstrate the effectiveness and robustness of
ReCAFR.
|
2501.11967
|
A Hybrid Attention Framework for Fake News Detection with Large Language
Models
|
cs.CL
|
With the rapid growth of online information, the spread of fake news has
become a serious social challenge. In this study, we propose a novel detection
framework based on Large Language Models (LLMs) to identify and classify fake
news by integrating textual statistical features and deep semantic features.
Our approach utilizes the contextual understanding capability of the large
language model for text analysis and introduces a hybrid attention mechanism to
focus on feature combinations that are particularly important for fake news
identification. Extensive experiments on the WELFake news dataset show that our
model significantly outperforms existing methods, with a 1.5% improvement in
F1 score. In addition, we assess the interpretability of the model through
attention heat maps and SHAP values, providing actionable insights for content
review strategies. Our framework provides a scalable and efficient solution to
deal with the spread of fake news and helps build a more reliable online
information ecosystem.
|
2501.11968
|
Bridging Visualization and Optimization: Multimodal Large Language
Models on Graph-Structured Combinatorial Optimization
|
cs.AI cs.LG
|
Graph-structured combinatorial challenges are inherently difficult due to
their nonlinear and intricate nature, often rendering traditional computational
methods ineffective or expensive. However, these challenges can be more
naturally tackled by humans through visual representations that harness our
innate ability for spatial reasoning. In this study, we propose transforming
graphs into images to preserve their higher-order structural features
accurately, revolutionizing the representation used in solving graph-structured
combinatorial tasks. This approach allows machines to emulate human-like
processing in addressing complex combinatorial challenges. By combining the
innovative paradigm powered by multimodal large language models (MLLMs) with
simple search techniques, we aim to develop a novel and effective framework for
tackling such problems. Our investigation into MLLMs spanned a variety of
graph-based tasks, from combinatorial problems like influence maximization to
sequential decision-making in network dismantling, as well as addressing six
fundamental graph-related issues. Our findings demonstrate that MLLMs exhibit
exceptional spatial intelligence and a distinctive capability for handling
these problems, significantly advancing the potential for machines to
comprehend and analyze graph-structured data with a depth and intuition akin to
human cognition. These results also imply that integrating MLLMs with simple
optimization strategies could form a novel and efficient approach for
navigating graph-structured combinatorial challenges without complex
derivations or computationally demanding training and fine-tuning.
|
2501.11971
|
SMamba: Sparse Mamba for Event-based Object Detection
|
cs.CV
|
Transformer-based methods have achieved remarkable performance in event-based
object detection, owing to their global modeling ability. However, they
neglect the influence of non-event and noisy regions, processing them
uniformly and incurring high computational overhead. To mitigate the
computational cost, some researchers propose window-attention-based
sparsification strategies that discard unimportant regions, which sacrifices
the global modeling ability and results in suboptimal performance. To achieve
a better trade-off between accuracy and
efficiency, we propose Sparse Mamba (SMamba), which performs adaptive
sparsification to reduce computational effort while maintaining global modeling
capability. Specifically, a Spatio-Temporal Continuity Assessment module is
proposed to measure the information content of tokens and discard uninformative
ones by leveraging the spatiotemporal distribution differences between activity
and noise events. Based on the assessment results, an Information-Prioritized
Local Scan strategy is designed to shorten the scan distance between
high-information tokens, facilitating interactions among them in the spatial
dimension. Furthermore, to extend the global interaction from 2D space to 3D
representations, a Global Channel Interaction module is proposed to aggregate
channel information from a global spatial perspective. Results on three
datasets (Gen1, 1Mpx, and eTram) demonstrate that our model outperforms other
methods in both performance and efficiency.
|
2501.11972
|
"FRAME: Forward Recursive Adaptive Model Extraction -- A Technique for
Advance Feature Selection"
|
cs.LG
|
Feature selection is a crucial preprocessing step in machine learning,
impacting model performance, interpretability, and computational efficiency.
This study introduces a novel hybrid approach, the Forward Recursive Adaptive
Model Extraction Technique (FRAME), which combines Forward Selection and
Recursive Feature Elimination (RFE) to enhance feature selection across diverse
datasets. FRAME integrates the strengths of both methods, balancing exploration
and exploitation of features to optimize selection. A comprehensive evaluation
of FRAME was conducted against traditional methods such as SelectKBest and
Lasso Regression, using high-dimensional, noisy, and heterogeneous datasets.
The results demonstrate that FRAME consistently delivers superior predictive
performance based on downstream machine learning evaluation metrics. It
effectively reduces dimensionality while maintaining robust model performance,
making it particularly valuable for applications requiring interpretable and
accurate predictions, such as biomedical diagnostics. This study highlights the
importance of assessing feature selection methods across varied datasets to
ensure their robustness and generalizability. The findings suggest that FRAME
has significant potential for further enhancement, particularly through
integration with deep learning architectures for adaptive and real-time feature
selection in dynamic environments. By advancing feature selection
methodologies, FRAME offers a practical and effective solution to improve
machine learning applications across multiple domains.
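The forward-selection half of such a hybrid can be sketched as a greedy loop; `score` is a hypothetical user-supplied evaluation function (e.g., cross-validated accuracy of a downstream model), and the early-stop point is where RFE-style pruning would take over in the full method:

```python
def forward_select(features, score, k):
    """Greedily add the feature that most improves score(subset) until k
    features are chosen or no candidate improves the current best score."""
    selected, best = [], float("-inf")
    while len(selected) < k:
        gains = [(score(selected + [f]), f) for f in features if f not in selected]
        if not gains:
            break
        top_score, top_f = max(gains)
        if top_score <= best:
            break  # no improvement: stop early (RFE would prune from here)
        selected.append(top_f)
        best = top_score
    return selected
```

A library implementation would use scikit-learn's `SequentialFeatureSelector` and `RFE`; the stdlib version above only illustrates the greedy mechanics.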
|
2501.11977
|
Leveraging Graph Structures and Large Language Models for End-to-End
Synthetic Task-Oriented Dialogues
|
cs.CL cs.AI
|
Training task-oriented dialogue systems is both costly and time-consuming,
due to the need for high-quality datasets encompassing diverse intents.
Traditional methods depend on extensive human annotation, while recent
advancements leverage large language models (LLMs) to generate synthetic data.
However, these approaches often require custom prompts or code, limiting
accessibility for non-technical users. We introduce GraphTOD, an end-to-end
framework that simplifies the generation of task-oriented dialogues. Users can
create dialogues by specifying transition graphs in JSON format. Our evaluation
demonstrates that GraphTOD generates high-quality dialogues across various
domains, significantly lowering the cost and complexity of dataset creation.
|
2501.11978
|
Weight Distribution of the Weighted Coordinates Poset Block Space and
Singleton Bound
|
cs.IT math.CO math.IT
|
In this paper, we determine the complete weight distribution of the space $
\mathbb{F}_q^N $ endowed with the weighted coordinates poset block metric
(the $(P,w,\pi)$-metric), also known as the $(P,w,\pi)$-space, thereby
obtaining it for the $(P,w)$-space, $(P,\pi)$-space, $\pi$-space, and
$P$-space as special cases. Further, when $P$ is a chain, the resulting space
is called the Niederreiter-Rosenbloom-Tsfasman (NRT) weighted block space, and
when $P$ is hierarchical, it is called the weighted coordinates hierarchical
poset block space. The complete weight distribution of both spaces is deduced
from the main result. Moreover, we define an $I$-ball for an ideal $I$ in $P$
and study its characteristics in the $(P,w,\pi)$-space. We investigate the
relationship between $I$-perfect codes and $t$-perfect codes in the
$(P,w,\pi)$-space. Given an ideal $I$, we investigate how maximum distance
separability (MDS) is related to $I$-perfect codes and $t$-perfect codes in
the $(P,w,\pi)$-space. A duality theorem is derived for an MDS
$(P,w,\pi)$-code when all blocks are of the same length. Finally, the
distribution of codewords among $r$-balls is analyzed for the chain poset
case, when all blocks are of the same length.
|
2501.11979
|
Linear Feedback Control Systems for Iterative Prompt Optimization in
Large Language Models
|
cs.LG
|
Large Language Models (LLMs) have revolutionized various applications by
generating outputs based on given prompts. However, achieving the desired
output requires iterative prompt refinement. This paper presents a novel
approach that draws parallels between the iterative prompt optimization process
in LLMs and feedback control systems. We iteratively refine the prompt by
treating the deviation between the LLM output and the desired result as an
error term until the output criteria are met. This process is akin to a
feedback control system, where the LLM, despite being non-linear and
non-deterministic, is managed using principles from linear feedback control
systems. We explore the application of different types of controllers within
this framework, providing a mathematical foundation for integrating linear
feedback control mechanisms with LLMs.
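The control-loop analogy described above can be sketched as follows; `llm`, `error`, and `refine_prompt` are hypothetical stand-ins for the model call, the deviation measurement, and the feedback correction, not the paper's actual interfaces:

```python
def iterative_prompt_optimization(prompt, llm, error, refine_prompt,
                                  tol=0.1, max_iters=10):
    """Closed-loop prompt refinement: measure the error between the LLM
    output and the desired result, then feed it back into the next prompt."""
    output = None
    for _ in range(max_iters):
        output = llm(prompt)
        e = error(output)  # deviation from the desired result
        if e <= tol:       # output criteria met -> stop
            return prompt, output
        prompt = refine_prompt(prompt, output, e)  # feedback correction
    return prompt, output
```

A proportional-style controller would scale the strength of the correction by `e`; other controller types from the paper's framework would change only `refine_prompt`.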
|
2501.11980
|
A note on the sample complexity of multi-target detection
|
eess.SP cs.IT math.IT
|
This work studies the sample complexity of the multi-target detection (MTD)
problem, which involves recovering a signal from a noisy measurement containing
multiple instances of a target signal in unknown locations, each transformed by
a random group element. This problem is primarily motivated by single-particle
cryo-electron microscopy (cryo-EM), a groundbreaking technology for determining
the structures of biological molecules. We establish upper and lower bounds for
various MTD models in the high-noise regime as a function of the group, the
distribution over the group, and the arrangement of signal occurrences within
the measurement. The lower bounds are established through a reduction to the
related multi-reference alignment problem, while the upper bounds are derived
from explicit recovery algorithms utilizing autocorrelation analysis. These
findings provide fundamental insights into estimation limits in noisy
environments and lay the groundwork for extending this analysis to more complex
applications, such as cryo-EM.
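The upper bounds above are derived from recovery algorithms based on autocorrelation analysis. As an illustration of the basic statistic involved (not the paper's algorithm), a circular empirical autocorrelation can be computed as:

```python
def autocorrelation(y, lag):
    """Empirical circular autocorrelation a_lag = (1/N) * sum_i y[i] * y[i+lag],
    the basic statistic exploited by autocorrelation-based recovery: averaged
    over long measurements, the noise contribution cancels out."""
    n = len(y)
    return sum(y[i] * y[(i + lag) % n] for i in range(n)) / n
```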
|
2501.11992
|
Survey on Hand Gesture Recognition from Visual Input
|
cs.CV cs.AI
|
Hand gesture recognition has become an important research area, driven by the
growing demand for human-computer interaction in fields such as sign language
recognition, virtual and augmented reality, and robotics. Despite the rapid
growth of the field, there are few surveys that comprehensively cover recent
research developments, available solutions, and benchmark datasets. This survey
addresses this gap by examining the latest advancements in hand gesture and 3D
hand pose recognition from various types of camera input data including RGB
images, depth images, and videos from monocular or multiview cameras, examining
the differing methodological requirements of each approach. Furthermore, an
overview of widely used datasets is provided, detailing their main
characteristics and application domains. Finally, open challenges such as
achieving robust recognition in real-world environments, handling occlusions,
ensuring generalization across diverse users, and addressing computational
efficiency for real-time applications are highlighted to guide future research
directions. By synthesizing the objectives, methodologies, and applications of
recent studies, this survey offers valuable insights into current trends,
challenges, and opportunities for future research in human hand gesture
recognition.
|
2501.11993
|
Subcode Ensemble Decoding of Linear Block Codes
|
cs.IT math.IT
|
Low-density parity-check (LDPC) codes together with belief propagation (BP)
decoding yield exceptional error correction capabilities in the large block
length regime. Yet, there remains a gap between BP decoding and maximum
likelihood decoding for short block length LDPC codes. In this context,
ensemble decoding schemes yield both reduced latency and good error rates. In
this paper, we propose subcode ensemble decoding (SCED), which employs an
ensemble of decodings on different subcodes of the code. To ensure that all
codewords are decodable, we use the concept of linear coverings and explore
approaches for sampling suitable ensembles for short block length LDPC codes.
Monte-Carlo simulations conducted for three LDPC codes demonstrate that SCED
improves decoding performance compared to stand-alone decoding and automorphism
ensemble decoding. In particular, in contrast to existing schemes, e.g.,
multiple bases belief propagation and automorphism ensemble decoding, SCED does
not require the NP-complete search for low-weight dual codewords or knowledge
of the automorphism group of the code, which is often unknown.
|
2501.12005
|
A note on the relations between mixture models, maximum-likelihood and
entropic optimal transport
|
stat.ML cs.LG
|
This note aims to demonstrate that performing maximum-likelihood estimation
for a mixture model is equivalent to minimizing over the parameters an optimal
transport problem with entropic regularization. The objective is pedagogical:
we seek to present this already known result in a concise and hopefully simple
manner. We give an illustration with Gaussian mixture models by showing that
the standard EM algorithm is a specific block-coordinate descent on an optimal
transport loss.
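A schematic of the claimed equivalence, in notation of our own choosing (the note's precise statement may differ): with data $x_1,\dots,x_n$, mixture weights $\pi$, and per-point costs $c_{ik} = -\log p_{\theta_k}(x_i)$, consider the entropic objective

```latex
% Sketch of the equivalence (our notation, not the note's).
\min_{W \ge 0,\ \sum_k W_{ik} = 1/n}\;
  \sum_{i,k} W_{ik}\bigl(c_{ik} - \log \pi_k\bigr)
  \;+\; \sum_{i,k} W_{ik} \log W_{ik}.
% Minimising over W row-by-row (a closed-form softmin) yields, up to an
% additive constant, -\tfrac{1}{n}\sum_i \log \sum_k \pi_k\, p_{\theta_k}(x_i),
% i.e. the negative log-likelihood of the mixture. Alternating the W-update
% (E-step: the entropic projection, which recovers the usual responsibilities)
% with the (\pi,\theta)-update (M-step) is block-coordinate descent on this
% entropic optimal-transport loss.
```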
|
2501.12009
|
Ratio Attack on G+G Convoluted Gaussian Signature
|
cs.CR cs.IT math.IT
|
A lattice-based signature, called G+G convoluted Gaussian signature was
proposed in ASIACRYPT 2023 and was proved secure in the quantum random oracle
model. In this paper, we propose a ratio attack on the G+G convoluted Gaussian
signature to recover the secret key. The attack exploits the fact, proved in
this paper, that the secret key can be obtained from the expected value of the
ratio of signatures, which follows a truncated Cauchy distribution. Moreover, we
also compute the number of signatures required to successfully recover the
secret key. Furthermore, we simulate the ratio attack in Sagemath with a few
different parameters as a proof-of-concept of the ratio attack.
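As a toy illustration of the distributional fact underlying the attack (not the attack itself): the ratio of two independent centred Gaussians is Cauchy-distributed, with scale equal to the ratio of the standard deviations, so it has no mean yet well-behaved quantiles. The standard deviations below are arbitrary demo values:

```python
import random
import statistics

random.seed(0)
s1, s2 = 2.0, 1.0  # illustrative std-devs of the two Gaussian variables
ratios = [random.gauss(0, s1) / random.gauss(0, s2) for _ in range(200_000)]

# The ratio is Cauchy with scale s1/s2: heavy tails and no mean,
# but median 0 and P(|X| <= s1/s2) = 1/2.
med = statistics.median(ratios)
frac = sum(abs(r) <= s1 / s2 for r in ratios) / len(ratios)
```

This is why the attack must work with a *truncated* Cauchy: raw averages of the ratios do not converge, whereas truncated or quantile-based statistics do.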
|
2501.12011
|
Reference-free Evaluation Metrics for Text Generation: A Survey
|
cs.CL
|
A number of automatic evaluation metrics have been proposed for natural
language generation systems. The most common approach to automatic evaluation
is the use of a reference-based metric that compares the model's output with
gold-standard references written by humans. However, it is expensive to create
such references, and for some tasks, such as response generation in dialogue,
creating references is not a simple matter. Therefore, various reference-free
metrics have been developed in recent years. In this survey, which intends to
cover the full breadth of all NLG tasks, we investigate the most commonly used
approaches, their application, and their other uses beyond evaluating models.
The survey concludes by highlighting some promising directions for future
research.
|
2501.12012
|
TabularARGN: A Flexible and Efficient Auto-Regressive Framework for
Generating High-Fidelity Synthetic Data
|
cs.LG
|
Synthetic data generation for tabular datasets must balance fidelity,
efficiency, and versatility to meet the demands of real-world applications. We
introduce the Tabular Auto-Regressive Generative Network (TabularARGN), a
flexible framework designed to handle mixed-type, multivariate, and sequential
datasets. By training on all possible conditional probabilities, TabularARGN
supports advanced features such as fairness-aware generation, imputation, and
conditional generation on any subset of columns. The framework achieves
state-of-the-art synthetic data quality while significantly reducing training
and inference times, making it ideal for large-scale datasets with diverse
structures. Evaluated across established benchmarks, including realistic
datasets with complex relationships, TabularARGN demonstrates its capability to
synthesize high-quality data efficiently. By unifying flexibility and
performance, this framework paves the way for practical synthetic data
generation across industries.
|
2501.12015
|
Full Proportional Justified Representation
|
cs.GT cs.AI
|
In multiwinner approval voting, forming a committee that proportionally
represents voters' approval ballots is an essential task. The notion of
justified representation (JR) demands that any large "cohesive" group of voters
should be proportionally "represented". The "cohesiveness" is defined in
different ways; two common ways are the following: (C1) demands that the group
unanimously approves a set of candidates proportional to its size, while (C2)
requires each member to approve at least a fixed fraction of such a set.
Similarly, "representation" have been considered in different ways: (R1) the
coalition's collective utility from the winning set exceeds that of any
proportionally sized alternative, and (R2) for any proportionally sized
alternative, at least one member of the coalition derives less utility from it
than from the winning set.
Three of the four possible combinations have been extensively studied:
(C1)-(R1) defines Proportional Justified Representation (PJR), (C1)-(R2)
defines Extended Justified Representation (EJR), (C2)-(R2) defines Full
Justified Representation (FJR). All three have merits, but also drawbacks. PJR
is the weakest notion, and perhaps not sufficiently demanding; EJR may not be
compatible with perfect representation; and it is open whether a committee
satisfying FJR can be found efficiently.
We study the combination (C2)-(R1), which we call Full Proportional Justified
Representation (FPJR). We investigate FPJR's properties and find that it shares
PJR's advantages over EJR: several proportionality axioms (e.g. priceability,
perfect representation) imply FPJR and PJR but not EJR. We also find that
efficient rules like the greedy Monroe rule and the method of equal shares
satisfy FPJR, matching a key advantage of EJR over FJR. However, the
Proportional Approval Voting (PAV) rule may violate FPJR, so neither EJR nor
FPJR implies the other.
|
2501.12016
|
Are Traditional Deep Learning Model Approaches as Effective as a
Retinal-Specific Foundation Model for Ocular and Systemic Disease Detection?
|
cs.CV cs.LG
|
Background: RETFound, a self-supervised, retina-specific foundation model
(FM), showed potential in downstream applications. However, its comparative
performance with traditional deep learning (DL) models remains incompletely
understood. This study aimed to evaluate RETFound against three
ImageNet-pretrained supervised DL models (ResNet50, ViT-base, SwinV2) in
detecting ocular and systemic diseases.
Methods: We fine-tuned/trained RETFound and three DL models on full datasets,
50%, 20%, and fixed sample sizes (400, 200, and 100 images, with half
comprising disease cases; for each DR severity class, 100 and 50 cases were
used).
Fine-tuned models were tested internally using the SEED (53,090 images) and
APTOS-2019 (3,672 images) datasets and externally validated on population-based
(BES, CIEMS, SP2, UKBB) and open-source datasets (ODIR-5k, PAPILA, GAMMA,
IDRiD, MESSIDOR-2). Model performance was compared using area under the
receiver operating characteristic curve (AUC) and Z-tests with Bonferroni
correction (P<0.05/3).
Interpretation: Traditional DL models are mostly comparable to RETFound for
ocular disease detection with large datasets. However, RETFound is superior in
systemic disease detection with smaller datasets. These findings offer valuable
insights into the respective merits and limitations of traditional models and
FMs.
|
2501.12020
|
On the "Illusion" of Gender Bias in Face Recognition: Explaining the
Fairness Issue Through Non-demographic Attributes
|
cs.CV
|
Face recognition systems (FRS) exhibit significant accuracy differences based
on the user's gender. Since such a gender gap reduces the trustworthiness of
FRS, more recent efforts have tried to find the causes. However, these studies
make use of manually selected, correlated, and small-sized sets of facial
features to support their claims. In this work, we analyse gender bias in face
recognition by successfully extending the search domain to decorrelated
combinations of 40 non-demographic facial characteristics. First, we propose a
toolchain to effectively decorrelate and aggregate facial attributes to enable
a less-biased gender analysis on large-scale data. Second, we introduce two new
fairness metrics to measure fairness with and without context. Based on these
grounds, we thirdly present a novel unsupervised algorithm able to reliably
identify attribute combinations that lead to vanishing bias when used as filter
predicates for balanced testing datasets. The experiments show that the gender
gap vanishes when images of male and female subjects share specific attributes,
clearly indicating that the issue is not a question of biology but of the
social definition of appearance. These findings could reshape our understanding
of fairness in face biometrics and provide insights into FRS, helping to
address gender bias issues.
|
2501.12022
|
Foreign object segmentation in chest x-rays through anatomy-guided shape
insertion
|
cs.CV
|
In this paper, we tackle the challenge of instance segmentation for foreign
objects in chest radiographs, commonly seen in postoperative follow-ups with
stents, pacemakers, or ingested objects in children. The diversity of foreign
objects complicates dense annotation, as reflected in the insufficiency of existing
datasets. To address this, we propose the simple generation of synthetic data
through (1) insertion of arbitrary shapes (lines, polygons, ellipses) with
varying contrasts and opacities, and (2) cut-paste augmentations from a small
set of semi-automatically extracted labels. These insertions are guided by
anatomy labels to ensure realistic placements, such as stents appearing only in
relevant vessels. Our approach enables networks to segment complex structures
with minimal manually labeled data. Notably, it achieves performance comparable
to fully supervised models while using 93\% fewer manual annotations.
|
2501.12023
|
Comparative Analysis of Pre-trained Deep Learning Models and DINOv2 for
Cushing's Syndrome Diagnosis in Facial Analysis
|
cs.LG cs.CV eess.IV
|
Cushing's syndrome is a condition caused by excessive glucocorticoid
secretion from the adrenal cortex, often manifesting with moon facies and
plethora, making facial data crucial for diagnosis. Previous studies have used
pre-trained convolutional neural networks (CNNs) for diagnosing Cushing's
syndrome using frontal facial images. However, CNNs are better at capturing
local features, while Cushing's syndrome often presents with global facial
features. Transformer-based models like ViT and SWIN, which utilize
self-attention mechanisms, can better capture long-range dependencies and
global features. Recently, DINOv2, a foundation model based on visual
Transformers, has gained interest. This study compares the performance of
various pre-trained models, including CNNs, Transformer-based models, and
DINOv2, in diagnosing Cushing's syndrome. We also analyze gender bias and the
impact of freezing mechanisms on DINOv2. Our results show that
Transformer-based models and DINOv2 outperformed CNNs, with ViT achieving the
highest F1 score of 85.74%. Both the pre-trained model and DINOv2 had higher
accuracy for female samples. DINOv2 also showed improved performance when
freezing parameters. In conclusion, Transformer-based models and DINOv2 are
effective for Cushing's syndrome classification.
|
2501.12025
|
Low-Cost 3D printed, Biocompatible Ionic Polymer Membranes for Soft
Actuators
|
cond-mat.soft cs.RO
|
Ionic polymer actuators, in essence, consist of ion exchange polymers
sandwiched between layers of electrodes. They have recently gained recognition
as promising candidates for soft actuators due to their lightweight nature,
noise-free operation, and low-driving voltages. However, the materials
traditionally utilized to develop them are often not human/environmentally
friendly. Thus, to address this issue, researchers have been focusing on
developing biocompatible versions of this actuator. Despite this, such
actuators still face challenges in achieving high performance, in payload
capacity, bending capabilities, and response time. In this paper, we present a
biocompatible ionic polymer actuator whose membrane is fully 3D printed
utilizing a direct ink writing method. The structure of the printed membranes
consists of biodegradable ionic fluid encapsulated within layers of activated
carbon polymers. From the microscopic observations of its structure, we
confirmed that the ionic polymer is well encapsulated. The actuators can
achieve a bending performance of up to 124$^\circ$ (curvature of 0.82
$\text{cm}^{-1}$), which, to our knowledge, is the highest curvature attained
by any bending ionic polymer actuator to date. It can operate comfortably up to
a 2 Hz driving frequency and can achieve blocked forces of up to 0.76 mN. Our
results showcase a promising, high-performing biocompatible ionic polymer
actuator, whose membrane can be easily manufactured in a single step using a
standard FDM 3D printer. This approach paves the way for creating customized
designs for functional soft robotic applications, including human-interactive
devices, in the near future.
|
2501.12030
|
Advancing Earth Observation: A Survey on AI-Powered Image Processing in
Satellites
|
cs.LG cs.CV
|
Advancements in technology and reductions in its cost have led to
substantial growth in the quality and quantity of imagery captured by Earth
Observation (EO) satellites. This has presented a challenge to the efficacy of
the traditional workflow of transmitting this imagery to Earth for processing.
An approach to addressing this issue is to use pre-trained artificial
intelligence models to process images on-board the satellite, but this is
difficult given the constraints within a satellite's environment. This paper
provides an up-to-date and thorough review of research related to image
processing on-board Earth observation satellites. The significant constraints
are detailed along with the latest strategies to mitigate them.
|
2501.12032
|
Multi-Tenant SmartNICs for In-Network Preprocessing of Recommender
Systems
|
cs.AR cs.DC cs.LG
|
Keeping ML-based recommender models up-to-date as data drifts and evolves is
essential to maintain accuracy. As a result, online data preprocessing plays an
increasingly important role in serving recommender systems. Existing solutions
employ multiple CPU workers to saturate the input bandwidth of a single
training node. Such an approach results in high deployment costs and energy
consumption. For instance, a recent report from industrial deployments shows
that data storage and ingestion pipelines can account for over 60\% of the
power consumption in a recommender system. In this paper, we tackle the issue
from a hardware perspective by introducing Piper, a flexible and
network-attached accelerator that executes data loading and preprocessing
pipelines in a streaming fashion. As part of the design, we define MiniPipe,
the smallest pipeline unit enabling multi-pipeline implementation by executing
various data preprocessing tasks across the single board, giving Piper the
ability to be reconfigured at runtime. Our results, using publicly released
commercial pipelines, show that Piper, prototyped on a power-efficient FPGA,
achieves a 39$\sim$105$\times$ speedup over a server-grade, 128-core CPU and
3$\sim$17$\times$ speedup over GPUs like RTX 3090 and A100 in multiple
pipelines. The experimental analysis demonstrates that Piper provides
advantages in both latency and energy efficiency for preprocessing tasks in
recommender systems, providing an alternative design point for systems that
today are in very high demand.
|
2501.12033
|
Harnessing Generative Pre-Trained Transformer for Datacenter Packet
Trace Generation
|
cs.NI cs.AI
|
Today, the rapid growth of applications reliant on datacenters calls for new
advancements to meet the increasing traffic and computational demands. Traffic
traces from datacenters are essential for further development and optimization
of future datacenters. However, traces are rarely released to the public.
Researchers often use simplified mathematical models that lack the depth needed
to recreate intricate traffic patterns and, thus, miss optimization
opportunities found in realistic traffic. In this preliminary work, we
introduce DTG-GPT, a packet-level Datacenter Traffic Generator (DTG), based on
the generative pre-trained transformer (GPT) architecture used by many
state-of-the-art large language models. We train our model on a small set of
available traffic traces from different domains and offer a simple methodology
to evaluate the fidelity of the generated traces to their original
counterparts. We show that DTG-GPT can synthesize novel traces that mimic the
spatiotemporal patterns found in real traffic traces. We further demonstrate
that DTG-GPT can generate traces for networks of different scales while
maintaining fidelity. Our findings indicate the potential that, in the future,
similar models to DTG-GPT will allow datacenter operators to release traffic
information to the research community via trained GPT models.
|
2501.12040
|
Select2Drive: Pragmatic Communications for Real-Time Collaborative
Autonomous Driving
|
cs.CE
|
Vehicle-to-Everything communications-assisted Autonomous Driving (V2X-AD) has
witnessed remarkable advancements in recent years, with pragmatic
communications (PragComm) emerging as a promising paradigm for real-time
collaboration among vehicles and other agents. Simultaneously, extensive
research has explored the interplay between collaborative perception and
decision-making in end-to-end driving frameworks. In this work, we revisit the
collaborative driving problem and propose the Select2Drive framework to
optimize the utilization of limited computational and communication
resources. Particularly, to mitigate cumulative latency in perception and
decision-making, Select2Drive introduces Distributed Predictive Perception
(DPP) by formulating an active prediction paradigm and simplifies
high-dimensional semantic feature prediction into computation cost-efficient,
motion-aware reconstruction. Given the "less is more" principle that a
broadened perceptual horizon possibly confuses the decision module rather than
contributing to it, Select2Drive utilizes Area-of-Importance-based PragComm
(APC) to prioritize the communications of critical regions, thus boosting both
communication efficiency and decision-making efficacy. Empirical evaluations on
the V2Xverse dataset and CARLA driving simulator demonstrate that Select2Drive
achieves an 11.31% (resp. 7.69%) improvement in offline perception tasks under
limited bandwidth (resp. pose error conditions). Moreover, it delivers up to
14.68% and 31.76% enhancement in closed-loop driving scores and route
completion rates, particularly in scenarios characterized by dense traffic and
high-speed dynamics.
|
2501.12043
|
High-Fidelity Coherent-One-Way QKD Simulation Framework for 6G Networks:
Bridging Theory and Reality
|
quant-ph cs.SY eess.SY
|
Quantum key distribution (QKD) has emerged as a promising solution for
guaranteeing information-theoretic security. Inspired by this, a great deal of
research effort has recently been put into designing and testing QKD systems
as well as articulating preliminary application scenarios. However, due to the
considerably high cost of QKD equipment and the lack of QKD communication
system design tools, wide deployment of such systems and networks is challenging.
Motivated by this, this paper introduces a QKD communication system design
tool. First we articulate key operation elements of the QKD, and explain the
feasibility and applicability of coherent-one-way (COW) QKD solutions. Next, we
focus on documenting the corresponding simulation framework as well as defining
the key performance metrics, i.e., quantum bit error rate (QBER), and secrecy
key rate. To verify the accuracy of the simulation framework, we design and
deploy a real-world QKD setup. We perform extensive experiments for three
deployments of diverse transmission distance in the presence or absence of a
QKD eavesdropper. The results reveal an acceptable match between simulations
and experiments rendering the simulation framework a suitable tool for QKD
communication system design.
|
2501.12046
|
Communication-Efficient and Privacy-Adaptable Mechanism for Federated
Learning
|
cs.LG
|
Training machine learning models on decentralized private data via federated
learning (FL) poses two key challenges: communication efficiency and privacy
protection. In this work, we address these challenges within the trusted
aggregator model by introducing a novel approach called the
Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), achieving both
objectives simultaneously. In particular, CEPAM leverages the rejection-sampled
universal quantizer (RSUQ), a construction of randomized vector quantizer whose
resulting distortion is equivalent to a prescribed noise, such as Gaussian or
Laplace noise, enabling joint differential privacy and compression. Moreover,
we analyze the trade-offs among user privacy, global utility, and transmission
rate of CEPAM by defining appropriate metrics for FL with differential privacy
and compression. Our CEPAM provides the additional benefit of privacy
adaptability, allowing clients and the server to customize privacy protection
based on required accuracy and protection. We assess CEPAM's utility
performance using MNIST dataset, demonstrating that CEPAM surpasses baseline
models in terms of learning accuracy.
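RSUQ and its distortion shaping are specific to this paper; its classical ancestor, the subtractively dithered uniform quantizer, already shows the core mechanism: shared randomness makes the quantization error exactly Uniform(-Δ/2, Δ/2), independent of the input. A minimal sketch (step size and inputs are illustrative):

```python
import random

random.seed(1)
delta = 0.5      # quantizer step size (illustrative)
n = 100_000

errors = []
for _ in range(n):
    x = random.uniform(-10, 10)                 # arbitrary input value
    u = random.uniform(-delta / 2, delta / 2)   # dither shared by both sides
    y = delta * round((x + u) / delta) - u      # quantize, then subtract dither
    errors.append(y - x)

# The reconstruction error is Uniform(-delta/2, delta/2) regardless of x:
mean_e = sum(errors) / n
var_e = sum(e * e for e in errors) / n          # should approach delta**2 / 12
```

RSUQ extends this idea with rejection sampling so that the error matches a prescribed Gaussian or Laplace law, which is what lets CEPAM treat compression noise as differential-privacy noise.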
|
2501.12048
|
Adaptive Class Learning to Screen Diabetic Disorders in Fundus Images of
Eye
|
cs.CV cs.AI
|
The prevalence of ocular illnesses is growing globally, presenting a
substantial public health challenge. Early detection and timely intervention
are crucial for averting visual impairment and enhancing patient prognosis.
This research introduces a new framework called Class Extension with Limited
Data (CELD) to train a classifier to categorize retinal fundus images. The
classifier is initially trained to identify relevant features concerning
Healthy and Diabetic Retinopathy (DR) classes and later fine-tuned to adapt to
the task of classifying the input images into three classes: Healthy, DR, and
Glaucoma. This strategy allows the model to gradually enhance its
classification capabilities, which is beneficial in situations where there are
only a limited number of labeled datasets available. Perturbation methods are
also used to identify the input image characteristics responsible for
influencing the model's decision-making process. We achieve an overall accuracy
of 91% on publicly available datasets.
|
2501.12050
|
Representation Learning with Parameterised Quantum Circuits for
Advancing Speech Emotion Recognition
|
cs.LG cs.SD eess.AS
|
Speech Emotion Recognition (SER) is a complex and challenging task in
human-computer interaction due to the intricate dependencies of features and
the overlapping nature of emotional expressions conveyed through speech.
Although traditional deep learning methods have shown effectiveness, they often
struggle to capture subtle emotional variations and overlapping states. This
paper introduces a hybrid classical-quantum framework that integrates
Parameterised Quantum Circuits (PQCs) with conventional Convolutional Neural
Network (CNN) architectures. By leveraging quantum properties such as
superposition and entanglement, the proposed model enhances feature
representation and captures complex dependencies more effectively than
classical methods. Experimental evaluations conducted on benchmark datasets,
including IEMOCAP, RECOLA, and MSP-Improv, demonstrate that the hybrid model
achieves higher accuracy in both binary and multi-class emotion classification
while significantly reducing the number of trainable parameters. While a few
existing studies have explored the feasibility of using Quantum Circuits to
reduce model complexity, none have successfully shown how they can enhance
accuracy. This study is the first to demonstrate that Quantum Circuits have the
potential to improve the accuracy of SER. The findings highlight the promise of
QML to transform SER, suggesting a promising direction for future research and
practical applications in emotion-aware systems.
|
2501.12051
|
MedS$^3$: Towards Medical Small Language Models with Self-Evolved Slow
Thinking
|
cs.CL
|
Medical language models (MLMs) have become pivotal in advancing medical
natural language processing. However, prior models that rely on pre-training or
supervised fine-tuning often exhibit low data efficiency and limited
practicality in real-world clinical applications. While OpenAI's o1 highlights
test-time scaling in mathematics, attempts to replicate this approach in
medicine typically distill responses from GPT-series models to open-source
models, focusing primarily on multiple-choice tasks. This strategy, though
straightforward, neglects critical concerns like data privacy and realistic
deployment in clinical settings. In this work, we present a deployable,
small-scale medical reasoning system, MedS3, designed for long-chain reasoning
in clinical tasks using a self-evolution paradigm. Starting with a seed dataset
of around 8,000 instances spanning five domains and 16 datasets, we prompt a
base policy model to perform Monte Carlo Tree Search (MCTS) to construct
rule-verifiable reasoning chains. Each reasoning step is assigned an evolution
rollout value, allowing verified trajectories to train the policy model and the
process reward model (PRM). During inference, the policy model generates
multiple responses, and the reward model selects the one with a newly proposed
PRM-guided Vote-Sum (P-VS) strategy. Experiments on eleven evaluation datasets
demonstrate that MedS3 outperforms not only the prior strongest medical model
by 6.59 points, but also 32B-level general reasoning models by 8.71 points. Code and
data are available at https://github.com/pixas/MedSSS.
|
2501.12052
|
Aggrotech: Leveraging Deep Learning for Sustainable Tomato Disease
Management
|
cs.CV cs.LG
|
Tomato crop health plays a critical role in ensuring agricultural
productivity and food security. Timely and accurate detection of diseases
affecting tomato plants is vital for effective disease management. In this
study, we propose a deep learning-based approach for Tomato Leaf Disease
Detection using two well-established convolutional neural networks (CNNs),
namely VGG19 and Inception v3. The experiment is conducted on the Tomato
Villages Dataset, encompassing images of both healthy tomato leaves and leaves
afflicted by various diseases. The VGG19 model is augmented with fully
connected layers, while the Inception v3 model is modified to incorporate a
global average pooling layer and a dense classification layer. Both models are
trained on the prepared dataset, and their performances are evaluated on a
separate test set. This research employs VGG19 and Inception v3 models on the
Tomato Villages dataset (4525 images) for tomato leaf disease detection. The
models' accuracy of 93.93% with dropout layers demonstrates their usefulness
for crop health monitoring. The paper suggests a deep learning-based strategy
that includes normalization, resizing, dataset preparation, and unique model
architectures. During training, VGG19 and Inception v3 serve as feature
extractors, with possible data augmentation and fine-tuning. Metrics like
accuracy, precision, recall, and F1 score are obtained through evaluation on a
test set and offer important insights into the strengths and shortcomings of
the model. The method has the potential for practical use in precision
agriculture and could help detect diseases in tomato crops early on.
|
2501.12053
|
PINNsAgent: Automated PDE Surrogation with Large Language Models
|
cs.CE
|
Solving partial differential equations (PDEs) using neural methods has been a
long-standing scientific and engineering research pursuit. Physics-Informed
Neural Networks (PINNs) have emerged as a promising alternative to traditional
numerical methods for solving PDEs. However, the gap between domain-specific
knowledge and deep learning expertise often limits the practical application of
PINNs. Previous works typically involve manually conducting extensive PINNs
experiments and summarizing heuristic rules for hyperparameter tuning. In this
work, we introduce PINNsAgent, a novel surrogation framework that leverages
large language models (LLMs) and utilizes PINNs as a foundation to bridge the
gap between domain-specific knowledge and deep learning. Specifically,
PINNsAgent integrates (1) Physics-Guided Knowledge Replay (PGKR), which encodes
the essential characteristics of PDEs and their associated best-performing
PINNs configurations into a structured format, enabling efficient knowledge
transfer from solved PDEs to similar problems and (2) Memory Tree Reasoning, a
strategy that effectively explores the search space for optimal PINNs
architectures. By leveraging LLMs and exploration strategies, PINNsAgent
enhances the automation and efficiency of PINNs-based solutions. We evaluate
PINNsAgent on 14 benchmark PDEs, demonstrating its effectiveness in automating
the surrogation process and significantly improving the accuracy of PINNs-based
solutions.
|
2501.12054
|
ORCAst: Operational High-Resolution Current Forecasts
|
cs.CV physics.ao-ph
|
We present ORCAst, a multi-stage, multi-arm network for Operational
high-Resolution Current forecAsts over one week. Producing real-time nowcasts
and forecasts of ocean surface currents is a challenging problem due to
indirect or incomplete information from satellite remote sensing data. Entirely
trained on real satellite data and in situ measurements from drifters, our
model learns to forecast global ocean surface currents using various sources of
ground truth observations in a multi-stage learning procedure. Our multi-arm
encoder-decoder model architecture allows us to first predict sea surface
height and geostrophic currents from larger quantities of nadir and SWOT
altimetry data, before learning to predict ocean surface currents from much
more sparse in situ measurements from drifters. Training our model on specific
regions improves performance. Our model achieves stronger nowcast and forecast
performance in predicting ocean surface currents than various state-of-the-art
methods.
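As context for the geostrophic-current stage described above: geostrophic surface currents follow from sea surface height via geostrophic balance, u = -(g/f) dη/dy, v = (g/f) dη/dx. A minimal numpy sketch on a synthetic height field (the grid spacing, Coriolis parameter, and eddy shape are illustrative assumptions, not values from the paper):

```python
import numpy as np

g, f = 9.81, 1e-4            # gravity (m/s^2) and a mid-latitude Coriolis parameter (1/s)
dx = dy = 10_000.0           # grid spacing in metres (illustrative)

# Toy sea-surface-height field (metres): a single Gaussian eddy.
y, x = np.mgrid[0:50, 0:50] * dx
eta = 0.3 * np.exp(-((x - 2.5e5) ** 2 + (y - 2.5e5) ** 2) / (2 * 5e4 ** 2))

# Geostrophic balance: u = -(g/f) * d(eta)/dy,  v = (g/f) * d(eta)/dx
deta_dy, deta_dx = np.gradient(eta, dy, dx)
u = -(g / f) * deta_dy
v = (g / f) * deta_dx

speed = np.hypot(u, v)
print(f"peak geostrophic speed: {speed.max():.3f} m/s")
```

This only illustrates the physics of the first stage; ORCAst learns the mapping from altimetry data rather than applying the balance analytically.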
|
2501.12057
|
Unified 3D MRI Representations via Sequence-Invariant Contrastive
Learning
|
cs.CV physics.med-ph
|
Self-supervised deep learning has accelerated 2D natural image analysis but
remains difficult to translate into 3D MRI, where data are scarce and
pre-trained 2D backbones cannot capture volumetric context. We present a
sequence-invariant self-supervised framework leveraging quantitative MRI
(qMRI). By simulating multiple MRI contrasts from a single 3D qMRI scan and
enforcing consistent representations across these contrasts, we learn
anatomy-centric rather than sequence-specific features. This yields a robust 3D
encoder that performs strongly across varied tasks and protocols. Experiments
on healthy brain segmentation (IXI), stroke lesion segmentation (ARC), and MRI
denoising show significant gains over baseline SSL approaches, especially in
low-data settings (up to +8.3% Dice, +4.2 dB PSNR). Our model also generalises
effectively to unseen sites, demonstrating potential for more scalable and
clinically reliable volumetric analysis. All code and trained models are
publicly available.
|
2501.12058
|
Fractional Subadditivity of Submodular Functions: Equality Conditions
and Their Applications
|
cs.IT math.IT
|
Submodular functions are known to satisfy various forms of fractional
subadditivity. This work investigates the conditions for equality to hold
exactly or approximately in the fractional subadditivity of submodular
functions. We establish that a small gap in the inequality implies that the
function is close to being modular, and that the gap is zero if and only if the
function is modular. We then present natural implications of these results for
special cases of submodular functions, such as entropy, relative entropy, and
matroid rank. As a consequence, we characterize the necessary and sufficient
conditions for equality to hold in Shearer's lemma, recovering a result of
Ellis \emph{et al.} (2016) as a special case. We leverage our results to
propose a new multivariate mutual information, which generalizes Watanabe's
total correlation (1960), Han's dual total correlation (1978), and Csisz\'ar
and Narayan's shared information (2004), and analyze its properties. Among
these properties, we extend Watanabe's characterization of total correlation as
the maximum correlation over partitions to fractional partitions. When applied
to matrix determinantal inequalities for positive definite matrices, our
results recover the equality conditions of the classical determinantal
inequalities of Hadamard, Sz\'asz, and Fischer as special cases.
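As a concrete instance of the fractional subadditivity discussed above, Shearer's lemma with the cover {1,2}, {2,3}, {1,3} (each with weight 1/2) can be checked numerically on a random joint distribution; this is a generic illustration of the inequality, not the paper's equality analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random joint distribution of three binary variables.
p = rng.random((2, 2, 2))
p /= p.sum()

def H(axes_keep):
    """Shannon entropy (nats) of the marginal on the given axes."""
    drop = tuple(i for i in range(3) if i not in axes_keep)
    q = p.sum(axis=drop).ravel()
    q = q[q > 0]
    return float(-(q * np.log(q)).sum())

# Each element is covered with total weight 1, so Shearer's lemma gives
#   H(X1,X2,X3) <= 1/2 * [H(X1,X2) + H(X2,X3) + H(X1,X3)].
lhs = H((0, 1, 2))
rhs = 0.5 * (H((0, 1)) + H((1, 2)) + H((0, 2)))
print(round(lhs, 4), "<=", round(rhs, 4))
```

Equality would require the distribution to factor appropriately, which is exactly the kind of condition the paper characterizes.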
|
2501.12060
|
GSVC: Efficient Video Representation and Compression Through 2D Gaussian
Splatting
|
cs.CV cs.MM
|
3D Gaussian splats have emerged as a revolutionary, effective, learned
representation for static 3D scenes. In this work, we explore using 2D Gaussian
splats as a new primitive for representing videos. We propose GSVC, an approach
to learning a set of 2D Gaussian splats that can effectively represent and
compress video frames. GSVC incorporates the following techniques: (i) To
exploit temporal redundancy among adjacent frames, which can speed up training
and improve the compression efficiency, we predict the Gaussian splats of a
frame based on its previous frame; (ii) To control the trade-offs between file
size and quality, we remove Gaussian splats with low contribution to the video
quality; (iii) To capture dynamics in videos, we randomly add Gaussian splats
to fit content with large motion or newly-appeared objects; (iv) To handle
significant changes in the scene, we detect key frames based on loss
differences during the learning process. Experiment results show that GSVC
achieves good rate-distortion trade-offs, comparable to state-of-the-art video
codecs such as AV1 and VVC, and a rendering speed of 1500 fps for a 1920x1080
video.
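To make the primitive concrete, a frame can be rendered as a sum of 2D Gaussians, and the next frame's splats can be initialized from the previous frame's to exploit temporal redundancy (technique (i) above). The sketch below uses isotropic grayscale splats and random parameters purely for illustration; GSVC's actual splat parametrization, optimization, and codec details are in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, N = 32, 32, 50

# Random 2D Gaussian splats: position, isotropic scale, grayscale weight.
pos = rng.random((N, 2)) * [H, W]
scale = rng.uniform(1.0, 3.0, N)
color = rng.random(N)

yy, xx = np.mgrid[0:H, 0:W]

def render(pos, scale, color):
    """Render a frame as an additive sum of isotropic 2D Gaussians."""
    frame = np.zeros((H, W))
    for (cy, cx), s, c in zip(pos, scale, color):
        frame += c * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s ** 2))
    return frame

f0 = render(pos, scale, color)

# Temporal prediction: warm-start the next frame's splats from the current ones
# (small motion), so per-frame optimization starts near the solution.
pos1 = pos + rng.normal(0, 0.5, pos.shape)
f1 = render(pos1, scale, color)
print(f0.shape, float(np.abs(f1 - f0).mean()))
```

The residual between consecutive frames stays small under small motion, which is why the warm start speeds up training.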
|
2501.12061
|
Tackling Uncertainties in Multi-Agent Reinforcement Learning through
Integration of Agent Termination Dynamics
|
cs.LG cs.MA
|
Multi-Agent Reinforcement Learning (MARL) has gained significant traction for
solving complex real-world tasks, but the inherent stochasticity and
uncertainty in these environments pose substantial challenges to efficient and
robust policy learning. While Distributional Reinforcement Learning has been
successfully applied in single-agent settings to address risk and uncertainty,
its application in MARL is substantially limited. In this work, we propose a
novel approach that integrates distributional learning with a safety-focused
loss function to improve convergence in cooperative MARL tasks. Specifically,
we introduce a Barrier Function based loss that leverages safety metrics,
identified from inherent faults in the system, into the policy learning
process. This additional loss term helps mitigate risks and encourages safer
exploration during the early stages of training. We evaluate our method in the
StarCraft II micromanagement benchmark, where our approach demonstrates
improved convergence and outperforms state-of-the-art baselines in terms of
both safety and task completion. Our results suggest that incorporating safety
considerations can significantly enhance learning performance in complex,
multi-agent environments.
|
2501.12066
|
The Generalized Chernoff-Stein Lemma, Applications and Examples
|
cs.IT math.IT
|
In this manuscript we define the notion of "$\delta$-typicality" for both
entropy and relative entropy, as well as a notion of $\epsilon$-goodness and
provide an extension to Stein's lemma for continuous quantities as well as
correlated setups. We apply the derived results to the Gaussian hypothesis
testing problem, where the observations are possibly correlated.
|
2501.12067
|
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular
Value Decomposition
|
cs.LG cs.AI cs.CL
|
Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, reduce the number
of trainable parameters. However, they often suffer from scalability issues and
from a mismatch between their learning pattern and that of full fine-tuning. To overcome
these limitations, we propose Efficient Weight-Decomposed Low-Rank Adaptation
(EDoRA): a novel PEFT method that decomposes pre-trained weights into magnitude
and directional components. By freezing low-rank matrices, initializing them by
singular value decomposition, and introducing a small trainable matrix between
them, EDoRA achieves substantial reduction in trainable parameters while
maintaining learning capacity. Experimental results on the GLUE benchmark
demonstrate that EDoRA achieves competitive or superior performance compared to
state-of-the-art methods, such as LoRA and DoRA, with up to 30x fewer trainable
parameters. This makes EDoRA a highly efficient solution for adapting LLMs to
diverse tasks under memory-constrained settings. Code is available at
https://github.com/Hamid-Nasiri/EDoRA .
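One plausible reading of the decomposition described above can be sketched as follows: the pre-trained weight is split into a magnitude and a direction (as in DoRA), frozen low-rank factors are taken from a truncated SVD, and only a small r x r matrix between them (plus the magnitude) is trained. The exact EDoRA formulation is in the paper and repository; this numpy sketch only illustrates the mechanics and the parameter-count argument:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 32, 4

W0 = rng.normal(size=(d_out, d_in))              # pre-trained weight (frozen)

# Weight decomposition: per-row magnitude (trainable) and unit direction.
m = np.linalg.norm(W0, axis=1, keepdims=True)
V = W0 / m

# Frozen low-rank factors from a rank-r truncated SVD of the pre-trained weight.
U, S, Vt = np.linalg.svd(W0, full_matrices=False)
B = U[:, :r] * np.sqrt(S[:r])                    # d_out x r, frozen
A = np.sqrt(S[:r])[:, None] * Vt[:r]             # r x d_in, frozen

E = np.zeros((r, r))                             # small trainable r x r matrix

def adapted_weight(m, E):
    V_new = V + B @ E @ A                        # low-rank directional update
    V_new = V_new / np.linalg.norm(V_new, axis=1, keepdims=True)
    return m * V_new

# With E = 0 the adapted weight reproduces the pre-trained weight exactly.
print(np.allclose(adapted_weight(m, E), W0))     # True
# Trainable parameters: m (d_out) + E (r*r), vs. LoRA's r*(d_in + d_out).
print(d_out + r * r, r * (d_in + d_out))         # 32 192
```

The count comparison shows where the "up to 30x fewer trainable parameters" headroom can come from: the trainable matrix scales with r^2 rather than with the layer dimensions.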
|
2501.12071
|
Co-Paced Learning Strategy Based on Confidence for Flying Bird Object
Detection Model Training
|
cs.CV
|
To mitigate the adverse effects of hard samples on the training of the Flying
Bird Object Detection (FBOD) model for surveillance videos, we propose a
Co-Paced Learning Based on Confidence (CPL-BC) strategy and apply this strategy
to the training process of the FBOD model. This strategy involves maintaining
two models with identical structures but different initial parameter
configurations, which collaborate with each other to select easy samples with
prediction confidence exceeding a set threshold for training. As training
progresses, the strategy gradually lowers the threshold, allowing more samples
to participate, enhancing the model's ability to recognize objects from easy to
hard. Before applying the CPL-BC strategy to train the FBOD models, we first
trained the two models to equip them with the ability to assess the difficulty
of flying bird object samples. Experimental results
on two different datasets of flying bird objects in surveillance videos
demonstrate that, compared to other model learning strategies, CPL-BC
significantly improves detection accuracy, verifying the effectiveness and
advancement of this method.
|
2501.12072
|
Fault-tolerance of [[6, 1, 3]] non-CSS code family generated using
measurements on graph states
|
quant-ph cs.IT math.IT
|
We construct and analyze the fault tolerance of $[[6,1,3]]$ non-CSS quantum
error correcting code under the anisotropic and depolarizing noise models. This
rate-optimized code achieves fault-tolerance using a single ancilla qubit for
syndrome measurement under anisotropic noise conditions. This method was called
fault-tolerance using bare ancilla by Brown \emph{et al.} We give explicit
construction of the code using measurements on non-planar graph states. We also
argue that using our approach, we can construct a family of such fault-tolerant
codes. This method fills a notable gap in constructing fault-tolerant non-CSS
code families.
|
2501.12073
|
Towards autonomous photogrammetric forest inventory using a lightweight
under-canopy robotic drone
|
cs.RO cs.CV
|
Drones are increasingly used in forestry to capture high-resolution remote
sensing data. While operations above the forest canopy are already highly
automated, flying inside forests remains challenging, primarily relying on
manual piloting. Inside dense forests, reliance on the Global Navigation
Satellite System (GNSS) for localization is not feasible. Additionally, the
drone must autonomously adjust its flight path to avoid collisions. Recently,
advancements in robotics have enabled autonomous drone flights in GNSS-denied
obstacle-rich areas. In this article, a step towards autonomous forest data
collection is taken by building a prototype of a robotic under-canopy drone
utilizing state-of-the-art open-source methods and validating its performance
for data collection inside forests. The autonomous flight capability was
evaluated through multiple test flights in two boreal forest test sites. The
tree parameter estimation capability was studied by conducting diameter at
breast height (DBH) estimation using onboard stereo camera data and
photogrammetric methods. The prototype conducted flights in selected
challenging forest environments, and the experiments showed excellent
performance in forest reconstruction with a miniaturized stereoscopic
photogrammetric system. The stem detection algorithm managed to identify 79.31
% of the stems. The DBH estimation had a root mean square error (RMSE) of 3.33
cm (12.79 %) and a bias of 1.01 cm (3.87 %) across all trees. For trees with a
DBH less than 30 cm, the RMSE was 1.16 cm (5.74 %), and the bias was 0.13 cm
(0.64 %). When considering the overall performance in terms of DBH accuracy,
autonomy, and forest complexity, the proposed approach was superior compared to
methods proposed in the scientific literature. Results provided valuable
insights into autonomous forest reconstruction using drones, and several
further development topics were proposed.
|
2501.12074
|
Optimizing Portfolio Performance through Clustering and Sharpe
Ratio-Based Optimization: A Comparative Backtesting Approach
|
cs.LG q-fin.PM
|
Optimizing portfolio performance is a fundamental challenge in financial
modeling, requiring the integration of advanced clustering techniques and
data-driven optimization strategies. This paper introduces a comparative
backtesting approach that combines clustering-based portfolio segmentation and
Sharpe ratio-based optimization to enhance investment decision-making. First,
we segment a diverse set of financial assets into clusters based on their
historical log-returns using K-Means clustering. This segmentation enables the
grouping of assets with similar return characteristics, facilitating targeted
portfolio construction. Next, for each cluster, we apply a Sharpe ratio-based
optimization model to derive optimal weights that maximize risk-adjusted
returns. Unlike traditional mean-variance optimization, this approach directly
incorporates the trade-off between returns and volatility, resulting in a more
balanced allocation of resources within each cluster. The proposed framework is
evaluated through a backtesting study using historical data spanning multiple
asset classes. Optimized portfolios for each cluster are constructed and their
cumulative returns are compared over time against a traditional equal-weighted
benchmark portfolio.
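The two-step pipeline above (K-Means on historical log-returns, then per-cluster Sharpe-ratio maximization) can be sketched as follows; the toy K-Means, the random-search optimizer, the synthetic returns, and the long-only constraint are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily log-returns for 6 hypothetical assets with a common market factor.
T, n_assets = 500, 6
market = rng.normal(0.0003, 0.008, size=(T, 1))
returns = market + rng.normal(0.0005, 0.01, size=(T, n_assets))

# --- Step 1: K-Means (k=2) on the assets' return series (toy implementation) ---
def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

labels = kmeans(returns.T, k=2)                   # group assets by return profile

# --- Step 2: per-cluster Sharpe-ratio maximization (long-only, random search) ---
def max_sharpe(R, n_trials=5000, rf=0.0):
    mu, cov = R.mean(0), np.cov(R.T)
    best_w, best_s = None, -np.inf
    for _ in range(n_trials):
        w = rng.random(R.shape[1])
        w /= w.sum()                              # weights on the simplex
        s = (w @ mu - rf) / np.sqrt(w @ cov @ w)  # Sharpe ratio
        if s > best_s:
            best_s, best_w = s, w
    return best_w, best_s

results = {}
for j in range(2):
    idx = np.where(labels == j)[0]
    if len(idx) >= 2:                             # need >= 2 assets for a covariance
        results[j] = max_sharpe(returns[:, idx])

for j, (w, s) in results.items():
    print(f"cluster {j}: weights={np.round(w, 3)}, Sharpe={s:.4f}")
```

A production version would use a proper optimizer (the Sharpe problem has a closed-form tangency solution when short sales are allowed), but random search keeps the sketch dependency-free.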
|
2501.12076
|
From Niche to Mainstream: Community Size and Engagement in Social Media
Conversations
|
cs.SI cs.CY
|
The architecture of public discourse has been profoundly reshaped by social
media platforms, which mediate interactions at an unprecedented scale and
complexity. This study analyzes user behavior across six platforms over 33
years, exploring how the size of conversations and communities influences
dialogue dynamics. Our findings reveal that smaller platforms foster richer,
more sustained interactions, while larger platforms drive broader but shorter
participation. Moreover, we observe that the propensity for users to re-engage
in a conversation decreases as community size grows, with niche environments as
a notable exception, where participation remains robust. These findings show an
interdependence between platform architecture, user engagement, and community
dynamics, shedding light on how digital ecosystems shape the structure and
quality of public discourse.
|
2501.12082
|
A Multi-annotated and Multi-modal Dataset for Wide-angle Video Quality
Assessment
|
cs.CV eess.IV
|
Wide-angle video is favored for its wide viewing angle and ability to capture
a large area of scenery, making it an ideal choice for sports and adventure
recording. However, wide-angle video is prone to deformation, exposure
artifacts, and other distortions that degrade video quality and harm the
viewing experience, which may seriously hinder its application in fields such
as competitive sports. To date, little work has addressed the quality
assessment of wide-angle video. This deficiency primarily stems from the absence of
a specialized dataset for wide-angle videos. To bridge this gap, we construct
the first Multi-annotated and multi-modal Wide-angle Video quality assessment
(MWV) dataset. Then, the performance of state-of-the-art video quality
assessment methods on the MWV dataset is investigated through intra-dataset and
inter-dataset testing. Experimental results show that these methods have
significant limitations in their applicability to wide-angle video.
|
2501.12085
|
Scalable Whole Slide Image Representation Using K-Mean Clustering and
Fisher Vector Aggregation
|
cs.CV cs.AI cs.LG
|
Whole slide images (WSIs) are high-resolution, gigapixel-sized images that
pose significant computational challenges for traditional machine learning
models due to their size and heterogeneity. In this paper, we present a scalable
and efficient methodology for WSI classification by leveraging patch-based
feature extraction, clustering, and Fisher vector encoding. Initially, WSIs are
divided into fixed size patches, and deep feature embeddings are extracted from
each patch using a pre-trained convolutional neural network (CNN). These
patch-level embeddings are subsequently clustered using K-means clustering,
where each cluster aggregates semantically similar regions of the WSI. To
effectively summarize each cluster, Fisher vector representations are computed
by modeling the distribution of patch embeddings in each cluster as a
parametric Gaussian mixture model (GMM). The Fisher vectors from each cluster
are concatenated into a high-dimensional feature vector, creating a compact and
informative representation of the entire WSI. This feature vector is then used
by a classifier to predict the WSI's diagnostic label. Our method captures
local and global tissue structures and yields robust performance for
large-scale WSI classification, demonstrating superior accuracy and scalability
compared to other approaches.
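The patch-clustering and Fisher-vector steps can be sketched as follows, using K-means centers as the means of a diagonal GMM with a shared global variance and uniform weights (a simplification; the paper fits a parametric GMM per cluster), followed by the standard power and L2 normalizations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for patch embeddings of one WSI (e.g. CNN features of 300 patches).
N, D, K = 300, 8, 4
patches = rng.normal(size=(N, D)) + rng.integers(0, 3, size=(N, 1)).astype(float)

# --- K-means to place the GMM component means (toy implementation, no EM refinement) ---
def kmeans(X, k, iters=50):
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), 1)
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)
    return C

mu = kmeans(patches, K)                  # component means (K x D)
sigma = patches.std(0) + 1e-6            # shared diagonal std (D,)
w = np.full(K, 1.0 / K)                  # uniform mixture weights

# Soft posteriors gamma_ik under the diagonal GMM.
z = (patches[:, None, :] - mu[None]) / sigma       # N x K x D
log_lik = -0.5 * (z ** 2).sum(-1) + np.log(w)      # up to a constant
gamma = np.exp(log_lik - log_lik.max(1, keepdims=True))
gamma /= gamma.sum(1, keepdims=True)

# Fisher vector: gradients w.r.t. the means and stds of each component.
g_mu = (gamma[..., None] * z).sum(0) / (N * np.sqrt(w)[:, None])
g_sig = (gamma[..., None] * (z ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])

fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
fv = np.sign(fv) * np.sqrt(np.abs(fv))   # power normalisation
fv /= np.linalg.norm(fv) + 1e-12         # L2 normalisation
print(fv.shape)                          # (2 * K * D,) = (64,)
```

The resulting fixed-length vector is what a downstream classifier would consume, regardless of how many patches the WSI contains.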
|
2501.12086
|
DSTSA-GCN: Advancing Skeleton-Based Gesture Recognition with
Semantic-Aware Spatio-Temporal Topology Modeling
|
cs.CV
|
Graph convolutional networks (GCNs) have emerged as a powerful tool for
skeleton-based action and gesture recognition, thanks to their ability to model
spatial and temporal dependencies in skeleton data. However, existing GCN-based
methods face critical limitations: (1) they lack effective spatio-temporal
topology modeling that captures dynamic variations in skeletal motion, and (2)
they struggle to model multiscale structural relationships beyond local joint
connectivity. To address these issues, we propose a novel framework called
Dynamic Spatial-Temporal Semantic Awareness Graph Convolutional Network
(DSTSA-GCN). DSTSA-GCN introduces three key modules: Group Channel-wise Graph
Convolution (GC-GC), Group Temporal-wise Graph Convolution (GT-GC), and
Multi-Scale Temporal Convolution (MS-TCN). GC-GC and GT-GC operate in parallel
to independently model channel-specific and frame-specific correlations,
enabling robust topology learning that accounts for temporal variations.
Additionally, both modules employ a grouping strategy to adaptively capture
multiscale structural relationships. Complementing this, MS-TCN enhances
temporal modeling through group-wise temporal convolutions with diverse
receptive fields. Extensive experiments demonstrate that DSTSA-GCN
significantly improves the topology modeling capabilities of GCNs, achieving
state-of-the-art performance on benchmark datasets for gesture and action
recognition, including SHREC17 Track, DHG-14/28, NTU-RGB+D, and NTU-RGB+D-120.
|
2501.12087
|
UAV-Assisted Real-Time Disaster Detection Using Optimized Transformer
Model
|
cs.CV
|
Disaster recovery and management present significant challenges, particularly
in unstable environments and hard-to-reach terrains. These difficulties can be
overcome by employing unmanned aerial vehicles (UAVs) equipped with onboard
embedded platforms and camera sensors. In this work, we address the critical
need for accurate and timely disaster detection by enabling onboard aerial
imagery processing and avoiding connectivity, privacy, and latency issues
despite the challenges posed by limited onboard hardware resources. We propose
a UAV-assisted edge framework for real-time disaster management, leveraging our
proposed model optimized for real-time aerial image classification. The
optimization of the model employs post-training quantization techniques. For
real-world disaster scenarios, we introduce a novel dataset, DisasterEye,
featuring UAV-captured disaster scenes as well as ground-level images taken by
individuals on-site. Experimental results demonstrate the effectiveness of our
model, achieving high accuracy with reduced inference latency and memory usage
on resource-constrained devices. The framework's scalability and adaptability
make it a robust solution for real-time disaster detection on resource-limited
UAV platforms.
|
2501.12092
|
Data-Aided Regularization of Direct-Estimate Combiner in Distributed
MIMO Systems
|
eess.SP cs.IT math.IT
|
This paper explores the data-aided regularization of the direct-estimate
combiner in the uplink of a distributed multiple-input multiple-output system.
The network-wide combiner can be computed directly from the pilot signal
received at each access point, eliminating the need for explicit channel
estimation. However, the sample covariance matrix of the received pilot signal
that is used in its computation may significantly deviate from the actual
covariance matrix when the number of pilot symbols is limited. To address this,
we apply a regularization to the sample covariance matrix using a shrinkage
coefficient based on the received data signal. Initially, the shrinkage
coefficient is determined by minimizing the difference between the sample
covariance matrices obtained from the received pilot and data signals. Given
the limitations of this approach in interference-limited scenarios, the
shrinkage coefficient is iteratively optimized using the sample mean squared
error of the hard-decision symbols, which is more closely related to the actual
system's performance, e.g., the symbol error rate (SER). Numerical results
demonstrate that the proposed regularization of the direct-estimate combiner
significantly enhances the SER, particularly when the number of pilot symbols
is limited.
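The first shrinkage criterion described above (choosing the coefficient that minimizes the Frobenius distance between the regularized pilot covariance and the data-based covariance) can be sketched as follows; the scaled-identity shrinkage target, the toy channel model, and the grid search are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
M, Np, Nd = 8, 10, 200        # antennas, pilot symbols, data symbols (illustrative)

# Toy spatial covariance and zero-mean complex Gaussian receive samples.
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
R_true = A @ A.conj().T / M + np.eye(M)
L = np.linalg.cholesky(R_true)

def rx_samples(n):
    z = (rng.normal(size=(M, n)) + 1j * rng.normal(size=(M, n))) / np.sqrt(2)
    return L @ z

Yp, Yd = rx_samples(Np), rx_samples(Nd)
R_pilot = Yp @ Yp.conj().T / Np   # noisy: few pilot symbols
R_data = Yd @ Yd.conj().T / Nd    # estimate from many more data symbols

# Shrink the pilot covariance toward a scaled identity; pick the coefficient
# minimizing the Frobenius distance to the data-based covariance.
target = (np.trace(R_pilot).real / M) * np.eye(M)
alphas = np.linspace(0.0, 1.0, 101)
errs = [np.linalg.norm((1 - a) * R_pilot + a * target - R_data, "fro")
        for a in alphas]
a_star = float(alphas[int(np.argmin(errs))])
R_reg = (1 - a_star) * R_pilot + a_star * target
print(f"selected shrinkage coefficient: {a_star:.2f}")
```

The paper then goes further, iteratively refining the coefficient with the sample MSE of hard-decision symbols; that loop is not reproduced here.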
|
2501.12102
|
Proxies for Distortion and Consistency with Applications for Real-World
Image Restoration
|
cs.CV cs.AI cs.LG eess.IV
|
Real-world image restoration deals with the recovery of images suffering from
an unknown degradation. This task is typically addressed while being given only
degraded images, without their corresponding ground-truth versions. In this
hard setting, designing and evaluating restoration algorithms becomes highly
challenging. This paper offers a suite of tools that can serve both the design
and assessment of real-world image restoration algorithms. Our work starts by
proposing a trained model that predicts the chain of degradations a given
real-world measured input has gone through. We show how this estimator can be
used to approximate the consistency -- the match between the measurements and
any proposed recovered image. We also use this estimator as a guiding force for
the design of a simple and highly-effective plug-and-play real-world image
restoration algorithm, leveraging a pre-trained diffusion-based image prior.
Furthermore, this work proposes no-reference proxy measures of MSE and LPIPS,
which, without access to the ground-truth images, allow ranking of real-world
image restoration algorithms according to their (approximate) MSE and LPIPS.
The proposed suite provides a versatile, first of its kind framework for
evaluating and comparing blind image restoration algorithms in real-world
scenarios.
|
2501.12104
|
Teacher Encoder-Student Decoder Denoising Guided Segmentation Network
for Anomaly Detection
|
cs.CV cs.AI
|
Visual anomaly detection is a highly challenging task, often categorized as a
one-class classification and segmentation problem. Recent studies have
demonstrated that the student-teacher (S-T) framework effectively addresses
this challenge. However, most S-T frameworks rely solely on pre-trained teacher
networks to guide student networks in learning multi-scale similar features,
overlooking the potential of the student networks to enhance learning through
multi-scale feature fusion. In this study, we propose a novel model named
PFADSeg, which integrates a pre-trained teacher network, a denoising student
network with multi-scale feature fusion, and a guided anomaly segmentation
network into a unified framework. By adopting a unique teacher-encoder and
student-decoder denoising mode, the model improves the student network's
ability to learn from teacher network features. Furthermore, an adaptive
feature fusion mechanism is introduced to train a self-supervised segmentation
network that synthesizes anomaly masks autonomously, significantly increasing
detection performance. Evaluated on the MVTec AD dataset, PFADSeg achieves
state-of-the-art results with an image-level AUC of 98.9%, a pixel-level mean
precision of 76.4%, and an instance-level mean precision of 78.7%.
|
2501.12106
|
Can open source large language models be used for tumor documentation in
Germany? -- An evaluation on urological doctors' notes
|
cs.CL cs.AI
|
Tumor documentation in Germany is largely done manually, requiring reading
patient records and entering data into structured databases. Large language
models (LLMs) could potentially enhance this process by improving efficiency
and reliability. This evaluation tests eleven different open source LLMs with
sizes ranging from 1-70 billion model parameters on three basic tasks of the
tumor documentation process: identifying tumor diagnoses, assigning ICD-10
codes, and extracting the date of first diagnosis. For evaluating the LLMs on
these tasks, a dataset of annotated text snippets based on anonymized doctors'
notes from urology was prepared. Different prompting strategies were used to
investigate the effect of the number of examples in few-shot prompting and to
explore the capabilities of the LLMs in general. The models Llama 3.1 8B,
Mistral 7B, and Mistral NeMo 12B performed comparably well on these tasks.
Models with less extensive training data or with fewer than 7 billion
parameters showed notably lower performance, while larger models did not
display performance gains. Examples from a different medical domain than
urology could also improve the outcome in few-shot prompting, which
demonstrates the ability of LLMs to handle tasks needed for tumor
documentation. Open source LLMs show a strong potential for automating tumor
documentation. Models from 7-12 billion parameters could offer an optimal
balance between performance and resource efficiency. With tailored fine-tuning
and well-designed prompting, these models might become important tools for
clinical documentation in the future. The code for the evaluation is available
from https://github.com/stefan-m-lenz/UroLlmEval. We also release the dataset
as a new valuable resource that addresses the shortage of authentic and easily
accessible benchmarks in German-language medical NLP.
|
2501.12113
|
Dual NUP Representations and Min-Maximization in Factor Graphs
|
stat.ML cs.LG cs.SY eess.SP eess.SY
|
Normals with unknown parameters (NUP) can be used to convert nontrivial
model-based estimation problems into iterations of linear least-squares or
Gaussian estimation problems. In this paper, we extend this approach by
augmenting factor graphs with convex-dual variables and pertinent NUP
representations. In particular, in a state space setting, we propose a new
iterative forward-backward algorithm that is dual to a recently proposed
backward-forward algorithm.
|
2501.12115
|
Meta-Sparsity: Learning Optimal Sparse Structures in Multi-task Networks
through Meta-learning
|
cs.LG cs.CV
|
This paper presents meta-sparsity, a framework for learning model sparsity
(i.e., learning the parameter that controls the degree of sparsity) that
allows deep neural networks (DNNs) to inherently generate optimal sparse shared
structures in a multi-task learning (MTL) setting. The proposed approach enables
the dynamic learning of sparsity patterns across a variety of tasks, unlike
traditional sparsity methods that rely heavily on manual hyperparameter tuning.
Inspired by Model Agnostic Meta-Learning (MAML), the emphasis is on learning
shared and optimally sparse parameters in multi-task scenarios by implementing
a penalty-based, channel-wise structured sparsity during the meta-training
phase. This method improves the model's efficacy by removing unnecessary
parameters and enhances its ability to handle both seen and previously unseen
tasks. The effectiveness of meta-sparsity is rigorously evaluated by extensive
experiments on two datasets, NYU-v2 and CelebAMask-HQ, covering a broad
spectrum of tasks ranging from pixel-level to image-level predictions. The
results show that the proposed approach performs well across many tasks,
indicating its potential as a versatile tool for creating efficient and
adaptable sparse neural networks. This work, therefore, presents an approach
towards learning sparsity, contributing to the efforts in the field of sparse
neural networks and suggesting new directions for research towards parsimonious
models.
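The penalty-based, channel-wise structured sparsity mentioned above is typically a group-lasso penalty over output channels, optimized with a proximal (group soft-threshold) step that can zero out whole channels. The sketch below shows only this penalty/proximal mechanic on a toy weight tensor; the MAML-style meta-training loop of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conv weight: (out_channels, in_channels, kH, kW).
W = rng.normal(size=(16, 8, 3, 3))

def channel_group_lasso(W, lam):
    """lam * sum of per-output-channel L2 norms (channel-wise structured sparsity)."""
    norms = np.sqrt((W ** 2).reshape(W.shape[0], -1).sum(1))
    return lam * norms.sum()

def prox_step(W, lam, lr):
    """Proximal (group soft-threshold) update; channels whose norm falls below
    lr * lam are zeroed out entirely, pruning whole channels."""
    norms = np.sqrt((W ** 2).reshape(W.shape[0], -1).sum(1))
    scale = np.maximum(0.0, 1.0 - lr * lam / (norms + 1e-12))
    return W * scale[:, None, None, None]

W_sparse = prox_step(W, lam=9.0, lr=1.0)
pruned = int((np.abs(W_sparse).reshape(16, -1).sum(1) == 0).sum())
print(f"{pruned} of 16 channels pruned")
```

In the meta-sparsity framing, the regularization strength itself becomes a learned parameter rather than the hand-tuned `lam` used here.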
|
2501.12116
|
Efficient PINNs: Multi-Head Unimodular Regularization of the Solutions
Space
|
cs.LG cs.AI hep-th math.AP
|
We present a machine learning framework to facilitate the solution of
nonlinear multiscale differential equations and, especially, inverse problems
using Physics-Informed Neural Networks (PINNs). This framework is based on what
is called multihead (MH) training, which involves training the network to learn
a general space of all solutions for a given set of equations with certain
variability, rather than learning a specific solution of the system. This setup
is used with a second novel technique that we call Unimodular Regularization
(UR) of the latent space of solutions. We show that the multihead approach,
combined with the regularization, significantly improves the efficiency of
PINNs by facilitating the transfer learning process thereby enabling the
finding of solutions for nonlinear, coupled, and multiscale differential
equations.
|
2501.12118
|
Regularized dynamical parametric approximation of stiff evolution
problems
|
math.NA cs.LG cs.NA
|
Evolutionary deep neural networks have emerged as a rapidly growing field of
research. This paper studies numerical integrators for such and other classes
of nonlinear parametrizations $ u(t) = \Phi(\theta(t)) $, where the evolving
parameters $\theta(t)$ are to be computed. The primary focus is on tackling the
challenges posed by the combination of stiff evolution problems and irregular
parametrizations, which typically arise with neural networks, tensor networks,
flocks of evolving Gaussians, and in further cases of overparametrization. We
propose and analyse regularized parametric versions of the implicit Euler
method and higher-order implicit Runge--Kutta methods for the time integration
of the parameters in nonlinear approximations to evolutionary partial
differential equations and large systems of stiff ordinary differential
equations. At each time step, an ill-conditioned nonlinear optimization problem
is solved approximately with a few regularized Gauss--Newton iterations. Error
bounds for the resulting parametric integrator are derived by relating the
computationally accessible Gauss--Newton iteration for the parameters to the
computationally inaccessible Newton iteration for the underlying non-parametric
time integration scheme. The theoretical findings are supported by numerical
experiments that are designed to show key properties of the proposed parametric
integrators.
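The core step described above (an implicit Euler step in parameter space, solved approximately by a few regularized Gauss-Newton iterations) can be sketched on a toy stiff linear ODE with a redundant nonlinear parametrization u = Phi(theta); the specific Phi, the finite-difference Jacobian, and the regularization value are illustrative assumptions:

```python
import numpy as np

lam = np.array([1.0, 1000.0])          # stiff decay rates for u' = -lam * u
h = 0.01                                # step size (h * lam >> 1 in one component)

def Phi(th):                            # nonlinear, redundant parametrization
    return np.array([th[0] ** 2, th[0] * th[1]])

def residual(th, u_prev):               # implicit Euler: Phi(th) - u_prev - h*f(Phi(th))
    return Phi(th) - u_prev - h * (-lam * Phi(th))

def jac(th, u_prev, eps=1e-6):          # finite-difference Jacobian of the residual
    r0 = residual(th, u_prev)
    J = np.zeros((len(r0), len(th)))
    for j in range(len(th)):
        e = np.zeros(len(th))
        e[j] = eps
        J[:, j] = (residual(th + e, u_prev) - r0) / eps
    return J

def implicit_euler_step(th, reg=1e-8, iters=5):
    u_prev = Phi(th)
    for _ in range(iters):              # a few regularized Gauss-Newton iterations
        r = residual(th, u_prev)
        J = jac(th, u_prev)
        th = th - np.linalg.solve(J.T @ J + reg * np.eye(len(th)), J.T @ r)
    return th

th = np.array([1.0, 1.0])               # so u0 = (1, 1)
for _ in range(10):
    th = implicit_euler_step(th)
u = Phi(th)
exact = np.exp(-lam * 10 * h)           # exact solution at t = 0.1
print(np.round(u, 4), np.round(exact, 4))   # approx [0.9053 0.] vs [0.9048 0.]
```

The stiff component is damped without the tiny step sizes an explicit method would need, which is the point of treating the parametrized flow implicitly.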
|
2501.12119
|
ENTIRE: Learning-based Volume Rendering Time Prediction
|
cs.GR cs.CV cs.LG
|
We present ENTIRE, a novel approach for volume rendering time prediction.
Time-dependent volume data from simulations or experiments typically comprise
complex deforming structures across hundreds or thousands of time steps, which
in addition to the camera configuration has a significant impact on rendering
performance. We first extract a feature vector from a volume that captures the
structural properties relevant to rendering performance. We then combine this
feature vector with further relevant parameters (e.g., the camera setup) to
perform the final prediction. Our experiments conducted on various
datasets demonstrate that our model is capable of efficiently achieving high
prediction accuracy with fast response rates. We showcase ENTIRE's capability
of enabling dynamic parameter adaptation for stable frame rates and load
balancing in two case studies.
|
2501.12121
|
Learning Dynamic Representations via An Optimally-Weighted Maximum Mean
Discrepancy Optimization Framework for Continual Learning
|
cs.LG cs.AI
|
Continual learning has emerged as a pivotal area of research, primarily due
to its advantageous characteristic that allows models to persistently acquire
and retain information. However, catastrophic forgetting can severely impair
model performance. In this study, we address network forgetting by introducing
a novel framework termed Optimally-Weighted Maximum Mean Discrepancy (OWMMD),
which imposes penalties on representation alterations via a Multi-Level Feature
Matching Mechanism (MLFMM). Furthermore, we propose an Adaptive Regularization
Optimization (ARO) strategy to refine the adaptive weight vectors, which
autonomously assess the significance of each feature layer throughout the
optimization process. The proposed ARO approach relieves the
over-regularization problem and promotes future task learning. We conduct a
comprehensive series of experiments, benchmarking our proposed method against
several established baselines. The empirical findings indicate that our
approach achieves state-of-the-art performance.
|
2501.12123
|
FedCLEAN: byzantine defense by CLustering Errors of Activation maps in
Non-IID federated learning environments
|
cs.CR cs.AI
|
Federated Learning (FL) enables clients to collaboratively train a global
model using their local datasets while reinforcing data privacy. However, FL is
susceptible to poisoning attacks. Existing defense mechanisms assume that
clients' data are independent and identically distributed (IID), making them
ineffective in real-world applications where data are non-IID. This paper
presents FedCLEAN, the first defense capable of filtering attackers' model
updates in a non-IID FL environment. The originality of FedCLEAN is twofold.
First, it relies on a client confidence score derived from the reconstruction
errors of each client's model activation maps for a given trigger set, with
reconstruction errors obtained by means of a Conditional Variational
Autoencoder trained according to a novel server-side strategy. Second, we
propose an ad-hoc trust propagation algorithm based on client scores, which
allows building a cluster of benign clients while flagging potential attackers.
Experimental results on the datasets MNIST and FashionMNIST demonstrate the
robustness of FedCLEAN against Byzantine attackers in non-IID scenarios and a
close-to-zero benign client misclassification rate, even in the absence of an
attack.
|
2501.12124
|
On de Bruijn Array Codes Part II: Linear Codes
|
cs.IT math.IT
|
An M-sequence generated by a primitive polynomial has many interesting and
desirable properties. A pseudo-random array is the two-dimensional
generalization of an M-sequence. Similarly to primitive polynomials, there are
irreducible and reducible polynomials all of whose nonzero sequences have the same
length. In this paper, a two-dimensional generalization for such sequences is
given. This generalization is for a pseudo-random array code which is a set of
$r_1 \times r_2$ arrays in which each $n_1 \times n_2$ nonzero matrix is
contained exactly once as a window in one of the arrays. Moreover, these arrays
have the shift-and-add property, i.e., the bitwise addition of two arrays (or a
nontrivial shift of such arrays) is another array (or a shift of another array)
from the code. All the known arrays can be formed by folding sequences
generated from an irreducible polynomial or a reducible polynomial whose
factors have the same degree and the same exponent. Two proof techniques are
used to prove the parameters of the constructed arrays. The first one is based
on another method for constructing some of these arrays. The second one is a
generalization of a known proof technique. This generalization makes it possible
to present pseudo-random arrays with previously unknown parameters and also a
variety of pseudo-random array codes which cannot be generated by the first
method. The two techniques also suggest two different hierarchies between
pseudo-random array codes. Finally, a method is presented to verify whether a
folding of sequences generated by these polynomials yields a pseudo-random
array or a pseudo-random array code.
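The defining window property above can be checked mechanically. The following illustrative Python sketch (ours, not from the paper) verifies that, across a set of binary arrays, every nonzero n1 x n2 matrix occurs exactly once as a cyclic window:

```python
from itertools import product

def window_property(arrays, n1, n2):
    """Check that every nonzero n1 x n2 binary matrix occurs exactly once
    as a cyclic window across the given set of arrays."""
    counts = {}
    for a in arrays:
        r1, r2 = len(a), len(a[0])
        for i, j in product(range(r1), range(r2)):
            # Extract the cyclic n1 x n2 window anchored at (i, j).
            win = tuple(tuple(a[(i + u) % r1][(j + v) % r2] for v in range(n2))
                        for u in range(n1))
            counts[win] = counts.get(win, 0) + 1
    zero = tuple((0,) * n2 for _ in range(n1))
    nonzero = {w: c for w, c in counts.items() if w != zero}
    # All 2^(n1*n2) - 1 nonzero matrices must appear, each exactly once.
    return len(nonzero) == 2 ** (n1 * n2) - 1 and set(nonzero.values()) == {1}
```

For instance, the period-3 M-sequence 011 viewed as a 1 x 3 array satisfies the property for 1 x 2 windows, since its cyclic windows are 01, 11, 10.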
|
2501.12125
|
Heterogeneous Federated Learning System for Sparse Healthcare
Time-Series Prediction
|
cs.LG
|
In this paper, we propose a heterogeneous federated learning (HFL) system for
sparse time series prediction in healthcare, which is a decentralized federated
learning algorithm with heterogeneous transfers. We design dense and sparse
feature tensors to deal with the sparsity of data sources. Heterogeneous
federated learning is developed to share asynchronous parts of networks and
select appropriate models for knowledge transfer. Experimental results show
that the proposed HFL achieves the lowest prediction error among all benchmark
systems on eight out of ten prediction tasks, with MSE reduction of 94.8%,
48.3%, and 52.1% compared to the benchmark systems. These results demonstrate
the effectiveness of HFL in transferring knowledge from heterogeneous domains,
especially in the smaller target domain. Ablation studies then demonstrate the
effectiveness of the designed mechanisms for heterogeneous domain selection and
switching in predicting healthcare time series with privacy, model security,
and heterogeneous knowledge transfer.
|
2501.12128
|
Evaluating Efficiency and Engagement in Scripted and LLM-Enhanced
Human-Robot Interactions
|
cs.RO cs.HC
|
To achieve natural and intuitive interaction with people, HRI frameworks
combine a wide array of methods for human perception, intention communication,
human-aware navigation and collaborative action. In practice, when encountering
unpredictable behavior of people or unexpected states of the environment, these
frameworks may lack the ability to dynamically recognize such states, adapt and
recover to resume the interaction. Large Language Models (LLMs), owing to their
advanced reasoning capabilities and context retention, present a promising
solution for enhancing robot adaptability. This potential, however, may not
directly translate to improved interaction metrics. This paper considers a
representative interaction with an industrial robot involving approach,
instruction, and object manipulation, implemented in two conditions: (1) fully
scripted and (2) including LLM-enhanced responses. We use gaze tracking and
questionnaires to measure the participants' task efficiency, engagement, and
robot perception. The results indicate higher subjective ratings for the LLM
condition, but objective metrics show that the scripted condition performs
comparably, particularly in efficiency and focus during simple tasks. We also
note that the scripted condition may have an edge over LLM-enhanced responses
in terms of response latency and energy consumption, especially for trivial and
repetitive interactions.
|
2501.12133
|
Distributed Multi-Head Learning Systems for Power Consumption Prediction
|
cs.LG
|
As automatic vehicles become more and more widespread, power consumption
prediction becomes a vital issue for task scheduling and energy management. Most
research focuses on automatic vehicles in transportation, but few studies focus
on automatic ground
vehicles (AGVs) in smart factories, which face complex environments and
generate large amounts of data. There is an inevitable trade-off between
feature diversity and interference. In this paper, we propose Distributed
Multi-Head learning (DMH) systems for power consumption prediction in smart
factories. Multi-head learning mechanisms are proposed in DMH to reduce noise
interference and improve accuracy. Additionally, DMH systems are designed as
distributed and split learning, reducing the client-to-server transmission
cost, sharing knowledge without sharing local data and models, and enhancing
the privacy and security levels. Experimental results show that the proposed
DMH systems rank in the top two on most datasets and scenarios. The DMH-E system
reduces the error of the state-of-the-art systems by 14.5% to 24.0%.
Effectiveness studies demonstrate the effectiveness of Pearson
correlation-based feature engineering, and feature grouping with the proposed
multi-head learning further enhances prediction performance.
|
2501.12135
|
Revisit the AWGN-goodness of Polar-like Lattices
|
cs.IT math.IT
|
This paper aims to provide a comprehensive introduction to lattices
constructed based on polar-like codes and demonstrate some of their key
properties, such as AWGN goodness. We first present polar lattices directly
from the perspective of their generator matrix. Next, we discuss their
connection with the recently proposed PAC (polarization adjusted convolutional)
lattices and analyze the structural advantages of PAC lattices, through which
the AWGN-goodness of PAC lattices can be conveniently demonstrated.
|
2501.12136
|
Heterogeneous Federated Learning Systems for Time-Series Power
Consumption Prediction with Multi-Head Embedding Mechanism
|
cs.LG
|
Time-series prediction is increasingly popular in a variety of applications,
such as smart factories and smart transportation. Researchers have used various
techniques to predict power consumption, but existing models lack discussion of
collaborative learning and privacy issues among multiple clients. To address
these issues, we propose Multi-Head Heterogeneous Federated Learning (MHHFL)
systems that consist of multiple head networks, which independently act as
carriers for federated learning. In the federated period, each head network is
embedded into 2-dimensional vectors and shared with the centralized source
pool. MHHFL then selects appropriate source networks and blends the head
networks as knowledge transfer in federated learning. The experimental results
show that the proposed MHHFL systems significantly outperform the benchmark and
state-of-the-art systems and reduce the prediction error by 24.9% to 94.1%. The
ablation studies demonstrate the effectiveness of the proposed mechanisms in
the MHHFL (head network embedding and selection mechanisms), which
significantly outperforms traditional federated average and random transfer.
|
2501.12147
|
Improving Influence-based Instruction Tuning Data Selection for Balanced
Learning of Diverse Capabilities
|
cs.CL cs.AI cs.LG
|
Selecting appropriate training data is crucial for effective instruction
fine-tuning of large language models (LLMs), which aims to (1) elicit strong
capabilities, and (2) achieve balanced performance across a diverse range of
tasks. Influence-based methods show promise in achieving (1) by estimating the
contribution of each training example to the model's predictions, but often
struggle with (2). Our systematic investigation reveals that this
underperformance can be attributed to an inherent bias where certain tasks
intrinsically have greater influence than others. As a result, data selection
is often biased towards these tasks, not only hurting the model's performance
on others but also, counterintuitively, harming performance on these
high-influence tasks themselves.
As a remedy, we propose BIDS, a Balanced and Influential Data Selection
algorithm. BIDS first normalizes influence scores of the training data, and
then iteratively balances data selection by choosing the training example with
the highest influence on the most underrepresented task. Experiments with both
Llama-3 and Mistral-v0.3 on seven benchmarks spanning five diverse capabilities
show that BIDS consistently outperforms both state-of-the-art influence-based
algorithms and other non-influence-based selection frameworks. Surprisingly,
training on a 15% subset selected by BIDS can even outperform full-dataset
training with a much more balanced performance. Our analysis further highlights
the importance of both instance-level normalization and iterative optimization
of selected data for balanced learning of diverse capabilities.
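The selection loop described above (normalize influence per task, then greedily serve the most underrepresented task) can be sketched as follows. This is our own hedged reconstruction, not the authors' code: the influence matrix, the rank-based normalization, and the task-credit bookkeeping are illustrative assumptions.

```python
import numpy as np

def bids_select(influence: np.ndarray, budget: int) -> list:
    """Select `budget` examples, balancing influence across tasks.

    influence[i, t] = influence of training example i on validation task t
    (precomputed; how to compute it is outside this sketch).
    """
    n, n_tasks = influence.shape
    # Instance-level normalization: rank each example's influence within a task,
    # so tasks with intrinsically larger raw scores do not dominate selection.
    ranks = influence.argsort(axis=0).argsort(axis=0) / (n - 1)
    selected, task_credit = [], np.zeros(n_tasks)
    available = np.ones(n, dtype=bool)
    for _ in range(budget):
        t = int(task_credit.argmin())                      # most underrepresented task
        scores = np.where(available, ranks[:, t], -np.inf)
        i = int(scores.argmax())                           # highest normalized influence on t
        selected.append(i)
        available[i] = False
        task_credit += ranks[i]                            # credit every task for this pick
    return selected
```

Under this toy accounting, a pick that strongly helps one task raises that task's credit, steering the next iteration toward whichever task has received the least total influence so far.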
|
2501.12148
|
Deep Unfolding of Fixed-Point Based Algorithm for Weighted Sum Rate
Maximization
|
cs.IT math.IT
|
In this paper, we propose a novel approach that harnesses the standard
interference function, specifically tailored to address the unique challenges
of non-convex optimization in wireless networks. We begin by establishing
theoretical guarantees for our method under the assumption that the
interference function exhibits log-concavity. Building on this foundation, we
develop a Primal-Dual Algorithm (PDA) to approximate the solution to the
Weighted Sum Rate (WSR) maximization problem. To further enhance computational
efficiency, we leverage the deep unfolding technique, significantly reducing
the complexity of the proposed algorithm. Through numerical experiments, we
demonstrate the competitiveness of our method compared to the state-of-the-art
fractional programming benchmark, commonly referred to as FPLinQ.
|
2501.12149
|
On the practical applicability of modern DFT functionals for chemical
computations. Case study of DM21 applicability for geometry optimization
|
physics.comp-ph cond-mat.mtrl-sci cs.AI
|
Density functional theory (DFT) is probably the most promising approach for
quantum chemistry calculations considering its good balance between
calculations precision and speed. In recent years, several neural network-based
functionals have been developed for exchange-correlation energy approximation
in DFT, with DM21, developed by Google DeepMind, being the most notable among them.
This study evaluates the efficiency of the DM21 functional in
predicting molecular geometries, with particular attention to the influence of
oscillatory behavior in neural network exchange-correlation functionals. We
implemented geometry optimization in PySCF for the DM21 functional, compared its
performance with traditional functionals, and tested it
on various benchmarks. Our findings reveal both the potential and the current
challenges of using neural network functionals for geometry optimization in
DFT. We propose a solution that extends the practical applicability of such
functionals and makes it possible to model new substances with their help.
|
2501.12150
|
DNRSelect: Active Best View Selection for Deferred Neural Rendering
|
cs.CV
|
Deferred neural rendering (DNR) is an emerging computer graphics pipeline
designed for high-fidelity rendering and robotic perception. However, DNR
heavily relies on datasets composed of numerous ray-traced images and demands
substantial computational resources. It remains under-explored how to reduce
the reliance on high-quality ray-traced images while maintaining the rendering
fidelity. In this paper, we propose DNRSelect, which integrates a reinforcement
learning-based view selector and a 3D texture aggregator for deferred neural
rendering. We first propose a novel view selector for deferred neural rendering
based on reinforcement learning, which is trained on easily obtained rasterized
images to identify the optimal views. By acquiring only a few ray-traced images
for these selected views, the selector enables DNR to achieve high-quality
rendering. To further enhance spatial awareness and geometric consistency in
DNR, we introduce a 3D texture aggregator that fuses pyramid features from
depth maps and normal maps with UV maps. Given that acquiring ray-traced images
is more time-consuming than generating rasterized images, DNRSelect minimizes
the need for ray-traced data by using only a few selected views while still
achieving high-fidelity rendering results. We conduct detailed experiments and
ablation studies on the NeRF-Synthetic dataset to demonstrate the effectiveness
of DNRSelect. The code will be released.
|
2501.12156
|
Characterization of Invariance, Periodic Solutions and Optimization of
Dynamic Financial Networks
|
eess.SY cs.SY math.DS math.OC
|
Cascading failures, such as bankruptcies and defaults, pose a serious threat
for the resilience of the global financial system. Indeed, because of the
complex investment and cross-holding relations within the system, failures can
occur as a result of the propagation of a financial collapse from one
organization to another. While this problem has been studied in depth from a
static angle, namely, when the system is at an equilibrium, we take a different
perspective and study the corresponding dynamical system. The contribution of
this paper is threefold. First, we carry out a systematic analysis of the
regions of attraction and invariance of the system orthants, defined by the
positive and negative values of the organizations' equity. Second, we
investigate periodic solutions and show through a counterexample that there
could exist periodic solutions of period greater than 2. Finally, we study the
problem of finding the smallest cash injection that would bring the system to
the maximal invariant region of the positive orthant.
|
2501.12157
|
Fast-RF-Shimming: Accelerate RF Shimming in 7T MRI using Deep Learning
|
cs.CV
|
Ultrahigh field (UHF) Magnetic Resonance Imaging (MRI) provides a high
signal-to-noise ratio (SNR), enabling exceptional spatial resolution for
clinical diagnostics and research. However, higher fields introduce challenges
such as transmit radiofrequency (RF) field inhomogeneities, which result in
uneven flip angles and image intensity artifacts. These artifacts degrade image
quality and limit clinical adoption. Traditional RF shimming methods, including
Magnitude Least Squares (MLS) optimization, mitigate RF field inhomogeneity but
are time-intensive and often require the presence of the patient. Recent
machine learning methods, such as RF Shim Prediction by Iteratively Projected
Ridge Regression and other deep learning architectures, offer alternative
approaches but face challenges such as extensive training requirements, limited
complexity, and practical data constraints. This paper introduces a holistic
learning-based framework called Fast RF Shimming, which achieves a 5000-fold
speedup compared to MLS methods. First, random-initialized Adaptive Moment
Estimation (Adam) derives reference shimming weights from multichannel RF
fields. Next, a Residual Network (ResNet) maps RF fields to shimming outputs
while incorporating a confidence parameter into the loss function. Finally, a
Non-uniformity Field Detector (NFD) identifies extreme non-uniform outcomes.
Comparative evaluations demonstrate significant improvements in both speed and
predictive accuracy. The proposed pipeline also supports potential extensions,
such as the integration of anatomical priors or multi-echo data, to enhance the
robustness of RF field correction. This approach offers a faster and more
efficient solution to RF shimming challenges in UHF MRI.
|
2501.12162
|
AdaServe: SLO-Customized LLM Serving with Fine-Grained Speculative
Decoding
|
cs.CL cs.AI cs.DC cs.LG
|
This paper introduces AdaServe, the first LLM serving system to support SLO
customization through fine-grained speculative decoding. AdaServe leverages the
logits of a draft model to predict the speculative accuracy of tokens and
employs a theoretically optimal algorithm to construct token trees for
verification. To accommodate diverse SLO requirements without compromising
throughput, AdaServe employs a speculation-and-selection scheme that first
constructs candidate token trees for each request and then dynamically selects
tokens to meet individual SLO constraints while optimizing throughput.
Comprehensive evaluations demonstrate that AdaServe achieves up to 73% higher
SLO attainment and 74% higher goodput compared to state-of-the-art systems.
These results underscore AdaServe's potential to enhance the efficiency and
adaptability of LLM deployments across varied application scenarios.
|
2501.12166
|
Beyond Window-Based Detection: A Graph-Centric Framework for Discrete
Log Anomaly Detection
|
cs.SE cs.LG
|
Detecting anomalies in discrete event logs is critical for ensuring system
reliability, security, and efficiency. Traditional window-based methods for log
anomaly detection often suffer from context bias and fuzzy localization, which
hinder their ability to precisely and efficiently identify anomalies. To
address these challenges, we propose a graph-centric framework, TempoLog, which
leverages multi-scale temporal graph networks for discrete log anomaly
detection. Unlike conventional methods, TempoLog constructs continuous-time
dynamic graphs directly from event logs, eliminating the need for fixed-size
window grouping. By representing log templates as nodes and their temporal
relationships as edges, the framework dynamically captures both local and
global dependencies across multiple temporal scales. Additionally, a
semantic-aware model enhances detection by incorporating rich contextual
information. Extensive experiments on public datasets demonstrate that our
method achieves state-of-the-art performance in event-level anomaly detection,
significantly outperforming existing approaches in both accuracy and
efficiency.
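The window-free graph construction described above can be illustrated in miniature. This toy sketch (our assumption, not the TempoLog implementation, which is multi-scale and far richer) turns a template-level event log directly into timestamped edges, with no fixed-size window grouping:

```python
def log_to_temporal_edges(events):
    """events: list of (timestamp, template_id) pairs, sorted by timestamp.

    Each consecutive pair of events contributes one continuous-time edge
    (src_template, dst_template, event_time, inter-event gap).
    """
    edges = []
    for (t_prev, u), (t_cur, v) in zip(events, events[1:]):
        edges.append((u, v, t_cur, t_cur - t_prev))
    return edges
```

Because edges carry real timestamps and gaps rather than window indices, a downstream temporal graph network can reason about both local bursts and long-range dependencies from the same stream.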
|
2501.12167
|
Soft-Decision Decoding for LDPC Code-Based Quantitative Group Testing
|
cs.IT math.IT
|
We consider the problem of identifying defective items in a population with
non-adaptive quantitative group testing. For this scenario, Mashauri et al.
recently proposed a low-density parity-check (LDPC) code-based quantitative
group testing scheme with a hard-decision decoding approach (akin to peeling
decoding). This scheme outperforms generalized LDPC code-based quantitative
group testing schemes in terms of the misdetection rate. In this work, we
propose a belief-propagation-based decoder for quantitative group testing with
LDPC codes, where the messages being passed are purely soft. Through extensive
simulations, we show that the proposed soft-information decoder outperforms the
hard-decision decoder of Mashauri et al.
|
2501.12169
|
SVGS-DSGAT: An IoT-Enabled Innovation in Underwater Robotic Object
Detection Technology
|
cs.CV
|
With the advancement of Internet of Things (IoT) technology, underwater
target detection and tracking have become increasingly important for ocean
monitoring and resource management. Existing methods often fall short in
handling high-noise and low-contrast images in complex underwater environments,
lacking precision and robustness. This paper introduces a novel SVGS-DSGAT
model that combines GraphSage, SVAM, and DSGAT modules, enhancing feature
extraction and target detection capabilities through graph neural networks and
attention mechanisms. The model integrates IoT technology to facilitate
real-time data collection and processing, optimizing resource allocation and
model responsiveness. Experimental results demonstrate that the SVGS-DSGAT
model achieves an mAP of 40.8% on the URPC 2020 dataset and 41.5% on the
SeaDronesSee dataset, significantly outperforming existing mainstream models.
This IoT-enhanced approach not only excels in high-noise and complex
backgrounds but also improves the overall efficiency and scalability of the
system. This research provides an effective IoT solution for underwater target
detection technology, offering significant practical application value and
broad development prospects.
|
2501.12173
|
ComposeAnyone: Controllable Layout-to-Human Generation with Decoupled
Multimodal Conditions
|
cs.CV
|
Building on the success of diffusion models, significant advancements have
been made in multimodal image generation tasks. Among these, human image
generation has emerged as a promising technique, offering the potential to
revolutionize the fashion design process. However, existing methods often focus
solely on text-to-image or image reference-based human generation, which fails
to satisfy the increasingly sophisticated demands. To address the limitations
of flexibility and precision in human generation, we introduce ComposeAnyone, a
controllable layout-to-human generation method with decoupled multimodal
conditions. Specifically, our method allows decoupled control of any part in
hand-drawn human layouts using text or reference images, seamlessly integrating
them during the generation process. The hand-drawn layout, which utilizes
color-blocked geometric shapes such as ellipses and rectangles, can be easily
drawn, offering a more flexible and accessible way to define spatial layouts.
Additionally, we introduce the ComposeHuman dataset, which provides decoupled
text and reference image annotations for different components of each human
image, enabling broader applications in human image generation tasks. Extensive
experiments on multiple datasets demonstrate that ComposeAnyone generates human
images with better alignment to given layouts, text descriptions, and reference
images, showcasing its multi-task capability and controllability.
|
2501.12174
|
BiMarker: Enhancing Text Watermark Detection for Large Language Models
with Bipolar Watermarks
|
cs.LG
|
The rapid growth of Large Language Models (LLMs) raises concerns about
distinguishing AI-generated text from human content. Existing watermarking
techniques, like KGW, struggle with low watermark strength and stringent
false-positive requirements. Our analysis reveals that current methods rely on
coarse estimates of non-watermarked text, limiting watermark detectability. To
address this, we propose Bipolar Watermark (BiMarker), which splits generated text
into positive and negative poles, enhancing detection without requiring
additional computational resources or knowledge of the prompt. Theoretical
analysis and experimental results demonstrate BiMarker's effectiveness and
compatibility with existing optimization techniques, providing a new
optimization dimension for watermarking in LLM-generated content.
|
2501.12175
|
Less is More: Information Bottleneck Denoised Multimedia Recommendation
|
cs.IR
|
Empowered by semantic-rich content information, multimedia recommendation has
emerged as a potent personalized technique. Current endeavors center around
harnessing multimedia content to refine item representation or uncovering
latent item-item structures based on modality similarity. Despite the
effectiveness, we posit that these methods are usually suboptimal due to the
introduction of irrelevant multimedia features into recommendation tasks. This
stems from the fact that generic multimedia feature extractors, while
well-designed for domain-specific tasks, can inadvertently introduce
task-irrelevant features, leading to potential misguidance of recommenders. In
this work, we propose a denoised multimedia recommendation paradigm via the
Information Bottleneck principle (IB). Specifically, we propose a novel
Information Bottleneck denoised Multimedia Recommendation (IBMRec) model to
tackle the irrelevant feature issue. IBMRec removes task-irrelevant features
from both feature and item-item structure perspectives, which are implemented
by two-level IB learning modules: feature-level (FIB) and graph-level (GIB). In
particular, FIB focuses on learning the minimal yet sufficient multimedia
features. This is achieved by maximizing the mutual information between
multimedia representation and recommendation tasks, while concurrently
minimizing it between multimedia representation and pre-trained multimedia
features. Furthermore, GIB is designed to learn a robust item-item graph
structure: it refines the item-item graph based on preference affinity, then
minimizes the mutual information between the original graph and the refined
one. Extensive experiments across three benchmarks validate the effectiveness
of our proposed model, showcasing high performance, and applicability to
various multimedia recommenders.
|
2501.12176
|
DataPro -- A Standardized Data Understanding and Processing Procedure: A
Case Study of an Eco-Driving Project
|
cs.IR
|
A systematic pipeline for data processing and knowledge discovery is
essential to extracting knowledge from big data and making recommendations for
operational decision-making. The CRISP-DM model is the de-facto standard for
developing data-mining projects in practice. However, advancements in data
processing technologies require enhancements to this framework. This paper
presents the DataPro (a standardized data understanding and processing
procedure) model, which extends CRISP-DM and emphasizes the link between data
scientists and stakeholders by adding the "technical understanding" and
"implementation" phases. Firstly, the "technical understanding" phase aligns
business demands with technical requirements, ensuring the technical team's
accurate comprehension of business goals. Next, the "implementation" phase
focuses on the practical application of developed data science models, ensuring
theoretical models are effectively applied in business contexts. Furthermore,
clearly defining roles and responsibilities in each phase enhances management
and communication among all participants. Afterward, a case study on an
eco-driving data science project for fuel efficiency analysis in the Danish
public transportation sector illustrates the application of the DataPro model.
By following the proposed framework, the project identified key business
objectives, translated them into technical requirements, and developed models
that provided actionable insights for reducing fuel consumption. Finally, the
model is evaluated qualitatively, demonstrating its superiority over other data
science procedures.
|
2501.12178
|
High-dimensional multimodal uncertainty estimation by manifold
alignment: Application to 3D right ventricular strain computations
|
cs.CV
|
Confidence in the results is a key ingredient to improve the adoption of
machine learning methods by clinicians. Uncertainties on the results have been
considered in the literature, but mostly those originating from the learning
and processing methods. Uncertainty on the data is hardly challenged, as a
single sample is often considered representative enough of each subject
included in the analysis. In this paper, we propose a representation learning
strategy to estimate local uncertainties on a physiological descriptor (here,
myocardial deformation) previously obtained from medical images by different
definitions or computations. We first use manifold alignment to match the
latent representations associated to different high-dimensional input
descriptors. Then, we formulate plausible distributions of latent
uncertainties, and finally exploit them to reconstruct uncertainties on the
input high-dimensional descriptors. We demonstrate its relevance for the
quantification of myocardial deformation (strain) from 3D echocardiographic
image sequences of the right ventricle, for which a lack of consensus exists in
its definition and which directional component to use. We used a database of
100 control subjects with right ventricle overload, for which different types
of strain are available at each point of the right ventricle endocardial
surface mesh. Our approach quantifies local uncertainties on myocardial
deformation from different descriptors defining this physiological concept.
Such uncertainties cannot be directly estimated by local statistics on such
descriptors, potentially of heterogeneous types. Beyond this controlled
illustrative application, our methodology has the potential to be generalized
to many other population analyses considering heterogeneous high-dimensional
descriptors.
|
2501.12183
|
Extend Adversarial Policy Against Neural Machine Translation via Unknown
Token
|
cs.CL
|
Generating adversarial examples contributes to mainstream neural machine
translation~(NMT) robustness. However, popular adversarial policies are tailored
to fixed tokenization, hindering their efficacy for common character perturbations
involving versatile tokenization. Based on existing adversarial generation via
reinforcement learning~(RL), we propose the `DexChar policy' that introduces
character perturbations for the existing mainstream adversarial policy based on
token substitution. Furthermore, we improve the self-supervised matching that
provides feedback in RL to cater to the semantic constraints required during
training adversaries. Experiments show that our method is compatible with the
scenario where baseline adversaries fail, and can generate high-efficiency
adversarial examples for analysis and optimization of the system.
|
2501.12186
|
Removal of Small Weight Stopping Sets for Asynchronous Unsourced
Multiple Access
|
cs.IT math.IT
|
In this paper, we analyze the formation of small stopping sets in joint
factor graphs describing a frame-asynchronous two-user transmission.
Furthermore, we propose an algorithm to completely avoid small stopping sets in
the joint factor graph over the entire range of symbol delays. The error floor
caused by those stopping sets is completely mitigated. Our key observation is
that, while the order of bits in the codeword is irrelevant in a single-user
environment, it turns out to be crucial in the asynchronous, unsourced two-user
system. Subsequently, our algorithm finds a reordering of variable nodes (VNs)
which avoids the smallest stopping set in the joint graph. We show that further
improvements can be achieved when girth optimization of the single-user graphs
by progressive edge growth (PEG) is used in combination with our proposed
algorithm. Starting with a randomized code construction with optimized degree
distribution, our simulation results show that PEG followed by the proposed
algorithm can improve the average per user probability of error (PUPE) in a
noiseless channel by almost two orders of magnitude for a broad range of frame
delays.
|
2501.12189
|
MirrorCBO: A consensus-based optimization method in the spirit of mirror
descent
|
math.OC cs.LG
|
In this work we propose MirrorCBO, a consensus-based optimization (CBO)
method which generalizes standard CBO in the same way that mirror descent
generalizes gradient descent. For this we apply the CBO methodology to a swarm
of dual particles and retain the primal particle positions by applying the
inverse of the mirror map, which we parametrize as the subdifferential of a
strongly convex function $\phi$. In this way, we combine the advantages of a
derivative-free non-convex optimization algorithm with those of mirror descent.
As a special case, the method extends CBO to optimization problems with convex
constraints. Assuming bounds on the Bregman distance associated to $\phi$, we
provide asymptotic convergence results for MirrorCBO with explicit exponential
rate. Another key contribution is an exploratory numerical study of this new
algorithm across different application settings, focusing on (i)
sparsity-inducing optimization, and (ii) constrained optimization,
demonstrating the competitive performance of MirrorCBO. We observe empirically
that the method can also be used for optimization on (non-convex) submanifolds
of Euclidean space, can be adapted to mirrored versions of other recent CBO
variants, and that it inherits from mirror descent the capability to select
desirable minimizers, like sparse ones. We also include an overview of recent
CBO approaches for constrained optimization and compare their performance to
MirrorCBO.
|
2501.12191
|
A margin-based replacement for cross-entropy loss
|
cs.LG cs.CV
|
Cross-entropy (CE) loss is the de facto standard for training deep neural
networks to perform classification. However, CE-trained deep neural networks
struggle with robustness and generalisation issues. To alleviate these issues,
we propose high error margin (HEM) loss, a variant of multi-class margin loss
that overcomes the training issues of other margin-based losses. We evaluate
HEM extensively on a range of architectures and datasets. We find that HEM loss
is more effective than cross-entropy loss across a wide range of tasks: unknown
class rejection, adversarial robustness, learning with imbalanced data,
continual learning, and semantic segmentation (a pixel-level classification
task). Despite all training hyper-parameters being chosen for CE loss, HEM is
inferior to CE only in terms of clean accuracy, and this difference is
insignificant. We also compare HEM to specialised losses that have previously
been proposed to improve performance on specific tasks. LogitNorm, a loss
achieving state-of-the-art performance on unknown class rejection, produces
similar performance to HEM for this task, but is much poorer for continual
learning and semantic segmentation. Logit-adjusted loss, designed for
imbalanced data, has superior results to HEM for that task, but performs more
poorly on unknown class rejection and semantic segmentation. DICE, a popular
loss for semantic segmentation, is inferior to HEM loss on all tasks, including
semantic segmentation. Thus, HEM often out-performs specialised losses, and in
contrast to them, is a general-purpose replacement for CE loss.
|
2501.12193
|
MyDigiTwin: A Privacy-Preserving Framework for Personalized
Cardiovascular Risk Prediction and Scenario Exploration
|
cs.LG cs.HC
|
Cardiovascular disease (CVD) remains a leading cause of death, and primary
prevention through personalized interventions is crucial. This paper introduces
MyDigiTwin, a framework that integrates health digital twins with personal
health environments to empower patients in exploring personalized health
scenarios while ensuring data privacy. MyDigiTwin uses federated learning to
train predictive models across distributed datasets without transferring raw
data, and a novel data harmonization framework addresses semantic and format
inconsistencies in health data. A proof-of-concept demonstrates the feasibility
of harmonizing and using cohort data to train privacy-preserving CVD prediction
models. This framework offers a scalable solution for proactive, personalized
cardiovascular care and sets the stage for future applications in real-world
healthcare settings.
|