| id | title | categories | abstract |
|---|---|---|---|
2502.10689
|
Self-Explaining Hypergraph Neural Networks for Diagnosis Prediction
|
cs.LG cs.AI
|
The burgeoning volume of electronic health records (EHRs) has enabled deep
learning models to excel in predictive healthcare. However, for high-stakes
applications such as diagnosis prediction, model interpretability remains
paramount. Existing deep learning diagnosis prediction models with intrinsic
interpretability often assign attention weights to every past diagnosis or
hospital visit, providing explanations lacking flexibility and succinctness. In
this paper, we introduce SHy, a self-explaining hypergraph neural network
model, designed to offer personalized, concise and faithful explanations that
allow for interventions from clinical experts. By modeling each patient as a
unique hypergraph and employing a message-passing mechanism, SHy captures
higher-order disease interactions and extracts distinct temporal phenotypes as
personalized explanations. It also addresses the incompleteness of the EHR data
by accounting for essential false negatives in the original diagnosis record. A
qualitative case study and extensive quantitative evaluations on two real-world
EHR datasets demonstrate the superior predictive performance and
interpretability of SHy over existing state-of-the-art models.
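The two-step message passing over a patient hypergraph described above can be sketched in a few lines. The incidence matrix, one-hot features, and degree normalization below are illustrative assumptions for a toy patient, not SHy's actual implementation:

```python
import numpy as np

# Toy patient hypergraph: 4 diagnosis codes (nodes), 2 hospital visits (hyperedges).
# H[i, j] = 1 if diagnosis i was recorded during visit j.
H = np.array([
    [1, 0],
    [1, 1],
    [0, 1],
    [1, 1],
], dtype=float)

X = np.eye(4)  # one-hot node features for the 4 diagnosis codes

# Two-step message passing: aggregate node features into each hyperedge (visit),
# then propagate visit-level messages back to the member nodes.
De_inv = np.diag(1.0 / H.sum(axis=0))  # hyperedge (visit) degree normalization
Dv_inv = np.diag(1.0 / H.sum(axis=1))  # node (diagnosis) degree normalization

edge_msgs = De_inv @ H.T @ X   # visit embeddings: mean over member diagnoses
X_new = Dv_inv @ H @ edge_msgs # node update: mean over incident visits

print(X_new.shape)  # (4, 4): updated diagnosis embeddings
```

Because a visit connects several diagnoses at once, one such step already mixes information among all co-occurring codes, which is the higher-order interaction a pairwise graph edge cannot express.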
|
2502.10691
|
Controlling Neural Collapse Enhances Out-of-Distribution Detection and
Transfer Learning
|
cs.LG
|
Out-of-distribution (OOD) detection and OOD generalization are widely studied
in Deep Neural Networks (DNNs), yet their relationship remains poorly
understood. We empirically show that the degree of Neural Collapse (NC) in a
network layer is inversely related to these objectives: stronger NC improves
OOD detection but degrades generalization, while weaker NC enhances
generalization at the cost of detection. This trade-off suggests that a single
feature space cannot simultaneously achieve both tasks. To address this, we
develop a theoretical framework linking NC to OOD detection and generalization.
We show that entropy regularization mitigates NC to improve generalization,
while a fixed Simplex Equiangular Tight Frame (ETF) projector enforces NC for
better detection. Based on these insights, we propose a method to control NC at
different DNN layers. In experiments, our method excels at both tasks across
OOD datasets and DNN architectures.
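The fixed Simplex Equiangular Tight Frame (ETF) projector mentioned above is a concrete geometric object: K unit vectors whose pairwise cosine is exactly -1/(K-1). A minimal construction (a sketch, not the paper's code; the random orthonormal basis is an arbitrary choice):

```python
import numpy as np

def simplex_etf(K: int, d: int, seed: int = 0) -> np.ndarray:
    """Columns are K unit vectors in R^d with pairwise cosine -1/(K-1)."""
    assert d >= K  # this construction embeds via a d x K orthonormal basis
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((d, K)))  # orthonormal columns
    M = U @ (np.eye(K) - np.ones((K, K)) / K)         # center the K simplex vertices
    return M / np.linalg.norm(M, axis=0)              # normalize columns

M = simplex_etf(K=4, d=16)
G = M.T @ M
print(np.round(G, 3))  # diagonal 1, off-diagonal -1/3
```

Projecting features onto such fixed, maximally separated class directions is what "enforcing NC" amounts to in this setting: the class geometry is pinned rather than learned.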
|
2502.10693
|
Extremely Large Full Duplex MIMO for Simultaneous Downlink
Communications and Monostatic Sensing at Sub-THz Frequencies
|
cs.IT cs.ET math.IT
|
The in-band Full Duplex (FD) technology is lately gaining attention as an
enabler for the emerging paradigm of Integrated Sensing and Communications
(ISAC), which envisions seamless integration of sensing mechanisms for
unconnected entities into next generation wireless networks. In this paper, we
present an FD Multiple-Input Multiple-Output (MIMO) system with extremely large
antenna arrays at its transceiver module, which is optimized, considering two
emerging analog beamforming architectures, for simultaneous DownLink (DL)
communications and monostatic-type sensing at sub-THz frequencies,
with the latter operation relying on received reflections of the transmitted
information-bearing signals. A novel optimization framework for the joint
design of the analog and digital transmit beamforming, analog receive
combining, and the digital canceler for the self-interference signal is devised
with the objective to maximize the achievable DL rate, while meeting a
predefined threshold for the position error bound for the unknown
three-dimensional parameters of a passive target. Capitalizing on the
distinctive features of the beamforming architectures with fully-connected
networks of phase shifters and partially-connected arrays of metamaterials, two
ISAC designs are presented. Our simulation results showcase the superiority of
both proposed designs over state-of-the-art schemes, highlighting the role of
various system parameters in the trade-off between the communication and
sensing functionalities.
|
2502.10694
|
Simulations of Common Unsupervised Domain Adaptation Algorithms for
Image Classification
|
cs.LG cs.AI
|
Traditional machine learning assumes that training and test sets are derived
from the same distribution; however, this assumption does not always hold in
practical applications. This distribution disparity can lead to severe
performance drops when the trained model is used in new data sets. Domain
adaptation (DA) is a machine learning technique that aims to address this
problem by reducing the differences between domains. This paper presents
simulations of recent DA algorithms, mainly related to unsupervised domain
adaptation (UDA), where labels are available only in the source domain. Our
study compares these techniques on public data sets with diverse
characteristics, highlighting their respective strengths and drawbacks.
For example, Safe Self-Refinement for Transformer-based DA (SSRT) achieved the
highest accuracy (91.6%) on the Office-31 data set in our simulations;
however, its accuracy dropped to 72.4% on the Office-Home data set when using
limited batch sizes. In addition to improving the reader's comprehension of
recent DA techniques, our study also highlights challenges and future
directions for research in this domain. The code is available at
https://github.com/AIPMLab/Domain_Adaptation.
|
2502.10697
|
The Lee weight distributions of several classes of linear codes over
$\mathbb{Z}_4$
|
cs.IT math.IT
|
Let $\mathbb{Z}_4$ denote the ring of integers modulo $4$. The Galois ring
GR$(4,m)$, which consists of $4^m$ elements, represents the Galois extension of
degree $m$ over $\mathbb{Z}_4$. The constructions of codes over $\mathbb{Z}_4$
have garnered significant interest in recent years. In this paper, building
upon previous research, we utilize the defining-set approach to construct
several classes of linear codes over $\mathbb{Z}_4$ by effectively using the
properties of the trace function from GR$(4,m)$ to $\mathbb{Z}_4$. As a result,
we have been able to obtain new linear codes over $\mathbb{Z}_4$ with good
parameters and determine their Lee weight distributions. Upon comparison with
the existing database of $\mathbb{Z}_4$ codes, our construction can yield novel
linear codes, as well as linear codes that possess the best known minimum Lee
distance.
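The Lee weight itself is elementary: over $\mathbb{Z}_4$ it assigns $w(0)=0$, $w(1)=w(3)=1$, $w(2)=2$, summed over coordinates. A brute-force sketch of a Lee weight distribution for a toy generator matrix (the matrix is illustrative, not one of the paper's constructions):

```python
from itertools import product

def lee_weight(c):
    """Lee weight over Z4: w(0)=0, w(1)=w(3)=1, w(2)=2, summed coordinatewise."""
    return sum(min(x, 4 - x) for x in c)

def lee_distribution(G):
    """Lee weight distribution of the Z4-linear code generated by rows of G."""
    k, n = len(G), len(G[0])
    # Enumerate all Z4 message vectors; collect the distinct codewords.
    codewords = {
        tuple(sum(a * g[j] for a, g in zip(coeffs, G)) % 4 for j in range(n))
        for coeffs in product(range(4), repeat=k)
    }
    dist = {}
    for cw in codewords:
        dist[lee_weight(cw)] = dist.get(lee_weight(cw), 0) + 1
    return dist

# Toy generator matrix over Z4 (illustrative only).
dist = lee_distribution([[1, 1, 2], [0, 2, 2]])
print(dist)  # {weight: count} over the 8 distinct codewords
```

Note that the second generator row has additive order 2, so the code has 8 codewords rather than 16; enumerating distinct codewords (not messages) is what makes the distribution correct.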
|
2502.10698
|
Superpose Singular Features for Model Merging
|
cs.LG cs.AI
|
Model merging is a critical technique for combining the capabilities of
multiple fine-tuned models without requiring additional training. While
existing methods treat parameters as vectors, they overlook the intrinsic
structure of linear transformation matrices - the core components that comprise
the majority of model parameters. These matrices are fundamental to neural
networks, mapping input representations to output features through linear
combinations. Motivated by the linear representation hypothesis, we introduce
the task matrix and propose to Superpose Features from Task Matrix (SFTM), a novel
approach that superposes features from individual task models into a merged
model. SFTM employs singular value decomposition to identify feature bases of
linear transformation matrices and solves a linear system to optimally combine
them while preserving input-output mappings from individual task models.
Extensive experiments on vision transformers and language models demonstrate
that our method consistently outperforms existing methods, achieving superior
performance and enhanced out-of-distribution generalization.
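One way to make the "preserve input-output mappings on singular-vector feature bases" idea concrete is a least-squares merge: require the merged matrix to reproduce each task matrix's response on that task's top singular directions. This is an illustrative sketch under simplified assumptions (random toy matrices, a fixed kept rank), not SFTM's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, r = 8, 6, 2  # input dim, output dim, kept rank per task

# Two fine-tuned linear layers (toy stand-ins for task matrices).
W1 = rng.standard_normal((m, d))
W2 = rng.standard_normal((m, d))

bases, targets = [], []
for W in (W1, W2):
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    V_r = Vt[:r].T          # top-r input feature basis, shape (d, r)
    bases.append(V_r)
    targets.append(W @ V_r) # the per-task responses to preserve, shape (m, r)

V = np.hstack(bases)        # (d, 2r) stacked input directions
B = np.hstack(targets)      # (m, 2r) desired responses
# Solve min_W ||W V - B||_F: since V has full column rank here, the merged
# layer reproduces each task's mapping on its own top singular directions.
W_merged = B @ np.linalg.pinv(V)

for W, V_r in zip((W1, W2), bases):
    print(round(float(np.linalg.norm(W_merged @ V_r - W @ V_r)), 6))
```

When the stacked bases overlap or exceed the output capacity, the least-squares solution trades the tasks off against each other instead of matching both exactly; that interference is precisely what a merging method must manage.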
|
2502.10699
|
Exploring Synaptic Resonance in Large Language Models: A Novel Approach
to Contextual Memory Integration
|
cs.CL cs.AI cs.NE
|
Contextual memory integration remains a significant challenge in the development of
language models, particularly in tasks that require maintaining coherence over
extended sequences. Traditional approaches, such as self-attention mechanisms
and memory-augmented architectures, often prioritize short-term dependencies,
leading to fragmentation and inconsistency in long-range contextual
understanding. Inspired by principles of synaptic plasticity observed in
biological neural systems, a novel mechanism, Synaptic Resonance, is introduced
to dynamically reinforce relevant memory pathways during training and
inference. Unlike static memory representations, this mechanism continuously
adjusts synaptic weight matrices based on contextual relevance, allowing for
improved information retention without excessive computational overhead.
Evaluations conducted on an open-source language model demonstrate reductions
in perplexity, enhancements in contextual coherence, and increased robustness
against input noise, highlighting the effectiveness of reinforcement-driven
memory modulation. Comparative analysis against baseline models further reveals
that the proposed approach achieves higher memory retention efficiency while
maintaining computational feasibility. The architectural modifications
integrate seamlessly into existing transformer-based frameworks, ensuring
stable convergence and efficient inference without sacrificing scalability.
Applications benefiting from improved long-term contextual consistency, such as
dialogue systems and document summarization, stand to gain from this approach.
Empirical findings suggest that dynamically reinforced memory pathways offer a
promising alternative to conventional memory mechanisms, addressing
longstanding limitations in extended sequence modeling.
|
2502.10701
|
Unpacking the Layers: Exploring Self-Disclosure Norms, Engagement
Dynamics, and Privacy Implications
|
cs.SI cs.HC
|
This paper characterizes the self-disclosure behavior of Reddit users across
11 different types of self-disclosure. We find that at least half of the users
share some type of disclosure in at least 10% of their posts, with half of
these posts having more than one type of disclosure. We show that different
types of self-disclosure are likely to receive varying levels of engagement.
For instance, a Sexual Orientation disclosure garners more comments than other
self-disclosures. We also explore confounding factors that affect future
self-disclosure. We show that users who receive interactions from
(self-disclosure) specific subreddit members are more likely to disclose in the
future. We also show that privacy risks due to self-disclosure extend beyond
Reddit users themselves to include their close contacts, such as family and
friends, as their information is also revealed. We develop a browser plugin for
end-users to flag self-disclosure in their content.
|
2502.10703
|
Artificial intelligence-enabled detection and assessment of Parkinson's
disease using multimodal data: A survey
|
cs.LG cs.SD
|
The rapid emergence of highly adaptable and reusable artificial intelligence
(AI) models is set to revolutionize the medical field, particularly in the
diagnosis and management of Parkinson's disease (PD). Currently, there are no
effective biomarkers for diagnosing PD, assessing its severity, or tracking its
progression. Numerous AI algorithms are now being used for PD diagnosis and
treatment, capable of performing various classification tasks based on
multimodal and heterogeneous disease symptom data, such as gait, hand
movements, and speech patterns of PD patients. They provide expressive
feedback, including predicting the potential likelihood of PD, assessing the
severity of individual or multiple symptoms, aiding in early detection, and
evaluating rehabilitation and treatment effectiveness, thereby demonstrating
advanced medical diagnostic capabilities. Therefore, this work provides a
survey of recent works on PD detection and assessment through biometric
symptom recognition, with a focus on machine learning and deep learning
approaches, emphasizing their benefits, exposing their weaknesses, and
examining their impact in opening up new research avenues. Additionally, it
presents categorized and characterized descriptions of the datasets,
approaches, and architectures employed to tackle associated constraints.
Furthermore, the paper explores the potential opportunities and challenges
presented by data-driven AI technologies in the diagnosis of PD.
|
2502.10704
|
Occlusion-aware Non-Rigid Point Cloud Registration via Unsupervised
Neural Deformation Correntropy
|
cs.CV cs.AI
|
Non-rigid alignment of point clouds is crucial for scene understanding,
reconstruction, and various computer vision and robotics tasks. Recent
advancements in implicit deformation networks for non-rigid registration have
significantly reduced the reliance on large amounts of annotated training data.
However, existing state-of-the-art methods still face challenges in handling
occlusion scenarios. To address this issue, this paper introduces an innovative
unsupervised method called Occlusion-Aware Registration (OAR) for non-rigidly
aligning point clouds. The key innovation of our method lies in the utilization
of the adaptive correntropy function as a localized similarity measure,
enabling us to treat individual points distinctly. In contrast to previous
approaches that solely minimize overall deviations between two shapes, we
combine unsupervised implicit neural representations with the maximum
correntropy criterion to optimize the deformation of unoccluded regions. This
effectively avoids collapse, tearing, and other physically implausible
results. Moreover, we present a theoretical analysis and establish the
relationship between the maximum correntropy criterion and the commonly used
Chamfer distance, highlighting that the correntropy-induced metric can serve
as a more universal measure for point cloud analysis. Additionally, we
introduce locally linear reconstruction to ensure that regions lacking
correspondences between shapes still undergo physically natural deformations.
Our method achieves superior or competitive performance compared to existing
approaches, particularly when dealing with occluded geometries. We also
demonstrate the versatility of our method in challenging tasks such as large
deformations, shape interpolation, and shape completion under occlusion
disturbances.
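The contrast between the Chamfer distance and a correntropy-style measure shows up clearly on a toy occluded point cloud: Chamfer sums squared residuals, so one outlier dominates, while a Gaussian kernel on residuals saturates. The point sets and the kernel bandwidth sigma below are illustrative:

```python
import numpy as np

def chamfer(P, Q):
    """Symmetric Chamfer distance: mean nearest-neighbor squared distances."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return d2.min(1).mean() + d2.min(0).mean()

def correntropy(P, Q, sigma=0.5):
    """Correntropy-style similarity: Gaussian kernel on nearest-neighbor
    residuals, so large (occlusion-induced) residuals contribute ~0
    instead of dominating the objective."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2.min(1) / (2 * sigma ** 2)).mean()

P = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
Q_full = P + 0.01
Q_occluded = np.vstack([P[:2] + 0.01, [[50.0, 50.0]]])  # one point replaced by an outlier

print(chamfer(P, Q_full), chamfer(P, Q_occluded))        # Chamfer blows up
print(correntropy(P, Q_full), correntropy(P, Q_occluded))  # degrades gracefully
```

Maximizing the kernelized similarity therefore lets the unoccluded points drive the alignment, which is the localized, per-point treatment the abstract describes.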
|
2502.10705
|
CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative
Perception with Parameter-Efficient Fine-Tuning
|
cs.AI
|
Multi-agent collaborative perception is expected to significantly improve
perception performance by overcoming the limitations of single-agent perception
through exchanging complementary information. However, training a robust
collaborative perception model requires collecting sufficient training data
that covers all possible collaboration scenarios, which is impractical due to
intolerable deployment costs. Hence, the trained model is not robust against
new traffic scenarios with inconsistent data distributions, which fundamentally
restricts its real-world applicability. Further, existing methods, such as
domain adaptation, mitigate this issue by exposing the deployment data during
the training stage, but they incur a high training cost, which is infeasible
for resource-constrained agents. In this paper, we propose a
Parameter-Efficient Fine-Tuning-based lightweight framework, CoPEFT, for fast
adapting a trained collaborative perception model to new deployment
environments under low-cost conditions. CoPEFT develops a Collaboration Adapter
and Agent Prompt to perform macro-level and micro-level adaptations separately.
Specifically, the Collaboration Adapter utilizes the inherent knowledge from
training data and limited deployment data to adapt the feature map to new data
distribution. The Agent Prompt further enhances the Collaboration Adapter by
inserting fine-grained contextual information about the environment. Extensive
experiments demonstrate that our CoPEFT surpasses existing methods with less
than 1% trainable parameters, proving the effectiveness and efficiency of our
proposed method.
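A bottleneck adapter of the kind used in parameter-efficient fine-tuning can be sketched as follows. The dimensions, ReLU bottleneck, and zero-initialized up-projection are generic PEFT conventions, not CoPEFT's exact Collaboration Adapter:

```python
import numpy as np

class Adapter:
    """Bottleneck adapter: x + up(relu(down(x))). Only these two small
    matrices would be trained; the backbone stays frozen."""
    def __init__(self, d, r, seed=0):
        rng = np.random.default_rng(seed)
        self.down = rng.standard_normal((d, r)) * 0.02  # d -> r projection
        self.up = np.zeros((r, d))  # zero-init: adapter starts as the identity

    def __call__(self, x):
        return x + np.maximum(x @ self.down, 0.0) @ self.up

d, r = 256, 8
adapter = Adapter(d, r)
x = np.ones((1, d))
assert np.allclose(adapter(x), x)  # residual path: identity at initialization

backbone_params = d * d            # one frozen d x d layer, for scale
adapter_params = 2 * d * r
print(f"trainable fraction: {adapter_params / (backbone_params + adapter_params):.1%}")
```

Because the bottleneck rank r is tiny relative to d, the trainable fraction stays small, which is how adaptation to a new deployment environment can remain cheap for resource-constrained agents.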
|
2502.10706
|
Raising the Bar in Graph OOD Generalization: Invariant Learning Beyond
Explicit Environment Modeling
|
cs.LG cs.AI
|
Out-of-distribution (OOD) generalization has emerged as a critical challenge
in graph learning, as real-world graph data often exhibit diverse and shifting
environments that traditional models fail to generalize across. A promising
solution to address this issue is graph invariant learning (GIL), which aims to
learn invariant representations by disentangling label-correlated invariant
subgraphs from environment-specific subgraphs. However, existing GIL methods
face two major challenges: (1) the difficulty of capturing and modeling diverse
environments in graph data, and (2) the semantic cliff, where invariant
subgraphs from different classes are difficult to distinguish, leading to poor
class separability and increased misclassifications. To tackle these
challenges, we propose a novel method termed Multi-Prototype Hyperspherical
Invariant Learning (MPHIL), which introduces two key innovations: (1)
hyperspherical invariant representation extraction, enabling robust and highly
discriminative hyperspherical invariant feature extraction, and (2)
multi-prototype hyperspherical classification, which employs class prototypes
as intermediate variables to eliminate the need for explicit environment
modeling in GIL and mitigate the semantic cliff issue. Derived from the
theoretical framework of GIL, we introduce two novel objective functions: the
invariant prototype matching loss to ensure samples are matched to the correct
class prototypes, and the prototype separation loss to increase the distinction
between prototypes of different classes in the hyperspherical space. Extensive
experiments on 11 OOD generalization benchmark datasets demonstrate that MPHIL
achieves state-of-the-art performance, significantly outperforming existing
methods across graph data from various domains and with different distribution
shifts.
|
2502.10707
|
Reading Your Heart: Learning ECG Words and Sentences via Pre-training
ECG Language Model
|
cs.LG cs.AI
|
Electrocardiogram (ECG) is essential for the clinical diagnosis of
arrhythmias and other heart diseases, but deep learning methods based on ECG
often face limitations due to the need for high-quality annotations. Although
previous ECG self-supervised learning (eSSL) methods have made significant
progress in representation learning from unannotated ECG data, they typically
treat ECG signals as ordinary time-series data, segmenting the signals using
fixed-size and fixed-step time windows, which often ignore the form and rhythm
characteristics and latent semantic relationships in ECG signals. In this work,
we introduce a novel perspective on ECG signals, treating heartbeats as words
and rhythms as sentences. Based on this perspective, we first designed the
QRS-Tokenizer, which generates semantically meaningful ECG sentences from the
raw ECG signals. Building on these, we then propose HeartLang, a novel
self-supervised learning framework for ECG language processing, learning
general representations at form and rhythm levels. Additionally, we construct
the largest heartbeat-based ECG vocabulary to date, which will further advance
the development of ECG language processing. We evaluated HeartLang across six
public ECG datasets, where it demonstrated robust competitiveness against other
eSSL methods. Our data and code are publicly available at
https://github.com/PKUDigitalHealth/HeartLang.
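The "heartbeats as words" idea contrasts with fixed windows: segment at beat landmarks rather than at fixed strides, so each token is one physiologically meaningful unit regardless of heart rate. A toy peak-based segmenter (an assumption-laden sketch, not the paper's QRS-Tokenizer):

```python
import numpy as np

def heartbeat_words(ecg, threshold=0.5, halfwidth=3):
    """Split a 1-D signal into per-beat segments ('words') around
    R-peak-like local maxima, instead of fixed-size sliding windows."""
    peaks = [i for i in range(1, len(ecg) - 1)
             if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]]
    return [ecg[max(0, p - halfwidth):p + halfwidth + 1] for p in peaks]

# Synthetic "ECG": three spikes at irregular intervals (irregular rhythm).
sig = np.zeros(40)
sig[[5, 18, 33]] = 1.0
words = heartbeat_words(sig)
print(len(words))  # one word per beat; the word sequence is the "sentence"
```

A fixed-step window over the same signal would split beats arbitrarily and mix rhythm into every token; cutting at landmarks keeps form (within a word) and rhythm (across words) separable.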
|
2502.10708
|
Injecting Domain-Specific Knowledge into Large Language Models: A
Comprehensive Survey
|
cs.CL
|
Large Language Models (LLMs) have demonstrated remarkable success in various
tasks such as natural language understanding, text summarization, and machine
translation. However, their general-purpose nature often limits their
effectiveness in domain-specific applications that require specialized
knowledge, such as healthcare, chemistry, or legal analysis. To address this,
researchers have explored diverse methods to enhance LLMs by integrating
domain-specific knowledge. In this survey, we provide a comprehensive overview
of these methods, which we categorize into four key approaches: dynamic
knowledge injection, static knowledge embedding, modular adapters, and prompt
optimization. Each approach offers unique mechanisms to equip LLMs with domain
expertise, balancing trade-offs between flexibility, scalability, and
efficiency. We discuss how these methods enable LLMs to tackle specialized
tasks, compare their advantages and disadvantages, evaluate domain-specific
LLMs against general LLMs, and highlight the challenges and opportunities in
this emerging field. For those interested in delving deeper into this area, we
also summarize the commonly used datasets and benchmarks. To keep researchers
updated on the latest studies, we maintain an open-source repository at:
https://github.com/abilliyb/Knowledge_Injection_Survey_Papers, dedicated to
documenting research in the field of specialized LLMs.
|
2502.10709
|
An Empirical Analysis of Uncertainty in Large Language Model Evaluations
|
cs.CL cs.AI
|
As LLM-as-a-Judge emerges as a new paradigm for assessing large language
models (LLMs), concerns have been raised regarding the alignment, bias, and
stability of LLM evaluators. While substantial work has focused on alignment
and bias, little research has concentrated on the stability of LLM evaluators.
In this paper, we conduct extensive experiments involving 9 widely used LLM
evaluators across 2 different evaluation settings to investigate the
uncertainty in model-based LLM evaluations. We pinpoint that LLM evaluators
exhibit varying uncertainty based on model families and sizes. With careful
comparative analyses, we find that employing special prompting strategies,
whether during inference or post-training, can alleviate evaluation uncertainty
to some extent. By utilizing uncertainty to enhance the LLM's reliability and
detection capability on Out-Of-Distribution (OOD) data, we further fine-tune an
uncertainty-aware LLM evaluator named ConfiLM using a human-annotated
fine-tuning set and assess ConfiLM's OOD evaluation ability on a manually
designed test set sourced from the 2024 Olympics. Experimental results
demonstrate that incorporating uncertainty as additional information during the
fine-tuning phase can largely improve the model's evaluation performance in OOD
scenarios. The code and data are released at:
https://github.com/hasakiXie123/LLM-Evaluator-Uncertainty.
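Evaluator stability can be quantified directly: repeat the same judgment several times and measure the entropy of the verdicts. A minimal sketch (the verdict lists are illustrative, not the paper's data):

```python
from collections import Counter
from math import log2

def judgment_entropy(judgments):
    """Shannon entropy (bits) over repeated judge verdicts:
    0 = perfectly stable evaluator, higher = more uncertain."""
    counts = Counter(judgments)
    n = len(judgments)
    return -sum((c / n) * log2(c / n) for c in counts.values())

stable = ["A", "A", "A", "A", "A", "A", "A", "A"]
unstable = ["A", "B", "A", "B", "B", "A", "A", "B"]
print(judgment_entropy(stable), judgment_entropy(unstable))  # 0.0 vs 1.0
```

A signal like this, exposed to the evaluator as an additional feature, is the kind of uncertainty information the abstract describes feeding into fine-tuning.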
|
2502.10712
|
FuncGenFoil: Airfoil Generation and Editing Model in Function Space
|
cs.LG cs.AI
|
Aircraft manufacturing is a crown jewel of industry, in which generating
high-fidelity airfoil geometries with controllable and editable
representations remains a fundamental challenge. While existing
deep-learning-based methods rely on predefined parametric function families,
e.g., Bézier curves and discrete point-based representations, they suffer
from inherent trade-offs between expressiveness and resolution flexibility. To
tackle this challenge, we introduce FuncGenFoil, a novel function-space
generative model that directly learns functional airfoil geometries. Our method
inherits both the advantages of arbitrary resolution sampling and the
smoothness of parametric functions, as well as the strong expressiveness of
discrete point-based functions. Empirical evaluations on the AFBench dataset
demonstrate that FuncGenFoil improves upon state-of-the-art methods in airfoil
generation by achieving a relative -74.4 label error reduction and +23.2
diversity increase on the AF-200K dataset. Our results highlight the advantages
of function-space modeling for aerodynamic shape optimization, offering a
powerful and flexible framework for high-fidelity airfoil design. Our code will
be released.
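For contrast with function-space modeling, a fixed parametric family such as a Bézier curve already allows arbitrary-resolution sampling of one shape; the limitation is expressiveness, not resolution. A de Casteljau sketch with made-up control points (not an actual airfoil):

```python
import numpy as np

def bezier(control_pts, ts):
    """Evaluate a Bézier curve at arbitrary parameters via de Casteljau,
    so the same parametric shape can be sampled at any resolution."""
    pts = []
    for t in ts:
        p = np.asarray(control_pts, dtype=float)
        while len(p) > 1:                    # repeated linear interpolation
            p = (1 - t) * p[:-1] + t * p[1:]
        pts.append(p[0])
    return np.array(pts)

# A crude 4-point arc (illustrative control points, not airfoil data).
ctrl = [[0.0, 0.0], [0.3, 0.2], [0.7, 0.2], [1.0, 0.0]]
coarse = bezier(ctrl, np.linspace(0, 1, 8))
fine = bezier(ctrl, np.linspace(0, 1, 200))  # same geometry, higher resolution
print(coarse[0], coarse[-1])  # endpoints coincide with first/last control points
```

A function-space generative model keeps this resolution freedom while dropping the fixed low-dimensional family, which is the trade-off the abstract targets.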
|
2502.10713
|
Improving action segmentation via explicit similarity measurement
|
cs.CV
|
Existing supervised action segmentation methods depend on the quality of
frame-wise classification using attention mechanisms or temporal convolutions
to capture temporal dependencies. Even boundary detection-based methods
primarily depend on the accuracy of an initial frame-wise classification, which
can overlook precise identification of segments and boundaries in case of
low-quality prediction. To address this problem, this paper proposes ASESM
(Action Segmentation via Explicit Similarity Measurement) to enhance the
segmentation accuracy by incorporating explicit similarity evaluation across
frames and predictions. Our supervised learning architecture uses frame-level
multi-resolution features as input to multiple Transformer encoders. The
resulting multiple frame-wise predictions are used for similarity voting to
obtain a high-quality initial prediction. We apply a newly proposed boundary
correction algorithm that operates based on feature similarity between
consecutive frames to adjust the boundary locations iteratively through the
learning process. The corrected prediction is then further refined through
multiple stages of temporal convolutions. As post-processing, we optionally
apply boundary correction again followed by a segment smoothing method that
removes outlier classes within segments using similarity measurement between
consecutive predictions. Additionally, we propose a fully unsupervised boundary
detection-correction algorithm that identifies segment boundaries based solely
on feature similarity without any training. Experiments on 50Salads, GTEA, and
Breakfast datasets show the effectiveness of both the supervised and
unsupervised algorithms. Code and models are made available on Github.
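The feature-similarity idea behind the unsupervised boundary detector can be sketched minimally: declare a boundary wherever cosine similarity between consecutive frame features drops. The threshold and toy features below are illustrative, not the paper's exact correction algorithm:

```python
import numpy as np

def boundaries_from_similarity(features, threshold=0.9):
    """Mark a segment boundary wherever cosine similarity between
    consecutive frame features falls below a threshold."""
    F = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = (F[:-1] * F[1:]).sum(axis=1)  # cosine between frames t and t+1
    return [i + 1 for i, s in enumerate(sims) if s < threshold]

# Toy frame features: 5 frames of "action A", then 5 of "action B".
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])
frames = np.vstack([np.tile(A, (5, 1)), np.tile(B, (5, 1))])
print(boundaries_from_similarity(frames))  # [5]: the A-to-B transition
```

Because it needs only the features, the same test can run with no training at all, which is what makes a fully unsupervised boundary detection-correction variant possible.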
|
2502.10714
|
Disentangle Nighttime Lens Flares: Self-supervised Generation-based Lens
Flare Removal
|
cs.CV
|
Lens flares arise from light reflection and refraction within sensor arrays,
whose diverse types include glow, veiling glare, reflective flare and so on.
Existing methods are specialized for one specific type only and overlook the
simultaneous occurrence of multiple types of lens flares, which is common in
the real world, e.g., the coexistence of glow and displacement reflections
from the same light source. These co-occurring lens flares cannot be
effectively resolved by simply combining individual flare removal methods:
because the coexisting flares originate from the same light source and are
generated simultaneously within the same sensor array, they exhibit a complex
interdependence rather than a simple additive relation. To model this
interdependent flare
relationship, our Nighttime Lens Flare Formation model is the first attempt to
learn the intrinsic physical relationship between flares on the imaging plane.
Building on this physical model, we introduce a solution to this joint flare
removal task named Self-supervised Generation-based Lens Flare Removal Network
(SGLFR-Net), which is self-supervised without pre-training. Specifically, the
nighttime glow is disentangled in the PSF Rendering Network (PSFR-Net) based
on a PSF rendering prior, while the reflective flare is modelled in the
Texture Prior Based Reflection Flare Removal Network (TPRR-Net). Empirical
evaluations demonstrate
the effectiveness of the proposed method in both joint and individual glare
removal tasks.
|
2502.10716
|
Why Domain Generalization Fail? A View of Necessity and Sufficiency
|
cs.LG stat.ML
|
Despite a strong theoretical foundation, empirical experiments reveal that
existing domain generalization (DG) algorithms often fail to consistently
outperform the ERM baseline. We argue that this issue arises because most DG
studies focus on establishing theoretical guarantees for generalization under
unrealistic assumptions, such as the availability of sufficient, diverse (or
even infinite) domains or access to target domain knowledge. As a result, the
extent to which domain generalization is achievable in scenarios with limited
domains remains largely unexplored. This paper seeks to address this gap by
examining generalization through the lens of the conditions necessary for its
existence and learnability. Specifically, we systematically establish a set of
necessary and sufficient conditions for generalization. Our analysis highlights
that existing DG methods primarily act as regularization mechanisms focused on
satisfying sufficient conditions, while often neglecting necessary ones.
However, sufficient conditions cannot be verified in settings with limited
training domains. In such cases, regularization targeting sufficient conditions
aims to maximize the likelihood of generalization, whereas regularization
targeting necessary conditions ensures its existence. Using this analysis, we
reveal the shortcomings of existing DG algorithms by showing that, while they
promote sufficient conditions, they inadvertently violate necessary conditions.
To validate our theoretical insights, we propose a practical method that
promotes the sufficient condition while maintaining the necessary conditions
through a novel subspace representation alignment strategy. This approach
highlights the advantages of preserving the necessary conditions on
well-established DG benchmarks.
|
2502.10718
|
Hyperdimensional Intelligent Sensing for Efficient Real-Time Audio
Processing on Extreme Edge
|
cs.SD cs.AI eess.AS
|
The escalating challenges of managing vast sensor-generated data,
particularly in audio applications, necessitate innovative solutions. Current
systems face significant computational and storage demands, especially in
real-time applications like gunshot detection systems (GSDS), and the
proliferation of edge sensors exacerbates these issues. This paper proposes a
groundbreaking approach with a near-sensor model tailored for intelligent
audio-sensing frameworks. Utilizing a Fast Fourier Transform (FFT) module,
convolutional neural network (CNN) layers, and HyperDimensional Computing
(HDC), our model excels in low-energy, rapid inference, and online learning. It
is highly adaptable for efficient ASIC design implementation, offering superior
energy efficiency compared to conventional embedded CPUs or GPUs, and is
compatible with the trend of shrinking microphone sensor sizes. Comprehensive
evaluations at both software and hardware levels underscore the model's
efficacy. Software assessments through detailed ROC curve analysis revealed a
delicate balance between energy conservation and quality loss, achieving up to
82.1% energy savings with only 1.39% quality loss. Hardware evaluations
highlight the model's commendable energy efficiency when implemented via ASIC
design, especially with the Google Edge TPU, showcasing its superiority over
prevalent embedded CPUs and GPUs.
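The HyperDimensional Computing (HDC) stage can be sketched with the classic recipe: random-projection encoding into bipolar hypervectors, bundling (summing) per class, and dot-product matching at inference. Dimensionality, synthetic data, and class names below are toy assumptions, not the paper's model:

```python
import numpy as np

D = 10_000  # hypervector dimensionality; high D makes classes near-orthogonal
rng = np.random.default_rng(0)

def encode(features, proj):
    """Map a feature vector to a bipolar hypervector via random projection."""
    return np.sign(proj @ features)

proj = rng.standard_normal((D, 16))

# Train: bundle (sum) encoded examples per class into one class hypervector.
gunshot_examples = rng.standard_normal((20, 16)) + 2.0
background_examples = rng.standard_normal((20, 16)) - 2.0
classes = {
    "gunshot": np.sign(sum(encode(x, proj) for x in gunshot_examples)),
    "background": np.sign(sum(encode(x, proj) for x in background_examples)),
}

def classify(features):
    hv = encode(features, proj)
    return max(classes, key=lambda c: classes[c] @ hv)  # best dot-product match

print(classify(np.full(16, 2.0)))  # should match the gunshot class
```

Training is a single accumulation pass and inference is sign/add/compare arithmetic, which is why HDC maps well onto low-energy ASIC implementations and supports online updates (just keep adding encoded examples into the class vector).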
|
2502.10720
|
NPSim: Nighttime Photorealistic Simulation From Daytime Images With
Monocular Inverse Rendering and Ray Tracing
|
cs.CV cs.GR
|
Semantic segmentation is an important task for autonomous driving. A powerful
autonomous driving system should be capable of handling images under all
conditions, including nighttime. Generating accurate and diverse nighttime
semantic segmentation datasets is crucial for enhancing the performance of
computer vision algorithms in low-light conditions. In this thesis, we
introduce a novel approach named NPSim, which enables the simulation of
realistic nighttime images from real daytime counterparts with monocular
inverse rendering and ray tracing. NPSim comprises two key components: mesh
reconstruction and relighting. The mesh reconstruction component generates an
accurate representation of the scene structure by combining geometric
information extracted from the input RGB image and semantic information from
its corresponding semantic labels. The relighting component integrates
real-world nighttime light sources and material characteristics to simulate the
complex interplay of light and object surfaces under low-light conditions. The
scope of this thesis mainly focuses on the implementation and evaluation of the
mesh reconstruction component. Through experiments, we demonstrate the
effectiveness of the mesh reconstruction component in producing high-quality
scene meshes and their generality across different autonomous driving datasets.
We also propose a detailed experiment plan for evaluating the entire pipeline,
including both quantitative metrics in training state-of-the-art supervised and
unsupervised semantic segmentation approaches and human perceptual studies,
aiming to indicate the capability of our approach to generate realistic
nighttime images and the value of our dataset in steering future progress in
the field.
|
2502.10721
|
A Comprehensive Survey of Deep Learning for Multivariate Time Series
Forecasting: A Channel Strategy Perspective
|
cs.LG
|
Multivariate Time Series Forecasting (MTSF) plays a crucial role across
diverse fields, ranging from economic, energy, to traffic. In recent years,
deep learning has demonstrated outstanding performance in MTSF tasks. In MTSF,
modeling the correlations among different channels is critical, as leveraging
information from other related channels can significantly improve the
prediction accuracy of a specific channel. This study systematically reviews
the channel modeling strategies for time series and proposes a taxonomy
organized into three hierarchical levels: the strategy perspective, the
mechanism perspective, and the characteristic perspective. On this basis, we
provide a structured analysis of these methods and conduct an in-depth
examination of the advantages and limitations of different channel strategies.
Finally, we summarize and discuss some future research directions to provide
useful research guidance. Moreover, we maintain an up-to-date Github repository
(https://github.com/decisionintelligence/CS4TS) which includes all the papers
discussed in the survey.
|
2502.10723
|
A Mathematics Framework of Artificial Shifted Population Risk and Its
Further Understanding Related to Consistency Regularization
|
cs.LG cs.AI
|
Data augmentation is an important technique in training deep neural networks
as it enhances their ability to generalize and remain robust. While data
augmentation is commonly used to expand the sample size and act as a
consistency regularization term, there is a lack of research on the
relationship between them. To address this gap, this paper introduces a more
comprehensive mathematical framework for data augmentation. Through this
framework, we establish that the expected risk of the shifted population is the
sum of the original population risk and a gap term, which can be interpreted as
a consistency regularization term. The paper also provides a theoretical
understanding of this gap, highlighting its negative effects on the early
stages of training. We also propose a method to mitigate these effects. To
validate our approach, we conducted experiments using the same data augmentation
techniques and computing resources under several scenarios, including standard
training, out-of-distribution, and imbalanced classification. The results
demonstrate that our method surpasses the compared methods in all scenarios in
terms of generalization ability and convergence stability. We provide our code
implementation at the following link: https://github.com/ydlsfhll/ASPR.
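In illustrative notation (the abstract does not give the paper's exact symbols: $f$ a model, $\ell$ a loss, $T$ a random augmentation), the decomposition described above, with the gap term playing the role of a consistency regularizer, can be written as:

```latex
\mathbb{E}_{(x,y)}\,\mathbb{E}_{T}\big[\ell(f(T(x)),y)\big]
= \underbrace{\mathbb{E}_{(x,y)}\big[\ell(f(x),y)\big]}_{\text{original population risk}}
+ \underbrace{\mathbb{E}_{(x,y)}\,\mathbb{E}_{T}\big[\ell(f(T(x)),y)-\ell(f(x),y)\big]}_{\text{gap (consistency) term}}
```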
|
2502.10724
|
Semantics-aware Test-time Adaptation for 3D Human Pose Estimation
|
cs.CV
|
This work highlights a semantics misalignment in 3D human pose estimation.
For the task of test-time adaptation, the misalignment manifests as overly
smoothed and unguided predictions. The smoothing settles predictions towards
some average pose. Furthermore, when there are occlusions or truncations, the
adaptation becomes fully unguided. To address this, we pioneer the integration of a
semantics-aware motion prior for the test-time adaptation of 3D pose
estimation. We leverage video understanding and a well-structured motion-text
space to adapt the model motion prediction to adhere to video semantics during
test time. Additionally, we incorporate a missing 2D pose completion based on
the motion-text similarity. The pose completion strengthens the motion prior's
guidance for occlusions and truncations. Our method significantly improves
state-of-the-art 3D human pose estimation TTA techniques, with more than 12%
decrease in PA-MPJPE on 3DPW and 3DHP.
|
2502.10725
|
PropNet: a White-Box and Human-Like Network for Sentence Representation
|
cs.CL cs.AI
|
Transformer-based embedding methods have dominated the field of sentence
representation in recent years. Although they have achieved remarkable
performance on NLP missions, such as semantic textual similarity (STS) tasks,
their black-box nature and large-data-driven training style have raised
concerns, including issues related to bias, trust, and safety. Many efforts
have been made to improve the interpretability of embedding models, but these
problems have not been fundamentally resolved. To achieve inherent
interpretability, we propose a purely white-box and human-like sentence
representation network, PropNet. Inspired by findings from cognitive science,
PropNet constructs a hierarchical network based on the propositions contained
in a sentence. While experiments indicate that PropNet has a significant gap
compared to state-of-the-art (SOTA) embedding models in STS tasks, case studies
reveal substantial room for improvement. Additionally, PropNet enables us to
analyze and understand the human cognitive processes underlying STS benchmarks.
|
2502.10728
|
Construction A Lattice Design Based on the Truncated Union Bound
|
cs.IT math.IT
|
This paper considers $n=128$-dimensional Construction A lattice design, using
binary codes with known minimum Hamming distance and codeword multiplicity (the
number of minimum-weight codewords). A truncated theta series
of the lattice is explicitly given to obtain the truncated union bound to
estimate the word error rate under maximum likelihood decoding. The best
component code is selected by minimizing the required volume-to-noise ratio
(VNR) for a target word error rate $P_e$. The estimate becomes accurate for
$P_e \leq 10^{-4}$, and design examples are given with the best extended BCH
codes and polar codes for $P_e= 10^{-4}$ to $10^{-8}$. A lower error rate is
achieved compared to that by the classic balanced distance rule and the equal
error probability rule. The $(128, 106, 8)$ EBCH code gives the best-known
$n=128$ construction A lattice at $P_e= 10^{-5}$.
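A truncated union bound of the kind described above can be sketched as follows; the shell data (squared norms, multiplicities) and noise level below are made up for illustration and are not taken from the paper.

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def truncated_union_bound(theta_terms, sigma):
    """Estimate the ML word error rate from the first few theta-series
    terms, given as (squared_norm, multiplicity) pairs:
        P_e  <~  sum_j  N_j * Q(||lambda_j|| / (2 * sigma)).
    """
    return sum(mult * q_func(math.sqrt(sq_norm) / (2.0 * sigma))
               for sq_norm, mult in theta_terms)

# Made-up shell data: (squared norm, multiplicity) of the nearest shells.
est = truncated_union_bound([(4.0, 1000), (8.0, 50000)], sigma=0.25)
```

In a design loop, this estimate would be evaluated per candidate code and the code minimizing the required VNR for a target $P_e$ selected.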
|
2502.10729
|
VarGes: Improving Variation in Co-Speech 3D Gesture Generation via
StyleCLIPS
|
cs.CV
|
Generating expressive and diverse human gestures from audio is crucial in
fields like human-computer interaction, virtual reality, and animation. Though
existing methods have achieved remarkable performance, they often exhibit
limitations due to constrained dataset diversity and the restricted amount of
information derived from audio inputs. To address these challenges, we present
VarGes, a novel variation-driven framework designed to enhance co-speech
gesture generation by integrating visual stylistic cues while maintaining
naturalness. Our approach begins with the Variation-Enhanced Feature Extraction
(VEFE) module, which seamlessly incorporates style-reference
video data into a 3D human pose estimation network to extract StyleCLIPS,
thereby enriching the input with stylistic information. Subsequently, we employ
the Variation-Compensation Style Encoder (VCSE), a transformer-style encoder
equipped with an additive attention mechanism pooling layer, to robustly encode
diverse StyleCLIPS representations and effectively manage stylistic variations.
Finally, the Variation-Driven Gesture Predictor (VDGP) module fuses MFCC audio
features with StyleCLIPS encodings via cross-attention, injecting this fused
data into a cross-conditional autoregressive model to modulate 3D human gesture
generation based on audio input and stylistic clues. The efficacy of our
approach is validated on benchmark datasets, where it outperforms existing
methods in terms of gesture diversity and naturalness. The code and video
results will be made publicly available upon acceptance:
https://github.com/mookerr/VarGES/.
|
2502.10732
|
Rule-Bottleneck Reinforcement Learning: Joint Explanation and Decision
Optimization for Resource Allocation with Language Agents
|
cs.LG cs.AI
|
Deep Reinforcement Learning (RL) is remarkably effective in addressing
sequential resource allocation problems in domains such as healthcare, public
policy, and resource management. However, deep RL policies often lack
transparency and adaptability, challenging their deployment alongside human
decision-makers. In contrast, Language Agents, powered by large language models
(LLMs), provide human-understandable reasoning but may struggle with effective
decision making. To bridge this gap, we propose Rule-Bottleneck Reinforcement
Learning (RBRL), a novel framework that jointly optimizes decision and
explanations. At each step, RBRL generates candidate rules with an LLM, selects
among them using an attention-based RL policy, and determines the environment
action with an explanation via chain-of-thought reasoning. The RL rule
selection is optimized using the environment rewards and an explainability
metric judged by the LLM. Evaluations in real-world scenarios highlight RBRL's
competitive performance with deep RL and efficiency gains over LLM fine-tuning.
A survey further confirms the enhanced quality of its explanations.
|
2502.10734
|
Motion planning for highly-dynamic unconditioned reflexes based on
chained Signed Distance Functions
|
cs.RO
|
The unconditioned reflex (e.g., protective reflex), which is the innate
reaction of the organism and usually performed through the spinal cord rather
than the brain, can enable organisms to escape harms from environments. In this
paper, we propose an online, highly-dynamic motion planning algorithm to endow
manipulators with highly-dynamic unconditioned reflexes to humans and/or
environments. Our method is based on a chained version of Signed Distance
Functions (SDFs), which can be pre-computed and stored. Our proposed algorithm
is divided into two stages. In the offline stage, we create three groups of
local SDFs to store the geometric information of the manipulator and its
working environment. In the online stage, the pre-computed local SDFs are
chained together according to the configuration of the manipulator to provide
global geometric information about the environment. The point clouds of dynamic
objects then serve as query points for looking up these local SDFs to quickly
generate escape velocities. We further propose a modified geometric Jacobian
matrix and use the Jacobian-pseudo-inverse method to generate real-time reflex
behaviors to avoid the static and dynamic obstacles in the environment. The
benefits of our method are validated in both static and dynamic scenarios. In
the static scenario, our method identifies the path solutions with lower time
consumption and shorter trajectory length compared to existing solutions. In
the dynamic scenario, our method can reliably pursue the dynamic target point,
avoid dynamic obstacles, and react to these obstacles within 1 ms, which
surpasses the unconditioned reflex reaction time of humans.
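A minimal sketch of the online step, under stated assumptions: a sphere SDF stands in for the pre-computed chained local SDFs, the 3x3 Jacobian is an arbitrary illustrative matrix, and the gain/influence parameters are hypothetical, not the paper's.

```python
import numpy as np

def sphere_sdf(p, center, radius):
    """Stand-in for a precomputed local SDF: signed distance to a sphere."""
    return np.linalg.norm(p - center) - radius

def escape_velocity(query_pt, center, radius, gain=1.0, influence=0.5):
    """Push the queried point along the SDF gradient (away from the
    obstacle) once it enters the influence region."""
    d = sphere_sdf(query_pt, center, radius)
    if d >= influence:
        return np.zeros(3)
    grad = (query_pt - center) / (np.linalg.norm(query_pt - center) + 1e-9)
    return gain * (influence - d) * grad

# Map the Cartesian escape velocity to joint space via the
# Jacobian pseudo-inverse: q_dot = J^+ v.
J = np.array([[0.0, 1.0, 0.0],   # illustrative 3x3 geometric Jacobian
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
v = escape_velocity(np.array([0.3, 0.0, 0.0]), np.zeros(3), 0.2)
q_dot = np.linalg.pinv(J) @ v
```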
|
2502.10735
|
OPTISHEAR: Towards Efficient and Adaptive Pruning of Large Language
Models via Evolutionary Optimization
|
cs.CL
|
Post-training pruning has emerged as a crucial optimization technique as
large language models (LLMs) continue to grow rapidly. However, the significant
variations in weight distributions across different LLMs make fixed pruning
strategies inadequate for multiple models. In this paper, we introduce
\textbf{\textsc{OptiShear}}, an efficient evolutionary optimization framework
for adaptive LLM pruning. Our framework features two key innovations: an
effective search space built on our Meta pruning metric to handle diverse
weight distributions, and a model-wise reconstruction error for rapid
evaluation during search trials. We employ Non-dominated Sorting Genetic
Algorithm III (NSGA-III) to optimize both pruning metrics and layerwise
sparsity ratios. Through extensive evaluation on LLaMA-1/2/3 and Mistral models
(7B-70B) across multiple benchmarks, we demonstrate that our adaptive pruning
metrics consistently outperform existing methods. Additionally, our discovered
layerwise sparsity ratios enhance the effectiveness of other pruning metrics.
The framework exhibits strong cross-task and cross-model generalizability,
providing a cost-effective solution for model compression.
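The abstract does not specify the Meta pruning metric itself; as a hedged sketch, a generic magnitude-times-activation saliency (in the spirit of Wanda-style metrics) with a per-layer sparsity ratio would look like this. All names and the 0.5 ratio are illustrative.

```python
import numpy as np

def prune_layer(W, act_norms, sparsity):
    """Zero the weights with the smallest saliency |W_ij| * ||x_j||,
    keeping a (1 - sparsity) fraction of entries in this layer."""
    saliency = np.abs(W) * act_norms[np.newaxis, :]
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy()
    thresh = np.partition(saliency.ravel(), k - 1)[k - 1]
    mask = saliency > thresh       # drop everything at or below threshold
    return W * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
pruned = prune_layer(W, np.ones(8), sparsity=0.5)
```

In the framework described above, both the metric's form and the per-layer sparsity ratios would be the quantities searched over by NSGA-III rather than fixed as here.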
|
2502.10739
|
BASE-SQL: A powerful open source Text-To-SQL baseline approach
|
cs.CL
|
The conversion of natural language into SQL language for querying databases
(Text-to-SQL) has broad application prospects and has attracted widespread
attention. At present, the mainstream Text-to-SQL methods are mainly divided
into in-context learning (ICL) based methods and supervised fine-tuning (SFT)
based methods. ICL-based methods can achieve relatively good results thanks to
the use of the most advanced closed-source models. However, in real-world
application scenarios, factors such as data privacy, SQL generation efficiency
and cost need to be considered. SFT-based methods therefore have certain
advantages. At present, however, methods based on fine-tuning open-source
models lack an easy-to-implement and cost-effective baseline. We propose a
pipeline-based method using open source model fine-tuning, referred to as
BASE-SQL, which includes four components: Schema Linking, Candidate SQL
Generate, SQL Revision and SQL Merge Revision. Experimental results show that
BASE-SQL uses the open source model Qwen2.5-Coder-32B-Instruct, and achieves an
accuracy of 67.47% on the BIRD development set and 88.9% on the Spider test
set, which is significantly better than other methods using open source models,
and even exceeds several methods using the GPT-4o closed-source model. At the
same time, BASE-SQL is easy to implement and highly efficient (on average, only
five calls to the large language model are required to generate one SQL query).
The
code will be open sourced at https://github.com/CycloneBoy/base_sql.
|
2502.10742
|
The Philosophical Foundations of Growing AI Like A Child
|
cs.AI
|
Despite excelling in high-level reasoning, current language models lack
robustness in real-world scenarios and perform poorly on fundamental
problem-solving tasks that are intuitive to humans. This paper argues that both
challenges stem from a core discrepancy between human and machine cognitive
development. While both systems rely on increasing representational power, the
absence of core knowledge (foundational cognitive structures in humans)
prevents language models from developing robust, generalizable abilities, where
complex
skills are grounded in simpler ones within their respective domains. It
explores empirical evidence of core knowledge in humans, analyzes why language
models fail to acquire it, and argues that this limitation is not an inherent
architectural constraint. Finally, it outlines a workable proposal for
systematically integrating core knowledge into future multi-modal language
models through the large-scale generation of synthetic training data using a
cognitive prototyping strategy.
|
2502.10743
|
1bit-Merging: Dynamic Quantized Merging for Large Language Models
|
cs.CL
|
Recent advances in large language models have led to specialized models
excelling in specific domains, creating a need for efficient model merging
techniques. While traditional merging approaches combine parameters into a
single static model, they often compromise task-specific performance.
Task-specific routing methods, in contrast, maintain accuracy but introduce
substantial
storage overhead. We present \texttt{1bit}-Merging, a novel framework that
integrates task-specific routing with 1-bit quantized task vectors to balance
performance and storage efficiency. Our approach leverages the observation that
different task-specific models store knowledge in distinct layers-chat models
primarily in attention layers and math/code models in MLP layers-enabling
targeted compression strategies. Through extensive experiments with LLaMA2 and
Mistral model families across chat, mathematical reasoning, and code generation
tasks, we demonstrate that \texttt{1bit}-Merging achieves comparable or
superior performance to existing methods while significantly reducing storage
requirements. Our framework offers a practical solution for combining
specialized models while maintaining their individual strengths and addressing
the storage challenges of current approaches.
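The exact quantization scheme is not given in the abstract; a common sign-plus-scale construction for 1-bit task vectors (as in BitDelta-style methods) can be sketched as follows, with all tensor sizes illustrative.

```python
import numpy as np

def one_bit_quantize(base, finetuned):
    """Quantize the task vector (finetuned - base) into a sign tensor
    plus a single per-tensor scale (mean absolute value)."""
    delta = finetuned - base
    return np.sign(delta), float(np.mean(np.abs(delta)))

def dequantize(base, signs, scale):
    """Reconstruct approximate task-specific weights: base + 1-bit delta."""
    return base + scale * signs

rng = np.random.default_rng(1)
base = rng.normal(size=(16, 16))
finetuned = base + 0.01 * rng.normal(size=(16, 16))
signs, scale = one_bit_quantize(base, finetuned)
merged = dequantize(base, signs, scale)
```

Storing only signs and one scalar per tensor reduces the per-task storage to roughly 1 bit per parameter, which is what makes routing over many task vectors affordable.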
|
2502.10749
|
LoRE-Merging: Exploring Low-Rank Estimation For Large Language Model
Merging
|
cs.CL cs.AI
|
While most current approaches rely on further training techniques, such as
fine-tuning or reinforcement learning, to enhance model capacities, model
merging stands out for its ability to improve models without requiring any
additional training. In this paper, we propose a unified framework for model
merging based on low-rank estimation of task vectors without the need for
access to the base model, named \textsc{LoRE-Merging}. Our approach is
motivated by the observation that task vectors from fine-tuned models
frequently exhibit a limited number of dominant singular values, making
low-rank estimations less prone to interference. We implement the method by
formulating the merging problem as an optimization problem. Extensive empirical
experiments demonstrate the effectiveness of our framework in mitigating
interference and preserving task-specific information, thereby advancing the
state-of-the-art performance in model merging techniques.
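The low-rank-estimation idea can be sketched with a plain SVD truncation; note the paper formulates merging as an optimization problem that avoids access to the base model, whereas the sketch below assumes base weights are available purely for illustration.

```python
import numpy as np

def low_rank_task_vector(base, finetuned, rank):
    """Approximate the task vector by its top-`rank` singular components."""
    U, S, Vt = np.linalg.svd(finetuned - base, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

def lore_style_merge(base, finetuned_models, rank):
    """Add the sum of low-rank task-vector estimates to the base weights."""
    merged = base.copy()
    for ft in finetuned_models:
        merged = merged + low_rank_task_vector(base, ft, rank)
    return merged

rng = np.random.default_rng(2)
base = rng.normal(size=(10, 10))
models = [base + rng.normal(size=(10, 10)) for _ in range(2)]
merged = lore_style_merge(base, models, rank=3)
```

Truncating to the dominant singular values is what makes the estimates "less prone to interference": small singular directions, where task vectors are most likely to collide, are discarded before summation.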
|
2502.10750
|
Human-Centric Community Detection in Hybrid Metaverse Networks with
Integrated AI Entities
|
cs.SI cs.AI
|
Community detection is a cornerstone problem in social network analysis
(SNA), aimed at identifying cohesive communities with minimal external links.
However, the rise of generative AI and the Metaverse introduces complexities by
creating hybrid human-AI social networks (denoted by HASNs), where traditional
methods fall short, especially in human-centric settings. This paper introduces
a novel community detection problem in HASNs (denoted by MetaCD), which seeks
to enhance human connectivity within communities while reducing the presence of
AI nodes. Effective processing of MetaCD poses challenges due to the delicate
trade-off between excluding certain AI nodes and maintaining community
structure. To address this, we propose CUSA, an innovative framework
incorporating AI-aware clustering techniques that navigate this trade-off by
selectively retaining AI nodes that contribute to community integrity.
Furthermore, given the scarcity of real-world HASNs, we devise four strategies
for synthesizing these networks under various hypothetical scenarios. Empirical
evaluations on real social networks, reconfigured as HASNs, demonstrate the
effectiveness and practicality of our approach compared to traditional non-deep
learning and graph neural network (GNN)-based methods.
|
2502.10760
|
Why is prompting hard? Understanding prompts on binary sequence
predictors
|
cs.CL cs.LG stat.ML
|
Large language models (LLMs) can be prompted to do many tasks, but finding
good prompts is not always easy, nor is understanding some performant prompts.
We explore these issues by viewing prompting as conditioning a near-optimal
sequence predictor (LLM) pretrained on diverse data sources. Through numerous
prompt search experiments, we show that the unintuitive patterns in optimal
prompts can be better understood given the pretraining distribution, which is
often unavailable in practice. Moreover, even using exhaustive search, reliably
identifying optimal prompts from practical neural predictors can be difficult.
Further, we demonstrate that common prompting methods, such as using intuitive
prompts or samples from the targeted task, are in fact suboptimal. Thus, this
work takes an initial step towards understanding the difficulties in finding
and understanding optimal prompts from a statistical and empirical perspective.
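The framing of prompting as conditioning a sequence predictor can be made concrete in a toy binary setting: a Bayesian mixture over two Bernoulli "pretraining sources", where the prompt is a prefix that shifts the posterior over sources. The two biases below are illustrative.

```python
import numpy as np

def predictive_prob(prompt_bits, biases, prior=None):
    """Bayesian mixture over Bernoulli sources: condition on the prompt
    prefix, then return P(next bit = 1 | prompt)."""
    biases = np.asarray(biases, dtype=float)
    prior = np.ones_like(biases) / biases.size if prior is None else prior
    ones = sum(prompt_bits)
    zeros = len(prompt_bits) - ones
    lik = biases**ones * (1 - biases)**zeros   # Bernoulli likelihood of prompt
    post = prior * lik
    post = post / post.sum()
    return float(post @ biases)

# Pretraining "mixture" of a fair coin and a 0.9-biased coin; a short
# all-ones prompt steers the predictor toward the biased source.
p = predictive_prob([1, 1, 1, 1], biases=[0.5, 0.9])
```

Even here, the best prompt for eliciting a target behavior depends on the (usually unknown) pretraining mixture, mirroring the paper's point that optimal prompts can look unintuitive without it.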
|
2502.10761
|
A Whole-Body Disturbance Rejection Control Framework for Dynamic Motions
in Legged Robots
|
cs.RO
|
This letter presents a control framework for legged robots that enables
self-perception and resistance to external disturbances and model
uncertainties. First, a novel disturbance estimator is proposed, integrating
adaptive control and extended state observers (ESO) to estimate external
disturbances and model uncertainties. This estimator is embedded within the
whole-body control framework to compensate for disturbances in the legged
system. Second, a comprehensive whole-body disturbance rejection control
framework (WB-DRC) is introduced, accounting for the robot's full-body
dynamics. Compared to previous whole-body control frameworks, WB-DRC
effectively handles external disturbances and model uncertainties, with the
potential to adapt to complex terrain. Third, simulations of both biped and
quadruped robots are conducted in the Gazebo simulator to demonstrate the
effectiveness and versatility of WB-DRC. Finally, extensive experimental trials
on the quadruped robot validate the robustness and stability of the robot
system using WB-DRC under various disturbance conditions.
|
2502.10762
|
Bone Soups: A Seek-and-Soup Model Merging Approach for Controllable
Multi-Objective Generation
|
cs.LG cs.AI cs.CL
|
User information needs are often highly diverse and varied. A key challenge
in current research is how to achieve controllable multi-objective generation
while enabling rapid adaptation to accommodate diverse user demands during test
time. Existing solutions, such as Rewarded Soup, focus on merging language
models individually tuned on single objectives. While easy to implement and
widely used, these approaches face limitations in achieving optimal performance
due to their disregard for the impacts of competing objectives on model tuning.
To address this issue, we propose Bone Soup, a novel model merging approach
that first seeks a series of backbone models by considering the impacts of
multiple objectives and then makes the soup (i.e., merge the backbone models).
Specifically, Bone Soup begins by training multiple backbone models for
different objectives using multi-objective reinforcement learning. Each
backbone model is guided by a combination of backbone reward signals. To ensure
that these models are optimal for the Pareto front, the backbone rewards are
crafted by combining standard reward functions into basis vectors, which can
then be modified through a rule-based construction method. Bone Soup leverages
a symmetric circulant matrix mapping to generate the merging coefficients,
which are used to merge the backbone models according to user preferences.
Extensive experimental results demonstrate that Bone Soup exhibits strong
controllability and Pareto optimality in controllable multi-objective
generation, providing a more effective and efficient approach to addressing
diverse user needs at test time.
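The abstract does not spell out the symmetric circulant construction; the sketch below is one hypothetical reading, in which a circulant matrix maps a user preference vector to non-negative, normalized merging coefficients. The first row, clipping, and constant "backbone" tensors are all illustrative.

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix built by cyclically shifting its first row."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

def merge_backbones(backbones, user_pref, first_row):
    """Map user preferences to merging coefficients through a circulant
    matrix, then take the convex combination of backbone weights."""
    w = circulant(np.asarray(first_row, float)) @ np.asarray(user_pref, float)
    w = np.clip(w, 0.0, None)
    w = w / w.sum()
    return sum(wi * b for wi, b in zip(w, backbones)), w

# Three hypothetical backbone checkpoints (constant tensors for illustration).
backbones = [np.full((4, 4), v) for v in (0.0, 1.0, 2.0)]
merged, w = merge_backbones(backbones, [0.7, 0.2, 0.1], [0.8, 0.1, 0.1])
```

Because the mapping is a fixed linear transform, adapting to a new user preference at test time costs only one matrix-vector product plus a weighted sum of checkpoints, with no retraining.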
|
2502.10764
|
Learning to Explain Air Traffic Situation
|
cs.LG
|
Understanding how air traffic controllers construct a mental 'picture' of
complex air traffic situations is crucial but remains a challenge due to the
inherently intricate, high-dimensional interactions between aircraft, pilots,
and controllers. Previous work on modeling the strategies of air traffic
controllers and their mental image of traffic situations often centers on
specific air traffic control tasks or pairwise interactions between aircraft,
neglecting to capture the comprehensive dynamics of an air traffic situation.
To address this issue, we propose a machine learning-based framework for
explaining air traffic situations. Specifically, we employ a Transformer-based
multi-agent trajectory model that encapsulates both the spatio-temporal
movement of aircraft and social interaction between them. By deriving attention
scores from the model, we can quantify the influence of individual aircraft on
overall traffic dynamics. This provides explainable insights into how air
traffic controllers perceive and understand the traffic situation. Trained on
real-world air traffic surveillance data collected from the terminal airspace
around Incheon International Airport in South Korea, our framework effectively
explicates air traffic situations. This could potentially support and enhance
the decision-making and situational awareness of air traffic controllers.
|
2502.10768
|
Evaluating improvements on using Large Language Models (LLMs) for
property extraction in the Open Research Knowledge Graph (ORKG)
|
cs.IR cs.AI cs.CL
|
Current research highlights the great potential of Large Language Models
(LLMs) for constructing Scholarly Knowledge Graphs (SKGs). One particularly
complex step in this process is relation extraction, aimed at identifying
suitable properties to describe the content of research. This study builds
directly on previous research of three Open Research Knowledge Graph (ORKG)
team members who assessed the readiness of LLMs such as GPT-3.5, Llama 2, and
Mistral for property extraction in scientific literature. Given the moderate
performance observed, the previous work concluded that fine-tuning is needed to
improve these models' alignment with scientific tasks and their emulation of
human expertise. Expanding on this prior experiment, this study evaluates the
impact of advanced prompt engineering techniques and demonstrates that these
techniques can significantly enhance the results. Additionally, this
study extends the property extraction process to include property matching to
existing ORKG properties, which are retrieved via the API. The evaluation
reveals that results generated through advanced prompt engineering achieve a
higher proportion of matches with ORKG properties, further emphasizing the
enhanced alignment achieved. Moreover, this lays the groundwork for addressing
challenges such as the inconsistency of ORKG properties, an issue highlighted
in prior studies. By assigning unique URIs and using standardized terminology,
this work increases the consistency of the properties, fulfilling a crucial
aspect of Linked Data and the FAIR principles, core commitments of ORKG. This, in
turn, significantly enhances the applicability of ORKG content for subsequent
tasks such as comparisons of research publications. Finally, the study
concludes with recommendations for future improvements in the overall property
extraction process.
|
2502.10776
|
A Distillation-based Future-aware Graph Neural Network for Stock Trend
Prediction
|
cs.LG cs.AI q-fin.PM
|
Stock trend prediction involves forecasting the future price movements by
analyzing historical data and various market indicators. With the advancement
of machine learning, graph neural networks (GNNs) have been extensively
employed in stock prediction due to their powerful capability to capture
spatiotemporal dependencies of stocks. However, despite the efforts of various
GNN stock predictors to enhance predictive performance, the improvements remain
limited, as they focus solely on analyzing historical spatiotemporal
dependencies, overlooking the correlation between historical and future
patterns. In this study, we propose a novel distillation-based future-aware GNN
framework (DishFT-GNN) for stock trend prediction. Specifically, DishFT-GNN
iteratively trains a teacher model and a student model. The teacher model
learns to capture the correlation between distribution shifts of historical and
future data, which is then utilized as intermediate supervision to guide the
student model to learn future-aware spatiotemporal embeddings for accurate
prediction. Through extensive experiments on two real-world datasets, we verify
the state-of-the-art performance of DishFT-GNN.
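A hedged sketch of the intermediate-supervision idea: the student objective combines a task loss on trend labels with a hint term toward the teacher's embeddings. The loss form, the MSE hint, and the weight `alpha` are assumptions, not the paper's exact objective.

```python
import numpy as np

def distill_loss(student_emb, teacher_emb, logits, labels, alpha=0.5):
    """Student objective: cross-entropy on trend labels plus an
    intermediate-supervision term pulling student embeddings toward
    the teacher's future-aware embeddings."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = probs / probs.sum(axis=1, keepdims=True)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    hint = np.mean((student_emb - teacher_emb) ** 2)
    return ce + alpha * hint
```

At inference time only the student runs, so the future information the teacher saw during training enters predictions solely through the learned embeddings.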
|
2502.10777
|
Fast Transmission Control Adaptation for URLLC via Channel Knowledge Map
and Meta-Learning
|
cs.IT math.IT
|
This paper considers methods for delivering ultra reliable low latency
communication (URLLC) to enable mission-critical Internet of Things (IoT)
services in wireless environments with unknown channel distribution. The
methods rely upon the historical channel gain samples of a few locations in a
target area. We formulate a non-trivial transmission control adaptation problem
across the target area under the URLLC constraints. Then we propose two
solutions to solve this problem. The first is a power scaling scheme used in
conjunction with a deep reinforcement learning (DRL) algorithm, aided by a
channel knowledge map (CKM) and requiring no retraining, where the CKM exploits
the spatial correlation of the channel characteristics from the historical
channel gain samples. The second solution is model agnostic meta-learning
(MAML)-based meta-reinforcement learning algorithm that is trained from the
known channel gain samples following distinct channel distributions and can
quickly adapt to the new environment within a few steps of gradient update.
Simulation results indicate that the DRL-based algorithm can effectively meet
the reliability requirement of URLLC under various quality-of-service (QoS)
constraints. Then the adaptation capabilities of the power scaling scheme and
meta-reinforcement learning algorithm are also validated.
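The "few steps of gradient update" adaptation is the standard MAML inner loop; a generic sketch on a hypothetical quadratic environment loss (not the paper's URLLC objective) looks like this.

```python
import numpy as np

def adapt(theta, grad_fn, inner_lr=0.1, steps=3):
    """Inner loop of MAML-style adaptation: a few gradient steps from the
    meta-initialization toward a new environment's optimum."""
    th = theta.copy()
    for _ in range(steps):
        th = th - inner_lr * grad_fn(th)
    return th

# Hypothetical new environment: loss = ||theta - target||^2.
target = np.array([1.0, -2.0])
grad_fn = lambda th: 2.0 * (th - target)
adapted = adapt(np.zeros(2), grad_fn)
```

Meta-training chooses the initialization `theta` so that this inner loop converges quickly across the distribution of channel environments; here the zero initialization is arbitrary.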
|
2502.10784
|
Preconditioned Inexact Stochastic ADMM for Deep Model
|
cs.LG
|
The recent advancement of foundation models (FMs) has brought about a
paradigm shift, revolutionizing various sectors worldwide. The popular
optimizers used to train these models are stochastic gradient descent-based
algorithms, which face inherent limitations, such as slow convergence and
stringent assumptions for convergence. In particular, data heterogeneity
arising from distributed settings poses significant challenges to their
theoretical and numerical performance. This paper develops an algorithm, PISA
({P}reconditioned {I}nexact {S}tochastic {A}lternating Direction Method of
Multipliers), which enables scalable parallel computing and supports various
second-moment schemes. Grounded in rigorous theoretical guarantees, the
algorithm converges under the sole assumption of Lipschitz continuity of the
gradient, thereby removing the need for other conditions commonly imposed by
stochastic methods. This capability enables PISA to tackle the challenge of
data heterogeneity effectively. Comprehensive experimental evaluations for
training or fine-tuning diverse FMs, including vision models, large language
models, reinforcement learning models, generative adversarial networks, and
recurrent neural networks, demonstrate its superior numerical performance
compared to various state-of-the-art optimizers.
|
2502.10785
|
REGNav: Room Expert Guided Image-Goal Navigation
|
cs.CV
|
Image-goal navigation aims to steer an agent towards the goal location
specified by an image. Most prior methods tackle this task by learning a
navigation policy, which extracts visual features of goal and observation
images, compares their similarity and predicts actions. However, if the agent
is in a different room from the goal image, it is extremely challenging to
identify their similarity and infer the likely goal location, which may result
in the agent wandering around. Intuitively, when humans carry out this task,
they may roughly compare the current observation with the goal image, having an
approximate concept of whether they are in the same room before executing the
actions. Inspired by this intuition, we try to imitate human behaviour and
propose a Room Expert Guided Image-Goal Navigation model (REGNav) to equip the
agent with the ability to analyze whether goal and observation images are taken
in the same room. Specifically, we first pre-train a room expert with an
unsupervised learning technique on the self-collected unlabelled room images.
The expert can extract the hidden room style information of goal and
observation images and predict their relationship about whether they belong to
the same room. In addition, two different fusion approaches are explored to
efficiently guide the agent navigation with the room relation knowledge.
Extensive experiments show that our REGNav surpasses prior state-of-the-art
works on three popular benchmarks.
|
2502.10786
|
Epidemic-guided deep learning for spatiotemporal forecasting of
Tuberculosis outbreak
|
cs.LG q-bio.QM stat.ML
|
Tuberculosis (TB) remains a formidable global health challenge, driven by
complex spatiotemporal transmission dynamics and influenced by factors such as
population mobility and behavioral changes. We propose an Epidemic-Guided Deep
Learning (EGDL) approach that fuses mechanistic epidemiological principles with
advanced deep learning techniques to enhance early warning systems and
intervention strategies for TB outbreaks. Our framework is built upon a
networked Susceptible-Infectious-Recovered (SIR) model augmented with a
saturated incidence rate and graph Laplacian diffusion, capturing both
long-term transmission dynamics and region-specific population mobility
patterns. Compartmental model parameters are rigorously estimated using
Bayesian inference via the Markov Chain Monte Carlo (MCMC) approach.
Theoretical analysis leveraging the comparison principle and Green's formula
establishes global stability properties of the disease-free and endemic
equilibria. Building on these epidemiological insights, we design two
forecasting architectures, EGDL-Parallel and EGDL-Series, that integrate the
mechanistic outputs of the networked SIR model within deep neural networks.
This integration mitigates the overfitting risks commonly encountered in
data-driven methods and filters out noise inherent in surveillance data,
resulting in reliable forecasts of real-world epidemic trends. Experiments
conducted on TB incidence data from 47 prefectures in Japan demonstrate that
our approach delivers robust and accurate predictions across multiple time
horizons (short to medium-term forecasts). Additionally, incorporating
uncertainty quantification through conformal prediction enhances the model's
practical utility for guiding targeted public health interventions.
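
As a rough illustration of the mechanistic backbone described above, here is a minimal networked SIR sketch with a saturated incidence rate and graph Laplacian diffusion. The Euler discretization, the 3-region graph, and all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def sir_step(S, I, R, L, beta=0.3, gamma=0.1, alpha=0.5, d=0.05, dt=0.1):
    """One Euler step of a networked SIR model with saturated incidence
    beta*S*I/(1 + alpha*I) and graph Laplacian diffusion -d*(L @ x)."""
    inc = beta * S * I / (1.0 + alpha * I)
    dS = -inc - d * (L @ S)
    dI = inc - gamma * I - d * (L @ I)
    dR = gamma * I - d * (L @ R)
    return S + dt * dS, I + dt * dI, R + dt * dR

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # 3-region mobility graph (a path)
L = np.diag(A.sum(axis=1)) - A                   # graph Laplacian

S = np.array([0.99, 1.0, 1.0])                   # per-region population fractions
I = np.array([0.01, 0.0, 0.0])                   # outbreak seeded in region 0
R = np.zeros(3)
for _ in range(1000):
    S, I, R = sir_step(S, I, R, L)
```

Because the Laplacian annihilates constant vectors, diffusion moves individuals between regions without changing the total population, which is a useful sanity check on any discretization.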
|
2502.10789
|
ReReLRP -- Remembering and Recognizing Tasks with LRP
|
cs.LG
|
Deep neural networks have revolutionized numerous research fields and
applications. Despite their widespread success, a fundamental limitation known
as catastrophic forgetting remains, where models fail to retain their ability
to perform previously learned tasks after being trained on new ones. This
limitation is particularly acute in certain continual learning scenarios, where
models must integrate the knowledge from new domains with their existing
capabilities. Traditional approaches to mitigate this problem typically rely on
memory replay mechanisms, storing either original data samples, prototypes, or
activation patterns. Although effective, these methods often introduce
significant computational overhead, raise privacy concerns, and require the use
of dedicated architectures. In this work we present ReReLRP (Remembering and
Recognizing with LRP), a novel solution that leverages Layerwise Relevance
Propagation (LRP) to preserve information across tasks. Our contribution
provides the increased privacy of existing replay-free methods while additionally
offering built-in explainability, flexibility of model architecture and
deployment, and a new mechanism to increase memory storage efficiency. We
validate our approach on a wide variety of datasets, demonstrating results
comparable with a well-known replay-based method in selected scenarios.
|
2502.10790
|
Which Features are Best for Successor Features?
|
cs.LG math.OC stat.ML
|
In reinforcement learning, universal successor features (SFs) are a way to
provide zero-shot adaptation to new tasks at test time: they provide optimal
policies for all downstream reward functions lying in the linear span of a set
of base features. But it is unclear what constitutes a good set of base
features, that could be useful for a wide set of downstream tasks beyond their
linear span. Laplacian eigenfunctions (the eigenfunctions of
$\Delta+\Delta^\ast$ with $\Delta$ the Laplacian operator of some reference
policy and $\Delta^\ast$ that of the time-reversed dynamics) have been argued
to play a role, and offer good empirical performance.
Here, for the first time, we identify the optimal base features based on an
objective criterion of downstream performance, in a non-tautological way
without assuming the downstream tasks are linear in the features. We do this
for three generic classes of downstream tasks: reaching a random goal state,
dense random Gaussian rewards, and random ``scattered'' sparse rewards. The
features yielding optimal expected downstream performance turn out to be the
\emph{same} for these three task families. They do not coincide with Laplacian
eigenfunctions in general, though they can be expressed from $\Delta$: in the
simplest case (deterministic environment and decay factor $\gamma$ close to
$1$), they are the eigenfunctions of $\Delta^{-1}+(\Delta^{-1})^\ast$.
We obtain these results under an assumption of large behavior cloning
regularization with respect to a reference policy, a setting often used for
offline RL. Along the way, we get new insights into
KL-regularized natural policy gradient, and into the lack of SF
information in the norm of Bellman gaps.
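
For intuition, the closed-form candidate in the simplest case can be computed directly on a toy chain. The sketch below assumes $\Delta = I - \gamma P$ for a reference-policy transition matrix $P$ and builds the adjoint from the time-reversed dynamics with respect to the stationary distribution; this is one plausible reading of the operators above, not the paper's exact construction.

```python
import numpy as np

# Toy 3-state reference chain (rows are transition probabilities).
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
gamma = 0.99

# Stationary distribution: eigenvector of P^T for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi = pi / pi.sum()

P_rev = np.diag(1.0 / pi) @ P.T @ np.diag(pi)   # time-reversed transitions
D = np.eye(3) - gamma * P                       # assumed discounted Laplacian
D_rev = np.eye(3) - gamma * P_rev
M = np.linalg.inv(D) + np.linalg.inv(D_rev)     # Delta^{-1} + (Delta^{-1})^*
feats = np.linalg.eig(M)[1]                     # candidate base features
```

Since the spectral radius of $\gamma P$ is below one, both inverses exist, and the eigenvectors of $M$ give one candidate feature set per state.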
|
2502.10792
|
Tackling the Zero-Shot Reinforcement Learning Loss Directly
|
cs.LG
|
Zero-shot reinforcement learning (RL) methods aim at instantly producing a
behavior for an RL task in a given environment, from a description of the
reward function. These methods are usually tested by evaluating their average
performance on a series of downstream tasks. Yet they cannot be trained
directly for that objective, unless the distribution of downstream tasks is
known. Existing approaches either use other learning criteria [BBQ+ 18, TRO23,
TO21, HDB+ 19], or explicitly set a prior on downstream tasks, such as reward
functions given by a random neural network [FPAL24].
Here we prove that the zero-shot RL loss can be optimized directly, for a
range of non-informative priors such as white noise rewards, temporally smooth
rewards, ``scattered'' sparse rewards, or a combination of those.
Thus, it is possible to learn the optimal zero-shot features algorithmically,
for a wide mixture of priors.
Surprisingly, the white noise prior leads to an objective almost identical to
the one in VISR [HDB+19], via a different approach. This shows that some
seemingly arbitrary choices in VISR, such as Von Mises--Fisher distributions,
do maximize downstream performance. This also suggests more efficient ways to
tackle the VISR objective.
Finally, we discuss some consequences and limitations of the zero-shot RL
objective, such as its tendency to produce narrow optimal features if only
using Gaussian dense reward priors.
|
2502.10793
|
Dynamic Influence Tracker: Measuring Time-Varying Sample Influence
During Training
|
stat.ML cs.AI cs.LG
|
Existing methods for measuring training sample influence on models only
provide static, overall measurements, overlooking how sample influence changes
during training. We propose Dynamic Influence Tracker (DIT), which captures the
time-varying sample influence across arbitrary time windows during training.
DIT offers three key insights: 1) Samples show different time-varying
influence patterns, with some samples important in the early training stage
while others become important later. 2) Sample influences show a weak
correlation between early and late stages, demonstrating that the model
undergoes distinct learning phases with shifting priorities. 3) Analyzing
influence during the convergence period provides more efficient and accurate
detection of corrupted samples than full-training analysis. Supported by
theoretical guarantees without assuming loss convexity or model convergence,
DIT significantly outperforms existing methods, achieving up to 0.99
correlation with ground truth and above 98\% accuracy in detecting corrupted
samples in complex architectures.
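
DIT's exact estimator is not reproduced here, but the standard first-order view of per-sample influence that window-based trackers build on is: one SGD step on sample $i$ (step $-\eta g_i$) changes a test loss by roughly $-\eta\, g_i \cdot g_{\text{test}}$. A hypothetical sketch on logistic regression (all names and values illustrative):

```python
import numpy as np

def grad_logreg(w, x, y):
    """Gradient of the logistic loss at a single example (y in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

rng = np.random.default_rng(0)
w = rng.normal(size=3)
lr = 0.1
x_test, y_test = rng.normal(size=3), 1.0
g_test = grad_logreg(w, x_test, y_test)

# A training sample that mimics the test point: its gradient aligns with the
# test gradient, so a step on it lowers the test loss ("helpful" influence).
x_i, y_i = x_test.copy(), y_test
influence = lr * grad_logreg(w, x_i, y_i) @ g_test
```

Recomputing this dot product in different training windows is one simple way to see influence vary over time, in the spirit of the time-varying patterns described above.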
|
2502.10794
|
Distraction is All You Need for Multimodal Large Language Model
Jailbreaking
|
cs.CV
|
Multimodal Large Language Models (MLLMs) bridge the gap between visual and
textual data, enabling a range of advanced applications. However, complex
internal interactions among visual elements and their alignment with text can
introduce vulnerabilities, which may be exploited to bypass safety mechanisms.
To address this, we analyze the relationship between image content and task and
find that the complexity of subimages, rather than their content, is key.
Building on this insight, we propose the Distraction Hypothesis, followed by a
novel framework called Contrasting Subimage Distraction Jailbreaking (CS-DJ),
to achieve jailbreaking by disrupting MLLM alignment through multi-level
distraction strategies. CS-DJ consists of two components: structured
distraction, achieved through query decomposition that induces a distributional
shift by fragmenting harmful prompts into sub-queries, and visual-enhanced
distraction, realized by constructing contrasting subimages to disrupt the
interactions among visual elements within the model. This dual strategy
disperses the model's attention, reducing its ability to detect and mitigate
harmful content. Extensive experiments across five representative scenarios and
four popular closed-source MLLMs, including GPT-4o-mini, GPT-4o, GPT-4V, and
Gemini-1.5-Flash, demonstrate that CS-DJ achieves an average attack success
rate of 52.40% and an average ensemble attack success rate of 74.10%. These
results reveal the potential of distraction-based approaches to
exploit and bypass MLLMs' defenses, offering new insights for attack
strategies.
|
2502.10801
|
FaceSwapGuard: Safeguarding Facial Privacy from DeepFake Threats through
Identity Obfuscation
|
cs.CR cs.AI cs.CV
|
DeepFakes pose a significant threat to our society. One representative
DeepFake application is face-swapping, which replaces the identity in a facial
image with that of a victim. Although existing methods partially mitigate these
risks by degrading the quality of swapped images, they often fail to disrupt
the identity transformation effectively. To fill this gap, we propose
FaceSwapGuard (FSG), a novel black-box defense mechanism against deepfake
face-swapping threats. Specifically, FSG introduces imperceptible perturbations
to a user's facial image, disrupting the features extracted by identity
encoders. When shared online, these perturbed images mislead face-swapping
techniques, causing them to generate facial images with identities
significantly different from the original user. Extensive experiments
demonstrate the effectiveness of FSG against multiple face-swapping techniques,
reducing the face match rate from 90\% (without defense) to below 10\%. Both
qualitative and quantitative studies further confirm its ability to confuse
human perception, highlighting its practical utility. Additionally, we
investigate key factors that may influence FSG and evaluate its robustness
against various adaptive adversaries.
|
2502.10802
|
CoCoEvo: Co-Evolution of Programs and Test Cases to Enhance Code
Generation
|
cs.SE cs.AI
|
Large Language Models (LLMs) have shown remarkable performance in automated
code generation. However, existing approaches often rely heavily on pre-defined
test cases, which become impractical in scenarios where such cases are
unavailable. While prior works explore filtering techniques between programs
and test cases, they overlook the refinement of test cases. To address this
limitation, we introduce CoCoEvo, a novel LLM-based co-evolution framework that
simultaneously evolves programs and test cases. CoCoEvo eliminates the
dependency on pre-defined test cases by generating both programs and test cases
directly from natural language problem descriptions and function headers. The
framework employs specialized evolutionary operators, including LLM-based
crossover and mutation operators for program evolution, along with a test case
generation operator for test case evolution. Additionally, we propose
optimization strategies such as a crossover rate scheduler to balance
exploration and convergence, and a multi-objective optimization method for test
case selection. Experimental results on multiple state-of-the-art LLMs
demonstrate that CoCoEvo surpasses existing methods, achieving state-of-the-art
performance in automated code generation and testing. These results underscore
the potential of co-evolutionary techniques in advancing the field of automated
programming.
|
2502.10803
|
PDA: Generalizable Detection of AI-Generated Images via Post-hoc
Distribution Alignment
|
cs.CR cs.AI cs.CV
|
The rapid advancement of generative models has led to the proliferation of
highly realistic AI-generated images, posing significant challenges for
detection methods to generalize across diverse and evolving generative
techniques. Existing approaches often fail to adapt to unknown models without
costly retraining, limiting their practicability. To fill this gap, we propose
Post-hoc Distribution Alignment (PDA), a novel approach for generalizable
detection of AI-generated images. The key idea is to use the known generative
model to regenerate undifferentiated test images. This process aligns the
distributions of the re-generated real images with the known fake images,
enabling effective distinction from unknown fake images. PDA employs a two-step
detection framework: 1) evaluating whether a test image aligns with the known
fake distribution based on deep k-nearest neighbor (KNN) distance, and 2)
re-generating test images using known generative models to create pseudo-fake
images for further classification. This alignment strategy allows PDA to
effectively detect fake images without relying on unseen data or requiring
retraining. Extensive experiments demonstrate the superiority of PDA, achieving
96.73\% average accuracy across six state-of-the-art generative models,
including GANs, diffusion models, and text-to-image models, and improving by
16.07\% over the best baseline. Through t-SNE visualizations and KNN distance
analysis, we provide insights into PDA's effectiveness in separating real and
fake images. Our work provides a flexible and effective solution for real-world
fake image detection, advancing the generalization ability of detection
systems.
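
Step 1 of the pipeline above, scoring a test image by its deep-KNN distance to the known-fake feature distribution, can be sketched as follows. The Gaussian feature vectors are illustrative stand-ins for real encoder features, not PDA's actual representation.

```python
import numpy as np

def knn_distance(x, reference, k=3):
    """Mean distance from x to its k nearest neighbors in `reference`."""
    d = np.linalg.norm(reference - x, axis=1)
    return float(np.sort(d)[:k].mean())

rng = np.random.default_rng(0)
known_fake = rng.normal(0.0, 1.0, size=(200, 8))  # stand-in known-fake features
in_dist = rng.normal(0.0, 1.0, size=8)            # resembles the known-fake cluster
far_out = np.full(8, 6.0)                         # far from the known-fake cluster

score_in = knn_distance(in_dist, known_fake)
score_out = knn_distance(far_out, known_fake)
```

A sample far from the known-fake distribution receives a large KNN distance, which is what flags it for the re-generation step.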
|
2502.10807
|
HybriDNA: A Hybrid Transformer-Mamba2 Long-Range DNA Language Model
|
cs.LG cs.AI q-bio.GN
|
Advances in natural language processing and large language models have
sparked growing interest in modeling DNA, often referred to as the "language of
life". However, DNA modeling poses unique challenges. First, it requires the
ability to process ultra-long DNA sequences while preserving single-nucleotide
resolution, as individual nucleotides play a critical role in DNA function.
Second, success in this domain requires excelling at both generative and
understanding tasks: generative tasks hold potential for therapeutic and
industrial applications, while understanding tasks provide crucial insights
into biological mechanisms and diseases. To address these challenges, we
propose HybriDNA, a decoder-only DNA language model that incorporates a hybrid
Transformer-Mamba2 architecture, seamlessly integrating the strengths of
attention mechanisms with selective state-space models. This hybrid design
enables HybriDNA to efficiently process DNA sequences up to 131kb in length
with single-nucleotide resolution. HybriDNA achieves state-of-the-art
performance across 33 DNA understanding datasets curated from the BEND, GUE,
and LRB benchmarks, and demonstrates exceptional capability in generating
synthetic cis-regulatory elements (CREs) with desired properties. Furthermore,
we show that HybriDNA adheres to expected scaling laws, with performance
improving consistently as the model scales from 300M to 3B and 7B parameters.
These findings underscore HybriDNA's versatility and its potential to advance
DNA research and applications, paving the way for innovations in understanding
and engineering the "language of life".
|
2502.10810
|
SVBench: A Benchmark with Temporal Multi-Turn Dialogues for Streaming
Video Understanding
|
cs.CV
|
Despite the significant advancements of Large Vision-Language Models (LVLMs)
on established benchmarks, there remains a notable gap in suitable evaluation
regarding their applicability in the emerging domain of long-context streaming
video understanding. Current benchmarks for video understanding typically
emphasize isolated single-instance text inputs and fail to evaluate the
capacity to sustain temporal reasoning throughout the entire duration of video
streams. To address these limitations, we introduce SVBench, a pioneering
benchmark with temporal multi-turn question-answering chains specifically
designed to thoroughly assess the capabilities of streaming video understanding
of current LVLMs. We design a semi-automated annotation pipeline to obtain
49,979 Question-Answer (QA) pairs of 1,353 streaming videos, which includes
generating QA chains that represent a series of consecutive multi-turn
dialogues over video segments and constructing temporal linkages between
successive QA chains. Our experimental results, obtained from 14 models in
dialogue and streaming evaluations, reveal that while the closed-source GPT-4o
outperforms others, most open-source LVLMs struggle with long-context streaming
video understanding. We also construct a StreamingChat model, which
significantly outperforms open-source LVLMs on our SVBench and achieves
comparable performance on diverse vision-language benchmarks. We expect SVBench
to advance the research of streaming video understanding by providing a
comprehensive and in-depth analysis of current LVLMs. Our benchmark and model
can be accessed at https://yzy-bupt.github.io/SVBench.
|
2502.10812
|
ResiComp: Loss-Resilient Image Compression via Dual-Functional Masked
Visual Token Modeling
|
eess.IV cs.IT math.IT
|
Recent advancements in neural image codecs (NICs) have delivered significant
compression performance, but limited attention has been paid to their error
resilience.
The resulting NICs tend to be sensitive to packet losses, which are
prevalent in real-time communications.
In this paper, we investigate how to improve the resilience of NICs against
packet losses.
We propose ResiComp, a pioneering neural image compression framework with
feature-domain packet loss concealment (PLC).
Motivated by the inherent consistency between generation and compression, we
advocate merging the tasks of entropy modeling and PLC into a unified framework
focused on latent space context modeling.
To this end, we take inspiration from the impressive generative capabilities
of large language models (LLMs), particularly the recent advances of masked
visual token modeling (MVTM).
During training, we integrate MVTM to mirror the effects of packet loss,
enabling a dual-functional Transformer to restore the masked latents by
predicting their missing values and conditional probability mass functions.
Our ResiComp jointly optimizes compression efficiency and loss resilience.
Moreover, ResiComp provides flexible coding modes, allowing for explicitly
adjusting the efficiency-resilience trade-off in response to varying Internet
or wireless network conditions.
Extensive experiments demonstrate that ResiComp can significantly enhance the
NIC's resilience against packet losses, while exhibiting a favorable trade-off
between compression efficiency and packet loss resilience.
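
The training-time idea of mirroring packet loss via token masking can be sketched at its simplest: randomly replace latent tokens with a mask id at a given loss rate. The i.i.d. loss pattern and the mask id are simplifying assumptions; real packet loss is bursty and the actual MVTM schedule is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(tokens, loss_rate, mask_id=-1):
    """Simulate packet loss by masking a random subset of latent tokens."""
    keep = rng.random(tokens.shape) >= loss_rate
    return np.where(keep, tokens, mask_id), keep

tokens = np.arange(64)                  # stand-in for quantized latent indices
masked, keep = mask_tokens(tokens, loss_rate=0.3)
```

A dual-functional Transformer trained on such inputs sees masked positions exactly where lost packets would leave holes, which is what lets the same model serve both entropy modeling and concealment.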
|
2502.10813
|
Transformer-Driven Modeling of Variable Frequency Features for
Classifying Student Engagement in Online Learning
|
cs.CV
|
The COVID-19 pandemic and the internet's availability have recently boosted
online learning. However, monitoring engagement in online learning is a
difficult task for teachers. In this context, timely automatic student
engagement classification can help teachers in making adaptive adjustments to
meet students' needs. This paper proposes EngageFormer, a transformer-based
architecture with sequence pooling using video modality for engagement
classification. The proposed architecture computes three views from the input
video and processes them in parallel using transformer encoders; the global
encoder then processes the representation from each encoder, and finally, a
multi-layer perceptron (MLP) predicts the engagement level. A learning-centered
affective state dataset is curated from existing open source databases. The
proposed method achieved an accuracy of 63.9%, 56.73%, 99.16%, 65.67%, and
74.89% on Dataset for Affective States in E-Environments (DAiSEE), Bahcesehir
University Multimodal Affective Database-1 (BAUM-1), Yawning Detection Dataset
(YawDD), University of Texas at Arlington Real-Life Drowsiness Dataset
(UTA-RLDD), and curated learning-centered affective state dataset respectively.
The achieved results on the BAUM-1, DAiSEE, and YawDD datasets demonstrate
state-of-the-art performance, indicating the superiority of the proposed model
in accurately classifying affective states on these datasets. Additionally, the
results obtained on the UTA-RLDD dataset, which involves two-class
classification, serve as a baseline for future research. These results provide
a foundation for further investigations and serve as a point of reference for
future works to compare and improve upon.
|
2502.10816
|
BalanceBenchmark: A Survey for Imbalanced Learning
|
cs.LG cs.AI
|
Multimodal learning has gained attention for its capacity to integrate
information from different modalities. However, it is often hindered by the
multimodal imbalance problem, where a certain modality dominates while others
remain underutilized. Although recent studies have proposed various methods to
alleviate this problem, they lack comprehensive and fair comparisons. In this
paper, we systematically categorize various mainstream multimodal imbalance
algorithms into four groups based on the strategies they employ to mitigate
imbalance. To facilitate a comprehensive evaluation of these methods, we
introduce BalanceBenchmark, a benchmark including multiple widely used
multidimensional datasets and evaluation metrics from three perspectives:
performance, imbalance degree, and complexity. To ensure fair comparisons, we
have developed a modular and extensible toolkit that standardizes the
experimental workflow across different methods. Based on the experiments using
BalanceBenchmark, we have identified several key insights into the
characteristics and advantages of different method groups in terms of
performance, balance degree, and computational complexity. We expect this
analysis to inspire more efficient approaches to the imbalance problem in the
future, including in foundation models. The code of the toolkit is
available at https://github.com/GeWu-Lab/BalanceBenchmark.
|
2502.10818
|
On Vanishing Gradients, Over-Smoothing, and Over-Squashing in GNNs:
Bridging Recurrent and Graph Learning
|
cs.LG cs.AI
|
Graph Neural Networks (GNNs) are models that leverage the graph structure to
transmit information between nodes, typically through the message-passing
operation. While widely successful, this approach is well known to suffer from
the over-smoothing and over-squashing phenomena, which result in
representational collapse as the number of layers increases and insensitivity
to the information contained at distant and poorly connected nodes,
respectively. In this paper, we present a unified view of these problems
through the lens of vanishing gradients, using ideas from linear control theory
for our analysis. We propose an interpretation of GNNs as recurrent models and
empirically demonstrate that a simple state-space formulation of a GNN
effectively alleviates over-smoothing and over-squashing at no extra trainable
parameter cost. Further, we show theoretically and empirically that (i) GNNs
are by design prone to extreme gradient vanishing even after a few layers; (ii)
Over-smoothing is directly related to the mechanism causing vanishing
gradients; (iii) Over-squashing is most easily alleviated by a combination of
graph rewiring and vanishing gradient mitigation. We believe our work will help
bridge the gap between the recurrent and graph neural network literature and
will unlock the design of new deep and performant GNNs.
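
Over-smoothing is commonly quantified with the Dirichlet energy of node features, which repeated mean aggregation drives toward zero; the toy sketch below illustrates that collapse (it is a generic message-passing demo, not the paper's state-space formulation).

```python
import numpy as np

def dirichlet_energy(X, A):
    """Sum of squared feature differences across edges; near zero = over-smoothed."""
    e = 0.0
    n = A.shape[0]
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                e += np.sum((X[i] - X[j]) ** 2)
    return e / 2.0  # each undirected edge counted once

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # triangle graph
P = np.diag(1.0 / A.sum(axis=1)) @ A             # mean-aggregation operator

X = rng.normal(size=(3, 4))
energies = []
for _ in range(20):                              # 20 "layers" of mean aggregation
    energies.append(dirichlet_energy(X, A))
    X = P @ X
```

On this graph the non-constant components shrink geometrically with depth, so the energy collapses by many orders of magnitude within a few layers, mirroring the representational collapse discussed above.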
|
2502.10819
|
Sensing With Communication Signals: From Information Theory to Signal
Processing
|
cs.IT math.IT
|
The Integrated Sensing and Communications (ISAC) paradigm is anticipated to
be a cornerstone of the upcoming 6G networks. In order to optimize the use of
wireless resources, 6G ISAC systems need to harness the communication data
payload signals, which are inherently random, for both sensing and
communication (S&C) purposes. This tutorial paper provides a comprehensive
technical overview of the fundamental theory and signal processing
methodologies for ISAC transmission with random communication signals. We begin
by introducing the deterministic-random tradeoff (DRT) between S&C from an
information-theoretic perspective, emphasizing the need for specialized signal
processing techniques tailored to random ISAC signals. Building on this
foundation, we review the core signal models and processing pipelines for
communication-centric ISAC systems, and analyze the average squared
auto-correlation function (ACF) of random ISAC signals, which serves as a
fundamental performance metric for multi-target ranging tasks. Drawing insights
from these theoretical results, we outline the design principles for the three
key components of communication-centric ISAC systems: modulation schemes,
constellation design, and pulse shaping filters. The goal is to either enhance
sensing performance without compromising communication efficiency or to
establish a scalable tradeoff between the two. We then extend our analysis from
a single-antenna ISAC system to its multi-antenna counterpart, discussing
recent advancements in multi-input multi-output (MIMO) precoding techniques
specifically designed for random ISAC signals. We conclude by highlighting
several open challenges and future research directions in the field of sensing
with communication signals.
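
The average squared ACF metric mentioned above can be probed numerically for i.i.d. QPSK payloads: the zero-lag value is deterministic (the signal power), while the random sidelobes average out at roughly $1/N$. Sequence length and trial count here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_squared_acf(num_trials=500, n=64):
    """Average |ACF|^2 over random unit-power QPSK symbol sequences."""
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    acc = np.zeros(n)
    for _ in range(num_trials):
        s = rng.choice(qpsk, size=n)
        # np.correlate conjugates its second argument, so this is the complex ACF
        acf = np.correlate(s, s, mode="full")[n - 1:] / n
        acc += np.abs(acf) ** 2
    return acc / num_trials

r = avg_squared_acf()
```

The deterministic mainlobe versus random sidelobe behavior is exactly the kind of structure the deterministic-random tradeoff formalizes for multi-target ranging.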
|
2502.10822
|
NeuroAMP: A Novel End-to-end General Purpose Deep Neural Amplifier for
Personalized Hearing Aids
|
eess.AS cs.AI cs.SD
|
The prevalence of hearing aids is increasing. However, optimizing the
amplification processes of hearing aids remains challenging due to the
complexity of integrating multiple modular components in traditional methods.
To address this challenge, we present NeuroAMP, a novel deep neural network
designed for end-to-end, personalized amplification in hearing aids. NeuroAMP
leverages both spectral features and the listener's audiogram as inputs, and we
investigate four architectures: Convolutional Neural Network (CNN), Long
Short-Term Memory (LSTM), Convolutional Recurrent Neural Network (CRNN), and
Transformer. We also introduce Denoising NeuroAMP, an extension that integrates
noise reduction along with amplification capabilities for improved performance
in real-world scenarios. To enhance generalization, a comprehensive data
augmentation strategy was employed during training on diverse speech (TIMIT and
TMHINT) and music (Cadenza Challenge MUSIC) datasets. Evaluation using the
Hearing Aid Speech Perception Index (HASPI), Hearing Aid Speech Quality Index
(HASQI), and Hearing Aid Audio Quality Index (HAAQI) demonstrates that the
Transformer architecture within NeuroAMP achieves the best performance, with
SRCC scores of 0.9927 (HASQI) and 0.9905 (HASPI) on TIMIT, and 0.9738 (HAAQI)
on the Cadenza Challenge MUSIC dataset. Notably, our data augmentation strategy
maintains high performance on unseen datasets (e.g., VCTK, MUSDB18-HQ).
Furthermore, Denoising NeuroAMP outperforms both the conventional NAL-R+WDRC
approach and a two-stage baseline on the VoiceBank+DEMAND dataset, achieving a
10% improvement in both HASPI (0.90) and HASQI (0.59) scores. These results
highlight the potential of NeuroAMP and Denoising NeuroAMP to deliver notable
improvements in personalized hearing aid amplification.
|
2502.10825
|
MITRE ATT&CK Applications in Cybersecurity and The Way Forward
|
cs.CR cs.AI
|
The MITRE ATT&CK framework is a widely adopted tool for enhancing
cybersecurity, supporting threat intelligence, incident response, attack
modeling, and vulnerability prioritization. This paper synthesizes research on
its application across these domains by analyzing 417 peer-reviewed
publications. We identify commonly used adversarial tactics, techniques, and
procedures (TTPs) and examine the integration of natural language processing
(NLP) and machine learning (ML) with ATT&CK to improve threat detection and
response. Additionally, we explore the interoperability of ATT&CK with other
frameworks, such as the Cyber Kill Chain, NIST guidelines, and STRIDE,
highlighting its versatility. The paper further evaluates the framework from
multiple perspectives, including its effectiveness, validation methods, and
sector-specific challenges, particularly in industrial control systems (ICS)
and healthcare. We conclude by discussing current limitations and proposing
future research directions to enhance the applicability of ATT&CK in dynamic
cybersecurity environments.
|
2502.10826
|
Improved Offline Contextual Bandits with Second-Order Bounds: Betting
and Freezing
|
cs.LG cs.IT math.IT stat.ML
|
We consider the off-policy selection and learning in contextual bandits where
the learner aims to select or train a reward-maximizing policy using data
collected by a fixed behavior policy. Our contribution is two-fold. First, we
propose a novel off-policy selection method that leverages a new betting-based
confidence bound applied to an inverse propensity weight sequence. Our
theoretical analysis reveals that our method achieves a significantly better,
variance-adaptive guarantee than prior art. Second, we propose a novel and
generic condition on the optimization objective for off-policy learning that
strikes a different balance between bias and variance. One special case that we
call freezing tends to induce small variance, which is preferred in small-data
regimes. Our analysis shows that our guarantees match the best existing ones. In our
empirical study, our selection method outperforms existing methods, and
freezing exhibits improved performance in small-sample regimes.
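
The inverse propensity weight sequence at the heart of the selection method has a standard form; the betting-based confidence bound itself is not reproduced here, but the plain IPW value estimate it wraps looks like this (synthetic two-arm data, illustrative only):

```python
import numpy as np

def ipw_value(rewards, target_probs, behavior_probs):
    """Unbiased off-policy value estimate from logged bandit data."""
    w = target_probs / behavior_probs          # inverse propensity weights
    return float(np.mean(w * rewards))

# Two-arm bandit: the behavior policy picks uniformly; the target always plays arm 1.
rng = np.random.default_rng(0)
n = 200_000
actions = rng.integers(0, 2, size=n)
rewards = np.where(actions == 1, 1.0, 0.0)          # arm 1 pays 1, arm 0 pays 0
behavior_probs = np.full(n, 0.5)
target_probs = np.where(actions == 1, 1.0, 0.0)     # target prob of the logged action
est = ipw_value(rewards, target_probs, behavior_probs)
```

The estimate concentrates around the target policy's true value of 1, and it is the variance of the weighted terms that a betting-based bound can adapt to.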
|
2502.10827
|
E-3DGS: Event-Based Novel View Rendering of Large-Scale Scenes Using 3D
Gaussian Splatting
|
cs.CV cs.GR
|
Novel view synthesis techniques predominantly utilize RGB cameras, inheriting
their limitations such as the need for sufficient lighting, susceptibility to
motion blur, and restricted dynamic range. In contrast, event cameras are
significantly more resilient to these limitations but have been less explored
in this domain, particularly in large-scale settings. Current methodologies
primarily focus on front-facing or object-oriented (360-degree view) scenarios.
For the first time, we introduce 3D Gaussians for event-based novel view
synthesis. Our method reconstructs large and unbounded scenes with high visual
quality. We contribute the first real and synthetic event datasets tailored for
this setting. Our method demonstrates superior novel view synthesis and
consistently outperforms the baseline EventNeRF by a margin of 11-25% in PSNR
(dB) while being orders of magnitude faster in reconstruction and rendering.
|
2502.10828
|
The Vendiscope: An Algorithmic Microscope For Data Collections
|
cs.LG cond-mat.mtrl-sci cs.AI q-bio.QM
|
The evolution of microscopy, beginning with its invention in the late 16th
century, has continuously enhanced our ability to explore and understand the
microscopic world, enabling increasingly detailed observations of structures
and phenomena. In parallel, the rise of data-driven science has underscored the
need for sophisticated methods to explore and understand the composition of
complex data collections. This paper introduces the Vendiscope, the first
algorithmic microscope designed to extend traditional microscopy to
computational analysis. The Vendiscope leverages the Vendi scores -- a family
of differentiable diversity metrics rooted in ecology and quantum mechanics --
and assigns weights to data points based on their contribution to the overall
diversity of the collection. These weights enable high-resolution data analysis
at scale. We demonstrate this across biology, materials science, and machine
learning (ML). We analyzed the $250$ million protein sequences in the protein
universe, discovering that over $200$ million are near-duplicates and that
AlphaFold fails on proteins with Gene Ontology (GO) functions that contribute
most to diversity. Applying the Vendiscope to the Materials Project database
led to similar findings: more than $85\%$ of the crystals with formation energy
data are near-duplicates and ML models perform poorly on materials that enhance
diversity. Additionally, the Vendiscope can be used to study phenomena such as
memorization in generative models. We used the Vendiscope to identify memorized
training samples from $13$ different generative models and found that the
best-performing ones often memorize the training samples that contribute least
to diversity. Our findings demonstrate that the Vendiscope can serve as a
powerful tool for data-driven science.
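The Vendi score underlying the Vendiscope is, roughly, the exponential of the von Neumann entropy of a normalized similarity matrix. A minimal sketch, assuming a precomputed PSD similarity matrix with unit diagonal (not the authors' code):

```python
import numpy as np

def vendi_score(K):
    """Vendi score: the effective number of distinct items in a collection,
    given a similarity matrix K (PSD, with K[i, i] = 1)."""
    n = K.shape[0]
    lam = np.linalg.eigvalsh(K / n)     # eigenvalues of the normalized matrix
    lam = lam[lam > 1e-12]              # drop numerical zeros
    return float(np.exp(-np.sum(lam * np.log(lam))))

# A collection of exact duplicates scores 1; fully dissimilar items score n.
K_duplicates = np.ones((3, 3))
K_distinct = np.eye(4)
```

The per-point diversity weights the Vendiscope assigns would then measure each item's contribution to this score.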
|
2502.10833
|
Order-agnostic Identifier for Large Language Model-based Generative
Recommendation
|
cs.IR
|
Leveraging Large Language Models (LLMs) for generative recommendation has
attracted significant research interest, where item tokenization is a critical
step. It involves assigning item identifiers for LLMs to encode user history
and generate the next item. Existing approaches leverage either token-sequence
identifiers, representing items as discrete token sequences, or single-token
identifiers, using ID or semantic embeddings. Token-sequence identifiers face
issues such as the local optima problem in beam search and low generation
efficiency due to step-by-step generation. In contrast, single-token
identifiers fail to capture rich semantics or encode Collaborative Filtering
(CF) information, resulting in suboptimal performance.
To address these issues, we propose two fundamental principles for item
identifier design: 1) integrating both CF and semantic information to fully
capture multi-dimensional item information, and 2) designing order-agnostic
identifiers without token dependency, mitigating the local optima issue and
achieving simultaneous generation for generation efficiency. Accordingly, we
introduce a novel set identifier paradigm for LLM-based generative
recommendation, representing each item as a set of order-agnostic tokens. To
implement this paradigm, we propose SETRec, which leverages CF and semantic
tokenizers to obtain order-agnostic multi-dimensional tokens. To eliminate
token dependency, SETRec uses a sparse attention mask for user history encoding
and a query-guided generation mechanism for simultaneous token generation. We
instantiate SETRec on T5 and Qwen (from 1.5B to 7B). Extensive experiments
demonstrate its effectiveness under various scenarios (e.g., full ranking,
warm- and cold-start ranking, and various item popularity groups). Moreover,
results validate SETRec's superior efficiency and show promising scalability on
cold-start items as model sizes increase.
|
2502.10834
|
Prosocial Media
|
cs.CY cs.SI
|
Social media empower distributed content creation by algorithmically
harnessing "the social fabric" (explicit and implicit signals of association)
to serve this content. While this overcomes the bottlenecks and biases of
traditional gatekeepers, many believe it has unsustainably eroded the very
social fabric it depends on by maximizing engagement for advertising revenue.
This paper participates in open and ongoing considerations to translate social
and political values and conventions, specifically social cohesion, into
platform design. We propose an alternative platform model that makes the social
fabric an explicit output as well as an input. Citizens are members of communities
defined by explicit affiliation or clusters of shared attitudes. Both have
internal divisions, as citizens are members of intersecting communities, which
are themselves internally diverse. Each is understood to value content that
bridges (viz. achieves consensus across) and balances (viz. represents fairly) this
internal diversity, consistent with the principles of the Hutchins Commission
(1947). Content is labeled with social provenance, indicating for which
community or citizen it is bridging or balancing. Subscription payments allow
citizens and communities to increase the algorithmic weight on the content they
value in the content serving algorithm. Advertisers may, with consent of
citizen or community counterparties, target them in exchange for payment or
increase in that party's algorithmic weight. Underserved and emerging
communities and citizens are optimally subsidized/supported to develop into
paying participants. Content creators and communities that curate content are
rewarded for their contributions with algorithmic weight and/or revenue. We
discuss applications to productivity (e.g. LinkedIn), political (e.g. X), and
cultural (e.g. TikTok) platforms.
|
2502.10835
|
Back Attention: Understanding and Enhancing Multi-Hop Reasoning in Large
Language Models
|
cs.CL
|
We investigate how large language models perform latent multi-hop reasoning
in prompts like "Wolfgang Amadeus Mozart's mother's spouse is". To analyze this
process, we introduce logit flow, an interpretability method that traces how
logits propagate across layers and positions toward the final prediction. Using
logit flow, we identify four distinct stages in single-hop knowledge
prediction: (A) entity subject enrichment, (B) entity attribute extraction, (C)
relation subject enrichment, and (D) relation attribute extraction. Extending
this analysis to multi-hop reasoning, we find that failures often stem from the
relation attribute extraction stage, where conflicting logits reduce prediction
accuracy. To address this, we propose back attention, a novel mechanism that
enables lower layers to leverage higher-layer hidden states from different
positions during attention computation. With back attention, a 1-layer
transformer achieves the performance of a 2-layer transformer. Applied to four
LLMs, back attention improves accuracy on five reasoning datasets,
demonstrating its effectiveness in enhancing latent multi-hop reasoning
ability.
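A hedged, simplified sketch of the back-attention idea (lower-layer queries attending over higher-layer hidden states at all positions); the shapes and projections here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
T, d = 5, 8                               # sequence length, hidden size
low = rng.standard_normal((T, d))         # lower-layer hidden states (queries)
high = rng.standard_normal((T, d))        # higher-layer states from a prior pass

# Back attention (simplified): queries from the lower layer attend over
# keys/values computed from higher-layer states at every position.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
attn = softmax((low @ Wq) @ (high @ Wk).T / np.sqrt(d))
back_out = attn @ (high @ Wv)             # injected back into the lower layer
```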
|
2502.10838
|
Generalizable speech deepfake detection via meta-learned LoRA
|
eess.AS cs.LG cs.SD
|
Generalizable deepfake detection can be formulated as a detection problem
where labels (bonafide and fake) are fixed but distributional drift affects the
deepfake set. We can always train our detector with a selected set of attacks
and bonafide data, but an attacker can generate new attacks simply by
retraining their generator with a different seed. One reasonable approach is to
pool all the attack types available at training time. Our proposed approach is to
utilize meta-learning in combination with LoRA adapters to learn the structure
in the training data that is common to all attack types.
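A minimal numpy sketch of a LoRA adapter, the component the abstract combines with meta-learning; the zero-initialized up-projection and scaling follow the common LoRA convention, which may differ in detail from the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4.0                  # hidden size, rank, scaling

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus low-rank update; only A and B would be (meta-)trained
    # per attack family, leaving the backbone W untouched.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((3, d))
# With B zero-initialized, the adapter starts as an exact no-op on the base model.
```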
|
2502.10841
|
SkyReels-A1: Expressive Portrait Animation in Video Diffusion
Transformers
|
cs.CV
|
We present SkyReels-A1, a simple yet effective framework built upon video
diffusion Transformer to facilitate portrait image animation. Existing
methodologies still encounter issues, including identity distortion, background
instability, and unrealistic facial dynamics, particularly in head-only
animation scenarios. Besides, extending to accommodate diverse body proportions
usually leads to visual inconsistencies or unnatural articulations. To address
these challenges, SkyReels-A1 capitalizes on the strong generative capabilities
of video DiT, enhancing facial motion transfer precision, identity retention,
and temporal coherence. The system incorporates an expression-aware
conditioning module that enables seamless video synthesis driven by
expression-guided landmark inputs. Integrating the facial image-text alignment
module strengthens the fusion of facial attributes with motion trajectories,
reinforcing identity preservation. Additionally, SkyReels-A1 incorporates a
multi-stage training paradigm to incrementally refine the correlation between
expressions and motion while ensuring stable identity reproduction. Extensive
empirical evaluations highlight the model's ability to produce visually
coherent and compositionally diverse results, making it highly applicable to
domains such as virtual avatars, remote communication, and digital media
generation.
|
2502.10842
|
Mobile Robotic Multi-View Photometric Stereo
|
cs.CV cs.RO
|
Multi-View Photometric Stereo (MVPS) is a popular method for fine-detailed 3D
acquisition of an object from images. Despite its outstanding results on
diverse material objects, a typical MVPS experimental setup requires a
well-calibrated light source and a monocular camera installed on an immovable
base. This restricts the use of MVPS on a movable platform, limiting us from
taking MVPS benefits in 3D acquisition for mobile robotics applications. To
this end, we introduce a new mobile robotic system for MVPS. While the proposed
system brings advantages, it introduces additional algorithmic challenges.
Addressing them, in this paper, we further propose an incremental approach for
mobile robotic MVPS. Our approach leverages a supervised learning setup to
predict per-view surface normal, object depth, and per-pixel uncertainty in
model-predicted results. A refined depth map per view is obtained by solving an
MVPS-driven optimization problem proposed in this paper. Later, we fuse the
refined depth map while tracking the camera pose w.r.t the reference frame to
recover globally consistent object 3D geometry. Experimental results show the
advantages of our robotic system and algorithm, featuring the local
high-frequency surface detail recovery with globally consistent object shape.
Our work goes beyond any MVPS system presented to date, providing encouraging
results on objects with unknown reflectance properties using fewer frames,
without a tedious calibration and installation process, and enabling a
computationally efficient robotic-automation approach to photogrammetry. The
proposed approach is nearly 100 times faster than state-of-the-art MVPS
methods such as [1, 2] while maintaining similar results when tested on
subjects from the benchmark DiLiGenT MV dataset [3].
|
2502.10843
|
LEAPS: A discrete neural sampler via locally equivariant networks
|
cs.LG stat.CO stat.ML
|
We propose LEAPS, an algorithm to sample from discrete distributions known up
to normalization by learning a rate matrix of a continuous-time Markov chain
(CTMC). LEAPS can be seen as a continuous-time formulation of annealed
importance sampling and sequential Monte Carlo methods, extended so that the
variance of the importance weights is offset by the inclusion of the CTMC. To
derive these importance weights, we introduce a set of Radon-Nikodym
derivatives of CTMCs over their path measures. Because the computation of these
weights is intractable with standard neural network parameterizations of rate
matrices, we devise a new compact representation for rate matrices via what we
call locally equivariant functions. To parameterize them, we introduce a family
of locally equivariant multilayer perceptrons, attention layers, and
convolutional networks, and provide an approach to make deep networks that
preserve the local equivariance. This property allows us to propose a scalable
training algorithm for the rate matrix such that the variance of the importance
weights associated with the CTMC is minimal. We demonstrate the efficacy of
LEAPS on problems in statistical physics.
|
2502.10848
|
Implicit Neural Representations of Molecular Vector-Valued Functions
|
cs.LG q-bio.QM
|
Molecules have various computational representations, including numerical
descriptors, strings, graphs, point clouds, and surfaces. Each representation
method enables the application of various machine learning methodologies from
linear regression to graph neural networks paired with large language models.
To complement existing representations, we introduce the representation of
molecules through vector-valued functions, or $n$-dimensional vector fields,
that are parameterized by neural networks, which we denote molecular neural
fields. Unlike surface representations, molecular neural fields capture
external features and the hydrophobic core of macromolecules such as proteins.
Compared to discrete graph or point representations, molecular neural fields
are compact, resolution-independent, and inherently suited for interpolation in
spatial and temporal dimensions. These properties lend molecular neural
fields to tasks including the generation of molecules based on
their desired shape, structure, and composition, and the resolution-independent
interpolation between molecular conformations in space and time. Here, we
provide a framework and proofs-of-concept for molecular neural fields, namely,
the parametrization and superresolution reconstruction of a protein-ligand
complex using an auto-decoder architecture and the embedding of molecular
volumes in latent space using an auto-encoder architecture.
|
2502.10851
|
To Bin or not to Bin: Alternative Representations of Mass Spectra
|
cs.LG physics.chem-ph q-bio.QM
|
Mass spectrometry, especially so-called tandem mass spectrometry, is commonly
used to assess the chemical diversity of samples. The resulting mass
fragmentation spectra are representations of molecules of which the structure
may have not been determined. This poses the challenge of experimentally
determining or computationally predicting molecular structures from mass
spectra. An alternative option is to predict molecular properties or molecular
similarity directly from spectra. Various methodologies have been proposed to
embed mass spectra for further use in machine learning tasks. However, these
methodologies require preprocessing of the spectra, which often includes
binning or sub-sampling peaks with the main reasoning of creating uniform
vector sizes and removing noise. Here, we investigate two alternatives to the
binning of mass spectra before down-stream machine learning tasks, namely,
set-based and graph-based representations. Comparing the two proposed
representations to train a set transformer and a graph neural network on a
regression task, respectively, we show that they both perform substantially
better than a multilayer perceptron trained on binned data.
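As a hedged illustration of the contrast the abstract draws (the peak list, bin width, and m/z range are made-up values, not from the paper):

```python
import numpy as np

# A tandem mass spectrum as a list of (m/z, intensity) peaks.
peaks = [(101.2, 0.8), (154.7, 1.0), (310.4, 0.3)]

def bin_spectrum(peaks, mz_max=500.0, bin_width=1.0):
    """Conventional binned vector: fixed size, but sub-bin m/z precision
    is lost and most entries are zero."""
    vec = np.zeros(int(mz_max / bin_width))
    for mz, intensity in peaks:
        vec[int(mz / bin_width)] += intensity
    return vec

def set_representation(peaks):
    """Set-based input for a set transformer: one (m/z, intensity) row per
    peak, variable length, with no loss of mass precision."""
    return np.array(peaks)
```

A graph-based representation would additionally add edges between peaks (e.g., by mass differences) on top of the same per-peak node features.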
|
2502.10852
|
Multilingual Encoder Knows more than You Realize: Shared Weights
Pretraining for Extremely Low-Resource Languages
|
cs.CL cs.AI
|
While multilingual language models like XLM-R have advanced multilingualism
in NLP, they still perform poorly in extremely low-resource languages. This
situation is exacerbated by the fact that modern LLMs such as LLaMA and Qwen
support far fewer languages than XLM-R, making text generation models
non-existent for many languages in the world. To tackle this challenge, we
propose a novel framework for adapting multilingual encoders to text generation
in extremely low-resource languages. By reusing the weights between the encoder
and the decoder, our framework allows the model to leverage the learned
semantic space of the encoder, enabling efficient learning and effective
generalization in low-resource languages. Applying this framework to four
Chinese minority languages, we present XLM-SWCM, and demonstrate its superior
performance on various downstream tasks even when compared with much larger
models.
|
2502.10853
|
Sparse learning with concave regularization: relaxation of the
irrepresentable condition
|
math.OC cs.SY eess.SY
|
Learning sparse models from data is an important task in all those frameworks
where relevant information should be identified within a large dataset. This
can be achieved by formulating and solving suitable sparsity promoting
optimization problems. As to linear regression models, Lasso is the most
popular convex approach, based on an $\ell_1$-norm regularization. In contrast,
in this paper, we analyse a concave regularized approach, and we prove that it
relaxes the irrepresentable condition, which is sufficient and essentially
necessary for Lasso to select the right significant parameters. In practice,
this has the benefit of reducing the number of necessary measurements with
respect to Lasso. Since the proposed problem is non-convex, we also discuss
different algorithms to solve it, and we illustrate the obtained enhancement
via numerical experiments.
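As an illustrative sketch, using MCP as a stand-in for one common concave penalty (the paper's analysis covers concave regularization more generally), the scalar thresholding operators show the bias difference versus Lasso:

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso proximal operator (soft thresholding): shrinks every
    surviving coefficient by lam, introducing bias on large entries."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def mcp_threshold(z, lam, a=3.0):
    """Proximal operator of the concave MCP penalty (a > 1): small
    entries are thresholded, but large entries pass through unbiased."""
    small = np.abs(z) <= a * lam
    return np.where(small, soft_threshold(z, lam) * a / (a - 1.0), z)

z = np.array([0.05, 0.5, 5.0])
# Soft thresholding shrinks all of z by lam; MCP leaves the large entry intact.
```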
|
2502.10855
|
Towards Effective Extraction and Evaluation of Factual Claims
|
cs.CL
|
A common strategy for fact-checking long-form content generated by Large
Language Models (LLMs) is extracting simple claims that can be verified
independently. Since inaccurate or incomplete claims compromise fact-checking
results, ensuring claim quality is critical. However, the lack of a
standardized evaluation framework impedes assessment and comparison of claim
extraction methods. To address this gap, we propose a framework for evaluating
claim extraction in the context of fact-checking along with automated,
scalable, and replicable methods for applying this framework, including novel
approaches for measuring coverage and decontextualization. We also introduce
Claimify, an LLM-based claim extraction method, and demonstrate that it
outperforms existing methods under our evaluation framework. A key feature of
Claimify is its ability to handle ambiguity and extract claims only when there
is high confidence in the correct interpretation of the source text.
|
2502.10857
|
Divergent Thoughts toward One Goal: LLM-based Multi-Agent Collaboration
System for Electronic Design Automation
|
cs.CL
|
Recently, with the development of tool-calling capabilities in large language
models (LLMs), these models have demonstrated significant potential for
automating electronic design automation (EDA) flows by interacting with EDA
tool APIs via EDA scripts. However, considering the limited understanding of
EDA tools, LLMs face challenges in practical scenarios where diverse interfaces
of EDA tools exist across different platforms. Additionally, EDA flow
automation often involves intricate, long-chain tool-calling processes,
increasing the likelihood of errors in intermediate steps. Any such error can
lead to instability and failure of EDA flow automation. To address these
challenges, we introduce EDAid, a multi-agent collaboration system where
multiple agents harboring divergent thoughts converge towards a common goal,
ensuring reliable and successful EDA flow automation. Specifically, each agent
is controlled by ChipLlama models, which are expert LLMs fine-tuned for EDA
flow automation. Our experiments demonstrate the state-of-the-art (SOTA)
performance of our ChipLlama models and validate the effectiveness of our EDAid
in the automation of complex EDA flows, showcasing superior performance
compared to single-agent systems.
|
2502.10858
|
Is Depth All You Need? An Exploration of Iterative Reasoning in LLMs
|
cs.AI cs.CL
|
Deep iterative chain-of-thought (CoT) reasoning enables LLMs to tackle
complex tasks by progressively activating relevant pre-trained knowledge.
However, it faces challenges in ensuring continual improvement and determining
a stopping criterion. In this paper, we investigate whether the relevant
knowledge that contributes directly to solving the given question can be
activated from the initial reasoning path, thus circumventing the need for
iterative refinement. Our experiments reveal that increasing the diversity of
initial reasoning paths can achieve comparable or superior performance, a
concept we term \textit{breadth reasoning}. However, existing breadth reasoning
approaches, such as self-consistency, offer limited diversity. To address this
limitation, we propose a simple yet effective method that enhances reasoning
breadth by integrating contextual exploration with reduced sampling randomness.
Extensive experiments demonstrate that our approach significantly outperforms
deep iterative reasoning. Our code is provided in
https://github.com/zongqianwu/breadth.
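The self-consistency baseline the abstract mentions reduces to a majority vote over sampled reasoning paths; a minimal sketch (the proposed method additionally varies context to increase diversity, which is not shown here):

```python
from collections import Counter

def self_consistency(final_answers):
    """Majority vote over the final answers of independently sampled
    reasoning paths (the self-consistency baseline)."""
    return Counter(final_answers).most_common(1)[0][0]

# Final answers extracted from five sampled chains of thought.
votes = ["42", "41", "42", "42", "17"]
```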
|
2502.10862
|
Accelerated co-design of robots through morphological pretraining
|
cs.RO
|
The co-design of robot morphology and neural control typically requires using
reinforcement learning to approximate a unique control policy gradient for each
body plan, demanding massive amounts of training data to measure the
performance of each design. Here we show that a universal, morphology-agnostic
controller can be rapidly and directly obtained by gradient-based optimization
through differentiable simulation. This process of morphological pretraining
allows the designer to explore non-differentiable changes to a robot's physical
layout (e.g. adding, removing and recombining discrete body parts) and
immediately determine which revisions are beneficial and which are deleterious
using the pretrained model. We term this process "zero-shot evolution" and
compare it with the simultaneous co-optimization of a universal controller
alongside an evolving design population. We find the latter results in
diversity collapse, a previously unknown pathology whereby the population --
and thus the controller's training data -- converges to similar designs that
are easier to steer with a shared universal controller. We show that zero-shot
evolution with a pretrained controller quickly yields a diversity of highly
performant designs, and by fine-tuning the pretrained controller on the current
population throughout evolution, diversity is not only preserved but
significantly increased as superior performance is achieved.
|
2502.10864
|
Recursions for quadratic rotation symmetric functions weights
|
cs.IT math.CO math.IT
|
A Boolean function in $n$ variables is rotation symmetric (RS) if it is
invariant under powers of $\rho(x_1, \ldots, x_n) = (x_2, \ldots, x_n, x_1)$.
An RS function is called monomial rotation symmetric (MRS) if it is generated
by applying powers of $\rho$ to a single monomial. The author showed in $2017$
that for any RS function $f_n$ in $n$ variables, the sequence of Hamming
weights $wt(f_n)$ for all values of $n$ satisfies a linear recurrence with
associated recursion polynomial given by the minimal polynomial of a {\em rules
matrix}. Examples showed that the usual formula for the weights $wt(f_n)$ in
terms of powers of the roots of the minimal polynomial always has simple
coefficients. The conjecture that this is always true is the Easy Coefficients
Conjecture (ECC). The present paper proves the ECC if the rules matrix
satisfies a certain condition. Major applications include an enormous decrease
in the amount of computation that is needed to determine the values of
$wt(f_n)$ for a quadratic RS function $f_n$ if either $n$ or the order of the
recursion for the weights is large, and a simpler way to determine the Dickson
form of $f_n.$ The ECC also enables rapid computation of generating functions
which give the values of $wt(f_n)$ as coefficients in a power series.
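For concreteness, the weight sequence $wt(f_n)$ for the quadratic MRS function generated by $x_1 x_2$ can be brute-forced for small $n$; a naive sketch of exactly the exponential enumeration that the paper's recursions replace:

```python
from itertools import product

def mrs_weight(n, monomial=(0, 1)):
    """Hamming weight of the MRS function generated by applying all n
    cyclic shifts to the monomial x_{i1}...x_{ik} (0-indexed), by brute
    force over all 2^n inputs -- feasible only for small n."""
    wt = 0
    for x in product((0, 1), repeat=n):
        val = 0
        for s in range(n):               # sum the n rotations over GF(2)
            term = 1
            for i in monomial:
                term &= x[(i + s) % n]
            val ^= term
        wt += val
    return wt
```

For example, for n = 3 the function is x1x2 + x2x3 + x3x1, which takes the value 1 on exactly four inputs.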
|
2502.10867
|
A Tutorial on LLM Reasoning: Relevant Methods behind ChatGPT o1
|
cs.AI cs.CL
|
OpenAI o1 has shown that applying reinforcement learning to integrate
reasoning steps directly during inference can significantly improve a model's
reasoning capabilities. This result is exciting as the field transitions from
the conventional autoregressive method of generating answers to a more
deliberate approach that models the slow-thinking process through step-by-step
reasoning training. Reinforcement learning plays a key role in both the model's
training and decoding processes. In this article, we present a comprehensive
formulation of reasoning problems and investigate the use of both model-based
and model-free approaches to better support this slow-thinking framework.
|
2502.10868
|
NitiBench: A Comprehensive Study of LLM Framework Capabilities for
Thai Legal Question Answering
|
cs.CL
|
The application of large language models (LLMs) in the legal domain holds
significant potential for information retrieval and question answering, yet
Thai legal QA systems face challenges due to a lack of standardized evaluation
benchmarks and the complexity of Thai legal structures. This paper introduces
NitiBench, a benchmark comprising two datasets: the NitiBench-CCL, covering
general Thai financial law, and the NitiBench-Tax, which includes real-world
tax law cases requiring advanced legal reasoning. We evaluate
retrieval-augmented generation (RAG) and long-context LLM-based approaches to
address three key research questions: the impact of domain-specific components
like section-based chunking and cross-referencing, the comparative performance
of different retrievers and LLMs, and the viability of long-context LLMs as an
alternative to RAG. Our results show that section-based chunking significantly
improves retrieval and end-to-end performance, current retrievers struggle with
complex queries, and long-context LLMs still underperform RAG-based systems in
Thai legal QA. To support fair evaluation, we propose tailored multi-label
retrieval metrics and an LLM-as-judge method for coverage and contradiction
detection. These findings highlight the limitations of current Thai legal NLP
solutions and provide a foundation for future research in the field. We also
open-source our code and dataset publicly.
|
2502.10870
|
Hybrid high-order methods for elasto-acoustic wave propagation in the
time domain
|
math.NA cs.CE cs.NA
|
We devise a Hybrid High-Order (HHO) method for the coupling between the
acoustic and elastic wave equations in the time domain. A first-order
formulation in time is considered. The HHO method can use equal-order and
mixed-order settings, as well as O(1)- and O(1/h)-stabilizations. An
energy-error estimate is established in the time-continuous case. A numerical
spectral analysis is performed, showing that O(1)-stabilization is required to
avoid excessive CFL limitations for explicit time discretizations. Moreover,
the spectral radius of the stiffness matrix is fairly independent of the
geometry of the mesh cells. For analytical solutions on general meshes, optimal
convergence rates of order (k+1) are shown in both equal- and mixed-order
settings using O(1)-stabilization, whereas order (k+2) is achieved in the
mixed-order setting using O(1/h)-stabilization. Test cases with a Ricker
wavelet as an initial condition showcase the relevance of the proposed method
for the simulation of elasto-acoustic wave propagation across media with
contrasted material properties.
|
2502.10871
|
The Representation and Recall of Interwoven Structured Knowledge in
LLMs: A Geometric and Layered Analysis
|
cs.CL cs.AI cs.LG
|
This study investigates how large language models (LLMs) represent and recall
multi-associated attributes across transformer layers. We show that
intermediate layers encode factual knowledge by superimposing related
attributes in overlapping spaces, along with effective recall even when
attributes are not explicitly prompted. In contrast, later layers refine
linguistic patterns and progressively separate attribute representations,
optimizing task-specific outputs while appropriately narrowing attribute
recall. We identify diverse encoding patterns including, for the first time,
the observation of 3D spiral structures when exploring information related to
the periodic table of elements. Our findings reveal a dynamic transition in
attribute representations across layers, contributing to mechanistic
interpretability and providing insights for understanding how LLMs handle
complex, interrelated knowledge.
|
2502.10874
|
Indexing Join Inputs for Fast Queries and Maintenance
|
cs.DB
|
In database systems, joins are often expensive despite many years of research
producing numerous join algorithms. Precomputed and materialized join views
deliver the best query performance, whereas traditional indexes, used as
pre-sorted inputs for merge joins, permit very efficient maintenance. Neither
traditional indexes nor materialized join views require blocking phases, in
contrast to query-time sorting and transient indexes, e.g., hash tables in hash
joins, that impose high memory requirements and possibly spill to temporary
storage.
Here, we introduce a hybrid of traditional indexing and materialized join
views. The *merged index* can be implemented with traditional b-trees, permits
high-bandwidth maintenance using log-structured merge-forests, supports all
join types (inner joins, all outer joins, all semi joins), and enables
non-blocking query processing. Experiments across a wide range of scenarios
confirm its query performance comparable to materialized join views and
maintenance efficiency comparable to traditional indexes.
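A toy sketch of the merged-index idea (a hypothetical two-table example; real implementations use b-trees and log-structured merge-forests, not Python lists): interleaving both join inputs in join-key order lets an inner join run as one non-blocking sequential scan:

```python
# Hypothetical toy tables, keyed by customer id.
customers = [(1, "alice"), (2, "bob"), (3, "carol")]
orders = [(1, "o1"), (2, "o2"), (2, "o3")]

# Merged index: records of both inputs interleaved in key order.
# Tag "C" sorts before "O", so each customer row precedes its orders.
merged = sorted([(k, "C", v) for k, v in customers] +
                [(k, "O", v) for k, v in orders])

def scan_inner_join(merged):
    """Inner join as a single sequential scan of the merged index:
    no sort, hash table, or other blocking phase at query time."""
    result, current = [], None
    for key, tag, val in merged:
        if tag == "C":
            current = (key, val)
        elif current is not None and current[0] == key:
            result.append((key, current[1], val))
    return result
```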
|
2502.10875
|
A Geometric Approach to Personalized Recommendation with Set-Theoretic
Constraints Using Box Embeddings
|
cs.IR cs.AI cs.LG
|
Personalized item recommendation typically suffers from data sparsity, which
is most often addressed by learning vector representations of users and items
via low-rank matrix factorization. While this effectively densifies the matrix
by assuming users and movies can be represented by linearly dependent latent
features, it does not capture more complicated interactions. For example,
vector representations struggle with set-theoretic relationships, such as
negation and intersection, e.g. recommending a movie that is "comedy and
action, but not romance". In this work, we formulate the problem of
personalized item recommendation as matrix completion where rows are
set-theoretically dependent. To capture this set-theoretic dependence we
represent each user and attribute by a hyper-rectangle or box (i.e. a Cartesian
product of intervals). Box embeddings can intuitively be understood as
trainable Venn diagrams, and thus not only inherently represent similarity (via
the Jaccard index), but also naturally and faithfully support arbitrary
set-theoretic relationships. Queries involving set-theoretic constraints can be
efficiently computed directly on the embedding space by performing geometric
operations on the representations. We empirically demonstrate the superiority
of box embeddings over vector-based neural methods on both simple and complex
item recommendation queries by up to 30% overall.
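As a hedged illustration (2-D boxes with made-up coordinates, not learned embeddings), intersection and Jaccard similarity of boxes reduce to simple coordinate-wise operations:

```python
import numpy as np

def box_volume(lo, hi):
    # Volume of an axis-aligned box; zero if the box is degenerate.
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

def box_intersection(a_lo, a_hi, b_lo, b_hi):
    # The intersection of two boxes is again a box (possibly empty).
    return np.maximum(a_lo, b_lo), np.minimum(a_hi, b_hi)

def box_jaccard(a_lo, a_hi, b_lo, b_hi):
    i_lo, i_hi = box_intersection(a_lo, a_hi, b_lo, b_hi)
    inter = box_volume(i_lo, i_hi)
    union = box_volume(a_lo, a_hi) + box_volume(b_lo, b_hi) - inter
    return inter / union if union > 0 else 0.0

# "comedy AND action": intersect the two attribute boxes; a "NOT romance"
# constraint would additionally require zero overlap with the romance box.
comedy = (np.array([0.0, 0.0]), np.array([2.0, 1.0]))
action = (np.array([1.0, 0.0]), np.array([3.0, 1.0]))
```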
|
2502.10876
|
Super-resolution image reconstruction via total variation-based image
deconvolution: a majorization-minimization approach
|
cs.CV
|
This work aims to reconstruct image sequences with Total Variation regularity
in super-resolution. We consider, in particular, images of scenes for which the
point-to-point image transformation is a plane projective transformation. We
first describe the super-resolution image's imaging observation model, an
interpolation and Fusion estimator, and Projection onto Convex Sets. We
estimate motion by computing the optical flow of the image sequence with the
Horn-Schunck algorithm. We then propose a Total Variation regularizer via a
Majorization-Minimization approach to obtain a suitable result.
Super-resolution restoration from motion measurements is also discussed.
Finally, the simulation section demonstrates the power of the proposed
methodology. As expected, this model does not give real-time results, as seen
in the numerical experiments section, but it is the cornerstone for future
approaches.
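The Majorization-Minimization principle behind such a TV regularizer can be illustrated on the smallest possible case: majorize |x| <= x^2/(2|x_k|) + |x_k|/2 and minimize the quadratic surrogate in closed form. The scalar sketch below is didactic, not the paper's super-resolution algorithm; its fixed point recovers the familiar soft-threshold solution.

```python
def mm_l1_prox(y, lam, iters=200, eps=1e-12):
    """Solve min_x 0.5*(x - y)**2 + lam*|x| by majorization-minimization:
    majorize |x| <= x**2/(2*|x_k|) + |x_k|/2, then minimize the quadratic
    surrogate in closed form. The fixed point is the soft-threshold of y."""
    x = y
    for _ in range(iters):
        if abs(x) < eps:
            return 0.0  # the iterate has collapsed onto the sparse solution
        x = y / (1.0 + lam / abs(x))
    return x
```

For y = 2 and lam = 0.5 the iterates converge to 1.5, i.e. sign(y) * max(|y| - lam, 0); applying the same surrogate to every finite difference of an image yields the quadratic subproblems of TV-regularized reconstruction.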
|
2502.10878
|
Broadcast Channel Cooperative Gain: An Operational Interpretation of
Partial Information Decomposition
|
cs.IT cs.AI cs.LG math.IT
|
Partial information decomposition has recently found applications in
biological signal processing and machine learning. Despite its impacts, the
decomposition was introduced through an informal and heuristic route, and its
exact operational meaning is unclear. In this work, we fill this gap by
connecting partial information decomposition to the capacity of the broadcast
channel, which has been well-studied in the information theory literature. We
show that the synergistic information in the decomposition can be rigorously
interpreted as the cooperative gain, or a lower bound of this gain, on the
corresponding broadcast channel. This interpretation can help practitioners to
better explain and expand the applications of the partial information
decomposition technique.
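The role of synergy can be made concrete with the textbook XOR example: each input alone carries zero information about the output, yet jointly they determine it, so the entire bit is synergistic. A minimal sketch (function and variable names are ours, not the paper's):

```python
from collections import defaultdict
from itertools import product
from math import log2

def mutual_info(joint):
    # joint: {(x, y): p}. Returns I(X; Y) in bits.
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in joint.items():
        px[x] += p
        py[y] += p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Uniform input bits X1, X2 and output Y = X1 XOR X2.
j1, j2, j12 = defaultdict(float), defaultdict(float), defaultdict(float)
for x1, x2 in product((0, 1), repeat=2):
    p, y = 0.25, x1 ^ x2
    j1[(x1, y)] += p
    j2[(x2, y)] += p
    j12[((x1, x2), y)] += p

i1, i2, i12 = mutual_info(j1), mutual_info(j2), mutual_info(j12)
# i1 = i2 = 0 bits, but i12 = 1 bit: with no unique or redundant parts,
# the whole bit is synergistic information.
```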
|
2502.10881
|
CiteCheck: Towards Accurate Citation Faithfulness Detection
|
cs.CL
|
Citation faithfulness detection is critical for enhancing retrieval-augmented
generation (RAG) systems, yet large-scale Chinese datasets for this task are
scarce. Existing methods face prohibitive costs due to the need for manually
annotated negative samples. To address this, we introduce the first large-scale
Chinese dataset CiteCheck for citation faithfulness detection, constructed via
a cost-effective approach using two-stage manual annotation. This method
balances positive and negative samples while significantly reducing annotation
expenses. CiteCheck comprises training and test splits. Experiments demonstrate
that: (1) the test samples are highly challenging, with even state-of-the-art
LLMs failing to achieve high accuracy; and (2) training data augmented with
LLM-generated negative samples enables smaller models to attain strong
performance using parameter-efficient fine-tuning. CiteCheck provides a robust
foundation for advancing citation faithfulness detection in Chinese RAG
systems. The dataset is publicly available to facilitate research.
|
2502.10883
|
Learning Identifiable Structures Helps Avoid Bias in DNN-based
Supervised Causal Learning
|
cs.LG cs.AI stat.ME
|
Causal discovery is a structured prediction task that aims to predict causal
relations among variables based on their data samples. Supervised Causal
Learning (SCL) is an emerging paradigm in this field. Existing Deep Neural
Network (DNN)-based methods commonly adopt the "Node-Edge approach", in which
the model first computes an embedding vector for each variable-node, then uses
these variable-wise representations to concurrently and independently predict
for each directed causal-edge. In this paper, we first show that this
architecture has some systematic bias that cannot be mitigated regardless of
model size and data size. We then propose SiCL, a DNN-based SCL method that
predicts a skeleton matrix together with a v-tensor (a third-order tensor
representing the v-structures). According to the Markov Equivalence Class (MEC)
theory, both the skeleton and the v-structures are identifiable causal
structures under the canonical MEC setting, so predictions about skeleton and
v-structures do not suffer from the identifiability limit in causal discovery,
thus SiCL can avoid the systematic bias in Node-Edge architecture, and enable
consistent estimators for causal discovery. Moreover, SiCL is also equipped
with a specially designed pairwise encoder module with a unidirectional
attention layer to model both internal and external relationships of pairs of
nodes. Experimental results on both synthetic and real-world benchmarks show
that SiCL significantly outperforms other DNN-based SCL approaches.
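The identifiable targets SiCL predicts can be illustrated directly: given a DAG, the skeleton is its set of undirected adjacencies, and a v-structure is a collider x -> z <- y whose two parents are non-adjacent. A minimal sketch (ours, not the paper's code) extracting both:

```python
def skeleton_and_v_structures(parents):
    """Given a DAG as {node: set_of_parents}, return its skeleton
    (undirected adjacencies) and its v-structures (x, z, y) with
    x -> z <- y and x, y non-adjacent - the structures identifiable
    from the Markov Equivalence Class."""
    skeleton = set()
    for child, ps in parents.items():
        for p in ps:
            skeleton.add(frozenset((p, child)))
    v_structures = set()
    for z, ps in parents.items():
        ordered = sorted(ps)
        for i, x in enumerate(ordered):
            for y in ordered[i + 1:]:
                if frozenset((x, y)) not in skeleton:
                    v_structures.add((x, z, y))
    return skeleton, v_structures

# A collider X -> Z <- Y has one v-structure; a chain X -> Z -> Y has none,
# which is why the chain's edge directions are not identifiable.
_, vs_collider = skeleton_and_v_structures({"X": set(), "Y": set(), "Z": {"X", "Y"}})
_, vs_chain = skeleton_and_v_structures({"X": set(), "Z": {"X"}, "Y": {"Z"}})
```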
|
2502.10886
|
MET-Bench: Multimodal Entity Tracking for Evaluating the Limitations of
Vision-Language and Reasoning Models
|
cs.CL
|
Entity tracking is a fundamental challenge in natural language understanding,
requiring models to maintain coherent representations of entities. Previous
work has benchmarked entity tracking performance in purely text-based tasks. We
introduce MET-Bench, a multimodal entity tracking benchmark designed to
evaluate the ability of vision-language models to track entity states across
modalities. Using two structured domains, Chess and the Shell Game, we assess
how effectively current models integrate textual and image-based state updates.
Our findings reveal a significant performance gap between text-based and
image-based tracking and that this performance gap stems from deficits in
visual reasoning rather than perception. We further show that explicit
text-based reasoning strategies improve performance, yet substantial
limitations remain, especially in long-horizon multimodal scenarios. Our
results highlight the need for improved multimodal representations and
reasoning techniques to bridge the gap between textual and visual entity
tracking.
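For the Shell Game domain, the ground-truth entity state a tracker must reproduce is a simple deterministic update: the ball's position changes only when a swap involves its cup. A minimal sketch of such a symbolic reference tracker (ours, not MET-Bench code):

```python
def track_ball(start, swaps):
    # Ground-truth entity state for the shell game: the ball sits under cup
    # `start`; each swap (a, b) exchanges two cups, carrying the ball along
    # whenever its cup is involved.
    pos = start
    for a, b in swaps:
        if pos == a:
            pos = b
        elif pos == b:
            pos = a
    return pos

# Ball under cup 0; swaps (0, 1) then (1, 2) leave it under cup 2.
final_cup = track_ball(0, [(0, 1), (1, 2)])
```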
|
2502.10887
|
RemInD: Remembering Anatomical Variations for Interpretable Domain
Adaptive Medical Image Segmentation
|
cs.CV
|
This work presents a novel Bayesian framework for unsupervised domain
adaptation (UDA) in medical image segmentation. While prior works have explored
this clinically significant task using various strategies of domain alignment,
they often lack an explicit and explainable mechanism to ensure that target
image features capture meaningful structural information. Besides, these
methods are prone to the curse of dimensionality, inevitably leading to
challenges in interpretability and computational efficiency. To address these
limitations, we propose RemInD, a framework inspired by human adaptation.
RemInD learns a domain-agnostic latent manifold, characterized by several
anchors, to memorize anatomical variations. By mapping images onto this
manifold as weighted anchor averages, our approach ensures realistic and
reliable predictions. This design mirrors how humans develop representative
components to understand images and then retrieve component combinations from
memory to guide segmentation. Notably, model prediction is determined by two
explainable factors: a low-dimensional anchor weight vector, and a spatial
deformation. This design facilitates computationally efficient and
geometry-adherent adaptation by aligning weight vectors between domains on a
probability simplex. Experiments on two public datasets, encompassing cardiac
and abdominal imaging, demonstrate the superiority of RemInD, which achieves
state-of-the-art performance using a single alignment approach, outperforming
existing methods that often rely on multiple complex alignment strategies.
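The anchor-weighting idea can be sketched abstractly: a softmax maps scores onto the probability simplex, and the latent code is the resulting weighted average of anchor vectors. The functions below are an illustrative sketch under that reading, not RemInD's implementation:

```python
from math import exp

def simplex_weights(scores):
    # Softmax: maps arbitrary real scores onto the probability simplex
    # (non-negative weights summing to one).
    m = max(scores)
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def anchor_average(weights, anchors):
    # Latent code as the weighted average of anchor vectors.
    dim = len(anchors[0])
    return [sum(w * a[d] for w, a in zip(weights, anchors)) for d in range(dim)]

# Two 2-D anchors; a weight vector on the simplex picks a point between them.
anchors = [[1.0, 2.0], [3.0, 4.0]]
weights = simplex_weights([2.0, 0.0])
code = anchor_average(weights, anchors)
```

Because every image reduces to a low-dimensional weight vector on the simplex, aligning domains amounts to aligning these weight distributions, which is what keeps the adaptation interpretable and cheap.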
|
2502.10889
|
Nonlinear Feedback Linearization and LQG/LTR Control: A Comparative
Study for a Single-Machine Infinite-Bus System
|
eess.SY cs.SY math.OC
|
This paper presents a comparative study of three advanced control strategies
for a single-machine infinite-bus (SMIB) system: the nonlinear feedback
linearizing controller (NFLC), the integral-NFLC (INFLC), and the
linear-quadratic-Gaussian/loop transfer recovery (LQG/LTR) control. The NFLC
and INFLC techniques use exact feedback linearization to precisely cancel the
SMIB system nonlinearities, enabling the use of decentralized, linear, and
optimal controllers for the decoupled generator and turbine-governor systems
while remaining unaffected by the SMIB system's internal dynamics and operating
conditions. In contrast, the LQG/LTR approach employs an enhanced Kalman
filter, designed using the LTR procedure and a detailed frequency-domain
loop-shaping analysis, to achieve a reasonable trade-off between optimal
performance, noise/disturbance rejection, robustness recovery, and stability
margins for the SMIB system. We provide a control synthesis framework for
constructing practical, verifiable, scalable, and resilient linear and
nonlinear controllers for SMIB and multi-machine power systems by utilizing a
high-fidelity plant model for validation, a reduced-order control-design model,
and the correlations between the two models' control inputs. Rigorous
simulations and comparative analysis of the proposed controllers and a
full-state linear-quadratic regulator show the benefits, constraints, and
trade-offs of each controller in terms of transient response, steady-state
error, robustness, rotor angle stability, frequency control, and voltage
regulation under different operating conditions. Ultimately, this study aims to
guide the selection of appropriate control strategies for large-scale power
systems, enhancing the overall resilience and reliability of the electric grid.
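As a minimal illustration of the full-state LQR baseline used in the comparison, the scalar continuous-time case admits a closed-form stabilizing solution of the algebraic Riccati equation 2aP - (b^2/r)P^2 + q = 0. This toy sketch is not the paper's SMIB design:

```python
from math import sqrt

def scalar_lqr(a, b, q, r):
    """Stabilizing root of the scalar continuous-time ARE
    2*a*P - (b**2 / r) * P**2 + q = 0, and the optimal gain K = b*P/r
    for plant x' = a*x + b*u with cost integral of q*x**2 + r*u**2."""
    p = r * (a + sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r, p

# Unstable plant a = 1 stabilized by the LQR gain: a - b*K = -sqrt(2) < 0.
K, P = scalar_lqr(1.0, 1.0, 1.0, 1.0)  # P = 1 + sqrt(2)
closed_loop = 1.0 - 1.0 * K
```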
|
2502.10894
|
Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation
|
cs.RO cs.AI cs.LG
|
Achieving athletic loco-manipulation on robots requires moving beyond
traditional tracking rewards - which simply guide the robot along a reference
trajectory - to task rewards that drive truly dynamic, goal-oriented behaviors.
Commands such as "throw the ball as far as you can" or "lift the weight as
quickly as possible" compel the robot to exhibit the agility and power inherent
in athletic performance. However, training solely with task rewards introduces
two major challenges: these rewards are prone to exploitation (reward hacking),
and the exploration process can lack sufficient direction. To address these
issues, we propose a two-stage training pipeline. First, we introduce the
Unsupervised Actuator Net (UAN), which leverages real-world data to bridge the
sim-to-real gap for complex actuation mechanisms without requiring access to
torque sensing. UAN mitigates reward hacking by ensuring that the learned
behaviors remain robust and transferable. Second, we use a pre-training and
fine-tuning strategy that leverages reference trajectories as initial hints to
guide exploration. With these innovations, our robot athlete learns to lift,
throw, and drag with remarkable fidelity from simulation to reality.
|
2502.10896
|
Developing Conversational Speech Systems for Robots to Detect Speech
Biomarkers of Cognition in People Living with Dementia
|
cs.CL
|
This study presents the development and testing of a conversational speech
system designed for robots to detect speech biomarkers indicative of cognitive
impairments in people living with dementia (PLwD). The system integrates a
backend Python WebSocket server and a central core module with a large language
model (LLM) fine-tuned for dementia to process user input and generate robotic
conversation responses in real time (in less than 1.5 seconds). The frontend user
interface, a Progressive Web App (PWA), displays information and biomarker
score graphs on a smartphone in real-time to human users (PLwD, caregivers,
clinicians). Six speech biomarkers based on the existing literature - Altered
Grammar, Pragmatic Impairments, Anomia, Disrupted Turn-Taking, Slurred
Pronunciation, and Prosody Changes - were developed for the robot conversation
system using two datasets, one that included conversations of PLwD with a human
clinician (DementiaBank dataset) and one that included conversations of PLwD
with a robot (Indiana dataset). We also created a composite speech biomarker
that combined all six individual biomarkers into a single score. The speech
system's performance was first evaluated on the DementiaBank dataset showing
moderate correlation with MMSE scores, with the composite biomarker score
outperforming individual biomarkers. Analysis of the Indiana dataset revealed
higher and more variable biomarker scores, suggesting potential differences due
to study populations (e.g. severity of dementia) and the conversational
scenario (human-robot conversations are different from human-human). The
findings underscore the need for further research on the impact of
conversational scenarios on speech biomarkers and the potential clinical
applications of robotic speech systems.
|
2502.10899
|
Breaking Down the Hierarchy: A New Approach to Leukemia Classification
|
cs.CV cs.AI cs.LG
|
The complexities inherent to leukemia, a multifaceted cancer affecting white
blood cells, pose considerable diagnostic and treatment challenges, primarily
due to reliance on laborious morphological analyses and expert judgment that
are susceptible to errors. Addressing these challenges, this study presents a
refined, comprehensive strategy leveraging advanced deep-learning techniques
for the classification of leukemia subtypes. We commence by developing a
hierarchical label taxonomy, paving the way for differentiating between various
subtypes of leukemia. The research further introduces a novel hierarchical
approach inspired by clinical procedures capable of accurately classifying
diverse types of leukemia alongside reactive and healthy cells. An integral
part of this study involves a meticulous examination of the performance of
Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) as
classifiers. The proposed method exhibits an impressive success rate, achieving
approximately 90% accuracy across all leukemia subtypes, as substantiated by
our experimental results. A visual representation of the experimental findings
is provided to enhance the model's explainability and aid in understanding the
classification process.
|
2502.10906
|
PCGRLLM: Large Language Model-Driven Reward Design for Procedural
Content Generation Reinforcement Learning
|
cs.AI
|
Reward design plays a pivotal role in the training of game AIs, requiring
substantial domain-specific knowledge and human effort. In recent years,
several studies have explored reward generation for training game agents and
controlling robots using large language models (LLMs). In the content
generation literature, there has been early work on generating reward functions
for reinforcement learning agent generators. This work introduces PCGRLLM, an
extended architecture based on earlier work, which employs a feedback mechanism
and several reasoning-based prompt engineering techniques. We evaluate the
proposed method on a story-to-reward generation task in a two-dimensional
environment using two state-of-the-art LLMs, demonstrating the generalizability
of our approach. Our experiments provide insightful evaluations that
demonstrate the capabilities of LLMs essential for content generation tasks.
The results highlight significant performance improvements of 415% and 40%
respectively, depending on the zero-shot capabilities of the language model.
Our work demonstrates the potential to reduce human dependency in game AI
development, while supporting and enhancing creative processes.
|
2502.10907
|
Local Multiple Traces Formulation for Heterogeneous Electromagnetic
Scattering: Implementation and Preconditioning
|
cs.CE
|
We consider the three-dimensional time-harmonic electromagnetic (EM) wave
scattering transmission problem involving heterogeneous scatterers. The fields
are approximated using the local multiple traces formulation (MTF), originally
introduced for acoustic scattering. This scheme assigns independent boundary
unknowns to each subdomain and weakly enforces Calderón identities along with
interface transmission conditions. As a result, the MTF effectively handles
shared points or edges among multiple subdomains, while supporting various
preconditioning and parallelization strategies. Nevertheless, implementing
standard solvers presents significant challenges, particularly in managing the
degrees of freedom associated with subdomains and their interfaces. To address
these difficulties, we propose a novel framework that suitably defines
approximation spaces and enables the efficient exchange of normal vectors
across subdomain boundaries. This framework leverages the skeleton mesh,
representing the union of all interfaces, as the computational backbone, and
constitutes the first scalable implementation of the EM MTF. Furthermore, we
conduct several numerical experiments, exploring the effects of increasing
subdomains and block On-Surface-Radiation-Condition (OSRC) preconditioning, to
validate our approach and provide insights for future developments.
|
2502.10908
|
Automatic Quality Assessment of First Trimester Crown-Rump-Length
Ultrasound Images
|
cs.CV cs.AI cs.LG
|
Fetal gestational age (GA) is vital clinical information that is estimated
during pregnancy in order to assess fetal growth. This is usually performed by
measuring the crown-rump-length (CRL) on an ultrasound image in the Dating scan
which is then correlated with fetal age and growth trajectory. A major issue
when performing the CRL measurement is ensuring that the image is acquired at
the correct view, otherwise it could be misleading. Although clinical
guidelines specify the criteria for the correct CRL view, sonographers may not
regularly adhere to such rules. In this paper, we propose a new deep
learning-based solution that is able to verify the adherence of a CRL image to
clinical guidelines in order to assess image quality and facilitate accurate
estimation of GA. We first segment out important fetal structures then use the
localized structures to perform a clinically-guided mapping that verifies the
adherence of criteria. The segmentation method combines the benefits of
Convolutional Neural Network (CNN) and the Vision Transformer (ViT) to segment
fetal structures in ultrasound images and localize important fetal landmarks.
For segmentation purposes, we compare our proposed work with UNet and show that
our CNN/ViT-based method outperforms an optimized version of UNet. Furthermore,
we compare the output of the mapping with classification CNNs when assessing
the clinical criteria and the overall acceptability of CRL images. We show that
the proposed mapping is not only explainable but also more accurate than the
best performing classification CNNs.
|