| id | title | categories | abstract |
|---|---|---|---|
2501.14850
|
On the locality bias and results in the Long Range Arena
|
cs.CL cs.AI
|
The Long Range Arena (LRA) benchmark was designed to evaluate the performance
of Transformer improvements and alternatives in long-range dependency modeling
tasks. The Transformer and its main variants performed poorly on this
benchmark, and a new series of architectures such as State Space Models (SSMs)
gained some traction, greatly outperforming Transformers in the LRA. Recent
work has shown that with a denoising pre-training phase, Transformers can
achieve results in the LRA competitive with these new architectures. In this
work, we discuss and explain the superiority of architectures such as MEGA and
SSMs in the Long Range Arena, as well as the recent improvement in the results
of Transformers, pointing to the positional and local nature of the tasks. We
show that while the LRA is a benchmark for long-range dependency modeling, in
reality most of the performance comes from short-range dependencies. Using
training techniques to mitigate data inefficiency, Transformers are able to
reach state-of-the-art performance with proper positional encoding. In
addition, with the same techniques, we were able to remove all restrictions
from SSM convolutional kernels and learn fully parameterized convolutions
without decreasing performance, suggesting that the design choices behind SSMs
simply added inductive biases and learning efficiency for these particular
tasks. Our insights indicate that LRA results should be interpreted with
caution and call for a redesign of the benchmark.
|
2501.14851
|
JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning
in Large Language Models
|
cs.CL cs.AI cs.LG cs.LO
|
Logical reasoning is a critical component of Large Language Models (LLMs),
and substantial research efforts in recent years have aimed to enhance their
deductive reasoning capabilities. However, existing deductive reasoning
benchmarks, which are crucial for evaluating and advancing LLMs, are inadequate
due to their lack of task complexity, presence of prior knowledge as a
confounder, and superficial error analysis. To address these deficiencies, we
introduce JustLogic, a synthetically generated deductive reasoning benchmark
designed for rigorous evaluation of LLMs. JustLogic is (i) highly complex,
capable of generating a diverse range of linguistic patterns, vocabulary, and
argument structures; (ii) prior knowledge independent, eliminating the
advantage of models possessing prior knowledge and ensuring that only deductive
reasoning is used to answer questions; and (iii) capable of in-depth error
analysis on the heterogeneous effects of reasoning depth and argument form on
model accuracy. Our experimental results on JustLogic reveal that most
state-of-the-art (SOTA) LLMs perform significantly worse than the human
average, demonstrating substantial room for model improvement. All code and
data are available at https://github.com/michaelchen-lab/JustLogic
|
2501.14856
|
Noise-conditioned Energy-based Annealed Rewards (NEAR): A Generative
Framework for Imitation Learning from Observation
|
cs.RO cs.AI
|
This paper introduces a new imitation learning framework based on
energy-based generative models capable of learning complex, physics-dependent,
robot motion policies through state-only expert motion trajectories. Our
algorithm, called Noise-conditioned Energy-based Annealed Rewards (NEAR),
constructs several perturbed versions of the expert's motion data distribution
and learns smooth and well-defined representations of the data distribution's
energy function using denoising score matching. We propose to use these learnt
energy functions as reward functions to learn imitation policies via
reinforcement learning. We also present a strategy to gradually switch between
the learnt energy functions, ensuring that the learnt rewards are always
well-defined in the manifold of policy-generated samples. We evaluate our
algorithm on complex humanoid tasks such as locomotion and martial arts and
compare it with state-only adversarial imitation learning algorithms like
Adversarial Motion Priors (AMP). Our framework sidesteps the optimisation
challenges of adversarial imitation learning techniques and produces results
comparable to AMP in several quantitative metrics across multiple imitation
settings.
|
2501.14859
|
Dynamic Adaptation of LoRA Fine-Tuning for Efficient and Task-Specific
Optimization of Large Language Models
|
cs.CL cs.LG
|
This paper presents dynamic LoRA, a novel fine-tuning methodology for large
language models. Building on the standard Low-Rank Adaptation (LoRA) framework,
this methodology further adds dynamic adaptation mechanisms to improve
efficiency and performance. The key contribution of dynamic LoRA lies within
its adaptive weight allocation mechanism coupled with an input feature-based
adaptive strategy. These enhancements allow for a more precise fine-tuning
process that is more tailored to specific tasks. Traditional LoRA methods use
static adapter settings, not considering the different importance of model
layers. In contrast, dynamic LoRA introduces a mechanism that dynamically
evaluates the layer's importance during fine-tuning. This evaluation enables
the reallocation of adapter parameters to fit the unique demands of each
individual task, which leads to better optimization results. Another gain in
flexibility arises from the consideration of the input feature distribution,
which helps the model generalize better when faced with complicated and diverse
datasets. The joint approach boosts not only the performance over each single
task but also the generalization ability of the model. The efficiency of
dynamic LoRA was validated in experiments on benchmark datasets such as GLUE,
where it achieved 88.1% accuracy with an F1-score of 87.3%. Notably, these
improvements came at only a slight increase in computational cost: 0.1% more
resources than standard LoRA.
This balance between performance and efficiency positions dynamic LoRA as a
practical, scalable solution for fine-tuning LLMs, especially in
resource-constrained scenarios. To take it a step further, its adaptability
makes it a promising foundation for much more advanced applications, including
multimodal tasks.
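The adaptive weight-allocation idea described in this abstract can be sketched minimally: distribute a fixed adapter-rank budget across layers in proportion to an estimated layer importance. The scoring inputs, budget, and function names below are illustrative assumptions, not the paper's implementation:

```python
def allocate_lora_ranks(importance, total_rank_budget, min_rank=1):
    """Distribute a fixed LoRA rank budget across layers in proportion to
    per-layer importance scores. Illustrative sketch only: the paper's
    actual dynamic mechanism re-evaluates importance during fine-tuning."""
    total = sum(importance)
    # Proportional allocation, floored at min_rank so every layer keeps
    # at least a minimal adapter.
    return [max(min_rank, round(total_rank_budget * s / total))
            for s in importance]

# Example: four layers whose (hypothetical) scores suggest the middle
# layers matter most for the current task.
importance = [0.1, 0.4, 0.4, 0.1]
ranks = allocate_lora_ranks(importance, total_rank_budget=32)
```

Re-running the allocation as importance estimates evolve during training would give the "dynamic" behavior the abstract describes.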
|
2501.14861
|
A Deep-Unfolding-Optimized Coordinate-Descent Data-Detector ASIC for
mmWave Massive MIMO
|
cs.IT eess.SP math.IT
|
We present a 22 nm FD-SOI (fully depleted silicon-on-insulator)
application-specific integrated circuit (ASIC) implementation of a novel
soft-output Gram-domain block coordinate descent (GBCD) data detector for
massive multi-user (MU) multiple-input multiple-output (MIMO) systems. The ASIC
simultaneously addresses the high throughput requirements for millimeter wave
(mmWave) communication, stringent area and power budget per subcarrier in an
orthogonal frequency-division multiplexing (OFDM) system, and error-rate
performance challenges posed by realistic mmWave channels. The proposed GBCD
algorithm utilizes a posterior mean estimate (PME) denoiser and is optimized
using deep unfolding, which results in superior error-rate performance even in
scenarios with highly correlated channels or where the number of user equipment
(UE) data streams is comparable to the number of base station (BS) antennas. The
fabricated GBCD ASIC supports up to 16 UEs transmitting QPSK to 256-QAM symbols
to a 128-antenna BS, and achieves a peak throughput of 7.1 Gbps at 367 mW. The
core area is only 0.97 mm$^2$ thanks to a reconfigurable array of processing
elements that enables extensive resource sharing. Measurement results
demonstrate that the proposed GBCD data-detector ASIC achieves best-in-class
throughput and area efficiency.
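A plain Gram-domain coordinate descent for least-squares detection can be sketched as follows; the actual GBCD algorithm additionally applies a PME denoiser and deep-unfolding-optimized parameters, so this shows only the core coordinate update, not the paper's detector:

```python
def gram_coordinate_descent(G, y_mf, iters=50):
    """Coordinate descent on the least-squares detection problem
    x_hat = argmin ||y - H x||^2, expressed in the Gram domain through
    G = H^T H and the matched-filter output y_mf = H^T y.
    Illustrative only (real MIMO detection is complex-valued and adds
    denoising/clipping to the symbol constellation)."""
    n = len(y_mf)
    x = [0.0] * n
    for _ in range(iters):
        for k in range(n):
            # Closed-form update of coordinate k with all others fixed.
            residual = y_mf[k] - sum(G[k][j] * x[j]
                                     for j in range(n) if j != k)
            x[k] = residual / G[k][k]
    return x

# Toy 2x2 system; the exact least-squares solution is x = [1, 1].
G = [[2.0, 0.5], [0.5, 1.0]]
y_mf = [2.5, 1.5]
x_hat = gram_coordinate_descent(G, y_mf)
```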
|
2501.14877
|
DrawEduMath: Evaluating Vision Language Models with Expert-Annotated
Students' Hand-Drawn Math Images
|
cs.CL cs.CV
|
In real-world settings, vision language models (VLMs) should robustly handle
naturalistic, noisy visual content as well as domain-specific language and
concepts. For example, K-12 educators using digital learning platforms may need
to examine and provide feedback across many images of students' math work. To
assess the potential of VLMs to support educators in settings like this one, we
introduce DrawEduMath, an English-language dataset of 2,030 images of students'
handwritten responses to K-12 math problems. Teachers provided detailed
annotations, including free-form descriptions of each image and 11,661
question-answer (QA) pairs. These annotations capture a wealth of pedagogical
insights, ranging from students' problem-solving strategies to the composition
of their drawings, diagrams, and writing. We evaluate VLMs on teachers' QA
pairs, as well as 44,362 synthetic QA pairs derived from teachers' descriptions
using language models (LMs). We show that even state-of-the-art VLMs leave much
room for improvement on DrawEduMath questions. We also find that synthetic QAs,
though imperfect, can yield similar model rankings as teacher-written QAs. We
release DrawEduMath to support the evaluation of VLMs' abilities to reason
mathematically over images gathered with educational contexts in mind.
|
2501.14883
|
Verify with Caution: The Pitfalls of Relying on Imperfect Factuality
Metrics
|
cs.CL cs.LG
|
Improvements in large language models have led to increasing optimism that
they can serve as reliable evaluators of natural language generation outputs.
In this paper, we challenge this optimism by thoroughly re-evaluating five
state-of-the-art factuality metrics on a collection of 11 datasets for
summarization, retrieval-augmented generation, and question answering. We find
that these evaluators are inconsistent with each other and often misestimate
system-level performance, both of which can lead to a variety of pitfalls. We
further show that these metrics exhibit biases against highly paraphrased
outputs and outputs that draw upon faraway parts of the source documents. We
urge users of these factuality metrics to proceed with caution and manually
validate the reliability of these metrics in their domain of interest before
proceeding.
|
2501.14885
|
Hybrid Interpretable Deep Learning Framework for Skin Cancer Diagnosis:
Integrating Radial Basis Function Networks with Explainable AI
|
cs.CV
|
Skin cancer is one of the most prevalent and potentially life-threatening
diseases worldwide, necessitating early and accurate diagnosis to improve
patient outcomes. Conventional diagnostic methods, reliant on clinical
expertise and histopathological analysis, are often time-intensive, subjective,
and prone to variability. To address these limitations, we propose a novel
hybrid deep learning framework that integrates convolutional neural networks
(CNNs) with Radial Basis Function (RBF) Networks to achieve high classification
accuracy and enhanced interpretability. The motivation for incorporating RBF
Networks lies in their intrinsic interpretability and localized response to
input features, which make them well-suited for tasks requiring transparency
and fine-grained decision-making. Unlike traditional deep learning models that
rely on global feature representations, RBF Networks allow for mapping segments
of images to chosen prototypes, exploiting salient features within a single
image. This enables clinicians to trace predictions to specific, interpretable
patterns. The framework incorporates segmentation-based feature extraction,
active learning for prototype selection, and K-Medoids clustering to focus on
these salient features. Evaluations on the ISIC 2016 and ISIC 2017 datasets
demonstrate the model's effectiveness, achieving classification accuracies of
83.02\% and 72.15\% using ResNet50, respectively, and outperforming VGG16-based
configurations. By generating interpretable explanations for predictions, the
framework aligns with clinical workflows, bridging the gap between predictive
performance and trustworthiness. This study highlights the potential of hybrid
models to deliver actionable insights, advancing the development of reliable
AI-assisted diagnostic tools for high-stakes medical applications.
|
2501.14889
|
Iterative Feature Space Optimization through Incremental Adaptive
Evaluation
|
cs.LG
|
Iterative feature space optimization involves systematically evaluating and
adjusting the feature space to improve downstream task performance. However,
existing works suffer from three key limitations: 1) overlooking differences
among data samples leads to evaluation bias; 2) tailoring feature spaces to
specific machine learning models results in overfitting and poor
generalization; 3) requiring the evaluator to be retrained from scratch during
each optimization iteration significantly reduces the overall efficiency of the
optimization process. To bridge these gaps, we propose a gEneralized Adaptive
feature Space Evaluator (EASE) to efficiently produce optimal and generalized
feature spaces. This framework consists of two key components: Feature-Sample
Subspace Generator and Contextual Attention Evaluator. The first component aims
to decouple the information distribution within the feature space to mitigate
evaluation bias. To achieve this, we first identify features most relevant to
prediction tasks and samples most challenging for evaluation based on feedback
from the subsequent evaluator. This decoupling strategy makes the evaluator
consistently target the most challenging aspects of the feature space. The
second component intends to incrementally capture evolving patterns of the
feature space for efficient evaluation. We propose a weighted-sharing
multi-head attention mechanism to encode key characteristics of the feature
space into an embedding vector for evaluation. Moreover, the evaluator is
updated incrementally, retaining prior evaluation knowledge while incorporating
new insights, as consecutive feature spaces during the optimization process
share partial information. Extensive experiments on fourteen real-world
datasets demonstrate the effectiveness of the proposed framework. Our code and
data are publicly available.
|
2501.14892
|
Causal Graphs Meet Thoughts: Enhancing Complex Reasoning in
Graph-Augmented LLMs
|
cs.AI cs.CL
|
In knowledge-intensive tasks, especially in high-stakes domains like medicine
and law, it is critical not only to retrieve relevant information but also to
provide causal reasoning and explainability. Large language models (LLMs) have
achieved remarkable performance in natural language understanding and
generation tasks. However, they often suffer from limitations such as
difficulty in incorporating new knowledge, generating hallucinations, and
explaining their reasoning process. To address these challenges, integrating
knowledge graphs with Graph Retrieval-Augmented Generation (Graph RAG) has
emerged as an effective solution. Traditional Graph RAG methods often rely on
simple graph traversal or semantic similarity, which do not capture causal
relationships or align well with the model's internal reasoning steps. This
paper proposes a novel pipeline that filters large knowledge graphs to
emphasize cause-effect edges, aligns the retrieval process with the model's
chain-of-thought (CoT), and enhances reasoning through multi-stage path
improvements. Experiments on medical question-answering tasks show consistent
gains, with up to a 10\% absolute improvement across multiple large language
models (LLMs). This approach demonstrates the value of combining causal
reasoning with stepwise retrieval, leading to more interpretable and logically
grounded solutions for complex queries.
|
2501.14894
|
Improving reliability of uncertainty-aware gaze estimation with
probability calibration
|
cs.CV
|
Current deep-learning-powered, appearance-based uncertainty-aware gaze
estimation models produce inconsistent and unreliable uncertainty estimates,
limiting their adoption in downstream applications. In this study, we
propose a workflow to improve the accuracy of uncertainty estimation using
probability calibration with a few post hoc samples. The probability
calibration process employs a simple secondary regression model to compensate
for inaccuracies in estimated uncertainties from the deep learning model.
Training of the secondary model is detached from the main deep learning model
and thus no expensive weight tuning is required. The added calibration process
is lightweight and relatively independent from the deep learning process,
making it fast to run and easy to implement. We evaluated the effectiveness of
the calibration process under four potential application scenarios with two
datasets that have distinctive image characteristics due to the data collection
setups. The calibration process is most effective when the calibration and
testing data share similar characteristics. Even under suboptimal circumstances
in which calibration and testing data differ, the calibration process can still
make corrections to reduce prediction errors in uncertainty estimates made by
uncalibrated models.
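As a minimal sketch of the post hoc calibration step, assuming a simple linear secondary model (the paper's exact regression model family may differ), one can fit the network's predicted uncertainties against observed absolute errors on a few held-out samples:

```python
def fit_linear_calibration(pred_sigma, abs_error):
    """Least-squares fit of abs_error ~ a * pred_sigma + b. Stands in for
    the paper's lightweight secondary regression model; the linear form
    is an assumption for illustration."""
    n = len(pred_sigma)
    mx = sum(pred_sigma) / n
    my = sum(abs_error) / n
    sxx = sum((x - mx) ** 2 for x in pred_sigma)
    sxy = sum((x - mx) * (y - my) for x, y in zip(pred_sigma, abs_error))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def calibrate(sigma, a, b):
    """Map a raw predicted uncertainty to a calibrated one."""
    return a * sigma + b

# Post hoc samples where the network under-reports uncertainty by ~2x.
pred = [0.1, 0.2, 0.3, 0.4]
err = [0.2, 0.4, 0.6, 0.8]
a, b = fit_linear_calibration(pred, err)
```

Because only `(a, b)` are fit, no weights of the main deep network are touched, matching the abstract's claim that calibration is detached from the main model.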
|
2501.14896
|
Glissando-Net: Deep sinGLe vIew category level poSe eStimation ANd 3D
recOnstruction
|
cs.CV
|
We present a deep learning model, dubbed Glissando-Net, to simultaneously
estimate the pose and reconstruct the 3D shape of objects at the category level
from a single RGB image. Previous works predominantly focused on either
estimating poses (often at the instance level) or reconstructing shapes, but
not both. Glissando-Net is composed of two auto-encoders that are jointly
trained, one for RGB images and the other for point clouds. We embrace two key
design choices in Glissando-Net to achieve a more accurate prediction of the 3D
shape and pose of the object given a single RGB image as input. First, we
augment the feature maps of the point cloud encoder and decoder with
transformed feature maps from the image decoder, enabling effective 2D-3D
interaction in both training and prediction. Second, we predict both the 3D
shape and pose of the object in the decoder stage. This way, we better utilize
the information in the 3D point clouds, present only in the training stage, to
train the network for more accurate prediction. We jointly train the two
encoder-decoders for RGB and point cloud data to learn how to pass latent
features to the point cloud decoder during inference. In testing, the encoder
of the 3D point cloud is discarded. The design of Glissando-Net is inspired by
codeSLAM. Unlike codeSLAM, which targets 3D reconstruction of scenes, we focus
on pose estimation and shape reconstruction of objects, and directly predict
the object pose and a pose invariant 3D reconstruction without the need of the
code optimization step. Extensive experiments, involving both ablation studies
and comparison with competing methods, demonstrate the efficacy of our proposed
method, which compares favorably with the state of the art.
|
2501.14905
|
Measuring and Mitigating Hallucinations in Vision-Language Dataset
Generation for Remote Sensing
|
cs.CV
|
Vision language models have achieved impressive results across various
fields. However, adoption in remote sensing remains limited, largely due to the
scarcity of paired image-text data. To bridge this gap, synthetic caption
generation has gained interest, traditionally relying on rule-based methods
that use metadata or bounding boxes. While these approaches provide some
description, they often lack the depth needed to capture complex wide-area
scenes. Large language models (LLMs) offer a promising alternative for
generating more descriptive captions, yet they can produce generic outputs and
are prone to hallucination. In this paper, we propose a new method to enhance
vision-language datasets for remote sensing by integrating maps as external
data sources, enabling the generation of detailed, context-rich captions.
Additionally, we present methods to measure and mitigate hallucinations in
LLM-generated text. We introduce fMoW-mm, a multimodal dataset incorporating
satellite imagery, maps, metadata, and text annotations. We demonstrate its
effectiveness for automatic target recognition in few-shot settings, achieving
superior performance compared to other vision-language remote sensing datasets.
|
2501.14906
|
What is a Relevant Signal-to-Noise Ratio for Numerical Differentiation?
|
eess.SY cs.SY
|
In applications that involve sensor data, a useful measure of signal-to-noise
ratio (SNR) is the ratio of the root-mean-squared (RMS) signal to the RMS
sensor noise. The present paper shows that, for numerical differentiation, the
traditional SNR is ineffective. In particular, it is shown that, for a harmonic
signal with harmonic sensor noise, a natural and relevant SNR is given by the
ratio of the RMS of the derivative of the signal to the RMS of the derivative
of the sensor noise. For a harmonic signal with white sensor noise, an
effective SNR is derived. Implications of these observations for signal
processing are discussed.
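The paper's observation can be checked numerically for a harmonic signal A sin(ωt) with harmonic sensor noise a sin(νt): the traditional RMS-based SNR is A/a, while the SNR of the derivatives is (Aω)/(aν), which collapses when the noise frequency ν is large. The amplitudes and frequencies below are illustrative:

```python
import math

def rms(samples):
    """Root-mean-square of a sampled waveform."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Harmonic signal A*sin(w*t) with harmonic sensor noise a*sin(v*t),
# where the noise frequency is far above the signal frequency.
A, w = 1.0, 1.0
a, v = 0.1, 100.0
ts = [i * 1e-3 for i in range(20000)]  # 20 s window, 1 kHz sampling

signal = [A * math.sin(w * t) for t in ts]
noise = [a * math.sin(v * t) for t in ts]
# Analytic derivatives of the two components.
dsignal = [A * w * math.cos(w * t) for t in ts]
dnoise = [a * v * math.cos(v * t) for t in ts]

snr_traditional = rms(signal) / rms(noise)   # ~ A/a = 10: looks benign
snr_derivative = rms(dsignal) / rms(dnoise)  # ~ (A*w)/(a*v) = 0.1
```

A traditional SNR of 10 suggests a clean measurement, yet the derivative of the noise dominates the derivative of the signal by a factor of 10, which is the paper's point.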
|
2501.14910
|
A cluster mean approach for topology optimization of natural frequencies
and bandgaps with simple/multiple eigenfrequencies
|
cs.CE math.OC
|
This study presents a novel approach utilizing cluster means to address the
non-differentiability issue arising from multiple eigenvalues in eigenfrequency
and bandgap optimization. By constructing symmetric functions of repeated
eigenvalues -- including cluster mean, p-norm and KS functions -- the study
confirms their differentiability when all repeated eigenvalues are included,
i.e., clusters are complete. Numerical sensitivity analyses indicate that,
under some symmetry conditions, multiple eigenvalues may also be differentiable
with respect to the symmetric design variables. Notably, regardless of enforced symmetry,
the cluster mean approach guarantees differentiability of multiple eigenvalues,
offering a reliable solution strategy in eigenfrequency topology optimization.
Optimization schemes are proposed to maximize eigenfrequencies and bandgaps by
integrating cluster means with the bound formulations. The efficacy of the
proposed method is demonstrated through numerical examples on 2D and 3D solids
and plate structures. All optimization results demonstrate smooth convergence
under simple/multiple eigenvalues.
|
2501.14912
|
Feasible Learning
|
cs.LG cs.AI
|
We introduce Feasible Learning (FL), a sample-centric learning paradigm where
models are trained by solving a feasibility problem that bounds the loss for
each training sample. In contrast to the ubiquitous Empirical Risk Minimization
(ERM) framework, which optimizes for average performance, FL demands
satisfactory performance on every individual data point. Since any model that
meets the prescribed performance threshold is a valid FL solution, the choice
of optimization algorithm and its dynamics play a crucial role in shaping the
properties of the resulting solutions. In particular, we study a primal-dual
approach which dynamically re-weights the importance of each sample during
training. To address the challenge of setting a meaningful threshold in
practice, we introduce a relaxation of FL that incorporates slack variables of
minimal norm. Our empirical analysis, spanning image classification, age
regression, and preference optimization in large language models, demonstrates
that models trained via FL can learn from data while displaying improved tail
behavior compared to ERM, with only a marginal impact on average performance.
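A toy primal-dual sketch of the idea, for a one-dimensional linear model with per-sample squared-loss constraints: dual variables grow for samples violating their bound and re-weight the primal gradient. The learning rates and threshold are illustrative assumptions, not the paper's settings:

```python
def feasible_learning_1d(xs, ys, epsilon, lr_primal=0.05, lr_dual=0.5,
                         steps=2000):
    """Toy primal-dual sketch of Feasible Learning for y ~ w * x with
    per-sample constraints (w*x_i - y_i)^2 <= epsilon. Hyperparameters
    are illustrative, not taken from the paper."""
    w = 0.0
    lam = [1.0] * len(xs)  # one dual variable per training sample
    for _ in range(steps):
        # Primal step: descend the lambda-weighted per-sample losses.
        grad = sum(l * 2.0 * (w * x - y) * x
                   for l, x, y in zip(lam, xs, ys))
        w -= lr_primal * grad / max(sum(lam), 1e-9)
        # Dual step: raise the weight of samples violating their bound;
        # satisfied samples' weights decay (projected at zero).
        for i, (x, y) in enumerate(zip(xs, ys)):
            lam[i] = max(0.0, lam[i] + lr_dual * ((w * x - y) ** 2 - epsilon))
    return w, lam

# Perfectly realizable data: every per-sample constraint is met at w = 2.
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w, lam = feasible_learning_1d(xs, ys, epsilon=0.01)
```

The dynamic re-weighting is the point: hard samples transiently dominate the gradient until their loss drops below the bound, rather than being averaged away as in ERM.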
|
2501.14914
|
Light3R-SfM: Towards Feed-forward Structure-from-Motion
|
cs.CV cs.LG
|
We present Light3R-SfM, a feed-forward, end-to-end learnable framework for
efficient large-scale Structure-from-Motion (SfM) from unconstrained image
collections. Unlike existing SfM solutions that rely on costly matching and
global optimization to achieve accurate 3D reconstructions, Light3R-SfM
addresses this limitation through a novel latent global alignment module. This
module replaces traditional global optimization with a learnable attention
mechanism, effectively capturing multi-view constraints across images for
robust and precise camera pose estimation. Light3R-SfM constructs a sparse
scene graph via retrieval-score-guided shortest path tree to dramatically
reduce memory usage and computational overhead compared to the naive approach.
Extensive experiments demonstrate that Light3R-SfM achieves competitive
accuracy while significantly reducing runtime, making it ideal for 3D
reconstruction tasks in real-world applications with a runtime constraint. This
work pioneers a data-driven, feed-forward SfM approach, paving the way toward
scalable, accurate, and efficient 3D reconstruction in the wild.
|
2501.14917
|
Self-reflecting Large Language Models: A Hegelian Dialectical Approach
|
cs.CL cs.HC cs.LG
|
Investigating NLP through a philosophical lens has recently attracted
researchers' attention, as it connects computational methods with classical schools
of philosophy. This paper introduces a philosophical approach inspired by the
Hegelian Dialectic for LLMs' self-reflection, utilizing a self-dialectical
approach to emulate internal critiques and then synthesize new ideas by
resolving the contradicting points. Moreover, this paper investigates the
effect of the LLM's generation temperature by comparing a dynamic annealing
approach, which promotes creativity in the early stages and gradually
refines output by focusing on the nuances, with a fixed-temperature strategy.
Our proposed approach is examined to determine its ability to
generate novel ideas from an initial proposition. Additionally, a Multi Agent
Majority Voting (MAMV) strategy is leveraged to assess the validity and novelty
of the generated ideas, which proves beneficial in the absence of domain
experts. Our experiments show promise in generating new ideas and provide a
stepping stone for future research.
|
2501.14918
|
3D/2D Registration of Angiograms using Silhouette-based Differentiable
Rendering
|
cs.CV
|
We present a method for 3D/2D registration of Digital Subtraction Angiography
(DSA) images to provide valuable insight into brain hemodynamics and
angioarchitecture. Our approach formulates the registration as a pose
estimation problem, leveraging both anteroposterior and lateral DSA views and
employing differentiable rendering. Preliminary experiments on real and
synthetic datasets demonstrate the effectiveness of our method, with both
qualitative and quantitative evaluations highlighting its potential for
clinical applications. The code is available at
https://github.com/taewoonglee17/TwoViewsDSAReg.
|
2501.14921
|
Achieving uniform side information gain with multilevel lattice codes
over the ring of integers
|
cs.IT math.IT
|
The index coding problem aims to optimise broadcast communication by taking
advantage of receiver-side information to improve transmission efficiency. In
this letter, we explore the application of Construction $\pi_A$ lattices to
index coding. We introduce a coding scheme, named \textit{CRT lattice index
coding}, using Construction $\pi_A$ over $\mathbb{Z}$ to address the index
coding problem. We derive an upper bound on the side information gain of a CRT
lattice index code, along with conditions for the uniformity of this gain. The
efficiency of this approach is shown through theoretical analysis and code
design examples.
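Construction $\pi_A$ rests on the Chinese Remainder Theorem isomorphism $\mathbb{Z}_{m_1 m_2 \cdots} \cong \mathbb{Z}_{m_1} \times \mathbb{Z}_{m_2} \times \cdots$ for pairwise coprime moduli. A minimal sketch of the recombination map (standard CRT arithmetic, not the letter's full coding scheme):

```python
def crt_combine(residues, moduli):
    """Recombine residues into the unique integer mod prod(moduli) via
    the Chinese Remainder Theorem. Pairwise coprime moduli assumed;
    this illustrates only the isomorphism underlying Construction pi_A."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+).
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# The map x -> (x mod 3, x mod 5) is a bijection Z_15 -> Z_3 x Z_5;
# crt_combine inverts it.
x = crt_combine([2, 4], [3, 5])
```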
|
2501.14922
|
Search results diversification in competitive search
|
cs.IR cs.GT
|
In Web retrieval, there are many cases of competition between authors of Web
documents: their incentive is to have their documents highly ranked for queries
of interest. As such, the Web is a prominent example of a competitive search
setting. Past work on competitive search focused on ranking functions based
solely on relevance estimation. We study ranking functions that integrate a
results-diversification aspect. We show that the competitive search setting
with diversity-based ranking has an equilibrium. Furthermore, we theoretically
and empirically show that the phenomenon of authors mimicking content in
documents highly ranked in the past, which was demonstrated in previous work,
is mitigated when search results diversification is applied.
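The paper treats diversity-based ranking abstractly; as one concrete illustration of a ranking function with a results-diversification aspect, a greedy Maximal Marginal Relevance (MMR) ranker trades relevance off against redundancy with already-selected documents. MMR is a standard technique, not necessarily the function studied in the paper:

```python
def mmr_rank(relevance, similarity, lam=0.5, k=3):
    """Greedy Maximal Marginal Relevance: at each step pick the document
    maximizing lam * relevance - (1 - lam) * max-similarity to the
    documents already selected. Illustrative of diversity-aware ranking."""
    candidates = list(range(len(relevance)))
    selected = []
    while candidates and len(selected) < k:
        def score(d):
            redundancy = max((similarity[d][s] for s in selected),
                             default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Docs 0 and 1 are near-duplicates (e.g. one mimics the other); the
# diversified ranking keeps only one of them in the top two.
rel = [0.9, 0.85, 0.5]
sim = [[1.0, 0.95, 0.1],
       [0.95, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
ranking = mmr_rank(rel, sim, lam=0.5, k=2)
```

A relevance-only ranker would return documents 0 and 1 here, rewarding the mimicking author; the diversity term removes that incentive, which is the mitigation effect the abstract describes.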
|
2501.14926
|
Interpretability in Parameter Space: Minimizing Mechanistic Description
Length with Attribution-based Parameter Decomposition
|
cs.LG stat.ML
|
Mechanistic interpretability aims to understand the internal mechanisms
learned by neural networks. Despite recent progress toward this goal, it
remains unclear how best to decompose neural network parameters into
mechanistic components. We introduce Attribution-based Parameter Decomposition
(APD), a method that directly decomposes a neural network's parameters into
components that (i) are faithful to the parameters of the original network,
(ii) require a minimal number of components to process any input, and (iii) are
maximally simple. Our approach thus optimizes for a minimal length description
of the network's mechanisms. We demonstrate APD's effectiveness by successfully
identifying ground truth mechanisms in multiple toy experimental settings:
Recovering features from superposition; separating compressed computations; and
identifying cross-layer distributed representations. While challenges remain to
scaling APD to non-toy models, our results suggest solutions to several open
problems in mechanistic interpretability, including identifying minimal
circuits in superposition, offering a conceptual foundation for 'features', and
providing an architecture-agnostic framework for neural network decomposition.
|
2501.14928
|
Decision Making in Changing Environments: Robustness, Query-Based
Learning, and Differential Privacy
|
cs.LG cs.AI cs.IT math.IT math.ST stat.ML stat.TH
|
We study the problem of interactive decision making in which the underlying
environment changes over time subject to given constraints. We propose a
framework, which we call \textit{hybrid Decision Making with Structured
Observations} (hybrid DMSO), that provides an interpolation between the
stochastic and adversarial settings of decision making. Within this framework,
we can analyze local differentially private (LDP) decision making, query-based
learning (in particular, SQ learning), and robust and smooth decision making
under the same umbrella, deriving upper and lower bounds based on variants of
the Decision-Estimation Coefficient (DEC). We further establish strong
connections between the DEC's behavior, the SQ dimension, local minimax
complexity, learnability, and joint differential privacy. To showcase the
framework's power, we provide new results for contextual bandits under the LDP
constraint.
|
2501.14929
|
Motion-enhancement to Echocardiography Segmentation via Inserting a
Temporal Attention Module: An Efficient, Adaptable, and Scalable Approach
|
cs.CV cs.AI
|
Cardiac anatomy segmentation is essential for clinical assessment of cardiac
function and disease diagnosis to inform treatment and intervention. In
performing segmentation, deep learning (DL) algorithms improved accuracy
significantly compared to traditional image processing approaches. More
recently, studies showed that enhancing DL segmentation with motion information
can further improve it. A range of methods for injecting motion information has
been proposed, but many of them increase the dimensionality of input images
(which is computationally expensive) or have not used an optimal method to
insert motion information, such as non-DL registration, non-attention-based
networks, or single-headed attention. Here, we present a computationally
efficient alternative: a novel, scalable temporal attention module (TAM) that
extracts temporal feature interactions multiple times through a multi-headed,
KQV-projection cross-attention architecture. The module
can be seamlessly integrated into a wide range of existing CNN- or
Transformer-based networks, providing novel flexibility for inclusion in future
implementations. Extensive evaluations on different cardiac datasets, 2D
echocardiography (CAMUS), and 3D echocardiography (MITEA) demonstrate the
model's effectiveness when integrated into well-established backbone networks
like UNet, FCN8s, UNetR, SwinUNetR, and the recent I2UNet. We further find that
the optimized TAM-enhanced FCN8s network performs well compared to contemporary
alternatives. Our results confirm TAM's robustness, scalability, and
generalizability across diverse datasets and backbones.
|
2501.14932
|
Explaining Categorical Feature Interactions Using Graph Covariance and
LLMs
|
stat.ML cs.AI cs.LG
|
Modern datasets often consist of numerous samples with abundant features and
associated timestamps. Analyzing such datasets to uncover underlying events
typically requires complex statistical methods and substantial domain
expertise. A notable example, and the primary data focus of this paper, is the
global synthetic dataset from the Counter Trafficking Data Collaborative (CTDC)
-- a global hub of human trafficking data containing over 200,000 anonymized
records spanning from 2002 to 2022, with numerous categorical features for each
record. In this paper, we propose a fast and scalable method for analyzing and
extracting significant categorical feature interactions, and querying large
language models (LLMs) to generate data-driven insights that explain these
interactions. Our approach begins with a binarization step for categorical
features using one-hot encoding, followed by the computation of graph
covariance at each time. This graph covariance quantifies temporal changes in
dependence structures within categorical data and is established as a
consistent dependence measure under the Bernoulli distribution. We use this
measure to identify significant feature pairs, such as those with the most
frequent trends over time or those exhibiting sudden spikes in dependence at
specific moments. These extracted feature pairs, along with their timestamps,
are subsequently passed to an LLM tasked with generating potential explanations
of the underlying events driving these dependence changes. The effectiveness of
our method is demonstrated through extensive simulations, and its application
to the CTDC dataset reveals meaningful feature pairs and potential data stories
underlying the observed feature interactions.
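The binarization-plus-covariance step described in this abstract can be sketched numerically. The function names and toy records below are illustrative assumptions, not the paper's implementation (which operates on CTDC records with a time index):

```python
import numpy as np

def one_hot(labels):
    """One-hot encode a 1-D list of categorical labels (columns in sorted category order)."""
    cats = sorted(set(labels))
    return np.array([[1.0 if x == c else 0.0 for c in cats] for x in labels]), cats

def graph_covariance(a, b):
    """Sample covariance between two binary indicator columns."""
    return float(np.mean(a * b) - np.mean(a) * np.mean(b))

# Toy records with two categorical features (hypothetical values)
f1 = ["minor", "adult", "minor", "minor", "adult", "minor"]
f2 = ["forced_labor", "sex", "forced_labor", "forced_labor", "sex", "forced_labor"]

A, cats1 = one_hot(f1)  # cats1 == ["adult", "minor"]
B, cats2 = one_hot(f2)  # cats2 == ["forced_labor", "sex"]

# Covariance for every (category_i, category_j) indicator pair;
# large entries flag strongly dependent feature-value pairs.
cov = np.array([[graph_covariance(A[:, i], B[:, j])
                 for j in range(B.shape[1])] for i in range(A.shape[1])])
```

Repeating this computation per time slice yields the temporal dependence trajectories the paper feeds to an LLM.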
|
2501.14933
|
Conformal Inference of Individual Treatment Effects Using Conditional
Density Estimates
|
stat.ML cs.LG
|
In an era where diverse and complex data are increasingly accessible, the
precise prediction of individual treatment effects (ITE) becomes crucial across
fields such as healthcare, economics, and public policy. Current
state-of-the-art approaches, while providing valid prediction intervals through
Conformal Quantile Regression (CQR) and related techniques, often yield overly
conservative prediction intervals. In this work, we introduce a conformal
inference approach to ITE using the conditional density of the outcome given
the covariates. We leverage the reference distribution technique to efficiently
estimate the conditional densities as the score functions under a two-stage
conformal ITE framework. We show that our prediction intervals are not only
marginally valid but are narrower than existing methods. Experimental results
further validate the usefulness of our method.
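For readers unfamiliar with the conformal machinery this abstract builds on, here is a minimal split-conformal sketch using generic absolute-residual scores rather than the paper's conditional-density scores; all names and the toy Gaussian setup are our assumptions:

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """The ceil((n+1)(1-alpha))-th smallest calibration score (split conformal)."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return float(np.sort(cal_scores)[min(k, n) - 1])

rng = np.random.default_rng(0)
y_cal = rng.normal(0.0, 1.0, 500)   # calibration outcomes
mu = y_cal.mean()                   # stand-in point predictor
scores = np.abs(y_cal - mu)         # nonconformity: absolute residuals

q = conformal_quantile(scores, alpha=0.1)
# Marginal 90% prediction interval for a new outcome: [mu - q, mu + q]
y_test = rng.normal(0.0, 1.0, 500)
coverage = float(np.mean(np.abs(y_test - mu) <= q))
```

Replacing the residual score with an estimated conditional density, as the paper proposes, is what narrows these intervals while keeping the same validity guarantee.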
|
2501.14934
|
Temporal Binding Foundation Model for Material Property Recognition via
Tactile Sequence Perception
|
cs.RO cs.AI
|
Robots engaged in complex manipulation tasks require robust material property
recognition to ensure adaptability and precision. Traditionally, visual data
has been the primary source for object perception; however, it often proves
insufficient in scenarios where visibility is obstructed or detailed
observation is needed. This gap highlights the necessity of tactile sensing as
a complementary or primary input for material recognition. Tactile data becomes
particularly essential in contact-rich, small-scale manipulations where subtle
deformations and surface interactions cannot be accurately captured by vision
alone. This letter presents a novel approach leveraging a temporal binding
foundation model for tactile sequence understanding to enhance material
property recognition. By processing tactile sensor data with a temporal focus,
the proposed system captures the sequential nature of tactile interactions,
similar to human fingertip perception. Additionally, this letter demonstrates
that, through tailored and specific design, the foundation model can more
effectively capture temporal information embedded in tactile sequences,
advancing material property understanding. Experimental results validate the
model's capability to capture these temporal patterns, confirming its utility
for material property recognition in visually restricted scenarios. This work
underscores the necessity of embedding advanced tactile data processing
frameworks within robotic systems to achieve truly embodied and responsive
manipulation capabilities.
|
2501.14936
|
Context-Aware Neural Gradient Mapping for Fine-Grained Instruction
Processing
|
cs.CL cs.AI
|
The integration of contextual embeddings into the optimization processes of
large language models is an advancement in natural language processing. The
Context-Aware Neural Gradient Mapping framework introduces a dynamic gradient
adjustment mechanism, incorporating contextual embeddings directly into the
optimization process. This approach facilitates real-time parameter
adjustments, enhancing task-specific generalization even in the presence of
sparse or noisy data inputs. The mathematical foundation of this framework
relies on gradient descent modifications, where contextual embeddings are
derived from a supplementary neural network trained to map input features to
optimal adaptation gradients. By employing differential geometry principles,
high-dimensional input dependencies are encoded into low-dimensional gradient
manifolds, enabling efficient adaptation without necessitating the retraining
of the entire model. Empirical evaluations demonstrate that the proposed
framework consistently outperforms baseline models across various metrics,
including accuracy, robustness to noise, and computational efficiency. The
integration of context-specific embeddings allows for a more complex
understanding of language, thereby improving the model's ability to handle
diverse linguistic phenomena. Furthermore, the computational efficiency
achieved through this method demonstrates its scalability for large-scale
language models operating under diverse constraints.
|
2501.14939
|
Principal Graph Encoder Embedding and Principal Community Detection
|
cs.SI stat.ML
|
In this paper, we introduce the concept of principal communities and propose
a principal graph encoder embedding method that concurrently detects these
communities and achieves vertex embedding. Given a graph adjacency matrix with
vertex labels, the method computes a sample community score for each community,
ranking them to measure community importance and estimate a set of principal
communities. The method then produces a vertex embedding by retaining only the
dimensions corresponding to these principal communities. Theoretically, we
define the population version of the encoder embedding and the community score
based on a random Bernoulli graph distribution. We prove that the population
principal graph encoder embedding preserves the conditional density of the
vertex labels and that the population community score successfully
distinguishes the principal communities. We conduct a variety of simulations to
demonstrate the finite-sample accuracy in detecting ground-truth principal
communities, as well as the advantages in embedding visualization and
subsequent vertex classification. The method is further applied to a set of
real-world graphs, showcasing its numerical advantages, including robustness to
label noise and computational scalability.
|
2501.14940
|
CASE-Bench: Context-Aware SafEty Benchmark for Large Language Models
|
cs.CL cs.AI
|
Aligning large language models (LLMs) with human values is essential for
their safe deployment and widespread adoption. Current LLM safety benchmarks
often focus solely on the refusal of individual problematic queries, which
overlooks the importance of the context where the query occurs and may cause
undesired refusal of queries under safe contexts that diminish user experience.
Addressing this gap, we introduce CASE-Bench, a Context-Aware SafEty Benchmark
that integrates context into safety assessments of LLMs. CASE-Bench assigns
distinct, formally described contexts to categorized queries based on
Contextual Integrity theory. Additionally, in contrast to previous studies
which mainly rely on majority voting from just a few annotators, we recruited a
sufficient number of annotators to ensure the detection of
statistically significant differences among the experimental conditions based
on power analysis. Our extensive analysis using CASE-Bench on various
open-source and commercial LLMs reveals a substantial and significant influence
of context on human judgments (p<0.0001 from a z-test), underscoring the
necessity of context in safety evaluations. We also identify notable mismatches
between human judgments and LLM responses, particularly in commercial models
within safe contexts.
|
2501.14941
|
On the Optimality of Gaussian Code-books for Signaling over a Two-Users
Weak Gaussian Interference Channel
|
cs.IT math.IT math.PR
|
This article shows that the capacity region of a two-user weak Gaussian
interference channel is achieved using Gaussian code-books. The approach relies
on traversing the boundary in incremental steps. Starting from a corner point
with Gaussian code-books, and relying on the calculus of variations, it is
shown that the end point of each step is achieved using Gaussian code-books.
|
2501.14942
|
Force-Based Robotic Imitation Learning: A Two-Phase Approach for
Construction Assembly Tasks
|
cs.RO cs.AI
|
The drive for efficiency and safety in construction has boosted the role of
robotics and automation. However, complex tasks like welding and pipe insertion
pose challenges due to their need for precise adaptive force control, which
complicates robotic training. This paper proposes a two-phase system to improve
robot learning, integrating human-derived force feedback. The first phase
captures real-time data from operators using a robot arm linked with a virtual
simulator via ROS-Sharp. In the second phase, this feedback is converted into
robotic motion instructions, using a generative approach to incorporate force
feedback into the learning process. This method's effectiveness is demonstrated
through improved task completion times and success rates. The framework
simulates realistic force-based interactions, enhancing the training data's
quality for precise robotic manipulation in construction tasks.
|
2501.14945
|
MATCHA: Towards Matching Anything
|
cs.CV
|
Establishing correspondences across images is a fundamental challenge in
computer vision, underpinning tasks like Structure-from-Motion, image editing,
and point tracking. Traditional methods are often specialized for specific
correspondence types (geometric, semantic, or temporal), whereas humans
naturally identify alignments across these domains. Inspired by this
flexibility, we propose MATCHA, a unified feature model designed to ``rule them
all'', establishing robust correspondences across diverse matching tasks.
Building on insights that diffusion model features can encode multiple
correspondence types, MATCHA augments this capacity by dynamically fusing
high-level semantic and low-level geometric features through an attention-based
module, creating expressive, versatile, and robust features. Additionally,
MATCHA integrates object-level features from DINOv2 to further boost
generalization, enabling a single feature capable of matching anything.
Extensive experiments validate that MATCHA consistently surpasses
state-of-the-art methods across geometric, semantic, and temporal matching
tasks, setting a new foundation for a unified approach for the fundamental
correspondence problem in computer vision. To the best of our knowledge, MATCHA
is the first approach that is able to effectively tackle diverse matching tasks
with a single unified feature.
|
2501.14948
|
HECLIP: Histology-Enhanced Contrastive Learning for Imputation of
Transcriptomics Profiles
|
cs.CE q-bio.QM
|
Histopathology, particularly hematoxylin and eosin (H\&E) staining, plays a
critical role in diagnosing and characterizing pathological conditions by
highlighting tissue morphology. However, H\&E-stained images inherently lack
molecular information, requiring costly and resource-intensive methods like
spatial transcriptomics to map gene expression with spatial resolution. To
address these challenges, we introduce HECLIP (Histology-Enhanced Contrastive
Learning for Imputation of Profiles), an innovative deep learning framework
that bridges the gap between histological imaging and molecular profiling.
HECLIP is specifically designed to infer gene expression profiles directly from
H\&E-stained images, eliminating the need for expensive spatial transcriptomics
assays. HECLIP leverages an advanced image-centric contrastive loss function to
optimize image representation learning, ensuring that critical morphological
patterns in histology images are effectively captured and translated into
accurate gene expression profiles. This design enhances the predictive power of
the image modality while minimizing reliance on gene expression data. Through
extensive benchmarking on publicly available datasets, HECLIP demonstrates
superior performance compared to existing approaches, delivering robust and
biologically meaningful predictions. Detailed ablation studies further
underscore its effectiveness in extracting molecular insights from histology
images. Additionally, HECLIP's scalable and cost-efficient approach positions
it as a transformative tool for both research and clinical applications,
driving advancements in precision medicine. The source code for HECLIP is
openly available at https://github.com/QSong-github/HECLIP.
|
2501.14951
|
E-Gen: Leveraging E-Graphs to Improve Continuous Representations of
Symbolic Expressions
|
cs.LG cs.CL cs.SC
|
As vector representations have been pivotal in advancing natural language
processing (NLP), some prior research has concentrated on creating embedding
techniques for mathematical expressions by leveraging mathematically equivalent
expressions. While effective, these methods are limited by the training data.
In this work, we propose augmenting prior algorithms with a larger synthetic
dataset, using a novel e-graph-based generation scheme. This new mathematical
dataset generation scheme, E-Gen, improves upon prior dataset-generation
schemes that are limited in size and operator types. We use this dataset to
compare embedding models trained with two methods: (1) training the model to
generate mathematically equivalent expressions, and (2) training the model
using contrastive learning to group mathematically equivalent expressions
explicitly. We evaluate the embeddings generated by these methods against prior
work on both in-distribution and out-of-distribution language processing tasks.
Finally, we compare the performance of our embedding scheme against
state-of-the-art large language models and demonstrate that embedding-based
language processing methods perform better than LLMs on several tasks,
demonstrating the necessity of optimizing embedding methods for the
mathematical data modality.
|
2501.14954
|
MISCON: A Mission-Driven Conversational Consultant for Pre-Venture
Entrepreneurs in Food Deserts
|
cs.AI cs.CL cs.IR
|
This work-in-progress report describes MISCON, a conversational consultant
being developed for a public mission project called NOURISH. With MISCON,
aspiring small business owners in a food-insecure region and their advisors in
community-based organizations would be able to get information, recommendations,
and analysis regarding setting up food businesses. MISCON conversations are
modeled as a state machine that uses a heterogeneous knowledge graph as well as
several analytical tools and services including a variety of LLMs. In this
short report, we present the functional architecture and some design
considerations behind MISCON.
|
2501.14956
|
ExPerT: Effective and Explainable Evaluation of Personalized Long-Form
Text Generation
|
cs.CL cs.AI cs.IR
|
Evaluating personalized text generated by large language models (LLMs) is
challenging, as only the LLM user, i.e., prompt author, can reliably assess the
output, but re-engaging the same individuals across studies is infeasible. This
paper addresses the challenge of evaluating personalized text generation by
introducing ExPerT, an explainable reference-based evaluation framework. ExPerT
leverages an LLM to extract atomic aspects and their evidence from the
generated and reference texts, match the aspects, and evaluate their alignment
based on content and writing style -- two key attributes in personalized text
generation. Additionally, ExPerT generates detailed, fine-grained explanations
for every step of the evaluation process, enhancing transparency and
interpretability. Our experiments demonstrate that ExPerT achieves a 7.2%
relative improvement in alignment with human judgments compared to the
state-of-the-art text generation evaluation methods. Furthermore, human
evaluators rated the usability of ExPerT's explanations at 4.7 out of 5,
highlighting its effectiveness in making evaluation decisions more
interpretable.
|
2501.14959
|
The Curious Case of Arbitrariness in Machine Learning
|
cs.LG cs.AI
|
Algorithmic modelling relies on limited information in data to extrapolate
outcomes for unseen scenarios, often embedding an element of arbitrariness in
its decisions. A perspective on this arbitrariness that has recently gained
interest is multiplicity: the study of arbitrariness across a set of "good
models", i.e., those likely to be deployed in practice. In this work, we
systemize the literature on multiplicity by: (a) formalizing the terminology
around model design choices and their contribution to arbitrariness, (b)
expanding the definition of multiplicity to incorporate underrepresented forms
beyond just predictions and explanations, (c) clarifying the distinction
between multiplicity and other traditional lenses of arbitrariness, i.e.,
uncertainty and variance, and (d) distilling the benefits and potential risks
of multiplicity into overarching trends, situating it within the broader
landscape of responsible AI. We conclude by identifying open research questions
and highlighting emerging trends in this young but rapidly growing area of
research.
|
2501.14960
|
LLM4DistReconfig: A Fine-tuned Large Language Model for Power
Distribution Network Reconfiguration
|
cs.LG cs.AI cs.CL
|
Power distribution networks are evolving due to the integration of DERs and
increased customer participation. To maintain optimal operation, minimize
losses, and meet varying load demands, frequent network reconfiguration is
necessary. Traditionally, the reconfiguration task relies on optimization
software and expert operators, but as systems grow more complex, faster and
more adaptive solutions are required without expert intervention. Data-driven
reconfiguration is gaining traction for its accuracy, speed, and robustness
against incomplete network data. LLMs, with their ability to capture complex
patterns, offer a promising approach for efficient and responsive network
reconfiguration in evolving complex power networks.
In this work, we introduce LLM4DistReconfig, a deep learning-based approach
utilizing a fine-tuned LLM to solve the distribution network reconfiguration
problem. By carefully crafting prompts and designing a custom loss function, we
train the LLM with inputs representing network parameters such as buses,
available lines, open lines, node voltages, and system loss. The model then
predicts optimal reconfigurations by outputting updated network configurations
that minimize system loss while meeting operational constraints. Our approach
significantly reduces inference time compared to classical algorithms, allowing
for near real-time optimal reconfiguration after training. Experimental results
show that our method generates optimal configurations minimizing system loss
for five individual and a combined test dataset. It also produces minimal
invalid edges and no cycles or subgraphs across all datasets, fulfilling
domain-specific needs. Additionally, the generated responses contain less than
5% improper outputs on seen networks and satisfactory results on unseen
networks, demonstrating its effectiveness and reliability for the
reconfiguration task.
|
2501.14964
|
Personalized Layer Selection for Graph Neural Networks
|
cs.LG
|
Graph Neural Networks (GNNs) combine node attributes over a fixed granularity
of the local graph structure around a node to predict its label. However,
different nodes may relate to a node-level property with a different
granularity of its local neighborhood, and using the same level of smoothing
for all nodes can be detrimental to their classification. In this work, we
challenge the common assumption that a single GNN layer can classify all nodes of a
graph by training GNNs with a distinct personalized layer for each node.
Inspired by metric learning, we propose a novel algorithm, MetSelect1, to
select the optimal representation layer to classify each node. In particular,
we identify a prototype representation of each class in a transformed GNN layer
and then, classify using the layer where the distance is smallest to a class
prototype after normalizing with that layer's variance. Results on 10 datasets
and 3 different GNNs show that we significantly improve the node classification
accuracy of GNNs in a plug-and-play manner. We also find that using variable
layers for prediction enables GNNs to be deeper and more robust to poisoning
attacks. We hope this work can inspire future works to learn more adaptive and
personalized graph representations.
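The selection rule described in this abstract (nearest class prototype, normalized by layer variance) can be sketched as below. This is our reading of the rule, not the authors' MetSelect1 implementation; the exact normalization is an assumption:

```python
import numpy as np

def select_layer(layer_reps, labels, x_reps):
    """
    layer_reps: per layer, an (n_train, d) array of training-node representations.
    x_reps: per layer, the (d,) representation of the node to classify.
    Pick the layer (and class) where the distance to the nearest class
    prototype, normalized by that layer's variance, is smallest.
    """
    labels = np.asarray(labels)
    best = (np.inf, None, None)  # (normalized distance, layer, predicted class)
    for l, (H, h) in enumerate(zip(layer_reps, x_reps)):
        scale = np.sqrt(H.var() + 1e-12)
        for c in sorted(set(labels.tolist())):
            proto = H[labels == c].mean(axis=0)   # class prototype at this layer
            d = np.linalg.norm(h - proto) / scale
            if d < best[0]:
                best = (d, l, c)
    return best[1], best[2]
```

In the paper's setting, each list entry would hold the representations produced by one GNN layer, so different nodes can be classified at different depths.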
|
2501.14970
|
AI-driven Wireless Positioning: Fundamentals, Standards,
State-of-the-art, and Challenges
|
eess.SP cs.AI cs.LG
|
Wireless positioning technologies hold significant value for applications in
autonomous driving, extended reality (XR), unmanned aerial vehicles (UAVs), and
more. With the advancement of artificial intelligence (AI), leveraging AI to
enhance positioning accuracy and robustness has emerged as a field full of
potential. Driven by the requirements and functionalities defined in the 3rd
Generation Partnership Project (3GPP) standards, AI/machine learning (ML)-based
positioning is becoming a key technology to overcome the limitations of
traditional methods. This paper begins with an introduction to the fundamentals
of AI and wireless positioning, covering AI models, algorithms, positioning
applications, emerging wireless technologies, and the basics of positioning
techniques. Subsequently, focusing on standardization progress, we provide a
comprehensive review of the evolution of 3GPP positioning standards, with an
emphasis on the integration of AI/ML technologies in recent and upcoming
releases. Based on the AI/ML-assisted positioning and direct AI/ML positioning
schemes outlined in the standards, we conduct an in-depth investigation of
related research. We focus on state-of-the-art (SOTA) research in AI-based
line-of-sight (LOS)/non-line-of-sight (NLOS) detection, time of arrival
(TOA)/time difference of arrival (TDOA) estimation, and angle estimation
techniques. For Direct AI/ML Positioning, we explore SOTA advancements in
fingerprint-based positioning, knowledge-assisted AI positioning, and channel
charting-based positioning. Furthermore, we introduce publicly available
datasets for wireless positioning and conclude by summarizing the challenges
and opportunities of AI-driven wireless positioning.
|
2501.14971
|
Automatic Link Selection in Multi-Channel Multiple Access with Link
Failures
|
eess.SY cs.SY
|
This paper focuses on the problem of automatic link selection in
multi-channel multiple access control using bandit feedback. In particular, a
controller assigns multiple users to multiple channels in a time slotted
system, where in each time slot at most one user can be assigned to a given
channel and at most one channel can be assigned to a given user. Given that
user $i$ is assigned to channel $j$, the transmission fails with a fixed
probability $f_{i,j}$. The failure probabilities are not known to the
controller. The assignments are made dynamically using success/failure
feedback. The goal is to maximize the time average utility, where we consider
an arbitrary (possibly nonsmooth) concave and entrywise nondecreasing utility
function. The problem of merely maximizing the total throughput has a solution
of always assigning the same user-channel pairs and can be unfair to certain
users, particularly when the number of channels is less than the number of
users. Instead, our scheme allows various types of fairness, such as
proportional fairness, maximizing the minimum, or combinations of these by
defining the appropriate utility function. We propose two algorithms for this
task. The first algorithm is adaptive and gets within
$\mathcal{O}(\log(T)/T^{1/3})$ of optimality over any interval of $T$
consecutive slots over which the success probabilities do not change. The
second algorithm has faster $\mathcal{O}(\sqrt{\log(T)/T})$ performance over
the first $T$ slots, but does not adapt well if probabilities change.
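The setting can be illustrated with a generic optimistic-index (UCB) sketch. This is not either of the paper's two algorithms, which optimize a concave utility; it is a plain bandit baseline under our own naming, estimating each user-channel success probability from success/failure feedback and greedily forming a one-to-one assignment:

```python
import numpy as np

def ucb_assign(succ, tries, t):
    """
    Greedy one-to-one user-to-channel assignment by optimistic (UCB) index.
    succ, tries: (n_users, n_channels) success and attempt counts so far.
    Unexplored pairs get an infinite index and are tried first.
    """
    n_channels = succ.shape[1]
    with np.errstate(divide="ignore", invalid="ignore"):
        ucb = succ / tries + np.sqrt(2.0 * np.log(max(t, 2)) / tries)
    ucb = np.where(tries > 0, ucb, np.inf)
    pairs, used_u, used_c = [], set(), set()
    for idx in np.argsort(-ucb, axis=None):   # highest index first
        u, c = divmod(int(idx), n_channels)
        if u not in used_u and c not in used_c:
            pairs.append((u, c))
            used_u.add(u)
            used_c.add(c)
    return pairs
```

As the abstract notes, always repeating the throughput-maximizing assignment that such a baseline converges to can be unfair, which is what motivates the utility-based formulation.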
|
2501.14976
|
A review of annotation classification tools in the educational domain
|
cs.CL cs.DL
|
An annotation consists of a portion of information that is associated with a
piece of content in order to explain something about the content or to add more
information. The use of annotations as a tool in the educational field has
positive effects on the learning process. The usual way to use this instrument
is to provide students with contents, usually textual, with which they must
associate annotations. In most cases this task is performed in groups of
students who work collaboratively. This process encourages analysis and
understanding of the contents since they have to understand them in order to
annotate them, and also encourages teamwork. To facilitate its use, computer
applications have been developed in recent decades that implement the
annotation process and offer a set of additional functionalities. One of these
functionalities is the classification of the annotations made. This
functionality can be exploited in various ways in the learning process, such as
guiding the students in the annotation process, providing information to the
student about how the annotation process is done and to the teacher about how
the students write and how they understand the content, as well as implementing
other innovative educational processes. In this sense, the classification of
annotations plays a critical role in the application of the annotation in the
educational field. There are many studies of annotations, but most of them
consider the classification aspect marginally only. This paper presents an
initial study of the classification mechanisms used in the annotation tools,
identifying four types of cases: absence of classification mechanisms,
classification based on pre-established vocabularies, classification based on
extensible vocabularies, and classification based on structured vocabularies.
|
2501.14980
|
A Deep State Space Model for Rainfall-Runoff Simulations
|
cs.LG cs.AI physics.ao-ph
|
The classical way of studying the rainfall-runoff processes in the water
cycle relies on conceptual or physically-based hydrologic models. Deep learning
(DL) has recently emerged as an alternative and blossomed in the hydrology
community for rainfall-runoff simulations. However, the decades-old Long
Short-Term Memory (LSTM) network remains the benchmark for this task,
outperforming newer architectures like Transformers. In this work, we propose a
State Space Model (SSM), specifically the Frequency Tuned Diagonal State Space
Sequence (S4D-FT) model, for rainfall-runoff simulations. The proposed S4D-FT
is benchmarked against the established LSTM and a physically-based Sacramento
Soil Moisture Accounting model across 531 watersheds in the contiguous United
States (CONUS). Results show that S4D-FT is able to outperform the LSTM model
across diverse regions. Our pioneering introduction of the S4D-FT for
rainfall-runoff simulations challenges the dominance of LSTM in the hydrology
community and expands the arsenal of DL tools available for hydrological
modeling.
|
2501.14981
|
The Muddy Waters of Modeling Empathy in Language: The Practical Impacts
of Theoretical Constructs
|
cs.CL
|
Conceptual operationalizations of empathy in NLP are varied, with some having
specific behaviors and properties, while others are more abstract. How these
variations relate to one another and capture properties of empathy observable
in text remains unclear. To provide insight into this, we analyze the transfer
performance of empathy models adapted to empathy tasks with different
theoretical groundings. We study (1) the dimensionality of empathy definitions,
(2) the correspondence between the defined dimensions and measured/observed
properties, and (3) the conduciveness of the data to represent them, finding
they have a significant impact on performance compared to other transfer
setting features. Characterizing the theoretical grounding of empathy tasks as
direct, abstract, or adjacent further indicates that tasks that directly
predict specified empathy components have higher transferability. Our work
provides empirical evidence for the need for precise and multidimensional
empathy operationalizations.
|
2501.14984
|
The Cloud and Flock Polynomials of q-Matroids
|
math.CO cs.IT math.IT
|
We show that the Whitney function of a q-matroid can be determined from the
cloud and flock polynomials associated to the cyclic flats. These polynomials
capture information about the corank (resp., nullity) of certain spaces whose
cyclic core (resp., closure) is the given cyclic flat. Going one step further,
we prove that the Whitney function, and in fact all cloud and flock
polynomials, are determined by the configuration of the q-matroid, that is the
abstract lattice of cyclic flats together with the corank-nullity data.
Examples illustrate that the converses of the above statements are not true.
This has the consequence that the Whitney function of a direct sum is not
determined by the Whitney functions of the summands.
|
2501.14985
|
DepressionX: Knowledge Infused Residual Attention for Explainable
Depression Severity Assessment
|
cs.LG
|
In today's interconnected society, social media platforms have become an
important part of our lives, where individuals virtually express their
thoughts, emotions, and moods. These expressions offer valuable insights into
their mental health. This paper explores the use of platforms like Facebook,
$\mathbb{X}$ (formerly Twitter), and Reddit for mental health assessments. We
propose a domain knowledge-infused residual attention model called DepressionX
for explainable depression severity detection. Existing deep learning models on
this problem have shown considerable performance, but they often lack
transparency in their decision-making processes. In healthcare, where decisions
are critical, the need for explainability is crucial. In our model, we address
the critical gap by focusing on the explainability of depression severity
detection while aiming for high predictive accuracy. In addition to being
explainable, our model consistently outperforms the state-of-the-art models by
over 7% in terms of $\text{F}_1$ score on balanced as well as imbalanced
datasets. Our ultimate goal is to establish a foundation for trustworthy and
comprehensible analysis of mental disorders via social media.
|
2501.14991
|
Advances in Set Function Learning: A Survey of Techniques and
Applications
|
cs.LG
|
Set function learning has emerged as a crucial area in machine learning,
addressing the challenge of modeling functions that take sets as inputs. Unlike
traditional machine learning that involves fixed-size input vectors where the
order of features matters, set function learning demands methods that are
invariant to permutations of the input set, presenting a unique and complex
problem. This survey provides a comprehensive overview of the current
development in set function learning, covering foundational theories, key
methodologies, and diverse applications. We categorize and discuss existing
approaches, focusing on deep learning approaches, such as DeepSets and Set
Transformer based methods, as well as other notable alternative methods beyond
deep learning, offering a complete view of current models. We also introduce
various applications and relevant datasets, such as point cloud processing and
multi-label classification, highlighting the significant progress achieved by
set function learning methods in these domains. Finally, we conclude by
summarizing the current state of set function learning approaches and
identifying promising future research directions, aiming to guide and inspire
further advancements in this promising field.
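The permutation invariance that defines set function learning can be illustrated with a minimal DeepSets-style sketch (all weights, dimensions, and the toy set below are hypothetical, not taken from any surveyed model): each element is encoded independently, the encodings are sum-pooled, and a decoder maps the pooled vector to the output, so reordering the set cannot change the result.

```python
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 8))   # per-element encoder weights (hypothetical)
W_rho = rng.normal(size=(8, 1))   # post-pooling decoder weights (hypothetical)

def deepsets(x):
    """Permutation-invariant set function: rho(sum_i phi(x_i))."""
    h = np.tanh(x @ W_phi)        # phi applied to each set element independently
    pooled = h.sum(axis=0)        # sum pooling discards element order
    return np.tanh(pooled @ W_rho).item()

X = rng.normal(size=(5, 3))       # a toy set of 5 elements, each 3-dimensional
out = deepsets(X)
out_perm = deepsets(X[::-1])      # same set, elements in reversed order
```

Because the only interaction between elements is the order-independent sum, `out` and `out_perm` agree up to floating-point summation order; Set Transformer methods replace the fixed sum with learned attention pooling while keeping the same invariance.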
|
2501.14992
|
Extensive Exploration in Complex Traffic Scenarios using Hierarchical
Reinforcement Learning
|
cs.LG cs.RO
|
Developing an automated driving system capable of navigating complex traffic
environments remains a formidable challenge. Unlike rule-based or supervised
learning-based methods, Deep Reinforcement Learning (DRL) based controllers
eliminate the need for domain-specific knowledge and datasets, thus providing
adaptability to various scenarios. Nonetheless, a common limitation of existing
studies on DRL-based controllers is their focus on driving scenarios with
simple traffic patterns, which hinders their capability to effectively handle
complex driving environments with delayed, long-term rewards, thus compromising
the generalizability of their findings. In response to these limitations, our
research introduces a pioneering hierarchical framework that efficiently
decomposes intricate decision-making problems into manageable and interpretable
subtasks. We adopt a two-step training process that trains the high-level
controller and low-level controller separately. The high-level controller
exhibits an enhanced exploration potential with long-term delayed rewards, and
the low-level controller provides longitudinal and lateral control ability
using short-term instantaneous rewards. Through simulation experiments, we
demonstrate the superiority of our hierarchical controller in managing complex
highway driving situations.
|
2501.14994
|
Robust Cross-Etiology and Speaker-Independent Dysarthric Speech
Recognition
|
cs.SD cs.AI cs.LG eess.AS
|
In this paper, we present a speaker-independent dysarthric speech recognition
system, with a focus on evaluating the recently released Speech Accessibility
Project (SAP-1005) dataset, which includes speech data from individuals with
Parkinson's disease (PD). Despite the growing body of research in dysarthric
speech recognition, many existing systems are speaker-dependent and adaptive,
limiting their generalizability across different speakers and etiologies. Our
primary objective is to develop a robust speaker-independent model capable of
accurately recognizing dysarthric speech, irrespective of the speaker.
Additionally, as a secondary objective, we aim to test the cross-etiology
performance of our model by evaluating it on the TORGO dataset, which contains
speech samples from individuals with cerebral palsy (CP) and amyotrophic
lateral sclerosis (ALS). By leveraging the Whisper model, our
speaker-independent system achieved a CER of 6.99% and a WER of 10.71% on the
SAP-1005 dataset. Further, in cross-etiology settings, we achieved a CER of
25.08% and a WER of 39.56% on the TORGO dataset. These results highlight the
potential of our approach to generalize across unseen speakers and different
etiologies of dysarthria.
|
2501.14995
|
GreenAuto: An Automated Platform for Sustainable AI Model Design on Edge
Devices
|
cs.LG
|
We present GreenAuto, an end-to-end automated platform designed for
sustainable AI model exploration, generation, deployment, and evaluation.
GreenAuto employs a Pareto front-based search method within an expanded neural
architecture search (NAS) space, guided by gradient descent to optimize model
exploration. Pre-trained kernel-level energy predictors estimate energy
consumption across all models, providing a global view that directs the search
toward more sustainable solutions. By automating performance measurements and
iteratively refining the search process, GreenAuto demonstrates the efficient
identification of sustainable AI models without the need for human
intervention.
|
2501.14997
|
Causal Discovery via Bayesian Optimization
|
cs.LG stat.ML
|
Existing score-based methods for directed acyclic graph (DAG) learning from
observational data struggle to recover the causal graph accurately and
sample-efficiently. To overcome this, we propose DrBO (DAG recovery via
Bayesian Optimization), a novel DAG learning framework that leverages
Bayesian optimization (BO) to find high-scoring DAGs. We show that, by
sophisticatedly choosing the promising DAGs to explore, we can find
higher-scoring ones much more efficiently. To address the scalability issues of
conventional BO in DAG learning, we replace Gaussian Processes commonly
employed in BO with dropout neural networks, trained in a continual manner,
which allows for (i) flexibly modeling the DAG scores without overfitting, (ii)
incorporation of uncertainty into the estimated scores, and (iii) scaling with
the number of evaluations. As a result, DrBO is computationally efficient and
can find the accurate DAG in fewer trials and less time than existing
state-of-the-art methods. This is demonstrated through an extensive set of
empirical evaluations on many challenging settings with both synthetic and real
data. Our implementation is available at https://github.com/baosws/DrBO.
|
2501.14998
|
Federated Retrieval Augmented Generation for Multi-Product Question
Answering
|
cs.CL
|
Recent advancements in Large Language Models and Retrieval-Augmented
Generation have boosted interest in domain-specific question-answering for
enterprise products. However, AI Assistants often face challenges in
multi-product QA settings, requiring accurate responses across diverse domains.
Existing multi-domain RAG-QA approaches either query all domains
indiscriminately, increasing computational costs and LLM hallucinations, or
rely on rigid resource selection, which can limit search results. We introduce
MKP-QA, a novel multi-product knowledge-augmented QA framework with
probabilistic federated search across domains and relevant knowledge. This
method enhances multi-domain search quality by aggregating query-domain and
query-passage probabilistic relevance. To address the lack of suitable
benchmarks for multi-product QAs, we also present new datasets focused on three
Adobe products: Adobe Experience Platform, Target, and Customer Journey
Analytics. Our experiments show that MKP-QA significantly boosts multi-product
RAG-QA performance in terms of both retrieval accuracy and response quality.
|
2501.14999
|
VideoPure: Diffusion-based Adversarial Purification for Video
Recognition
|
cs.CV
|
Recent work indicates that video recognition models are vulnerable to
adversarial examples, posing a serious security risk to downstream
applications. However, current research has primarily focused on adversarial
attacks, with limited work exploring defense mechanisms. Furthermore, due to
the spatial-temporal complexity of videos, existing video defense methods face
issues of high cost, overfitting, and limited defense performance. Recently,
diffusion-based adversarial purification methods have achieved robust defense
performance in the image domain. However, due to the additional temporal
dimension in videos, directly applying these diffusion-based adversarial
purification methods to the video domain suffers performance and efficiency
degradation. To achieve an efficient and effective video adversarial defense
method, we propose the first diffusion-based video purification framework to
improve video recognition models' adversarial robustness: VideoPure. Given an
adversarial example, we first employ temporal DDIM inversion to transform the
input distribution into a temporally consistent and trajectory-defined
distribution, covering adversarial noise while preserving more video structure.
Then, during DDIM denoising, we leverage intermediate results at each denoising
step and conduct guided spatial-temporal optimization, removing adversarial
noise while maintaining temporal consistency. Finally, we input the list of
optimized intermediate results into the video recognition model for multi-step
voting to obtain the predicted class. We investigate the defense performance of
our method against black-box, gray-box, and adaptive attacks on benchmark
datasets and models. Compared with other adversarial purification methods, our
method overall demonstrates better defense performance against different
attacks. Our code is available at https://github.com/deep-kaixun/VideoPure.
|
2501.15000
|
MDEval: Evaluating and Enhancing Markdown Awareness in Large Language
Models
|
cs.CL cs.IR
|
Large language models (LLMs) are expected to offer structured Markdown
responses for the sake of readability in web chatbots (e.g., ChatGPT). Although
there are a myriad of metrics to evaluate LLMs, they fail to evaluate the
readability from the view of output content structure. To this end, we focus on
an overlooked yet important metric -- Markdown Awareness, which directly
impacts the readability and structure of the content generated by these
language models. In this paper, we introduce MDEval, a comprehensive benchmark
to assess Markdown Awareness for LLMs, by constructing a dataset with 20K
instances covering 10 subjects in English and Chinese. Unlike traditional
model-based evaluations, MDEval provides excellent interpretability by
combining model-based generation tasks and statistical methods. Our results
demonstrate that MDEval achieves a Spearman correlation of 0.791 and an
accuracy of 84.1% with human judgments, outperforming existing methods by a large margin.
Extensive experimental results also show that through fine-tuning over our
proposed dataset, less performant open-source models are able to achieve
comparable performance to GPT-4o in terms of Markdown Awareness. To ensure
reproducibility and transparency, MDEval is open sourced at
https://github.com/SWUFE-DB-Group/MDEval-Benchmark.
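The Spearman correlation reported above measures rank agreement between benchmark scores and human ratings. A minimal sketch of the computation (toy scores, not MDEval data; ties are ignored for simplicity, whereas a full implementation would average tied ranks):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors.
    Assumes no ties (argsort-of-argsort ranking does not average tied ranks)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

model_scores = [0.2, 0.9, 0.4, 0.7, 0.1]   # hypothetical benchmark scores
human_scores = [1.0, 5.0, 3.0, 4.0, 2.0]   # hypothetical human ratings
rho = spearman(model_scores, human_scores)  # 0.9 for these toy values
```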
|
2501.15001
|
What if Eye...? Computationally Recreating Vision Evolution
|
cs.AI cs.CV cs.NE q-bio.NC
|
Vision systems in nature show remarkable diversity, from simple
light-sensitive patches to complex camera eyes with lenses. While natural
selection has produced these eyes through countless mutations over millions of
years, they represent just one set of realized evolutionary paths. Testing
hypotheses about how environmental pressures shaped eye evolution remains
challenging since we cannot experimentally isolate individual factors.
Computational evolution offers a way to systematically explore alternative
trajectories. Here we show how environmental demands drive three fundamental
aspects of visual evolution through an artificial evolution framework that
co-evolves both physical eye structure and neural processing in embodied
agents. First, we present computational evidence that task-specific selection
drives a bifurcation in eye evolution: orientation tasks such as maze
navigation lead to distributed compound-type eyes, while an object
discrimination task leads to the emergence of high-acuity camera-type eyes.
Second, we reveal how optical innovations like lenses naturally emerge to
resolve fundamental tradeoffs between light collection and spatial precision.
Third, we uncover systematic scaling laws between visual acuity and neural
processing, showing how task complexity drives coordinated evolution of sensory
and computational capabilities. Our work introduces a novel paradigm that
illuminates evolutionary principles shaping vision by creating targeted
single-player games where embodied agents must simultaneously evolve visual
systems and learn complex behaviors. Through our unified genetic encoding
framework, these embodied agents serve as next-generation hypothesis testing
machines while providing a foundation for designing manufacturable bio-inspired
vision systems. Website: http://eyes.mit.edu/
|
2501.15005
|
Towards Distributed Backdoor Attacks with Network Detection in
Decentralized Federated Learning
|
cs.LG
|
Distributed backdoor attacks (DBA) have shown a higher attack success rate
than centralized attacks in centralized federated learning (FL). However, DBA
has not been investigated in decentralized FL. In this paper, we
experimentally demonstrate that, when DBA is directly applied to decentralized
FL, the attack success rate depends on the distribution of attackers in the
network architecture. Considering that the attackers cannot choose their
locations, this paper aims to achieve a high attack success rate regardless of
the attackers' location distribution. Specifically, we first design a method to
detect the network by predicting the distance between any two attackers on the
network. Then, based on the distance, we organize the attackers in different
clusters. Lastly, we propose an algorithm to \textit{dynamically} embed local
patterns decomposed from a global pattern into the different attackers in each
cluster. We conduct a thorough empirical investigation and find that, on
benchmark datasets, our method outperforms both centralized attacks and naive
DBA in different decentralized frameworks.
|
2501.15007
|
Controllable Protein Sequence Generation with LLM Preference
Optimization
|
cs.AI cs.CE q-bio.QM
|
Designing proteins with specific attributes offers an important solution to
address biomedical challenges. Pre-trained protein large language models (LLMs)
have shown promising results on protein sequence generation. However, to
control sequence generation for specific attributes, existing work still
exhibits poor functionality and structural stability. In this paper, we propose
a novel controllable protein design method called CtrlProt. We finetune a
protein LLM with a new multi-listwise preference optimization strategy to
improve generation quality and support multi-attribute controllable generation.
Experiments demonstrate that CtrlProt can meet functionality and structural
stability requirements effectively, achieving state-of-the-art performance in
both single-attribute and multi-attribute protein sequence generation.
|
2501.15008
|
HuGDiffusion: Generalizable Single-Image Human Rendering via 3D Gaussian
Diffusion
|
cs.CV
|
We present HuGDiffusion, a generalizable 3D Gaussian splatting (3DGS)
learning pipeline to achieve novel view synthesis (NVS) of human characters
from single-view input images. Existing approaches typically require monocular
videos or calibrated multi-view images as inputs, whose applicability could be
weakened in real-world scenarios with arbitrary and/or unknown camera poses. In
this paper, we aim to generate the set of 3DGS attributes via a diffusion-based
framework conditioned on human priors extracted from a single image.
Specifically, we begin with carefully integrated human-centric feature
extraction procedures to deduce informative conditioning signals. Based on our
empirical observation that jointly learning all 3DGS attributes is difficult
to optimize, we design a multi-stage generation strategy to obtain
different types of 3DGS attributes. To facilitate the training process, we
investigate constructing proxy ground-truth 3D Gaussian attributes as
high-quality attribute-level supervision signals. Through extensive
experiments, our HuGDiffusion shows significant performance improvements over
the state-of-the-art methods. Our code will be made publicly available.
|
2501.15013
|
An Information-Theoretic Efficient Capacity Region for Multi-User
Interference Channel
|
cs.IT math.IT
|
We investigate the capacity region of multi-user interference channels (IC),
where each user encodes multiple sub-user components. By unifying chain-rule
decomposition with the Entropy Power Inequality (EPI), we reason that
single-user Gaussian codebooks suffice to achieve optimal performance, thus
obviating any need for intricate auxiliary variables or joint typicality
arguments. Our partial-MAC formulation enumerates sub-user decoding orders
while only imposing constraints for sub-users actually decoded. This
significantly reduces complexity relative to enumerating all subsets or
brute-forcing over all successive interference cancellation (SIC) decoding order
combinations at all receivers. This leads to a finite but comprehensive
construction of all achievable rate tuples under sum-power constraints, while
guaranteeing that each receiver fully recovers its intended sub-user signals.
Consequently, known single-user Gaussian capacity results generalize naturally
to multi-user scenarios, revealing a cohesive framework for analyzing
multi-user IC. Our results thus offer a streamlined, tractable pathway for
designing next-generation cell-free wireless networks that rely on IC
mechanisms, efficiently exploiting interference structure while minimizing
overhead. Overall, this provides a unifying perspective.
|
2501.15014
|
On Accelerating Edge AI: Optimizing Resource-Constrained Environments
|
cs.LG cs.AI cs.NE
|
Resource-constrained edge deployments demand AI solutions that balance high
performance with stringent compute, memory, and energy limitations. In this
survey, we present a comprehensive overview of the primary strategies for
accelerating deep learning models under such constraints. First, we examine
model compression techniques-pruning, quantization, tensor decomposition, and
knowledge distillation-that streamline large models into smaller, faster, and
more efficient variants. Next, we explore Neural Architecture Search (NAS), a
class of automated methods that discover architectures inherently optimized for
particular tasks and hardware budgets. We then discuss compiler and deployment
frameworks, such as TVM, TensorRT, and OpenVINO, which provide
hardware-tailored optimizations at inference time. By integrating these three
pillars into unified pipelines, practitioners can achieve multi-objective
goals, including latency reduction, memory savings, and energy efficiency-all
while maintaining competitive accuracy. We also highlight emerging frontiers in
hierarchical NAS, neurosymbolic approaches, and advanced distillation tailored
to large language models, underscoring open challenges like pre-training
pruning for massive networks. Our survey offers practical insights, identifies
current research gaps, and outlines promising directions for building scalable,
platform-independent frameworks to accelerate deep learning models at the edge.
|
2501.15017
|
SPOCK 2.0: Update to the FeatureClassifier in the Stability of Planetary
Orbital Configurations Klassifier
|
astro-ph.EP astro-ph.IM cs.LG
|
The Stability of Planetary Orbital Configurations Klassifier (SPOCK) package
collects machine learning models for predicting the stability and collisional
evolution of compact planetary systems. In this paper we explore improvements
to SPOCK's binary stability classifier (FeatureClassifier), which predicts
orbital stability by collecting data over a short N-body integration of a
system. We find that by using a system-specific timescale (rather than a fixed
$10^4$ orbits) for the integration, and by using this timescale as an
additional feature, we modestly improve the model's AUC metric from 0.943 to
0.950 (AUC=1 for a perfect model). We additionally discovered that $\approx
10\%$ of N-body integrations in SPOCK's original training dataset were
duplicated by accident, and that $<1\%$ were misclassified as stable when they
in fact led to ejections. We provide a cleaned dataset of 100,000+ unique
integrations, release a newly trained stability classification model, and make
minor updates to the API.
|
2501.15019
|
Utilizing Graph Neural Networks for Effective Link Prediction in
Microservice Architectures
|
cs.LG
|
Managing microservice architectures in distributed systems is complex and
resource-intensive due to the high frequency and dynamic nature of
inter-service interactions. Accurate prediction of these future interactions
can enhance adaptive monitoring, enabling proactive maintenance and resolution
of potential performance issues before they escalate. This study introduces a
Graph Neural Network (GNN)-based approach, specifically a Graph Attention
Network (GAT), for link prediction in microservice Call Graphs. Unlike social
networks, where interactions tend to occur sporadically and are often less
frequent, microservice Call Graphs involve highly frequent and time-sensitive
interactions that are essential to operational performance. Our approach
leverages temporal segmentation, advanced negative sampling, and the GAT's
attention mechanism to model these complex interactions accurately. Using
real-world data, we evaluate our model on performance metrics such as AUC,
precision, recall, and F1 score, demonstrating its high accuracy and robustness in
predicting microservice interactions. Our findings support the potential of
GNNs for proactive monitoring in distributed systems, paving the way for
applications in adaptive resource management and performance optimization.
|
2501.15021
|
AKVQ-VL: Attention-Aware KV Cache Adaptive 2-Bit Quantization for
Vision-Language Models
|
cs.CL
|
Vision-language models (VLMs) show remarkable performance in multimodal
tasks. However, excessively long multimodal inputs lead to oversized Key-Value
(KV) caches, resulting in significant memory consumption and I/O bottlenecks.
Previous KV quantization methods for Large Language Models (LLMs) may alleviate
these issues but overlook the attention saliency differences of multimodal
tokens, resulting in suboptimal performance. In this paper, we investigate the
attention-aware token saliency patterns in VLMs and propose AKVQ-VL. AKVQ-VL
leverages the proposed Text-Salient Attention (TSA) and Pivot-Token-Salient
Attention (PSA) patterns to adaptively allocate bit budgets. Moreover,
achieving extremely low-bit quantization requires effectively addressing
outliers in KV tensors. AKVQ-VL utilizes the Walsh-Hadamard transform (WHT) to
construct outlier-free KV caches, thereby reducing quantization difficulty.
Evaluations of 2-bit quantization on 12 long-context and multimodal tasks
demonstrate that AKVQ-VL maintains or even improves accuracy, outperforming
LLM-oriented methods. AKVQ-VL reduces peak memory usage by 2.13x, supports up
to 3.25x larger batch sizes, and improves throughput by up to 2.46x.
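The role of the Walsh-Hadamard transform above can be sketched in a few lines: an orthonormal Hadamard rotation spreads a single outlier's energy across all coordinates, so a low-bit uniform quantizer wastes less range on it. The toy KV vector, the 2-bit quantizer, and its scale rule below are illustrative assumptions, not AKVQ-VL's actual scheme.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def quant2bit(x):
    """Toy uniform symmetric 2-bit quantizer (levels -2..1, per-vector scale)."""
    scale = np.abs(x).max() / 1.5
    q = np.clip(np.round(x / scale), -2, 1)
    return q * scale

n = 8
H = hadamard(n) / np.sqrt(n)          # orthonormal WHT: H @ H.T = I
kv = np.array([0.1, -0.2, 0.05, 8.0, 0.0, 0.15, -0.1, 0.2])  # one outlier

kv_rot = H @ kv                        # rotation flattens the outlier
kv_hat = H.T @ quant2bit(kv_rot)       # quantize in the rotated basis, rotate back
err_rot = np.abs(kv - kv_hat).mean()
err_plain = np.abs(kv - quant2bit(kv)).mean()
```

After the rotation, the largest entry magnitude shrinks from 8.0 to roughly 3, so the quantization grid covers the small entries far better; since H is orthonormal, the transform itself is lossless.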
|
2501.15022
|
Using Large Language Models for education managements in Vietnamese with
low resources
|
cs.CL cs.AI
|
Large language models (LLMs), such as GPT-4, Gemini 1.5, Claude 3.5 Sonnet,
and Llama3, have demonstrated significant advancements in various NLP tasks
since the release of ChatGPT in 2022. Despite their success, fine-tuning and
deploying LLMs remain computationally expensive, especially in
resource-constrained environments. In this paper, we propose VietEduFrame, a
framework specifically designed to apply LLMs to educational management tasks
in Vietnamese institutions. Our key contribution includes the development of a
tailored dataset, derived from student education documents at Hanoi VNU, which
addresses the unique challenges faced by educational systems with limited
resources. Through extensive experiments, we show that our approach outperforms
existing methods in terms of accuracy and efficiency, offering a promising
solution for improving educational management in under-resourced environments.
While our framework leverages synthetic data to supplement real-world examples,
we discuss potential limitations regarding broader applicability and robustness
in future implementations.
|
2501.15030
|
OptiSeq: Ordering Examples On-The-Fly for In-Context Learning
|
cs.LG cs.AI cs.CL cs.PF
|
Developers using LLMs and LLM-based agents in their applications have
provided plenty of anecdotal evidence that in-context-learning (ICL) is
fragile. In this paper, we show that in addition to the quantity and quality of
examples, the order in which the in-context examples are listed in the prompt
affects the output of the LLM and, consequently, its performance. While prior
work has explored improving ICL through dataset-dependent techniques, we
introduce OptiSeq, a purely inference-time, dataset-free optimization method
that efficiently determines the best example order. OptiSeq leverages log
probabilities of LLM-generated outputs to systematically prune the search space
of possible orderings and recommend the best order(s) by distinguishing
orderings that yield high levels of accuracy and those that underperform.
Extensive empirical evaluation on multiple LLMs, datasets, and prompts
demonstrates that OptiSeq improves accuracy by 5.5 - 10.5 percentage points
across multiple tasks.
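The selection principle can be sketched as follows. OptiSeq prunes the ordering search space rather than enumerating it, but for a handful of examples the idea reduces to scoring each ordering by the log-probability the LLM assigns to correct outputs and keeping the best; the example names and the score table below are purely hypothetical stand-ins for real LLM calls.

```python
import itertools

examples = ("ex_a", "ex_b", "ex_c")   # hypothetical in-context examples

# Hypothetical scores: mean log-probability of correct outputs under each
# ordering, as would be obtained from LLM queries (higher is better).
SCORES = {
    ("ex_a", "ex_b", "ex_c"): -1.2,
    ("ex_a", "ex_c", "ex_b"): -0.7,
    ("ex_b", "ex_a", "ex_c"): -1.9,
    ("ex_b", "ex_c", "ex_a"): -1.5,
    ("ex_c", "ex_a", "ex_b"): -0.9,
    ("ex_c", "ex_b", "ex_a"): -2.1,
}

def order_score(order):
    """Stand-in for the inference-time scoring signal."""
    return SCORES[order]

# Brute force over all n! orderings; OptiSeq instead prunes this space.
best = max(itertools.permutations(examples), key=order_score)
```

With the toy scores above, the ordering `("ex_a", "ex_c", "ex_b")` wins; the point of the pruning in OptiSeq is to avoid paying the factorial cost of this exhaustive loop.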
|
2501.15034
|
Divergence-Augmented Policy Optimization
|
cs.LG cs.AI stat.ML
|
In deep reinforcement learning, policy optimization methods need to deal with
issues such as function approximation and the reuse of off-policy data.
Standard policy gradient methods do not handle off-policy data well, leading to
premature convergence and instability. This paper introduces a method to
stabilize policy optimization when off-policy data are reused. The idea is to
include a Bregman divergence between the behavior policy that generates the
data and the current policy to ensure small and safe policy updates with
off-policy data. The Bregman divergence is calculated between the state
distributions of two policies, instead of only on the action probabilities,
leading to a divergence augmentation formulation. Empirical experiments on
Atari games show that in the data-scarce scenario where the reuse of off-policy
data becomes necessary, our method can achieve better performance than other
state-of-the-art deep reinforcement learning algorithms.
|
2501.15035
|
Semi-supervised Anomaly Detection with Extremely Limited Labels in
Dynamic Graphs
|
cs.LG
|
Semi-supervised graph anomaly detection (GAD) has recently received
increasing attention, which aims to distinguish anomalous patterns from graphs
under the guidance of a moderate amount of labeled data and a large volume of
unlabeled data. Although these proposed semi-supervised GAD methods have
achieved great success, their superior performance will be seriously degraded
when the provided labels are extremely limited due to some unpredictable
factors. Besides, the existing methods primarily focus on anomaly detection in
static graphs, and little effort was paid to consider the continuous evolution
characteristic of graphs over time (dynamic graphs). To address these
challenges, we propose a novel GAD framework (EL$^{2}$-DGAD) to tackle anomaly
detection problem in dynamic graphs with extremely limited labels.
Specifically, a transformer-based graph encoder model is designed to more
effectively preserve evolving graph structures beyond the local neighborhood.
Then, we incorporate an ego-context hypersphere classification loss to classify
temporal interactions according to their structure and temporal neighborhoods
while ensuring that normal samples are mapped compactly against anomalous data.
Finally, the above loss is further augmented with an ego-context contrasting
module which utilizes unlabeled data to enhance model generalization. Extensive
experiments on four datasets and three label rates demonstrate the
effectiveness of the proposed method in comparison to the existing GAD methods.
|
2501.15038
|
Adaptive Client Selection in Federated Learning: A Network Anomaly
Detection Use Case
|
cs.LG cs.AI
|
Federated Learning (FL) has become a widely used approach for training
machine learning models on decentralized data, addressing the significant
privacy concerns associated with traditional centralized methods. However, the
efficiency of FL relies on effective client selection and robust privacy
preservation mechanisms. Ineffective client selection can result in suboptimal
model performance, while inadequate privacy measures risk exposing sensitive
data.
This paper introduces a client selection framework for FL that incorporates
differential privacy and fault tolerance. The proposed adaptive approach
dynamically adjusts the number of selected clients based on model performance
and system constraints, ensuring privacy through the addition of calibrated
noise.
The method is evaluated on a network anomaly detection use case using the
UNSW-NB15 and ROAD datasets. Results demonstrate up to a 7% improvement in
accuracy and a 25% reduction in training time compared to the FedL2P approach.
Additionally, the study highlights trade-offs between privacy budgets and model
performance, with higher privacy budgets leading to reduced noise and improved
accuracy. While the fault tolerance mechanism introduces a slight performance
decrease, it enhances robustness against client failures. Statistical
validation using the Mann-Whitney U test confirms the significance of these
improvements, with results achieving a p-value of less than 0.05.
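The Mann-Whitney U statistic used for the validation above counts, across two independent samples, how many pairs favor the first sample (with ties counted as half). A minimal sketch with hypothetical per-run accuracies (the numbers below are illustrative, not the paper's results):

```python
import numpy as np

def mann_whitney_u(a, b):
    """U statistic: number of (a_i, b_j) pairs with a_i > b_j, plus 0.5 per tie.
    A full test would convert U to a p-value via its null distribution."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    greater = (a[:, None] > b[None, :]).sum()
    ties = (a[:, None] == b[None, :]).sum()
    return float(greater) + 0.5 * float(ties)

ours = [0.91, 0.93, 0.90, 0.92, 0.94]   # hypothetical accuracies, proposed method
base = [0.85, 0.86, 0.84, 0.87, 0.86]   # hypothetical accuracies, FedL2P baseline
U = mann_whitney_u(ours, base)          # 25.0: every run beats every baseline run
```

When every run of one method exceeds every run of the other, U reaches its maximum of n*m (here 25), which is exactly the situation that yields the smallest attainable p-values at a given sample size.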
|
2501.15040
|
Complementary Subspace Low-Rank Adaptation of Vision-Language Models for
Few-Shot Classification
|
cs.CV
|
Vision-language models (VLMs) are designed for large-scale image-text
alignment as pretrained foundation models. For downstream few-shot
classification tasks, parameter-efficient fine-tuning (PEFT) of VLMs has
gained much popularity in the computer vision community. PEFT methods such as
prompt tuning and linear adapters have been studied for fine-tuning VLMs,
while the low-rank adaptation (LoRA) algorithm has rarely been considered for
few-shot fine-tuning of VLMs. The main obstacle to using LoRA for few-shot
fine-tuning is catastrophic forgetting: the vision-language alignment
knowledge is important for generality in few-shot learning, whereas low-rank
adaptation interferes with the most informative directions of the pretrained
weight matrix. We propose the complementary subspace low-rank adaptation
(Comp-LoRA) method to mitigate catastrophic forgetting in few-shot VLM
fine-tuning. In detail, we optimize the low-rank matrix in the complementary
subspace, thus preserving the general vision-language alignment ability of the
VLM when learning novel few-shot information. We compare the proposed
Comp-LoRA method with other PEFT methods on fine-tuning VLMs for few-shot
classification, and we also show that our method suppresses catastrophic
forgetting compared with directly applying LoRA to the VLM. The results show
that the proposed method surpasses the baseline by about +1.0\% Top-1 accuracy
and preserves the VLM's zero-shot performance over the baseline by about
+1.3\% Top-1 accuracy.
|
2501.15042
|
SCCD: A Session-based Dataset for Chinese Cyberbullying Detection
|
cs.CL
|
The rampant spread of cyberbullying content poses a growing threat to
societal well-being. However, research on cyberbullying detection in Chinese
remains underdeveloped, primarily due to the lack of comprehensive and reliable
datasets. Notably, no existing Chinese dataset is specifically tailored for
cyberbullying detection. Moreover, while comments play a crucial role within
sessions, current session-based datasets often lack detailed, fine-grained
annotations at the comment level. To address these limitations, we present a
novel Chinese cyberbullying dataset, termed SCCD, which consists of 677
session-level samples sourced from a major social media platform, Weibo.
Moreover, each comment within the sessions is annotated with fine-grained
labels rather than conventional binary class labels. Empirically, we evaluate
the performance of various baseline methods on SCCD, highlighting the
challenges for effective Chinese cyberbullying detection.
|
2501.15043
|
Prompt-Aware Controllable Shadow Removal
|
cs.CV
|
Shadow removal aims to restore the image content in shadowed regions. While
deep learning-based methods have shown promising results, they still face key
challenges: 1) uncontrolled removal of all shadows, or 2) controllable removal
that relies heavily on precise shadow region masks. To address these issues, we
introduce a novel paradigm: prompt-aware controllable shadow removal. Unlike
existing approaches, our paradigm allows for targeted shadow removal from
specific subjects based on user prompts (e.g., dots, lines, or subject masks).
This approach eliminates the need for shadow annotations and offers flexible,
user-controlled shadow removal. Specifically, we propose an end-to-end
learnable model, the Prompt-Aware Controllable Shadow Removal Network
(PACSRNet). PACSRNet consists of two key modules: a prompt-aware module that
generates shadow masks for the specified subject based on the user prompt, and
a shadow removal module that uses the shadow prior from the first module to
restore the content in the shadowed regions. Additionally, we enhance the
shadow removal module by incorporating feature information from the
prompt-aware module through a linear operation, providing prompt-guided support
for shadow removal. Recognizing that existing shadow removal datasets lack
diverse user prompts, we contribute a new dataset specifically designed for
prompt-based controllable shadow removal. Extensive experimental results
demonstrate the effectiveness and superiority of PACSRNet.
|
2501.15045
|
Towards Robust Unsupervised Attention Prediction in Autonomous Driving
|
cs.CV cs.AI
|
Robustly predicting attention regions of interest for self-driving systems is
crucial for driving safety but presents significant challenges due to the
labor-intensive nature of obtaining large-scale attention labels and the domain
gap between self-driving scenarios and natural scenes. These challenges are
further exacerbated by complex traffic environments, including camera
corruption under adverse weather, noise interferences, and central bias from
long-tail distributions. To address these issues, we propose a robust
unsupervised attention prediction method. An Uncertainty Mining Branch refines
predictions by analyzing commonalities and differences across multiple
pre-trained models on natural scenes, while a Knowledge Embedding Block bridges
the domain gap by incorporating driving knowledge to adaptively enhance
pseudo-labels. Additionally, we introduce RoboMixup, a novel data augmentation
method that improves robustness against corruption through soft attention and
dynamic augmentation, and mitigates central bias by integrating random cropping
into Mixup as a regularizer. To systematically evaluate robustness in
self-driving attention prediction, we introduce the DriverAttention-C
benchmark, comprising over 100k frames across three subsets: BDD-A-C,
DR(eye)VE-C, and DADA-2000-C. Our method achieves performance equivalent to or
surpassing fully supervised state-of-the-art approaches on three public
datasets and the proposed robustness benchmark, reducing relative corruption
degradation by 58.8% and 52.8%, and improving central bias robustness by 12.4%
and 11.4% in KLD and CC metrics, respectively. Code and data are available at
https://github.com/zaplm/DriverAttention.
|
2501.15046
|
Evaluating Hallucination in Large Vision-Language Models based on
Context-Aware Object Similarities
|
cs.CV cs.AI cs.LG
|
Despite their impressive performance on multi-modal tasks, large
vision-language models (LVLMs) tend to suffer from hallucinations. An important
type is object hallucination, where LVLMs generate objects that are
inconsistent with the images shown to the model. Existing works typically
attempt to quantify object hallucinations by detecting and measuring the
fraction of hallucinated objects in generated captions. Additionally, more
recent work also measures object hallucinations by directly querying the LVLM
with binary questions about the presence of likely hallucinated objects based
on object statistics like top-k frequent objects and top-k co-occurring
objects. In this paper, we present Context-Aware Object Similarities (CAOS), a
novel approach for evaluating object hallucination in LVLMs using object
statistics as well as the generated captions. CAOS uniquely integrates object
statistics with semantic relationships between objects in captions and
ground-truth data. Moreover, existing approaches usually only detect and
measure hallucinations belonging to a predetermined set of in-domain objects
(typically the set of all ground-truth objects for the training dataset) and
ignore generated objects that are not part of this set, leading to
under-evaluation. To address this, we further employ language-model-based
object recognition to detect potentially out-of-domain hallucinated objects and
use an ensemble of LVLMs for verifying the presence of such objects in the
query image. CAOS also examines the sequential dynamics of object generation,
shedding light on how the order of object appearance influences hallucinations,
and employs word embedding models to analyze the semantic reasons behind
hallucinations. CAOS aims to offer a nuanced understanding of the hallucination
tendencies of LVLMs by providing a systematic framework to identify and
interpret object hallucinations.
|
2501.15048
|
YouTube Recommendations Reinforce Negative Emotions: Auditing
Algorithmic Bias with Emotionally-Agentic Sock Puppets
|
cs.SI cs.CY
|
Personalized recommendation algorithms, like those on YouTube, significantly
shape online content consumption. These systems aim to maximize engagement by
learning users' preferences and aligning content accordingly but may
unintentionally reinforce impulsive and emotional biases. Using a sock-puppet
audit methodology, this study examines YouTube's capacity to recognize and
reinforce emotional preferences. Simulated user accounts with assigned
emotional preferences navigate the platform, selecting videos that align with
their assigned preferences and recording subsequent recommendations. Our
findings reveal that YouTube amplifies negative emotions, such as anger
and grievance, by increasing their prevalence and prominence in
recommendations. This reinforcement intensifies over time and persists across
contexts. Surprisingly, contextual recommendations often exceed personalized
ones in reinforcing emotional alignment. These findings suggest the algorithm
amplifies user biases, contributing to emotional filter bubbles and raising
concerns about user well-being and societal impacts. The study emphasizes the
need for balancing personalization with content diversity and user agency.
|
2501.15051
|
Abstractive Text Summarization for Bangla Language Using NLP and Machine
Learning Approaches
|
cs.CL
|
Text summarization involves reducing extensive documents to short sentences
that encapsulate the essential ideas. The goal is to create a summary that
effectively conveys the main points of the original text. We spend a
significant amount of time each day reading the newspaper to stay informed
about current events both domestically and internationally. While reading
newspapers enriches our knowledge, we sometimes come across unnecessary content
that isn't particularly relevant to our lives. In this paper, we introduce a
neural network model designed to summarize Bangla text into concise and
straightforward paragraphs, aiming for greater stability and efficiency.
|
2501.15052
|
Graph-Based Cross-Domain Knowledge Distillation for Cross-Dataset
Text-to-Image Person Retrieval
|
cs.CV cs.AI cs.MM
|
Video surveillance systems are crucial components for ensuring public safety
and management in smart cities. As a fundamental task in video surveillance,
text-to-image person retrieval aims to retrieve the target person from an image
gallery that best matches the given text description. Most existing
text-to-image person retrieval methods are trained in a supervised manner that
requires sufficient labeled data in the target domain. However, it is common in
practice that only unlabeled data is available in the target domain due to the
difficulty and cost of data annotation, which limits the generalization of
existing methods in practical application scenarios. To address this issue, we
propose a novel unsupervised domain adaptation method, termed Graph-Based
Cross-Domain Knowledge Distillation (GCKD), to learn the cross-modal feature
representation for text-to-image person retrieval in a cross-dataset scenario.
The proposed GCKD method consists of two main components. Firstly, a
graph-based multi-modal propagation module is designed to bridge the
cross-domain correlation among the visual and textual samples. Secondly, a
contrastive momentum knowledge distillation module is proposed to learn the
cross-modal feature representation using the online knowledge distillation
strategy. By jointly optimizing the two modules, the proposed method is able to
achieve efficient performance for cross-dataset text-to-image person retrieval.
Extensive experiments on three publicly available text-to-image person
retrieval datasets demonstrate the effectiveness of the proposed GCKD method,
which consistently outperforms the state-of-the-art baselines.
|
2501.15053
|
Exploring the impact of Optimised Hyperparameters on Bi-LSTM-based
Contextual Anomaly Detector
|
cs.LG cs.AI
|
The exponential growth in the usage of the Internet of Things in daily life has
caused an immense increase in the generation of time series data. Smart homes
are one such domain where a bulk of data is being generated, and anomaly
detection is one of the many challenges addressed by researchers in recent
years. Contextual
anomaly is a kind of anomaly that may show deviation from the normal pattern
like point or sequence anomalies, but it also requires prior knowledge about
the data domain and the actions that caused the deviation. Recent studies based
on Recurrent Neural Networks (RNN) have demonstrated strong performance in
anomaly detection. This study explores the impact of automatically tuned
hyperparameters on the Unsupervised Online Contextual Anomaly Detection (UoCAD)
approach by proposing UoCAD with Optimised Hyperparameters (UoCAD-OH). UoCAD-OH
conducts hyperparameter optimisation on the Bi-LSTM model in an offline phase and
uses the fine-tuned hyperparameters to detect anomalies during the online
phase. The experiments involve evaluating the proposed framework on two smart
home air quality datasets containing contextual anomalies. The evaluation
metrics used are Precision, Recall, and F1 score.
|
2501.15054
|
An Attempt to Unraveling Token Prediction Refinement and Identifying
Essential Layers of Large Language Models
|
cs.CL cs.AI cs.LG
|
This research aims to unravel how large language models (LLMs) iteratively
refine token predictions (or, in a general sense, vector predictions). We
utilized a logit lens technique to analyze the model's token predictions
derived from intermediate representations. Specifically, we focused on how LLMs
access and use information from input contexts, and how positioning of relevant
information affects the model's token prediction refinement process. For a
multi-document question answering task with GPT-2, varying the input context
length (the number of documents), we found that the number of layers between
the first layer at which the model predicts the next token correctly and the
later layer at which it finalizes that correct prediction, viewed as a function
of the position of the relevant information (i.e., placed at the beginning,
middle, or end of the input context), follows a nearly inverted-U shape. We
found that the gap between these two layers, on average,
diminishes when relevant information is positioned at the beginning or end of
the input context, suggesting that the model requires more refinements when
processing longer contexts with relevant information situated in the middle,
and highlighting which layers are essential for determining the correct output.
Our analysis provides insights about how token predictions are distributed
across different conditions, and establishes important connections to existing
hypotheses and previous findings in AI safety research and development.
|
2501.15055
|
Group Ligands Docking to Protein Pockets
|
q-bio.BM cs.AI
|
Molecular docking is a key task in computational biology that has attracted
increasing interest from the machine learning community. While existing methods
have achieved success, they generally treat each protein-ligand pair in
isolation. Inspired by the biochemical observation that ligands binding to the
same target protein tend to adopt similar poses, we propose \textsc{GroupBind},
a novel molecular docking framework that simultaneously considers multiple
ligands docking to a protein. This is achieved by introducing an interaction
layer for the group of ligands and a triangle attention module for embedding
protein-ligand and group-ligand pairs. By integrating our approach with a
diffusion-based docking model, we set a new state-of-the-art performance on the
PDBBind blind
docking benchmark, demonstrating the effectiveness of our proposed molecular
docking paradigm.
|
2501.15056
|
Feedback-Aware Monte Carlo Tree Search for Efficient Information Seeking
in Goal-Oriented Conversations
|
cs.AI cs.CL cs.HC cs.LG
|
The ability to identify and acquire missing information is a critical
component of effective decision making and problem solving. With the rise of
conversational artificial intelligence (AI) systems, strategically formulating
information-seeking questions becomes crucial and demands efficient methods to
guide the search process. We introduce a novel approach to adaptive
question-asking through a combination of Large Language Models (LLM) for
generating questions that maximize information gain, Monte Carlo Tree Search
(MCTS) for constructing and leveraging a decision tree across multiple samples,
and a hierarchical feedback mechanism to learn from past interactions. We
present two key innovations: (1) an adaptive MCTS algorithm that balances
exploration and exploitation for efficient search over potential questions; and
(2) a clustering-based feedback algorithm that leverages prior experience to
guide future interactions. Each incoming sample is assigned to a cluster based
on its semantic similarity with previously observed samples. Our UCT (Upper
Confidence bound for Trees) formulation selects optimal questions by combining
expected rewards, a function of information gain, with a cluster-specific bonus
that decays with depth, to emphasize the importance of early-stage questions
that have proven effective for narrowing the solution space in similar samples.
Experiments across three domains, including medical diagnosis and
troubleshooting, demonstrate that our method leads to an average of 12%
improvement in success rates and a 10x reduction in the average number of LLM
calls made per conversation for the search process, in comparison to the state
of the art.
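One plausible reading of the UCT formulation described in this abstract, sketched in Python. The exploration constant, the decay rate, and the candidate statistics below are illustrative assumptions, not the paper's actual values:

```python
import math

def uct_score(expected_reward, visits, parent_visits, depth,
              cluster_bonus, c=1.4, decay=0.5):
    # Exploration term plus a cluster-specific bonus that decays with depth.
    if visits == 0:
        return float("inf")            # always try unvisited questions first
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return expected_reward + explore + cluster_bonus * (decay ** depth)

# Pick the best of three candidate questions at depth 1.
candidates = [
    {"reward": 0.6, "visits": 10, "bonus": 0.2},
    {"reward": 0.5, "visits": 2,  "bonus": 0.4},   # rarely tried, high bonus
    {"reward": 0.7, "visits": 25, "bonus": 0.0},
]
parent_visits = sum(c["visits"] for c in candidates)
scores = [uct_score(c["reward"], c["visits"], parent_visits, depth=1,
                    cluster_bonus=c["bonus"]) for c in candidates]
best = max(range(len(candidates)), key=scores.__getitem__)
# The under-explored question wins despite its lower expected reward.
assert best == 1
```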
|
2501.15057
|
Predictive Modeling and Uncertainty Quantification of Fatigue Life in
Metal Alloys using Machine Learning
|
cs.LG cond-mat.mtrl-sci
|
Recent advancements in machine learning-based methods have demonstrated great
potential for improved property prediction in material science. However,
reliable estimation of the confidence intervals for the predicted values
remains a challenge, due to the inherent complexities in material modeling.
This study introduces a novel approach for uncertainty quantification in
fatigue life prediction of metal materials based on integrating knowledge from
physics-based fatigue life models and machine learning models. The proposed
approach employs physics-based input features estimated using the Basquin
fatigue model to augment the experimentally collected data of fatigue life.
Furthermore, a physics-informed loss function that enforces boundary
constraints for the estimated fatigue life of considered materials is
introduced for the neural network models. Experimental validation on datasets
comprising collected data from fatigue life tests for Titanium alloys and
Carbon steel alloys demonstrates the effectiveness of the proposed approach.
The synergy between physics-based models and data-driven models enhances the
consistency in predicted values and improves uncertainty interval estimates.
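The Basquin relation on which the physics-based input features rest can be written out directly; the material constants below are hypothetical, for illustration only:

```python
def basquin_life(stress_amplitude, sigma_f_prime, b):
    # Basquin relation: sigma_a = sigma_f' * (2 * N_f) ** b, solved for N_f.
    # b is the (negative) fatigue strength exponent.
    return 0.5 * (stress_amplitude / sigma_f_prime) ** (1.0 / b)

# Hypothetical material constants, for illustration only.
sigma_f_prime = 900.0   # fatigue strength coefficient, MPa
b = -0.09               # fatigue strength exponent

# Physics-based feature: estimated life at each tested stress amplitude.
stresses = [450.0, 400.0, 350.0]
estimated_lives = [basquin_life(s, sigma_f_prime, b) for s in stresses]

# Lower stress amplitude must yield a longer estimated life.
assert estimated_lives[0] < estimated_lives[1] < estimated_lives[2]
```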
|
2501.15058
|
KETA: Kinematic-Phrases-Enhanced Text-to-Motion Generation via
Fine-grained Alignment
|
cs.CV
|
Motion synthesis plays a vital role in various fields of artificial
intelligence. Among the various conditions of motion generation, text can
describe motion details elaborately and is easy to acquire, making
text-to-motion (T2M) generation important. State-of-the-art T2M techniques
mainly leverage diffusion models to generate motions with text prompts as
guidance, tackling the many-to-many nature of T2M tasks. However, existing T2M
approaches face challenges, given the gap between the natural language domain
and the physical domain, making it difficult to generate motions fully
consistent with the texts.
We leverage kinematic phrases (KP), an intermediate representation that
bridges these two modalities, to solve this. Our proposed method, KETA,
decomposes the given text into several decomposed texts via a language model.
It trains an aligner to align decomposed texts with the KP segments extracted
from the generated motions. Thus, it's possible to restrict the behaviors for
diffusion-based T2M models. During the training stage, we deploy the text-KP
alignment loss as an auxiliary goal to supervise the models. During the
inference stage, we refine our generated motions for multiple rounds in our
decoder structure, where we compute the text-KP distance as the guidance signal
in each new round. Experiments demonstrate that KETA achieves up to 1.19x and
2.34x better R-precision and FID values on both backbones of the base model,
the motion diffusion model. Compared to a wide range of T2M generation models,
KETA achieves either the best or the second-best performance.
|
2501.15061
|
PolaFormer: Polarity-aware Linear Attention for Vision Transformers
|
cs.CV cs.AI
|
Linear attention has emerged as a promising alternative to softmax-based
attention, leveraging kernelized feature maps to reduce complexity from
quadratic to linear in sequence length. However, the non-negative constraint on
feature maps and the relaxed exponential function used in approximation lead to
significant information loss compared to the original query-key dot products,
resulting in less discriminative attention maps with higher entropy. To address
the missing interactions driven by negative values in query-key pairs, we
propose a polarity-aware linear attention mechanism that explicitly models both
same-signed and opposite-signed query-key interactions, ensuring comprehensive
coverage of relational information. Furthermore, to restore the spiky
properties of attention maps, we provide a theoretical analysis proving the
existence of a class of element-wise functions (with positive first and second
derivatives) that can reduce entropy in the attention distribution. For
simplicity, and recognizing the distinct contributions of each dimension, we
employ a learnable power function for rescaling, allowing strong and weak
attention signals to be effectively separated. Extensive experiments
demonstrate that the proposed PolaFormer improves performance on various vision
tasks, enhancing both expressiveness and efficiency by up to 4.6%.
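The polarity decomposition can be sketched in numpy: splitting queries and keys into positive and negative parts recovers the full signed dot products while keeping the linear-attention computation order. This is a sketch of the mechanism only, not the actual PolaFormer layer; the normalization and the omission of the learnable power rescaling are simplifications:

```python
import numpy as np

def polarity_linear_attention(q, k, v):
    # Split queries and keys into positive and negative parts so that
    # opposite-signed interactions are modeled instead of clipped away.
    qp, qn = np.maximum(q, 0), np.maximum(-q, 0)
    kp, kn = np.maximum(k, 0), np.maximum(-k, 0)
    # Same-signed and opposite-signed terms, in linear-attention order
    # (k.T @ v is computed first, so cost is linear in sequence length).
    same = qp @ (kp.T @ v) + qn @ (kn.T @ v)
    opposite = qp @ (kn.T @ v) + qn @ (kp.T @ v)
    # Together they recover the full signed dot-product numerator.
    assert np.allclose(same - opposite, (q @ k.T) @ v)
    norm = (qp + qn) @ (kp + kn).sum(axis=0, keepdims=True).T + 1e-6
    return (same - opposite) / norm

rng = np.random.default_rng(0)
n, d = 8, 16
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = polarity_linear_attention(q, k, v)
assert out.shape == (n, d)
```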
|
2501.15062
|
Exact Fit Attention in Node-Holistic Graph Convolutional Network for
Improved EEG-Based Driver Fatigue Detection
|
cs.LG
|
EEG-based fatigue monitoring can effectively reduce the incidence of related
traffic accidents. In the past decade, with the advancement of deep learning,
convolutional neural networks (CNN) have been increasingly used for EEG signal
processing. However, due to the data's non-Euclidean characteristics, existing
CNNs may lose important spatial information from EEG, specifically channel
correlation. Thus, we propose the node-holistic graph convolutional network
(NHGNet), a model that uses graphic convolution to dynamically learn each
channel's features. With exact fit attention optimization, the network captures
inter-channel correlations through a trainable adjacency matrix. The
interpretability is enhanced by revealing critical areas of brain activity and
their interrelations in various mental states. In validations on two public
datasets, NHGNet outperforms the SOTAs. Specifically, in the intra-subject
setting, NHGNet improved detection accuracy by at least 2.34% and 3.42%, and in
the inter-subject setting, it improved by at least 2.09% and 15.06%. Visualization
research on the model revealed that the central parietal area plays an
important role in detecting fatigue levels, whereas the frontal and temporal
lobes are essential for maintaining vigilance.
|
2501.15063
|
Cross-modal Context Fusion and Adaptive Graph Convolutional Network for
Multimodal Conversational Emotion Recognition
|
cs.CL
|
Emotion recognition has a wide range of applications in human-computer
interaction, marketing, healthcare, and other fields. In recent years, the
development of deep learning technology has provided new methods for emotion
recognition. Prior to this, many emotion recognition methods have been
proposed, including multimodal emotion recognition methods, but these methods
ignore the mutual interference between different input modalities and pay
little attention to the directional dialogue between speakers. Therefore, this
article proposes a new multimodal emotion recognition method, including a
cross-modal context fusion module, an adaptive graph convolutional encoding
module, and an emotion classification module. The cross-modal context module
includes a cross-modal alignment module and a context fusion module, which are
used to reduce the noise introduced by mutual interference between different
input modalities. The adaptive graph convolution module constructs a dialogue
relationship graph for extracting inter-speaker dependencies and
self-dependencies. Our model has surpassed some state-of-the-art methods on
publicly
available benchmark datasets and achieved high recognition accuracy.
|
2501.15065
|
Task Arithmetic in Trust Region: A Training-Free Model Merging Approach
to Navigate Knowledge Conflicts
|
cs.LG cs.AI
|
Multi-task model merging offers an efficient solution for integrating
knowledge from multiple fine-tuned models, mitigating the significant
computational and storage demands associated with multi-task training. As a key
technique in this field, Task Arithmetic (TA) defines task vectors by
subtracting the pre-trained model ($\theta_{\text{pre}}$) from the fine-tuned
task models in parameter space, then adjusting the weight between these task
vectors and $\theta_{\text{pre}}$ to balance task-generalized and task-specific
knowledge. Despite the promising performance of TA, conflicts can arise among
the task vectors, particularly when different tasks require distinct model
adaptations. In this paper, we formally define this issue as knowledge
conflicts, characterized by the performance degradation of one task after
merging with a model fine-tuned for another task. Through in-depth analysis, we
show that these conflicts stem primarily from the components of task vectors
that align with the gradient of task-specific losses at $\theta_{\text{pre}}$.
To address this, we propose Task Arithmetic in Trust Region (TATR), which
defines the trust region as dimensions in the model parameter space that cause
only small changes (corresponding to the task vector components with gradient
orthogonal direction) in the task-specific losses. Restricting parameter
merging within this trust region, TATR can effectively alleviate knowledge
conflicts. Moreover, TATR serves as both an independent approach and a
plug-and-play module compatible with a wide range of TA-based methods.
Extensive empirical evaluations on eight distinct datasets robustly demonstrate
that TATR improves the multi-task performance of several TA-based model merging
methods by an observable margin.
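The task-vector arithmetic and the trust-region restriction can be sketched as follows. The per-parameter |gradient| sensitivity proxy and the median threshold are illustrative assumptions, not TATR's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100

theta_pre = rng.standard_normal(d)                   # pretrained parameters
theta_a = theta_pre + 0.1 * rng.standard_normal(d)   # fine-tuned on task A
theta_b = theta_pre + 0.1 * rng.standard_normal(d)   # fine-tuned on task B

tau_a = theta_a - theta_pre        # task vectors (Task Arithmetic)
tau_b = theta_b - theta_pre

# Stand-ins for the per-parameter gradients of each task-specific loss
# at theta_pre (in practice these come from backprop on each task).
grad_a = rng.standard_normal(d)
grad_b = rng.standard_normal(d)

# Trust region: dimensions where both losses are relatively insensitive,
# a crude proxy for the gradient-orthogonal components described above.
sensitivity = np.abs(grad_a) + np.abs(grad_b)
trust = sensitivity < np.quantile(sensitivity, 0.5)

lam = 0.5                          # merging coefficient
theta_merged = theta_pre + lam * trust * (tau_a + tau_b)

# Outside the trust region the merged model equals the pretrained one.
assert np.allclose(theta_merged[~trust], theta_pre[~trust])
```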
|
2501.15067
|
CG-RAG: Research Question Answering by Citation Graph
Retrieval-Augmented LLMs
|
cs.IR cs.LG
|
Research question answering requires accurate retrieval and contextual
understanding of scientific literature. However, current Retrieval-Augmented
Generation (RAG) methods often struggle to balance complex document
relationships with precise information retrieval. In this paper, we introduce
Contextualized Graph Retrieval-Augmented Generation (CG-RAG), a novel framework
that integrates sparse and dense retrieval signals within graph structures to
enhance retrieval efficiency and subsequently improve generation quality for
research question answering. First, we propose a contextual graph
representation for citation graphs, effectively capturing both explicit and
implicit connections within and across documents. Next, we introduce
Lexical-Semantic Graph Retrieval (LeSeGR), which seamlessly integrates sparse
and dense retrieval signals with graph encoding. It bridges the gap between
lexical precision and semantic understanding in citation graph retrieval,
demonstrating generalizability to existing graph retrieval and hybrid retrieval
methods. Finally, we present a context-aware generation strategy that utilizes
the retrieved graph-structured information to generate precise and contextually
enriched responses using large language models (LLMs). Extensive experiments on
research question answering benchmarks across multiple domains demonstrate that
our CG-RAG framework significantly outperforms RAG methods combined with
various state-of-the-art retrieval approaches, delivering superior retrieval
accuracy and generation quality.
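A generic sparse-dense score fusion, as a stand-in for the kind of signal integration LeSeGR performs over the citation graph. The min-max normalization, the mixing weight `alpha`, and the toy scores are illustrative assumptions, not the paper's method:

```python
import numpy as np

def hybrid_score(sparse_scores, dense_scores, alpha=0.5):
    # Min-max normalize each signal so the two scales are comparable,
    # then mix them with a fixed weight.
    def norm(s):
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)
    return alpha * norm(sparse_scores) + (1 - alpha) * norm(dense_scores)

sparse = [12.0, 3.0, 7.5]   # e.g. lexical (BM25-style) scores per document
dense = [0.2, 0.9, 0.6]     # e.g. cosine similarity of dense embeddings
scores = hybrid_score(sparse, dense)
ranked = np.argsort(scores)[::-1]
# Document 2 wins: it is reasonably strong on both signals.
assert ranked[0] == 2
```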
|
2501.15068
|
An Atomic Skill Library Construction Method for Data-Efficient Embodied
Manipulation
|
cs.RO
|
Embodied manipulation is a fundamental ability in the realm of embodied
artificial intelligence. Although current embodied manipulation models show
certain generalizations in specific settings, they struggle in new environments
and tasks due to the complexity and diversity of real-world scenarios. The
traditional end-to-end data collection and training manner leads to significant
data demands. Decomposing end-to-end tasks into atomic skills helps reduce data
requirements and improves the task success rate. However, existing methods are
limited by predefined skill sets that cannot be dynamically updated. To address
this issue, we introduce a three-wheeled data-driven method to build an atomic
skill library. We divide tasks into subtasks using the Vision-Language-Planning
(VLP). Then, atomic skill definitions are formed by abstracting the subtasks.
Finally, an atomic skill library is constructed via data collection and
Vision-Language-Action (VLA) fine-tuning. As the atomic skill library expands
dynamically with the three-wheel update strategy, the range of tasks it can
cover grows naturally. In this way, our method shifts focus from end-to-end
tasks to atomic skills, significantly reducing data costs while maintaining
high performance and enabling efficient adaptation to new tasks. Extensive
experiments in real-world settings demonstrate the effectiveness and efficiency
of our approach.
|
2501.15070
|
Unifying Prediction and Explanation in Time-Series Transformers via
Shapley-based Pretraining
|
cs.LG
|
In this paper, we propose ShapTST, a framework that enables time-series
transformers to efficiently generate Shapley-value-based explanations alongside
predictions in a single forward pass. Shapley values are widely used to
evaluate the contribution of different time-steps and features in a test
sample, and are commonly generated through repeatedly inferring on each sample
with different parts of information removed. Therefore, it requires expensive
inference-time computations that occur at every request for model explanations.
In contrast, our framework unifies the explanation and prediction in training
through a novel Shapley-based pre-training design, which eliminates the
undesirable test-time computation and replaces it with a single-time
pre-training. Moreover, this specialized pre-training benefits the prediction
performance by making the transformer model more effectively weigh different
features and time-steps in the time-series, particularly improving the
robustness against data noise that is common to raw time-series data. We
experimentally validated our approach on eight public datasets, where our
time-series model achieved competitive results in both classification and
regression tasks, while providing Shapley-based explanations similar to those
obtained with post-hoc computation. Our work offers an efficient and
explainable solution for time-series analysis tasks in safety-critical
applications.
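For reference, the exact subset-enumeration Shapley computation that such post-hoc explanations require; this toy additive model illustrates the exponential test-time cost that the pre-training design above moves out of the inference path:

```python
import itertools, math

def exact_shapley(n, value_fn):
    # Exact Shapley values by enumerating all feature subsets: the
    # repeated remove-and-re-infer computation done at test time by
    # post-hoc explainers.
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                w = (math.factorial(size) * math.factorial(n - size - 1)
                     / math.factorial(n))
                phi[i] += w * (value_fn(set(subset) | {i})
                               - value_fn(set(subset)))
    return phi

# Toy additive "model": a coalition's value is the sum of its features.
x = [3.0, -1.0, 2.0]
phi = exact_shapley(len(x), lambda coalition: sum(x[j] for j in coalition))

# For an additive model, each feature's Shapley value is its own term.
assert all(abs(p - xi) < 1e-9 for p, xi in zip(phi, x))
```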
|
2501.15071
|
Gaze-based Task Decomposition for Robot Manipulation in Imitation
Learning
|
cs.RO
|
In imitation learning for robotic manipulation, decomposing object
manipulation tasks into multiple sub-tasks is essential. This decomposition
enables the reuse of learned skills in varying contexts and the combination of
acquired skills to perform novel tasks, rather than merely replicating
demonstrated motions. Gaze plays a critical role in human object manipulation,
where it is strongly correlated with hand movements. We hypothesize that an
imitating agent's gaze control, fixating on specific landmarks and
transitioning between them, simultaneously segments demonstrated manipulations
into sub-tasks. In this study, we propose a simple yet robust task
decomposition method based on gaze transitions. The method leverages
teleoperation, a common modality in robotic manipulation for collecting
demonstrations, in which a human operator's gaze is measured and used for task
decomposition as a substitute for an imitating agent's gaze. Notably, our
method achieves consistent task decomposition across all demonstrations for
each task, which is desirable in contexts such as machine learning. We applied
this method to demonstrations of various tasks and evaluated the
characteristics and consistency of the resulting sub-tasks. Furthermore,
through extensive testing across a wide range of hyperparameter variations, we
demonstrated that the proposed method possesses the robustness necessary for
application to different robotic systems.
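A toy version of segmentation by gaze transitions; the persistence threshold `min_len` and the landmark-id representation of gaze targets are illustrative assumptions, not the paper's measured-gaze pipeline:

```python
def segment_by_gaze(gaze_targets, min_len=2):
    # Cut the demonstration wherever the fixated landmark changes and
    # the new fixation persists for at least `min_len` steps, so brief
    # glances do not create spurious sub-tasks.
    segments, start = [], 0
    for t in range(1, len(gaze_targets)):
        if gaze_targets[t] != gaze_targets[t - 1]:
            run = 1
            while (t + run < len(gaze_targets)
                   and gaze_targets[t + run] == gaze_targets[t]):
                run += 1
            if run >= min_len:
                segments.append((start, t))
                start = t
    segments.append((start, len(gaze_targets)))
    return segments

# Gaze landmark per timestep during a pick-and-place demonstration;
# the single-step glance back at "cup" is absorbed, not segmented.
gaze = ["cup"] * 5 + ["shelf"] * 4 + ["cup"] * 1 + ["table"] * 6
segs = segment_by_gaze(gaze)
assert segs == [(0, 5), (5, 10), (10, 16)]
```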
|
2501.15073
|
SpatioTemporal Learning for Human Pose Estimation in Sparsely-Labeled
Videos
|
cs.CV
|
Human pose estimation in videos remains a challenge, largely due to the
reliance on extensive manual annotation of large datasets, which is expensive
and labor-intensive. Furthermore, existing approaches often struggle to capture
long-range temporal dependencies and overlook the complementary relationship
between temporal pose heatmaps and visual features. To address these
limitations, we introduce STDPose, a novel framework that enhances human pose
estimation by learning spatiotemporal dynamics in sparsely-labeled videos.
STDPose incorporates two key innovations: 1) A novel Dynamic-Aware Mask to
capture long-range motion context, allowing for a nuanced understanding of pose
changes. 2) A system for encoding and aggregating spatiotemporal
representations and motion dynamics to effectively model spatiotemporal
relationships, improving the accuracy and robustness of pose estimation.
STDPose establishes a new performance benchmark for both video pose propagation
(i.e., propagating pose annotations from labeled frames to unlabeled frames)
and pose estimation tasks, across three large-scale evaluation datasets.
Additionally, utilizing pseudo-labels generated by pose propagation, STDPose
achieves competitive performance with only 26.7% labeled data.
|
2501.15074
|
PatentLMM: Large Multimodal Model for Generating Descriptions for Patent
Figures
|
cs.CV cs.AI
|
Writing comprehensive and accurate descriptions of technical drawings in
patent documents is crucial to effective knowledge sharing and enabling the
replication and protection of intellectual property. However, automation of
this task has been largely overlooked by the research community. To this end,
we introduce PatentDesc-355K, a novel large-scale dataset containing ~355K
patent figures along with their brief and detailed textual descriptions
extracted from more than 60K US patent documents. In addition, we propose
PatentLMM - a novel multimodal large language model specifically tailored to
generate high-quality descriptions of patent figures. Our proposed PatentLMM
comprises two key components: (i) PatentMME, a specialized multimodal vision
encoder that captures the unique structural elements of patent figures, and
(ii) PatentLLaMA, a domain-adapted version of LLaMA fine-tuned on a large
collection of patents. Extensive experiments demonstrate that training a vision
encoder specifically designed for patent figures significantly boosts
performance, generating more coherent descriptions than fine-tuning
similar-sized off-the-shelf multimodal models. PatentDesc-355K and PatentLMM
pave the way for automating the understanding of patent figures, enabling
efficient knowledge sharing and faster drafting of patent documents. We make
the code and data publicly available.
|
2501.15076
|
Cryptanalysis via Machine Learning Based Information Theoretic Metrics
|
cs.CR cs.IT cs.LG math.IT
|
The fields of machine learning (ML) and cryptanalysis share a common
objective: constructing a function from a given set of inputs and outputs.
However, the approaches and methods for doing so differ vastly between
the two fields. In this paper, we explore integrating knowledge from the ML
domain to provide empirical evaluations of cryptosystems. Particularly, we
utilize information theoretic metrics to perform ML-based distribution
estimation. We propose two novel applications of ML algorithms that can be
applied in a known plaintext setting to perform cryptanalysis on any
cryptosystem. We use mutual information neural estimation to calculate a
cryptosystem's mutual information leakage, and a binary cross entropy
classification to model an indistinguishability under chosen plaintext attack
(CPA). These algorithms can be readily applied in an audit setting to evaluate
the robustness of a cryptosystem and the results can provide a useful empirical
bound. We evaluate the efficacy of our methodologies by empirically analyzing
several encryption schemes. Furthermore, we extend the analysis to novel
network coding-based cryptosystems and provide other use cases for our
algorithms. We show that our classification model correctly identifies the
encryption schemes that are not IND-CPA secure, such as DES, RSA, and AES ECB,
with high accuracy. It also identifies faults in CPA-secure cryptosystems
with faulty parameters, such as a reduced-counter version of AES-CTR. We also
conclude that, in most cases, a smaller neural network using less computing
power can identify vulnerabilities with our algorithms, providing a quick
sanity check of the cryptosystem and helping to decide whether to spend more
resources to deploy larger networks that are able to break the cryptosystem.
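To make the IND-CPA framing concrete, here is a toy sketch of the chosen-plaintext distinguishing game against a deterministic ECB-like cipher. It replaces the paper's learned binary cross entropy classifier with a simple rule-based distinguisher that looks for repeated ciphertext blocks; the cipher, messages, and distinguisher are all purely illustrative:

```python
import hashlib
import os
import random

BLOCK = 16

def ecb_like(key: bytes, pt: bytes) -> bytes:
    """Toy deterministic block cipher: equal plaintext blocks map to
    equal ciphertext blocks under the same key (the classic ECB weakness)."""
    return b"".join(
        hashlib.sha256(key + pt[i:i + BLOCK]).digest()[:BLOCK]
        for i in range(0, len(pt), BLOCK)
    )

def guess(ct: bytes) -> int:
    """Guess m0 (all-equal blocks) iff the ciphertext repeats a block."""
    blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
    return 0 if len(set(blocks)) < len(blocks) else 1

def cpa_advantage(trials: int = 200) -> float:
    """Play the IND-CPA game `trials` times and return the empirical
    advantage |2 * Pr[win] - 1|; near 1.0 means the scheme is broken."""
    m0 = b"A" * (4 * BLOCK)  # four identical blocks
    wins = 0
    for _ in range(trials):
        key = os.urandom(16)
        m1 = os.urandom(4 * BLOCK)  # four (almost surely) distinct blocks
        b = random.randrange(2)
        ct = ecb_like(key, m0 if b == 0 else m1)
        wins += guess(ct) == b
    return abs(2 * wins / trials - 1)

print(f"distinguisher advantage: {cpa_advantage():.2f}")  # ~1.00: not IND-CPA secure
```

A secure scheme would drive the advantage toward 0; the paper's contribution is to estimate this kind of gap empirically with a trained classifier rather than a hand-written rule.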
|
2501.15077
|
NetChain: Authenticated Blockchain Top-k Graph Data Queries and its
Application in Asset Management
|
cs.CR cs.DB
|
As a valuable digital resource, graph data is an important data asset, which
has been widely utilized across various fields to optimize decision-making and
enable smarter solutions. To manage data assets, blockchain is widely used to
enable data sharing and trading, but it cannot support complex analytical
queries. vChain was proposed to achieve verifiable boolean queries over
blockchain by designing an embedded authenticated data structure (ADS).
However, for generating (non-)existence proofs, vChain suffers from expensive
storage and computation costs in ADS construction, along with high
communication and verification costs. In this paper, we propose a novel
NetChain framework that enables efficient top-k queries over on-chain graph
data with verifiability. Specifically, we design a novel authenticated
two-layer index that supports (non-)existence proof generation at the block level
and built-in verifiability for matched objects. To further alleviate the
computation and verification overhead, an optimized variant NetChain+ is
derived. The authenticity of our frameworks is validated through security
analysis. Evaluations show that NetChain and NetChain+ outperform vChain,
respectively achieving up to 85X and 31X improvements on ADS construction.
Moreover, compared with vChain, NetChain+ reduces the communication and
verification costs by 87% and 96% respectively.
|
2501.15078
|
Impact-resistant, autonomous robots inspired by tensegrity architecture
|
cs.RO
|
Future robots will navigate perilous, remote environments with resilience and
autonomy. Researchers have proposed building robots with compliant bodies to
enhance robustness, but this approach often sacrifices the autonomous
capabilities expected of rigid robots. Inspired by tensegrity architecture, we
introduce a tensegrity robot -- a hybrid robot made from rigid struts and
elastic tendons -- that demonstrates the advantages of compliance and the
autonomy necessary for task performance. This robot offers impact resistance
and autonomy in a field environment, along with further advances in the state of
the art, including surviving harsh impacts from drops (at least 5.7 m), accurately
reconstructing its shape and orientation using on-board sensors, achieving high
locomotion speeds (18 bar lengths per minute), and climbing the steepest
incline of any tensegrity robot (28 degrees). We characterize the robot's
locomotion on unstructured terrain, showcase its autonomous capabilities in
navigation tasks, and demonstrate its robustness by rolling it off a cliff.
|
2501.15079
|
Salvaging Forbidden Treasure in Medical Data: Utilizing Surrogate
Outcomes and Single Records for Rare Event Modeling
|
stat.ME cs.LG stat.AP stat.ML
|
The vast repositories of Electronic Health Records (EHR) and medical claims
hold untapped potential for studying rare but critical events, such as suicide
attempts. Conventional setups often model suicide attempt as a univariate
outcome and exclude ``single-record'' patients, those with only one documented
encounter, due to a lack of historical information. However, patients diagnosed
with a suicide attempt at their only encounter can, perhaps surprisingly,
represent a substantial proportion of all attempt cases in the data, as high as
70--80%. We develop a hybrid and integrative learning framework to
leverage concurrent outcomes as surrogates and harness the forbidden yet
precious information from single-record data. Our approach employs a supervised
learning component to learn the latent variables that connect primary (e.g.,
suicide) and surrogate outcomes (e.g., mental disorders) to historical
information. It simultaneously employs an unsupervised learning component to
utilize the single-record data, through the shared latent variables. As such,
our approach offers a general strategy for information integration that is
crucial to modeling rare conditions and events. With hospital inpatient data
from Connecticut, we demonstrate that single-record data and concurrent
diagnoses indeed carry valuable information, and utilizing them can
substantially improve suicide risk modeling.
|
2501.15081
|
Can Large Language Models Be Trusted as Black-Box Evolutionary
Optimizers for Combinatorial Problems?
|
cs.NE cs.AI
|
Evolutionary computation excels in complex optimization but demands deep
domain knowledge, restricting its accessibility. Large Language Models (LLMs)
offer a game-changing solution with their extensive knowledge and could
democratize the optimization paradigm. Although LLMs possess significant
capabilities, they may not be universally effective, particularly since
evolutionary optimization encompasses multiple stages. It is therefore
imperative to evaluate the suitability of LLMs as evolutionary optimizers (EVO).
Thus, we establish a series of rigorous standards to thoroughly examine the
fidelity of LLM-based EVO output in different stages of evolutionary
optimization and then introduce a robust error-correction mechanism to mitigate
the output uncertainty. Furthermore, we explore a cost-efficient method that
directly operates on entire populations with excellent effectiveness in
contrast to individual-level optimization. Through extensive experiments, we
rigorously validate the performance of LLMs as operators targeted for
combinatorial problems. Our findings provide critical insights and valuable
observations, advancing the understanding and application of LLM-based
optimization.
|
2501.15084
|
Hierarchical Pattern Decryption Methodology for Ransomware Detection
Using Probabilistic Cryptographic Footprints
|
cs.CR cs.AI
|
The increasing sophistication of encryption-based ransomware has demanded
innovative approaches to detection and mitigation, prompting the development of
a hierarchical framework grounded in probabilistic cryptographic analysis. By
focusing on the statistical characteristics of encryption patterns, the
proposed methodology introduces a layered approach that combines advanced
clustering algorithms with machine learning to isolate ransomware-induced
anomalies. Through comprehensive testing across diverse ransomware families,
the framework demonstrated exceptional accuracy, effectively distinguishing
malicious encryption operations from benign activities while maintaining low
false positive rates. The system's design integrates dynamic feedback
mechanisms, enabling adaptability to varying cryptographic complexities and
operational environments. Detailed entropy-based evaluations revealed its
sensitivity to subtle deviations in encryption workflows, offering a robust
alternative to traditional detection methods reliant on static signatures or
heuristics. Computational benchmarks confirmed its scalability and efficiency,
achieving consistent performance even under high data loads and complex
cryptographic scenarios. The inclusion of real-time clustering and anomaly
evaluation ensures rapid response capabilities, addressing critical latency
challenges in ransomware detection. Performance comparisons with established
methods highlighted its improvements in detection efficacy, particularly
against advanced ransomware employing extended key lengths and unique
cryptographic protocols.
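The entropy-based evaluations mentioned above rest on a standard observation: well-encrypted data has near-maximal byte entropy. A minimal Shannon-entropy check, a heuristic building block only and not the paper's full probabilistic framework, might look like:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (0..8)."""
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Heuristic flag: near-maximal entropy is typical of ciphertext,
    but also of compressed media, hence the need for richer analysis."""
    return byte_entropy(data) >= threshold

print(byte_entropy(b"aaaa"))                # 0.0: a constant file
print(byte_entropy(bytes(range(256)) * 4))  # 8.0: uniformly distributed bytes
```

Because compressed files also score high, entropy alone produces false positives; layering clustering and anomaly evaluation on top of such statistics is what the abstract's framework is addressing.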
|
2501.15085
|
Data Center Cooling System Optimization Using Offline Reinforcement
Learning
|
cs.AI cs.LG cs.SY eess.SY
|
The recent advances in information technology and artificial intelligence
have fueled a rapid expansion of the data center (DC) industry worldwide,
accompanied by an immense appetite for electricity to power the DCs. In a
typical DC, around 30~40% of the energy is spent on the cooling system rather
than on computer servers, posing a pressing need for developing new
energy-saving optimization technologies for DC cooling systems. However,
optimizing such real-world industrial systems faces numerous challenges,
including but not limited to a lack of reliable simulation environments,
limited historical data, and stringent safety and control robustness
requirements. In this work, we present a novel physics-informed offline
reinforcement learning (RL) framework for energy efficiency optimization of DC
cooling systems. The proposed framework models the complex dynamical patterns
and physical dependencies inside a server room using a purposely designed graph
neural network architecture that is compliant with the fundamental
time-reversal symmetry. Because of its well-behaved and generalizable
state-action representations, the model enables sample-efficient and robust
latent space offline policy learning using limited real-world operational data.
Our framework has been successfully deployed and verified in a large-scale
production DC for closed-loop control of its air-cooling units (ACUs). We
conducted a total of 2000 hours of short- and long-term experiments in the
production DC environment. The results show that our method achieves 14~21%
energy savings in the DC cooling system, without any violation of the safety or
operational constraints. Our results have demonstrated the significant
potential of offline RL in solving a broad range of data-limited,
safety-critical real-world industrial control problems.
|