id | title | categories | abstract |
|---|---|---|---|
2501.02002 | HMM-LSTM Fusion Model for Economic Forecasting | cs.LG econ.EM stat.ME | This paper explores the application of Hidden Markov Models (HMM) and Long
Short-Term Memory (LSTM) neural networks for economic forecasting, focusing on
predicting CPI inflation rates. The study proposes a new approach that
integrates HMM-derived hidden states and means as additional features for LSTM
modeling, aiming to enhance the interpretability and predictive performance of
the models. The research begins with data collection and preprocessing,
followed by the implementation of the HMM to identify hidden states
representing distinct economic conditions. Subsequently, LSTM models are
trained using the original and augmented data sets, allowing for comparative
analysis and evaluation. The results demonstrate that incorporating HMM-derived
data improves the predictive accuracy of LSTM models, particularly in capturing
complex temporal patterns and mitigating the impact of volatile economic
conditions. Additionally, the paper discusses the implementation of Integrated
Gradients for model interpretability and provides insights into the economic
dynamics reflected in the forecasting outcomes.
|
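The feature-augmentation step described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the hidden-state labels are toy stand-ins for the output of a fitted HMM, and the function names are hypothetical.

```python
# Sketch of HMM-to-LSTM feature augmentation: each observation is extended
# with its HMM hidden-state label and that state's mean, so a downstream
# LSTM sees regime information alongside the raw series.
# The state labels below stand in for the output of a fitted HMM.

def state_means(series, states):
    """Mean observation per hidden state."""
    totals, counts = {}, {}
    for x, s in zip(series, states):
        totals[s] = totals.get(s, 0.0) + x
        counts[s] = counts.get(s, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

def augment(series, states):
    """Rows of [observation, state label, state mean] for LSTM input."""
    means = state_means(series, states)
    return [[x, s, means[s]] for x, s in zip(series, states)]

cpi = [2.1, 2.3, 6.8, 7.1, 2.0]   # toy inflation readings (illustrative)
regimes = [0, 0, 1, 1, 0]         # toy HMM state sequence (illustrative)
features = augment(cpi, regimes)
```

In practice the state sequence and means would come from a library such as hmmlearn; the augmentation itself stays the same.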
2501.02003 | SurfPatch: Enabling Patch Matching for Exploratory Stream Surface
Visualization | cs.GR cs.CV | Unlike their line-based counterparts, surface-based techniques have yet to be
thoroughly investigated in flow visualization due to their significant
placement, speed, perception, and evaluation challenges. This paper presents
SurfPatch, a novel framework supporting exploratory stream surface
visualization. To begin with, we translate the issue of surface placement to
surface selection and trace a large number of stream surfaces from a given flow
field dataset. Then, we introduce a three-stage process: vertex-level
classification, patch-level matching, and surface-level clustering that
hierarchically builds the connection between vertices and patches and between
patches and surfaces. This bottom-up approach enables fine-grained, multiscale
patch-level matching, contrasting sharply with the surface-level matching
offered by existing works, and provides previously unavailable flexibility during
querying. We design an intuitive visual interface for users to conveniently
visualize and analyze the underlying collection of stream surfaces in an
exploratory manner. SurfPatch is not limited to stream surfaces traced from
steady flow datasets. We demonstrate its effectiveness through experiments on
stream surfaces produced from steady and unsteady flows as well as isosurfaces
extracted from scalar fields. The code is available at
https://github.com/adlsn/SurfPatch.
|
2501.02004 | General Information Metrics for Improving AI Model Training Efficiency | cs.LG cs.AI cs.IT math.IT | To address the growing size of AI model training data and the lack of a
universal data selection methodology -- factors that significantly drive up
training costs -- this paper presents the General Information Metrics
Evaluation (GIME) method. GIME leverages general information metrics from
Objective Information Theory (OIT), including volume, delay, scope,
granularity, variety, duration, sampling rate, aggregation, coverage,
distortion, and mismatch, to optimize dataset selection for training purposes.
Comprehensive experiments conducted across diverse domains, such as CTR
Prediction, Civil Case Prediction, and Weather Forecasting, demonstrate that
GIME effectively preserves model performance while substantially reducing both
training time and costs. Additionally, applying GIME within the Judicial AI
Program led to a remarkable 39.56% reduction in total model training expenses,
underscoring its potential to support efficient and sustainable AI development.
|
2501.02006 | Multi-Task Semantic Communication With Graph Attention-Based Feature
Correlation Extraction | cs.LG cs.AI | Multi-task semantic communication can serve multiple learning tasks using a
shared encoder model. Existing models have overlooked the intricate
relationships between the features extracted during the encoding of the tasks.
This paper adds a new graph attention inter-block (GAI) module to the
encoder/transmitter of a multi-task semantic communication system, which
enriches the features for multiple tasks beyond what existing techniques offer
by embedding the intermediate outputs of the encoding in the features. The key idea
is that we interpret the outputs of the intermediate feature extraction blocks
of the encoder as the nodes of a graph to capture the correlations of the
intermediate features. Another important aspect is that we refine the node
representation using a graph attention mechanism to extract the correlations
and a multi-layer perceptron network to associate the node representations with
different tasks. Consequently, the intermediate features are weighted and
embedded into the features transmitted for executing multiple tasks at the
receiver. Experiments demonstrate that the proposed model surpasses the most
competitive and publicly available models by 11.4% on the CityScapes 2Task
dataset and outperforms the established state-of-the-art by 3.97% on the NYU V2
3Task dataset when the bandwidth ratio of the communication channel (i.e., the
compression level for transmission over the channel) is as constrained as 1/12.
|
2501.02007 | TART: Token-based Architecture Transformer for Neural Network
Performance Prediction | cs.LG cs.AI | In the realm of neural architecture design, achieving high performance is
largely reliant on the manual expertise of researchers. Despite the emergence
of Neural Architecture Search (NAS) as a promising technique for automating
this process, current NAS methods still require human input to expand the
search space and cannot generate new architectures. This paper explores the
potential of Transformers in comprehending neural architectures and their
performance, with the objective of establishing the foundation for utilizing
Transformers to generate novel networks. We propose the Token-based
Architecture Transformer (TART), which predicts neural network performance
without the need to train candidate networks. TART attains state-of-the-art
performance on the DeepNets-1M dataset for performance prediction tasks without
edge information, indicating the potential of Transformers to aid in
discovering novel and high-performing neural architectures.
|
2501.02008 | Integrated Strategy for Urban Traffic Optimization: Prediction, Adaptive
Signal Control, and Distributed Communication via Messaging | eess.SY cs.SY eess.SP | This work introduces an integrated approach to optimizing urban traffic by
combining predictive modeling of vehicle flow, adaptive traffic signal control,
and a modular integration architecture through distributed messaging. Using
real-time data from various sensors, the system anticipates traffic
fluctuations and dynamically adjusts signal phase durations to minimize delays
and improve traffic flow. This proactive adjustment, supported by algorithms
inspired by simulated annealing and reinforcement learning, also enhances
energy efficiency, reduces pollutant emissions, and responds effectively to
unexpected events (adverse weather, accidents, or temporary gatherings).
Preliminary simulations conducted in a realistic urban environment demonstrate
a significant reduction in average waiting times. Future developments include
incorporating data from connected vehicles, integrating new modes of transport,
and continuously refining predictive models to address the growing challenges
of urban mobility.
|
2501.02009 | Cross-model Transferability among Large Language Models on the Platonic
Representations of Concepts | cs.CL cs.AI | Understanding the inner workings of Large Language Models (LLMs) is a
critical research frontier. Prior research has shown that a single LLM's
concept representations can be captured as steering vectors (SVs), enabling the
control of LLM behavior (e.g., towards generating harmful content). Our work
takes a novel approach by exploring the intricate relationships between concept
representations across different LLMs, drawing an intriguing parallel to
Plato's Allegory of the Cave. In particular, we introduce a linear
transformation method to bridge these representations and present three key
findings: 1) Concept representations across different LLMs can be effectively
aligned using simple linear transformations, enabling efficient cross-model
transfer and behavioral control via SVs. 2) This linear transformation
generalizes across concepts, facilitating alignment and control of SVs
representing different concepts across LLMs. 3) A weak-to-strong
transferability exists between LLM concept representations, whereby SVs
extracted from smaller LLMs can effectively control the behavior of larger
LLMs.
|
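The first finding above — that concept representations align across LLMs via a simple linear transformation — can be sketched with ordinary least squares on paired steering vectors. This is a toy reconstruction under stated assumptions: the "steering vectors" are synthetic stand-ins, and the dimensions and variable names are hypothetical.

```python
import numpy as np

# Toy sketch of cross-model steering-vector alignment: fit a single linear
# map from a smaller model's SV space to a larger model's SV space using
# paired vectors, then transfer a held-out concept vector through it.
rng = np.random.default_rng(0)

d_small, d_large = 4, 6
W_true = rng.normal(size=(d_large, d_small))   # unknown ground-truth map

sv_small = rng.normal(size=(8, d_small))       # SVs from the smaller model
sv_large = sv_small @ W_true.T                 # matching SVs in the larger model

# Least-squares fit of M such that sv_large ~= sv_small @ M
M, *_ = np.linalg.lstsq(sv_small, sv_large, rcond=None)

# A held-out concept vector transfers through the learned map.
held_out = rng.normal(size=d_small)
transferred = held_out @ M
```

With real models, the paired vectors would be SVs for the same concepts extracted from each LLM's activations; the fitting step is unchanged.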
2501.02010 | Explainable Neural Networks with Guarantees: A Sparse Estimation
Approach | cs.LG | Balancing predictive power and interpretability has long been a challenging
research area, particularly in powerful yet complex models like neural
networks, where nonlinearity obstructs direct interpretation. This paper
introduces a novel approach to constructing an explainable neural network that
harmonizes predictiveness and explainability. Our model, termed SparXnet, is
designed as a linear combination of a sparse set of jointly learned features,
each derived from a different trainable function applied to a single
1-dimensional input feature. Leveraging the ability to learn arbitrarily
complex relationships, our neural network architecture enables automatic
selection of a sparse set of important features, with the final prediction
being a linear combination of rescaled versions of these features. We
demonstrate the ability to select significant features while maintaining
comparable predictive performance and direct interpretability through extensive
experiments on synthetic and real-world datasets. We also provide theoretical
analysis on the generalization bounds of our framework, which is favorably
linear in the number of selected features and only logarithmic in the number of
input features. We further lift any dependence of sample complexity on the
number of parameters or the architectural details under very mild conditions.
Our work paves the way for further research on sparse and explainable
neural networks with guarantees.
|
2501.02012 | Information Subtraction: Learning Representations for Conditional
Entropy | cs.LG | The representations of conditional entropy and conditional mutual information
are significant in explaining the unique effects among variables. While
previous studies based on conditional contrastive sampling have effectively
removed information regarding discrete sensitive variables, they have not yet
extended their scope to continuous cases. This paper introduces Information
Subtraction, a framework designed to generate representations that preserve
desired information while eliminating the undesired. We implement a
generative-based architecture that outputs these representations by
simultaneously maximizing an information term and minimizing another. With its
flexibility in disentangling information, we can iteratively apply Information
Subtraction to represent arbitrary information components between continuous
variables, thereby explaining the various relationships that exist between
them. Our results highlight the representations' ability to provide semantic
features of conditional entropy. By subtracting sensitive and domain-specific
information, our framework demonstrates effective performance in fair learning
and domain generalization. The code for this paper is available at
https://github.com/jh-liang/Information-Subtraction
|
2501.02014 | Machine Learning-Based Differential Diagnosis of Parkinson's Disease
Using Kinematic Feature Extraction and Selection | cs.LG cs.AI | Parkinson's disease (PD), the second most common neurodegenerative disorder,
is characterized by dopaminergic neuron loss and the accumulation of abnormal
alpha-synuclein. PD presents both motor and non-motor symptoms that progressively
impair daily functioning. The severity of these symptoms is typically assessed
using the MDS-UPDRS rating scale, which is subjective and dependent on the
physician's experience. Additionally, PD shares symptoms with other
neurodegenerative diseases, such as progressive supranuclear palsy (PSP) and
multiple system atrophy (MSA), complicating accurate diagnosis. To address
these diagnostic challenges, we propose a machine learning-based system for
differential diagnosis of PD, PSP, MSA, and healthy controls (HC). This system
utilizes a kinematic feature-based hierarchical feature extraction and
selection approach. Initially, 18 kinematic features are extracted, including
two newly proposed features: Thumb-to-index vector velocity and acceleration,
which provide insights into motor control patterns. In addition, 41 statistical
features are extracted from each kinematic feature, including several new ones
such as Average Absolute Change, Rhythm, Amplitude, Frequency,
Standard Deviation of Frequency, and Slope. Feature selection is performed
using One-way ANOVA to rank features, followed by Sequential Forward Floating
Selection (SFFS) to identify the most relevant ones, aiming to reduce the
computational complexity. The final feature set is used for classification,
achieving a classification accuracy of 66.67% for each dataset and 88.89% for
each patient, with particularly high performance for the MSA and HC groups
using the SVM algorithm. This system shows potential as a rapid and accurate
diagnostic tool in clinical practice, though further data collection and
refinement are needed to enhance its reliability.
|
2501.02015 | KANS: Knowledge Discovery Graph Attention Network for Soft Sensing in
Multivariate Industrial Processes | cs.LG cs.AI cs.SY eess.SP eess.SY | Soft sensing of hard-to-measure variables is often crucial in industrial
processes. Current practices rely heavily on conventional modeling techniques
that show success in improving accuracy. However, they overlook the non-linear
nature, dynamics characteristics, and non-Euclidean dependencies between
complex process variables. To tackle these challenges, we present a framework
known as a Knowledge discovery graph Attention Network for effective Soft
sensing (KANS). Unlike the existing deep learning soft sensor models, KANS can
discover the intrinsic correlations and irregular relationships between the
multivariate industrial processes without a predefined topology. First, an
unsupervised graph structure learning method is introduced, incorporating the
cosine similarity between different sensor embeddings to capture the
correlations between sensors. Next, we present a graph attention-based
representation learning method that processes the multivariate data in parallel
to enhance the model's learning of complex sensor nodes and edges. To fully explore
KANS, knowledge discovery analysis has also been conducted to demonstrate the
interpretability of the model. Experimental results demonstrate that KANS
significantly outperforms all the baselines and state-of-the-art methods in
soft sensing performance. Furthermore, the analysis shows that KANS can find
sensors closely related to different process variables without domain
knowledge, significantly improving soft sensing accuracy.
|
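The unsupervised graph-structure step described in the KANS abstract — connecting sensors whose embeddings are cosine-similar — can be sketched as below. This is illustrative, not the paper's implementation; the threshold, embeddings, and function names are assumptions.

```python
import math

# Sketch of cosine-similarity graph construction: add an undirected edge
# between two sensors when the cosine similarity of their embedding
# vectors exceeds a threshold.

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def build_edges(embeddings, threshold=0.9):
    """Undirected edges (i, j) between sufficiently similar sensors."""
    n = len(embeddings)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if cosine(embeddings[i], embeddings[j]) > threshold]

sensors = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]  # toy sensor embeddings
edges = build_edges(sensors)
```

In KANS the resulting adjacency would feed the graph attention layers; here only the topology-discovery step is shown.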
2501.02016 | ST-HCSS: Deep Spatio-Temporal Hypergraph Convolutional Neural Network
for Soft Sensing | cs.LG cs.AI eess.SP | Higher-order sensor networks are more accurate in characterizing the
nonlinear dynamics of sensory time-series data in modern industrial settings by
allowing multi-node connections beyond simple pairwise graph edges. In light of
this, we propose a deep spatio-temporal hypergraph convolutional neural network
for soft sensing (ST-HCSS). In particular, our proposed framework is able to
construct and leverage a higher-order graph (hypergraph) to model the complex
multi-interactions between sensor nodes in the absence of prior structural
knowledge. To capture rich spatio-temporal relationships underlying sensor
data, our proposed ST-HCSS incorporates stacked gated temporal and hypergraph
convolution layers to effectively aggregate and update hypergraph information
across time and nodes. Our results validate the superiority of ST-HCSS compared
to existing state-of-the-art soft sensors, and demonstrate that the learned
hypergraph feature representations align well with the sensor data
correlations. The code is available at https://github.com/htew0001/ST-HCSS.git
|
2501.02017 | Rephotography in the Digital Era: Mass Rephotography and re.photos, the
Web Portal for Rephotography | cs.CV | Since the beginning of rephotography in the middle of the 19th century,
techniques in registration, conservation, presentation, and sharing of
rephotographs have come a long way. Here, we will present existing digital
approaches to rephotography and discuss future approaches and requirements for
digital mass rephotography. We present re.photos, an existing web portal for
rephotography, featuring methods for collaborative rephotography, interactive
image registration, as well as retrieval, organization, and sharing of
rephotographs. For mass rephotography additional requirements must be met.
Batches of template images and rephotographs must be handled simultaneously,
image registration must be automated, and intuitive smartphone apps for
rephotography must be available. Long-term storage with persistent
identifiers, automatic or mass georeferencing, as well as gamification and
social media integration are further requirements we will discuss in this
paper.
|
2501.02018 | Safeguarding Large Language Models in Real-time with Tunable
Safety-Performance Trade-offs | cs.CL cs.AI cs.CR cs.LG | Large Language Models (LLMs) have been shown to be susceptible to jailbreak
attacks, or adversarial attacks used to elicit high-risk behavior from a
model. Jailbreaks have been exploited by cybercriminals and blackhat actors to
cause significant harm, highlighting the critical need to safeguard
widely-deployed models. Safeguarding approaches, which include fine-tuning
models or having LLMs "self-reflect", may lengthen the inference time of a
model, incur a computational penalty, reduce the semantic fluency of an output,
and restrict ``normal'' model behavior. Importantly, these Safety-Performance
Trade-offs (SPTs) remain an understudied area. In this work, we introduce a
novel safeguard, called SafeNudge, that combines Controlled Text Generation
with "nudging", or using text interventions to change the behavior of a model.
SafeNudge triggers during text-generation while a jailbreak attack is being
executed, and can reduce successful jailbreak attempts by 30% by guiding the
LLM towards safe responses. It adds minimal latency to inference and has a
negligible impact on the semantic fluency of outputs. Further, we allow for
tunable SPTs. SafeNudge is open-source and available through https://pypi.org/,
and is compatible with models loaded with the Hugging Face "transformers"
library.
|
2501.02019 | Benchmarking Constraint-Based Bayesian Structure Learning Algorithms:
Role of Network Topology | cs.LG cs.AI q-bio.MN | Modeling the associations between real world entities from their multivariate
cross-sectional profiles can provide cues into the concerted working of these
entities as a system. Several techniques have been proposed for deciphering
these associations including constraint-based Bayesian structure learning (BSL)
algorithms that model them as directed acyclic graphs (DAGs). Benchmarking these
algorithms has typically focused on assessing the variation in performance
measures such as sensitivity as a function of the dimensionality represented by
the number of nodes in the DAG, and sample size. The present study elucidates
the importance of network topology in benchmarking exercises. More
specifically, it investigates variations in sensitivity across distinct network
topologies while constraining the nodes, edges, and sample-size to be
identical, eliminating these as potential confounders. Sensitivity of three
popular constraint-based BSL algorithms (Peter-Clarke, Grow-Shrink, Incremental
Association Markov Blanket) in learning the network structure from multivariate
cross-sectional profiles sampled from network models with sub-linear, linear,
and super-linear DAG topologies generated using preferential attachment is
investigated. Results across linear and nonlinear models revealed statistically
significant $(\alpha=0.05)$ decrease in sensitivity estimates from sub-linear
to super-linear topology constitutively across the three algorithms. These
results are demonstrated on networks with nodes $(N_{nodes}=48,64)$, noise
strengths $(\sigma=3,6)$ and sample size $(N = 2^{10})$. The findings
elucidate the importance of accommodating the network topology in
constraint-based BSL benchmarking exercises.
|
2501.02020 | Enhancing Uncertainty Modeling with Semantic Graph for Hallucination
Detection | cs.CL cs.AI | Large Language Models (LLMs) are prone to hallucination with non-factual or
unfaithful statements, which undermines the applications in real-world
scenarios. Recent research focuses on uncertainty-based hallucination
detection, which utilizes the output probability of LLMs for uncertainty
calculation and does not rely on external knowledge or frequent sampling from
LLMs. However, most approaches merely consider the uncertainty of each
independent token, while the intricate semantic relations among tokens and
sentences are not well studied, which limits the detection of hallucination
that spans over multiple tokens and sentences in the passage. In this paper, we
propose a method to enhance uncertainty modeling with semantic graph for
hallucination detection. Specifically, we first construct a semantic graph that
well captures the relations among entity tokens and sentences. Then, we
incorporate the relations between two entities for uncertainty propagation to
enhance sentence-level hallucination detection. Given that hallucination occurs
due to the conflict between sentences, we further present a graph-based
uncertainty calibration method that integrates the contradiction probability of
the sentence with its neighbors in the semantic graph for uncertainty
calculation. Extensive experiments on two datasets show the great advantages of
our proposed approach. In particular, we obtain a substantial improvement of
19.78% in passage-level hallucination detection.
|
2501.02021 | Weakly Supervised Learning on Large Graphs | cs.LG cs.AI | Graph classification plays a pivotal role in various domains, including
pathology. In this domain, images can be represented as graphs, where nodes
might represent individual nuclei,
and edges capture the spatial or functional relationships between them. Often,
the overall label of the graph, such as a cancer type or disease state, is
determined by patterns within smaller, localized regions of the image. This
work introduces a weakly-supervised graph classification framework leveraging
two subgraph extraction techniques: (1) Sliding-window approach (2) BFS-based
approach. Subgraphs are processed using a Graph Attention Network (GAT), which
employs attention mechanisms to identify the most informative subgraphs for
classification. Weak supervision is achieved by propagating graph-level labels
to subgraphs, eliminating the need for detailed subgraph annotations.
|
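The BFS-based subgraph extraction named in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the adjacency format, depth parameter, and function name are assumptions.

```python
from collections import deque

# Sketch of BFS-based subgraph extraction: collect all nodes within
# `depth` hops of a seed node and keep the induced edges, yielding a
# subgraph that could then be scored by a GAT under weak supervision.

def bfs_subgraph(adj, seed, depth):
    """Nodes within `depth` hops of `seed`, plus the induced edges."""
    seen = {seed: 0}              # node -> hop distance from seed
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        if seen[u] == depth:      # do not expand beyond the hop limit
            continue
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    nodes = set(seen)
    edges = {(u, v) for u in nodes for v in adj[u] if v in nodes and u < v}
    return nodes, edges

graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # toy path graph 0-1-2-3
nodes, edges = bfs_subgraph(graph, seed=0, depth=2)
```

Under the weak-supervision scheme described above, each extracted subgraph would simply inherit the graph-level label.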
2501.02024 | Model Checking in Medical Imaging for Tumor Detection and Segmentation | cs.CV cs.AI cs.LG | Recent advancements in model checking have demonstrated significant potential
across diverse applications, particularly in signal and image analysis. Medical
imaging stands out as a critical domain where model checking can be effectively
applied to design and evaluate robust frameworks. These frameworks facilitate
automatic and semi-automatic delineation of regions of interest within images,
aiding in accurate segmentation. This paper provides a comprehensive analysis
of recent works leveraging spatial logic to develop operators and tools for
identifying regions of interest, including tumorous and non-tumorous areas.
Additionally, we examine the challenges inherent to spatial model-checking
techniques, such as variability in ground truth data and the need for
streamlined procedures suitable for routine clinical practice.
|
2501.02025 | RealDiffFusionNet: Neural Controlled Differential Equation Informed
Multi-Head Attention Fusion Networks for Disease Progression Modeling Using
Real-World Data | cs.LG cs.CV q-bio.QM | This paper presents a novel deep learning-based approach named
RealDiffFusionNet incorporating Neural Controlled Differential Equations
(Neural CDE) - time series models that are robust in handling irregularly
sampled data - and multi-head attention to align relevant multimodal context
(image data, time invariant data, etc.) at each time point. Long short-term
memory (LSTM) models were also used as a baseline. Two different datasets were
used: the first from the Open-Source Imaging Consortium (OSIC), containing
structured time series data of demographics and lung function with a baseline
CT scan of the lungs, and the second from the Alzheimer's Disease Neuroimaging
Initiative (ADNI) containing a series of MRI scans along with demographics,
physical examinations, and cognitive assessment data. An ablation study was
performed to understand the role of CDEs, multimodal data, attention fusion,
and interpolation strategies on model performance. When the baseline models
were evaluated, the use of multimodal data resulted in an improvement in Neural
CDE performance, with a lower test RMSE. Additionally, the performance of
multimodal Neural CDE was also superior to multimodal LSTM. In the
attention-based architectures, fusion through concatenation and rectilinear
interpolation were found to improve model performance. The proposed
RealDiffFusionNet was found to be superior to all models (test RMSE of 0.2570). For
the ADNI dataset, between the Neural-CDE and LSTM models trained only on the
structured data, the test RMSEs were comparable (0.471 for LSTM vs. 0.4581 for
Neural-CDE). Furthermore, the addition of image features from patients' MRI
series resulted in an improvement in performance, with a lower test RMSE
(0.4372 with multimodal vs 0.4581 with structured data). RealDiffFusionNet has
shown promise in utilizing CDEs and multimodal data to accurately predict
disease progression.
|
2501.02026 | Recursive Decomposition of Logical Thoughts: Framework for Superior
Reasoning and Knowledge Propagation in Large Language Models | cs.CL cs.AI cs.LG cs.LO | Enhancing the reasoning capabilities of Large Language Models remains a
critical challenge in artificial intelligence. We introduce RDoLT, Recursive
Decomposition of Logical Thought prompting, a novel framework that
significantly boosts LLM reasoning performance. RDoLT is built on three key
innovations: (1) recursively breaking down complex reasoning tasks into
sub-tasks of progressive complexity; (2) employing an advanced selection and
scoring mechanism to identify the most promising reasoning thoughts; and (3)
integrating a knowledge propagation module that mimics human learning by
keeping track of strong and weak thoughts for information propagation. Our
approach was evaluated across multiple benchmarks, including GSM8K, SVAMP,
MultiArith, LastLetterConcatenation, and Gaokao2023 Math. The results
demonstrate that RDoLT consistently outperforms existing state-of-the-art
techniques, achieving a 90.98 percent accuracy on GSM8K with ChatGPT-4,
surpassing state-of-the-art techniques by 6.28 percent. Similar improvements
were observed on other benchmarks, with accuracy gains ranging from 5.5 percent
to 6.75 percent. These findings highlight RDoLT's potential to advance prompt
engineering, offering a more effective and generalizable approach to complex
reasoning tasks.
|
2501.02029 | Spot Risks Before Speaking! Unraveling Safety Attention Heads in Large
Vision-Language Models | cs.LG cs.AI cs.CR cs.CV | With the integration of an additional modality, large vision-language models
(LVLMs) exhibit greater vulnerability to safety risks (e.g., jailbreaking)
compared to their language-only predecessors. Although recent studies have
devoted considerable effort to the post-hoc alignment of LVLMs, the inner
safety mechanisms remain largely unexplored. In this paper, we discover that
internal activations of LVLMs during the first token generation can effectively
identify malicious prompts across different attacks. This inherent safety
perception is governed by sparse attention heads, which we term ``safety
heads." Further analysis reveals that these heads act as specialized shields
against malicious prompts; ablating them leads to higher attack success rates,
while the model's utility remains unaffected. By locating these safety heads
and concatenating their activations, we construct a straightforward but
powerful malicious prompt detector that integrates seamlessly into the
generation process with minimal extra inference overhead. Despite its simple
structure of a logistic regression model, the detector surprisingly exhibits
strong zero-shot generalization capabilities. Experiments across various
prompt-based attacks confirm the effectiveness of leveraging safety heads to
protect LVLMs. Code is available at \url{https://github.com/Ziwei-Zheng/SAHs}.
|
2501.02030 | Detecting Music Performance Errors with Transformers | cs.SD cs.AI eess.AS | Beginner musicians often struggle to identify specific errors in their
performances, such as playing incorrect notes or rhythms. There are two
limitations in existing tools for music error detection: (1) Existing
approaches rely on automatic alignment; therefore, they are prone to errors
caused by small deviations between alignment targets; (2) There is a lack of
sufficient data to train music error detection models, resulting in
over-reliance on heuristics. To address (1), we propose a novel transformer
model, Polytune, that takes audio inputs and outputs annotated music scores.
This model can be trained end-to-end to implicitly align and compare
performance audio with music scores through latent space representations. To
address (2), we present a novel data generation technique capable of creating
large-scale synthetic music error datasets. Our approach achieves a 64.1%
average Error Detection F1 score, improving upon prior work by 40 percentage
points across 14 instruments. Additionally, compared with existing
transcription methods repurposed for music error detection, our model can
handle multiple instruments. Our source code and datasets are available at
https://github.com/ben2002chou/Polytune.
|
2501.02031 | CarbonChat: Large Language Model-Based Corporate Carbon Emission
Analysis and Climate Knowledge Q&A System | cs.CL cs.AI | As the impact of global climate change intensifies, corporate carbon
emissions have become a focal point of global attention. In response to issues
such as the lag in climate change knowledge updates within large language
models, the lack of specialization and accuracy in traditional augmented
generation architectures for complex problems, and the high cost and time
consumption of sustainability report analysis, this paper proposes CarbonChat:
Large Language Model-based corporate carbon emission analysis and climate
knowledge Q&A system, aimed at achieving precise carbon emission analysis and
policy understanding.First, a diversified index module construction method is
proposed to handle the segmentation of rule-based and long-text documents, as
well as the extraction of structured data, thereby optimizing the parsing of
key information. Second, an enhanced self-prompt retrieval-augmented generation
architecture is designed, integrating intent recognition, structured reasoning
chains, hybrid retrieval, and Text2SQL, improving the efficiency of semantic
understanding and query conversion. Next, based on the greenhouse gas accounting
framework, 14 dimensions are established for carbon emission analysis, enabling
report summarization, relevance evaluation, and customized responses. Finally,
through a multi-layer chunking mechanism, timestamps, and hallucination
detection features, the accuracy and verifiability of the analysis results are
ensured, reducing hallucination rates and enhancing the precision of the
responses.
|
2501.02032 | Dynamic Feature Fusion: Combining Global Graph Structures and Local
Semantics for Blockchain Fraud Detection | cs.CR cs.AI cs.SE | The advent of blockchain technology has facilitated the widespread adoption
of smart contracts in the financial sector. However, current fraud detection
methodologies exhibit limitations in capturing both global structural patterns
within transaction networks and local semantic relationships embedded in
transaction data. Most existing models focus on either structural information
or semantic features individually, leading to suboptimal performance in
detecting complex fraud patterns. In this paper, we propose a dynamic feature
fusion model that combines graph-based representation learning and semantic
feature extraction for blockchain fraud detection. Specifically, we construct
global graph representations to model account relationships and extract local
contextual features from transaction data. A dynamic multimodal fusion
mechanism is introduced to adaptively integrate these features, enabling the
model to capture both structural and semantic fraud patterns effectively. We
further develop a comprehensive data processing pipeline, including graph
construction, temporal feature enhancement, and text preprocessing.
Experimental results on large-scale real-world blockchain datasets demonstrate
that our method outperforms existing benchmarks across accuracy, F1 score, and
recall metrics. This work highlights the importance of integrating structural
relationships and semantic similarities for robust fraud detection and offers a
scalable solution for securing blockchain systems.
|
2501.02035 | 3D Cloud reconstruction through geospatially-aware Masked Autoencoders | cs.CV cs.AI | Clouds play a key role in Earth's radiation balance with complex effects that
introduce large uncertainties into climate models. Real-time 3D cloud data is
essential for improving climate predictions. This study leverages geostationary
imagery from MSG/SEVIRI and radar reflectivity measurements of cloud profiles
from CloudSat/CPR to reconstruct 3D cloud structures. We first apply
self-supervised learning (SSL) methods, Masked Autoencoders (MAE) and
geospatially-aware SatMAE, on unlabelled MSG images, and then fine-tune our
models on matched image-profile pairs. Our approach outperforms
state-of-the-art methods like U-Nets, and our geospatial encoding further
improves prediction results, demonstrating the potential of SSL for cloud
reconstruction.
|
2501.02036 | Deep Clustering via Community Detection | cs.LG cs.AI cs.SI | Deep clustering is an essential task in modern artificial intelligence,
aiming to partition a set of data samples into a given number of homogeneous
groups (i.e., clusters). Even though many Deep Neural Network (DNN) backbones
and clustering strategies have been proposed for the task, achieving
increasingly improved performance, deep clustering remains very challenging due
to the lack of accurately labeled samples. In this paper, we propose a novel
approach of deep clustering via community detection. It initializes clustering
by detecting many communities, and then gradually expands clusters by community
merging. Compared with the existing clustering strategies, community detection
factors in the new perspective of cluster network analysis. As a result, it has
the inherent benefit of high pseudo-label purity, which is critical to the
performance of self-supervision. We have validated the efficacy of the proposed
approach on benchmark image datasets. Our extensive experiments have shown that
it can effectively improve the SOTA performance. Our ablation study also
demonstrates that the new network perspective can effectively improve community
pseudo-label purity, resulting in improved clustering performance.
|
2501.02038 | Architecture for Trajectory-Based Fishing Ship Classification with AIS
Data | cs.LG cs.AI | This paper proposes a data preparation process for managing real-world
kinematic data and detecting fishing vessels. The solution is a binary
classification that classifies ship trajectories into either fishing or
non-fishing ships. The data used are characterized by the typical problems
found in classic data mining applications using real-world data, such as noise
and inconsistencies. The two classes are also clearly unbalanced in the data, a
problem which is addressed using algorithms that resample the instances. For
classification, a series of features are extracted from spatiotemporal data
that represent the trajectories of the ships, available from sequences of
Automatic Identification System (AIS) reports. These features are proposed for
the modelling of ship behavior but, because they do not contain context-related
information, the classification can be applied in other scenarios.
Experimentation shows that the proposed data preparation process is useful for
the presented classification problem. In addition, positive results are
obtained using minimal information.
|
2501.02039 | An Investigation into Value Misalignment in LLM-Generated Texts for
Cultural Heritage | cs.CL cs.AI | As Large Language Models (LLMs) become increasingly prevalent in tasks
related to cultural heritage, such as generating descriptions of historical
monuments, translating ancient texts, preserving oral traditions, and creating
educational content, their ability to produce accurate and culturally aligned
texts is being increasingly relied upon by users and researchers. However,
cultural value misalignments may exist in generated texts, such as the
misrepresentation of historical facts, the erosion of cultural identity, and
the oversimplification of complex cultural narratives, which may lead to severe
consequences. Therefore, investigating value misalignment in the context of LLM
for cultural heritage is crucial for mitigating these risks, yet there has been
a significant lack of systematic and comprehensive study and investigation in
this area. To fill this gap, we systematically assess the reliability of LLMs
in generating culturally aligned texts for cultural heritage-related tasks. We
conduct a comprehensive evaluation by compiling an extensive set of 1066 query
tasks covering 5 widely recognized categories with 17 aspects within the
knowledge framework of cultural heritage across 5 open-source LLMs, and examine
both the type and rate of cultural value misalignments in the generated texts.
Using both automated and manual approaches, we effectively detect and analyze
the cultural value misalignments in LLM-generated texts. Our findings are
concerning: over 65% of the generated texts exhibit notable cultural
misalignments, with certain tasks demonstrating almost complete misalignment
with key cultural values. Beyond these findings, this paper introduces a
benchmark dataset and a comprehensive evaluation workflow that can serve as a
valuable resource for future research aimed at enhancing the cultural
sensitivity and reliability of LLMs.
|
2501.02040 | A Separable Self-attention Inspired by the State Space Model for
Computer Vision | cs.CV cs.AI | Mamba is an efficient State Space Model (SSM) with linear computational
complexity. Although SSMs are not suitable for handling non-causal data, Vision
Mamba (ViM) methods still demonstrate good performance in tasks such as image
classification and object detection. Recent studies have shown that there is a
rich theoretical connection between state space models and attention variants.
We propose a novel separable self-attention method that, for the first time,
introduces some of Mamba's excellent design concepts into separable
self-attention. To ensure a fair comparison with ViMs, we introduce VMINet, a
simple yet powerful prototype architecture, constructed solely by stacking our
novel attention modules with the most basic down-sampling layers. Notably,
VMINet differs significantly from the conventional Transformer architecture.
Our experiments demonstrate that VMINet has achieved competitive results on
image classification and high-resolution dense prediction tasks. Code is
available at: \url{https://github.com/yws-wxs/VMINet}.
|
2501.02041 | MRG: A Multi-Robot Manufacturing Digital Scene Generation Method Using
Multi-Instance Point Cloud Registration | cs.CV cs.AI | A high-fidelity digital simulation environment is crucial for accurately
replicating physical operational processes. However, inconsistencies between
simulation and physical environments result in low confidence in simulation
outcomes, limiting their effectiveness in guiding real-world production. Unlike
the traditional step-by-step point cloud "segmentation-registration" generation
method, this paper introduces, for the first time, a novel Multi-Robot
Manufacturing Digital Scene Generation (MRG) method that leverages
multi-instance point cloud registration, specifically within manufacturing
scenes. Tailored to the characteristics of industrial robots and manufacturing
settings, an instance-focused transformer module is developed to delineate
instance boundaries and capture correlations between local regions.
Additionally, a hypothesis generation module is proposed to extract target
instances while preserving key features. Finally, an efficient screening and
optimization algorithm is designed to refine the final registration results.
Experimental evaluations on the Scan2CAD and Welding-Station datasets
demonstrate that: (1) the proposed method outperforms existing multi-instance
point cloud registration techniques; (2) on the Scan2CAD dataset, MR and MP
improve over state-of-the-art methods by 12.15% and 17.79%,
respectively; and (3) on the Welding-Station dataset, MR and MP are enhanced by
16.95% and 24.15%, respectively. This work marks the first application of
multi-instance point cloud registration in manufacturing scenes, significantly
advancing the precision and reliability of digital simulation environments for
industrial applications.
|
2501.02042 | Towards Robust and Accurate Stability Estimation of Local Surrogate
Models in Text-based Explainable AI | cs.LG cs.CR | Recent work has investigated the concept of adversarial attacks on
explainable AI (XAI) in the NLP domain with a focus on examining the
vulnerability of local surrogate methods such as Lime to adversarial
perturbations or small changes on the input of a machine learning (ML) model.
In such attacks, the generated explanation is manipulated while the meaning and
structure of the original input remain similar under the ML model. Such attacks
are especially alarming when XAI is used as a basis for decision making (e.g.,
prescribing drugs based on AI medical predictors) or for legal action (e.g.,
legal dispute involving AI software). Although weaknesses across many XAI
methods have been shown to exist, the reasons behind them remain little
explored. Central to this XAI manipulation is the similarity measure used to
calculate how one explanation differs from another. A poor choice of similarity
measure can lead to erroneous conclusions about the stability or adversarial
robustness of an XAI method. Therefore, this work investigates a variety of
similarity measures designed for text-based ranked lists referenced in related
work to determine their comparative suitability for use. We find that many
measures are overly sensitive, resulting in erroneous estimates of stability.
We then propose a weighting scheme for text-based data that incorporates the
synonymity between the features within an explanation, providing more accurate
estimates of the actual weakness of XAI methods to adversarial examples.
|
2501.02043 | Modeling COVID-19 spread in the USA using metapopulation SIR models
coupled with graph convolutional neural networks | stat.ML cs.LG math.DS q-bio.PE | Graph convolutional neural networks (GCNs) have shown tremendous promise in
addressing data-intensive challenges in recent years. In particular, some
attempts have been made to improve predictions of
Susceptible-Infected-Recovered (SIR) models by incorporating human mobility
between metapopulations and using graph approaches to estimate corresponding
hyperparameters. Recently, researchers have found that a hybrid GCN-SIR
approach outperformed existing methodologies when used on the data collected on
a precinct level in Japan. In our work, we extend this approach to data
collected from the continental US, adjusting for the differing mobility
patterns and varying policy responses. We also develop the strategy for
real-time continuous estimation of the reproduction number and study the
accuracy of model predictions for the overall population as well as individual
states. Strengths and limitations of the GCN-SIR approach are discussed as a
potential candidate for modeling disease dynamics.
|
2501.02044 | Advancing Pancreatic Cancer Prediction with a Next Visit Token
Prediction Head on top of Med-BERT | cs.CL cs.AI | Background: Recently, numerous foundation models pretrained on extensive data
have demonstrated efficacy in disease prediction using Electronic Health
Records (EHRs). However, there remain some unanswered questions on how best to
utilize such models, especially with very small fine-tuning cohorts. Methods: We
utilized Med-BERT, an EHR-specific foundation model, and reformulated the
disease binary prediction task into a token prediction task and a next visit
mask token prediction task to align with Med-BERT's pretraining task format in
order to improve the accuracy of pancreatic cancer (PaCa) prediction in both
few-shot and fully supervised settings. Results: The reformulation of the task
into a token prediction task, referred to as Med-BERT-Sum, demonstrates
slightly superior performance in both few-shot scenarios and larger data
samples. Furthermore, reformulating the prediction task as a Next Visit Mask
Token Prediction task (Med-BERT-Mask) significantly outperforms the
conventional Binary Classification (BC) prediction task (Med-BERT-BC) by 3% to
7% in few-shot scenarios with data sizes ranging from 10 to 500 samples. These
findings highlight that aligning the downstream task with Med-BERT's
pretraining objectives substantially enhances the model's predictive
capabilities, thereby improving its effectiveness in predicting both rare and
common diseases. Conclusion: Reformatting disease prediction tasks to align
with the pretraining of foundation models enhances prediction accuracy, leading
to earlier detection and timely intervention. This approach improves treatment
effectiveness, survival rates, and overall patient outcomes for PaCa and
potentially other cancers.
|
2501.02045 | METAGENE-1: Metagenomic Foundation Model for Pandemic Monitoring | q-bio.GN cs.AI cs.CL cs.LG | We pretrain METAGENE-1, a 7-billion-parameter autoregressive transformer
model, which we refer to as a metagenomic foundation model, on a novel corpus
of diverse metagenomic DNA and RNA sequences comprising over 1.5 trillion base
pairs. This dataset is sourced from a large collection of human wastewater
samples, processed and sequenced using deep metagenomic (next-generation)
sequencing methods. Unlike genomic models that focus on individual genomes or
curated sets of specific species, the aim of METAGENE-1 is to capture the full
distribution of genomic information present within this wastewater, to aid in
tasks relevant to pandemic monitoring and pathogen detection. We carry out
byte-pair encoding (BPE) tokenization on our dataset, tailored for metagenomic
sequences, and then pretrain our model. In this paper, we first detail the
pretraining dataset, tokenization strategy, and model architecture,
highlighting the considerations and design choices that enable the effective
modeling of metagenomic data. We then show results of pretraining this model on
our metagenomic dataset, providing details about our losses, system metrics,
and training stability over the course of pretraining. Finally, we demonstrate
the performance of METAGENE-1, which achieves state-of-the-art results on a set
of genomic benchmarks and new evaluations focused on human-pathogen detection
and genomic sequence embedding, showcasing its potential for public health
applications in pandemic monitoring, biosurveillance, and early detection of
emerging health threats.
|
2501.02048 | DreamMask: Boosting Open-vocabulary Panoptic Segmentation with Synthetic
Data | cs.CV | Open-vocabulary panoptic segmentation has received significant attention due
to its applicability in the real world. Despite claims of robust
generalization, we find that the advancements of previous works are attributed
mainly to trained categories, exposing a lack of generalization to novel
classes. In this paper, we explore boosting existing models from a data-centric
perspective. We propose DreamMask, which systematically explores how to
generate training data in the open-vocabulary setting, and how to train the
model with both real and synthetic data. For the first part, we propose an
automatic data generation pipeline with off-the-shelf models. We propose
crucial designs for vocabulary expansion, layout arrangement, data filtering,
etc. Equipped with these techniques, our generated data could significantly
outperform the manually collected web data. To train the model with generated
data, a synthetic-real alignment loss is designed to bridge the representation
gap, bringing noticeable improvements across multiple benchmarks. In general,
DreamMask significantly simplifies the collection of large-scale training data,
serving as a plug-and-play enhancement for existing methods. For instance, when
trained on COCO and tested on ADE20K, the model equipped with DreamMask
outperforms the previous state-of-the-art by a substantial margin of 2.1% mIoU.
|
2501.02059 | Active Learning Enables Extrapolation in Molecular Generative Models | cs.LG cond-mat.mtrl-sci physics.chem-ph | Although generative models hold promise for discovering molecules with
optimized desired properties, they often fail to suggest synthesizable
molecules that improve upon the known molecules seen in training. We find that
a key limitation is not in the molecule generation process itself, but in the
poor generalization capabilities of molecular property predictors. We tackle
this challenge by creating an active-learning, closed-loop molecule generation
pipeline, whereby molecular generative models are iteratively refined on
feedback from quantum chemical simulations to improve generalization to new
chemical space. Compared against other generative model approaches, only our
active learning approach generates molecules with properties that extrapolate
beyond the training data (reaching up to 0.44 standard deviations beyond the
training data range) and out-of-distribution molecule classification accuracy
is improved by 79%. By conditioning molecular generation on thermodynamic
stability data from the active-learning loop, the proportion of stable
molecules generated is 3.5x higher than the next-best model.
|
2501.02063 | AGGA: A Dataset of Academic Guidelines for Generative AI and Large
Language Models | cs.CL cs.CY | This study introduces AGGA, a dataset comprising 80 academic guidelines for
the use of Generative AIs (GAIs) and Large Language Models (LLMs) in academic
settings, meticulously collected from official university websites. The dataset
contains 188,674 words and serves as a valuable resource for natural language
processing tasks commonly applied in requirements engineering, such as model
synthesis, abstraction identification, and document structure assessment.
Additionally, AGGA can be further annotated to function as a benchmark for
various tasks, including ambiguity detection, requirements categorization, and
the identification of equivalent requirements. Our methodologically rigorous
approach ensured a thorough examination, with a selection of universities that
represent a diverse range of global institutions, including top-ranked
universities across six continents. The dataset captures perspectives from a
variety of academic fields, including humanities, technology, and both public
and private institutions, offering a broad spectrum of insights into the
integration of GAIs and LLMs in academia.
|
2501.02064 | ArtCrafter: Text-Image Aligning Style Transfer via Embedding Reframing | cs.CV cs.AI | Recent years have witnessed significant advancements in text-guided style
transfer, primarily attributed to innovations in diffusion models. These models
excel in conditional guidance, utilizing text or images to direct the sampling
process. However, despite their capabilities, direct conditional guidance
approaches often face challenges in balancing the expressiveness of textual
semantics with the diversity of output results while capturing stylistic
features. To address these challenges, we introduce ArtCrafter, a novel
framework for text-to-image style transfer. Specifically, we introduce an
attention-based style extraction module, meticulously engineered to capture the
subtle stylistic elements within an image. This module features a multi-layer
architecture that leverages the capabilities of perceiver attention mechanisms
to integrate fine-grained information. Additionally, we present a novel
text-image aligning augmentation component that adeptly balances control over
both modalities, enabling the model to efficiently map image and text
embeddings into a shared feature space. We achieve this through attention
operations that enable smooth information flow between modalities. Lastly, we
incorporate an explicit modulation that seamlessly blends multimodal enhanced
embeddings with original embeddings through an embedding reframing design,
empowering the model to generate diverse outputs. Extensive experiments
demonstrate that ArtCrafter yields impressive results in visual stylization,
exhibiting exceptional levels of stylistic intensity, controllability, and
diversity.
|
2501.02066 | RadHop-Net: A Lightweight Radiomics-to-Error Regression for False
Positive Reduction In MRI Prostate Cancer Detection | cs.CV eess.IV | Clinically significant prostate cancer (csPCa) is a leading cause of cancer
death in men, yet it has a high survival rate if diagnosed early. Bi-parametric
MRI (bpMRI) reading has become a prominent screening test for csPCa. However,
this process has a high false positive (FP) rate, incurring higher diagnostic
costs and patient discomfort. This paper introduces RadHop-Net, a novel and
lightweight CNN for FP reduction. The pipeline consists of two stages: Stage 1
employs data-driven radiomics to extract candidate ROIs; Stage 2 then
expands the receptive field about each ROI using RadHop-Net to compensate for
the predicted error from Stage 1. Moreover, a novel loss function for
regression problems is introduced to balance the influence between FPs and true
positives (TPs). RadHop-Net is trained in a radiomics-to-error manner, thus
decoupling from the common voxel-to-label approach. The proposed Stage 2
improves the average precision (AP) in lesion detection from 0.407 to 0.468 in
the publicly available PI-CAI dataset, while maintaining a significantly smaller
model size than the state-of-the-art.
|
2501.02068 | The interplay between domain specialization and model size: a case study
in the legal domain | cs.CL cs.AI | Scaling laws for language models so far focused on finding the
compute-optimal model size and token count for training from scratch. However,
achieving this optimal balance requires significant compute resources due to
the extensive data demands when training models from randomly-initialized
weights. Continual pre-training offers a cost-effective alternative, leveraging
the compute investment from pre-trained models to incorporate new knowledge
without requiring extensive new data. Recent findings suggest that data quality
influences constants in scaling laws, thereby altering the optimal
parameter-token allocation ratio. Building on this insight, we investigate the
interplay between domain specialization and model size during continual
pre-training under compute-constrained scenarios. Our goal is to identify a
compute-efficient training regime for this scenario and, potentially, detect
patterns in this interplay that can be generalized across different model sizes
and domains. To compare general and specialized training, we filtered a
web-based dataset to extract legal domain data. We pre-trained models with
1.5B, 3B, 7B and 14B parameters on both the unfiltered and filtered datasets,
then evaluated their performance on legal exams. Results show that as model
size increases, the compute-effectiveness gap between specialized and general
models widens.
|
2501.02069 | Counterfactual Explanation for Auto-Encoder Based Time-Series Anomaly
Detection | cs.LG | The complexity of modern electro-mechanical systems requires the development
of sophisticated diagnostic methods like anomaly detection capable of detecting
deviations. Conventional anomaly detection approaches like signal processing
and statistical modelling often struggle to effectively handle the intricacies
of complex systems, particularly when dealing with multi-variate signals. In
contrast, neural network-based anomaly detection methods, especially
Auto-Encoders, have emerged as a compelling alternative, demonstrating
remarkable performance. However, Auto-Encoders exhibit inherent opaqueness in
their decision-making processes, hindering their practical implementation at
scale. Addressing this opacity is essential for enhancing the interpretability
and trustworthiness of anomaly detection models. In this work, we address this
challenge by employing a feature selector to select features and counterfactual
explanations to give a context to the model output. We tested this approach on
the SKAB benchmark dataset and an industrial time-series dataset. The
gradient-based counterfactual explanation approach was evaluated via validity, sparsity
and distance measures. Our experimental findings illustrate that our proposed
counterfactual approach can offer meaningful and valuable insights into the
model decision-making process, by explaining fewer signals compared to
conventional approaches. These insights enhance the trustworthiness and
interpretability of anomaly detection models.
|
2501.02071 | Laws of thermodynamics for exponential families | cond-mat.stat-mech cs.LG math.ST stat.TH | We develop the laws of thermodynamics in terms of general exponential
families. By casting learning (log-loss minimization) problems in max-entropy
and statistical mechanics terms, we translate thermodynamics results to
learning scenarios. We extend the well-known way in which exponential families
characterize thermodynamic and learning equilibria. Basic ideas of work and
heat, and advanced concepts of thermodynamic cycles and equipartition of
energy, find exact and useful counterparts in AI / statistics terms. These
ideas have broad implications for quantifying and addressing distribution
shift.
|
2501.02077 | Chance Constrained PDE-Constrained Optimal Design Strategies Under
High-Dimensional Uncertainty | cs.CE | This study presents an advanced computational framework for the optimal
design of thermal insulation components in buildings, utilizing silica aerogel
porous materials. The framework aims to achieve superior thermal insulation
while maintaining structural integrity of the component under stress
concentrations. A multiphase continuum model is employed to simulate the
thermomechanical behavior of the insulation system, governed by a set of
coupled partial differential equations (PDEs). The design process explicitly
accounts for spatially correlated uncertainties in the design parameters,
particularly the spatially varying aerogel porosity, resulting in a
high-dimensional, PDE-constrained optimization under uncertainty. The
optimization problem is formulated using a risk-averse cost functional to
balance insulation performance with uncertainty mitigation, incorporating
statistical moments of the design objective. Additionally, chance constraints
are imposed to limit the probability of stress exceeding critical thresholds.
To address the challenges arising from high-dimensional parameters, the
optimization leverages a second-order Taylor expansion of both the design
objective and the chance constraint functions, combined with a low-rank
approximation of the Hessian matrix for efficient evaluation of the generalized
eigenvalue problem. This approach supports scalable computations, either
directly or as a variance-reduction control variate for Monte Carlo
estimations. Combined with a gradient-based optimization approach, we achieve a
scalable solution algorithm with dimension-independent computational costs in
terms of number of PDE solved. Two- and three-dimensional numerical experiments
on the design of thermal breaks in buildings showcase the features of the
proposed design under uncertainty framework with respect to accuracy,
scalability, and computational cost.
|
2501.02080 | AI-Powered Cow Detection in Complex Farm Environments | cs.CV | Animal welfare has become a critical issue in contemporary society,
emphasizing our ethical responsibilities toward animals, particularly within
livestock farming. The advent of Artificial Intelligence (AI) technologies,
specifically computer vision, offers an innovative approach to monitoring and
enhancing animal welfare. Cows, as essential contributors to sustainable
agriculture, are central to this effort. However, existing cow detection
algorithms face challenges in real-world farming environments, such as complex
lighting, occlusions, pose variations, and background interference, hindering
detection. Model generalization is crucial for adaptation across contexts
beyond the training dataset. This study addresses these challenges using a
diverse cow dataset from six environments, including indoor and outdoor
scenarios. We propose a detection model combining YOLOv8 with the CBAM
(Convolutional Block Attention Module) and assess its performance against
baseline models, including Mask R-CNN, YOLOv5, and YOLOv8. Our findings show
that baseline models degrade in complex conditions, while our approach improves
detection by using CBAM. YOLOv8-CBAM outperformed YOLOv8 by 2.3% in mAP, achieving 95.2%
precision and an mAP@0.5:0.95 of 82.6%, demonstrating superior accuracy.
Contributions include (1) analyzing detection limitations, (2) proposing a
robust model, and (3) benchmarking state-of-the-art algorithms. Applications
include health monitoring, behavioral analysis, and tracking in smart farms,
enabling precise detection in challenging settings. This study advances
AI-driven livestock monitoring, improving animal welfare and smart agriculture.
|
2501.02086 | Instruction-Following Pruning for Large Language Models | cs.CL | With the rapid scaling of large language models (LLMs), structured pruning
has become a widely used technique to learn efficient, smaller models from
larger ones, delivering superior performance compared to training similarly
sized models from scratch. In this paper, we move beyond the traditional static
pruning approach of determining a fixed pruning mask for a model, and propose a
dynamic approach to structured pruning. In our method, the pruning mask is
input-dependent and adapts dynamically based on the information described in a
user instruction. Our approach, termed "instruction-following pruning",
introduces a sparse mask predictor that takes the user instruction as input and
dynamically selects the most relevant model parameters for the given task. To
identify and activate effective parameters, we jointly optimize the sparse mask
predictor and the LLM, leveraging both instruction-following data and the
pre-training corpus. Experimental results demonstrate the effectiveness of our
approach on a wide range of evaluation benchmarks. For example, our 3B
activated model improves over the 3B dense model by 5-8 points of absolute
margin on domains such as math and coding, and rivals the performance of a 9B
model.
|
2501.02087 | Beyond CVaR: Leveraging Static Spectral Risk Measures for Enhanced
Decision-Making in Distributional Reinforcement Learning | cs.LG stat.ML | In domains such as finance, healthcare, and robotics, managing worst-case
scenarios is critical, as failure to do so can lead to catastrophic outcomes.
Distributional Reinforcement Learning (DRL) provides a natural framework to
incorporate risk sensitivity into decision-making processes. However, existing
approaches face two key limitations: (1) the use of fixed risk measures at each
decision step often results in overly conservative policies, and (2) the
interpretation and theoretical properties of the learned policies remain
unclear. While optimizing a static risk measure addresses these issues, its use
in the DRL framework has been limited to the simple static CVaR risk measure.
In this paper, we present a novel DRL algorithm with convergence guarantees
that optimizes for a broader class of static Spectral Risk Measures (SRM).
Additionally, we provide a clear interpretation of the learned policy by
leveraging the distribution of returns in DRL and the decomposition of static
coherent risk measures. Extensive experiments demonstrate that our model learns
policies aligned with the SRM objective, and outperforms existing risk-neutral
and risk-sensitive DRL models in various settings.
|
2501.02089 | On the Statistical Complexity for Offline and Low-Adaptive Reinforcement
Learning with Structures | cs.LG cs.AI | This article reviews the recent advances on the statistical foundation of
reinforcement learning (RL) in the offline and low-adaptive settings. We will
start by arguing why offline RL is the appropriate model for almost any
real-life ML problem, even if it has nothing to do with the recent AI
breakthroughs that use RL. Then we will zoom into two fundamental problems of
offline RL: offline policy evaluation (OPE) and offline policy learning (OPL).
It may be surprising to people that tight bounds for these problems were not
known even for tabular and linear cases until recently. We delineate the
differences between worst-case minimax bounds and instance-dependent bounds. We
also cover key algorithmic ideas and proof techniques behind near-optimal
instance-dependent methods in OPE and OPL. Finally, we discuss the limitations
of offline RL and review a burgeoning problem of \emph{low-adaptive
exploration} which addresses these limitations by providing a sweet middle
ground between offline and online RL.
|
2501.02090 | Applying Text Mining to Analyze Human Question Asking in Creativity
Research | cs.CL | Creativity is the ability to generate novel and effective ideas in
an area of interest. How are such creative ideas generated? One possible
mechanism that supports creative ideation, and one gaining increasing empirical
attention, is asking questions. Question asking is a likely cognitive
mechanism for defining problems, thereby facilitating creative problem solving.
However, much is unknown about the exact role of questions in creativity. This
work presents an attempt to apply text mining methods to measure the cognitive
potential of questions, taking into account, among others, (a) question type,
(b) question complexity, and (c) the content of the answer. This contribution
summarizes the history of question mining as a part of creativity research,
along with the natural language processing methods deemed useful or helpful in
the study. In addition, a novel approach is proposed, implemented, and applied
to five datasets. The experimental results obtained are comprehensively
analyzed, suggesting that natural language processing has a role to play in
creativity research.
|
2501.02094 | SMTL: A Stratified Logic for Expressive Multi-Level Temporal
Specifications | eess.SY cs.SY | We present Stratified Metric Temporal Logic (SMTL), a novel formalism for
specifying and verifying properties of complex cyber-physical systems that
exhibit behaviors across multiple temporal and abstraction scales. SMTL extends
existing temporal logics by incorporating a stratification operator, enabling
the association of temporal properties with specific abstraction levels. This
allows for the natural expression of multi-scale requirements while maintaining
formal reasoning about inter-level relationships. We formalize the syntax and
semantics of SMTL, proving that it strictly subsumes metric temporal logic
(MTL) and offers enhanced expressiveness by capturing properties unattainable
in existing logics. Numerical simulations comparing agents operating under MTL
and SMTL specifications show that SMTL enhances agent coordination and safety,
reducing collision rates without substantial computational overhead or
compromising path efficiency. These findings underscore SMTL's potential as a
valuable tool for designing and verifying complex multi-agent systems operating
across diverse temporal and abstraction scales.
|
2501.02104 | Equivalence of Informations Characterizes Bregman Divergences | cs.IT math.IT | Bregman divergences are a class of distance-like comparison functions which
play fundamental roles in optimization, statistics, and information theory. One
important property of Bregman divergences is that they cause two useful
formulations of information content (in the sense of variability or
non-uniformity) in a weighted collection of vectors to agree. In this note, we
show that this agreement in fact characterizes the class of Bregman
divergences; they are the only divergences which generate this agreement for
arbitrary collections of weighted vectors.
|
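The Bregman divergence at the center of the abstract above is defined as D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y> for a convex, differentiable phi. A small NumPy illustration of the definition (not tied to the paper's characterization result) shows that phi(x) = ||x||^2 recovers the squared Euclidean distance:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman divergence D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>.

    Sketch of the textbook definition; phi must be convex and
    differentiable for D_phi to be nonnegative.
    """
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

# phi(x) = ||x||^2 recovers the squared Euclidean distance
sq = lambda v: v @ v
sq_grad = lambda v: 2.0 * v

x, y = np.array([1.0, 2.0]), np.array([0.0, 0.0])
d = bregman(sq, sq_grad, x, y)  # equals ||x - y||^2 = 5
```

Other choices of phi yield other familiar divergences (e.g., negative entropy gives the KL divergence).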
2501.02105 | Learning Fricke signs from Maass form Coefficients | math.NT cs.LG hep-th stat.ML | In this paper, we conduct a data-scientific investigation of Maass forms. We
find that averaging the Fourier coefficients of Maass forms with the same
Fricke sign reveals patterns analogous to the recently discovered "murmuration"
phenomenon, and that these patterns become more pronounced when parity is
incorporated as an additional feature. Approximately 43% of the forms in our
dataset have an unknown Fricke sign. For the remaining forms, we employ Linear
Discriminant Analysis (LDA) to machine learn their Fricke sign, achieving 96%
(resp. 94%) accuracy for forms with even (resp. odd) parity. We apply the
trained LDA model to forms with unknown Fricke signs to make predictions. The
average values based on the predicted Fricke signs are computed and compared to
those for forms with known signs to verify the reasonableness of the
predictions. Additionally, a subset of these predictions is evaluated against
heuristic guesses provided by Hejhal's algorithm, showing a match approximately
95% of the time. We also use neural networks to obtain results comparable to
those from the LDA model.
|
2501.02107 | Online Detection of Water Contamination Under Concept Drift | cs.LG cs.AI | Water Distribution Networks (WDNs) are vital infrastructures, and
contamination poses serious public health risks. Harmful substances can
interact with disinfectants like chlorine, making chlorine monitoring essential
for detecting contaminants. However, chlorine sensors often become unreliable
and require frequent calibration. This study introduces the Dual-Threshold
Anomaly and Drift Detection (AD&DD) method, an unsupervised approach combining
a dual-threshold drift detection mechanism with an LSTM-based Variational
Autoencoder (LSTM-VAE) for real-time contamination detection. Tested on two
realistic WDNs, AD&DD effectively identifies anomalies with sensor offsets as
concept drift, and outperforms other methods. A proposed decentralized
architecture enables accurate contamination detection and localization by
deploying AD&DD on selected nodes.
|
2501.02111 | How Your Location Relates to Health: Variable Importance and
Interpretable Machine Learning for Environmental and Sociodemographic Data | cs.LG | Health outcomes depend on complex environmental and sociodemographic factors
whose effects change over location and time. Only recently has fine-grained
spatial and temporal data become available to study these effects, namely the
MEDSAT dataset of English health, environmental, and sociodemographic
information. Leveraging this new resource, we use a variety of variable
importance techniques to robustly identify the most informative predictors
across multiple health outcomes. We then develop an interpretable machine
learning framework based on Generalized Additive Models (GAMs) and Multiscale
Geographically Weighted Regression (MGWR) to analyze both local and global
spatial dependencies of each variable on various health outcomes. Our findings
identify NO2 as a global predictor for asthma, hypertension, and anxiety,
alongside other outcome-specific predictors related to occupation, marriage,
and vegetation. Regional analyses reveal local variations with air pollution
and solar radiation, with notable shifts during COVID. This comprehensive
approach provides actionable insights for addressing health disparities, and
advocates for the integration of interpretable machine learning in public
health.
|
2501.02112 | Siamese Networks for Cat Re-Identification: Exploring Neural Models for
Cat Instance Recognition | cs.CV cs.AI | Street cats in urban areas often rely on human intervention for survival,
leading to challenges in population control and welfare management. In April
2023, Hello Inc., a Chinese urban mobility company, launched the Hello Street
Cat initiative to address these issues. The project deployed over 21,000 smart
feeding stations across 14 cities in China, integrating livestreaming cameras
and treat dispensers activated through user donations. It also promotes the
Trap-Neuter-Return (TNR) method, supported by a community-driven platform,
HelloStreetCatWiki, where volunteers catalog and identify cats. However, manual
identification is inefficient and unsustainable, creating a need for automated
solutions. This study explores Deep Learning-based models for re-identifying
street cats in the Hello Street Cat initiative. A dataset of 2,796 images of 69
cats was used to train Siamese Networks with EfficientNetB0, MobileNet and
VGG16 as base models, evaluated under contrastive and triplet loss functions.
VGG16 paired with contrastive loss emerged as the most effective configuration,
achieving up to 97% accuracy and an F1 score of 0.9344 during testing. The
approach leverages image augmentation and dataset refinement to overcome
challenges posed by limited data and diverse visual variations. These findings
underscore the potential of automated cat re-identification to streamline
population monitoring and welfare efforts. By reducing reliance on manual
processes, the method offers a scalable and reliable solution for
community-driven initiatives. Future research will focus on expanding datasets
and developing real-time implementations to enhance practicality in large-scale
deployments.
|
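The contrastive loss used with the Siamese networks in the cat re-identification abstract above pulls embeddings of the same cat together and pushes different cats apart by a margin. A hedged NumPy sketch of the standard pairwise form follows; the margin value is illustrative, not taken from the paper.

```python
import numpy as np

def contrastive_loss(z1, z2, same, margin=1.0):
    """Contrastive loss for one Siamese embedding pair.

    Illustrative sketch of the standard pairwise objective: z1 and z2
    are embedding vectors, same=True for images of the matching cat.
    """
    d = np.linalg.norm(z1 - z2)
    if same:
        return 0.5 * d ** 2                     # pull matching pairs together
    return 0.5 * max(0.0, margin - d) ** 2      # push non-matches past the margin
```

In training, this per-pair loss is averaged over mini-batches of matched and mismatched image pairs.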
2501.02114 | Relaxation-assisted reverse annealing on nonnegative/binary matrix
factorization | quant-ph cond-mat.stat-mech cs.AI cs.LG | Quantum annealing has garnered significant attention as a metaheuristic
inspired by quantum physics for combinatorial optimization problems. Among its
many applications, nonnegative/binary matrix factorization stands out for its
complexity and relevance in unsupervised machine learning. Reverse
annealing, a variant of quantum annealing that prioritizes the search
in the vicinity of a given initial state, helps improve optimization
performance in matrix factorization. This study proposes an improved strategy
that integrates reverse annealing with a linear programming relaxation
technique. Using relaxed solutions as the initial configuration for reverse
annealing, we demonstrate improvements in optimization performance comparable
to the exact optimization methods. Our experiments on facial image datasets
show that our method provides better convergence than known reverse annealing
methods. Furthermore, we investigate the effectiveness of relaxation-based
initialization methods on randomized datasets, demonstrating a relationship
between the relaxed solution and the optimal solution. This research
underscores the potential of combining reverse annealing and classical
optimization strategies to enhance optimization performance.
|
2501.02116 | Humanoid Locomotion and Manipulation: Current Progress and Challenges in
Control, Planning, and Learning | cs.RO | Humanoid robots have great potential to perform various human-level skills.
These skills involve locomotion, manipulation, and cognitive capabilities.
Driven by advances in machine learning and the strength of existing model-based
approaches, these capabilities have progressed rapidly, but often separately.
Therefore, a timely overview of current progress and future trends in this
fast-evolving field is essential. This survey first summarizes the model-based
planning and control that have been the backbone of humanoid robotics for the
past three decades. We then explore emerging learning-based methods, with a
focus on reinforcement learning and imitation learning that enhance the
versatility of loco-manipulation skills. We examine the potential of
integrating foundation models with humanoid embodiments, assessing the
prospects for developing generalist humanoid agents. In addition, this survey
covers emerging research for whole-body tactile sensing that unlocks new
humanoid skills that involve physical interactions. The survey concludes with a
discussion of the challenges and future trends.
|
2501.02117 | Fastest mixing reversible Markov chain on friendship graph: Trade-off
between transition probabilities among friends and convergence rate | cs.IT math.IT | A long-standing goal of social network research has been to alter the
properties of a network to achieve a desired outcome. In doing so, DeGroot's
consensus model has served as the popular choice for modeling the information
diffusion and opinion formation in social networks. Achieving a trade-off
between the cost associated with modifications made to the network and the
speed of convergence to the desired state has shown to be a critical factor.
This has been treated as the Fastest Mixing Markov Chain (FMMC) problem over a
graph with given transition probabilities over a subset of edges. Addressing
this multi-objective optimization problem over the friendship graph, this paper
has provided the corresponding Pareto optimal points or the Pareto frontier. In
the case of a friendship graph with at least three blades, it is shown that the
Pareto frontier reduces to a single global minimum point, which is the same as the
optimal point corresponding to the minimum spanning tree of the friendship
graph, i.e., the star topology. Furthermore, a lower limit for transition
probabilities among friends has been provided, where values higher than this
limit do not have any impact on the convergence rate.
|
2501.02127 | How do Humans take an Object from a Robot: Behavior changes observed in
a User Study | cs.RO | To facilitate human-robot interaction and gain human trust, a robot should
recognize and adapt to changes in human behavior. This work documents different
human behaviors observed while taking objects from an interactive robot in an
experimental study, categorized across two dimensions: pull force applied and
handedness. We also present the changes observed in human behavior upon
repeated interaction with the robot to take various objects.
|
2501.02132 | A hybrid marketplace of ideas | cs.CY cs.AI cs.ET | The convergence of humans and artificial intelligence systems introduces new
dynamics into the cultural and intellectual landscape. Complementing emerging
cultural evolution concepts such as machine culture, AI agents represent a
significant techno-sociological development, particularly within the
anthropological study of Web3 as a community focused on decentralization
through blockchain. Despite their growing presence, the cultural significance
of AI agents remains largely unexplored in academic literature. Toward this
end, we conceived hybrid netnography, a novel interdisciplinary approach that
examines the cultural and intellectual dynamics within digital ecosystems by
analyzing the interactions and contributions of both human and AI agents as
co-participants in shaping narratives, ideas, and cultural artifacts. We argue
that, within the Web3 community on the social media platform X, these agents
challenge traditional notions of participation and influence in public
discourse, creating a hybrid marketplace of ideas, a conceptual space where
human and AI generated ideas coexist and compete for attention. We examine the
current state of AI agents in idea generation, propagation, and engagement,
positioning their role as cultural agents through the lens of memetics and
encouraging further inquiry into their cultural and societal impact.
Additionally, we address the implications of this paradigm for privacy,
intellectual property, and governance, highlighting the societal and legal
challenges of integrating AI agents into the hybrid marketplace of ideas.
|
2501.02135 | AVTrustBench: Assessing and Enhancing Reliability and Robustness in
Audio-Visual LLMs | cs.CV cs.AI | With the rapid advancement of Multi-modal Large Language Models (MLLMs),
several diagnostic benchmarks have recently been developed to assess these
models' multi-modal reasoning proficiency. However, these benchmarks are
restricted to assessing primarily the visual aspect and do not examine the
holistic audio-visual (AV) understanding. Moreover, currently, there are no
benchmarks that investigate the capabilities of AVLLMs to calibrate their
responses when presented with perturbed inputs. To this end, we introduce
Audio-Visual Trustworthiness assessment Benchmark (AVTrustBench), comprising
600K samples spanning over 9 meticulously crafted tasks, evaluating the
capabilities of AVLLMs across three distinct dimensions: Adversarial attack,
Compositional reasoning, and Modality-specific dependency. Using our benchmark,
we extensively evaluate 13 state-of-the-art AVLLMs. The findings reveal that
the majority of existing models fall significantly short of achieving
human-like comprehension, offering valuable insights for future research
directions. To alleviate the limitations in the existing approaches, we further
propose a robust, model-agnostic calibrated audio-visual preference
optimization-based training strategy, CAVPref, obtaining gains of up to 30.19%
across all 9 tasks. We will publicly release our code and benchmark to
facilitate future research in this direction.
|
2501.02138 | Effective LLM-Driven Code Generation with Pythoness | cs.PL cs.AI cs.SE | The advent of large language models (LLMs) has paved the way for a new era of
programming tools with both significant capabilities and risks, as the
generated code lacks guarantees of correctness and reliability. Developers
using LLMs currently face the difficult task of optimizing, integrating, and
maintaining code generated by AI. We propose an embedded domain-specific
language (DSL), Pythoness, to address those challenges. In Pythoness,
developers program with LLMs at a higher level of abstraction. Rather than
interacting directly with generated code, developers using Pythoness operate at
the level of behavioral specifications when writing functions, classes, or an
entire program. These specifications can take the form of unit tests and
property-based tests, which may be expressed formally or in natural language.
Guided by these specifications, Pythoness generates code that both passes the
tests and can be continuously checked during execution. We posit that the
Pythoness approach lets developers harness the full potential of LLMs for code
generation while substantially mitigating their inherent risks. We describe our
current prototype implementation of Pythoness and demonstrate that it can
successfully leverage a combination of tests and code generation to yield
higher quality code than specifications alone.
|
2501.02140 | Tree-NET: Enhancing Medical Image Segmentation Through Efficient
Low-Level Feature Training | eess.IV cs.CV | This paper introduces Tree-NET, a novel framework for medical image
segmentation that leverages bottleneck feature supervision to enhance both
segmentation accuracy and computational efficiency. While previous studies have
employed bottleneck feature supervision, their applications have largely been
limited to the training phase, offering no computational benefits during
training or evaluation. To the best of our knowledge, this study is the first
to propose a framework that incorporates two additional training phases for
segmentation models, utilizing bottleneck features at both input and output
stages. This approach significantly improves computational performance by
reducing input and output dimensions with a negligible addition to parameter
count, without compromising accuracy. Tree-NET features a three-layer
architecture comprising Encoder-Net and Decoder-Net, which are autoencoders
designed to compress input and label data, respectively, and Bridge-Net, a
segmentation framework that supervises the bottleneck features. By focusing on
dense, compressed representations, Tree-NET enhances operational efficiency and
can be seamlessly integrated into existing segmentation models without altering
their internal structures or increasing model size. We evaluate Tree-NET on two
critical segmentation tasks -- skin lesion and polyp segmentation -- using
various backbone models, including U-NET variants and Polyp-PVT. Experimental
results demonstrate that Tree-NET reduces FLOPs by a factor of 4 to 13 and
decreases memory usage, while achieving comparable or superior accuracy
compared to the original architectures. These findings underscore Tree-NET's
potential as a robust and efficient solution for medical image segmentation.
|
2501.02143 | SafeAug: Safety-Critical Driving Data Augmentation from Naturalistic
Datasets | cs.CV cs.LG | Safety-critical driving data is crucial for developing safe and trustworthy
self-driving algorithms. Due to the scarcity of safety-critical data in
naturalistic datasets, current approaches primarily utilize simulated or
artificially generated images. However, there remains a gap in authenticity
between these generated images and naturalistic ones. We propose a novel
framework to augment the safety-critical driving data from the naturalistic
dataset to address this issue. In this framework, we first detect vehicles
using YOLOv5, followed by depth estimation and 3D transformation to simulate
vehicle proximity and critical driving scenarios better. This allows for
targeted modification of vehicle dynamics data to reflect potentially hazardous
situations. Compared to the simulated or artificially generated data, our
augmentation methods can generate safety-critical driving data with minimal
compromise on image authenticity. Experiments using KITTI datasets demonstrate
that a downstream self-driving algorithm trained on this augmented dataset
outperforms the baselines, which include SMOGN and
importance sampling.
|
2501.02144 | Establishing baselines for generative discovery of inorganic crystals | cond-mat.mtrl-sci cs.AI physics.chem-ph | Generative artificial intelligence offers a promising avenue for materials
discovery, yet its advantages over traditional methods remain unclear. In this
work, we introduce and benchmark two baseline approaches - random enumeration
of charge-balanced prototypes and data-driven ion exchange of known compounds -
against three generative models: a variational autoencoder, a large language
model, and a diffusion model. Our results show that established methods such as
ion exchange perform comparably well in generating stable materials, although
many of these materials tend to closely resemble known compounds. In contrast,
generative models excel at proposing novel structural frameworks and, when
sufficient training data is available, can more effectively target properties
such as electronic band gap and bulk modulus while maintaining a high stability
rate. To enhance the performance of both the baseline and generative
approaches, we implement a post-generation screening step in which all proposed
structures are passed through stability and property filters from pre-trained
machine learning models including universal interatomic potentials. This
low-cost filtering step leads to substantial improvement in the success rates
of all methods, remains computationally efficient, and ultimately provides a
practical pathway toward more effective generative strategies for materials
discovery.
|
2501.02146 | Plasma-CycleGAN: Plasma Biomarker-Guided MRI to PET Cross-modality
Translation Using Conditional CycleGAN | cs.CV cs.AI q-bio.NC | Cross-modality translation between MRI and PET imaging is challenging due to
the distinct mechanisms underlying these modalities. Blood-based biomarkers
(BBBMs) are revolutionizing Alzheimer's disease (AD) detection by identifying
patients and quantifying brain amyloid levels. However, the potential of BBBMs
to enhance PET image synthesis remains unexplored. In this paper, we performed
a thorough study on the effect of incorporating BBBM into deep generative
models. By evaluating three widely used cross-modality translation models, we
found that BBBM integration consistently enhances the generative quality
across all models. By visual inspection of the generated results, we observed
that PET images generated by CycleGAN exhibit the best visual fidelity. Based
on these findings, we propose Plasma-CycleGAN, a novel generative model based
on CycleGAN, to synthesize PET images from MRI using BBBMs as conditions. This
is the first approach to integrate BBBMs in conditional cross-modality
translation between MRI and PET.
|
2501.02147 | Exploring Secure Machine Learning Through Payload Injection and FGSM
Attacks on ResNet-50 | cs.CR cs.LG | This paper investigates the resilience of a ResNet-50 image classification
model under two prominent security threats: Fast Gradient Sign Method (FGSM)
adversarial attacks and malicious payload injection. Initially, the model
attains a 53.33% accuracy on clean images. When subjected to FGSM
perturbations, its overall accuracy remains unchanged; however, the model's
confidence in incorrect predictions notably increases. Concurrently, a payload
injection scheme is successfully executed in 93.33% of the tested samples,
revealing how stealthy attacks can manipulate model predictions without
degrading visual quality. These findings underscore the vulnerability of even
high-performing neural networks and highlight the urgency of developing more
robust defense mechanisms for security-critical applications.
|
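The FGSM attack evaluated in the abstract above perturbs each input pixel one step of size epsilon in the direction of the sign of the loss gradient. A minimal NumPy sketch of the perturbation step follows; the epsilon value is illustrative, and the gradient here is assumed to come from any differentiable classifier such as the paper's ResNet-50.

```python
import numpy as np

def fgsm(x, grad, eps=0.03):
    """Fast Gradient Sign Method step: x_adv = clip(x + eps * sign(grad L)).

    Illustrative sketch: grad is the loss gradient with respect to the
    input, and pixel values are clipped back to the valid [0, 1] range.
    """
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

A single such step is what makes FGSM cheap relative to iterative attacks; the perturbation is bounded by eps per pixel, preserving visual quality.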
2501.02149 | Attribute-Based Robotic Grasping with Data-Efficient Adaptation | cs.RO cs.AI | Robotic grasping is one of the most fundamental robotic manipulation tasks
and has been the subject of extensive research. However, swiftly teaching a
robot to grasp a novel target object in clutter remains challenging. This paper
attempts to address the challenge by leveraging object attributes that
facilitate recognition, grasping, and rapid adaptation to new domains. In this
work, we present an end-to-end encoder-decoder network to learn attribute-based
robotic grasping with data-efficient adaptation capability. We first pre-train
the end-to-end model with a variety of basic objects to learn generic attribute
representation for recognition and grasping. Our approach fuses the embeddings
of a workspace image and a query text using a gated-attention mechanism and
learns to predict instance grasping affordances. To train the joint embedding
space of visual and textual attributes, the robot utilizes object persistence
before and after grasping. Our model is self-supervised in a simulation that
only uses basic objects of various colors and shapes but generalizes to novel
objects in new environments. To further facilitate generalization, we propose
two adaptation methods: adversarial adaptation and one-grasp adaptation.
Adversarial adaptation regulates the image encoder using augmented data of
unlabeled images, whereas one-grasp adaptation updates the overall end-to-end
model using augmented data from one grasp trial. Both adaptation methods are
data-efficient and considerably improve instance grasping performance.
Experimental results in both simulation and the real world demonstrate that our
approach achieves over 81% instance grasping success rate on unknown objects,
which outperforms several baselines by large margins.
|
2501.02151 | From Images to Detection: Machine Learning for Blood Pattern
Classification | cs.CV stat.AP | Bloodstain Pattern Analysis (BPA) helps us understand how bloodstains form,
with a focus on their size, shape, and distribution. This aids in crime scene
reconstruction and provides insight into victim positions and crime
investigation. One challenge in BPA is distinguishing between different types
of bloodstains, such as those from firearms, impacts, or other mechanisms. Our
study focuses on differentiating impact spatter bloodstain patterns from
gunshot bloodstain patterns. We distinguish patterns by extracting
well-designed individual stain features, applying effective data consolidation
methods, and selecting boosting classifiers. As a result, we have developed a
model that excels in both accuracy and efficiency. In addition, we use outside
data sources from previous studies to discuss the challenges and future
directions for BPA.
|
2501.02152 | Table as Thought: Exploring Structured Thoughts in LLM Reasoning | cs.AI cs.CL | Large language models' reasoning abilities benefit from methods that organize
their thought processes, such as chain-of-thought prompting, which employs a
sequential structure to guide the reasoning process step-by-step. However,
existing approaches focus primarily on organizing the sequence of thoughts,
leaving structure in individual thought steps underexplored. To address this
gap, we propose Table as Thought, a framework inspired by cognitive
neuroscience theories on human thought. Table as Thought organizes reasoning
within a tabular schema, where rows represent sequential thought steps and
columns capture critical constraints and contextual information to enhance
reasoning. The reasoning process iteratively populates the table until
self-verification ensures completeness and correctness. Our experiments show
that Table as Thought excels in planning tasks and demonstrates a strong
potential for enhancing LLM performance in mathematical reasoning compared to
unstructured thought baselines. This work provides a novel exploration of
refining thought representation within LLMs, paving the way for advancements in
reasoning and AI cognition.
|
2501.02153 | Resolving the Exploration-Exploitation Dilemma in Evolutionary
Algorithms: A Novel Human-Centered Framework | cs.NE math.OC | Evolutionary Algorithms (EAs) are powerful tools for tackling complex
computational problems, yet effectively managing the exploitation and
exploration dynamics -- crucial for robust search navigation -- remains a
persistent challenge for EA designers, and leads to the so-called
exploration-exploitation dilemma. In this paper, a new human-centered framework
is proposed to resolve this dilemma. Unlike the traditional approach, the
search process will not consist of a single phase, nor will the
decision-maker's tuning efforts be distributed among the algorithm's
traditional parameters, such as defining new evolutionary operators internal to
the algorithm to influence its search navigation. Instead, a human-centered
two-phase search process, composed of a global search phase followed by a local
phase, will be utilized. In this framework, the designer plays the central role
in directing the algorithm's search navigation through focused tuning of a new
Search Space Size Control parameter external to the algorithm, which proves to
be the dominant parameter affecting the algorithm's effective search
navigation.
The framework is applicable to any search algorithm. We demonstrate its
effectiveness on 14 well-known benchmark problems in unconstrained
optimization.
|
2501.02156 | The Race to Efficiency: A New Perspective on AI Scaling Laws | cs.LG cs.AI cs.PF | As large-scale AI models expand, training becomes costlier and sustaining
progress grows harder. Classical scaling laws (e.g., Kaplan et al. (2020),
Hoffmann et al. (2022)) predict training loss from a static compute budget yet
neglect time and efficiency, prompting the question: how can we balance
ballooning GPU fleets with rapidly improving hardware and algorithms? We
introduce the relative-loss equation, a time- and efficiency-aware framework
that extends classical AI scaling laws. Our model shows that, without ongoing
efficiency gains, advanced performance could demand millennia of training or
unrealistically large GPU fleets. However, near-exponential progress remains
achievable if the "efficiency-doubling rate" parallels Moore's Law. By
formalizing this race to efficiency, we offer a quantitative roadmap for
balancing front-loaded GPU investments with incremental improvements across the
AI stack. Empirical trends suggest that sustained efficiency gains can push AI
scaling well into the coming decade, providing a new perspective on the
diminishing returns inherent in classical scaling.
|
2501.02157 | Personalized Graph-Based Retrieval for Large Language Models | cs.CL | As large language models (LLMs) evolve, their ability to deliver personalized
and context-aware responses offers transformative potential for improving user
experiences. Existing personalization approaches, however, often rely solely on
user history to augment the prompt, limiting their effectiveness in generating
tailored outputs, especially in cold-start scenarios with sparse data. To
address these limitations, we propose Personalized Graph-based
Retrieval-Augmented Generation (PGraphRAG), a framework that leverages
user-centric knowledge graphs to enrich personalization. By directly
integrating structured user knowledge into the retrieval process and augmenting
prompts with user-relevant context, PGraphRAG enhances contextual understanding
and output quality. We also introduce the Personalized Graph-based Benchmark
for Text Generation, designed to evaluate personalized text generation tasks in
real-world settings where user history is sparse or unavailable. Experimental
results show that PGraphRAG significantly outperforms state-of-the-art
personalization methods across diverse tasks, demonstrating the unique
advantages of graph-based retrieval for personalization.
|
2501.02158 | Joint Optimization for 4D Human-Scene Reconstruction in the Wild | cs.CV | Reconstructing human motion and its surrounding environment is crucial for
understanding human-scene interaction and predicting human movements in the
scene. While much progress has been made in capturing human-scene interaction
in constrained environments, those prior methods can hardly reconstruct the
natural and diverse human motion and scene context from web videos. In this
work, we propose JOSH, a novel optimization-based method for 4D human-scene
reconstruction in the wild from monocular videos. JOSH uses techniques in both
dense scene reconstruction and human mesh recovery as initialization, and then
it leverages the human-scene contact constraints to jointly optimize the scene,
the camera poses, and the human motion. Experiment results show JOSH achieves
better results on both global human motion estimation and dense scene
reconstruction by joint optimization of scene geometry and human motion. We
further design a more efficient model, JOSH3R, and directly train it with
pseudo-labels from web videos. JOSH3R outperforms other optimization-free
methods by only training with labels predicted from JOSH, further demonstrating
its accuracy and generalization ability.
|
2501.02166 | ROLO-SLAM: Rotation-Optimized LiDAR-Only SLAM in Uneven Terrain with
Ground Vehicle | cs.RO cs.CV | LiDAR-based SLAM is recognized as one effective method to offer localization
guidance in rough environments. However, off-the-shelf LiDAR-based SLAM methods
suffer from significant pose estimation drift, particularly in components
along the vertical direction, when traversing uneven terrain. This
deficiency typically leads to a conspicuously distorted global map. In this
article, a LiDAR-based SLAM method is presented to improve the accuracy of pose
estimations for ground vehicles in rough terrains, which is termed
Rotation-Optimized LiDAR-Only (ROLO) SLAM. The method exploits a forward
location prediction to coarsely eliminate the location difference of
consecutive scans, thereby enabling separate and accurate determination of the
location and orientation at the front-end. Furthermore, we adopt a
parallel-capable spatial voxelization for correspondence-matching. We develop a
spherical alignment-guided rotation registration within each voxel to estimate
the rotation of the vehicle. By incorporating geometric alignment, we introduce the
motion constraint into the optimization formulation to enhance the rapid and
effective estimation of LiDAR's translation. Subsequently, we extract several
keyframes to construct the submap and exploit an alignment from the current
scan to the submap for precise pose estimation. Meanwhile, a global-scale
factor graph is established to aid in the reduction of cumulative errors. In
various scenes, diverse experiments have been conducted to evaluate our method.
The results demonstrate that ROLO-SLAM excels in pose estimation of ground
vehicles and outperforms existing state-of-the-art LiDAR SLAM frameworks.
|
2501.02167 | Generating Multimodal Images with GAN: Integrating Text, Image, and
Style | cs.CV | In the field of computer vision, multimodal image generation has become a
research hotspot, especially the task of integrating text, image, and style. In
this study, we propose a multimodal image generation method based on Generative
Adversarial Networks (GAN), capable of effectively combining text descriptions,
reference images, and style information to generate images that meet multimodal
requirements. This method involves the design of a text encoder, an image
feature extractor, and a style integration module, ensuring that the generated
images maintain high quality in terms of visual content and style consistency.
We also introduce multiple loss functions, including adversarial loss,
text-image consistency loss, and style matching loss, to optimize the
generation process. Experimental results show that our method produces images
with high clarity and consistency across multiple public datasets,
demonstrating significant performance improvements compared to existing
methods. The outcomes of this study provide new insights into multimodal image
generation and present broad application prospects.
|
2501.02169 | The Integration of Blockchain and Artificial Intelligence for Secure
Healthcare Systems | cs.CY cs.AI | Verisign reported a 125 percent increase in data breaches within the
healthcare sector in the United States during 2022, with 18.2 million patient
records being impacted. Growing healthcare data volumes and diversification
mean that medical information is becoming more valuable. Many health centers
use various technologies to ease the classification, storage, and exchange of
big data. This use can also put users' health data at risk and make it
vulnerable. AI and blockchain are among the leading technologies at hand. With
AI, data-driven operations and big data efficiency have been improved with
respect to traditional techniques. Due to its potential to bring about
improvements in health services and lower medical costs, this AI technology is
regularly used in healthcare. Blockchain helps secure transactions when
sharing information and protects privacy, provided the exchange of information
conforms to the standard. The objective of this analysis is to investigate the research and
unique contributions since 2008 regarding blockchain-integrated AI and
healthcare systems. The work sheds light on applied AI-based healthcare
schemes using various machine learning approaches and disparate blockchain
structures. Using these technologies to ensure patient data security and to
manage medical information effectively in healthcare settings benefits both
healthcare providers and patients. Between 2018 and 2021, publication activity
grew, peaking in 2021; relevant articles were identified by examining download
counts and Google Scholar citation counts, consulting local research experts,
and reviewing large research grants.
|
2501.02172 | Multifractal Terrain Generation for Evaluating Autonomous Off-Road
Ground Vehicles | cs.RO | We present a multifractal artificial terrain generation method that uses the
3D Weierstrass-Mandelbrot function to control roughness. By varying the fractal
dimension used in terrain generation across three different values, we generate
60 unique off-road terrains. We use gradient maps to categorize the roughness
of each terrain, consisting of low-, semi-, and high-roughness areas. To test
how the fractal dimension affects the difficulty of vehicle traversals, we
measure the success rates, vertical accelerations, pitch and roll rates, and
traversal times of an autonomous ground vehicle traversing 20 randomized
straight-line paths in each terrain. As we increase the fractal dimension from
2.3 to 2.45 and from 2.45 to 2.6, we find that the median area of low-roughness
terrain decreases 13.8% and 7.16%, the median area of semi-rough terrain
increases 11.7% and 5.63%, and the median area of high-roughness terrain
increases 1.54% and 3.33%, all respectively. We find that the median success
rate of the vehicle decreases 22.5% and 25% as the fractal dimension increases
from 2.3 to 2.45 and from 2.45 to 2.6, respectively. Successful traversal
results show that the median root-mean-squared vertical accelerations, median
root-mean-squared pitch and roll rates, and median traversal times all increase
with the fractal dimension.
|
2501.02173 | The Efficiency vs. Accuracy Trade-off: Optimizing RAG-Enhanced LLM
Recommender Systems Using Multi-Head Early Exit | cs.IR cs.LG | The deployment of Large Language Models (LLMs) in recommender systems for
predicting Click-Through Rates (CTR) necessitates a delicate balance between
computational efficiency and predictive accuracy. This paper presents an
optimization framework that combines Retrieval-Augmented Generation (RAG) with
an innovative multi-head early exit architecture to concurrently enhance both
aspects. By integrating Graph Convolutional Networks (GCNs) as efficient
retrieval mechanisms, we are able to significantly reduce data retrieval times
while maintaining high model performance. The early exit strategy employed
allows for dynamic termination of model inference, utilizing real-time
predictive confidence assessments across multiple heads. This not only quickens
the responsiveness of LLMs but also upholds or improves their accuracy, making
it ideal for real-time application scenarios. Our experiments demonstrate how
this architecture effectively decreases computation time without sacrificing
the accuracy needed for reliable recommendation delivery, establishing a new
standard for efficient, real-time LLM deployment in commercial systems.
|
2501.02174 | TACTIC: Task-Agnostic Contrastive pre-Training for Inter-Agent
Communication | cs.MA | The "sight range dilemma" in cooperative Multi-Agent Reinforcement Learning
(MARL) presents a significant challenge: limited observability hinders team
coordination, while extensive sight ranges lead to distracted attention and
reduced performance. While communication can potentially address this issue,
existing methods often struggle to generalize across different sight ranges,
limiting their effectiveness. We propose TACTIC, a Task-Agnostic Contrastive
pre-Training strategy for Inter-Agent Communication. TACTIC is an adaptive
communication mechanism that enhances agent coordination even when the sight
range during execution is vastly different from that during training. The
communication mechanism encodes messages and integrates them with local
observations, generating representations grounded in the global state using
contrastive learning. By learning to generate and interpret messages that
capture important information about the whole environment, TACTIC enables
agents to effectively "see" more through communication, regardless of their
sight ranges. We comprehensively evaluate TACTIC on the SMACv2 benchmark across
various scenarios with broad sight ranges. The results demonstrate that TACTIC
consistently outperforms traditional state-of-the-art MARL techniques with and
without communication, in terms of generalizing to sight ranges different from
those seen in training, particularly in cases of extremely limited or extensive
observability.
|
2501.02176 | Molecule-dynamic-based Aging Clock and Aging Roadmap Forecast with
Sundial | q-bio.QM cs.LG | Addressing the unavoidable bias inherent in supervised aging clocks, we
introduce Sundial, a novel framework that models molecular dynamics through a
diffusion field, capturing both the population-level aging process and the
individual-level relative aging order. Sundial enables unbiased estimation of
biological age and forecasting of the aging roadmap. Faster-aging individuals identified by
Sundial exhibit a higher disease risk compared to those identified from
supervised aging clocks. This framework opens new avenues for exploring key
topics, including age- and sex-specific aging dynamics and faster yet healthy
aging paths.
|
2501.02178 | The Application of Large Language Models in Recommendation Systems | cs.IR | The integration of Large Language Models into recommendation frameworks
presents key advantages for personalization and adaptability of experiences to
the users. Classic methods of recommendations, such as collaborative filtering
and content-based filtering, are seriously limited in the solution of
cold-start problems, sparsity of data, and lack of diversity in information
considered. LLMs, of which GPT-4 is a good example, have emerged as powerful
tools that enable recommendation frameworks to tap into unstructured data
sources such as user reviews, social interactions, and text-based content. By
analyzing these data sources, LLMs improve the accuracy and relevance of
recommendations, thereby overcoming some of the limitations of traditional
approaches. This work discusses applications of LLMs in recommendation systems,
especially in electronic commerce, social media platforms, streaming services,
and educational technologies. This showcases how LLMs enrich recommendation
diversity, user engagement, and the system's adaptability; yet it also looks
into the challenges connected to their technical implementation. Overall, this
study highlights the potential of LLMs to transform user experiences and to
enable innovation across industries.
|
2501.02180 | Phase Retrieval by Quaternionic Reweighted Amplitude Flow on Image
Reconstruction | cs.CV math.CV | Quaternionic signal processing provides powerful tools for efficiently
managing color signals by preserving the intrinsic correlations among signal
dimensions through quaternion algebra. In this paper, we address the
quaternionic phase retrieval problem by systematically developing novel
algorithms based on an amplitude-based model. Specifically, we propose the
Quaternionic Reweighted Amplitude Flow (QRAF) algorithm, which is further
enhanced by three of its variants: incremental, accelerated, and adapted QRAF
algorithms. In addition, we introduce the Quaternionic Perturbed Amplitude Flow
(QPAF) algorithm, which has linear convergence. Extensive numerical experiments
on both synthetic data and real images demonstrate that our proposed methods
significantly improve recovery performance and computational efficiency
compared to state-of-the-art approaches.
|
2501.02181 | SMDP-Based Dynamic Batching for Improving Responsiveness and Energy
Efficiency of Batch Services | cs.DC cs.LG cs.SY eess.SY | For servers incorporating parallel computing resources, batching is a pivotal
technique for providing efficient and economical services at scale. Parallel
computing resources exhibit heightened computational and energy efficiency when
operating with larger batch sizes. However, in the realm of online services,
the adoption of a larger batch size may lead to longer response times. This
paper aims to provide a dynamic batching scheme that delicately balances
latency and efficiency. The system is modeled as a batch service queue with
size-dependent service times. Then, the design of dynamic batching is
formulated as a semi-Markov decision process (SMDP) problem, with the objective
of minimizing the weighted sum of average response time and average power
consumption. A method is proposed to derive an approximate optimal SMDP
solution, representing the chosen dynamic batching policy. By introducing an
abstract cost to reflect the impact of "tail" states, the space complexity and
the time complexity of the procedure can decrease by 63.5% and 98%,
respectively. Numerical results showcase the superiority of SMDP-based batching
policies across various parameter setups. Additionally, the proposed scheme
exhibits noteworthy flexibility in balancing power consumption and latency.
|
2501.02182 | AdaMixup: A Dynamic Defense Framework for Membership Inference Attack
Mitigation | cs.LG cs.AI | Membership inference attacks have emerged as a significant privacy concern in
the training of deep learning models, where attackers can infer whether a data
point was part of the training set based on the model's outputs. To address
this challenge, we propose a novel defense mechanism, AdaMixup. AdaMixup
employs adaptive mixup techniques to enhance the model's robustness against
membership inference attacks by dynamically adjusting the mixup strategy during
training. This method not only improves the model's privacy protection but also
maintains high performance. Experimental results across multiple datasets
demonstrate that AdaMixup significantly reduces the risk of membership
inference attacks while achieving a favorable trade-off between defensive
efficiency and model accuracy. This research provides an effective solution for
data privacy protection and lays the groundwork for future advancements in
mixup training methods.
|
2501.02184 | Model-Free and Real-Time Bioinspired Unicycle-Based Source Seeking:
Differential Wheeled Robotic Experiments | cs.RO math.OC | Bioinspired robots aimed at source-seeking are often studied, and their
controls designed, using unicycle modeling and formulation. This is true not
only for model-based controllers, but also for model-free, real-time control
methods such as extremum seeking control (ESC). In this paper, we propose a
unicycle-based ESC design applicable to differential wheeled robots that: (1)
has a very simple design, based on a single control-affine law without
state integrators; (2) attenuates oscillations known to persist in ESC designs
(i.e., fully stops at the source); and (3) operates in a model-free, real-time
setting, tolerating environmental/sensor noise. We provide simulation and
real-world robotic experimental results for fixed and moving light source
seeking by a differential wheeled robot using our proposed design. Results
indicate clear advantages of our proposed design when compared to the
literature, including attenuation of undesired oscillations, improved
convergence speed, and better handling of noise.
|
2501.02187 | An Efficient Quadratic Penalty Method for a Class of Graph Clustering
Problems | math.OC cs.SI | Community-based graph clustering is one of the most popular topics in the
analysis of complex social networks. This type of clustering involves grouping
vertices that are considered to share more connections, whereas vertices in
different groups share fewer connections. A successful clustering result forms
densely connected induced subgraphs. This paper studies a specific form of
graph clustering problems that can be formulated as semi-assignment problems,
where the objective function exhibits block properties. We reformulate these
problems as sparse-constrained optimization problems and relax them to
continuous optimization models. We apply a quadratic penalty method to the
relaxation problem and solve the nonlinear quadratic penalty subproblem with
simple box constraints using a projected gradient method based on the active
set. Extensive numerical results indicate that our method provides more
accurate clustering results for solving graph clustering problems at a faster
speed, both for synthetic graphs and real-world network datasets, particularly
in large-scale cases.
|
2501.02189 | Benchmark Evaluations, Applications, and Challenges of Large Vision
Language Models: A Survey | cs.CV cs.AI cs.CL cs.LG cs.RO | Multimodal Vision Language Models (VLMs) have emerged as a transformative
technology at the intersection of computer vision and natural language
processing, enabling machines to perceive and reason about the world through
both visual and textual modalities. For example, models such as CLIP, Claude,
and GPT-4V demonstrate strong reasoning and understanding abilities on visual
and textual data and beat classical single modality vision models on zero-shot
classification. Despite their rapid advancements in research and growing
popularity in applications, a comprehensive survey of existing studies on VLMs
is notably lacking, particularly for researchers aiming to leverage VLMs in
their specific domains. To this end, we provide a systematic overview of VLMs
in the following aspects: model information of the major VLMs developed over
the past five years (2019-2024); the main architectures and training methods of
these VLMs; summary and categorization of the popular benchmarks and evaluation
metrics of VLMs; the applications of VLMs including embodied agents, robotics,
and video generation; the challenges and issues faced by current VLMs such as
hallucination, fairness, and safety. Detailed collections including papers and
model repository links are listed in
https://github.com/zli12321/Awesome-VLM-Papers-And-Models.git.
|
2501.02191 | On LLM-Enhanced Mixed-Type Data Imputation with High-Order Message
Passing | cs.LG cs.SI | Missing data imputation, which aims to impute the missing values in the raw
datasets to achieve the completeness of datasets, is crucial for modern
data-driven models like large language models (LLMs) and has attracted
increasing interest over the past decades. Despite its importance, existing
solutions for missing data imputation either 1) only support numerical and
categorical data or 2) show an unsatisfactory performance due to their design
prioritizing text data and the lack of key properties for tabular data
imputation. In this paper, we propose UnIMP, a Unified IMPutation framework
that leverages LLM and high-order message passing to enhance the imputation of
mixed-type data including numerical, categorical, and text data. Specifically,
we first introduce a cell-oriented hypergraph to model the table. We then
propose BiHMP, an efficient Bidirectional High-order Message-Passing network to
aggregate global-local information and high-order relationships on the
constructed hypergraph while capturing the inter-column heterogeneity and
intra-column homogeneity. To effectively and efficiently align the capacity of
the LLM with the information aggregated by BiHMP, we introduce Xfusion, which,
together with BiHMP, acts as adapters for the LLM. We follow a pre-training and
fine-tuning pipeline to train UnIMP, integrating two optimizations: chunking
technique, which divides tables into smaller chunks to enhance efficiency; and
progressive masking technique, which gradually adapts the model to learn more
complex data patterns. Both theoretical proofs and empirical experiments on 10
real-world datasets highlight the superiority of UnIMP over existing
techniques.
|
2501.02192 | EvoPath: Evolutionary Meta-path Discovery with Large Language Models for
Complex Heterogeneous Information Networks | cs.SI | Heterogeneous Information Networks (HINs) encapsulate diverse entity and
relation types, with meta-paths providing essential meta-level semantics for
knowledge reasoning, although their utility is constrained by discovery
challenges. While Large Language Models (LLMs) offer new prospects for
meta-path discovery due to their extensive knowledge encoding and efficiency,
their adaptation faces challenges such as corpora bias, lexical discrepancies,
and hallucination. This paper pioneers the mitigation of these challenges by
presenting EvoPath, an innovative framework that leverages LLMs to efficiently
identify high-quality meta-paths. EvoPath is carefully designed, with each
component aimed at addressing issues that could lead to potential knowledge
conflicts. With a minimal subset of HIN facts, EvoPath iteratively generates
and evolves meta-paths by dynamically replaying meta-paths in the buffer with
prioritization based on their scores. Comprehensive experiments on three large,
complex HINs with hundreds of relations demonstrate that our framework,
EvoPath, enables LLMs to generate high-quality meta-paths through effective
prompting, confirming its superior performance in HIN reasoning tasks. Further
ablation studies validate the effectiveness of each module within the
framework.
|
2501.02194 | Ensemble-based Deep Multilayer Community Search | cs.SI | Multilayer graphs, consisting of multiple interconnected layers, are widely
used to model diverse relationships in the real world. A community is a
cohesive subgraph that offers valuable insights for analyzing (multilayer)
graphs. Recently, there has been an emerging trend focused on searching
query-driven communities within the multilayer graphs. However, existing
methods for multilayer community search are either 1) rule-based, which suffer
from structure inflexibility; or 2) learning-based, which rely on labeled data
or fail to capture layer-specific characteristics. To address these, we propose
EnMCS, an Ensemble-based unsupervised (i.e., label-free) Multilayer Community
Search framework. EnMCS contains two key components, i.e., HoloSearch which
identifies potential communities in each layer while integrating both
layer-shared and layer-specific information, and EMerge which is an
Expectation-Maximization (EM)-based method that synthesizes the potential
communities from each layer into a consensus community. Specifically,
HoloSearch first employs a graph-diffusion-based model that integrates three
label-free loss functions to learn layer-specific and layer-shared
representations for each node. Communities in each layer are then identified
based on nodes that exhibit high similarity in layer-shared representations
while demonstrating low similarity in layer-specific representations w.r.t. the
query nodes. To account for the varying layer-specific characteristics of each
layer when merging communities, EMerge models the error rates of layers and
true community as latent variables. It then employs the EM algorithm to
simultaneously minimize the error rates of layers and predict the final
consensus community through iterative maximum likelihood estimation.
Experiments over 10 real-world datasets highlight the superiority of EnMCS in
terms of both efficiency and effectiveness.
|
2501.02196 | CPTuning: Contrastive Prompt Tuning for Generative Relation Extraction | cs.CL cs.AI | Generative relation extraction (RE) commonly involves first reformulating RE
as a linguistic modeling problem easily tackled with pre-trained language
models (PLM) and then fine-tuning a PLM with supervised cross-entropy loss.
Although having achieved promising performance, existing approaches assume only
one deterministic relation between each pair of entities without considering
real scenarios where multiple relations may be valid, i.e., entity pair
overlap, causing their limited applications. To address this problem, we
introduce a novel contrastive prompt tuning method for RE, CPTuning, which
learns to associate a candidate relation between two in-context entities with a
probability mass above or below a threshold, corresponding to whether the
relation exists. Beyond learning schema, CPTuning also organizes RE as a
verbalized relation generation task and uses Trie-constrained decoding to
ensure a model generates valid relations. It adaptively picks out the generated
candidate relations with a high estimated likelihood in inference, thereby
achieving multi-relation extraction. We conduct extensive experiments on four
widely used datasets to validate our method. Results show that T5-large
fine-tuned with CPTuning significantly outperforms previous methods in both
single- and multiple-relation extraction.
|
2501.02197 | Majorization-Minimization Dual Stagewise Algorithm for Generalized Lasso | stat.ML cs.LG stat.CO | The generalized lasso is a natural generalization of the celebrated lasso
approach to handle structural regularization problems. Many important methods
and applications fall into this framework, including fused lasso, clustered
lasso, and constrained lasso. To elevate its effectiveness in large-scale
problems, extensive research has been conducted on the computational strategies
of generalized lasso. However, to our knowledge, most studies are under the
linear setup, with limited advances in non-Gaussian and non-linear models. We
propose a majorization-minimization dual stagewise (MM-DUST) algorithm to
efficiently trace out the full solution paths of the generalized lasso problem.
The majorization technique is incorporated to handle different convex loss
functions through their quadratic majorizers. Utilizing the connection between
primal and dual problems and the idea of "slow-brewing" from stagewise
learning, the minimization step is carried out in the dual space through a
sequence of simple coordinate-wise updates on the dual coefficients with a
small step size. Consequently, selecting an appropriate step size enables a
trade-off between statistical accuracy and computational efficiency. We analyze
the computational complexity of MM-DUST and establish the uniform convergence
of the approximated solution paths. Extensive simulation studies and
applications with regularized logistic regression and Cox model demonstrate the
effectiveness of the proposed approach.
|
2501.02198 | Fresh-CL: Feature Realignment through Experts on Hypersphere in
Continual Learning | cs.LG cs.CV | Continual Learning enables models to learn and adapt to new tasks while
retaining prior knowledge. Introducing new tasks, however, can naturally lead
to feature entanglement across tasks, limiting the model's capability to
distinguish between new domain data. In this work, we propose a method called
Feature Realignment through Experts on hyperSpHere in Continual Learning
(Fresh-CL). By leveraging predefined and fixed simplex equiangular tight frame
(ETF) classifiers on a hypersphere, our model improves feature separation both
within and across tasks. However, the projection to a simplex ETF shifts with new
tasks, disrupting structured feature representation of previous tasks and
degrading performance. Therefore, we propose a dynamic extension of ETF through
mixture of experts, enabling adaptive projections onto diverse subspaces to
enhance feature representation. Experiments on 11 datasets demonstrate a 2%
improvement in accuracy compared to the strongest baseline, particularly in
fine-grained datasets, confirming the efficacy of combining ETF and MoE to
improve feature distinction in continual learning scenarios.
|
2501.02199 | Can ChatGPT implement finite element models for geotechnical engineering
applications? | math.NA cs.AI cs.NA | This study assesses the capability of ChatGPT to generate finite element code
for geotechnical engineering applications from a set of prompts. We tested
three different initial boundary value problems using a hydro-mechanically
coupled formulation for unsaturated soils, including the dissipation of excess
pore water pressure through fluid mass diffusion in one-dimensional space,
time-dependent differential settlement of a strip footing, and gravity-driven
seepage. For each case, initial prompting involved providing ChatGPT with
necessary information for finite element implementation, such as balance and
constitutive equations, problem geometry, initial and boundary conditions,
material properties, and spatiotemporal discretization and solution strategies.
Any errors and unexpected results were further addressed through prompt
augmentation processes until the ChatGPT-generated finite element code passed
the verification/validation test. Our results demonstrate that ChatGPT required
minimal code revisions when using the FEniCS finite element library, owing to
its high-level interfaces that enable efficient programming. In contrast, the
MATLAB code generated by ChatGPT necessitated extensive prompt augmentations
and/or direct human intervention, as it involves a significant amount of
low-level programming required for finite element analysis, such as
constructing shape functions or assembling global matrices. Given that prompt
engineering for this task requires an understanding of the mathematical
formulation and numerical techniques, this study suggests that while a large
language model may not yet replace human programmers, it can greatly assist in
the implementation of numerical models.
|
2501.02200 | Learning Evolution via Optimization Knowledge Adaptation | cs.NE cs.AI cs.CV cs.LG | Evolutionary algorithms (EAs) maintain populations through evolutionary
operators to discover diverse solutions for complex tasks while gathering
valuable knowledge, such as historical population data and fitness evaluations.
However, traditional EAs face challenges in dynamically adapting to expanding
knowledge bases, hindering the efficient exploitation of accumulated
information and limiting adaptability to new situations. To address these
issues, we introduce an Optimization Knowledge Adaptation Evolutionary Model
(OKAEM), which features dynamic parameter adjustment using accumulated
knowledge to enhance its optimization capabilities. OKAEM employs attention
mechanisms to model the interactions among individuals, fitness landscapes, and
genetic components separately, thereby parameterizing the evolutionary
operators of selection, crossover, and mutation. These powerful learnable
operators enable OKAEM to benefit from pre-learned extensive prior knowledge
and self-tune with real-time evolutionary insights. Experimental results
demonstrate that OKAEM: 1) exploits prior knowledge for significant performance
gains across various knowledge transfer settings; 2) achieves competitive
performance through self-tuning alone, even without prior knowledge; 3)
outperforms state-of-the-art black-box baselines in a vision-language model
tuning case; 4) can improve its optimization capabilities with growing
knowledge; 5) is capable of emulating principles of natural selection and
genetic recombination.
|
2501.02201 | Accounting for Focus Ambiguity in Visual Questions | cs.CV | No existing work on visual question answering explicitly accounts for
ambiguity regarding where the content described in the question is located in
the image. To fill this gap, we introduce VQ-FocusAmbiguity, the first VQA
dataset that visually grounds each region described in the question that is
necessary to arrive at the answer. We then provide an analysis showing how our
dataset for visually grounding `questions' is distinct from visually grounding
`answers', and characterize the properties of the questions and segmentations
provided in our dataset. Finally, we benchmark modern models for two novel
tasks: recognizing whether a visual question has focus ambiguity and localizing
all plausible focus regions within the image. Results show that the dataset is
challenging for modern models. To facilitate future progress on these tasks, we
publicly share the dataset with an evaluation server at
https://focusambiguity.github.io/.
|
2501.02205 | Digital Twin Calibration with Model-Based Reinforcement Learning | cs.LG | This paper presents a novel methodological framework, called the
Actor-Simulator, that incorporates the calibration of digital twins into
model-based reinforcement learning for more effective control of stochastic
systems with complex nonlinear dynamics. Traditional model-based control often
relies on restrictive structural assumptions (such as linear state transitions)
and fails to account for parameter uncertainty in the model. These issues
become particularly critical in industries such as biopharmaceutical
manufacturing, where process dynamics are complex and not fully known, and only
a limited amount of data is available. Our approach jointly calibrates the
digital twin and searches for an optimal control policy, thus accounting for
and reducing model error. We balance exploration and exploitation by using
policy performance as a guide for data collection. This dual-component approach
provably converges to the optimal policy, and outperforms existing methods in
extensive numerical experiments based on the biopharmaceutical manufacturing
domain.
|
2501.02207 | Self-Supervised Learning for Detecting AI-Generated Faces as Anomalies | cs.CV | The detection of AI-generated faces is commonly approached as a binary
classification task. Nevertheless, the resulting detectors frequently struggle
to adapt to novel AI face generators, which evolve rapidly. In this paper, we
describe an anomaly detection method for AI-generated faces by leveraging
self-supervised learning of camera-intrinsic and face-specific features purely
from photographic face images. The success of our method lies in designing a
pretext task that trains a feature extractor to rank four ordinal exchangeable
image file format (EXIF) tags and classify artificially manipulated face
images. Subsequently, we model the learned feature distribution of photographic
face images using a Gaussian mixture model. Faces with low likelihoods are
flagged as AI-generated. Both quantitative and qualitative experiments validate
the effectiveness of our method. Our code is available at
\url{https://github.com/MZMMSEC/AIGFD_EXIF.git}.
|
2501.02208 | Robust Multi-Dimensional Scaling via Accelerated Alternating Projections | stat.ML cs.LG math.OC | We consider the robust multi-dimensional scaling (RMDS) problem in this
paper. The goal is to localize point locations from pairwise distances that may
be corrupted by outliers. Inspired by classic MDS theories, and nonconvex works
for the robust principal component analysis (RPCA) problem, we propose an
alternating projection based algorithm that is further accelerated by the
tangent space projection technique. For the proposed algorithm, if the outliers
are sparse enough, we can establish linear convergence of the reconstructed
points to the original points after centering and rotation alignment. Numerical
experiments verify the state-of-the-art performances of the proposed algorithm.
|
2501.02211 | Examining the Robustness of Homogeneity Bias to Hyperparameter
Adjustments in GPT-4 | cs.CV cs.CL cs.LG | Vision-Language Models trained on massive collections of human-generated data
often reproduce and amplify societal stereotypes. One critical form of
stereotyping reproduced by these models is homogeneity bias: the tendency to
represent certain groups as more homogeneous than others. We investigate how
this bias responds to hyperparameter adjustments in GPT-4, specifically
examining sampling temperature and top-p, which control the randomness of model
outputs. By generating stories about individuals from different racial and
gender groups and comparing their similarities using vector representations, we
assess both bias robustness and its relationship with hyperparameter values. We
find that (1) homogeneity bias persists across most hyperparameter
configurations, with Black Americans and women being represented more
homogeneously than White Americans and men, (2) the relationship between
hyperparameters and group representations shows unexpected non-linear patterns,
particularly at extreme values, and (3) hyperparameter adjustments affect
racial and gender homogeneity bias differently: while increasing temperature or
decreasing top-p can reduce racial homogeneity bias, these changes show
different effects on gender homogeneity bias. Our findings suggest that while
hyperparameter tuning may mitigate certain biases to some extent, it cannot
serve as a universal solution for addressing homogeneity bias across different
social group dimensions.
|