| id | title | categories | abstract |
|---|---|---|---|
2501.08528
|
Dynamic Portfolio Optimization via Augmented DDPG with Quantum Price
Levels-Based Trading Strategy
|
cs.CE cs.AI
|
With the development of deep learning, the Dynamic Portfolio Optimization (DPO)
problem has received considerable attention in recent years, not only in
finance but also in deep learning. Recent studies have proposed applying Deep
Reinforcement Learning (DRL) to the DPO problem, which has proven more
advantageous than supervised learning. However, certain issues remain unsolved:
1) DRL algorithms typically suffer from slow learning and high sample
complexity, which is especially problematic when dealing with complex financial
data. 2) Researchers use DRL simply to obtain high returns but pay little
attention to risk control and trading strategy, which affects the stability of
model returns. To address these issues, we revamp the internal structure of the
Deep Deterministic Policy Gradient (DDPG) model and propose the Augmented DDPG
model. We also propose an innovative risk control strategy based on Quantum
Price Levels (QPLs) derived from Quantum Finance Theory (QFT). Our experimental
results reveal that, compared to the baseline models, our model achieves better
profitability and risk control with lower sample complexity on the DPO problem.
|
2501.08531
|
A Novel Multiple Interval Prediction Method for Electricity Prices based
on Scenarios Generation: Definition and Method
|
eess.SY cs.SY
|
This paper presents an interval prediction methodology that addresses
limitations in existing evaluation indicators and improves prediction accuracy
and reliability.
First, new evaluation indicators are proposed to comprehensively assess
interval prediction methods, considering both all-sample and single-sample
scenarios. Second, a novel Pattern-Diversity Conditional Time-Series Generative
Adversarial Network (PDCTSGAN) is introduced to generate realistic scenarios,
enabling a new interval prediction approach based on scenario generation. The
PDCTSGAN model innovatively incorporates modifications to random noise inputs,
allowing the generation of pattern-diverse realistic scenarios. These scenarios
are further utilized to construct multiple interval patterns with high coverage
probability and low average width. The effectiveness of the proposed
methodology is demonstrated through comprehensive case studies. The paper
concludes by highlighting future research directions to further enhance
interval prediction methods.
|
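The coverage/width criteria described above correspond to two standard interval-prediction indicators, coverage probability (PICP) and average interval width. As a minimal sketch, these are the generic textbook definitions, not the paper's newly proposed indicators:

```python
def picp(y_true, lower, upper):
    """Prediction Interval Coverage Probability: fraction of observations
    that fall inside their predicted interval."""
    hits = sum(1 for y, lo, hi in zip(y_true, lower, upper) if lo <= y <= hi)
    return hits / len(y_true)

def average_width(lower, upper):
    """Mean width of the prediction intervals."""
    return sum(hi - lo for lo, hi in zip(lower, upper)) / len(lower)

# Toy electricity prices and a hypothetical set of predicted intervals.
prices = [42.0, 45.5, 39.8, 51.2]
lo = [40.0, 44.0, 41.0, 49.0]
hi = [44.0, 47.0, 43.0, 53.0]
print(picp(prices, lo, hi))       # 0.75: three of four prices are covered
print(average_width(lo, hi))      # 3.25
```

A good interval method pushes PICP toward the nominal level while keeping the average width small; the two pull in opposite directions.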
2501.08532
|
Scenarios Generation-based Multiple Interval Prediction Method for
Electricity Prices
|
eess.SY cs.SY
|
This paper introduces an innovative interval prediction methodology aimed at
addressing the limitations of current evaluation indicators while enhancing
prediction accuracy and reliability. To achieve this, new evaluation metrics
are proposed, offering a comprehensive assessment of interval prediction
methods across both all-sample and single-sample scenarios. Additionally, a
novel Pattern-Diversity Conditional Time-Series Generative Adversarial Network
(PDCTSGAN) is developed, designed to generate realistic scenarios and support a
new interval prediction framework based on scenario generation. The PDCTSGAN
model incorporates unique modifications to random noise inputs, enabling the
creation of pattern-diverse and realistic scenarios. These scenarios are then
utilized to produce multiple interval patterns characterized by high coverage
probability and reduced average width. The proposed approach is validated
through detailed case studies, and the paper concludes with a discussion of
future research directions to further refine interval prediction techniques.
|
2501.08537
|
Complexity Control Facilitates Reasoning-Based Compositional
Generalization in Transformers
|
cs.CL cs.LG
|
Transformers have demonstrated impressive capabilities across various tasks,
yet their performance on compositional problems remains a subject of debate. In
this study, we investigate the internal mechanisms underlying Transformers'
behavior in compositional tasks. We find that complexity control strategies
significantly influence whether the model learns primitive-level rules that
generalize out-of-distribution (reasoning-based solutions) or relies solely on
memorized mappings (memory-based solutions). By applying masking strategies to
the model's information circuits and employing multiple complexity metrics, we
reveal distinct internal working mechanisms associated with different solution
types. Further analysis reveals that reasoning-based solutions exhibit a lower
complexity bias, which aligns with the well-studied neuron condensation
phenomenon. This lower complexity bias is hypothesized to be the key factor
enabling these solutions to learn reasoning rules. We validate these
conclusions across multiple real-world datasets, including image generation and
natural language processing tasks, confirming the broad applicability of our
findings.
|
2501.08538
|
Homophily-aware Heterogeneous Graph Contrastive Learning
|
cs.LG cs.SI
|
Heterogeneous graph pre-training (HGP) has demonstrated remarkable
performance across various domains. However, the issue of heterophily in
real-world heterogeneous graphs (HGs) has been largely overlooked. To bridge
this research gap, we propose a novel heterogeneous graph contrastive learning
framework, termed HGMS, which leverages connection strength and multi-view
self-expression to learn homophilous node representations. Specifically, we
design a heterogeneous edge dropping augmentation strategy that enhances the
homophily of augmented views. Moreover, we introduce a multi-view
self-expressive learning method to infer the homophily between nodes. In
practice, we develop two approaches to solve for the self-expressive matrix. The
solved self-expressive matrix serves as an additional augmented view to provide
homophilous information and is used to identify false negatives in contrastive
loss. Extensive experimental results demonstrate the superiority of HGMS across
different downstream tasks.
|
2501.08539
|
Research on stock price forecast of general electric based on mixed
CNN-LSTM model
|
cs.CE
|
Accurate stock price prediction is crucial for investors and financial
institutions, yet the complexity of the stock market makes it highly
challenging. This study aims to construct an effective model to enhance the
prediction ability of General Electric's stock price trend. The CNN-LSTM
model is adopted, combining the feature extraction ability of CNN with the
long-term dependency handling ability of LSTM, and the Adam optimizer is used to
adjust the parameters. In the data preparation stage, historical trading data
of General Electric's stock is collected. After cleaning, handling missing
values, and feature engineering, features with strong correlations to the
closing price are selected and dimensionality reduction is performed. During
model training, the data is divided into training, validation, and testing sets
in a ratio of 7:2:1. The Stochastic Gradient Descent algorithm is used with a
dynamic learning rate adjustment and L2 regularization, and the Mean Squared
Error is used as the loss function, evaluated by variance, R-squared score,
and maximum error. Experimental results show that the model loss decreases
steadily, and the predicted values align well with the actual values, providing
a powerful tool for investment decisions. However, the model's performance in
real-time and extreme market conditions remains to be tested, and future
improvements could consider incorporating more data sources.
|
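A 7:2:1 split of a price series, as described above, must preserve temporal order (shuffling would leak future information into training). A minimal sketch of such a chronological split:

```python
def chronological_split(series, ratios=(0.7, 0.2, 0.1)):
    """Split an ordered series into train/validation/test sets without
    shuffling, so each set covers a later period than the previous one."""
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (series[:n_train],
            series[n_train:n_train + n_val],
            series[n_train + n_val:])

prices = list(range(100))                 # stand-in for daily closing prices
train, val, test = chronological_split(prices)
print(len(train), len(val), len(test))    # 70 20 10
```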
2501.08540
|
Knowledge prompt chaining for semantic modeling
|
cs.CL cs.AI cs.DB
|
The task of building semantics for structured data such as CSV, JSON, and XML
files is highly relevant in the knowledge representation field. Even though a
vast amount of structured data is available on the internet, mapping it to
domain ontologies to build semantics remains very challenging, as it requires
the model to understand and learn graph-structured knowledge; otherwise, the
task demands considerable human effort and cost. In this paper, we propose a
novel automatic semantic modeling framework: Knowledge Prompt Chaining. It
serializes graph-structured knowledge and injects it into LLMs properly within
a prompt chaining architecture. Through this knowledge injection and prompt
chaining, the model in our framework learns the structural information and
latent space of the graph and naturally generates semantic labels and semantic
graphs following the chains' instructions. Experimental results show that our
method achieves better performance than existing leading techniques, despite
using reduced structured input data.
|
2501.08545
|
T2VEval: T2V-generated Videos Benchmark Dataset and Objective Evaluation
Method
|
cs.CV
|
Recent advances in text-to-video (T2V) technology, as demonstrated by models
such as Runway Gen-3, Pika, Sora, and Kling, have significantly broadened the
applicability and popularity of the technology. This progress has created a
growing demand for accurate quality assessment metrics to evaluate the
perceptual quality of T2V-generated videos and optimize video generation
models. However, assessing the quality of text-to-video outputs remains
challenging due to the presence of highly complex distortions, such as
unnatural actions and phenomena that defy human cognition. To address these
challenges, we constructed T2VEval-Bench, a multi-dimensional benchmark dataset
for text-to-video quality evaluation, which contains 148 textual prompts and
1,783 videos generated by 13 T2V models. To ensure a comprehensive evaluation,
we scored each video on four dimensions in the subjective experiment, which are
overall impression, text-video consistency, realness, and technical quality.
Based on T2VEval-Bench, we developed T2VEval, a multi-branch fusion scheme for
T2V quality evaluation. T2VEval assesses videos across three branches:
text-video consistency, realness, and technical quality. Using an
attention-based fusion module, T2VEval effectively integrates features from
each branch and predicts scores with the aid of a large language model.
Additionally, we implemented a divide-and-conquer training strategy, enabling
each branch to learn targeted knowledge while maintaining synergy with the
others. Experimental results demonstrate that T2VEval achieves state-of-the-art
performance across multiple metrics.
|
2501.08547
|
OMEGA: A Low-Latency GNN Serving System for Large Graphs
|
cs.DC cs.LG
|
Graph Neural Networks (GNNs) have been widely adopted for their ability to
compute expressive node representations in graph datasets. However, serving
GNNs on large graphs is challenging due to the high communication, computation,
and memory overheads of constructing and executing computation graphs, which
represent information flow across large neighborhoods. Existing approximation
techniques in training can mitigate the overheads but, in serving, still lead
to high latency and/or accuracy loss. To this end, we propose OMEGA, a system
that enables low-latency GNN serving for large graphs with minimal accuracy
loss through two key ideas. First, OMEGA employs selective recomputation of
precomputed embeddings, which allows for reusing precomputed computation
subgraphs while selectively recomputing a small fraction to minimize accuracy
loss. Second, we develop computation graph parallelism, which reduces
communication overhead by parallelizing the creation and execution of
computation graphs across machines. Our evaluation with large graph datasets
and GNN models shows that OMEGA significantly outperforms state-of-the-art
techniques.
|
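OMEGA's first idea, selective recomputation over precomputed embeddings, can be caricatured in a few lines. This is a hypothetical sketch with invented names and a toy one-layer "GNN" (a node's embedding is just the mean of its neighbor features), not OMEGA's actual system:

```python
def embed(node, graph):
    """Toy 'GNN layer': a node's embedding is the mean of its neighbor features."""
    neighbors = graph[node]
    return sum(neighbors) / len(neighbors)

graph = {"a": [1.0, 3.0], "b": [2.0, 2.0], "c": [4.0, 6.0]}
cache = {n: embed(n, graph) for n in graph}   # offline precomputation

def serve(node, stale):
    """Serve from the cache, recomputing only nodes marked stale."""
    if node in stale:
        cache[node] = embed(node, graph)      # selective recomputation
        stale.discard(node)
    return cache[node]

graph["c"] = [4.0, 8.0]                       # a graph update invalidates one node
stale = {"c"}
print(serve("a", stale), serve("c", stale))   # 2.0 6.0
```

The serving-time saving comes from the `stale` set staying small relative to the graph, so most requests are cache hits.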
2501.08549
|
The Devil is in Temporal Token: High Quality Video Reasoning
Segmentation
|
cs.CV cs.AI
|
Existing methods for Video Reasoning Segmentation rely heavily on a single
special token to represent the object in the keyframe or the entire video,
inadequately capturing spatial complexity and inter-frame motion. To overcome
these challenges, we propose VRS-HQ, an end-to-end video reasoning segmentation
approach that leverages Multimodal Large Language Models (MLLMs) to inject rich
spatiotemporal features into hierarchical tokens. Our key innovations include a
Temporal Dynamic Aggregation (TDA) and a Token-driven Keyframe Selection (TKS).
Specifically, we design frame-level <SEG> and temporal-level <TAK> tokens that
utilize MLLM's autoregressive learning to effectively capture both local and
global information. Subsequently, we apply a similarity-based weighted fusion
and frame selection strategy, then utilize SAM2 to perform keyframe
segmentation and propagation. To enhance keyframe localization accuracy, the
TKS filters keyframes based on SAM2's occlusion scores during inference. VRS-HQ
achieves state-of-the-art performance on ReVOS, surpassing VISA by
5.9%/12.5%/9.1% in J&F scores across the three subsets. These results highlight
the strong temporal reasoning and segmentation capabilities of our method. Code
and model weights will be released at VRS-HQ.
|
2501.08551
|
A Theory of Optimistically Universal Online Learnability for General
Concept Classes
|
stat.ML cs.LG
|
We provide a full characterization of the concept classes that are
optimistically universally online learnable with $\{0, 1\}$ labels. The notion
of optimistically universal online learning was defined in [Hanneke, 2021] in
order to understand learnability under minimal assumptions. In this paper,
following the philosophy behind that work, we investigate two questions,
namely, for every concept class: (1) What are the minimal assumptions on the
data process admitting online learnability? (2) Is there a learning algorithm
which succeeds under every data process satisfying the minimal assumptions?
Such an algorithm is said to be optimistically universal for the given concept
class. We resolve both of these questions for all concept classes, and
moreover, as part of our solution, we design general learning algorithms for
each case. Finally, we extend these algorithms and results to the agnostic
case, showing an equivalence between the minimal assumptions on the data
process for learnability in the agnostic and realizable cases, for every
concept class, as well as the equivalence of optimistically universal
learnability.
|
2501.08552
|
Reinforcement Learning-Enhanced Procedural Generation for Dynamic
Narrative-Driven AR Experiences
|
cs.AI cs.GR cs.HC cs.LG
|
Procedural Content Generation (PCG) is widely used to create scalable and
diverse environments in games. However, existing methods, such as the Wave
Function Collapse (WFC) algorithm, are often limited to static scenarios and
lack the adaptability required for dynamic, narrative-driven applications,
particularly in augmented reality (AR) games. This paper presents a
reinforcement learning-enhanced WFC framework designed for mobile AR
environments. By integrating environment-specific rules and dynamic tile weight
adjustments informed by reinforcement learning (RL), the proposed method
generates maps that are both contextually coherent and responsive to gameplay
needs. Comparative evaluations and user studies demonstrate that the framework
achieves superior map quality and delivers immersive experiences, making it
well-suited for narrative-driven AR games. Additionally, the method holds
promise for broader applications in education, simulation training, and
immersive extended reality (XR) experiences, where dynamic and adaptive
environments are critical.
|
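The "dynamic tile weight adjustments informed by RL" can be illustrated with a toy collapse step. The tile names, the reward source, and the additive update rule below are all invented for illustration; the paper's actual RL formulation is not reproduced here:

```python
import random

# WFC collapses a cell by sampling a tile; here an external reward signal
# (e.g. from an RL policy judging narrative fit) nudges the sampling weights.
weights = {"grass": 1.0, "water": 1.0, "ruin": 1.0}

def collapse_cell(rng):
    """Sample one tile for a cell according to the current weights."""
    tiles = list(weights)
    return rng.choices(tiles, weights=[weights[t] for t in tiles])[0]

def update_weight(tile, reward, lr=0.5):
    """Move a tile's weight toward the received reward, floored at a minimum."""
    weights[tile] = max(0.1, weights[tile] + lr * reward)

rng = random.Random(0)
update_weight("ruin", reward=2.0)       # the narrative calls for more ruins
picks = [collapse_cell(rng) for _ in range(1000)]
print(picks.count("ruin") > picks.count("grass"))  # True: ruins now favored
```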
2501.08553
|
DynamicFace: High-Quality and Consistent Video Face Swapping using
Composable 3D Facial Priors
|
cs.CV
|
Face swapping transfers the identity of a source face to a target face while
retaining the attributes like expression, pose, hair, and background of the
target face. Advanced face swapping methods have achieved attractive results.
However, these methods often inadvertently transfer identity information from
the target face, compromising expression-related details and accurate identity.
We propose DynamicFace, a novel method that leverages the power of a diffusion
model and plug-and-play temporal layers for video face swapping. First, we
introduce four fine-grained face conditions using 3D facial priors. All
conditions are designed to be disentangled from each other for precise and
unique control. Then, we adopt Face Former and ReferenceNet for high-level and
detailed identity injection. Through experiments on the FF++ dataset, we
demonstrate that our method achieves state-of-the-art results in face swapping,
showcasing superior image quality, identity preservation, and expression
accuracy. Moreover, our method can be easily transferred to the video domain
with a temporal attention layer. Our code and results will be available on the project
page: https://dynamic-face.github.io/
|
2501.08558
|
LAMS: LLM-Driven Automatic Mode Switching for Assistive Teleoperation
|
cs.RO cs.AI cs.HC cs.LG
|
Teleoperating high degrees-of-freedom (DoF) robotic manipulators via low-DoF
controllers like joysticks often requires frequent switching between control
modes, where each mode maps controller movements to specific robot actions.
Manually performing this frequent switching can make teleoperation cumbersome
and inefficient. On the other hand, existing automatic mode-switching
solutions, such as heuristic-based or learning-based methods, are often
task-specific and lack generalizability. In this paper, we introduce LLM-Driven
Automatic Mode Switching (LAMS), a novel approach that leverages Large Language
Models (LLMs) to automatically switch control modes based on task context.
Unlike existing methods, LAMS requires no prior task demonstrations and
incrementally improves by integrating user-generated mode-switching examples.
We validate LAMS through an ablation study and a user study with 10
participants on complex, long-horizon tasks, demonstrating that LAMS
effectively reduces manual mode switches, is preferred over alternative
methods, and improves performance over time. The project website with
supplementary materials is at https://lams-assistance.github.io/.
|
2501.08561
|
ANSR-DT: An Adaptive Neuro-Symbolic Learning and Reasoning Framework for
Digital Twins
|
cs.AI cs.HC cs.LG cs.SC
|
In this paper, we propose an Adaptive Neuro-Symbolic Learning Framework for
digital twin technology called ANSR-DT. Our approach combines pattern
recognition algorithms with reinforcement learning and symbolic reasoning to
enable real-time learning and adaptive intelligence. This integration enhances
the understanding of the environment and promotes continuous learning, leading
to better and more effective decision-making in real-time for applications that
require human-machine collaboration. We evaluated the ANSR-DT
framework for its ability to learn and adapt to dynamic patterns, observing
significant improvements in decision accuracy, reliability, and
interpretability when compared to existing state-of-the-art methods. However,
challenges still exist in extracting and integrating symbolic rules in complex
environments, which limits the full potential of our framework in heterogeneous
settings. Moreover, our ongoing research aims to address this issue in the
future by ensuring seamless integration of neural models at large. In addition,
our open-source implementation promotes reproducibility and encourages future
research to build on our foundational work.
|
2501.08562
|
MIAFEx: An Attention-based Feature Extraction Method for Medical Image
Classification
|
cs.CV cs.LG
|
Feature extraction techniques are crucial in medical image classification;
however, classical feature extractors combined with traditional machine
learning classifiers often exhibit significant limitations in providing
sufficient discriminative information for complex image sets. While
Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) have shown
promise in feature extraction, they are prone to overfitting due to the
inherent characteristics of medical imaging data, including small sample sizes
or high intra-class variance. In this work, the Medical Image Attention-based
Feature Extractor (MIAFEx) is proposed, a novel method that employs a learnable
refinement mechanism to enhance the classification token within the Transformer
encoder architecture. This mechanism adjusts the token based on learned
weights, improving the extraction of salient features and enhancing the model's
adaptability to the challenges presented by medical imaging data. The quality
of the MIAFEx output features is compared against that of classical feature
extractors using traditional and hybrid classifiers. The performance of these
features is also compared against modern CNN and ViT models in classification
tasks, demonstrating superior accuracy and robustness across multiple complex
medical imaging classification datasets. This advantage is particularly
pronounced in scenarios with limited training data, where traditional and
modern models often struggle to generalize effectively. The source code of this
proposal can be found at
https://github.com/Oscar-RamosS/Medical-Image-Attention-based-Feature-Extractor-MIAFEx
|
2501.08563
|
Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and
Applications
|
cs.LG
|
The softmax function is a cornerstone of multi-class classification, integral
to a wide range of machine learning applications, from large-scale retrieval
and ranking models to advanced large language models. However, its
computational cost grows linearly with the number of classes, which becomes
prohibitively expensive in scenarios with millions or even billions of classes.
The sampled softmax, which relies on self-normalized importance sampling, has
emerged as a powerful alternative, significantly reducing computational
complexity. Yet, its estimator remains unbiased only when the sampling
distribution matches the true softmax distribution. To improve both
approximation accuracy and sampling efficiency, we propose the MIDX Sampler, a
novel adaptive sampling strategy based on an inverted multi-index approach.
Concretely, we decompose the softmax probability into several multinomial
probabilities, each associated with a specific set of codewords and the last
associated with the residual score of queries, thus reducing time complexity to
the number of codewords instead of the number of classes. To further boost
efficiency, we replace the query-specific residual probability with a simple
uniform distribution, simplifying the computation while retaining high
performance. Our method is backed by rigorous theoretical analysis, addressing
key concerns such as sampling bias, gradient bias, convergence rates, and
generalization error bounds. The results demonstrate that a smaller divergence
from the ideal softmax distribution leads to faster convergence and improved
generalization. Extensive experiments on large-scale language models,
sequential recommenders, and extreme multi-class classification tasks confirm
that the MIDX-Sampler delivers superior effectiveness and efficiency compared
to existing approaches.
|
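The importance-sampling primitive underneath sampled softmax can be shown in a few lines. This sketch only illustrates the generic estimator with a uniform proposal, not the MIDX Sampler's inverted multi-index decomposition: the partition-function estimate is unbiased, while the resulting probability estimate is only consistent, with bias shrinking as the proposal approaches the true softmax distribution (the point the abstract makes):

```python
import math
import random

def partition(logits):
    """Exact softmax partition function Z = sum_c exp(l_c): O(num classes)."""
    return sum(math.exp(x) for x in logits)

def sampled_partition(logits, proposal, k, rng):
    """Importance-sampling estimate Z_hat = (1/k) * sum_j exp(l_j) / q_j
    for j ~ q; E[Z_hat] = Z, at O(k) cost instead of O(num classes)."""
    idx = rng.choices(range(len(logits)), weights=proposal, k=k)
    return sum(math.exp(logits[j]) / proposal[j] for j in idx) / k

logits = [2.0, 1.0, 0.5, 0.1, -1.0]
q = [0.2] * 5                      # uniform proposal over the five classes
rng = random.Random(0)
z_true = partition(logits)
z_est = sum(sampled_partition(logits, q, 3, rng) for _ in range(5000)) / 5000
p_est = math.exp(logits[0]) / z_est  # sampled estimate of class 0's probability
print(round(z_true, 2), round(z_est, 2))
```

A better proposal (closer to the softmax itself, which is what the multi-index structure buys) lowers the variance of `z_est` and hence the bias of `p_est`.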
2501.08565
|
DualOpt: A Dual Divide-and-Optimize Algorithm for the Large-scale
Traveling Salesman Problem
|
cs.AI
|
This paper proposes a dual divide-and-optimize algorithm (DualOpt) for
solving the large-scale traveling salesman problem (TSP). DualOpt combines two
complementary strategies to improve both solution quality and computational
efficiency. The first strategy is a grid-based divide-and-conquer procedure
that partitions the TSP into smaller sub-problems, solving them in parallel and
iteratively refining the solution by merging nodes and partial routes. The
process continues until only one grid remains, yielding a high-quality initial
solution. The second strategy involves a path-based divide-and-optimize
procedure that further optimizes the solution by dividing it into sub-paths,
optimizing each using a neural solver, and merging them back to progressively
improve the overall solution. Extensive experiments conducted on two groups of
TSP benchmark instances, including randomly generated instances with up to
100,000 nodes and real-world datasets from TSPLIB, demonstrate the
effectiveness of DualOpt. The proposed DualOpt achieves highly competitive
results compared to 10 state-of-the-art algorithms in the literature. In
particular, DualOpt achieves an improvement gap of up to 1.40% on the largest
instance TSP100K, with a remarkable 104x speed-up over the leading heuristic
solver LKH3. Additionally, DualOpt demonstrates strong generalization on TSPLIB
benchmarks, confirming its capability to tackle diverse real-world TSP
applications.
|
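A hypothetical sketch of the grid-based divide step only: bucket cities into grid cells, sweep the cells in row-major order, and chain a greedy nearest-neighbour route through each cell. The real DualOpt additionally merges nodes and partial routes iteratively and refines sub-paths with a neural solver, none of which is shown here:

```python
import math

def grid_tour(cities, cell=0.5):
    """Partition cities into grid cells, then chain a nearest-neighbour
    route cell by cell in a row-major sweep."""
    cells = {}
    for c in cities:
        cells.setdefault((int(c[0] // cell), int(c[1] // cell)), []).append(c)
    tour, current = [], (0.0, 0.0)
    for key in sorted(cells):                  # row-major sweep over cells
        pending = cells[key]
        while pending:                         # nearest neighbour inside a cell
            nxt = min(pending, key=lambda p: math.dist(current, p))
            pending.remove(nxt)
            tour.append(nxt)
            current = nxt
    return tour

cities = [(0.1, 0.1), (0.9, 0.2), (0.2, 0.8), (0.8, 0.9), (0.4, 0.4)]
tour = grid_tour(cities)
print(len(tour) == len(cities))  # True: every city is visited exactly once
```

Each cell is an independent sub-problem, which is what makes the divide step parallelizable.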
2501.08566
|
Towards Lightweight and Stable Zero-shot TTS with Self-distilled
Representation Disentanglement
|
cs.SD cs.AI eess.AS
|
Zero-shot Text-To-Speech (TTS) synthesis shows great promise for personalized
voice customization through voice cloning. However, current methods for
achieving zero-shot TTS heavily rely on large model scales and extensive
training datasets to ensure satisfactory performance and generalizability
across various speakers. This raises concerns regarding both deployment costs
and data security. In this paper, we present a lightweight and stable zero-shot
TTS system. We introduce a novel TTS architecture designed to effectively model
linguistic content and various speaker attributes from source speech and prompt
speech, respectively. Furthermore, we present a two-stage self-distillation
framework that constructs parallel data pairs for effectively disentangling
linguistic content and speakers from the perspective of training data.
Extensive experiments show that our system exhibits excellent performance and
superior stability on the zero-shot TTS tasks. Moreover, it shows markedly
superior computational efficiency, with RTFs of 0.13 and 0.012 on the CPU and
GPU, respectively.
|
2501.08569
|
Evaluating SAT and SMT Solvers on Large-Scale Sudoku Puzzles
|
cs.AI cs.LO
|
Modern SMT solvers have revolutionized the approach to constraint
satisfaction problems by integrating advanced theory reasoning and encoding
techniques. In this work, we evaluate the performance of modern SMT solvers
(Z3, CVC5, and DPLL(T)) against a standard DPLL-based SAT solver. By benchmarking
these solvers on novel, diverse 25x25 Sudoku puzzles of various difficulty
levels created by our improved Sudoku generator, we examine the impact of
advanced theory reasoning and encoding techniques. Our findings demonstrate
that modern SMT solvers significantly outperform classical SAT solvers. This
work highlights the evolution of logical solvers and exemplifies the utility of
SMT solvers in addressing large-scale constraint satisfaction problems.
|
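For intuition, the constraints such solvers handle can be made explicit with a toy pure-Python backtracking search on a small grid. This is not the paper's Z3/CVC5 pipeline; a SAT/SMT encoding would instead assert the same row, column, and box constraints declaratively and let the solver search:

```python
import math

def solve(grid):
    """Backtracking solver for an n x n Sudoku (n a perfect square); 0 = empty.
    Tries values cell by cell, undoing any assignment that leads to a dead end."""
    n = len(grid)
    b = int(math.isqrt(n))
    for r in range(n):
        for c in range(n):
            if grid[r][c] == 0:
                for v in range(1, n + 1):
                    row_ok = v not in grid[r]
                    col_ok = all(grid[i][c] != v for i in range(n))
                    br, bc = b * (r // b), b * (c // b)
                    box_ok = all(grid[i][j] != v
                                 for i in range(br, br + b)
                                 for j in range(bc, bc + b))
                    if row_ok and col_ok and box_ok:
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0      # backtrack
                return False
    return True

puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
print(solve(puzzle))  # True: the grid is completed in place
```

On 25x25 instances this naive search blows up combinatorially, which is exactly where the theory reasoning and encoding techniques of SMT solvers pay off.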
2501.08570
|
Information Entropy Invariance: Enhancing Length Extrapolation in
Attention Mechanisms
|
cs.CL
|
Since the emergence of research on improving the length extrapolation
capabilities of large language models in 2021, some studies have made
modifications to the scaling factor in the scaled dot-product attention
mechanism as part of their proposed methods without rigorous theoretical
justifications. To fill this gap, we propose two new scaled temperatures based
on information entropy invariance to enhance length extrapolation. First, a
training-free method, InfoScale, is designed for dot-product attention and
preserves focus on original tokens during length extrapolation by ensuring
consistent entropy. Second, we theoretically analyze the impact of scaling
(CosScale) on cosine attention. Experimental results demonstrate that combining
InfoScale and CosScale achieves state-of-the-art performance on the GAU-α
model with a context window extended to 64 times the training length, and
outperforms seven existing methods. Our analysis reveals that significantly
increasing CosScale approximates the Windowed Attention, and highlights the
significance of attention score dilution as a key challenge in long-range
context handling. The code and data are available at
https://github.com/HT-NEKO/Information-Entropy-Invariance.
|
2501.08572
|
DNMDR: Dynamic Networks and Multi-view Drug Representations for Safe
Medication Recommendation
|
cs.LG cs.IR
|
Medication Recommendation (MR) is a promising research topic that has spawned
diverse applications in the healthcare and clinical domains. However, existing
methods mainly rely on sequential modeling and static graphs for representation
learning, which ignore the dynamic correlations in diverse medical events of a
patient's temporal visits, leading to insufficient global structural
exploration on nodes. Additionally, mitigating drug-drug interactions (DDIs) is
another issue determining the utility of the MR systems. To address the
challenges mentioned above, this paper proposes a novel MR method with the
integration of dynamic networks and multi-view drug representations (DNMDR).
Specifically, weighted snapshot sequences for dynamic heterogeneous networks
are constructed based on discrete visits in temporal EHRs, and all the dynamic
networks are jointly trained to gain both structural correlations in diverse
medical events and temporal dependency in historical health conditions, for
achieving comprehensive patient representations with both semantic features and
structural relationships. Moreover, combining the drug co-occurrences and
adverse drug-drug interactions (DDIs) in internal view of drug molecule
structure and interactive view of drug pairs, the safe drug representations are
available to obtain high-quality medication combination recommendation.
Finally, extensive experiments on real-world datasets are conducted for
performance evaluation, and the experimental results demonstrate that the
proposed DNMDR method outperforms the state-of-the-art baseline models by a
large margin on various metrics such as PRAUC, Jaccard, and DDI rate.
|
2501.08575
|
GOTLoc: General Outdoor Text-based Localization Using Scene Graph
Retrieval with OpenStreetMap
|
cs.RO cs.CV
|
We propose GOTLoc, a robust localization method capable of operating even in
outdoor environments where GPS signals are unavailable. The method achieves
this robust localization by leveraging comparisons between scene graphs
generated from text descriptions and maps. Existing text-based localization
studies typically represent maps as point clouds and identify the most similar
scenes by comparing embeddings of text and point cloud data. However, point
cloud maps have limited scalability as it is impractical to pre-generate maps
for all outdoor spaces. Furthermore, their large data size makes it challenging
to store and utilize them directly on actual robots. To address these issues,
GOTLoc leverages compact data structures, such as scene graphs, to store
spatial information, enabling individual robots to carry and utilize large
amounts of map data. Additionally, by utilizing publicly available map data,
such as OpenStreetMap, which provides global information on outdoor spaces, we
eliminate the need for additional effort to create custom map data. For
performance evaluation, we utilized the KITTI360Pose dataset in conjunction
with corresponding OpenStreetMap data to compare the proposed method with
existing approaches. Our results demonstrate that the proposed method achieves
accuracy comparable to algorithms relying on point cloud maps. Moreover, in
city-scale tests, GOTLoc required significantly less storage compared to point
cloud-based methods and completed overall processing within a few seconds,
validating its applicability to real-world robotics. Our code is available at
https://github.com/donghwijung/GOTLoc.
|
2501.08577
|
Scalable and High-Quality Neural Implicit Representation for 3D
Reconstruction
|
cs.CV cs.GR
|
Various SDF-based neural implicit surface reconstruction methods have been
proposed recently, and have demonstrated remarkable modeling capabilities.
However, due to the global nature and limited representation ability of a
single network, existing methods still suffer from many drawbacks, such as
limited accuracy and scale of the reconstruction. In this paper, we propose a
versatile, scalable and high-quality neural implicit representation to address
these issues. We integrate a divide-and-conquer approach into the neural
SDF-based reconstruction. Specifically, we model the object or scene as a
fusion of multiple independent local neural SDFs with overlapping regions. The
construction of our representation involves three key steps: (1) constructing
the distribution and overlap relationship of the local radiance fields based on
object structure or data distribution, (2) relative pose registration for
adjacent local SDFs, and (3) SDF blending. Thanks to the independent
representation of each local region, our approach can not only achieve
high-fidelity surface reconstruction, but also enable scalable scene
reconstruction. Extensive experimental results demonstrate the effectiveness
and practicality of our proposed method.
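The third construction step above, SDF blending, can be illustrated with a
minimal sketch. The partition-of-unity weighting scheme below is our own
assumption for illustration (the paper does not specify this exact form):
each local SDF contributes with a smooth weight that falls to zero at the
boundary of its region.

```python
import numpy as np

def blend_sdfs(x, local_sdfs, centers, radius):
    """Blend overlapping local SDFs into one global SDF value at points x.

    Hypothetical weighting: each local SDF's weight decays smoothly to zero
    at the edge of its spherical region of influence.
    x: (N, 3) query points; centers: (M, 3) region centers.
    """
    values = np.stack([f(x) for f in local_sdfs], axis=0)              # (M, N)
    d = np.linalg.norm(x[None, :, :] - centers[:, None, :], axis=-1)   # (M, N)
    w = np.clip(1.0 - d / radius, 0.0, None) ** 2                      # smooth falloff
    w_sum = w.sum(axis=0, keepdims=True)
    # Normalize weights; fall back to a uniform blend where no region covers x.
    w = np.where(w_sum > 0, w / np.maximum(w_sum, 1e-8), 1.0 / len(local_sdfs))
    return (w * values).sum(axis=0)                                    # (N,)
```

Inside a region where only one local SDF has support, the blend reduces
exactly to that local SDF, which is what makes the overlap-based fusion
consistent with each independent reconstruction.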
|
2501.08579
|
What Limits LLM-based Human Simulation: LLMs or Our Design?
|
cs.CL
|
We argue that advancing LLM-based human simulation requires addressing both
LLM's inherent limitations and simulation framework design challenges. Recent
studies have revealed significant gaps between LLM-based human simulations and
real-world observations, highlighting these dual challenges. To address these
gaps, we present a comprehensive analysis of LLM limitations and our design
issues, proposing targeted solutions for both aspects. Furthermore, we explore
future directions that address both challenges simultaneously, particularly in
data collection, LLM generation, and evaluation. To support further research in
this field, we provide a curated collection of LLM-based human simulation
resources.\footnote{https://github.com/Persdre/llm-human-simulation}
|
2501.08580
|
Densely Connected Parameter-Efficient Tuning for Referring Image
Segmentation
|
cs.CV
|
In the domain of computer vision, Parameter-Efficient Tuning (PET) is
increasingly replacing the traditional paradigm of pre-training followed by
full fine-tuning. PET is particularly favored for its effectiveness in large
foundation models, as it streamlines transfer learning costs and optimizes
hardware utilization. However, the current PET methods are mainly designed for
single-modal optimization. While some pioneering studies have undertaken
preliminary explorations, they still remain at the level of aligned encoders
(e.g., CLIP) and lack exploration of misaligned encoders. These methods show
sub-optimal performance with misaligned encoders, as they fail to effectively
align the multimodal features during fine-tuning. In this paper, we introduce
DETRIS, a parameter-efficient tuning framework designed to enhance low-rank
visual feature propagation by establishing dense interconnections between each
layer and all preceding layers, which enables effective cross-modal feature
interaction and adaptation to misaligned encoders. We also suggest using text
adapters to improve textual features. Our simple yet efficient approach greatly
surpasses state-of-the-art methods with 0.9% to 1.8% backbone parameter
updates, evaluated on challenging benchmarks. Our project is available at
\url{https://github.com/jiaqihuang01/DETRIS}.
|
2501.08581
|
Normalize Then Propagate: Efficient Homophilous Regularization for
Few-shot Semi-Supervised Node Classification
|
cs.LG
|
Graph Neural Networks (GNNs) have demonstrated remarkable ability in
semi-supervised node classification. However, most existing GNNs rely heavily
on a large amount of labeled data for training, which is labor-intensive and
requires extensive domain knowledge. In this paper, we first analyze the
restrictions on GNN generalization from the perspective of supervision signals
in the context of few-shot semi-supervised node classification. To address
these challenges, we propose a novel algorithm named NormProp, which utilizes
the homophily assumption of unlabeled nodes to generate additional supervision
signals, thereby enhancing the generalization against label scarcity. The key
idea is to efficiently capture both the class information and the consistency
of aggregation during message passing, via decoupling the direction and
Euclidean norm of node representations. Moreover, we conduct a theoretical
analysis to determine the upper bound of Euclidean norm, and then propose
homophilous regularization to constrain the consistency of unlabeled nodes.
Extensive experiments demonstrate that NormProp achieves state-of-the-art
performance in low-label-rate scenarios with low computational complexity.
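The decoupling idea can be sketched as follows. The function name and details
are our illustration, not the paper's code: features are projected onto the
unit sphere before propagation, so that after message passing the direction of
a node's representation carries class information while its Euclidean norm
reflects how consistent the aggregation was (the paper derives an upper bound
on this norm).

```python
import numpy as np

def normalize_then_propagate(X, A, k=2):
    """Illustrative normalize-then-propagate step.

    X: (N, D) node features; A: (N, N) adjacency matrix; k: propagation steps.
    Returns the direction (unit vector) and Euclidean norm of each node's
    propagated representation.
    """
    # L2-normalize rows: decouple direction from magnitude up front.
    H = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    # Symmetric normalization of adjacency with self-loops.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    for _ in range(k):
        H = A_norm @ H
    direction = H / np.maximum(np.linalg.norm(H, axis=1, keepdims=True), 1e-12)
    norms = np.linalg.norm(H, axis=1)
    return direction, norms
```

A fully homophilous neighborhood (all neighbors pointing the same way) keeps
the norm high, while disagreeing neighbors cancel and shrink it, which is the
signal the homophilous regularization constrains.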
|
2501.08582
|
LoRS: Efficient Low-Rank Adaptation for Sparse Large Language Model
|
cs.CL
|
Existing low-rank adaptation (LoRA) methods face challenges on sparse large
language models (LLMs) due to the inability to maintain sparsity. Recent works
introduced methods that maintain sparsity by augmenting LoRA techniques with
additional masking mechanisms. Despite these successes, such approaches suffer
from increased memory and computation overhead, which undermines the efficiency
of LoRA methods. In response to this limitation, we introduce LoRS, an
innovative method designed to achieve both memory and computation efficiency
when fine-tuning sparse LLMs. To mitigate the substantial memory and
computation demands associated with preserving sparsity, our approach
incorporates weight recomputation and computational graph rearrangement. In
addition, we improve the effectiveness of LoRS through better adapter
initialization. These innovations lead to a notable reduction in memory and
computation consumption during the fine-tuning phase, all while achieving
performance levels that outperform existing LoRA approaches.
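The masked low-rank formulation that this line of work builds on can be
sketched minimally (our own formulation for illustration, not the LoRS
implementation): re-applying the pruning mask to the adapted weight keeps the
pruned entries at exactly zero, and recomputing the masked weight per forward
pass, rather than materializing a second dense matrix, is the kind of memory
saving the recomputation strategy targets.

```python
import numpy as np

def sparse_lora_forward(x, W, A, B, M):
    """Forward pass through a sparse weight with a masked LoRA update.

    W: (d_out, d_in) frozen sparse base weight; A: (r, d_in), B: (d_out, r)
    low-rank adapters; M: (d_out, d_in) binary sparsity mask.
    """
    W_eff = M * (W + B @ A)   # mask re-applied so sparsity is preserved
    return x @ W_eff.T
```

Only A and B are trained; the mask M guarantees the fine-tuned model inherits
the base model's sparsity pattern.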
|
2501.08585
|
A Systematic Review of Machine Learning Methods for Multimodal EEG Data
in Clinical Application
|
eess.SP cs.AI cs.CV cs.LG
|
Machine learning (ML) and deep learning (DL) techniques have been widely
applied to analyze electroencephalography (EEG) signals for disease diagnosis
and brain-computer interfaces (BCI). The integration of multimodal data has
been shown to enhance the accuracy of ML and DL models. Combining EEG with
other modalities can improve clinical decision-making by addressing complex
tasks in clinical populations. This systematic literature review explores the
use of multimodal EEG data in ML and DL models for clinical applications. A
comprehensive search was conducted across PubMed, Web of Science, and Google
Scholar, yielding 16 relevant studies after three rounds of filtering. These
studies demonstrate the application of multimodal EEG data in addressing
clinical challenges, including neuropsychiatric disorders, neurological
conditions (e.g., seizure detection), neurodevelopmental disorders (e.g.,
autism spectrum disorder), and sleep stage classification. Data fusion occurred
at three levels: signal, feature, and decision levels. The most commonly used
ML models were support vector machines (SVM) and decision trees. Notably, 11
out of the 16 studies reported improvements in model accuracy with multimodal
EEG data. This review highlights the potential of multimodal EEG-based ML
models in enhancing clinical diagnostics and problem-solving.
|
2501.08587
|
Sound Scene Synthesis at the DCASE 2024 Challenge
|
cs.AI cs.SD eess.AS
|
This paper presents Task 7 at the DCASE 2024 Challenge: sound scene
synthesis. Recent advances in sound synthesis and generative models have
enabled the creation of realistic and diverse audio content. We introduce a
standardized evaluation framework for comparing different sound scene synthesis
systems, incorporating both objective and subjective metrics. The challenge
attracted four submissions, which are evaluated using the Fr\'echet Audio
Distance (FAD) and human perceptual ratings. Our analysis reveals significant
insights into the current capabilities and limitations of sound scene synthesis
systems, while also highlighting areas for future improvement in this rapidly
evolving field.
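For reference, the objective metric used here, the Fr\'echet Audio Distance,
is the Fr\'echet distance between Gaussians fitted to two sets of audio
embeddings; a minimal numerical sketch (the embedding model itself is not
shown, and the symmetric square-root trick is one standard way to compute the
cross term):

```python
import numpy as np

def _psd_sqrt(S):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_audio_distance(emb_a, emb_b):
    """FAD = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2}).

    emb_a, emb_b: (N, D) embedding matrices for reference and generated audio.
    """
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    S_a = np.cov(emb_a, rowvar=False)
    S_b = np.cov(emb_b, rowvar=False)
    rt = _psd_sqrt(S_a)
    # Tr((S_a S_b)^{1/2}) computed via the symmetric form rt @ S_b @ rt.
    tr_mean = np.trace(_psd_sqrt(rt @ S_b @ rt))
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(S_a) + np.trace(S_b) - 2.0 * tr_mean)
```

Identical embedding distributions give a distance near zero; a pure mean shift
contributes exactly its squared norm.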
|
2501.08589
|
Molecular Graph Contrastive Learning with Line Graph
|
cs.LG
|
Graph contrastive learning (GCL) has emerged to address the label scarcity in
molecular property prediction and drug design. Leading contrastive learning
works use two kinds of view generators: random or learnable data corruption,
and domain knowledge incorporation. While effective, these two approaches lead
to altered molecular semantics and limited generalization capability,
respectively. To this end, we relate the \textbf{L}in\textbf{E}
graph with \textbf{MO}lecular graph co\textbf{N}trastive learning and propose a
novel method termed \textit{LEMON}. Specifically, by contrasting the given
graph with the corresponding line graph, the graph encoder can freely encode
the molecular semantics without omission. Furthermore, we present a new patch
with edge attribute fusion and two local contrastive losses to enhance
information transmission and tackle hard negative samples. Compared with state-of-the-art
(SOTA) methods for view generation, superior performance on molecular property
prediction suggests the effectiveness of our proposed framework.
|
2501.08591
|
OpenMLDB: A Real-Time Relational Data Feature Computation System for
Online ML
|
cs.DB cs.AI cs.LG
|
Efficient and consistent feature computation is crucial for a wide range of
online ML applications. Typically, feature computation is divided into two
distinct phases, i.e., offline stage for model training and online stage for
model serving. These phases often rely on execution engines with different
interface languages and function implementations, causing significant
inconsistencies. Moreover, many online ML features involve complex time-series
computations (e.g., functions over varied-length table windows) that differ
from standard streaming and analytical queries. Existing data processing
systems (e.g., Spark, Flink, DuckDB) often incur multi-second latencies for
these computations, making them unsuitable for real-time online ML applications
that demand timely feature updates.
This paper presents OpenMLDB, a feature computation system deployed in
4Paradigm's SageOne platform and over 100 real scenarios. Technically, OpenMLDB
first employs a unified query plan generator for consistent computation results
across the offline and online stages, significantly reducing feature deployment
overhead. Second, OpenMLDB provides an online execution engine that resolves
performance bottlenecks caused by long window computations (via
pre-aggregation) and multi-table window unions (via data self-adjusting). It
also provides a high-performance offline execution engine with window parallel
optimization and time-aware data skew resolving. Third, OpenMLDB features a
compact data format and stream-focused indexing to maximize memory usage and
accelerate data access. Evaluations in testing and real workloads reveal
significant performance improvements and resource savings compared to the
baseline systems. The open-source community of OpenMLDB now has over 150
contributors and has gained 1.6k stars on GitHub.
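The pre-aggregation idea for long window computations can be illustrated with
a toy sketch (ours for illustration, not OpenMLDB's API): events are folded
into fixed-size time buckets as they arrive, so a long-window aggregate reads
a handful of bucket partials instead of every raw event, at the cost of
bucket-granularity approximation at the window edges.

```python
from collections import defaultdict

class PreAggWindow:
    """Toy pre-aggregated sum over a sliding time window."""

    def __init__(self, bucket_ms):
        self.bucket_ms = bucket_ms
        self.buckets = defaultdict(float)   # bucket index -> partial sum

    def insert(self, ts, value):
        """Fold an incoming event into its time bucket."""
        self.buckets[ts // self.bucket_ms] += value

    def window_sum(self, now, window_ms):
        """Sum over [now - window_ms, now], rounded to bucket boundaries."""
        lo = (now - window_ms) // self.bucket_ms
        hi = now // self.bucket_ms
        return sum(v for b, v in self.buckets.items() if lo <= b <= hi)
```

A real engine would combine full-bucket partials with exact scans of the two
edge buckets; the sketch keeps only the partials to show why long windows stop
scaling with raw event count.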
|
2501.08593
|
Image-to-Force Estimation for Soft Tissue Interaction in
Robotic-Assisted Surgery Using Structured Light
|
cs.RO cs.CV
|
For Minimally Invasive Surgical (MIS) robots, accurate haptic interaction
force feedback is essential for ensuring the safety of interacting with soft
tissue. However, most existing MIS robotic systems cannot facilitate direct
measurement of the interaction force with hardware sensors due to space
limitations. This letter introduces an effective vision-based scheme that
utilizes a One-Shot structured light projection with a designed pattern on soft
tissue coupled with haptic information processing through a trained
image-to-force neural network. The images captured from the endoscopic stereo
camera are analyzed to reconstruct high-resolution 3D point clouds for soft
tissue deformation. Based on this, a modified PointNet-based force estimation
method is proposed, which excels in representing the complex mechanical
properties of soft tissue. Numerical force interaction experiments are
conducted on three silicone materials with different stiffnesses. The results
validate the effectiveness of the proposed scheme.
|
2501.08595
|
Characterizations of voting rules based on majority margins
|
econ.TH cs.GT cs.MA
|
In the context of voting with ranked ballots, an important class of voting
rules is the class of margin-based rules (also called pairwise rules). A voting
rule is margin-based if whenever two elections generate the same head-to-head
margins of victory or loss between candidates, then the voting rule yields the
same outcome in both elections. Although this is a mathematically natural
invariance property to consider, whether it should be regarded as a normative
axiom on voting rules is less clear. In this paper, we address this question
for voting rules with any kind of output, whether a set of candidates, a
ranking, a probability distribution, etc. We prove that a voting rule is
margin-based if and only if it satisfies some axioms with clearer normative
content. A key axiom is what we call Preferential Equality, stating that if two
voters both rank a candidate $x$ immediately above a candidate $y$, then either
voter switching to rank $y$ immediately above $x$ will have the same effect on
the election outcome as if the other voter made the switch, so each voter's
preference for $y$ over $x$ is treated equally.
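The head-to-head margins that define this class of rules are straightforward
to compute from ranked ballots; a small sketch (our own helper names):

```python
import numpy as np

def margin_matrix(ballots, candidates):
    """Head-to-head margin matrix from ranked ballots.

    Entry (i, j) is (#voters ranking i above j) - (#voters ranking j above i).
    A voting rule is margin-based iff its output depends on the ballots only
    through this matrix.
    """
    idx = {c: k for k, c in enumerate(candidates)}
    n = len(candidates)
    M = np.zeros((n, n), dtype=int)
    for ballot in ballots:                     # ballot: list, best first
        pos = {c: r for r, c in enumerate(ballot)}
        for a in candidates:
            for b in candidates:
                if a != b and pos[a] < pos[b]:
                    M[idx[a], idx[b]] += 1
    return M - M.T                             # antisymmetric margins
```

Two elections with different ballot profiles but identical margin matrices
must, under a margin-based rule, yield identical outcomes.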
|
2501.08597
|
Dynamic Knowledge Integration for Enhanced Vision-Language Reasoning
|
cs.CL
|
Large Vision-Language Models (LVLMs) have demonstrated impressive
capabilities in multimodal tasks, but their performance is often constrained by
the lack of external knowledge integration, limiting their ability to handle
knowledge-intensive tasks such as visual question answering and reasoning. To
address this challenge, we propose a novel method, Adaptive Knowledge-Guided
Pretraining for Large Vision-Language Models (AKGP-LVLM), which dynamically
incorporates structured and unstructured knowledge into LVLMs during
pretraining and fine-tuning. Our approach employs a knowledge encoder to
represent external knowledge, a retrieval mechanism to select task-relevant
information, and a dynamic adaptor to align multimodal and knowledge
representations effectively. We evaluate our method on four benchmark datasets,
demonstrating significant performance improvements over state-of-the-art
models. Furthermore, human evaluations highlight the superior correctness and
relevance of our model's outputs. Extensive analyses confirm the robustness,
efficiency, and scalability of AKGP-LVLM, making it a compelling solution for
real-world knowledge-intensive tasks.
|
2501.08598
|
LlamaRestTest: Effective REST API Testing with Small Language Models
|
cs.SE cs.AI
|
Modern web services rely heavily on REST APIs, typically documented using the
OpenAPI specification. The widespread adoption of this standard has resulted in
the development of many black-box testing tools that generate tests based on
these specifications. Recent advancements in Natural Language Processing (NLP),
particularly with Large Language Models (LLMs), have enhanced REST API testing
by extracting actionable rules and generating input values from the
human-readable portions of the specification. However, these advancements
overlook the potential of continuously refining the identified rules and test
inputs based on server responses. To address this limitation, we present
LlamaRestTest, a novel approach that employs two custom LLMs to generate
realistic test inputs and uncover parameter dependencies during the testing
process by incorporating server responses. These LLMs are created by
fine-tuning the Llama3-8b model, using mined datasets of REST API example
values and inter-parameter dependencies. We evaluated LlamaRestTest on 12
real-world services (including popular services such as Spotify), comparing it
against RESTGPT, a GPT-powered specification-enhancement tool, as well as
several state-of-the-art REST API testing tools, including RESTler, MoRest,
EvoMaster, and ARAT-RL. Our results show that fine-tuning enables smaller LLMs
to outperform larger models in detecting actionable rules and generating inputs
for REST API testing. We evaluated configurations from the base Llama3-8B to
fine-tuned versions and explored 2-bit, 4-bit, and 8-bit quantization for
efficiency. LlamaRestTest surpasses state-of-the-art tools in code coverage and
error detection, even with RESTGPT-enhanced specifications, and an ablation
study highlights the impact of its novel components.
|
2501.08600
|
AutoRestTest: A Tool for Automated REST API Testing Using LLMs and MARL
|
cs.SE cs.AI
|
As REST APIs have become widespread in modern web services, comprehensive
testing of these APIs has become increasingly crucial. Due to the vast search
space consisting of operations, parameters, and parameter values along with
their complex dependencies and constraints, current testing tools suffer from
low code coverage, leading to suboptimal fault detection. To address this
limitation, we present a novel tool, AutoRestTest, which integrates the
Semantic Operation Dependency Graph (SODG) with Multi-Agent Reinforcement
Learning (MARL) and large language models (LLMs) for effective REST API
testing. AutoRestTest determines operation-dependent parameters using the SODG
and employs five specialized agents (operation, parameter, value, dependency,
and header) to identify dependencies of operations and generate operation
sequences, parameter combinations, and values. AutoRestTest provides a
command-line interface and continuous telemetry on successful operation count,
unique server errors detected, and time elapsed. Upon completion, AutoRestTest
generates a detailed report highlighting errors detected and operations
exercised. In this paper, we introduce our tool and present preliminary
results.
|
2501.08603
|
Monte Carlo Tree Search for Comprehensive Exploration in LLM-Based
Automatic Heuristic Design
|
cs.AI
|
Handcrafting heuristics for solving complex optimization tasks (e.g., route
planning and task allocation) is a common practice but requires extensive
domain knowledge. Recently, Large Language Model (LLM)-based automatic
heuristic design (AHD) methods have shown promise in generating high-quality
heuristics without manual interventions. Existing LLM-based AHD methods employ
a population to maintain a fixed number of top-performing LLM-generated
heuristics and introduce evolutionary computation (EC) to iteratively enhance
the population. However, these population-based procedures cannot fully develop
the potential of each heuristic and are prone to converge into local optima. To
more comprehensively explore the space of heuristics, this paper proposes to
use Monte Carlo Tree Search (MCTS) for LLM-based heuristic evolution. The
proposed MCTS-AHD method organizes all LLM-generated heuristics in a tree
structure and can better develop the potential of temporarily underperforming
heuristics. In experiments, MCTS-AHD delivers significantly higher-quality
heuristics on various complex tasks. Our code is available.
|
2501.08604
|
Watermarking in Diffusion Model: Gaussian Shading with Exact Diffusion
Inversion via Coupled Transformations (EDICT)
|
cs.CV
|
This paper introduces a novel approach to enhance the performance of Gaussian
Shading, a prevalent watermarking technique, by integrating the Exact Diffusion
Inversion via Coupled Transformations (EDICT) framework. While Gaussian Shading
traditionally embeds watermarks in a noise latent space, followed by iterative
denoising for image generation and noise addition for watermark recovery, its
inversion process is not exact, leading to potential watermark distortion. We
propose to leverage EDICT's ability to derive exact inverse mappings to refine
this process. Our method involves duplicating the watermark-infused noisy
latent and employing a reciprocal, alternating denoising and noising scheme
between the two latents, facilitated by EDICT. This allows for a more precise
reconstruction of both the image and the embedded watermark. Empirical
evaluation on standard datasets demonstrates that our integrated approach
yields a slight, yet statistically significant improvement in watermark
recovery fidelity. These results highlight the potential of EDICT to enhance
existing diffusion-based watermarking techniques by providing a more accurate
and robust inversion mechanism. To the best of our knowledge, this is the first
work to explore the synergy between EDICT and Gaussian Shading for digital
watermarking, opening new avenues for research in robust and high-fidelity
watermark embedding and extraction.
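EDICT's exact invertibility comes from affine coupling between the two
duplicated latents: each mixing update reads only the other, already-updated
latent, so it can be undone in closed form. A scalar sketch of the mixing step
(diffusion denoising steps and tensor shapes are omitted, and the mixing
weight p is illustrative):

```python
def edict_mix(x, y, p=0.93):
    """Coupled mixing step: each update depends only on the other latent."""
    x_new = p * x + (1 - p) * y
    y_new = p * y + (1 - p) * x_new
    return x_new, y_new

def edict_unmix(x_new, y_new, p=0.93):
    """Exact inverse of edict_mix, recovering the pre-mixing latents."""
    y = (y_new - (1 - p) * x_new) / p
    x = (x_new - (1 - p) * y) / p
    return x, y
```

Because the inverse is exact rather than approximate, a watermark embedded in
the initial noise latent can be recovered without the drift introduced by
ordinary DDIM inversion.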
|
2501.08605
|
PACF: Prototype Augmented Compact Features for Improving Domain Adaptive
Object Detection
|
cs.CV
|
In recent years, there has been significant advancement in object detection.
However, applying off-the-shelf detectors to a new domain leads to a
significant performance drop caused by the domain gap. These detectors exhibit
higher-variance class-conditional distributions in the target domain than in
the source domain, along with a mean shift. To address this problem, we
propose the Prototype Augmented Compact Features (PACF) framework to regularize
the distribution of intra-class features. Specifically, we provide an in-depth
theoretical analysis on the lower bound of the target features-related
likelihood and derive the prototype cross entropy loss to further calibrate the
distribution of target RoI features. Furthermore, a mutual regularization
strategy is designed to enable the linear and prototype-based classifiers to
learn from each other, promoting feature compactness while enhancing
discriminability. Thanks to this PACF framework, we have obtained a more
compact cross-domain feature space, within which the variance of the target
features' class-conditional distributions has significantly decreased, and the
class-mean shift between the two domains has also been further reduced. The
results on different adaptation settings are state-of-the-art, which
demonstrates the broad applicability and effectiveness of the proposed
approach.
|
2501.08609
|
Computerized Assessment of Motor Imitation for Distinguishing Autism in
Video (CAMI-2DNet)
|
cs.CV
|
Motor imitation impairments are commonly reported in individuals with autism
spectrum conditions (ASCs), suggesting that motor imitation could be used as a
phenotype for addressing autism heterogeneity. Traditional methods for
assessing motor imitation are subjective, labor-intensive, and require
extensive human training. Modern Computerized Assessment of Motor Imitation
(CAMI) methods, such as CAMI-3D for motion capture data and CAMI-2D for video
data, are less subjective. However, they rely on labor-intensive data
normalization and cleaning techniques, and human annotations for algorithm
training. To address these challenges, we propose CAMI-2DNet, a scalable and
interpretable deep learning-based approach to motor imitation assessment in
video data, which eliminates the need for data normalization, cleaning and
annotation. CAMI-2DNet uses an encoder-decoder architecture to map a video to a
motion encoding that is disentangled from nuisance factors such as body shape
and camera views. To learn a disentangled representation, we employ synthetic
data generated by motion retargeting of virtual characters through the
reshuffling of motion, body shape, and camera views, as well as real
participant data. To automatically assess how well an individual imitates an
actor, we compute a similarity score between their motion encodings, and use it
to discriminate individuals with ASCs from neurotypical (NT) individuals. Our
comparative analysis demonstrates that CAMI-2DNet has a strong correlation with
human scores while outperforming CAMI-2D in discriminating ASC vs NT children.
Moreover, CAMI-2DNet performs comparably to CAMI-3D while offering greater
practicality by operating directly on video data and without the need for
ad-hoc data normalization and human annotations.
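The abstract specifies a similarity score between motion encodings without
fixing its form; one plausible instantiation (our assumption, not the paper's
definition) is mean per-frame cosine similarity between the two encoding
sequences:

```python
import numpy as np

def imitation_score(enc_child, enc_actor):
    """Mean per-frame cosine similarity between two motion encodings.

    enc_child, enc_actor: (T, D) arrays of per-frame motion embeddings,
    assumed disentangled from body shape and camera view.
    """
    num = (enc_child * enc_actor).sum(axis=1)
    den = np.linalg.norm(enc_child, axis=1) * np.linalg.norm(enc_actor, axis=1)
    return float(np.mean(num / np.maximum(den, 1e-12)))
```

Thresholding or classifying on such a score is then what discriminates ASC
from NT imitation quality.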
|
2501.08612
|
Neural Risk-sensitive Satisficing in Contextual Bandits
|
cs.LG
|
The contextual bandit problem, a class of reinforcement learning task,
provides an effective framework for addressing challenges in recommendation
systems, such as satisfying real-time requirements, enabling personalization,
and mitigating cold-start problems. However, contextual bandit algorithms face
challenges since they need to handle large state-action spaces sequentially.
These challenges include the high costs for learning and balancing exploration
and exploitation, as well as large variations in performance that depend on the
domain of application. To address these challenges, Tsuboya et~al. proposed the
Regional Linear Risk-sensitive Satisficing (RegLinRS) algorithm. RegLinRS
switches between exploration and exploitation based on how well the agent has
achieved the target. However, the reward expectations in RegLinRS are linearly
approximated based on features, which limits its applicability when the
relationship between features and reward expectations is non-linear. To handle
more complex environments, we propose Neural Risk-sensitive Satisficing
(NeuralRS), which incorporates neural networks into RegLinRS, and demonstrate
its utility.
|
2501.08613
|
Assessing the Alignment of FOL Closeness Metrics with Human Judgement
|
cs.CL
|
The recent successful paradigm of solving logical reasoning problems with
tool-augmented large language models (LLMs) leverages translation of natural
language statements into First-Order Logic~(FOL) and external theorem provers.
However, the correctness of FOL statements, comprising operators and text
predicates, often goes unverified due to the lack of a reliable evaluation
metric for comparing generated and ground-truth FOLs. In this paper, we present
a comprehensive study of the sensitivity of existing metrics and their alignment
with human judgement on FOL evaluation. Using ground-truth FOLs, we carefully
designed various perturbations on the ground-truth to assess metric
sensitivity. We sample FOL translation candidates for natural language
statements and measure the ranking alignment between automatic metrics and
human annotators. Our empirical findings highlight oversensitivity in the
n-gram metric BLEU for text perturbations, the semantic graph metric Smatch++
for structural perturbations, and the FOL metric for operator perturbations. We also
observe a closer alignment between BertScore and human judgement. Additionally,
we show that combining metrics enhances both alignment and sensitivity compared
to using individual metrics.
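A combination of metrics of the kind found helpful here can be as simple as a
weighted sum of scores that are each already normalized to [0, 1] (uniform
weights are our assumption; the paper only reports that combining metrics
improves alignment and sensitivity):

```python
def combined_score(scores, weights=None):
    """Weighted combination of per-metric scores in [0, 1].

    scores: e.g. [bleu, bertscore, structural_fol_match] for one candidate.
    weights: optional per-metric weights; defaults to uniform.
    """
    if weights is None:
        weights = [1.0 / len(scores)] * len(scores)
    assert len(weights) == len(scores)
    return sum(w * s for w, s in zip(weights, scores))
```

Ranking FOL translation candidates by the combined value then inherits the
complementary sensitivities of the individual metrics.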
|
2501.08615
|
Towards Aligned Data Forgetting via Twin Machine Unlearning
|
cs.LG
|
Modern privacy regulations have spurred the evolution of machine unlearning,
a technique enabling a trained model to efficiently forget specific training
data. In prior unlearning methods, the concept of "data forgetting" is often
interpreted and implemented as achieving zero classification accuracy on such
data. Nevertheless, the authentic aim of machine unlearning is to achieve
alignment between the unlearned model and the gold model, i.e., encouraging
them to have identical classification accuracy. On the other hand, the gold
model often exhibits non-zero classification accuracy due to its generalization
ability. To achieve aligned data forgetting, we propose a Twin Machine
Unlearning (TMU) approach, where a twin unlearning problem is defined
corresponding to the original unlearning problem. Consequently, the
generalization-label predictor trained on the twin problem can be transferred
to the original problem, facilitating aligned data forgetting. Comprehensive
empirical experiments illustrate that our approach significantly enhances the
alignment between the unlearned model and the gold model.
|
2501.08617
|
RLHS: Mitigating Misalignment in RLHF with Hindsight Simulation
|
cs.LG cs.AI cs.CL
|
While Reinforcement Learning from Human Feedback (RLHF) has shown promise in
aligning generative AI, we present empirical evidence that it can also cause
severe, systematic misalignment. We hypothesize that this stems from evaluator
feedback depending on downstream outcome predictions (foresight) that can be
influenced by the AI's output, inducing Goodhart's law dynamics. Conversely,
our theoretical analysis shows that conditioning evaluator feedback on
downstream observations (hindsight) inhibits this effect by decoupling the
alignment signal from potentially compromised predictions; crucially, the result
holds even if the observed outcomes are sampled from the AI's own world model.
Building on this insight, we introduce Reinforcement Learning from Hindsight
Simulation (RLHS), which presents plausible simulated outcomes to evaluators
before eliciting feedback. We demonstrate RLHS on online (PPO) and offline
(DPO) large language model fine-tuning, obtaining superior alignment over RLHF
in controlled consultancy-type experiments and user studies. We evaluate
post-hoc on the TruthfulQA benchmark and find that, even after single-task
fine-tuning, both RLHF misalignment and RLHS alignment carry over to
substantially different settings.
|
2501.08618
|
Disjoint Processing Mechanisms of Hierarchical and Linear Grammars in
Large Language Models
|
cs.CL cs.AI
|
All natural languages are structured hierarchically. In humans, this
structural restriction is neurologically coded: when two grammars are presented
with identical vocabularies, brain areas responsible for language processing
are only sensitive to hierarchical grammars. Using large language models
(LLMs), we investigate whether such functionally distinct hierarchical
processing regions can arise solely from exposure to large-scale language
distributions. We generate inputs using English, Italian, Japanese, or nonce
words, varying the underlying grammars to conform to either hierarchical or
linear/positional rules. Using these grammars, we first observe that language
models show distinct behaviors on hierarchical versus linearly structured
inputs. Then, we find that the components responsible for processing
hierarchical grammars are distinct from those that process linear grammars; we
causally verify this in ablation experiments. Finally, we observe that
hierarchy-selective components are also active on nonce grammars; this suggests
that hierarchy sensitivity is tied neither to meaning nor to in-distribution inputs.
|
2501.08620
|
CT-PatchTST: Channel-Time Patch Time-Series Transformer for Long-Term
Renewable Energy Forecasting
|
cs.LG
|
Accurately predicting renewable energy output is crucial for the efficient
integration of solar and wind power into modern energy systems. This study
develops and evaluates an advanced deep learning model, Channel-Time Patch
Time-Series Transformer (CT-PatchTST), to forecast the power output of
photovoltaic and wind energy systems using annual offshore wind power, onshore
wind power, and solar power generation data from Denmark. While the original
Patch Time-Series Transformer (PatchTST) model employs a channel-independent
(CI) approach, it tends to overlook inter-channel relationships during
training, potentially leading to a loss of critical information. To address
this limitation and further leverage the benefits of increased data granularity
brought by CI, we propose CT-PatchTST. This enhanced model improves the
processing of inter-channel information while maintaining the advantages of the
channel-independent approach. The predictive performance of CT-PatchTST is
rigorously analyzed, demonstrating its ability to provide precise and reliable
energy forecasts. This work contributes to improving the predictability of
renewable energy systems, supporting their broader adoption and integration
into energy grids.
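The channel-independent patching that PatchTST builds on can be sketched as follows. This is a minimal NumPy illustration under assumed settings (the `patch_len` and `stride` values are placeholders, not the paper's configuration):

```python
import numpy as np

def make_patches(series: np.ndarray, patch_len: int, stride: int) -> np.ndarray:
    """Split a (channels, time) array into per-channel patches, as in the
    channel-independent setup PatchTST uses: each channel yields its own
    sequence of overlapping patches of length patch_len."""
    num_channels, num_steps = series.shape
    starts = range(0, num_steps - patch_len + 1, stride)
    return np.stack([
        [series[c, s:s + patch_len] for s in starts]
        for c in range(num_channels)
    ])

x = np.arange(2 * 16, dtype=float).reshape(2, 16)  # 2 channels, 16 time steps
patches = make_patches(x, patch_len=8, stride=4)
print(patches.shape)  # (2, 3, 8): channels x num_patches x patch_len
```

Because each channel produces its own patch sequence, the model never mixes channels at the embedding stage; CT-PatchTST's contribution, as described above, is to reintroduce inter-channel information on top of this granularity.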
|
2501.08621
|
ViBidirectionMT-Eval: Machine Translation for Vietnamese-Chinese and
Vietnamese-Lao language pair
|
cs.CL cs.AI
|
This paper presents the results of the VLSP 2022-2023 Machine Translation
Shared Tasks, focusing on Vietnamese-Chinese and Vietnamese-Lao machine
translation. The tasks were organized as part of the 9th and 10th annual
workshops on Vietnamese Language and Speech Processing (VLSP 2022, VLSP 2023).
The objective of the shared task was to build machine translation systems,
specifically targeting Vietnamese-Chinese and Vietnamese-Lao translation
(corresponding to four translation directions). The submissions were evaluated
on 1,000 pairs for testing (news and general domains) using established metrics
like BLEU [11] and SacreBLEU [12]. Additionally, system outputs were also
evaluated with human judgment provided by experts in the Chinese and Lao languages.
These human assessments played a crucial role in ranking the performance of the
machine translation models, ensuring a more comprehensive evaluation.
|
2501.08625
|
A Bioplausible Model for the Expanding Hole Illusion: Insights into
Retinal Processing and Illusory Motion
|
q-bio.NC cs.NE eess.IV
|
The Expanding Hole Illusion is a compelling visual phenomenon in which a
static, concentric pattern evokes a strong perception of continuous forward
motion. Despite its simplicity, this illusion challenges our understanding of
how the brain processes visual information, particularly motion derived from
static cues. While the neural basis of this illusion has remained elusive,
recent psychophysical studies [1] reveal that this illusion induces not only a
perceptual effect but also physiological responses, such as pupil dilation.
This paper presents a computational model based on Difference of Gaussians
(DoG) filtering and a classical receptive field (CRF) implementation to
simulate early retinal processing and to explain the underlying mechanisms of
this illusion. Based on our results, we hypothesize that the illusion arises
from contrast-dependent lateral inhibition in early visual processing. Our
results demonstrate that contrast gradients and multi-layered spatial
processing contribute to the perception of expansion, aligning closely with
psychophysical findings and supporting the role of retinal ganglion cells in
generating this illusory motion signal. Our findings provide insights into the
perceptual biases driving dynamic illusions and offer a new framework for
studying complex visual phenomena.
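The Difference-of-Gaussians operation at the core of the model can be sketched in a few lines of NumPy. This is an illustrative implementation of the general DoG center-surround filter, not the authors' code, and the sigma values are assumptions:

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def dog_filter(image: np.ndarray, sigma_center=1.0, sigma_surround=2.0) -> np.ndarray:
    """Difference of Gaussians: a narrow 'center' blur minus a wide
    'surround' blur, a standard model of lateral inhibition in retinal
    ganglion cell receptive fields."""
    def blur(img, sigma):
        radius = int(3 * sigma)
        k = gaussian_kernel(sigma, radius)
        # separable convolution: filter rows, then columns
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
        return img
    return blur(image, sigma_center) - blur(image, sigma_surround)

# A bright square on a dark background: the DoG response is zero in uniform
# regions far from the square and peaks near the contrast boundary.
img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0
resp = dog_filter(img)
```

The response being driven by local contrast rather than absolute intensity is what makes this a plausible substrate for the contrast-dependent effects the abstract describes.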
|
2501.08626
|
A Learning Algorithm That Attains the Human Optimum in a Repeated
Human-Machine Interaction Game
|
cs.GT cs.HC cs.LG
|
When humans interact with learning-based control systems, a common goal is to
minimize a cost function known only to the human. For instance, an exoskeleton
may adapt its assistance in an effort to minimize the human's metabolic
cost-of-transport. Conventional approaches to synthesizing the learning
algorithm solve an inverse problem to infer the human's cost. However, these
problems can be ill-posed, hard to solve, or sensitive to problem data. Here we
show a game-theoretic learning algorithm that works solely by observing human
actions to find the cost minimum, avoiding the need to solve an inverse
problem. We evaluate the performance of our algorithm in an extensive set of
human subjects experiments, demonstrating consistent convergence to the minimum
of a prescribed human cost function in scalar and multidimensional
instantiations of the game. We conclude by outlining future directions for
theoretical and empirical extensions of our results.
|
2501.08628
|
Transformer-based Multivariate Time Series Anomaly Localization
|
cs.LG
|
With the growing complexity of Cyber-Physical Systems (CPS) and the
integration of Internet of Things (IoT), the use of sensors for online
monitoring generates large volumes of multivariate time series (MTS) data.
Consequently, the need for robust anomaly diagnosis in MTS is paramount to
maintaining system reliability and safety. While significant advancements have
been made in anomaly detection, localization remains a largely underexplored
area, though crucial for intelligent decision-making. This paper introduces a
novel transformer-based model for unsupervised anomaly diagnosis in MTS, with a
focus on improving localization performance, through an in-depth analysis of
the self-attention mechanism's learning behavior under both normal and
anomalous conditions. We formulate the anomaly localization problem as a
three-stage process: time-step, window, and segment-based. This leads to the
development of the Space-Time Anomaly Score (STAS), a new metric inspired by
the connection between transformer latent representations and space-time
statistical models. STAS is designed to capture individual anomaly behaviors
and inter-series dependencies, delivering enhanced localization performance.
Additionally, the Statistical Feature Anomaly Score (SFAS) complements STAS by
analyzing statistical features around anomalies, with their combination helping
to reduce false alarms. Experiments on real world and synthetic datasets
illustrate the model's superiority over state-of-the-art methods in both
detection and localization tasks.
|
2501.08629
|
Self-Organizing Edge Computing Distribution Framework for Visual SLAM
|
cs.RO cs.CV cs.DC
|
Localization within a known environment is a crucial capability for mobile
robots. Simultaneous Localization and Mapping (SLAM) is a prominent solution to
this problem. SLAM is a framework that consists of a diverse set of
computational tasks ranging from real-time tracking to computation-intensive
map optimization. This combination can present a challenge for resource-limited
mobile robots. Previously, edge-assisted SLAM methods have demonstrated
promising real-time execution capabilities by offloading heavy computations
while performing real-time tracking onboard. However, the common approach of
utilizing a client-server architecture for offloading is sensitive to server
and network failures. In this article, we propose a novel edge-assisted SLAM
framework capable of self-organizing fully distributed SLAM execution across a
network of devices or functioning on a single device without connectivity. The
architecture consists of three layers and is designed to be device-agnostic,
resilient to network failures, and minimally invasive to the core SLAM system.
We have implemented and demonstrated the framework for monocular ORB SLAM3 and
evaluated it in both fully distributed and standalone SLAM configurations
against ORB SLAM3. The experimental results demonstrate that the proposed
design matches the accuracy and resource utilization of the monolithic approach
while enabling collaborative execution.
|
2501.08631
|
SWSC: Shared Weight for Similar Channel in LLM
|
cs.LG cs.CL
|
Large language models (LLMs) have spurred development in multiple industries.
However, the growing number of their parameters brings substantial storage and
computing burdens, making it essential to explore model compression techniques
for parameter reduction and easier deployment. We propose SWSC, an LLM
compression method based on the concept of Shared Weight for Similar Channel.
It uses the K-Means clustering algorithm to cluster model weights
channel-by-channel, generating clusters with highly similar vectors within
each. A representative vector from each cluster is selected to approximately
replace all vectors in the cluster, significantly reducing the number of model
weight parameters. However, this approximate restoration inevitably degrades
the performance of the model. To tackle this issue, we perform
singular value decomposition on the weight error values before and after
compression and retain the larger singular values and their corresponding
singular vectors to compensate for the accuracy loss. The experimental results show
that our method can effectively ensure the performance of the compressed LLM
even under low-precision conditions.
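The cluster-then-compensate idea can be illustrated with a toy NumPy sketch. The shapes, cluster count, and rank below are illustrative assumptions, and plain Lloyd's k-means stands in for whatever clustering configuration the authors use:

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Plain Lloyd's k-means on the rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

def compress_weight(W: np.ndarray, k: int = 4, rank: int = 2):
    """Replace each channel (row) of W with its cluster centroid, then add a
    low-rank correction built from the largest singular values of the
    residual error, mirroring the compensation step described above."""
    labels, centers = kmeans(W, k)
    W_shared = centers[labels]  # one shared vector per group of similar channels
    U, s, Vt = np.linalg.svd(W - W_shared, full_matrices=False)
    correction = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # keep top singular values
    return W_shared, W_shared + correction

W = np.random.default_rng(1).normal(size=(16, 8))
W_shared, W_hat = compress_weight(W)
```

Only the cluster centroids, the assignment indices, and the rank-`rank` factors need to be stored; by the Eckart-Young theorem the SVD correction can only reduce (never increase) the Frobenius reconstruction error relative to the centroid-only approximation.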
|
2501.08639
|
Detecting Wildfire Flame and Smoke through Edge Computing using Transfer
Learning Enhanced Deep Learning Models
|
cs.CV eess.IV
|
Autonomous unmanned aerial vehicles (UAVs) integrated with edge computing
capabilities empower real-time data processing directly on the device,
dramatically reducing latency in critical scenarios such as wildfire detection.
This study underscores Transfer Learning's (TL) significance in boosting the
performance of object detectors for identifying wildfire smoke and flames,
especially when trained on limited datasets, and investigates the impact TL has
on edge computing metrics, with the latter focusing on how TL-enhanced You Only
Look Once (YOLO) models perform in terms of inference time, power usage, and
energy consumption on edge computing devices. This study utilizes the
Aerial Fire and Smoke Essential (AFSE) dataset as the target, with the Flame
and Smoke Detection Dataset (FASDD) and the Microsoft Common Objects in Context
(COCO) dataset serving as source datasets. We explore a two-stage cascaded TL
method, utilizing D-Fire or FASDD as initial stage target datasets and AFSE as
the subsequent stage. Through fine-tuning, TL significantly enhances detection
precision, achieving up to 79.2% mean Average Precision (mAP@0.5), reduces
training time, and increases model generalizability across the AFSE dataset.
However, cascaded TL yielded no notable improvements and TL alone did not
benefit the edge computing metrics evaluated. Lastly, this work found that
YOLOv5n remains a powerful model in the absence of hardware acceleration,
processing images nearly twice as fast as its newer counterpart, YOLO11n.
Overall, the results affirm TL's role in augmenting the accuracy of
object detectors while also illustrating that additional enhancements are
needed to improve edge computing performance.
|
2501.08640
|
Quantum Reservoir Computing and Risk Bounds
|
cs.LG stat.ML
|
We propose a way to bound the generalisation errors of several classes of
quantum reservoirs using the Rademacher complexity. We give specific,
parameter-dependent bounds for two particular quantum reservoir classes. We
analyse how the generalisation bounds scale with growing numbers of qubits.
Applying our results to classes with polynomial readout functions, we find that
the risk bounds converge in the number of training samples. The explicit
dependence on the quantum reservoir and readout parameters in our bounds can be
used to control the generalisation error to a certain extent. It should be
noted that the bounds scale exponentially with the number of qubits $n$. The
upper bounds on the Rademacher complexity can be applied to other reservoir
classes that fulfill a few hypotheses on the quantum dynamics and the readout
function.
|
2501.08641
|
Reassessing the Role of Chain-of-Thought in Sentiment Analysis: Insights
and Limitations
|
cs.CL cs.AI
|
The relationship between language and thought remains an unresolved
philosophical issue. Existing viewpoints can be broadly categorized into two
schools: one asserting their independence, and another arguing that language
constrains thought. In the context of large language models, this debate raises
a crucial question: Does a language model's grasp of semantic meaning depend on
thought processes? To explore this issue, we investigate whether reasoning
techniques can facilitate semantic understanding. Specifically, we
conceptualize thought as reasoning, employ chain-of-thought prompting as a
reasoning technique, and examine its impact on sentiment analysis tasks. The
experiments show that chain-of-thought has a minimal impact on sentiment
analysis tasks. Both the standard and chain-of-thought prompts focus on aspect
terms rather than sentiment in the generated content. Furthermore,
counterfactual experiments reveal that the model's handling of sentiment tasks
primarily depends on information from demonstrations. The experimental results
support the first viewpoint.
|
2501.08643
|
MonSter: Marry Monodepth to Stereo Unleashes Power
|
cs.CV
|
Stereo matching recovers depth from image correspondences. Existing methods
struggle to handle ill-posed regions with limited matching cues, such as
occlusions and textureless areas. To address this, we propose MonSter, a novel
method that leverages the complementary strengths of monocular depth estimation
and stereo matching. MonSter integrates monocular depth and stereo matching
into a dual-branch architecture to iteratively improve each other.
Confidence-based guidance adaptively selects reliable stereo cues for monodepth
scale-shift recovery. The refined monodepth in turn guides stereo matching
effectively in ill-posed regions. Such iterative mutual enhancement enables
MonSter to evolve monodepth priors from coarse object-level structures to
pixel-level geometry, fully unlocking the potential of stereo matching. As
shown in Fig.1, MonSter ranks 1st across the five most commonly used leaderboards
-- SceneFlow, KITTI 2012, KITTI 2015, Middlebury, and ETH3D -- achieving up to
49.5% improvement (Bad 1.0 on ETH3D) over the previous best method.
Comprehensive analysis verifies the effectiveness of MonSter in ill-posed
regions. In terms of zero-shot generalization, MonSter significantly and
consistently outperforms state-of-the-art across the board. The code is
publicly available at: https://github.com/Junda24/MonSter.
|
2501.08648
|
MAGNET: Augmenting Generative Decoders with Representation Learning and
Infilling Capabilities
|
cs.CL cs.AI
|
While originally designed for unidirectional generative modeling,
decoder-only large language models (LLMs) are increasingly being adapted for
bidirectional modeling. However, unidirectional and bidirectional models are
typically trained separately with distinct objectives (generation and
representation learning). This separation overlooks the opportunity for
developing a more versatile language model and for these objectives to
complement each other. In this work, we propose MAGNET, a method for adapting
decoder-only LLMs to generate robust representations and infill missing text
spans. MAGNET employs three self-supervised training objectives and introduces
an attention mechanism that combines bidirectional and causal attention,
enabling unified training across all objectives. Our results demonstrate that
LLMs adapted with MAGNET (1) surpass strong text encoders on token-level and
sentence-level representation learning tasks, (2) generate contextually
appropriate text infills by leveraging past and future contexts, (3) perform
open-ended text generation without excessive repetition of words or phrases,
and (4) preserve the knowledge and reasoning capability gained by the LLM
during pretraining.
|
2501.08649
|
Joint Learning of Depth and Appearance for Portrait Image Animation
|
cs.CV cs.LG
|
2D portrait animation has experienced significant advancements in recent
years. Much research has utilized the prior knowledge embedded in large
generative diffusion models to enhance high-quality image manipulation.
However, most methods only focus on generating RGB images as output, and the
co-generation of consistent visual plus 3D output remains largely
under-explored. In our work, we propose to jointly learn the visual appearance
and depth in a diffusion-based portrait image generator. Our
method embraces the end-to-end diffusion paradigm and introduces a new
architecture suitable for learning this conditional joint distribution,
consisting of a reference network and a channel-expanded diffusion backbone.
Once trained, our framework can be efficiently adapted to various downstream
applications, such as facial depth-to-image and image-to-depth generation,
portrait relighting, and audio-driven talking head animation with consistent 3D
output.
|
2501.08653
|
Fine-grained Spatio-temporal Event Prediction with Self-adaptive Anchor
Graph
|
cs.LG cs.AI cs.SI
|
Event prediction tasks often handle spatio-temporal data distributed in a
large spatial area. Different regions in the area exhibit different
characteristics while having latent correlations. This spatial heterogeneity
and correlations greatly affect the spatio-temporal distributions of event
occurrences, which have not been addressed by state-of-the-art models. Learning
spatial dependencies of events in a continuous space is challenging due to its
fine granularity and a lack of prior knowledge. In this work, we propose a
novel Graph Spatio-Temporal Point Process (GSTPP) model for fine-grained event
prediction. It adopts an encoder-decoder architecture that jointly models the
state dynamics of spatially localized regions using neural Ordinary
Differential Equations (ODEs). The state evolution is built on the foundation
of a novel Self-Adaptive Anchor Graph (SAAG) that captures spatial
dependencies. By adaptively localizing the anchor nodes in the space and
jointly constructing the correlation edges between them, the SAAG enhances the
model's ability to learn complex spatial event patterns. Extensive
experimental results show that GSTPP greatly improves fine-grained prediction
accuracy over existing spatio-temporal event prediction approaches.
|
2501.08654
|
StereoGen: High-quality Stereo Image Generation from a Single Image
|
cs.CV
|
State-of-the-art supervised stereo matching methods have achieved amazing
results on various benchmarks. However, these data-driven methods suffer from
generalization to real-world scenarios due to the lack of real-world annotated
data. In this paper, we propose StereoGen, a novel pipeline for high-quality
stereo image generation. This pipeline utilizes arbitrary single images as left
images and pseudo disparities generated by a monocular depth estimation model
to synthesize high-quality corresponding right images. Unlike previous methods
that fill the occluded area in warped right images using random backgrounds or
using convolutions to take nearby pixels selectively, we fine-tune a diffusion
inpainting model to recover the background. Images generated by our model
possess better details and undamaged semantic structures. Besides, we propose
Training-free Confidence Generation and Adaptive Disparity Selection. The
former suppresses the negative effect of harmful pseudo ground truth during
stereo training, while the latter helps generate a wider disparity distribution
and better synthetic images. Experiments show that models trained under our
pipeline achieve state-of-the-art zero-shot generalization results among all
published methods. The code will be available upon publication of the paper.
|
2501.08655
|
Application of Deep Reinforcement Learning to UAV Swarming for Ground
Surveillance
|
cs.AI cs.RO
|
This paper summarizes in depth the state of the art of aerial swarms,
covering both classical and new reinforcement-learning-based approaches for
their management. Then, it proposes a hybrid AI system, integrating deep
reinforcement learning in a multi-agent centralized swarm architecture. The
proposed system is tailored to perform surveillance of a specific area,
searching and tracking ground targets, for security and law enforcement
applications. The swarm is governed by a central swarm controller responsible
for distributing different search and tracking tasks among the cooperating
UAVs. Each UAV agent is then controlled by a collection of cooperative
sub-agents, whose behaviors have been trained using different deep
reinforcement learning models, tailored for the different task types proposed
by the swarm controller. More specifically, proximal policy optimization (PPO)
algorithms were used to train the agents' behavior. In addition, several
metrics to assess the performance of the swarm in this application were
defined. The results obtained through simulation show that our system searches
the operation area effectively, acquires the targets in a reasonable time, and
is capable of tracking them continuously and consistently.
|
2501.08659
|
BRIGHT-VO: Brightness-Guided Hybrid Transformer for Visual Odometry with
Multi-modality Refinement Module
|
cs.CV
|
Visual odometry (VO) plays a crucial role in autonomous driving, robotic
navigation, and other related tasks by estimating the position and orientation
of a camera based on visual input. Significant progress has been made in
data-driven VO methods, particularly those leveraging deep learning techniques
to extract image features and estimate camera poses. However, these methods
often struggle in low-light conditions because of the reduced visibility of
features and the increased difficulty of matching keypoints. To address this
limitation, we introduce BrightVO, a novel VO model based on Transformer
architecture, which not only performs front-end visual feature extraction, but
also incorporates a multi-modality refinement module in the back-end that
integrates Inertial Measurement Unit (IMU) data. Using pose graph optimization,
this module iteratively refines pose estimates to reduce errors and improve
both accuracy and robustness. Furthermore, we create a synthetic low-light
dataset, KiC4R, which includes a variety of lighting conditions to facilitate
the training and evaluation of VO frameworks in challenging environments.
Experimental results demonstrate that BrightVO achieves state-of-the-art
performance on both the KiC4R dataset and the KITTI benchmarks. Specifically,
it provides an average improvement of 20% in pose estimation accuracy in normal
outdoor environments and 259% in low-light conditions, outperforming existing
methods. For widespread use and further development, the research work is fully
open-source at https://github.com/Anastasiawd/BrightVO.
|
2501.08662
|
Product of Gaussian Mixture Diffusion Model for non-linear MRI Inversion
|
eess.IV cs.CV cs.LG
|
Diffusion models have recently shown remarkable results in magnetic resonance
imaging reconstruction. However, the employed networks typically are black-box
estimators of the (smoothed) prior score with tens of millions of parameters,
restricting interpretability and increasing reconstruction time. Furthermore,
parallel imaging reconstruction algorithms either rely on off-line coil
sensitivity estimation, which is prone to misalignment and restricting sampling
trajectories, or perform per-coil reconstruction, making the computational cost
proportional to the number of coils. To overcome this, we jointly reconstruct
the image and the coil sensitivities using the lightweight,
parameter-efficient, and interpretable product of Gaussian mixture diffusion
model as an image prior and classical smoothness priors on the coil
sensitivities. The proposed method delivers promising results while allowing
for fast inference and demonstrating robustness to contrast out-of-distribution
data and sampling trajectories, comparable to classical variational penalties
such as total variation. Finally, the probabilistic formulation allows the
calculation of the posterior expectation and pixel-wise variance.
|
2501.08665
|
A Survey on Facial Image Privacy Preservation in Cloud-Based Services
|
cs.CV
|
Facial recognition models are increasingly employed by commercial
enterprises, government agencies, and cloud service providers for identity
verification, consumer services, and surveillance. These models are often
trained using vast amounts of facial data processed and stored in cloud-based
platforms, raising significant privacy concerns. Users' facial images may be
exploited without their consent, leading to potential data breaches and misuse.
This survey presents a comprehensive review of current methods aimed at
preserving facial image privacy in cloud-based services. We categorize these
methods into two primary approaches: image obfuscation-based protection and
adversarial perturbation-based protection. We provide an in-depth analysis of
both categories, offering qualitative and quantitative comparisons of their
effectiveness. Additionally, we highlight unresolved challenges and propose
future research directions to improve privacy preservation in cloud computing
environments.
|
2501.08667
|
TimeFlow: Longitudinal Brain Image Registration and Aging Progression
Analysis
|
eess.IV cs.CV
|
Predicting future brain states is crucial for understanding healthy aging and
neurodegenerative diseases. Longitudinal brain MRI registration, a cornerstone
for such analyses, has long been limited by its inability to forecast future
developments, reliance on extensive, dense longitudinal data, and the need to
balance registration accuracy with temporal smoothness. In this work, we
present \emph{TimeFlow}, a novel framework for longitudinal brain MRI
registration that overcomes all these challenges. Leveraging a U-Net
architecture with temporal conditioning inspired by diffusion models, TimeFlow
enables accurate longitudinal registration and facilitates prospective analyses
through future image prediction. Unlike traditional methods that depend on
explicit smoothness regularizers and dense sequential data, TimeFlow achieves
temporal consistency and continuity without these constraints. Experimental
results highlight its superior performance in both future timepoint prediction
and registration accuracy compared to state-of-the-art methods. Additionally,
TimeFlow supports novel biological brain aging analyses, effectively
differentiating neurodegenerative conditions from healthy aging. It eliminates
the need for segmentation, thereby avoiding the challenges of non-trivial
annotation and inconsistent segmentation errors. TimeFlow paves the way for
accurate, data-efficient, and annotation-free prospective analyses of brain
aging and chronic diseases.
|
2501.08669
|
SPEQ: Stabilization Phases for Efficient Q-Learning in High
Update-To-Data Ratio Reinforcement Learning
|
cs.LG cs.AI
|
A key challenge in Deep Reinforcement Learning is sample efficiency,
especially in real-world applications where collecting environment interactions
is expensive or risky. Recent off-policy algorithms improve sample efficiency
by increasing the Update-To-Data (UTD) ratio and performing more gradient
updates per environment interaction. While this improves sample efficiency, it
significantly increases computational cost due to the higher number of gradient
updates required. In this paper we propose a sample-efficient method to improve
computational efficiency by separating training into distinct learning phases
in order to exploit gradient updates more effectively. Our approach builds on
top of the Dropout Q-Functions (DroQ) algorithm and alternates between an
online, low UTD ratio training phase, and an offline stabilization phase.
During the stabilization phase, we fine-tune the Q-functions without collecting
new environment interactions. This process improves the effectiveness of the
replay buffer and reduces computational overhead. Our experimental results on
continuous control problems show that our method achieves results comparable to
state-of-the-art, high UTD ratio algorithms while requiring 56\% fewer gradient
updates and 50\% less training time than DroQ. Our approach offers an effective
and computationally economical solution while maintaining the same sample
efficiency as the more costly, high UTD ratio state-of-the-art.
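The alternation between a low-UTD online phase and an offline stabilization phase can be illustrated with a toy update-counting schedule. The phase lengths and UTD values below are illustrative assumptions, not the paper's hyperparameters:

```python
def training_schedule(env_steps: int, low_utd: int = 1,
                      stab_every: int = 1000, stab_updates: int = 500):
    """Count environment interactions and gradient updates for a schedule
    that interleaves cheap low-UTD online training with periodic offline
    stabilization phases that consume no new environment data."""
    grad_updates, interactions = 0, 0
    for step in range(1, env_steps + 1):
        interactions += 1            # online phase: collect one transition
        grad_updates += low_utd      # ...and do a small number of updates
        if step % stab_every == 0:   # offline stabilization phase:
            grad_updates += stab_updates  # fine-tune Q-functions on the buffer only
    return interactions, grad_updates

# 5000 interactions -> 5000 online updates + 5 * 500 stabilization updates
print(training_schedule(5000))  # (5000, 7500)
```

The point of the schedule is that sample efficiency is governed by the total updates per interaction (here 1.5 effective UTD), while the expensive high-UTD work is concentrated into phases that reuse the replay buffer instead of collecting new data.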
|
2501.08672
|
GS-LIVO: Real-Time LiDAR, Inertial, and Visual Multi-sensor Fused
Odometry with Gaussian Mapping
|
cs.RO cs.CV
|
In recent years, 3D Gaussian splatting (3D-GS) has emerged as a novel scene
representation approach. However, existing vision-only 3D-GS methods often rely
on hand-crafted heuristics for point-cloud densification and face challenges in
handling occlusions and high GPU memory and computation consumption.
LiDAR-Inertial-Visual (LIV) sensor configuration has demonstrated superior
performance in localization and dense mapping by leveraging complementary
sensing characteristics: rich texture information from cameras, precise
geometric measurements from LiDAR, and high-frequency motion data from IMU.
Inspired by this, we propose a novel real-time Gaussian-based simultaneous
localization and mapping (SLAM) system. Our map system comprises a global
Gaussian map and a sliding window of Gaussians, along with an IESKF-based
odometry. The global Gaussian map consists of hash-indexed voxels organized in
a recursive octree, effectively covering sparse spatial volumes while adapting
to different levels of detail and scales. The Gaussian map is initialized
through multi-sensor fusion and optimized with photometric gradients. Our
system incrementally maintains a sliding window of Gaussians, significantly
reducing GPU computation and memory consumption by only optimizing the map
within the sliding window. Moreover, we implement a tightly coupled
multi-sensor fusion odometry with an iterative error state Kalman filter
(IESKF), leveraging real-time updating and rendering of the Gaussian map. Our
system represents the first real-time Gaussian-based SLAM framework deployable
on resource-constrained embedded systems, demonstrated on the NVIDIA Jetson
Orin NX platform. The framework achieves real-time performance while
maintaining robust multi-sensor fusion capabilities. All implementation
algorithms, hardware designs, and CAD models will be publicly available.
|
2501.08676
|
FlexiClip: Locality-Preserving Free-Form Character Animation
|
cs.CV cs.GR
|
Animating clipart images with seamless motion while maintaining visual
fidelity and temporal coherence presents significant challenges. Existing
methods, such as AniClipart, effectively model spatial deformations but often
fail to ensure smooth temporal transitions, resulting in artifacts like abrupt
motions and geometric distortions. Similarly, text-to-video (T2V) and
image-to-video (I2V) models struggle to handle clipart due to the mismatch in
statistical properties between natural video and clipart styles. This paper
introduces FlexiClip, a novel approach designed to overcome these limitations
by addressing the intertwined challenges of temporal consistency and geometric
integrity. FlexiClip extends traditional B\'ezier curve-based trajectory
modeling with key innovations: temporal Jacobians to correct motion dynamics
incrementally, continuous-time modeling via probability flow ODEs (pfODEs) to
mitigate temporal noise, and a flow matching loss inspired by GFlowNet
principles to optimize smooth motion transitions. These enhancements ensure
coherent animations across complex scenarios involving rapid movements and
non-rigid deformations. Extensive experiments validate the effectiveness of
FlexiClip in generating animations that are not only smooth and natural but
also structurally consistent across diverse clipart types, including humans and
animals. By integrating spatial and temporal modeling with pre-trained video
diffusion models, FlexiClip sets a new standard for high-quality clipart
animation, offering robust performance across a wide range of visual content.
Project Page: https://creative-gen.github.io/flexiclip.github.io/
|
2501.08678
|
Investigating Parameter-Efficiency of Hybrid QuGANs Based on Geometric
Properties of Generated Sea Route Graphs
|
cs.LG quant-ph
|
The demand for artificially generated data for the development, training and
testing of new algorithms is omnipresent. Quantum computing (QC) offers the
hope that its inherent probabilistic functionality can be utilised in this
field of generative artificial intelligence. In this study, we use
quantum-classical hybrid generative adversarial networks (QuGANs) to
artificially generate graphs of shipping routes. We create a training dataset
based on real shipping data and investigate to what extent QuGANs are able to
learn and reproduce inherent distributions and geometric features of this data.
We compare hybrid QuGANs with classical Generative Adversarial Networks (GANs),
with a special focus on their parameter efficiency. Our results indicate that
QuGANs are indeed able to quickly learn and represent underlying geometric
properties and distributions, although they seem to have difficulties in
introducing variance into the sampled data. Compared to classical GANs of
greater size, measured in the number of parameters used, some QuGANs show
similar result quality. Our reference to concrete use cases, such as the
generation of shipping data, provides an illustrative example and demonstrates
the potential and diversity of ways in which QC can be used.
|
2501.08679
|
Diagonal Over-parameterization in Reproducing Kernel Hilbert Spaces as
an Adaptive Feature Model: Generalization and Adaptivity
|
cs.LG stat.ML
|
This paper introduces a diagonal adaptive kernel model that dynamically
learns kernel eigenvalues and output coefficients simultaneously during
training. Unlike fixed-kernel methods tied to the neural tangent kernel theory,
the diagonal adaptive kernel model adapts to the structure of the truth
function, significantly improving generalization over fixed-kernel methods,
especially when the initial kernel is misaligned with the target. Moreover, we
show that the adaptivity comes from learning the right eigenvalues during
training, showing a feature learning behavior. By extending to deeper
parameterization, we further show how extra depth enhances adaptability and
generalization. This study combines insights from feature learning and
implicit regularization and provides a new perspective on the adaptivity and
generalization potential of neural networks beyond the kernel regime.
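The mechanism described above, jointly learning kernel eigenvalues and output coefficients by gradient descent, can be illustrated with a minimal sketch. All shapes, the learning rate, and the random features standing in for kernel eigenfunctions are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

# Minimal sketch, not the paper's setup: jointly learn eigenvalues (lam) and
# output coefficients (a) of a diagonal model f(x) = sum_i lam_i * a_i * phi_i(x)
# by gradient descent on squared loss; random features stand in for phi_i.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))             # feature matrix phi_i(x_j)
truth = np.zeros(p)
truth[3] = 2.0                          # target depends on a single direction
y = X @ truth

lam = np.ones(p)                        # adaptive "eigenvalues", isotropic init
a = np.zeros(p)                         # output coefficients
lr = 0.05
for _ in range(500):
    g = X.T @ (X @ (lam * a) - y) / n   # gradient w.r.t. effective weights
    a, lam = a - lr * lam * g, lam - lr * a * g  # chain rule through lam * a
```

Because the effective weight is the product `lam * a`, coordinates aligned with the target grow their eigenvalues during training, which is the "learning the right eigenvalues" behavior the abstract refers to.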
|
2501.08680
|
Digital Twin Online Channel Modeling: Challenges, Principles, and
Applications
|
eess.SY cs.NI cs.SY
|
Different from traditional offline channel modeling, digital twin online
channel modeling can sense and accurately characterize dynamic wireless
channels in real time, and can therefore greatly assist 6G network
optimization. This article proposes a novel promising framework and a
step-by-step design procedure of digital twin online channel models (DTOCM). By
enabling continuous visualization and accurate prediction of dynamic channel
variations, DTOCM can synchronize the performance between simulated and real
networks. We first explore the evolution and conceptual advancements of DTOCM,
highlighting its visions and associated challenges. Then, we explain its
operational principles, construction mechanisms, and applications to typical 6G
scenarios. Subsequently, the real-time channel information provisioning and
visualization capabilities of DTOCM are illustrated through our DTOCM platform
based on practical scenarios. Finally, future research directions and open
issues are discussed.
|
2501.08682
|
RealVVT: Towards Photorealistic Video Virtual Try-on via Spatio-Temporal
Consistency
|
cs.CV cs.GR
|
Virtual try-on has emerged as a pivotal task at the intersection of computer
vision and fashion, aimed at digitally simulating how clothing items fit on the
human body. Despite notable progress in single-image virtual try-on (VTO),
current methodologies often struggle to preserve a consistent and authentic
appearance of clothing across extended video sequences. This challenge arises
from the complexities of capturing dynamic human pose and maintaining target
clothing characteristics. We leverage pre-existing video foundation models to
introduce RealVVT, a photoRealistic Video Virtual Try-on framework tailored to
bolster stability and realism within dynamic video contexts. Our methodology
encompasses a Clothing & Temporal Consistency strategy, an Agnostic-guided
Attention Focus Loss mechanism to ensure spatial consistency, and a Pose-guided
Long Video VTO technique adept at handling extended video sequences. Extensive
experiments across various datasets confirm that our approach outperforms
existing state-of-the-art models in both single-image and video VTO tasks,
offering a viable solution for practical applications within the realms of
fashion e-commerce and virtual fitting environments.
|
2501.08683
|
The Physics of Life: Exploring Information as a Distinctive Feature of
Living Systems
|
cond-mat.soft astro-ph.EP cs.IT math.IT nlin.AO q-bio.QM
|
This paper explores the idea that information is an essential and distinctive
feature of living systems. Unlike non-living systems, living systems actively
acquire, process, and use information about their environments to respond to
changing conditions, sustain themselves, and achieve other intrinsic goals. We
discuss relevant theoretical frameworks such as ``semantic information'' and
``fitness value of information''. We also highlight the broader implications of
our perspective for fields such as origins-of-life research and astrobiology.
In particular, we touch on the transition to information-driven systems as a
key step in abiogenesis, informational constraints as determinants of planetary
habitability, and informational biosignatures for detecting life beyond Earth.
We briefly discuss experimental platforms which offer opportunities to
investigate these theoretical concepts in controlled environments. By
integrating theoretical and experimental approaches, this perspective advances
our understanding of life's informational dynamics and its universal principles
across diverse scientific domains.
|
2501.08686
|
Knowledge Graph-based Retrieval-Augmented Generation for Schema Matching
|
cs.DB cs.CL cs.IR
|
Traditional similarity-based schema matching methods are incapable of
resolving semantic ambiguities and conflicts in domain-specific complex mapping
scenarios due to missing commonsense and domain-specific knowledge. The
hallucination problem of large language models (LLMs) also makes it challenging
for LLM-based schema matching to address the above issues. Therefore, we
propose a Knowledge Graph-based Retrieval-Augmented Generation model for Schema
Matching, referred to as the KG-RAG4SM. In particular, KG-RAG4SM introduces
novel vector-based, graph traversal-based, and query-based graph retrievals, as
well as a hybrid approach and ranking schemes that identify the most relevant
subgraphs from external large knowledge graphs (KGs). We showcase that KG-based
retrieval-augmented LLMs are capable of generating more accurate results for
complex matching cases without any re-training. Our experimental results show
that KG-RAG4SM outperforms the LLM-based state-of-the-art (SOTA) methods (e.g.,
Jellyfish-8B) by 35.89% and 30.50% in terms of precision and F1 score on the
MIMIC dataset, respectively; KG-RAG4SM with GPT-4o-mini outperforms the
pre-trained language model (PLM)-based SOTA methods (e.g., SMAT) by 69.20% and
21.97% in terms of precision and F1 score on the Synthea dataset, respectively.
The results also demonstrate that our approach is more efficient in end-to-end
schema matching, and scales to retrieval from large KGs. Our case studies on a
dataset from a real-world schema matching scenario show that the
hallucination problem of LLMs for schema matching is well mitigated by our
solution.
|
2501.08688
|
Some remarks on practical stabilization via CLF-based control under
measurement noise
|
eess.SY cs.SY math.OC
|
Practical stabilization of input-affine systems in the presence of
measurement errors and input constraints is considered in this brief note.
Assuming that a Lyapunov function and a stabilizing control exist for an
input-affine system, the required measurement accuracy at each point of the
state space is computed. This is done via the Lyapunov function-based decay
condition, which describes along with the input constraints a set of admissible
controls. Afterwards, the measurement time points are computed based on the
system dynamics. It is shown that between these self-triggered measurement time
points, the system evolves and converges into the so-called target ball, i.e. a
vicinity of the origin, where it remains. Furthermore, it is shown that the
approach ensures the existence of a control law, which is admissible for all
possible states and it introduces a connection between measurement time points,
measurement accuracy, target ball, and decay. The results of the approach are
shown in three examples.
|
2501.08695
|
Real-time Indexing for Large-scale Recommendation by Streaming Vector
Quantization Retriever
|
cs.IR
|
Retrievers, which form one of the most important recommendation stages, are
responsible for efficiently selecting possible positive samples to the later
stages under strict latency limitations. Because of this, large-scale systems
always rely on approximate calculations and indexes to roughly shrink candidate
scale, with a simple ranking model. Because simple models lack the ability
to produce precise predictions, most existing methods mainly focus on
incorporating complicated ranking models. However, another fundamental problem,
index effectiveness, remains unresolved, which also bottlenecks further model
complication.
In this paper, we propose a novel index structure: streaming Vector
Quantization model, as a new generation of retrieval paradigm. Streaming VQ
attaches items with indexes in real time, granting it immediacy. Moreover,
through meticulous verification of possible variants, it achieves additional
benefits like index balancing and reparability, enabling it to support
complicated ranking models as existing approaches do. As a lightweight and
implementation-friendly architecture, streaming VQ has been deployed and
replaced all major retrievers in Douyin and Douyin Lite, resulting in
remarkable user engagement gain.
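The real-time indexing idea can be sketched in a few lines. The EMA centroid update and all constants here are illustrative assumptions, not the production design described above:

```python
import numpy as np

# Illustrative sketch only: a streaming vector quantizer that attaches an
# index to each incoming item embedding in real time (immediacy), nudging
# the chosen centroid toward the item with an EMA update.
rng = np.random.default_rng(1)
K, d = 4, 8                             # codebook size and embedding dim
centroids = rng.normal(size=(K, d))
counts = np.zeros(K)

def assign(item, decay=0.99):
    """Return an index for `item` and update its centroid online."""
    idx = int(np.argmin(((centroids - item) ** 2).sum(axis=1)))
    centroids[idx] = decay * centroids[idx] + (1 - decay) * item
    counts[idx] += 1
    return idx

for item in rng.normal(size=(1000, d)):  # simulated item stream
    assign(item)
```

Monitoring `counts` is one way to check the index-balancing property the abstract mentions: a healthy index keeps assignments spread across codewords.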
|
2501.08696
|
Deep Learning-Based Feature Fusion for Emotion Analysis and Suicide Risk
Differentiation in Chinese Psychological Support Hotlines
|
cs.CL
|
Mental health is a critical global public health issue, and psychological
support hotlines play a pivotal role in providing mental health assistance and
identifying suicide risks at an early stage. However, the emotional expressions
conveyed during these calls remain underexplored in current research. This
study introduces a method that combines pitch acoustic features with deep
learning-based features to analyze and understand emotions expressed during
hotline interactions. Using data from China's largest psychological support
hotline, our method achieved an F1-score of 79.13% for negative binary emotion
classification. Additionally, the proposed approach was validated on an open
dataset for multi-class emotion classification, where it demonstrated better
performance compared to the state-of-the-art methods. To explore its clinical
relevance, we applied the model to analyze the frequency of negative emotions
and the rate of emotional change in the conversation, comparing 46 subjects
with suicidal behavior to those without. While the suicidal group exhibited
more frequent emotional changes than the non-suicidal group, the difference was
not statistically significant. Importantly, our findings suggest that emotional
fluctuation intensity and frequency could serve as novel features for
psychological assessment scales and suicide risk prediction. The proposed method
provides valuable insights into emotional dynamics and has the potential to
advance early intervention and improve suicide prevention strategies through
integration with clinical tools and assessments. The source code is publicly
available at https://github.com/Sco-field/Speechemotionrecognition/tree/main.
|
2501.08710
|
Disentangled Interleaving Variational Encoding
|
cs.LG stat.ML
|
Conflicting objectives present a considerable challenge in interleaving
multi-task learning, necessitating the need for meticulous design and balance
to ensure effective learning of a representative latent data space across all
tasks without mutual negative impact. Drawing inspiration from the concept of
marginal and conditional probability distributions in probability theory, we
design a principled and well-founded approach to disentangle the original input
into marginal and conditional probability distributions in the latent space of
a variational autoencoder. Our proposed model, Deep Disentangled Interleaving
Variational Encoding (DeepDIVE) learns disentangled features from the original
input to form clusters in the embedding space and unifies these features via
the cross-attention mechanism in the fusion stage. We theoretically prove that
combining the objectives for reconstruction and forecasting fully captures the
lower bound and mathematically derive a loss function for disentanglement using
Na\"ive Bayes. Under the assumption that the prior is a mixture of log-concave
distributions, we also establish that the Kullback-Leibler divergence between
the prior and the posterior is upper bounded by a function minimized by the
minimizer of the cross entropy loss, informing our adoption of radial basis
functions (RBF) and cross entropy with interleaving training for DeepDIVE to
provide a justified basis for convergence. Experiments on two public datasets
show that DeepDIVE disentangles the original input and yields forecast
accuracies better than the original VAE and comparable to existing
state-of-the-art baselines.
|
2501.08712
|
Self-supervised Transformation Learning for Equivariant Representations
|
cs.CV cs.AI cs.LG
|
Unsupervised representation learning has significantly advanced various
machine learning tasks. In the computer vision domain, state-of-the-art
approaches utilize transformations like random crop and color jitter to achieve
invariant representations, embedding semantically identical inputs similarly despite
transformations. However, this can degrade performance in tasks requiring
precise features, such as localization or flower classification. To address
this, recent research incorporates equivariant representation learning, which
captures transformation-sensitive information. However, current methods depend
on transformation labels and thus struggle with interdependency and complex
transformations. We propose Self-supervised Transformation Learning (STL),
replacing transformation labels with transformation representations derived
from image pairs. The proposed method ensures transformation representation is
image-invariant and learns corresponding equivariant transformations, enhancing
performance without increased batch complexity. We demonstrate the approach's
effectiveness across diverse classification and detection tasks, outperforming
existing methods in 7 out of 11 benchmarks and excelling in detection. By
integrating complex transformations like AugMix, unusable by prior equivariant
methods, this approach enhances performance across tasks, underscoring its
adaptability and resilience. Additionally, its compatibility with various base
models highlights its flexibility and broad applicability. The code is
available at https://github.com/jaemyung-u/stl.
|
2501.08716
|
The Inherent Limits of Pretrained LLMs: The Unexpected Convergence of
Instruction Tuning and In-Context Learning Capabilities
|
cs.CL
|
Large Language Models (LLMs), trained on extensive web-scale corpora, have
demonstrated remarkable abilities across diverse tasks, especially as they are
scaled up. Nevertheless, even state-of-the-art models struggle in certain
cases, sometimes failing at problems solvable by young children, indicating
that traditional notions of task complexity are insufficient for explaining LLM
capabilities. However, exploring LLM capabilities is complicated by the fact
that most widely-used models are also "instruction-tuned" to respond
appropriately to prompts. With the goal of disentangling the factors
influencing LLM performance, we investigate whether instruction-tuned models
possess fundamentally different capabilities from base models that are prompted
using in-context examples. Through extensive experiments across various model
families, scales and task types, which included instruction tuning 90 different
LLMs, we demonstrate that the performance of instruction-tuned models is
significantly correlated with the in-context performance of their base
counterparts. By clarifying what instruction-tuning contributes, we extend
prior research into in-context learning, which suggests that base models use
priors from pretraining data to solve tasks. Specifically, we extend this
understanding to instruction-tuned models, suggesting that their pretraining
data similarly sets a limiting boundary on the tasks they can solve, with the
added influence of the instruction-tuning dataset.
|
2501.08717
|
$\texttt{InfoHier}$: Hierarchical Information Extraction via Encoding
and Embedding
|
cs.IR cs.CV cs.LG
|
Analyzing large-scale datasets, especially involving complex and
high-dimensional data like images, is particularly challenging. While
self-supervised learning (SSL) has proven effective for learning
representations from unlabelled data, it typically focuses on flat,
non-hierarchical structures, missing the multi-level relationships present in
many real-world datasets. Hierarchical clustering (HC) can uncover these
relationships by organizing data into a tree-like structure, but it often
relies on rigid similarity metrics that struggle to capture the complexity of
diverse data types. To address these challenges, we envision $\texttt{InfoHier}$, a
framework that combines SSL with HC to jointly learn robust latent
representations and hierarchical structures. This approach leverages SSL to
provide adaptive representations, enhancing HC's ability to capture complex
patterns. Simultaneously, it integrates HC loss to refine SSL training,
resulting in representations that are more attuned to the underlying
information hierarchy. $\texttt{InfoHier}$ has the potential to improve the
expressiveness and performance of both clustering and representation learning,
offering significant benefits for data analysis, management, and information
retrieval.
|
2501.08726
|
Task Allocation in Mobile Robot Fleets: A review
|
cs.RO cs.MA
|
Mobile robot fleets are currently used in different scenarios such as medical
environments or logistics. The management of these systems provides different
challenges that vary from the control of the movement of each robot to the
allocation of tasks to be performed. Task Allocation (TA) problem is a key
topic for the proper management of mobile robot fleets to ensure the
minimization of energy consumption and of the number of robots required. Solutions
on this aspect are essential to reach economic and environmental sustainability
of robot fleets, mainly in industry applications such as warehouse logistics.
The minimization of energy consumption introduces TA problem as an optimization
issue which has been treated in recent studies. This work focuses on the
analysis of current trends in solving TA of mobile robot fleets. Main TA
optimization algorithms are presented, including novel methods based on
Artificial Intelligence (AI). Additionally, this work showcases most important
results extracted from simulations, including frameworks utilized for the
development of the simulations. Finally, conclusions are drawn from the
analysis to identify gaps that must be addressed in future work.
|
2501.08727
|
Transformed Low-rank Adaptation via Tensor Decomposition and Its
Applications to Text-to-image Models
|
cs.LG
|
Parameter-Efficient Fine-Tuning (PEFT) of text-to-image models has become an
increasingly popular technique with many applications. Among the various PEFT
methods, Low-Rank Adaptation (LoRA) and its variants have gained significant
attention due to their effectiveness, enabling users to fine-tune models with
limited computational resources. However, the approximation gap between the
low-rank assumption and desired fine-tuning weights prevents the simultaneous
acquisition of ultra-parameter-efficiency and better performance. To reduce
this gap and further improve the power of LoRA, we propose a new PEFT method
that combines two classes of adaptations, namely, transform and residual
adaptations. Specifically, we first apply a full-rank and dense transform to the
pre-trained weight. This learnable transform is expected to align the
pre-trained weight as closely as possible to the desired weight, thereby
reducing the rank of the residual weight. Then, the residual part can be
effectively approximated by more compact and parameter-efficient structures,
with a smaller approximation error. To achieve ultra-parameter-efficiency in
practice, we design highly flexible and effective tensor decompositions for
both the transform and residual adaptations. Additionally, popular PEFT methods
such as DoRA can be summarized under this transform plus residual adaptation
scheme. Experiments are conducted on fine-tuning Stable Diffusion models in
subject-driven and controllable generation. The results manifest that our
method can achieve better performances and parameter efficiency compared to
LoRA and several baselines.
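The transform-plus-residual scheme described above can be sketched compactly. Shapes, initialization, and names are illustrative assumptions; the paper uses flexible tensor decompositions rather than the plain low-rank factors shown here:

```python
import numpy as np

# Toy sketch: adapt a frozen pretrained weight W0 with a learnable dense
# transform T (transform adaptation) plus a low-rank residual A @ B
# (residual adaptation). Initialized so the adapted weight equals W0.
rng = np.random.default_rng(0)
d, r = 16, 2
W0 = rng.normal(size=(d, d))   # frozen pretrained weight
T = np.eye(d)                  # full-rank dense transform, identity at init
A = np.zeros((d, r))           # low-rank residual factors
B = rng.normal(size=(r, d)) * 0.01

def adapted_weight():
    return T @ W0 + A @ B      # transform adaptation + residual adaptation
```

With `T` trained to align the pretrained weight with the target weight, the residual that `A @ B` must absorb has lower effective rank, which is the gap-reduction argument made above.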
|
2501.08729
|
GRAPPA -- A Hybrid Graph Neural Network for Predicting Pure Component
Vapor Pressures
|
cs.LG cs.CE
|
Although the pure component vapor pressure is one of the most important
properties for designing chemical processes, no broadly applicable,
sufficiently accurate, and open-source prediction method has been available. To
overcome this, we have developed GRAPPA - a hybrid graph neural network for
predicting vapor pressures of pure components. GRAPPA enables the prediction of
the vapor pressure curve of essentially any organic molecule, requiring only the
molecular structure as input. The new model consists of three parts: A graph
attention network for the message passing step, a pooling function that
captures long-range interactions, and a prediction head that yields the
component-specific parameters of the Antoine equation, from which the vapor
pressure can readily and consistently be calculated for any temperature. We
have trained and evaluated GRAPPA on experimental vapor pressure data of almost
25,000 pure components. We found excellent prediction accuracy for unseen
components, outperforming state-of-the-art group contribution methods and other
machine learning approaches in applicability and accuracy. The trained model
and its code are fully disclosed, and GRAPPA is directly applicable via the
interactive website ml-prop.mv.rptu.de.
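The Antoine-equation head implies a simple downstream calculation, sketched below. The parameter values are made up for illustration and are not GRAPPA outputs:

```python
# The prediction head yields component-specific Antoine parameters (A, B, C);
# the vapor pressure then follows consistently for any temperature via
# log10(p) = A - B / (C + T). Parameter values below are illustrative only.
def antoine_vapor_pressure(A, B, C, T):
    return 10 ** (A - B / (C + T))

A, B, C = 4.65, 1435.0, -64.8            # hypothetical parameters (p in bar, T in K)
p_low = antoine_vapor_pressure(A, B, C, 330.0)
p_high = antoine_vapor_pressure(A, B, C, 370.0)
```

Because the head outputs parameters of a physically motivated equation rather than pointwise pressures, the predicted curve is smooth and, as the abstract notes, can be evaluated consistently at any temperature.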
|
2501.08737
|
Resource-Constrained Federated Continual Learning: What Does Matter?
|
cs.LG
|
Federated Continual Learning (FCL) aims to enable sequential,
privacy-preserving model training on streams of incoming data that vary across
edge devices, preserving previous knowledge while adapting to new data. Current
FCL literature focuses on restricted data privacy and access to previously seen
data while imposing no constraints on the training overhead. This is
unreasonable for FCL applications in real-world scenarios, where edge devices
are primarily constrained by resources such as storage, computational budget,
and label rate. We revisit this problem with a large-scale benchmark and
analyze the performance of state-of-the-art FCL approaches under different
resource-constrained settings. Various typical FCL techniques and six datasets
in two incremental learning scenarios (Class-IL and Domain-IL) are involved in
our experiments. Through extensive experiments amounting to a total of over
1,000+ GPU hours, we find that, under limited resource-constrained settings,
existing FCL approaches, with no exception, fail to achieve the expected
performance. Our conclusions are consistent across the sensitivity analysis.
This suggests that most existing FCL methods are too resource-dependent
for real-world deployment. Moreover, we study the performance of typical FCL
techniques with resource constraints and shed light on future research
directions in FCL.
|
2501.08738
|
MeshMask: Physics-Based Simulations with Masked Graph Neural Networks
|
cs.LG physics.flu-dyn
|
We introduce a novel masked pre-training technique for graph neural networks
(GNNs) applied to computational fluid dynamics (CFD) problems. By randomly
masking up to 40\% of input mesh nodes during pre-training, we force the model
to learn robust representations of complex fluid dynamics. We pair this masking
strategy with an asymmetric encoder-decoder architecture and gated multi-layer
perceptrons to further enhance performance. The proposed method achieves
state-of-the-art results on seven CFD datasets, including a new challenging
dataset of 3D intracranial aneurysm simulations with over 250,000 nodes per
mesh. Moreover, it significantly improves model performance and training
efficiency across such a diverse range of fluid simulation tasks. We demonstrate
improvements of up to 60\% in long-term prediction accuracy compared to
previous best models, while maintaining similar computational costs. Notably,
our approach enables effective pre-training on multiple datasets
simultaneously, significantly reducing the time and data required to achieve
high performance on new tasks. Through extensive ablation studies, we provide
insights into the optimal masking ratio, architectural choices, and training
strategies.
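The masking step itself is simple to sketch. The zero mask token, the Bernoulli mask, and the array shapes are illustrative assumptions; the encoder-decoder is omitted:

```python
import numpy as np

# Minimal sketch of masked pre-training for mesh data: hide a random ~40%
# subset of node features and train the model to reconstruct them from the
# remaining nodes and the mesh connectivity (model omitted here).
rng = np.random.default_rng(0)
n_nodes, n_feats = 1000, 3
features = rng.normal(size=(n_nodes, n_feats))

mask_ratio = 0.4
masked = rng.random(n_nodes) < mask_ratio    # Bernoulli node mask
inputs = features.copy()
inputs[masked] = 0.0                          # replace with a zero mask token

targets = features[masked]                    # reconstruction targets
```

Reconstructing the hidden nodes forces the model to infer local flow fields from neighboring nodes, which is the robustness argument made above.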
|
2501.08758
|
Expanding Vietnamese SentiWordNet to Improve Performance of Vietnamese
Sentiment Analysis Models
|
cs.CL
|
Sentiment analysis is one of the most crucial tasks in Natural Language
Processing (NLP), involving the training of machine learning models to classify
text based on the polarity of opinions. Pre-trained Language Models (PLMs) can
be applied to downstream tasks through fine-tuning, eliminating the need to
train the model from scratch. Specifically, PLMs have been employed for
Sentiment Analysis, a process that involves detecting, analyzing, and
extracting the polarity of text sentiments. Numerous models have been proposed
to address this task, with pre-trained PhoBERT-V2 models standing out as the
state-of-the-art language models for Vietnamese. The PhoBERT-V2 pre-training
approach is based on RoBERTa, optimizing the BERT pre-training method for more
robust performance. In this paper, we introduce a novel approach that combines
PhoBERT-V2 and SentiWordnet for Sentiment Analysis of Vietnamese reviews. Our
proposed model utilizes PhoBERT-V2 for Vietnamese, offering a robust
optimization for the prominent BERT model in the context of Vietnamese
language, and leverages SentiWordNet, a lexical resource explicitly designed to
support sentiment classification applications. Experimental results on the VLSP
2016 and AIVIVN 2019 datasets demonstrate that our sentiment analysis system
has achieved excellent performance in comparison to other models.
|
2501.08760
|
Leveraging LLM Agents for Translating Network Configurations
|
cs.NI cs.AI cs.LG cs.SE
|
Configuration translation is a critical and frequent task in network
operations. When a network device is damaged or outdated, administrators need
to replace it to maintain service continuity. The replacement devices may
originate from different vendors, necessitating configuration translation to
ensure seamless network operation. However, translating configurations manually
is a labor-intensive and error-prone process. In this paper, we propose an
intent-based framework for translating network configuration with Large
Language Model (LLM) Agents. The core of our approach is an Intent-based
Retrieval Augmented Generation (IRAG) module that systematically splits a
configuration file into fragments, extracts intents, and generates accurate
translations. We also design a two-stage verification method to validate the
syntax and semantics correctness of the translated configurations. We implement
and evaluate the proposed method on real-world network configurations.
Experimental results show that our method achieves 97.74% syntax correctness,
outperforming state-of-the-art methods in translation accuracy.
|
2501.08763
|
Few-Shot Learner Generalizes Across AI-Generated Image Detection
|
cs.CV
|
Current fake image detectors trained on large synthetic image datasets
perform satisfactorily on limited studied generative models. However, they
suffer a notable performance decline over unseen models. Besides, collecting
adequate training data from online generative models is often expensive or
infeasible. To overcome these issues, we propose Few-Shot Detector (FSD), a
novel AI-generated image detector which learns a specialized metric space to
effectively distinguish unseen fake images by utilizing very few samples.
Experiments show FSD achieves state-of-the-art performance by $+7.4\%$ average
ACC on GenImage dataset. More importantly, our method is better capable of
capturing the intra-category common features in unseen images without further
training.
|
2501.08769
|
Enhanced Large Language Models for Effective Screening of Depression and
Anxiety
|
cs.CL
|
Depressive and anxiety disorders are widespread, necessitating timely
identification and management. Recent advances in Large Language Models (LLMs)
offer potential solutions, yet high costs and ethical concerns about training
data remain challenges. This paper introduces a pipeline for synthesizing
clinical interviews, resulting in 1,157 interactive dialogues (PsyInterview),
and presents EmoScan, an LLM-based emotional disorder screening system. EmoScan
distinguishes between coarse-grained (e.g., anxiety or depressive disorders) and
fine-grained disorders (e.g., major depressive disorders) and conducts high-quality
interviews. Evaluations showed that EmoScan exceeded the performance of base
models and other LLMs like GPT-4 in screening emotional disorders
(F1-score=0.7467). It also delivers superior explanations (BERTScore=0.9408)
and demonstrates robust generalizability (F1-score of 0.67 on an external
dataset). Furthermore, EmoScan outperforms baselines in interviewing skills, as
validated by automated ratings and human evaluations. This work highlights the
importance of scalable data-generative pipelines for developing effective
mental health LLM tools.
|
2501.08771
|
Admitting Ignorance Helps the Video Question Answering Models to Answer
|
cs.CV
|
Significant progress has been made in the field of video question answering
(VideoQA) thanks to deep learning and large-scale pretraining. Despite the
presence of sophisticated model structures and powerful video-text foundation
models, most existing methods focus solely on maximizing the correlation
between answers and video-question pairs during training. We argue that these
models often establish shortcuts, resulting in spurious correlations between
questions and answers, especially when the alignment between video and text
data is suboptimal. To address these spurious correlations, we propose a novel
training framework in which the model is compelled to acknowledge its ignorance
when presented with an intervened question, rather than making guesses solely
based on superficial question-answer correlations. We introduce methodologies
for intervening in questions, utilizing techniques such as displacement and
perturbation, and design frameworks for the model to admit its lack of
knowledge in both multi-choice VideoQA and open-ended settings. In practice, we
integrate a state-of-the-art model into our framework to validate its
effectiveness. The results clearly demonstrate that our framework can
significantly enhance the performance of VideoQA models with minimal structural
modifications.
|
2501.08774
|
How Developers Interact with AI: A Taxonomy of Human-AI Collaboration in
Software Engineering
|
cs.SE cs.AI cs.HC
|
Artificial intelligence (AI), including large language models and generative
AI, is emerging as a significant force in software development, offering
developers powerful tools that span the entire development lifecycle. Although
software engineering research has extensively studied AI tools in software
development, the specific types of interactions between developers and these
AI-powered tools have only recently begun to receive attention. Understanding
and improving these interactions has the potential to enhance productivity,
trust, and efficiency in AI-driven workflows. In this paper, we propose a
taxonomy of interaction types between developers and AI tools, identifying
eleven distinct interaction types, such as auto-complete code suggestions,
command-driven actions, and conversational assistance. Building on this
taxonomy, we outline a research agenda focused on optimizing AI interactions,
improving developer control, and addressing trust and usability challenges in
AI-assisted development. By establishing a structured foundation for studying
developer-AI interactions, this paper aims to stimulate research on creating
more effective, adaptive AI tools for software development.
|
2501.08778
|
Networked Agents in the Dark: Team Value Learning under Partial
Observability
|
cs.LG cs.AI cs.MA
|
We propose a novel cooperative multi-agent reinforcement learning (MARL)
approach for networked agents. In contrast to previous methods that rely on
complete state information or joint observations, our agents must learn how to
reach shared objectives under partial observability. During training, they
collect individual rewards and approximate a team value function through local
communication, resulting in cooperative behavior. To describe our problem, we
introduce the networked dynamic partially observable Markov game framework,
where agents communicate over a switching topology communication network. Our
distributed method, DNA-MARL, uses a consensus mechanism for local
communication and gradient descent for local computation. DNA-MARL broadens
the range of possible applications of networked agents, being well suited to
real-world domains that impose privacy requirements and where messages may not
reach their recipients. We evaluate DNA-MARL across benchmark MARL scenarios. Our
results highlight the superior performance of DNA-MARL over previous methods.
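The consensus mechanism at the heart of the method can be sketched as follows (an illustrative toy, not the DNA-MARL implementation: scalar value estimates, Metropolis mixing weights, and an alternating ring/line graph standing in for the switching communication topology):

```python
import numpy as np

def consensus_step(values, adjacency):
    """One round of average consensus with Metropolis weights: the mixing
    matrix is doubly stochastic, so the network average is preserved while
    disagreement between agents shrinks."""
    n = len(values)
    deg = adjacency.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W @ values

# Four agents, each starting from its own local reward estimate; repeated
# local communication drives every estimate toward the team average.
rewards = np.array([1.0, 3.0, 5.0, 7.0])
ring = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
line = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
values = rewards.copy()
for t in range(60):           # alternate topologies: a switching network
    values = consensus_step(values, ring if t % 2 == 0 else line)
print(values)                 # all estimates close to the team average 4.0
```

No agent ever sees another agent's raw reward, only mixed estimates, which is what makes the scheme compatible with privacy constraints.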
|
2501.08779
|
Nesterov Acceleration for Ensemble Kalman Inversion and Variants
|
math.OC cs.LG stat.CO
|
Ensemble Kalman inversion (EKI) is a derivative-free, particle-based
optimization method for solving inverse problems. It can be shown that EKI
approximates a gradient flow, which allows the application of methods for
accelerating gradient descent. Here, we show that Nesterov acceleration is
effective in speeding up the reduction of the EKI cost function on a variety of
inverse problems. We also implement Nesterov acceleration for two EKI variants,
unscented Kalman inversion and ensemble transform Kalman inversion. Our
specific implementation takes the form of a particle-level nudge that is
simple to couple in a black-box fashion with any existing EKI variant,
incurs no additional computational expense, and introduces no additional
tuning hyperparameters. This work shows a pathway for future
research to translate advances in gradient-based optimization into advances in
gradient-free Kalman optimization.
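The particle-level nudge can be sketched on a toy linear inverse problem (a minimal NumPy illustration, not the paper's implementation; the (n-1)/(n+2) momentum schedule and the basic EKI step below are standard textbook choices assumed for the sketch):

```python
import numpy as np

def eki_step(U, G, y, gamma):
    """One basic ensemble Kalman inversion step.
    U: (J, d) ensemble, G: forward map returning (J, m), y: (m,) data."""
    W = G(U)
    du = U - U.mean(axis=0)
    dw = W - W.mean(axis=0)
    J = U.shape[0]
    C_uw = du.T @ dw / J                      # parameter-output cross-covariance
    C_ww = dw.T @ dw / J                      # output covariance
    K = C_uw @ np.linalg.inv(C_ww + gamma * np.eye(len(y)))
    return U + (y - W) @ K.T                  # Kalman-style particle update

def nesterov_eki(U0, G, y, gamma, n_iter=50):
    """EKI with a Nesterov-style particle-level nudge: each particle is
    extrapolated along its previous displacement before the black-box
    EKI update; no extra tuning beyond the classical schedule."""
    U_prev, U = U0.copy(), U0.copy()
    for n in range(1, n_iter + 1):
        beta = (n - 1) / (n + 2)              # classical momentum schedule
        V = U + beta * (U - U_prev)           # the particle-level nudge
        U_prev, U = U, eki_step(V, G, y, gamma)
    return U

# Toy linear inverse problem: recover u_true from y = A u_true.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))
u_true = np.array([1.0, -2.0, 0.5])
y = A @ u_true
G = lambda U: U @ A.T
U0 = rng.normal(size=(40, 3))
U_final = nesterov_eki(U0, G, y, gamma=1e-3)
print(np.round(U_final.mean(axis=0), 2))      # ensemble mean close to u_true
```

Because the nudge acts only on the particles before the update, it can wrap any EKI variant that exposes a step function, which is the black-box coupling property the abstract highlights.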
|
2501.08780
|
Deep learning for temporal super-resolution 4D Flow MRI
|
cs.LG
|
4D Flow Magnetic Resonance Imaging (4D Flow MRI) is a non-invasive technique
for volumetric, time-resolved blood flow quantification. However, apparent
trade-offs between acquisition time, image noise, and resolution limit clinical
applicability. In particular, in regions of highly transient flow, coarse
temporal resolution can hinder accurate capture of physiologically relevant
flow variations. To overcome these issues, post-processing techniques using
deep learning have shown promising results to enhance resolution post-scan
using so-called super-resolution networks. However, while super-resolution
work has focused on spatial upsampling, temporal super-resolution remains
largely unexplored. The aim of this study was therefore to implement and evaluate a
residual network for temporal super-resolution 4D Flow MRI. To achieve this, an
existing spatial network (4DFlowNet) was re-designed for temporal upsampling,
adapting input dimensions, and optimizing internal layer structures. Training
and testing were performed using synthetic 4D Flow MRI data originating from
patient-specific in-silico models, as well as using in-vivo datasets. Overall,
excellent performance was achieved with input velocities effectively denoised
and temporally upsampled, with a mean absolute error (MAE) of 1.0 cm/s in an
unseen in-silico setting, outperforming deterministic alternatives (linear
interpolation MAE = 2.3 cm/s, sinc interpolation MAE = 2.6 cm/s). Further, the
network synthesized high-resolution temporal information from unseen
low-resolution in-vivo data, with strong correlation observed at peak flow
frames. As such, our results highlight the potential of utilizing data-driven
neural networks for temporal super-resolution 4D Flow MRI, enabling
high-frame-rate flow quantification without extending acquisition times beyond
clinically acceptable limits.
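The deterministic baselines the network is compared against are easy to reproduce (a hedged toy example: the pulsatile waveform and sampling rates below are illustrative, not the study's data):

```python
import numpy as np

def waveform(t):
    """Toy pulsatile velocity trace (cm/s): a sharp systolic peak on top
    of a slow periodic component."""
    return 80 * np.exp(-40 * (t % 1.0 - 0.15) ** 2) + 10 * np.cos(2 * np.pi * t)

t_fine = np.linspace(0, 1, 200, endpoint=False)   # target high frame rate
t_coarse = t_fine[::10]                           # acquired low frame rate
v_coarse = waveform(t_coarse)

# Linear-interpolation baseline, scored by mean absolute error (MAE)
# against the known fine-grid ground truth.
v_linear = np.interp(t_fine, t_coarse, v_coarse)
mae = np.abs(v_linear - waveform(t_fine)).mean()
print(f"linear-interpolation MAE: {mae:.2f} cm/s")
```

The error concentrates around the sharp systolic peak, which is exactly the highly transient regime where a learned temporal upsampler has the most to gain over interpolation.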
|
2501.08781
|
Cultivating Precision: Comparative Analysis of Sensor-Based Yogurt
Fermentation Monitoring Techniques
|
eess.SY cs.SY
|
Fermented dairy products, including yogurt, are widely consumed for their
nutritional and health benefits. While numerous methods exist to monitor and
understand yogurt fermentation, the literature lacks an integrated evaluation
of diverse sensing approaches within a single experimental framework. To
address this gap, this study systematically examines and compares multiple
measurement techniques--electrical impedance, DC resistance, pH, optical
transparency, carbon dioxide concentration, ambient temperature, and relative
humidity--in tracking the yogurt fermentation process. By presenting a unified
set of experimental results and assessing each method's observational
characteristics, this work offers an encompassing reference point for
researchers seeking to understand the relative merits and limitations of
different sensing modalities. Rather than establishing definitive guidelines or
practical recommendations, the findings provide a foundation for subsequent
investigations into sensor-based fermentation monitoring, thereby contributing
to a more comprehensive understanding of yogurt fermentation dynamics.
|
2501.08786
|
Differentiability and overlap concentration in optimal Bayesian
inference
|
math.PR cs.IT math.IT math.ST stat.TH
|
In this short note, we consider models of optimal Bayesian inference of
finite-rank tensor products. We add to the model a linear channel parametrized
by $h$. We show that at every interior differentiable point $h$ of the free
energy (associated with the model), the overlap concentrates at the gradient of
the free energy and the minimum mean-square error converges to a related limit.
In other words, the model is replica-symmetric at every differentiable point.
At any signal-to-noise ratio, such points $h$ form a full-measure set (hence
$h=0$ belongs to the closure of these points). For a sufficiently low
signal-to-noise ratio, we show that every interior point is a differentiable
point.
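In schematic form (notation assumed rather than taken verbatim from the paper), the concentration statement reads:

```latex
% At an interior point h where the free energy F is differentiable, the
% overlap R_N concentrates at the gradient of F, with <.> the posterior
% (Gibbs) average and E the expectation over the data:
\[
  \lim_{N \to \infty} \mathbb{E}\,\big\langle \,\lvert R_N - \nabla_h F(h) \rvert\, \big\rangle = 0 .
\]
```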
|
2501.08795
|
Heat transfer simulation of window frames with SPHinXsys
|
cs.CE
|
Maintaining a comfortable temperature inside a building requires appropriate
thermal insulation of windows, which can be optimised iteratively with
numerical simulation. Smoothed particle hydrodynamics (SPH) is a fully
Lagrangian method widely used for simulating multi-physics applications with
high computational efficiency and accuracy, and it is particularly
advantageous in physically coupled problems such as heat-fluid-solid
interaction. The focus of this study is to simulate the heat transfer
process in various window frames under convective boundary conditions according
to ISO 10077-2:2012. This paper demonstrates the accuracy and compatibility of
SPH when dealing with heat transfer problems, which ensures further development
of thermal coupling with other physical fields. The results and methods used in
this paper provide some guidance on how to properly handle heat transfer
simulations using SPH, which can be extended to multi-physics coupled
simulations in the future.
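The kind of particle-based conduction step SPH relies on can be sketched in one dimension (a hedged toy example using a Gaussian kernel and a Brookshaw-style Laplacian approximation; SPHinXsys itself is far more general and handles the ISO 10077-2 boundary conditions):

```python
import numpy as np

# 1D SPH-style heat conduction: particles on a line exchange heat through
# a Gaussian smoothing kernel W. For a 1D Gaussian kernel, the Brookshaw
# Laplacian reduces to (4/h^2) * sum_j (m/rho) * (T_j - T_i) * W_ij.
n, dx = 200, 0.01
h = 2 * dx                                   # smoothing length (assumption)
x = np.arange(n) * dx
T = np.where(np.abs(x - 1.0) < 0.05, 100.0, 20.0)   # hot strip in the middle
alpha, dt = 1e-4, 0.05                       # diffusivity, stable time step

r = x[:, None] - x[None, :]
W = np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))    # Gaussian kernel matrix
T0_mean = T.mean()
for _ in range(200):
    lap = (4.0 / h**2) * (dx * W * (T[None, :] - T[:, None])).sum(axis=1)
    T = T + dt * alpha * lap                 # explicit conduction update
print(f"peak {T.max():.1f}, mean {T.mean():.2f}")
```

Because the pairwise exchange terms are antisymmetric, total heat is conserved exactly while the hot strip smooths out, which is the conservation property that makes SPH attractive for coupled thermal simulations.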
|
2501.08799
|
Exploring ChatGPT for Face Presentation Attack Detection in Zero and
Few-Shot in-Context Learning
|
cs.CV cs.CR
|
This study highlights the potential of ChatGPT (specifically GPT-4o) as a
competitive alternative for Face Presentation Attack Detection (PAD),
outperforming several PAD models, including commercial solutions, in specific
scenarios. Our results show that GPT-4o demonstrates high consistency,
particularly in few-shot in-context learning, where its performance improves as
more examples are provided (reference data). We also observe that detailed
prompts enable the model to provide scores reliably, a behavior not observed
with concise prompts. Additionally, explanation-seeking prompts slightly
enhance the model's performance by improving its interpretability. Remarkably,
the model exhibits emergent reasoning capabilities, correctly predicting the
attack type (print or replay) with high accuracy in few-shot scenarios, despite
not being explicitly instructed to classify attack types. Despite these
strengths, GPT-4o faces challenges in zero-shot tasks, where its performance is
limited compared to specialized PAD systems. Experiments were conducted on a
subset of the SOTERIA dataset, ensuring compliance with data privacy
regulations by using only data from consenting individuals. These findings
underscore GPT-4o's promise in PAD applications, laying the groundwork for
future research to address broader data privacy concerns and improve
cross-dataset generalization. Code available here:
https://gitlab.idiap.ch/bob/bob.paper.wacv2025_chatgpt_face_pad
|