| id | title | categories | abstract |
|---|---|---|---|
2501.01039 | MSWA: Refining Local Attention with Multi-Scale Window Attention | cs.CL cs.AI | Transformer-based LLMs have achieved exceptional performance across a wide
range of NLP tasks. However, the standard self-attention mechanism suffers from
quadratic time complexity and linearly increasing cache size. Sliding window
attention (SWA) solves this problem by restricting the attention range to a
fixed-size local context window. Nevertheless, SWA employs a uniform window
size for each head in each layer, making it inefficient in capturing context of
varying scales. To mitigate this limitation, we propose Multi-Scale Window
Attention (MSWA) which applies diverse window sizes across heads and layers in
the Transformer. It not only allows for different window sizes among heads
within the same layer but also progressively increases window size allocation
from shallow to deep layers, thus enabling the model to capture contextual
information with different lengths and distances. Experimental results on
language modeling and common-sense reasoning tasks substantiate that MSWA
outperforms traditional local attention in both effectiveness and efficiency.
|
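The allocation idea above can be sketched in a few lines of NumPy; the doubling schedule across heads and layers below is an illustrative assumption, not the paper's exact allocation rule:

```python
import numpy as np

def mswa_window_sizes(n_layers, n_heads, base_window):
    """Illustrative MSWA-style allocation: window sizes vary across heads
    within a layer and grow from shallow to deep layers. The doubling
    schedule here is an assumption, not the paper's exact rule."""
    sizes = np.zeros((n_layers, n_heads), dtype=int)
    for layer in range(n_layers):
        for head in range(n_heads):
            sizes[layer, head] = base_window * (2 ** layer) * (2 ** head)
    return sizes

def sliding_window_mask(seq_len, window):
    """Causal sliding-window mask: token i attends to tokens [i-window+1, i]."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

sizes = mswa_window_sizes(n_layers=2, n_heads=2, base_window=4)
mask = sliding_window_mask(seq_len=8, window=int(sizes[0, 0]))
```

Each head's mask is built from its own window size, so heads in the same layer see different context lengths.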
2501.01040 | Event Masked Autoencoder: Point-wise Action Recognition with Event-Based
Cameras | cs.CV | Dynamic vision sensors (DVS) are bio-inspired devices that capture visual
information in the form of asynchronous events, which encode changes in pixel
intensity with high temporal resolution and low latency. These events provide
rich motion cues that can be exploited for various computer vision tasks, such
as action recognition. However, most existing DVS-based action recognition
methods lose temporal information during data transformation or suffer from
noise and outliers caused by sensor imperfections or environmental factors. To
address these challenges, we propose a novel framework that preserves and
exploits the spatiotemporal structure of event data for action recognition. Our
framework consists of two main components: 1) a point-wise event masked
autoencoder (MAE) that learns a compact and discriminative representation of
event patches by reconstructing them from masked raw event camera points;
2) an improved event point patch generation algorithm that leverages an event
data inlier model and point-wise data augmentation techniques to enhance the
quality and diversity of event point patches. To the best of our knowledge,
our approach is the first to apply pre-training to raw event camera point
data, and we propose a novel event point patch embedding that enables
transformer-based models to operate on event camera data.
|
2501.01042 | Image-based Multimodal Models as Intruders: Transferable Multimodal
Attacks on Video-based MLLMs | cs.CV cs.CR cs.LG | Video-based multimodal large language models (V-MLLMs) have shown
vulnerability to adversarial examples in video-text multimodal tasks. However,
the transferability of adversarial videos to unseen models--a common and
practical real-world scenario--remains unexplored. In this paper, we pioneer an
investigation into the transferability of adversarial video samples across
V-MLLMs. We find that existing adversarial attack methods face significant
limitations when applied in black-box settings for V-MLLMs, which we attribute
to the following shortcomings: (1) lacking generalization in perturbing video
features, (2) focusing only on sparse key-frames, and (3) failing to integrate
multimodal information. To address these limitations and deepen the
understanding of V-MLLM vulnerabilities in black-box scenarios, we introduce
the Image-to-Video MLLM (I2V-MLLM) attack. In I2V-MLLM, we utilize an
image-based multimodal model (IMM) as a surrogate model to craft adversarial
video samples. Multimodal interactions and temporal information are integrated
to disrupt video representations within the latent space, improving adversarial
transferability. In addition, a perturbation propagation technique is
introduced to handle different unknown frame sampling strategies. Experimental
results demonstrate that our method can generate adversarial examples that
exhibit strong transferability across different V-MLLMs on multiple video-text
multimodal tasks. Compared to white-box attacks on these models, our black-box
attacks (using BLIP-2 as the surrogate model) achieve competitive performance,
with average attack success rates of 55.48% on MSVD-QA and 58.26% on MSRVTT-QA
for VideoQA tasks. Our code will be released upon acceptance.
|
2501.01045 | ZeroFlow: Overcoming Catastrophic Forgetting is Easier than You Think | cs.CV cs.LG | Backpropagation provides the standard configuration for overcoming
catastrophic forgetting; for example, SGD and Adam are commonly used for weight
updates in continual learning and continual pre-training. In practice,
permission to access gradient information is not always granted (the gradient
ban), such as black-box APIs, hardware limitations, and non-differentiable
systems. To bridge this gap, we introduce the first benchmark ZeroFlow to
evaluate gradient-free optimization algorithms for overcoming forgetting. This
benchmark examines a suite of forward-pass methods across multiple
forgetting scenarios and datasets. We find that forward passes alone are
enough to overcome forgetting. Our findings reveal new optimization principles
that highlight the potential of forward passes in mitigating forgetting, managing
task conflicts, and reducing memory demands, alongside novel enhancements that
further mitigate forgetting with just one forward pass. This work provides
essential insights and tools for advancing forward pass methods to overcome
forgetting.
|
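A classic forward-pass-only optimizer of the kind such a benchmark covers is SPSA, which estimates a gradient from just two perturbed forward evaluations; the sketch below is a generic illustration, not one of ZeroFlow's specific methods:

```python
import numpy as np

def spsa_step(loss_fn, w, lr=0.1, delta=1e-2, rng=None):
    """One SPSA update: a gradient estimate from two forward passes with a
    random +/-1 perturbation -- no gradient access (backpropagation) required."""
    if rng is None:
        rng = np.random.default_rng(0)
    perturb = rng.choice([-1.0, 1.0], size=w.shape)
    g_hat = (loss_fn(w + delta * perturb)
             - loss_fn(w - delta * perturb)) / (2 * delta) * perturb
    return w - lr * g_hat

# Toy check: minimize ||w - 3||^2 using forward passes only.
loss = lambda w: float(np.sum((w - 3.0) ** 2))
w = np.zeros(4)
rng = np.random.default_rng(42)
for _ in range(200):
    w = spsa_step(loss, w, rng=rng)
```

The update touches the model only through `loss_fn`, which is exactly the black-box setting of the "gradient ban" described above.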
2501.01046 | FED: Fast and Efficient Dataset Deduplication Framework with GPU
Acceleration | cs.CL | Dataset deduplication plays a crucial role in enhancing data quality,
ultimately improving the training performance and efficiency of large language
models. A commonly used method for data deduplication is the MinHash LSH
algorithm. Recently, NVIDIA introduced a GPU-based MinHash LSH deduplication
method, but it remains suboptimal, leaving room for further improvement in
processing efficiency. This paper proposes a GPU-accelerated deduplication
framework, FED, that optimizes MinHash LSH for GPU clusters and leverages
computationally efficient, partially reusable non-cryptographic hash functions.
FED significantly outperforms the CPU-based deduplication tool in SlimPajama
(using 64 logical CPU cores) by up to 107.2 times and the GPU-based tool in
NVIDIA NeMo Curator by up to 6.3 times when processing 30 million documents on
a node with four GPUs. Notably, our method dramatically accelerates the
previously time-consuming MinHash signature generation phase, achieving
speed-ups of up to 260 times compared to the CPU baseline. Despite these gains in
efficiency, FED maintains high deduplication quality, with the duplicate
document sets reaching a Jaccard similarity of over 0.96 compared to those
identified by the standard MinHash algorithm. In large-scale experiments, the
deduplication of 1.2 trillion tokens is completed in just 6 hours in a
four-node, 16-GPU environment. The related code is publicly available on GitHub
(\href{https://github.com/mcrl/FED}{https://github.com/mcrl/FED}).
|
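The MinHash signature step the paper accelerates can be sketched in plain Python; CRC32 stands in here for FED's computationally efficient non-cryptographic hash functions, whose exact construction the abstract does not specify:

```python
import zlib

def minhash_signature(tokens, num_perm=16):
    """Minimal MinHash sketch: for each of num_perm salted hash functions
    (CRC32 as a stand-in for FED's non-cryptographic hashes), keep the
    smallest hash value over the document's token set."""
    return [
        min(zlib.crc32(f"{salt}:{t}".encode()) for t in set(tokens))
        for salt in range(num_perm)
    ]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

doc_a = "the quick brown fox jumps over the lazy dog".split()
doc_b = "the quick brown fox jumped over the lazy dog".split()
sim = estimated_jaccard(minhash_signature(doc_a), minhash_signature(doc_b))
```

Near-duplicate documents collide on most signature slots, which is what LSH banding then exploits to bucket candidate duplicates.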
2501.01049 | TS-SatMVSNet: Slope Aware Height Estimation for Large-Scale Earth
Terrain Multi-view Stereo | cs.CV | 3D terrain reconstruction with remote sensing imagery achieves cost-effective
and large-scale earth observation, and is crucial for safeguarding against
natural disasters, monitoring ecological changes, and preserving the
environment. Recently, learning-based multi-view stereo (MVS) methods have shown
promise in this task. However, these methods simply modify the general
learning-based MVS framework for height estimation, which overlooks the terrain
characteristics and results in insufficient accuracy. Considering that the
Earth's surface generally undulates with no drastic changes and can be measured
by slope, integrating slope considerations into MVS frameworks could enhance
the accuracy of terrain reconstructions. To this end, we propose an end-to-end
slope-aware height estimation network named TS-SatMVSNet for large-scale remote
sensing terrain reconstruction. To obtain an effective slope representation,
drawing on the mathematical notion of a gradient, we propose a height-based
slope calculation strategy that first derives a slope map from the height map
to measure terrain undulation. To fully integrate slope
information into the MVS pipeline, we separately design two slope-guided
modules to enhance reconstruction outcomes at both micro and macro levels.
Specifically, at the micro level, we design a slope-guided interval partition
module for refined height estimation using slope values. At the macro level, we
propose a height correction module that uses a learnable Gaussian smoothing
operator to amend inaccurate height values. Additionally, to enhance the
efficacy of height estimation, we propose a slope direction loss that
implicitly optimizes height estimation results. Extensive experiments on the
WHU-TLC dataset and MVS3D dataset show that our proposed method achieves
state-of-the-art performance and demonstrates competitive generalization
ability.
|
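The height-map-to-slope-map step can be illustrated with simple finite differences; the paper's exact operator may differ from this sketch:

```python
import numpy as np

def slope_map(height, spacing=1.0):
    """Slope magnitude in degrees from a height map via finite-difference
    gradients -- a sketch of a height-based slope calculation, not the
    paper's exact strategy."""
    dzdy, dzdx = np.gradient(height, spacing)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# A plane rising by 1 height unit per ground unit has a uniform 45-degree slope.
height = np.tile(np.arange(5, dtype=float), (5, 1))
slope = slope_map(height)
```

Flat terrain maps to near-zero slope and steep terrain to large values, giving the per-pixel undulation measure the modules above consume.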
2501.01054 | Dynamic Scaling of Unit Tests for Code Reward Modeling | cs.CL cs.SE | Current large language models (LLMs) often struggle to produce accurate
responses on the first attempt for complex reasoning tasks like code
generation. Prior research tackles this challenge by generating multiple
candidate solutions and validating them with LLM-generated unit tests. The
execution results of unit tests serve as reward signals to identify correct
solutions. However, because LLMs often make mistakes with confidence, these
unit tests are not reliable, which diminishes the quality of the reward
signal. Motivated by the observation that scaling the number of solutions
improves LLM performance, we explore the impact of scaling unit tests to
enhance reward signal quality. Our pilot experiment reveals a positive
correlation between the number of unit
tests and reward signal quality, with greater benefits observed in more
challenging problems. Based on these insights, we propose CodeRM-8B, a
lightweight yet effective unit test generator that enables efficient and
high-quality unit test scaling. Additionally, we implement a dynamic scaling
mechanism that adapts the number of unit tests based on problem difficulty,
further improving efficiency. Experimental results show that our approach
significantly improves performance across various models on three benchmarks
(e.g., with gains of 18.43% for Llama3-8B and 3.42% for GPT-4o-mini on
HumanEval Plus).
|
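The execution-based reward idea can be sketched as follows; the candidate/test setup is a toy illustration, not the CodeRM-8B pipeline:

```python
def unit_test_reward(candidates, unit_tests):
    """Score each candidate solution by the fraction of generated unit tests
    it passes; the top-scoring candidate is selected. A sketch of
    execution-based reward, not the CodeRM-8B pipeline itself."""
    scores = []
    for solve in candidates:
        passed = 0
        for inp, expected in unit_tests:
            try:
                if solve(inp) == expected:
                    passed += 1
            except Exception:
                pass  # a crashing candidate simply fails that test
        scores.append(passed / len(unit_tests))
    return scores

# Toy task: absolute value. One of the three "generated" tests is wrong,
# as LLM-written tests sometimes are; the aggregate signal still ranks
# the correct solution first.
candidates = [lambda x: abs(x), lambda x: x]
tests = [(-2, 2), (3, 3), (-1, 2)]       # the last test case is faulty
scores = unit_test_reward(candidates, tests)
```

Scaling the number of tests averages out individual faulty tests, which is why reward quality improves with more of them.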
2501.01056 | Risks of Cultural Erasure in Large Language Models | cs.CL cs.AI | Large language models are increasingly being integrated into applications
that shape the production and discovery of societal knowledge such as search,
online education, and travel planning. As a result, language models will shape
how people learn about, perceive, and interact with global cultures, making it
important to consider whose knowledge systems and perspectives are represented
in models. Recognizing this importance, a growing body of work in Machine Learning
and NLP has focused on evaluating gaps in global cultural representational
distribution within outputs. However, more work is needed on developing
benchmarks for cross-cultural impacts of language models that stem from a
nuanced sociologically-aware conceptualization of cultural impact or harm. We
join this line of work, arguing for the need for metricizable evaluations of
language technologies that interrogate and account for historical power
inequities and differential impacts of representation on global cultures,
particularly for cultures already under-represented in digital corpora. We
look at two concepts of erasure: omission, where cultures are not represented
at all, and simplification, where cultural complexity is erased by presenting
one-dimensional views of a rich culture. The former focuses on whether
something is represented, and the latter on how it is represented. We focus our
analysis on two task contexts with the potential to influence global cultural
production. First, we probe representations that a language model produces
about different places around the world when asked to describe these contexts.
Second, we analyze the cultures represented in the travel recommendations
produced by a set of language model applications. Our study shows ways in which
the NLP community and application developers can begin to operationalize
complex socio-cultural considerations into standard evaluations and benchmarks.
|
2501.01057 | HPC Application Parameter Autotuning on Edge Devices: A Bandit Learning
Approach | cs.PF cs.LG cs.SY eess.SY | The growing necessity for enhanced processing capabilities in edge devices
with limited resources has led us to develop effective methods for improving
high-performance computing (HPC) applications. In this paper, we introduce LASP
(Lightweight Autotuning of Scientific Application Parameters), a novel strategy
designed to address the parameter search space challenge in edge devices. Our
strategy employs a multi-armed bandit (MAB) technique focused on online
exploration and exploitation. Notably, LASP takes a dynamic approach, adapting
seamlessly to changing environments. We tested LASP with four HPC applications:
Lulesh, Kripke, Clomp, and Hypre. Its lightweight nature makes it particularly
well-suited for resource-constrained edge devices. By employing the MAB
framework to efficiently navigate the search space, we achieved significant
performance improvements while adhering to the stringent computational limits
of edge devices. Our experimental results demonstrate the effectiveness of LASP
in optimizing parameter search on edge devices.
|
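An epsilon-greedy bandit is the simplest instance of the online exploration/exploitation loop described above; LASP's exact MAB strategy may differ from this sketch, and the configuration space and runtime model below are hypothetical:

```python
import random

def epsilon_greedy_tune(configs, measure_runtime, steps=200, eps=0.1, seed=0):
    """Online autotuning sketch: each bandit arm is a parameter configuration
    and the reward is negative runtime."""
    rng = random.Random(seed)
    counts = [0] * len(configs)
    values = [0.0] * len(configs)        # running mean reward per arm
    for _ in range(steps):
        if 0 in counts:                  # try every arm at least once
            arm = counts.index(0)
        elif rng.random() < eps:         # explore
            arm = rng.randrange(len(configs))
        else:                            # exploit the best arm so far
            arm = max(range(len(configs)), key=lambda a: values[a])
        reward = -measure_runtime(configs[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return configs[max(range(len(configs)), key=lambda a: values[a])]

# Toy "application" whose runtime is minimized at block_size 64.
runtime = lambda cfg: abs(cfg["block_size"] - 64) + 1.0
best = epsilon_greedy_tune([{"block_size": b} for b in (16, 32, 64, 128)], runtime)
```

Only running means and counts are stored per arm, which is what makes bandit-style tuning lightweight enough for edge devices.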
2501.01059 | Dynamic Attention-Guided Context Decoding for Mitigating Context
Faithfulness Hallucinations in Large Language Models | cs.CL cs.LG | Large language models (LLMs) often suffer from context faithfulness
hallucinations, where outputs deviate from retrieved information due to
insufficient context utilization and high output uncertainty. Our uncertainty
evaluation experiments reveal a strong correlation between high uncertainty and
hallucinations. We hypothesize that attention mechanisms encode signals
indicative of contextual utilization, validated through probing analysis. Based
on these insights, we propose Dynamic Attention-Guided Context Decoding
(DAGCD), a lightweight framework that integrates attention distributions and
uncertainty signals in a single-pass decoding process. Experiments across QA
datasets demonstrate DAGCD's effectiveness, achieving significant improvements
in faithfulness and robustness while maintaining computational efficiency.
|
2501.01061 | An Efficient Outlier Detection Algorithm for Data Streaming | stat.CO cs.LG stat.AP | The nature of modern data is increasingly real-time, making outlier detection
crucial in any data-related field, such as finance for fraud detection and
healthcare for monitoring patient vitals. Traditional outlier detection
methods, such as the Local Outlier Factor (LOF) algorithm, struggle with
real-time data due to the need for extensive recalculations with each new data
point, limiting their application in real-time environments. While the
Incremental LOF (ILOF) algorithm has been developed to tackle the challenges of
online anomaly detection, it remains computationally expensive when processing
large streams of data points, and its detection performance may degrade after a
certain threshold of points have streamed in. In this paper, we propose a novel
approach to enhance the efficiency of LOF algorithms for online anomaly
detection, named the Efficient Incremental LOF (EILOF) algorithm. The EILOF
algorithm only computes the LOF scores of new points without altering the LOF
scores of existing data points. Although the LOF scores of existing points are
therefore no longer exact under the new algorithm, datasets often contain
noise, and minor deviations in LOF score calculations do not necessarily
degrade detection performance. In fact, such deviations can sometimes enhance
outlier detection. We systematically tested this approach on both simulated and
real-world datasets, demonstrating that EILOF outperforms ILOF as the volume of
streaming data increases across various scenarios. Compared with ILOF, the
EILOF algorithm not only significantly reduces computational costs but also
systematically improves detection accuracy as the number of streamed points
grows.
|
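The core idea, scoring only the new points against a frozen reference set, can be sketched with the standard LOF definitions; this is a simplification of EILOF, not the paper's algorithm:

```python
import numpy as np

def _knn(point, data, k):
    d = np.linalg.norm(data - point, axis=1)
    idx = np.argsort(d)[:k]
    return idx, d[idx]

def lof_of_new_points(reference, new_points, k=3):
    """EILOF-style sketch: LOF scores are computed only for incoming points
    against a frozen reference set; the reference points' own scores are
    never recomputed."""
    n = len(reference)
    kdist = np.empty(n)              # k-distance of each reference point
    nbrs, nbr_d = [], []
    for i in range(n):
        mask = np.arange(n) != i
        idx, d = _knn(reference[i], reference[mask], k)
        nbrs.append(np.arange(n)[mask][idx])
        nbr_d.append(d)
        kdist[i] = d[-1]
    lrd = np.empty(n)                # local reachability density
    for i in range(n):
        reach = np.maximum(kdist[nbrs[i]], nbr_d[i])
        lrd[i] = k / reach.sum()
    scores = []
    for p in new_points:
        idx, d = _knn(p, reference, k)
        reach = np.maximum(kdist[idx], d)
        # LOF(p) = mean neighbor lrd / lrd(p), with lrd(p) = k / sum(reach)
        scores.append(float(np.mean(lrd[idx]) * reach.sum() / k))
    return scores

rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 1.0, size=(50, 2))
scores = lof_of_new_points(cluster, np.array([[0.0, 0.0], [8.0, 8.0]]))
```

Because the reference-set statistics are computed once, each new point costs only one neighbor search rather than the cascade of updates ILOF performs.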
2501.01062 | Fides: Scalable Censorship-Resistant DAG Consensus via Trusted
Components | cs.DC cs.DB | Recently, consensus protocols based on Directed Acyclic Graph (DAG) have
gained significant attention due to their potential to build robust blockchain
systems, particularly in asynchronous networks. In this paper, we propose
Fides, an asynchronous DAG-based BFT consensus protocol that leverages Trusted
Execution Environments (TEEs) to tackle three major scalability and security
challenges faced by existing protocols: (i) the need for a larger quorum size
(i.e., at least 3x larger) to tolerate Byzantine replicas, (ii) high
communication costs and reliance on expensive cryptographic primitives (i.e.,
global common coin) to reach agreement in asynchronous networks, and (iii) poor
censorship resilience undermining the liveness guarantee. Specifically, Fides
adopts four trusted components--Reliable Broadcast, Vertex Validation, Common
Coin, and Transaction Disclosure--within TEEs. Incorporating these components
enables Fides to achieve linear message complexity, guaranteed censorship
resilience, 2x larger quorum size, and lightweight common coin usage. Moreover,
abstracting these essential components rather than porting the entire protocol
into TEEs significantly reduces the Trusted Computing Base (TCB).
Experimental evaluations of Fides in local and geo-distributed networks
demonstrate its superior performance compared to established state-of-the-art
protocols such as Tusk, RCC, HotStuff, and PBFT. The results indicate that
Fides achieves a throughput of 400k transactions per second in a
geo-distributed network and 810k transactions per second in a local network.
Our analysis further explores the protocol's overhead, highlighting its
suitability and effectiveness for practical deployment in real-world blockchain
systems.
|
2501.01067 | Enhancing Precision of Automated Teller Machines Network Quality
Assessment: Machine Learning and Multi Classifier Fusion Approaches | cs.LG | Ensuring reliable ATM services is essential for modern banking, directly
impacting customer satisfaction and the operational efficiency of financial
institutions. This study introduces a data fusion approach that utilizes
multi-classifier fusion techniques, with a special focus on the Stacking
Classifier, to enhance the reliability of ATM networks. To address class
imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied,
enabling balanced learning for both frequent and rare events. The proposed
framework integrates diverse classification models - Random Forest, LightGBM,
and CatBoost - within a Stacking Classifier, achieving a dramatic reduction in
false alarms from 3.56 percent to just 0.71 percent, along with an outstanding
overall accuracy of 99.29 percent. This multi-classifier fusion method
synthesizes the strengths of individual models, leading to significant cost
savings and improved operational decision-making. By demonstrating the power of
machine learning and data fusion in optimizing ATM status detection, this
research provides practical and scalable solutions for financial institutions
aiming to enhance their ATM network performance and customer satisfaction.
|
2501.01069 | BeliN: A Novel Corpus for Bengali Religious News Headline Generation
using Contextual Feature Fusion | cs.CL cs.LG | Automatic text summarization, particularly headline generation, remains a
critical yet underexplored area for Bengali religious news. Existing approaches
to headline generation typically rely solely on the article content,
overlooking crucial contextual features such as sentiment, category, and
aspect. This limitation significantly hinders their effectiveness and overall
performance. This study addresses this limitation by introducing a novel
corpus, BeliN (Bengali Religious News) - comprising religious news articles
from prominent Bangladeshi online newspapers, and MultiGen - a contextual
multi-input feature fusion headline generation approach. Leveraging
transformer-based pre-trained language models such as BanglaT5, mBART, mT5, and
mT0, MultiGen integrates additional contextual features - including category,
aspect, and sentiment - with the news content. This fusion enables the model to
capture critical contextual information often overlooked by traditional
methods. Experimental results demonstrate the superiority of MultiGen over the
baseline approach that uses only news content, achieving a BLEU score of 18.61
and ROUGE-L score of 24.19, compared to baseline approach scores of 16.08 and
23.08, respectively. These findings underscore the importance of incorporating
contextual features in headline generation for low-resource languages. By
bridging linguistic and cultural gaps, this research advances natural language
processing for Bengali and other underrepresented languages. To promote
reproducibility and further exploration, the dataset and implementation code
are publicly accessible at https://github.com/akabircs/BeliN.
|
2501.01072 | Evidential Calibrated Uncertainty-Guided Interactive Segmentation
paradigm for Ultrasound Images | cs.CV | Accurate and robust ultrasound image segmentation is critical for
computer-aided diagnostic systems. Nevertheless, the inherent challenges of
ultrasound imaging, such as blurry boundaries and speckle noise, often cause
traditional segmentation methods to struggle with performance. Despite recent
advancements in universal image segmentation, such as the Segment Anything
Model, existing interactive segmentation methods still suffer from inefficiency
and lack of specialization. These methods rely heavily on extensive accurate
manual or random sampling prompts for interaction, necessitating numerous
prompts and iterations to reach satisfactory performance. In response to this
challenge, we propose the Evidential Uncertainty-Guided Interactive
Segmentation (EUGIS), an end-to-end, efficient tiered interactive segmentation
paradigm based on evidential uncertainty estimation for ultrasound image
segmentation. Specifically, EUGIS harnesses evidence-based uncertainty
estimation, grounded in Dempster-Shafer theory and Subjective Logic, to gauge
the level of uncertainty in the model's predictions for different regions. By
prioritizing sampling in high-uncertainty regions, our method can effectively
simulate the interactive behavior of well-trained radiologists, making sampling
more targeted while reducing the number of prompts and iterations
required. Additionally, we propose a trainable calibration mechanism for
uncertainty estimation, which can further optimize the boundary between
certainty and uncertainty, thereby enhancing the confidence of uncertainty
estimation.
|
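The uncertainty-prioritized sampling step reduces to picking the next prompt at the most uncertain location; in this sketch the uncertainty map is simply given, whereas EUGIS derives it from evidential estimation:

```python
import numpy as np

def next_click_prompt(uncertainty_map):
    """Sketch of uncertainty-guided interaction: place the next prompt at the
    most uncertain pixel, mimicking where a trained radiologist would click."""
    return np.unravel_index(int(np.argmax(uncertainty_map)),
                            uncertainty_map.shape)

u = np.zeros((4, 5))
u[2, 3] = 0.9            # e.g. a blurry boundary region the model is unsure of
row, col = next_click_prompt(u)
```

Each interaction round re-runs the model, updates the map, and samples again, so prompts concentrate on ambiguous boundaries instead of being placed at random.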
2501.01073 | Graph Generative Pre-trained Transformer | cs.LG cs.AI | Graph generation is a critical task in numerous domains, including molecular
design and social network analysis, due to its ability to model complex
relationships and structured data. While most modern graph generative models
utilize adjacency matrix representations, this work revisits an alternative
approach that represents graphs as sequences of node and edge sets. We
advocate for this approach due to its efficient encoding of graphs and propose
a novel representation. Based on this representation, we introduce the Graph
Generative Pre-trained Transformer (G2PT), an auto-regressive model that learns
graph structures via next-token prediction. To further exploit G2PT's
capabilities as a general-purpose foundation model, we explore fine-tuning
strategies for two downstream applications: goal-oriented generation and graph
property prediction. We conduct extensive experiments across multiple datasets.
Results indicate that G2PT achieves superior generative performance on both
generic graph and molecule datasets. Furthermore, G2PT exhibits strong
adaptability and versatility in downstream tasks from molecular design to
property prediction.
|
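The sequence view the model builds on can be sketched as a simple serialization; the token vocabulary below is illustrative, not G2PT's exact representation:

```python
def graph_to_sequence(nodes, edges):
    """Serialize a graph as a flat token sequence (node set, then edge set)
    for next-token prediction -- a sketch of the sequence view, not G2PT's
    exact vocabulary."""
    tokens = ["<bos>"]
    for n in nodes:
        tokens += ["<node>", str(n)]
    for u, v in edges:
        tokens += ["<edge>", str(u), str(v)]
    tokens.append("<eos>")
    return tokens

def next_token_pairs(tokens):
    """Auto-regressive training pairs: predict token t+1 from tokens [0..t]."""
    return [(tokens[: t + 1], tokens[t + 1]) for t in range(len(tokens) - 1)]

seq = graph_to_sequence(nodes=[0, 1, 2], edges=[(0, 1), (1, 2)])
pairs = next_token_pairs(seq)
```

Unlike an adjacency matrix, the sequence length grows with the number of nodes plus edges, which is what makes the encoding efficient for sparse graphs.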
2501.01085 | Noise-Resilient Symbolic Regression with Dynamic Gating Reinforcement
Learning | cs.LG | Symbolic regression (SR) has emerged as a pivotal technique for uncovering
the intrinsic information within data and enhancing the interpretability of AI
models. However, current state-of-the-art (SOTA) SR methods struggle to
correctly recover symbolic expressions from high-noise data. To address this
issue, we introduce a novel noise-resilient SR (NRSR) method capable of
recovering expressions from high-noise data. Our method leverages a novel
reinforcement learning (RL) approach in conjunction with a designed
noise-resilient gating module (NGM) to learn symbolic selection policies. The
gating module can dynamically filter the meaningless information from
high-noise data, thereby demonstrating a high noise-resilient capability for
the SR process. We also design a mixed path entropy (MPE) bonus term in the
RL process to increase the exploration capability of the policy. Experimental
results demonstrate that our method significantly outperforms several popular
baselines on benchmarks with high-noise data. Furthermore, our method can also
achieve SOTA performance on benchmarks with clean data, showcasing its
robustness and efficacy in SR tasks.
|
2501.01087 | Bridging Simplicity and Sophistication using GLinear: A Novel
Architecture for Enhanced Time Series Prediction | cs.LG cs.CV cs.ET | Time Series Forecasting (TSF) is an important application across many fields.
There is a debate about whether Transformers, despite being good at
understanding long sequences, struggle with preserving temporal relationships
in time series data. Recent research suggests that simpler linear models might
outperform or at least provide competitive performance compared to complex
Transformer-based models for TSF tasks. In this paper, we propose a novel
data-efficient architecture, GLinear, for multivariate TSF that exploits
periodic patterns to improve accuracy while requiring less historical data
than other state-of-the-art linear predictors. Four different datasets (ETTh1,
Electricity, Traffic, and Weather) are used to evaluate the performance of the
proposed predictor. A performance comparison with state-of-the-art linear
architectures (such as NLinear, DLinear, and RLinear) and transformer-based
time series predictor (Autoformer) shows that the GLinear, despite being
parametrically efficient, significantly outperforms the existing architectures
in most cases of multivariate TSF. We hope that GLinear opens new fronts in
the research and development of simple yet sophisticated architectures for
data- and computationally efficient time-series analysis.
|
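The linear-predictor family compared against can be sketched concisely; the following is an NLinear-style baseline (one of the architectures the abstract names), fitted by least squares, and does not reproduce GLinear's own periodicity handling:

```python
import numpy as np

def fit_nlinear(series, lookback, horizon):
    """NLinear-style sketch: subtract each window's last value, fit one
    linear map from window to forecast, and add the last value back."""
    X, Y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        window = series[t : t + lookback]
        target = series[t + lookback : t + lookback + horizon]
        X.append(window - window[-1])      # normalize out the window's level
        Y.append(target - window[-1])
    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
    return W

def predict_nlinear(W, window):
    return (window - window[-1]) @ W + window[-1]

t = np.arange(200, dtype=float)
series = np.sin(0.1 * t) + 0.01 * t        # seasonality plus trend
W = fit_nlinear(series, lookback=24, horizon=8)
forecast = predict_nlinear(W, series[-24:])
```

The last-value normalization handles distribution shift from the trend, which is a large part of why such simple models stay competitive with Transformers.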
2501.01090 | HoneypotNet: Backdoor Attacks Against Model Extraction | cs.CR cs.CV | Model extraction attacks are one type of inference-time attacks that
approximate the functionality and performance of a black-box victim model by
launching a certain number of queries to the model and then leveraging the
model's predictions to train a substitute model. These attacks pose severe
security threats to production models and MLaaS platforms and could cause
significant monetary losses to the model owners. A body of work has proposed to
defend machine learning models against model extraction attacks, including both
active defense methods that modify the model's outputs or increase the query
overhead to avoid extraction and passive defense methods that detect malicious
queries or leverage watermarks to perform post-verification. In this work, we
introduce a new defense paradigm, called attack-as-defense, which modifies the
model's output to be poisonous such that any malicious users that attempt to
use the output to train a substitute model will be poisoned. To this end, we
propose a novel lightweight backdoor attack method dubbed HoneypotNet that
replaces the classification layer of the victim model with a honeypot layer and
then fine-tunes the honeypot layer with a shadow model (to simulate model
extraction) via bi-level optimization to make its output poisonous
while retaining the original performance. We empirically demonstrate on four
commonly used benchmark datasets that HoneypotNet can inject backdoors into
substitute models with a high success rate. The injected backdoor not only
facilitates ownership verification but also disrupts the functionality of
substitute models, serving as a significant deterrent to model extraction
attacks.
|
2501.01094 | MMVA: Multimodal Matching Based on Valence and Arousal across Images,
Music, and Musical Captions | cs.SD cs.AI cs.MM eess.AS | We introduce Multimodal Matching based on Valence and Arousal (MMVA), a
tri-modal encoder framework designed to capture emotional content across
images, music, and musical captions. To support this framework, we expand the
Image-Music-Emotion-Matching-Net (IMEMNet) dataset, creating IMEMNet-C which
includes 24,756 images and 25,944 music clips with corresponding musical
captions. We employ multimodal matching scores based on the continuous valence
(emotional positivity) and arousal (emotional intensity) values. This
continuous matching score allows for random sampling of image-music pairs
during training by computing similarity scores from the valence-arousal values
across different modalities. Consequently, the proposed approach achieves
state-of-the-art performance in valence-arousal prediction tasks. Furthermore,
the framework demonstrates its efficacy in various zero-shot tasks, highlighting
the potential of valence and arousal predictions in downstream applications.
|
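A continuous valence-arousal matching score can be sketched as a decreasing function of the distance between the two modalities' emotional coordinates; the exponential-of-negative-distance form below is an illustrative choice, not necessarily MMVA's:

```python
import numpy as np

def va_matching_score(va_a, va_b):
    """Continuous matching score from valence-arousal coordinates: closer
    emotional coordinates score nearer to 1."""
    return float(np.exp(-np.linalg.norm(np.asarray(va_a) - np.asarray(va_b))))

# Hypothetical coordinates in [-1, 1]^2: (valence, arousal).
happy_image = (0.8, 0.6)
upbeat_clip = (0.7, 0.5)
sad_clip = (-0.6, -0.4)
score_match = va_matching_score(happy_image, upbeat_clip)
score_mismatch = va_matching_score(happy_image, sad_clip)
```

Because the score is continuous rather than a hard pair label, any image can be sampled against any music clip during training, as the abstract describes.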
2501.01096 | Learning-Based Stable Optimal Guidance for Spacecraft Close-Proximity
Operations | eess.SY cs.SY math.OC | Machine learning techniques have demonstrated their effectiveness in
achieving autonomy and optimality for nonlinear and high-dimensional dynamical
systems. However, traditional black-box machine learning methods often lack
formal stability guarantees, which are critical for safety-sensitive aerospace
applications. This paper proposes a comprehensive framework that combines
control Lyapunov functions with supervised learning to provide certifiably
stable, time- and fuel-optimal guidance for rendezvous maneuvers governed by
Clohessy-Wiltshire dynamics. The framework is easily extensible to nonlinear
control-affine systems. A novel neural candidate Lyapunov function is developed
to ensure positive definiteness. Subsequently, a control policy is defined, in
which the thrust direction vector minimizes the Lyapunov function's time
derivative, and the thrust throttle is set to the minimum required value.
This approach ensures that all loss terms related to the control
Lyapunov function are either naturally satisfied or replaced by the derived
control policy. To jointly supervise the Lyapunov function and the control
policy, a simple loss function is introduced, leveraging optimal state-control
pairs obtained by a polynomial-map-based method. Consequently, the trained
neural network not only certifies the Lyapunov function but also generates a
near-optimal guidance policy, even for the bang-bang fuel-optimal problem.
Extensive numerical simulations are presented to validate the proposed method.
|
2501.01097 | EliGen: Entity-Level Controlled Image Generation with Regional Attention | cs.CV | Recent advancements in diffusion models have significantly advanced
text-to-image generation, yet global text prompts alone remain insufficient for
achieving fine-grained control over individual entities within an image. To
address this limitation, we present EliGen, a novel framework for Entity-level
controlled image Generation. Firstly, we put forward regional attention, a
mechanism for diffusion transformers that requires no additional parameters,
seamlessly integrating entity prompts and arbitrary-shaped spatial masks. By
contributing a high-quality dataset with fine-grained spatial and semantic
entity-level annotations, we train EliGen to achieve robust and accurate
entity-level manipulation, surpassing existing methods in both spatial
precision and image quality. Additionally, we propose an inpainting fusion
pipeline, extending its capabilities to multi-entity image inpainting tasks. We
further demonstrate its flexibility by integrating it with other open-source
models such as IP-Adapter, In-Context LoRA and MLLM, unlocking new creative
possibilities. The source code, model, and dataset are published at
https://github.com/modelscope/DiffSynth-Studio.git.
|
2501.01100 | Long-range Brain Graph Transformer | cs.LG | Understanding communication and information processing among brain regions of
interest (ROIs) is highly dependent on long-range connectivity, which plays a
crucial role in facilitating diverse functional neural integration across the
entire brain. However, previous studies generally focused on the short-range
dependencies within brain networks while neglecting the long-range
dependencies, limiting an integrated understanding of brain-wide communication.
To address this limitation, we propose Adaptive Long-range aware TransformER
(ALTER), a brain graph transformer to capture long-range dependencies between
brain ROIs utilizing biased random walk. Specifically, we present a novel
long-range aware strategy to explicitly capture long-range dependencies between
brain ROIs. By guiding the walker towards the next hop with higher correlation
value, our strategy simulates the real-world brain-wide communication.
Furthermore, by employing the transformer framework, ALTER adaptively
integrates both short- and long-range dependencies between brain ROIs, enabling
an integrated understanding of multi-level communication across the entire
brain. Extensive experiments on ABIDE and ADNI datasets demonstrate that ALTER
consistently outperforms generalized state-of-the-art graph learning methods
(including SAN, Graphormer, GraphTrans, and LRGNN) and other graph learning
based brain network analysis methods (including FBNETGEN, BrainNetGNN,
BrainGNN, and BrainNETTF) in neurological disease diagnosis. Cases of
long-range dependencies are also presented to further illustrate the
effectiveness of ALTER. The implementation is available at
https://github.com/yushuowiki/ALTER.
|
2501.01101 | Deformable Gaussian Splatting for Efficient and High-Fidelity
Reconstruction of Surgical Scenes | cs.CV | Efficient and high-fidelity reconstruction of deformable surgical scenes is a
critical yet challenging task. Building on recent advancements in 3D Gaussian
splatting, current methods have seen significant improvements in both
reconstruction quality and rendering speed. However, two major limitations
remain: (1) difficulty in handling irreversible dynamic changes, such as tissue
shearing, which are common in surgical scenes; and (2) the lack of hierarchical
modeling for surgical scene deformation, which reduces rendering speed. To
address these challenges, we introduce EH-SurGS, an efficient and high-fidelity
reconstruction algorithm for deformable surgical scenes. We propose a
deformation modeling approach that incorporates the life cycle of 3D Gaussians,
effectively capturing both regular and irreversible deformations, thus
enhancing reconstruction quality. Additionally, we present an adaptive motion
hierarchy strategy that distinguishes between static and deformable regions
within the surgical scene. This strategy reduces the number of 3D Gaussians
passing through the deformation field, thereby improving rendering speed.
Extensive experiments demonstrate that our method surpasses existing
state-of-the-art approaches in both reconstruction quality and rendering speed.
Ablation studies further validate the effectiveness and necessity of our
proposed components. We will open-source our code upon acceptance of the paper.
|
2501.01102 | Disambiguation of Chinese Polyphones in an End-to-End Framework with
Semantic Features Extracted by Pre-trained BERT | eess.AS cs.AI cs.SD | Grapheme-to-phoneme (G2P) conversion serves as an essential component in
Chinese Mandarin text-to-speech (TTS) system, where polyphone disambiguation is
the core issue. In this paper, we propose an end-to-end framework to predict
the pronunciation of a polyphonic character, which accepts a sentence containing
a polyphonic character as input in the form of a Chinese character sequence,
without the need for any preprocessing. The proposed method consists of a
pre-trained bidirectional encoder representations from Transformers (BERT)
model and a neural network (NN) based classifier. The pre-trained BERT model
extracts semantic features from a raw Chinese character sequence and the NN
based classifier predicts the polyphonic character's pronunciation according to
BERT output. In our experiments, we implemented three classifiers: a
fully-connected network based classifier, a long short-term memory (LSTM)
network based classifier and a Transformer block based classifier. The
experimental results, compared with the LSTM-based baseline approach,
demonstrate that the pre-trained model extracts effective semantic features,
which greatly enhances the performance of polyphone disambiguation. In
addition, we also explored the impact of contextual information on polyphone
disambiguation.
|
2501.01103 | Learning Discriminative Features from Spectrograms Using Center Loss for
Speech Emotion Recognition | eess.AS cs.AI cs.SD | Identifying the emotional state from speech is essential for the natural
interaction of the machine with the speaker. However, extracting effective
features for emotion recognition is difficult, as emotions are ambiguous. We
propose a novel approach to learning discriminative features from variable-length
spectrograms for emotion recognition by combining softmax cross-entropy loss
and center loss. The softmax cross-entropy loss makes features from
different emotion categories separable, and center loss efficiently pulls
features belonging to the same emotion category toward their center. By combining
the two losses together, the discriminative power will be highly enhanced,
which leads to network learning more effective features for emotion
recognition. As demonstrated by the experimental results, after introducing
center loss, both the unweighted accuracy and weighted accuracy are improved by
over 3\% on Mel-spectrogram input, and more than 4\% on Short Time Fourier
Transform spectrogram input.
|
2501.01105 | Temperature-Controlled Smart Charging for Electric Vehicles in Cold
Climates | eess.SY cs.SY | The battery performance and lifespan of electric vehicles (EVs) degrade
significantly in cold climates, requiring a considerable amount of energy to
heat up the EV batteries. This paper proposes a novel technology, namely
temperature-controlled smart charging, to coordinate the heating/charging power
and reduce the total energy use of a solar-powered EV charging station. Instead
of fixing the battery temperature setpoints, we analyze the thermal dynamics
and inertia of EV batteries, and determine the optimal timing and proper amount of
energy allocated for heating. In addition, a temperature-sensitive charging
model is formulated with consideration of dynamic charging rates as well as
battery health. We further tailor acceleration algorithms for large-scale EV
charging, including the reduced-order dual decomposition and vehicle
rescheduling. Simulation results demonstrate that the proposed
temperature-controlled smart charging is superior in capturing the flexibility
value of EV batteries and making full use of the rooftop solar energy. The
proposed model typically achieves a 12.5--18.4% reduction in the charging cost
and a 0.4--6.8% drop in the overhead energy use for heating.
|
2501.01106 | AIM: Additional Image Guided Generation of Transferable Adversarial
Attacks | cs.CV cs.LG | Transferable adversarial examples highlight the vulnerability of deep neural
networks (DNNs) to imperceptible perturbations across various real-world
applications. While there have been notable advancements in untargeted
transferable attacks, targeted transferable attacks remain a significant
challenge. In this work, we focus on generative approaches for targeted
transferable attacks. Current generative attacks focus on reducing overfitting
to surrogate models and the source data domain, but they often overlook the
importance of enhancing transferability through additional semantics. To
address this issue, we introduce a novel plug-and-play module into the general
generator architecture to enhance adversarial transferability. Specifically, we
propose a \emph{Semantic Injection Module} (SIM) that utilizes the semantics
contained in an additional guiding image to improve transferability. The
guiding image provides a simple yet effective method to incorporate target
semantics from the target class to create targeted and highly transferable
attacks. Additionally, we propose new loss formulations that can integrate the
semantic injection module more effectively for both targeted and untargeted
attacks. We conduct comprehensive experiments under both targeted and
untargeted attack settings to demonstrate the efficacy of our proposed
approach.
|
2501.01108 | MuQ: Self-Supervised Music Representation Learning with Mel Residual
Vector Quantization | cs.SD cs.AI cs.CL cs.LG eess.AS | Recent years have witnessed the success of foundation models pre-trained with
self-supervised learning (SSL) in various music informatics understanding
tasks, including music tagging, instrument classification, key detection, and
more. In this paper, we propose a self-supervised music representation learning
model for music understanding. In contrast to previous studies that adopt
random projection or existing neural codecs, the proposed model, named MuQ, is
trained to predict tokens generated by Mel Residual Vector Quantization
(Mel-RVQ). Our Mel-RVQ utilizes a residual linear projection structure for Mel
spectrum quantization to enhance the stability and efficiency of target
extraction, leading to better performance. Experiments on a large variety of
downstream tasks demonstrate that MuQ outperforms previous self-supervised
music representation models with only 0.9K hours of open-source pre-training
data. Scaling up the data to over 160K hours and adopting iterative training
consistently improve the model performance. To further validate the strength of
our model, we present MuQ-MuLan, a joint music-text embedding model based on
contrastive learning, which achieves state-of-the-art performance in the
zero-shot music tagging task on the MagnaTagATune dataset. Code and checkpoints
are open-sourced at https://github.com/tencent-ailab/MuQ.
|
2501.01109 | BatStyler: Advancing Multi-category Style Generation for Source-free
Domain Generalization | cs.CV cs.AI | Source-Free Domain Generalization (SFDG) aims to develop a model that
performs on unseen domains without relying on any source domains. However, the
implementation remains constrained due to the unavailability of training data.
Research on SFDG focuses on knowledge transfer of multi-modal models and style
synthesis based on the joint space of multiple modalities, thus eliminating the
dependency on source-domain images. However, existing works primarily target
multi-domain, few-category configurations; performance on multi-domain,
multi-category configurations is relatively poor. In addition, the
efficiency of style synthesis also deteriorates in multi-category scenarios.
How to efficiently synthesize sufficiently diverse data and apply it to
multi-category configuration is a direction with greater practical value. In
this paper, we propose a method called BatStyler, which is utilized to improve
the capability of style synthesis in multi-category scenarios. BatStyler
consists of two modules: Coarse Semantic Generation and Uniform Style
Generation modules. The Coarse Semantic Generation module extracts
coarse-grained semantics to prevent the compression of space for style
diversity learning in multi-category configuration, while the Uniform Style
Generation module provides a template of styles that are uniformly distributed
in space and implements parallel training. Extensive experiments demonstrate
that our method exhibits comparable performance on few-category datasets,
while surpassing state-of-the-art methods on multi-category datasets.
|
2501.01110 | MalCL: Leveraging GAN-Based Generative Replay to Combat Catastrophic
Forgetting in Malware Classification | cs.CR cs.AI | Continual Learning (CL) for malware classification tackles the rapidly
evolving nature of malware threats and the frequent emergence of new types.
Generative Replay (GR)-based CL systems utilize a generative model to produce
synthetic versions of past data, which are then combined with new data to
retrain the primary model. Traditional machine learning techniques in this
domain often struggle with catastrophic forgetting, where a model's performance
on old data degrades over time.
In this paper, we introduce a GR-based CL system that employs Generative
Adversarial Networks (GANs) with feature matching loss to generate high-quality
malware samples. Additionally, we implement innovative selection schemes for
replay samples based on the model's hidden representations.
Our comprehensive evaluation across Windows and Android malware datasets in a
class-incremental learning scenario -- where new classes are introduced
continuously over multiple tasks -- demonstrates substantial performance
improvements over previous methods. For example, our system achieves an average
accuracy of 55% on Windows malware samples, significantly outperforming other
GR-based models by 28%. This study provides practical insights for advancing
GR-based malware classification systems. The implementation is available at
\url{https://github.com/MalwareReplayGAN/MalCL}\footnote{The code will be made
public upon the presentation of the paper}.
|
2501.01111 | Regularized Proportional Fairness Mechanism for Resource Allocation
Without Money | cs.GT cs.LG | Mechanism design in resource allocation studies dividing limited resources
among self-interested agents whose satisfaction with the allocation depends on
privately held utilities. We consider the problem in a payment-free setting,
with the aim of maximizing social welfare while enforcing incentive
compatibility (IC), i.e., agents cannot inflate allocations by misreporting
their utilities. The well-known proportional fairness (PF) mechanism achieves
the maximum possible social welfare but incurs an undesirably high
exploitability (the maximum unilateral inflation in utility from misreporting,
a measure of deviation from IC). In fact, it is known that no mechanism can
achieve the maximum social welfare and exact incentive compatibility
simultaneously without the use of monetary incentives (Cole et al., 2013).
Motivated by this fact, we propose learning an approximate mechanism that
desirably trades off the competing objectives. Our main contribution is to
design an innovative neural network architecture tailored to the resource
allocation problem, which we name Regularized Proportional Fairness Network
(RPF-Net). RPF-Net regularizes the output of the PF mechanism by a learned
function approximator of the most exploitable allocation, with the aim of
reducing the incentive for any agent to misreport. We derive generalization
bounds that guarantee the mechanism performance when trained under finite and
out-of-distribution samples and experimentally demonstrate the merits of the
proposed mechanism compared to the state-of-the-art.
|
2501.01114 | Generalized Task-Driven Medical Image Quality Enhancement with Gradient
Promotion | cs.CV | Thanks to the recent achievements in task-driven image quality enhancement
(IQE) models like ESTR, the image enhancement model and the visual recognition
model can mutually enhance each other's quantitative performance while producing
high-quality processed images that are perceivable by our human vision systems.
However, existing task-driven IQE models tend to overlook an underlying fact --
different levels of vision tasks have varying and sometimes conflicting
requirements of image features. To address this problem, this paper proposes a
generalized gradient promotion (GradProm) training strategy for task-driven IQE
of medical images. Specifically, we partition a task-driven IQE system into two
sub-models, i.e., a mainstream model for image enhancement and an auxiliary
model for visual recognition. During training, GradProm updates only parameters
of the image enhancement model using gradients of the visual recognition model
and the image enhancement model, but only when gradients of these two
sub-models are aligned in the same direction, which is measured by their cosine
similarity. When the gradients of the two sub-models are not aligned,
GradProm uses only the gradient of the image enhancement model to
update its parameters. Theoretically, we have proved that the optimization
direction of the image enhancement model will not be biased by the auxiliary
visual recognition model under the implementation of GradProm. Empirically,
extensive experimental results on four public yet challenging medical image
datasets demonstrated the superior performance of GradProm over existing
state-of-the-art methods.
|
2501.01115 | Co-Design of a Robot Controller Board and Indoor Positioning System for
IoT-Enabled Applications | cs.RO cs.SY eess.SY | This paper describes the development of a cost-effective yet precise indoor
robot navigation system composed of a custom robot controller board and an
indoor positioning system. First, the proposed robot controller board has been
specially designed for emerging IoT-based robot applications and is capable of
driving two 6-Amp motor channels. The controller board also embeds an on-board
micro-controller with Wi-Fi connectivity, enabling robot-to-server
communications for IoT applications. Then, working together with the robot
controller board, the proposed positioning system detects the robot's location
using a down-looking webcam and uses the robot's position on the webcam images
to estimate the real-world position of the robot in the environment. The
positioning system can then send commands via Wi-Fi to the robot in order to
steer it to any arbitrary location in the environment. Our experiments show
that the proposed system reaches a navigation error smaller than or equal to
0.125 meters while being more than two orders of magnitude more cost-effective
than off-the-shelf motion capture (MOCAP) positioning systems.
|
2501.01116 | HarmonyIQA: Pioneering Benchmark and Model for Image Harmonization
Quality Assessment | cs.CV cs.MM | Image composition involves extracting a foreground object from one image and
pasting it into another image through image harmonization algorithms (IHAs),
which aim to adjust the appearance of the foreground object to better match the
background. Existing image quality assessment (IQA) methods may fail to align
with human visual preference on image harmonization due to the insensitivity to
minor color or light inconsistency. To address the issue and facilitate the
advancement of IHAs, we introduce the first Image Quality Assessment Database
for image Harmony evaluation (HarmonyIQAD), which consists of 1,350 harmonized
images generated by 9 different IHAs, and the corresponding human visual
preference scores. Based on this database, we propose a Harmony Image Quality
Assessment (HarmonyIQA), to predict human visual preference for harmonized
images. Extensive experiments show that HarmonyIQA achieves state-of-the-art
performance on human visual preference evaluation for harmonized images, and
also achieves competitive results on traditional IQA tasks. Furthermore,
cross-dataset evaluation also shows that HarmonyIQA exhibits better
generalization ability than self-supervised learning-based IQA methods. Both
HarmonyIQAD and HarmonyIQA will be made publicly available upon paper
publication.
|
2501.01117 | Robust COVID-19 Detection from Cough Sounds using Deep Neural Decision
Tree and Forest: A Comprehensive Cross-Datasets Evaluation | cs.SD cs.AI cs.LG eess.AS | This research presents a robust approach to classifying COVID-19 cough sounds
using cutting-edge machine-learning techniques. Leveraging deep neural decision
trees and deep neural decision forests, our methodology demonstrates consistent
performance across diverse cough sound datasets. We begin with a comprehensive
extraction of features to capture a wide range of audio features from
individuals, whether COVID-19 positive or negative. To determine the most
important features, we use recursive feature elimination along with
cross-validation. Bayesian optimization fine-tunes hyper-parameters of deep
neural decision tree and deep neural decision forest models. Additionally, we
integrate SMOTE during training to ensure a balanced representation of
positive and negative data. Model performance refinement is achieved through
threshold optimization, maximizing the ROC-AUC score. Our approach undergoes a
comprehensive evaluation in five datasets: Cambridge, Coswara, COUGHVID,
Virufy, and the combined Virufy with the NoCoCoDa dataset. Consistently
outperforming state-of-the-art methods, our proposed approach yields notable
AUC scores of 0.97, 0.98, 0.92, 0.93, 0.99, and 0.99 across the respective
datasets. Merging all datasets into a combined dataset, our method, using a
deep neural decision forest classifier, achieves an AUC of 0.97. Also, our
study includes a comprehensive cross-datasets analysis, revealing demographic
and geographic differences in the cough sounds associated with COVID-19. These
differences highlight the challenges in transferring learned features across
diverse datasets and underscore the potential benefits of dataset integration,
improving generalizability and enhancing COVID-19 detection from audio signals.
|
2501.01118 | Pruning-based Data Selection and Network Fusion for Efficient Deep
Learning | cs.LG cs.AI | Efficient data selection is essential for improving the training efficiency
of deep neural networks and reducing the associated annotation costs. However,
traditional methods tend to be computationally expensive, limiting their
scalability and real-world applicability. We introduce PruneFuse, a novel
method that combines pruning and network fusion to enhance data selection and
accelerate network training. In PruneFuse, the original dense network is pruned
to generate a smaller surrogate model that efficiently selects the most
informative samples from the dataset. Once this iterative data selection
process has gathered sufficient samples, the insights learned from the pruned model are
seamlessly integrated with the dense model through network fusion, providing an
optimized initialization that accelerates training. Extensive experimentation
on various datasets demonstrates that PruneFuse significantly reduces
computational costs for data selection, achieves better performance than
baselines, and accelerates the overall training process.
|
2501.01119 | Leverage Cross-Attention for End-to-End Open-Vocabulary Panoptic
Reconstruction | cs.CV cs.RO | Open-vocabulary panoptic reconstruction offers comprehensive scene
understanding, enabling advances in embodied robotics and photorealistic
simulation. In this paper, we propose PanopticRecon++, an end-to-end method
that formulates panoptic reconstruction through a novel cross-attention
perspective. This perspective models the relationship between 3D instances (as
queries) and the scene's 3D embedding field (as keys) through their attention
map. Unlike existing methods that separate the optimization of queries and keys
or overlook spatial proximity, PanopticRecon++ introduces learnable 3D
Gaussians as instance queries. This formulation injects 3D spatial priors to
preserve proximity while maintaining end-to-end optimizability. Moreover, this
query formulation facilitates the alignment of 2D open-vocabulary instance IDs
across frames by leveraging optimal linear assignment with instance masks
rendered from the queries. Additionally, we ensure semantic-instance
segmentation consistency by fusing query-based instance segmentation
probabilities with semantic probabilities in a novel panoptic head supervised
by a panoptic loss. During training, the number of instance query tokens
dynamically adapts to match the number of objects. PanopticRecon++ shows
competitive performance in terms of 3D and 2D segmentation and reconstruction
performance on both simulation and real-world datasets, and demonstrates a user
case as a robot simulator. Our project website is at:
https://yuxuan1206.github.io/panopticrecon_pp/
|
2501.01120 | Retrieval-Augmented Dynamic Prompt Tuning for Incomplete Multimodal
Learning | cs.CV cs.AI | Multimodal learning with incomplete modality is practical and challenging.
Recently, researchers have focused on enhancing the robustness of pre-trained
MultiModal Transformers (MMTs) under missing modality conditions by applying
learnable prompts. However, these prompt-based methods face several
limitations: (1) incomplete modalities provide restricted modal cues for
task-specific inference, (2) dummy imputation for missing content causes
information loss and introduces noise, and (3) static prompts are
instance-agnostic, offering limited knowledge for instances with various
missing conditions. To address these issues, we propose RAGPT, a novel
Retrieval-AuGmented dynamic Prompt Tuning framework. RAGPT comprises three
modules: (I) the multi-channel retriever, which identifies similar instances
through a within-modality retrieval strategy, (II) the missing modality
generator, which recovers missing information using retrieved contexts, and
(III) the context-aware prompter, which captures contextual knowledge from
relevant instances and generates dynamic prompts to largely enhance the MMT's
robustness. Extensive experiments conducted on three real-world datasets show
that RAGPT consistently outperforms all competitive baselines in handling
incomplete modality problems. The code of our work and prompt-based baselines
is available at https://github.com/Jian-Lang/RAGPT.
|
2501.01121 | PatchRefiner V2: Fast and Lightweight Real-Domain High-Resolution Metric
Depth Estimation | cs.CV | While current high-resolution depth estimation methods achieve strong
results, they often suffer from computational inefficiencies due to reliance on
heavyweight models and multiple inference steps, increasing inference time. To
address this, we introduce PatchRefiner V2 (PRV2), which replaces heavy refiner
models with lightweight encoders. This reduces model size and inference time
but introduces noisy features. To overcome this, we propose a Coarse-to-Fine
(C2F) module with a Guided Denoising Unit for refining and denoising the
refiner features and a Noisy Pretraining strategy to pretrain the refiner
branch to fully exploit the potential of the lightweight refiner branch.
Additionally, we introduce a Scale-and-Shift Invariant Gradient Matching
(SSIGM) loss to enhance synthetic-to-real domain transfer. PRV2 outperforms
state-of-the-art depth estimation methods on UnrealStereo4K in both accuracy
and speed, using fewer parameters and faster inference. It also shows improved
depth boundary delineation on real-world datasets like CityScape, ScanNet++,
and KITTI, demonstrating its versatility across domains.
|
2501.01123 | TED: Turn Emphasis with Dialogue Feature Attention for Emotion
Recognition in Conversation | cs.CL cs.AI cs.LG | Emotion recognition in conversation (ERC) has been attracting attention through
methods for modeling multi-turn contexts. Multi-turn input to a pretrained
model implicitly assumes that the current turn and other turns are
distinguished during the training process by inserting special tokens into the
input sequence. This paper proposes a priority-based attention method to
distinguish each turn explicitly by adding dialogue features into the attention
mechanism, called Turn Emphasis with Dialogue (TED). It assigns a priority to each
turn according to turn position and speaker information as dialogue features.
It takes multi-head self-attention between turn-based vectors for multi-turn
input and adjusts attention scores with the dialogue features. We evaluate TED
on four typical benchmarks. The experimental results demonstrate that TED
achieves high overall performance on all datasets and state-of-the-art
performance on IEMOCAP with numerous turns.
|
2501.01124 | Graph2text or Graph2token: A Perspective of Large Language Models for
Graph Learning | cs.LG | Graphs are data structures used to represent irregular networks and are
prevalent in numerous real-world applications. Previous methods directly model
graph structures and achieve significant success. However, these methods
encounter bottlenecks due to the inherent irregularity of graphs. An innovative
solution is converting graphs into textual representations, thereby harnessing
the powerful capabilities of Large Language Models (LLMs) to process and
comprehend graphs. In this paper, we present a comprehensive review of
methodologies for applying LLMs to graphs, termed LLM4graph. The core of
LLM4graph lies in transforming graphs into texts for LLMs to understand and
analyze. Thus, we propose a novel taxonomy of LLM4graph methods in the view of
the transformation. Specifically, existing methods can be divided into two
paradigms: Graph2text and Graph2token, which transform graphs into texts or
tokens as the input of LLMs, respectively. We point out four challenges during
the transformation to systematically present existing methods in a
problem-oriented perspective. For practical concerns, we provide a guideline
for researchers on selecting appropriate models and LLMs for different graphs
and hardware constraints. We also identify five future research directions for
LLM4graph.
|
2501.01125 | DuMo: Dual Encoder Modulation Network for Precise Concept Erasure | cs.CV | The exceptional generative capability of text-to-image models has raised
substantial safety concerns regarding the generation of Not-Safe-For-Work
(NSFW) content and potential copyright infringement. To address these concerns,
previous methods safeguard the models by eliminating inappropriate concepts.
Nonetheless, these methods alter the parameters of the backbone network and
exert considerable influence on the structural (low-frequency) components of
the image, which undermines the model's ability to retain non-target concepts.
In this work, we propose our Dual encoder Modulation network (DuMo), which
achieves precise erasure of inappropriate target concepts with minimum
impairment to non-target concepts. In contrast to previous methods, DuMo
employs the Eraser with PRior Knowledge (EPR) module which modifies the skip
connection features of the U-NET and primarily achieves concept erasure on
details (high-frequency) components of the image. To minimize the damage to
non-target concepts during erasure, the parameters of the backbone U-NET are
frozen and the prior knowledge from the original skip connection features is
introduced to the erasure process. Meanwhile, the phenomenon is observed that
distinct erasing preferences for the image structure and details are
demonstrated by the EPR at different timesteps and layers. Therefore, we adopt
a novel Time-Layer MOdulation process (TLMO) that adjusts the erasure scale of
EPR module's outputs across different layers and timesteps, automatically
balancing the erasure effects and model's generative ability. Our method
achieves state-of-the-art performance on Explicit Content Erasure, Cartoon
Concept Removal and Artistic Style Erasure, clearly outperforming alternative
methods. Code is available at https://github.com/Maplebb/DuMo
|
2501.01126 | Source-free Semantic Regularization Learning for Semi-supervised Domain
Adaptation | cs.CV | Semi-supervised domain adaptation (SSDA) has been extensively researched due
to its ability to improve classification performance and generalization ability
of models by using a small amount of labeled data on the target domain.
However, existing methods cannot effectively adapt to the target domain due to
difficulty in fully learning rich and complex target semantic information and
relationships. In this paper, we propose a novel SSDA learning framework called
semantic regularization learning (SERL), which captures the target semantic
information from multiple perspectives of regularization learning to achieve
adaptive fine-tuning of the source pre-trained model on the target domain. SERL
includes three robust semantic regularization techniques. Firstly, semantic
probability contrastive regularization (SPCR) helps the model learn more
discriminative feature representations from a probabilistic perspective, using
semantic information on the target domain to understand the similarities and
differences between samples. Additionally, adaptive weights in SPCR can help
the model learn the semantic distribution correctly through the probabilities
of different samples. To further comprehensively understand the target semantic
distribution, we introduce hard-sample mixup regularization (HMR), which uses
easy samples as guidance to mine the latent target knowledge contained in hard
samples, thereby learning more complete and complex target semantic knowledge.
Finally, target prediction regularization (TPR) regularizes the target
predictions of the model by maximizing the correlation between the current
prediction and the past learned objective, thereby mitigating the misleading
semantic information introduced by erroneous pseudo-labels. Extensive experiments
on three benchmark datasets demonstrate that our SERL method achieves
state-of-the-art performance.
|
2501.01127 | InDeed: Interpretable image deep decomposition with guaranteed
generalizability | cs.CV | Image decomposition aims to analyze an image into elementary components,
which is essential for numerous downstream tasks and also by nature provides
certain interpretability to the analysis. Deep learning can be powerful for
such tasks, but surprisingly their combination with a focus on interpretability
and generalizability is rarely explored. In this work, we introduce a novel
framework for interpretable deep image decomposition, combining hierarchical
Bayesian modeling and deep learning to create an architecture-modularized and
model-generalizable deep neural network (DNN). The proposed framework includes
three steps: (1) hierarchical Bayesian modeling of image decomposition, (2)
transforming the inference problem into optimization tasks, and (3) deep
inference via a modularized Bayesian DNN. We further establish a theoretical
connection between the loss function and the generalization error bound, which
inspires a new test-time adaptation approach for out-of-distribution scenarios.
We instantiated the framework on two downstream tasks, \textit{i.e.},
image denoising and unsupervised anomaly detection, and the results
demonstrated improved generalizability as well as interpretability of our
methods. The source code will be released upon the acceptance of this paper.
|
2501.01130 | An Inclusive Theoretical Framework of Robust Supervised Contrastive Loss
against Label Noise | cs.LG | Learning from noisy labels is a critical challenge in machine learning, with
vast implications for numerous real-world scenarios. While supervised
contrastive learning has recently emerged as a powerful tool for navigating
label noise, many existing solutions remain heuristic, often devoid of a
systematic theoretical foundation for crafting robust supervised contrastive
losses. To address the gap, in this paper, we propose a unified theoretical
framework for robust losses under the pairwise contrastive paradigm. In
particular, we for the first time derive a general robust condition for
arbitrary contrastive losses, which serves as a criterion to verify the
theoretical robustness of a supervised contrastive loss against label noise.
The theory indicates that the popular InfoNCE loss is in fact non-robust, and
accordingly inspires us to develop a robust version of InfoNCE, termed
Symmetric InfoNCE (SymNCE). Moreover, we highlight that our theory is an
inclusive framework that provides explanations to prior robust techniques such
as nearest-neighbor (NN) sample selection and robust contrastive loss.
Validation experiments on benchmark datasets demonstrate the superiority of
SymNCE against label noise.
|
2501.01132 | Missing Data as Augmentation in the Earth Observation Domain: A
Multi-View Learning Approach | cs.LG cs.AI cs.CV | Multi-view learning (MVL) leverages multiple sources or views of data to
enhance machine learning model performance and robustness. This approach has
been successfully used in the Earth Observation (EO) domain, where views have a
heterogeneous nature and can be affected by missing data. Despite the negative
effect that missing data has on model predictions, the ML literature has used
it as an augmentation technique to improve model generalization, like masking
the input data. Inspired by this, we introduce novel methods for EO
applications tailored to MVL with missing views. Our methods simulate all
combinations of missing views as different training samples. Instead of
replacing missing data with a numerical value, we use dynamic merge functions,
such as averaging, and more complex ones, such as a Transformer. This allows
the MVL model to entirely ignore the missing views,
enhancing its predictive robustness. We experiment on four EO datasets with
temporal and static views, including state-of-the-art methods from the EO
domain. The results indicate that our methods improve model robustness under
conditions of moderate missingness, and improve the predictive performance when
all views are present. The proposed methods offer a single adaptive solution to
operate effectively with any combination of available views.
|
2501.01136 | Symmetries-enhanced Multi-Agent Reinforcement Learning | cs.RO cs.AI cs.LG cs.MA math.RT | Multi-agent reinforcement learning has emerged as a powerful framework for
enabling agents to learn complex, coordinated behaviors but faces persistent
challenges regarding its generalization, scalability and sample efficiency.
Recent advancements have sought to alleviate those issues by embedding
intrinsic symmetries of the systems in the policy. Yet, most dynamical systems
exhibit few or no symmetries to exploit. This paper presents a novel
framework for embedding extrinsic symmetries in multi-agent system dynamics
that enables the use of symmetry-enhanced methods to address systems with
insufficient intrinsic symmetries, expanding the scope of equivariant learning
to a wide variety of MARL problems. Central to our framework is the Group
Equivariant Graphormer, a group-modular architecture specifically designed for
distributed swarming tasks. Extensive experiments on a swarm of
symmetry-breaking quadrotors validate the effectiveness of our approach,
showcasing its potential for improved generalization and zero-shot scalability.
Our method achieves significant reductions in collision rates and enhances task
success rates across a diverse range of scenarios and varying swarm sizes.
|
2501.01138 | Semantics-Guided Diffusion for Deep Joint Source-Channel Coding in
Wireless Image Transmission | cs.IT eess.SP math.IT | Joint source-channel coding (JSCC) offers a promising avenue for enhancing
transmission efficiency by jointly incorporating source and channel statistics
into the system design. A key advancement in this area is the deep joint source
and channel coding (DeepJSCC) technique that designs a direct mapping of input
signals to channel symbols parameterized by a neural network, which can be
trained for arbitrary channel models and semantic quality metrics. This paper
advances the DeepJSCC framework toward a semantics-aligned, high-fidelity
transmission approach, called semantics-guided diffusion DeepJSCC (SGD-JSCC).
Existing schemes that integrate diffusion models (DMs) with JSCC face
challenges in transforming random generation into accurate reconstruction and
adapting to varying channel conditions. SGD-JSCC incorporates two key
innovations: (1) utilizing some inherent information that contributes to the
semantics of an image, such as text description or edge map, to guide the
diffusion denoising process; and (2) enabling seamless adaptability to varying
channel conditions with the help of a semantics-guided DM for channel
denoising. The DM is guided by diverse semantic information and integrates
seamlessly with DeepJSCC. In a slow fading channel, SGD-JSCC dynamically adapts
to the instantaneous signal-to-noise ratio (SNR) directly estimated from the
channel output, thereby eliminating the need for additional pilot transmissions
for channel estimation. In a fast fading channel, we introduce a training-free
denoising strategy, allowing SGD-JSCC to effectively adjust to fluctuations in
channel gains. Numerical results demonstrate that, guided by semantic
information and leveraging the powerful DM, our method outperforms existing
DeepJSCC schemes, delivering satisfactory reconstruction performance even
under extremely poor channel conditions.
|
2501.01140 | Communicating Unexpectedness for Out-of-Distribution Multi-Agent
Reinforcement Learning | cs.MA | Applying multi-agent reinforcement learning methods to realistic settings is
challenging as it may require the agents to quickly adapt to unexpected
situations that are rarely or never encountered in training. Recent methods for
generalization to such out-of-distribution settings are limited to more
specific, restricted instances of distribution shifts. To tackle adaptation to
distribution shifts, we propose Unexpected Encoding Scheme, a novel
decentralized multi-agent reinforcement learning algorithm where agents
communicate "unexpectedness," the aspects of the environment that are
surprising. In addition to a message yielded by the original reward-driven
communication, each agent predicts the next observation based on previous
experience, measures the discrepancy between the prediction and the actually
encountered observation, and encodes this discrepancy as a message. Experiments
in a multi-robot warehouse environment show that our proposed method adapts
robustly to dynamically changing training environments as well as
out-of-distribution environments.
|
2501.01142 | Adaptive Hardness-driven Augmentation and Alignment Strategies for
Multi-Source Domain Adaptations | cs.CV | Multi-source Domain Adaptation (MDA) aims to transfer knowledge from multiple
labeled source domains to an unlabeled target domain. Nevertheless, traditional
methods primarily focus on achieving inter-domain alignment through
sample-level constraints, such as Maximum Mean Discrepancy (MMD), neglecting
three pivotal aspects: 1) the potential of data augmentation, 2) the
significance of intra-domain alignment, and 3) the design of cluster-level
constraints. In this paper, we introduce a novel hardness-driven strategy for
MDA tasks, named "A3MDA", which collectively considers these three aspects
through Adaptive hardness quantification and utilization in both data
Augmentation and domain Alignment. To achieve this, "A3MDA" progressively
introduces three Adaptive Hardness Measurements (AHMs), i.e., Basic, Smooth, and
Comparative AHMs, each incorporating distinct mechanisms for diverse scenarios.
Specifically, Basic AHM aims to gauge the instantaneous hardness for each
source/target sample. Then, hardness values measured by Smooth AHM will
adaptively adjust the intensity level of strong data augmentation to maintain
compatibility with the model's generalization capacity. In contrast, Comparative
AHM is designed to facilitate cluster-level constraints. By leveraging hardness
values as sample-specific weights, the traditional MMD is enhanced into a
weighted-clustered variant, strengthening the robustness and precision of
inter-domain alignment. As for the often-neglected intra-domain alignment, we
adaptively construct a pseudo-contrastive matrix by selecting harder samples
based on the hardness rankings, enhancing the quality of pseudo-labels, and
shaping a well-clustered target feature space. Experiments on multiple MDA
benchmarks show that "A3MDA" outperforms other methods.
|
2501.01144 | BlockDialect: Block-wise Fine-grained Mixed Format Quantization for
Energy-Efficient LLM Inference | cs.CL cs.LG | The rapidly increasing size of large language models (LLMs) presents
significant challenges in memory usage and computational costs. Quantizing both
weights and activations can address these issues, with hardware-supported
fine-grained scaling emerging as a promising solution to mitigate outliers.
However, existing methods struggle to capture nuanced block data distributions.
We propose BlockDialect, a block-wise fine-grained mixed format technique that
assigns a per-block optimal number format from a formatbook for better data
representation. Additionally, we introduce DialectFP4, a formatbook of FP4
variants (akin to dialects) that adapt to diverse data distributions. To
leverage this efficiently, we propose a two-stage approach for online
DialectFP4 activation quantization. Importantly, DialectFP4 ensures energy
efficiency by selecting representable values as scaled integers compatible with
low-precision integer arithmetic. BlockDialect achieves 10.78% (7.48%) accuracy
gain on the LLaMA3-8B (LLaMA2-7B) model compared to MXFP4 format with lower bit
usage per data, while being only 5.45% (2.69%) below full precision even when
quantizing full-path matrix multiplication. By focusing on how to represent
rather than how to scale, our work presents a promising path for
energy-efficient LLM inference.
|
2501.01148 | Adaptive posterior distributions for uncertainty analysis of covariance
matrices in Bayesian inversion problems for multioutput signals | stat.CO cs.CE stat.ML | In this paper we address the problem of performing Bayesian inference for the
parameters of a nonlinear multi-output model and the covariance matrix of the
different output signals. We propose an adaptive importance sampling (AIS)
scheme for multivariate Bayesian inversion problems, which is based on two main
ideas: the variables of interest are split into two blocks and the inference
takes advantage of known analytical optimization formulas. We estimate both the
unknown parameters of the multivariate non-linear model and the covariance
matrix of the noise. In the first part of the proposed inference scheme, a
novel AIS technique called adaptive target adaptive importance sampling (ATAIS)
is designed, which alternates iteratively between an IS technique over the
parameters of the non-linear model and a frequentist approach for the
covariance matrix of the noise. In the second part of the proposed inference
scheme, a prior density over the covariance matrix is considered and the cloud
of samples obtained by ATAIS are recycled and re-weighted to obtain a complete
Bayesian study over the model parameters and covariance matrix. ATAIS is the
main contribution of this work. Additionally, inverted layered importance
sampling (ILIS) is presented as a compelling alternative algorithm (based on a
conceptually simpler idea). Different numerical examples show the benefits of
the proposed approaches.
|
2501.01149 | A3: Android Agent Arena for Mobile GUI Agents | cs.AI | AI agents have become increasingly prevalent in recent years, driven by
significant advancements in the field of large language models (LLMs). Mobile
GUI agents, a subset of AI agents, are designed to autonomously perform tasks
on mobile devices. While numerous studies have introduced agents, datasets, and
benchmarks to advance mobile GUI agent research, many existing datasets focus
on static frame evaluations and fail to provide a comprehensive platform for
assessing performance on real-world, in-the-wild tasks. To address this gap, we
present Android Agent Arena (A3), a novel evaluation platform. Unlike existing
in-the-wild systems, A3 offers: (1) meaningful and practical tasks, such as
real-time online information retrieval and operational instructions; (2) a
larger, more flexible action space, enabling compatibility with agents trained
on any dataset; and (3) an automated, business-level, LLM-based evaluation process.
A3 includes 21 widely used general third-party apps and 201 tasks
representative of common user scenarios, providing a robust foundation for
evaluating mobile GUI agents in real-world situations and a new autonomous
evaluation process that requires less human labor and coding expertise. The
project is
available at https://yuxiangchai.github.io/Android-Agent-Arena/.
|
2501.01153 | Robot localization in a mapped environment using Adaptive Monte Carlo
algorithm | cs.RO | Localization is the challenge of determining the robot's pose in a mapped
environment. This is done by implementing a probabilistic algorithm to filter
noisy sensor measurements and track the robot's position and orientation. This
paper focuses on localizing a robot in a known mapped environment using the
Adaptive Monte Carlo Localization (particle filter) method and sending it to a
goal state. ROS, Gazebo and RViz were used as the tools of the trade to
simulate the environment and to program two robots to perform localization.
|
2501.01156 | TexAVi: Generating Stereoscopic VR Video Clips from Text Descriptions | cs.CV cs.AI cs.LG | While generative models such as text-to-image, large language models and
text-to-video have seen significant progress, the extension to
text-to-virtual-reality remains largely unexplored, due to a deficit in
training data and the complexity of achieving realistic depth and motion in
virtual environments. This paper proposes an approach to coalesce existing
generative systems to form a stereoscopic virtual reality video from text.
The pipeline is carried out in three main stages: we start with a base
text-to-image model that captures context from an input text. We then employ
Stable Diffusion on
the rudimentary image produced, to generate frames with enhanced realism and
overall quality. These frames are processed with depth estimation algorithms to
create left-eye and right-eye views, which are stitched side-by-side to create
an immersive viewing experience. Such systems would be highly beneficial in
virtual reality production, since filming and scene building often require
extensive hours of work and post-production effort.
We utilize image evaluation techniques, specifically Fr\'echet Inception
Distance and CLIP Score, to assess the visual quality of frames produced for
the video. These quantitative measures establish the proficiency of the
proposed method.
Our work highlights the exciting possibilities of using natural
language-driven graphics in fields like virtual reality simulations.
|
2501.01157 | Ultrasound Lung Aeration Map via Physics-Aware Neural Operators | eess.IV cs.LG physics.med-ph | Lung ultrasound is a growing modality in clinics for diagnosing and
monitoring acute and chronic lung diseases due to its low cost and
accessibility. Lung ultrasound works by emitting diagnostic pulses, receiving
pressure waves and converting them into radio frequency (RF) data, which are
then processed into B-mode images with beamformers for radiologists to
interpret. However, unlike conventional ultrasound for soft tissue anatomical
imaging, lung ultrasound interpretation is complicated by complex
reverberations from the pleural interface caused by the inability of ultrasound
to penetrate air. The indirect B-mode images make interpretation highly
dependent on reader expertise, requiring years of training, which limits its
widespread use despite its potential for high accuracy in skilled hands.
To address these challenges and democratize ultrasound lung imaging as a
reliable diagnostic tool, we propose LUNA, an AI model that directly
reconstructs lung aeration maps from RF data, bypassing the need for
traditional beamformers and indirect interpretation of B-mode images. LUNA uses
a Fourier neural operator, which processes RF data efficiently in Fourier
space, enabling accurate reconstruction of lung aeration maps. LUNA offers a
quantitative, reader-independent alternative to traditional semi-quantitative
lung ultrasound scoring methods. The development of LUNA involves synthetic and
real data: We simulate synthetic data with an experimentally validated approach
and scan ex vivo swine lungs as real data. Trained on abundant simulated data
and fine-tuned with a small amount of real-world data, LUNA achieves robust
performance, demonstrated by an aeration estimation error of 9% in ex-vivo lung
scans. We demonstrate the potential of reconstructing lung aeration maps from
RF data, providing a foundation for improving lung ultrasound reproducibility
and diagnostic utility.
|
2501.01158 | Attending To Syntactic Information In Biomedical Event Extraction Via
Graph Neural Networks | cs.CL | Many models are proposed in the literature on biomedical event
extraction(BEE). Some of them use the shortest dependency path(SDP) information
to represent the argument classification task. There is an issue with this
representation since even missing one word from the dependency parsing graph
may totally change the final prediction. To this end, the full adjacency matrix
of the dependency graph is used to embed individual tokens using a graph
convolutional network(GCN). An ablation study is also done to show the effect
of the dependency graph on the overall performance. The results show a
significant improvement when dependency graph information is used. The proposed
model slightly outperforms state-of-the-art models on BEE over different
datasets.
|
2501.01163 | 3D-LLaVA: Towards Generalist 3D LMMs with Omni Superpoint Transformer | cs.CV | Current 3D Large Multimodal Models (3D LMMs) have shown tremendous potential
in 3D-vision-based dialogue and reasoning. However, how to further enhance 3D
LMMs to achieve fine-grained scene understanding and facilitate flexible
human-agent interaction remains a challenging problem. In this work, we
introduce 3D-LLaVA, a simple yet highly powerful 3D LMM designed to act as an
intelligent assistant in comprehending, reasoning, and interacting with the 3D
world. Unlike existing top-performing methods that rely on complicated
pipelines, such as offline multi-view feature extraction or additional
task-specific heads, 3D-LLaVA adopts a minimalist design with an integrated
architecture and only takes point clouds as input. At the core of 3D-LLaVA is a
new Omni Superpoint Transformer (OST), which integrates three functionalities:
(1) a visual feature selector that converts and selects visual tokens, (2) a
visual prompt encoder that embeds interactive visual prompts into the visual
token space, and (3) a referring mask decoder that produces 3D masks based on
text description. This versatile OST is empowered by the hybrid pretraining to
obtain perception priors and leveraged as the visual connector that bridges the
3D data to the LLM. After performing unified instruction tuning, our 3D-LLaVA
reports impressive results on various benchmarks. The code and model will be
released to promote future exploration.
|
2501.01164 | Towards Interactive Deepfake Analysis | cs.CV | Existing deepfake analysis methods are primarily based on discriminative
models, which significantly limit their application scenarios. This paper aims
to explore interactive deepfake analysis by performing instruction tuning on
multi-modal large language models (MLLMs). This will face challenges such as
the lack of datasets and benchmarks, and low training efficiency. To address
these issues, we introduce (1) a GPT-assisted data construction process
resulting in an instruction-following dataset called DFA-Instruct, (2) a
benchmark named DFA-Bench, designed to comprehensively evaluate the
capabilities of MLLMs in deepfake detection, deepfake classification, and
artifact description, and (3) an interactive deepfake analysis system called
DFA-GPT, built with the Low-Rank Adaptation (LoRA) module, as a strong baseline
for the community. The dataset and code will be made available at
https://github.com/lxq1000/DFA-Instruct to facilitate further research.
|
2501.01166 | Deep Learning in Palmprint Recognition-A Comprehensive Survey | cs.CV cs.AI | Palmprint recognition has emerged as a prominent biometric technology, widely
applied in diverse scenarios. Traditional handcrafted methods for palmprint
recognition often fall short in representation capability, as they heavily
depend on researchers' prior knowledge. Deep learning (DL) has been introduced
to address this limitation, leveraging its remarkable successes across various
domains. While existing surveys focus narrowly on specific tasks within
palmprint recognition, often grounded in traditional methodologies, there remains
a significant gap in comprehensive research exploring DL-based approaches
across all facets of palmprint recognition. This paper bridges that gap by
thoroughly reviewing recent advancements in DL-powered palmprint recognition.
The paper systematically examines progress across key tasks, including
region-of-interest segmentation, feature extraction, and
security/privacy-oriented challenges. Beyond highlighting these advancements,
the paper identifies current challenges and uncovers promising opportunities
for future research. By consolidating state-of-the-art progress, this review
serves as a valuable resource for researchers, enabling them to stay abreast of
cutting-edge technologies and drive innovation in palmprint recognition.
|
2501.01168 | Blind Men and the Elephant: Diverse Perspectives on Gender Stereotypes
in Benchmark Datasets | cs.CL cs.AI | The multifaceted challenge of accurately measuring gender stereotypical bias
in language models is akin to discerning different segments of a broader,
unseen entity. This short paper primarily focuses on intrinsic bias mitigation
and measurement strategies for language models, building on prior research that
demonstrates a lack of correlation between intrinsic and extrinsic approaches.
We delve deeper into intrinsic measurements, identifying inconsistencies and
suggesting that these benchmarks may reflect different facets of gender
stereotype. Our methodology involves analyzing data distributions across
datasets and integrating gender stereotype components informed by social
psychology. By adjusting the distribution of two datasets, we achieve a better
alignment of outcomes. Our findings underscore the complexity of gender
stereotyping in language models and point to new directions for developing more
refined techniques to detect and reduce bias.
|
2501.01170 | Automated monitoring of bee colony movement in the hive during winter
season | eess.SY cs.NI cs.SY | In this study, we have experimentally modelled the movement of a bee colony
in a hive during the winter season and developed a monitoring system that
allows tracking the movement of the bee colony and honey consumption. The
monitoring system consists of four load cells connected to the RP2040
controller based on the Raspberry Pi Pico board, from which data is transmitted
via the MQTT protocol to the Raspberry Pi 5 microcomputer via a Wi-Fi network.
The processed data from the Raspberry Pi 5 is recorded in a MySQL database. The
algorithm for locating the bee colony in the hive works correctly: the
movement trajectory reconstructed from the sensor data matches the physical
movement in the experiment, which imitates the movement of a bee colony under
real conditions. The proposed monitoring system
provides continuous observation of the bee colony without adversely affecting
its natural activities and can be integrated with various wireless data
networks. This is a promising tool for improving the efficiency of beekeeping
and maintaining the health of bee colonies.
|
2501.01174 | L3D-Pose: Lifting Pose for 3D Avatars from a Single Camera in the Wild | cs.CV cs.AI | While 2D pose estimation has advanced our ability to interpret body movements
in animals and primates, it is limited by the lack of depth information,
constraining its application range. 3D pose estimation provides a more
comprehensive solution by incorporating spatial depth, yet creating extensive
3D pose datasets for animals is challenging due to their dynamic and
unpredictable behaviours in natural settings. To address this, we propose a
hybrid approach that utilizes rigged avatars and a pipeline for generating
synthetic datasets to acquire the necessary 3D annotations for training. Our
method introduces a simple attention-based MLP network for converting 2D poses
to 3D, designed to be independent of the input image to ensure scalability for
poses in natural environments. Additionally, we identify that existing
anatomical keypoint detectors are insufficient for accurate pose retargeting
onto arbitrary avatars. To overcome this, we present a lookup table based on a
deep pose estimation method, built from a synthetic collection of diverse
actions performed by rigged avatars. Our experiments demonstrate the
effectiveness and
efficiency of this lookup table-based retargeting approach. Overall, we propose
a comprehensive framework with systematically synthesized datasets for lifting
poses from 2D to 3D and then utilize this to re-target motion from wild
settings onto arbitrary avatars.
|
2501.01182 | RingFormer: A Neural Vocoder with Ring Attention and
Convolution-Augmented Transformer | cs.SD cs.LG eess.AS | While transformers demonstrate outstanding performance across various audio
tasks, their application to neural vocoders remains challenging. Neural
vocoders require the generation of long audio signals at the sample level,
which demands high temporal resolution. This results in significant
computational costs for attention map generation and limits their ability to
efficiently process both global and local information. Additionally, the
sequential nature of sample generation in neural vocoders poses difficulties
for real-time processing, making the direct adoption of transformers
impractical. To address these challenges, we propose RingFormer, a neural
vocoder that incorporates the ring attention mechanism into a lightweight
transformer variant, the convolution-augmented transformer (Conformer). Ring
attention effectively captures local details while integrating global
information, making it well-suited for processing long sequences and enabling
real-time audio generation. RingFormer is trained using adversarial training
with two discriminators. The proposed model is applied to the decoder of the
text-to-speech model VITS and compared with state-of-the-art vocoders such as
HiFi-GAN, iSTFT-Net, and BigVGAN under identical conditions using various
objective and subjective metrics. Experimental results show that RingFormer
achieves comparable or superior performance to existing models, particularly
excelling in real-time audio generation. Our code and audio samples are
available on GitHub.
|
2501.01183 | Machine Learning-Based Prediction of ICU Readmissions in Intracerebral
Hemorrhage Patients: Insights from the MIMIC Databases | cs.LG | Intracerebral hemorrhage (ICH) is a life-threatening condition characterized by
bleeding within the brain parenchyma. ICU readmission in ICH patients is a
critical outcome, reflecting both clinical severity and resource utilization.
Accurate prediction of ICU readmission risk is crucial for guiding clinical
decision-making and optimizing healthcare resources. This study utilized the
Medical Information Mart for Intensive Care (MIMIC-III and MIMIC-IV) databases,
which contain comprehensive clinical and demographic data on ICU patients.
Patients with ICH were identified from both databases. Various clinical,
laboratory, and demographic features were extracted for analysis based on both
a literature review and expert opinion. Preprocessing methods such as imputation
and sampling were applied to improve the performance of our models. Machine
learning techniques, such as Artificial Neural Network (ANN), XGBoost, and
Random Forest, were employed to develop predictive models for ICU readmission
risk. Model performance was evaluated using metrics such as AUROC, accuracy,
sensitivity, and specificity. The developed models demonstrated robust
predictive accuracy for ICU readmission in ICH patients, with key predictors
including demographic information, clinical parameters, and laboratory
measurements. Our study provides a predictive framework for ICU readmission
risk in ICH patients, which can aid in clinical decision-making and improve
resource allocation in intensive care settings.
|
2501.01184 | Vulnerability-Aware Spatio-Temporal Learning for Generalizable and
Interpretable Deepfake Video Detection | cs.CV | Detecting deepfake videos is highly challenging due to the complex
intertwined spatial and temporal artifacts in forged sequences. Most recent
approaches rely on binary classifiers trained on both real and fake data.
However, such methods may struggle to focus on important artifacts, which can
hinder their generalization capability. Additionally, these models often lack
interpretability, making it difficult to understand how predictions are made.
To address these issues, we propose FakeSTormer, offering two key
contributions. First, we introduce a multi-task learning framework with
additional spatial and temporal branches that enable the model to focus on
subtle spatio-temporal artifacts. These branches also provide interpretability
by highlighting video regions that may contain artifacts. Second, we propose a
video-level data synthesis algorithm that generates pseudo-fake videos with
subtle artifacts, providing the model with high-quality samples and ground
truth data for our spatial and temporal branches. Extensive experiments on
several challenging benchmarks demonstrate the competitiveness of our approach
compared to recent state-of-the-art methods. The code is available at
https://github.com/10Ring/FakeSTormer.
|
2501.01189 | Can Human Drivers and Connected Autonomous Vehicles Co-exist in
Lane-Free Traffic? A Microscopic Simulation Perspective | eess.SY cs.ET cs.SY | Recent advancements in connected autonomous vehicle (CAV) technology have
sparked growing research interest in lane-free traffic (LFT). LFT envisions a
scenario where all vehicles are CAVs, coordinating their movements without
lanes to achieve smoother traffic flow and higher road capacity. This
potentially reduces congestion without building new infrastructure. However,
the transition phase will likely involve non-connected actors such as
human-driven vehicles (HDVs) or independent AVs sharing the roads. This raises
the question of how LFT performance is impacted when not all vehicles are CAVs,
as these non-connected vehicles may prioritize their own benefits over
system-wide improvements. This paper addresses this question through
microscopic simulation on a ring road, where CAVs follow the potential lines
(PL) controller for LFT, while HDVs adhere to a strip-based car-following
model. The PL controller is also modified for safe velocities to prevent
collisions. The results reveal that even a small percentage of HDVs can
significantly disrupt LFT flow: 5% HDVs can reduce LFT's maximum road capacity
by 16%, while a 20% share of HDVs nearly halves it. The study also develops an adaptive
potential (APL) controller that forms APL corridors with modified PLs in the
surroundings of HDVs. APL shows a peak traffic flow improvement of 23.6% over
the PL controller. The study indicates that a penetration rate of approximately
60% CAVs in LFT is required before significant benefits of LFT start appearing
compared to a scenario with all HDVs. These findings open a new research
direction on minimizing the adverse effects of non-connected vehicles on LFT.
|
2501.01191 | Data-Driven Yet Formal Policy Synthesis for Stochastic Nonlinear
Dynamical Systems | eess.SY cs.SY | The automated synthesis of control policies for stochastic dynamical systems
presents significant challenges. A standard approach is to construct a
finite-state abstraction of the continuous system, typically represented as a
Markov decision process (MDP). However, generating abstractions is challenging
when (1) the system's dynamics are nonlinear, and/or (2) we do not have
complete knowledge of the dynamics. In this work, we introduce a novel
data-driven abstraction technique for nonlinear dynamical systems with additive
stochastic noise that addresses both of these issues. As a key step, we use
samples of the dynamics to learn the enabled actions and transition
probabilities of the abstraction. We represent abstractions as MDPs with
intervals of transition probabilities, known as interval MDPs (IMDPs). These
abstractions enable the synthesis of control policies for the concrete
nonlinear system, with probably approximately correct (PAC) guarantees on the
probability of satisfying a specified control objective. Through numerical
experiments, we illustrate the effectiveness and robustness of our approach in
achieving reliable control under uncertainty.
|
2501.01195 | Data Augmentation Techniques for Chinese Disease Name Normalization | cs.CL cs.AI | Disease name normalization is an important task in the medical domain. It
classifies disease names written in various formats into standardized names,
serving as a fundamental component in smart healthcare systems for various
disease-related functions. Nevertheless, the most significant obstacle to
existing disease name normalization systems is the severe shortage of training
data. Consequently, we present a novel data augmentation approach that includes
a series of data augmentation techniques and some supporting modules to help
mitigate the problem. Through extensive experimentation, we illustrate that our
proposed approach exhibits significant performance improvements across various
baseline models and training objectives, particularly in scenarios with limited
training data.
|
2501.01196 | Sparis: Neural Implicit Surface Reconstruction of Indoor Scenes from
Sparse Views | cs.CV | In recent years, reconstructing indoor scene geometry from multi-view images
has achieved encouraging accomplishments. Current methods incorporate monocular
priors into neural implicit surface models to achieve high-quality
reconstructions. However, these methods require hundreds of images for scene
reconstruction. When only a limited number of views are available as input, the
performance of monocular priors deteriorates due to scale ambiguity, leading to
the collapse of the reconstructed scene geometry. In this paper, we propose a
new method, named Sparis, for indoor surface reconstruction from sparse views.
Specifically, we investigate the impact of monocular priors on sparse scene
reconstruction, introducing a novel prior based on inter-image matching
information. Our prior offers more accurate depth information while ensuring
cross-view matching consistency. Additionally, we employ an angular filter
strategy and an epipolar matching weight function, aiming to reduce errors due
to view matching inaccuracies, thereby refining the inter-image prior for
improved reconstruction accuracy. The experiments conducted on widely used
benchmarks demonstrate superior performance in sparse-view scene
reconstruction.
|
2501.01197 | LayeringDiff: Layered Image Synthesis via Generation, then Disassembly
with Generative Knowledge | cs.CV | Layers have become indispensable tools for professional artists, allowing
them to build a hierarchical structure that enables independent control over
individual visual elements. In this paper, we propose LayeringDiff, a novel
pipeline for the synthesis of layered images, which begins by generating a
composite image using an off-the-shelf image generative model, followed by
disassembling the image into its constituent foreground and background layers.
By extracting layers from a composite image, rather than generating them from
scratch, LayeringDiff bypasses the need for large-scale training to develop
generative capabilities for individual layers. Furthermore, by utilizing a
pretrained off-the-shelf generative model, our method can produce diverse
contents and object scales in synthesized layers. For effective layer
decomposition, we adapt a large-scale pretrained generative prior to estimate
foreground and background layers. We also propose high-frequency alignment
modules to refine the fine details of the estimated layers. Our comprehensive
experiments demonstrate that our approach effectively synthesizes layered
images and supports various practical applications.
|
2501.01202 | Empirical Analysis of Nature-Inspired Algorithms for Autism Spectrum
Disorder Detection Using 3D Video Dataset | cs.LG cs.NE | Autism Spectrum Disorder (ASD) is a chronic neurodevelopmental disorder
whose symptoms include repetitive behaviour and a lack of social and
communication skills. Even though these symptoms can be observed clearly in
social settings, a large number of individuals with ASD remain undiagnosed. In this
paper, we worked on a methodology for the detection of ASD from a 3-dimensional
walking video dataset, utilizing supervised machine learning (ML)
classification algorithms and nature-inspired optimization algorithms for
feature extraction from the dataset. The proposed methodology involves the
classification of ASD using a supervised ML classification algorithm and
extracting important and relevant features from the dataset using
nature-inspired optimization algorithms. We also included the ranking
coefficients to find the initial leading particle. This particle selection
significantly reduces the computation time and hence improves the overall
efficiency and accuracy of ASD detection. To evaluate the efficiency of the
proposed methodology, we deployed various combinations of
classification algorithms and nature-inspired algorithms, resulting in an
outstanding classification accuracy of $100\%$ using the random forest
classification algorithm and gravitational search algorithm for feature
selection. The application of the proposed methodology with different datasets
would enhance the robustness and generalizability of the proposed methodology.
Due to its high accuracy and low total computation time, the proposed methodology
will offer a significant contribution to the medical and academic fields,
providing a foundation for future research and advancements in ASD diagnosis.
|
2501.01203 | HetGCoT-Rec: Heterogeneous Graph-Enhanced Chain-of-Thought LLM Reasoning
for Journal Recommendation | cs.SI | Academic journal recommendation requires effectively combining structural
understanding of scholarly networks with interpretable recommendations. While
graph neural networks (GNNs) and large language models (LLMs) excel in their
respective domains, current approaches often fail to achieve true integration
at the reasoning level. We propose HetGCoT-Rec, a framework that deeply
integrates heterogeneous graph transformer with LLMs through chain-of-thought
reasoning. Our framework features two key technical innovations: (1) a
structure-aware mechanism that transforms heterogeneous graph neural network
learned subgraph information into natural language contexts, utilizing
predefined metapaths to capture academic relationships, and (2) a multi-step
reasoning strategy that systematically embeds graph-derived contexts into the
LLM's stage-wise reasoning process. Experiments on a dataset collected from
OpenAlex demonstrate that our approach significantly outperforms baseline
methods, achieving 96.48% Hit rate and 92.21% H@1 accuracy. Furthermore, we
validate the framework's adaptability across different LLM architectures,
showing consistent improvements in both recommendation accuracy and explanation
quality. Our work demonstrates an effective approach for combining
graph-structured reasoning with language models for interpretable academic
venue recommendations.
|
2501.01205 | Harnessing Multi-Agent LLMs for Complex Engineering Problem-Solving: A
Framework for Senior Design Projects | cs.MA cs.AI cs.CL cs.LG | Multi-Agent Large Language Models (LLMs) are gaining significant attention
for their ability to harness collective intelligence in complex
problem-solving, decision-making, and planning tasks. This aligns with the
concept of the wisdom of crowds, where diverse agents contribute collectively
to generating effective solutions, making it particularly suitable for
educational settings. Senior design projects, also known as capstone or final
year projects, are pivotal in engineering education as they integrate
theoretical knowledge with practical application, fostering critical thinking,
teamwork, and real-world problem-solving skills. In this paper, we explore the
use of Multi-Agent LLMs in supporting these senior design projects undertaken
by engineering students, which often involve multidisciplinary considerations
and conflicting objectives, such as optimizing technical performance while
addressing ethical, social, and environmental concerns. We propose a framework
where distinct LLM agents represent different expert perspectives, such as
problem formulation agents, system complexity agents, societal and ethical
agents, or project managers, thus facilitating a holistic problem-solving
approach. This implementation leverages standard multi-agent system (MAS)
concepts such as coordination, cooperation, and negotiation, incorporating
prompt engineering to develop diverse personas for each agent. These agents
engage in rich, collaborative dialogues to simulate human engineering teams,
guided by principles from swarm AI to efficiently balance individual
contributions towards a unified solution. We adapt these techniques to create a
collaboration structure for LLM agents, encouraging interdisciplinary reasoning
and negotiation similar to real-world senior design projects. To assess the
efficacy of this framework, we collected six proposals of engineering and
computer science of...
|
2501.01209 | A redescription mining framework for post-hoc explaining and relating
deep learning models | cs.AI cs.LG | Deep learning models (DLMs) achieve increasingly high performance both on
structured and unstructured data. They have significantly extended the
applicability of machine learning to various domains. Their success in making predictions,
detecting patterns and generating new data made significant impact on science
and industry. Despite these accomplishments, DLMs are difficult to explain
because of their enormous size. In this work, we propose a novel framework for
post-hoc explaining and relating DLMs using redescriptions. The framework
allows cohort analysis of arbitrary DLMs by identifying statistically
significant redescriptions of neuron activations. It allows coupling neurons to
a set of target labels or sets of descriptive attributes, relating layers
within a single DLM or associating different DLMs. The proposed framework is
independent of the artificial neural network architecture and can work with
more complex target labels (e.g. multi-label or multi-target scenario).
Additionally, it can emulate both pedagogical and decompositional approaches to
rule extraction. The aforementioned properties of the proposed framework can
increase explainability and interpretability of arbitrary DLMs by providing
different information compared to existing explainable-AI approaches.
|
2501.01212 | Real-time Cross-modal Cybersickness Prediction in Virtual Reality | cs.CV cs.HC | Cybersickness remains a significant barrier to the widespread adoption of
immersive virtual reality (VR) experiences, as it can greatly disrupt user
engagement and comfort. Research has shown that cybersickness can be
significantly reflected in head and eye tracking data, along with other physiological data
(e.g., TMP, EDA, and BMP). Despite the application of deep learning techniques
such as CNNs and LSTMs, these models often struggle to capture the complex
interactions between multiple data modalities and lack the capacity for
real-time inference, limiting their practical application. Addressing this gap,
we propose a lightweight model that leverages a transformer-based encoder with
sparse self-attention to process bio-signal features and a PP-TSN network for
video feature extraction. These features are then integrated via a cross-modal
fusion module, creating a video-aware bio-signal representation that supports
cybersickness prediction based on both visual and bio-signal inputs. Our model,
trained with a lightweight framework, was validated on a public dataset
containing eye and head tracking data, physiological data, and VR video, and
demonstrated state-of-the-art performance in cybersickness prediction,
achieving a high accuracy of 93.13\% using only VR video inputs. These findings
suggest that our approach not only enables effective, real-time cybersickness
prediction but also addresses the longstanding issue of modality interaction in
VR environments. This advancement provides a foundation for future research on
multimodal data integration in VR, potentially leading to more personalized,
comfortable and widely accessible VR experiences.
|
2501.01213 | Range-Only Localization System for Small-Scale Flapping-Wing Robots | cs.RO | The design of localization systems for small-scale flapping-wing aerial
robots faces significant challenges caused by the limited payload and onboard
computational resources. This paper presents an ultra-wideband localization
system particularly designed for small-scale flapping-wing robots. The solution
relies on custom 5-gram ultra-wideband sensors and provides robust, very
efficient (in terms of both computation and energy consumption), and accurate
(mean error of 0.28 meters) 3D position estimation. We validate our system
using a Flapper Nimble+ flapping-wing robot.
|
2501.01216 | TabTreeFormer: Tabular Data Generation Using Hybrid Tree-Transformer | cs.LG | Transformers have achieved remarkable success in tabular data generation.
However, they lack domain-specific inductive biases which are critical to
preserving the intrinsic characteristics of tabular data. Meanwhile, they
suffer from poor scalability and efficiency due to quadratic computational
complexity. In this paper, we propose TabTreeFormer, a hybrid transformer
architecture that incorporates a tree-based model that retains tabular-specific
inductive biases of non-smooth and potentially low-correlated patterns caused
by discreteness and non-rotational invariance, and hence enhances the fidelity
and utility of synthetic data. In addition, we devise a dual-quantization
tokenizer to capture the multimodal continuous distribution and further
facilitate the learning of numerical value distribution. Moreover, our proposed
tokenizer reduces the vocabulary size and sequence length due to the limited
complexity (e.g., dimension-wise semantic meaning) of tabular data, enabling a
significant reduction in model size without sacrificing the capability of the
transformer model. We evaluate TabTreeFormer on 10 datasets against multiple
generative models on various metrics; our experimental results show that
TabTreeFormer achieves superior fidelity, utility, privacy, and efficiency. Our
best model yields a 40% utility improvement with 1/16 of the baseline model
size.
|
2501.01222 | Classification of Operational Records in Aviation Using Deep Learning
Approaches | cs.LG | Ensuring safety in the aviation industry is critical, as even minor anomalies
can lead to severe consequences. This study evaluates the performance of four
different deep learning (DL) models: Bidirectional Long
Short-Term Memory (BLSTM), Convolutional Neural Networks (CNN), Long Short-Term
Memory (LSTM), and Simple Recurrent Neural Networks (sRNN), on a multi-class
classification task involving Commercial, Military, and Private categories
using the Socrata aviation dataset of 4,864 records. The models were assessed
using a classification report, confusion matrix analysis, accuracy metrics,
validation loss and accuracy curves. Among the models, BLSTM achieved the
highest overall accuracy of 72%, demonstrating superior performance in
stability and balanced classification, while LSTM followed closely with 71%,
excelling in recall for the Commercial class. CNN and sRNN exhibited lower
accuracies of 67% and 69%, with significant misclassifications in the Private
class. While the results highlight the strengths of BLSTM and LSTM in handling
sequential dependencies and complex classification tasks, all models faced
challenges with class imbalance, particularly in predicting the Military and
Private categories. Addressing these limitations through data augmentation,
advanced feature engineering, and ensemble learning techniques could enhance
classification accuracy and robustness. This study underscores the importance
of selecting appropriate architectures for domain-specific tasks.
|
2501.01223 | Conditional Consistency Guided Image Translation and Enhancement | cs.CV cs.LG | Consistency models have emerged as a promising alternative to diffusion
models, offering high-quality generative capabilities through single-step
sample generation. However, their application to multi-domain image translation
tasks, such as cross-modal translation and low-light image enhancement, remains
largely unexplored. In this paper, we introduce Conditional Consistency Models
(CCMs) for multi-domain image translation by incorporating additional
conditional inputs. We implement these modifications by introducing
task-specific conditional inputs that guide the denoising process, ensuring
that the generated outputs retain structural and contextual information from
the corresponding input domain. We evaluate CCMs on 10 different datasets
demonstrating their effectiveness in producing high-quality translated images
across multiple domains. Code is available at
https://github.com/amilbhagat/Conditional-Consistency-Models.
|
2501.01227 | Comparative Analysis of Topic Modeling Techniques on ATSB Text
Narratives Using Natural Language Processing | cs.LG | Improvements in aviation safety analysis call for innovative techniques to
extract valuable insights from the abundance of textual data available in
accident reports. This paper explores the application of four prominent topic
modelling techniques, namely Probabilistic Latent Semantic Analysis (pLSA),
Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), and
Non-negative Matrix Factorization (NMF), to dissect aviation incident
narratives using the Australian Transport Safety Bureau (ATSB) dataset. The
study examines each technique's ability to unveil latent thematic structures
within the data, providing safety professionals with a systematic approach to
gain actionable insights. Through a comparative analysis, this research not
only showcases the potential of these methods in aviation safety but also
elucidates their distinct advantages and limitations.
|
2501.01230 | Modeling Multi-Task Model Merging as Adaptive Projective Gradient
Descent | cs.LG | Merging multiple expert models offers a promising approach for performing
multi-task learning without accessing their original data. Existing methods
attempt to alleviate task conflicts by sparsifying task vectors or promoting
orthogonality among them. However, they overlook the fundamental requirement of
model merging: ensuring the merged model performs comparably to task-specific
models on respective tasks. We find these methods inevitably discard
task-specific information that, while causing conflicts, is crucial for
performance. Based on our findings, we frame model merging as a constrained
optimization problem ($\textit{i.e.}$, minimizing the gap between the merged
model and individual models, subject to the constraint of retaining shared
knowledge) and solve it via adaptive projective gradient descent. Specifically,
we align the merged model with individual models by decomposing and
reconstituting the loss function, alleviating conflicts through
$\textit{data-free}$ optimization of task vectors. To retain shared knowledge,
we optimize this objective by projecting gradients within a $\textit{shared
subspace}$ spanning all tasks. Moreover, we view merging coefficients as
adaptive learning rates and propose a task-aware, training-free strategy.
Experiments show that our plug-and-play approach consistently outperforms
previous methods, achieving state-of-the-art results across diverse
architectures and tasks in both vision and NLP domains.
|
2501.01231 | Exploiting Latent Properties to Optimize Neural Codecs | cs.CV cs.LG | End-to-end image and video codecs are becoming increasingly competitive,
compared to traditional compression techniques that have been developed through
decades of manual engineering efforts. These trainable codecs have many
advantages over traditional techniques, such as their straightforward
adaptation to perceptual distortion metrics and high performance in specific
fields thanks to their learning ability. However, current state-of-the-art
neural codecs do not fully exploit the benefits of vector quantization and the
existence of the entropy gradient in decoding devices. In this paper, we
propose to leverage these two properties (vector quantization and entropy
gradient) to improve the performance of off-the-shelf codecs. Firstly, we
demonstrate that using non-uniform scalar quantization cannot improve
performance over uniform quantization. We thus suggest using predefined optimal
uniform vector quantization to improve performance. Secondly, we show that the
entropy gradient, available at the decoder, is correlated with the
reconstruction error gradient, which is not available at the decoder. We
therefore use the former as a proxy to enhance compression performance. Our
experimental results show that these approaches save between 1% and 3% of the
rate for the same quality across various pretrained methods. In addition, the
entropy gradient based solution improves traditional codec performance
significantly as well.
|
2501.01235 | SVFR: A Unified Framework for Generalized Video Face Restoration | cs.CV cs.LG eess.IV | Face Restoration (FR) is a crucial area within image and video processing,
focusing on reconstructing high-quality portraits from degraded inputs. Despite
advancements in image FR, video FR remains relatively under-explored, primarily
due to challenges related to temporal consistency, motion artifacts, and the
limited availability of high-quality video data. Moreover, traditional face
restoration typically prioritizes enhancing resolution and may not give as much
consideration to related tasks such as facial colorization and inpainting. In
this paper, we propose a novel approach for the Generalized Video Face
Restoration (GVFR) task, which integrates video BFR, inpainting, and
colorization tasks that we empirically show to benefit each other. We present a
unified framework, termed as stable video face restoration (SVFR), which
leverages the generative and motion priors of Stable Video Diffusion (SVD) and
incorporates task-specific information through a unified face restoration
framework. A learnable task embedding is introduced to enhance task
identification. Meanwhile, a novel Unified Latent Regularization (ULR) is
employed to encourage the shared feature representation learning among
different subtasks. To further enhance the restoration quality and temporal
stability, we introduce the facial prior learning and the self-referred
refinement as auxiliary strategies used for both training and inference. The
proposed framework effectively combines the complementary strengths of these
tasks, enhancing temporal coherence and achieving superior restoration quality.
This work advances the state-of-the-art in video FR and establishes a new
paradigm for generalized video face restoration. Code and video demo are
available at https://github.com/wangzhiyaoo/SVFR.git.
|
2501.01237 | Self-Refinement Strategies for LLM-based Product Attribute Value
Extraction | cs.CL | Structured product data, in the form of attribute-value pairs, is essential
for e-commerce platforms to support features such as faceted product search and
attribute-based product comparison. However, vendors often provide unstructured
product descriptions, making attribute value extraction necessary to ensure
data consistency and usability. Large language models (LLMs) have demonstrated
their potential for product attribute value extraction in few-shot scenarios.
Recent research has shown that self-refinement techniques can improve the
performance of LLMs on tasks such as code generation and text-to-SQL
translation. For other tasks, the application of these techniques has resulted
in increased costs due to processing additional tokens, without achieving any
improvement in performance. This paper investigates applying two
self-refinement techniques (error-based prompt rewriting and self-correction)
to the product attribute value extraction task. The self-refinement techniques
are evaluated across zero-shot, few-shot in-context learning, and fine-tuning
scenarios using GPT-4o. The experiments show that both self-refinement
techniques fail to significantly improve the extraction performance while
substantially increasing processing costs. For scenarios with development data,
fine-tuning yields the highest performance, while the ramp-up costs of
fine-tuning are balanced out as the amount of product descriptions increases.
|
2501.01238 | EHCTNet: Enhanced Hybrid of CNN and Transformer Network for Remote
Sensing Image Change Detection | cs.CV cs.LG | Remote sensing (RS) change detection incurs a high cost because of false
negatives, which are more costly than false positives. Existing frameworks,
struggling to improve the Precision metric to reduce the cost of false
positives, still have limitations in focusing on the change of interest, which
leads to missed detections and discontinuity issues. This work tackles these
issues by enhancing feature learning capabilities and integrating the frequency
components of feature information, with a strategy to incrementally boost the
Recall value. We propose an enhanced hybrid of CNN and Transformer network
(EHCTNet) for effectively mining the change information of interest. Firstly, a
dual-branch feature extraction module is used to extract the multi-scale
features of RS images. Secondly, the frequency component of these features is
exploited by a refined module I. Thirdly, an enhanced token mining module based
on the Kolmogorov Arnold Network is utilized to derive semantic information.
Finally, the semantic change information's frequency component, beneficial for
final detection, is mined from the refined module II. Extensive experiments
validate the effectiveness of EHCTNet in comprehending complex changes of
interest. The visualization outcomes show that EHCTNet detects more intact and
continuous changed areas and perceives more accurate neighboring distinction
than state-of-the-art models.
|
2501.01239 | High-Order Tensor Regression in Sparse Convolutional Neural Networks | cs.LG | This article presents a generic approach to convolution that significantly
differs from conventional methodologies in the current Machine Learning
literature. The approach, in its mathematical aspects, proved to be clear and
concise, particularly when high-order tensors are involved. In this context, a
rational theory of regression in neural networks is developed, as a framework
for a generic view of sparse convolutional neural networks, the primary focus
of this study. As a direct outcome, the classic Backpropagation Algorithm is
redefined to align with this rational tensor-based approach and presented in
its simplest, most generic form.
|
2501.01240 | Asymmetric Reinforcing against Multi-modal Representation Bias | cs.CV | The strength of multimodal learning lies in its ability to integrate
information from various sources, providing rich and comprehensive insights.
However, in real-world scenarios, multi-modal systems often face the challenge
of dynamic modality contributions: the dominance of different modalities may
change with the environment, leading to suboptimal performance in multimodal
learning. Current methods mainly enhance weak modalities to balance multimodal
representation bias, which inevitably optimizes from a partial-modality
perspective and easily leads to performance degradation for dominant modalities.
To address this problem, we propose an Asymmetric Reinforcing method against
Multimodal representation bias (ARM). Our ARM dynamically reinforces the weak
modalities while maintaining the ability to represent dominant modalities
through conditional mutual information. Moreover, we provide an in-depth
analysis showing that optimizing certain modalities could cause information loss
and prevent leveraging the full advantages of multimodal data. By exploring the
dominance and narrowing the contribution gaps between modalities, we have
significantly improved the performance of multimodal learning, making notable
progress in mitigating imbalanced multimodal learning.
|
2501.01242 | An Efficient Attention Mechanism for Sequential Recommendation Tasks:
HydraRec | cs.IR cs.AI | Transformer based models are increasingly being used in various domains
including recommender systems (RS). Pretrained transformer models such as BERT
have shown good performance at language modelling. With the greater ability to
model sequential tasks, variants of Encoder-only models (like BERT4Rec, SASRec
etc.) have found success in sequential RS problems. Computing dot-product
attention in traditional transformer models has quadratic complexity in
sequence length. This is a bigger problem for RS because, unlike in language
models, new items are added to the catalogue every day. A user's buying history is
a dynamic sequence which depends on multiple factors. Recently, various linear
attention models have tried to solve this problem by making the model linear in
sequence length (token dimensions). Hydra attention is one such linear
complexity model proposed for vision transformers which reduces the complexity
of attention for both the number of tokens as well as model embedding
dimensions. Building on the idea of Hydra attention, we introduce an efficient
Transformer based Sequential RS (HydraRec) which significantly improves
theoretical complexity of computing attention for longer sequences and bigger
datasets while preserving the temporal context. Extensive experiments are
conducted to evaluate other linear transformer-based RS models and compared
with HydraRec across various evaluation metrics. HydraRec outperforms other
linear attention-based models as well as dot-product based attention models
when used with causal masking for sequential recommendation next item
prediction tasks. For bi-directional models its performance is comparable to
the BERT4Rec model with an improvement in running time.
|
2501.01243 | Face-Human-Bench: A Comprehensive Benchmark of Face and Human
Understanding for Multi-modal Assistants | cs.CV cs.AI cs.CL | Faces and humans are crucial elements in social interaction and are widely
included in everyday photos and videos. Therefore, a deep understanding of
faces and humans will enable multi-modal assistants to achieve improved
response quality and broadened application scope. Currently, the multi-modal
assistant community lacks a comprehensive and scientific evaluation of face and
human understanding abilities. In this paper, we first propose a hierarchical
ability taxonomy that includes three levels of abilities. Then, based on this
taxonomy, we collect images and annotations from publicly available datasets in
the face and human community and build a semi-automatic data pipeline to
produce problems for the new benchmark. Finally, the obtained Face-Human-Bench
comprises a development set with 900 problems and a test set with 1800
problems, supporting both English and Chinese. We conduct evaluations over 25
mainstream multi-modal large language models (MLLMs) with our Face-Human-Bench,
focusing on the correlation between abilities, the impact of the relative
position of targets on performance, and the impact of Chain of Thought (CoT)
prompting on performance. Moreover, inspired by multi-modal agents, we also
explore which abilities of MLLMs need to be supplemented by specialist models.
|
2501.01245 | SeFAR: Semi-supervised Fine-grained Action Recognition with Temporal
Perturbation and Learning Stabilization | cs.CV cs.LG | Human action understanding is crucial for the advancement of multimodal
systems. While recent developments, driven by powerful large language models
(LLMs), aim to be general enough to cover a wide range of categories, they
often overlook the need for more specific capabilities. In this work, we
address the more challenging task of Fine-grained Action Recognition (FAR),
which focuses on detailed semantic labels within a shorter temporal duration
(e.g., "salto backward tucked with 1 turn"). Given the high costs of annotating
fine-grained labels and the substantial data needed for fine-tuning LLMs, we
propose to adopt semi-supervised learning (SSL). Our framework, SeFAR,
incorporates several innovative designs to tackle these challenges.
Specifically, to capture sufficient visual details, we construct Dual-level
temporal elements as more effective representations, based on which we design a
new strong augmentation strategy for the Teacher-Student learning paradigm
through involving moderate temporal perturbation. Furthermore, to handle the
high uncertainty within the teacher model's predictions for FAR, we propose the
Adaptive Regulation to stabilize the learning process. Experiments show that
SeFAR achieves state-of-the-art performance on two FAR datasets, FineGym and
FineDiving, across various data scopes. It also outperforms other
semi-supervised methods on two classical coarse-grained datasets, UCF101 and
HMDB51. Further analysis and ablation studies validate the effectiveness of our
designs. Additionally, we show that the features extracted by our SeFAR could
largely promote the ability of multimodal foundation models to understand
fine-grained and domain-specific semantics.
|
2501.01246 | Large Language Model-Enhanced Symbolic Reasoning for Knowledge Base
Completion | cs.CL | Integrating large language models (LLMs) with rule-based reasoning offers a
powerful solution for improving the flexibility and reliability of Knowledge
Base Completion (KBC). Traditional rule-based KBC methods offer verifiable
reasoning yet lack flexibility, while LLMs provide strong semantic
understanding yet suffer from hallucinations. With the aim of combining LLMs'
understanding capability with the logic and rigor of rule-based approaches,
we propose a novel framework consisting of a Subgraph Extractor, an LLM
Proposer, and a Rule Reasoner. The Subgraph Extractor first samples subgraphs
from the KB. Then, the LLM uses these subgraphs to propose diverse and
meaningful rules that are helpful for inferring missing facts. To effectively
avoid hallucination in LLMs' generations, these proposed rules are further
refined by a Rule Reasoner to pinpoint the most significant rules in the KB for
Knowledge Base Completion. Our approach offers several key benefits: the
utilization of LLMs to enhance the richness and diversity of the proposed rules
and the integration with rule-based reasoning to improve reliability. Our
method also demonstrates strong performance across diverse KB datasets,
highlighting the robustness and generalizability of the proposed framework.
|
2501.01248 | Bayesian Active Learning By Distribution Disagreement | cs.LG | Active Learning (AL) for regression has been systematically under-researched
due to the increased difficulty of measuring uncertainty in regression models.
Since normalizing flows offer a full predictive distribution instead of a point
forecast, they facilitate direct usage of known heuristics for AL like Entropy
or Least-Confident sampling. However, we show that most of these heuristics do
not work well for normalizing flows in pool-based AL and we need more
sophisticated algorithms to distinguish between aleatoric and epistemic
uncertainty. In this work we propose BALSA, an adaptation of the BALD
algorithm, tailored for regression with normalizing flows. With this work we
extend current research on uncertainty quantification with normalizing flows
\cite{berry2023normalizing, berry2023escaping} to real world data and
pool-based AL with multiple acquisition functions and query sizes. We report
SOTA results for BALSA across 4 different datasets and 2 different
architectures.
|
2501.01256 | Digital Guardians: Can GPT-4, Perspective API, and Moderation API
reliably detect hate speech in reader comments of German online newspapers? | cs.CL cs.LG | In recent years, toxic content and hate speech have become widespread
phenomena on the internet. Moderators of online newspapers and forums are now
required, partly due to legal regulations, to carefully review and, if
necessary, delete reader comments. This is a labor-intensive process. Some
providers of large language models already offer solutions for automated hate
speech detection or the identification of toxic content. These include GPT-4o
from OpenAI, Jigsaw's (Google) Perspective API, and OpenAI's Moderation API.
Based on the selected German test dataset HOCON34k, which was specifically
created for developing tools to detect hate speech in reader comments of online
newspapers, these solutions are compared with each other and against the
HOCON34k baseline. The test dataset contains 1,592 annotated text samples. For
GPT-4o, three different prompting strategies are used: Zero-Shot, One-Shot,
and Few-Shot. The results of the experiments demonstrate that GPT-4o
outperforms both the Perspective API and the Moderation API, and exceeds the
HOCON34k baseline by approximately 5 percentage points, as measured by a
combined metric of MCC and F2-score.
|
2501.01257 | CodeElo: Benchmarking Competition-level Code Generation of LLMs with
Human-comparable Elo Ratings | cs.CL | With the increasing code reasoning capabilities of existing large language
models (LLMs) and breakthroughs in reasoning models like OpenAI o1 and o3,
there is a growing need to develop more challenging and comprehensive
benchmarks that effectively test their sophisticated competition-level coding
abilities. Existing benchmarks, like LiveCodeBench and USACO, fall short due to
the unavailability of private test cases, lack of support for special judges,
and misaligned execution environments. To bridge this gap, we introduce
CodeElo, a standardized competition-level code generation benchmark that
effectively addresses all these challenges for the first time. CodeElo
benchmark is mainly based on the official CodeForces platform and tries to
align with the platform as much as possible. We compile the recent six months
of contest problems on CodeForces with detailed information such as contest
divisions, problem difficulty ratings, and problem algorithm tags. We introduce
a unique judging method in which problems are submitted directly to the
platform and develop a reliable Elo rating calculation system that aligns with
the platform and is comparable with human participants but has lower variance.
By testing on our CodeElo, we provide the Elo ratings of 30 existing popular
open-source and 3 proprietary LLMs for the first time. The results show that
o1-mini and QwQ-32B-Preview stand out significantly, achieving Elo ratings of
1578 and 1261, respectively, while other models struggle even with the easiest
problems, placing in the lowest 25 percent among all human participants.
Detailed analysis experiments are also conducted to provide insights into
performance across algorithms and comparisons between using C++ and Python,
which can suggest directions for future studies.
|
2501.01262 | Detail Matters: Mamba-Inspired Joint Unfolding Network for Snapshot
Spectral Compressive Imaging | cs.CV | In the coded aperture snapshot spectral imaging system, Deep Unfolding
Networks (DUNs) have made impressive progress in recovering 3D hyperspectral
images (HSIs) from a single 2D measurement. However, the inherent nonlinear and
ill-posed characteristics of HSI reconstruction still pose challenges to
existing methods in terms of accuracy and stability. To address this issue, we
propose a Mamba-inspired Joint Unfolding Network (MiJUN), which integrates
physics-embedded DUNs with learning-based HSI imaging. Firstly, leveraging the
concept of trapezoid discretization to expand the representation space of
unfolding networks, we introduce an accelerated unfolding network scheme. This
approach can be interpreted as a generalized accelerated half-quadratic
splitting with a second-order differential equation, which reduces the reliance
on initial optimization stages and addresses challenges related to long-range
interactions. Crucially, within the Mamba framework, we restructure the
Mamba-inspired global-to-local attention mechanism by incorporating a selective
state space model and an attention mechanism. This effectively reinterprets
Mamba as a variant of the Transformer architecture, improving its adaptability
and efficiency. Furthermore, we refine the scanning strategy with Mamba by
integrating the tensor mode-$k$ unfolding into the Mamba network. This approach
emphasizes the low-rank properties of tensors along various modes, while
conveniently facilitating 12 scanning directions. Numerical and visual
comparisons on both simulation and real datasets demonstrate the superiority of
our proposed MiJUN, which achieves superior detail representation.
|
2501.01263 | Stealthy Backdoor Attack to Real-world Models in Android Apps | cs.CR cs.AI | Powered by their superior performance, deep neural networks (DNNs) have found
widespread applications across various domains. Many deep learning (DL) models
are now embedded in mobile apps, making them more accessible to end users
through on-device DL. However, deploying on-device DL to users' smartphones
simultaneously introduces several security threats. One primary threat is
backdoor attacks. Extensive research has explored backdoor attacks for several
years and has proposed numerous attack approaches. However, few studies have
investigated backdoor attacks on DL models deployed in the real world, and
those that have show obvious deficiencies in effectiveness and stealthiness. In this
work, we explore more effective and stealthy backdoor attacks on real-world DL
models extracted from mobile apps. Our main justification is that imperceptible
and sample-specific backdoor triggers generated by DNN-based steganography can
enhance the efficacy of backdoor attacks on real-world models. We first confirm
the effectiveness of steganography-based backdoor attacks on four
state-of-the-art DNN models. Subsequently, we systematically evaluate and
analyze the stealthiness of the attacks to ensure they are difficult to
perceive. Finally, we implement the backdoor attacks on real-world models and
compare our approach with three baseline methods. We collect 38,387 mobile
apps, extract 89 DL models from them, and analyze these models to obtain the
prerequisite model information for the attacks. After identifying the target
models, our approach achieves an average of 12.50% higher attack success rate
than DeepPayload while better maintaining the normal performance of the models.
Extensive experimental results demonstrate that our method enables more
effective, robust, and stealthy backdoor attacks on real-world models.
|
2501.01264 | ProgCo: Program Helps Self-Correction of Large Language Models | cs.CL cs.AI cs.LG | Self-Correction aims to enable large language models (LLMs) to self-verify
and self-refine their initial responses without external feedback. However,
LLMs often fail to effectively self-verify and generate correct feedback,
further misleading refinement and leading to the failure of self-correction,
especially in complex reasoning tasks. In this paper, we propose Program-driven
Self-Correction (ProgCo). First, program-driven verification (ProgVe) achieves
complex verification logic and extensive validation through self-generated,
self-executing verification pseudo-programs. Then, program-driven refinement
(ProgRe) receives feedback from ProgVe, conducts dual reflection and refinement
on both responses and verification programs to mitigate the misleading effects of incorrect
feedback in complex reasoning tasks. Experiments on three instruction-following
and mathematical benchmarks indicate that ProgCo achieves effective
self-correction, and its performance can be further enhanced when combined with
real program tools.
|
2501.01266 | PIMAEX: Multi-Agent Exploration through Peer Incentivization | cs.MA cs.AI | While exploration in single-agent reinforcement learning has been studied
extensively in recent years, considerably less work has focused on its
counterpart in multi-agent reinforcement learning. To address this issue, this
work proposes a peer-incentivized reward function inspired by previous research
on intrinsic curiosity and influence-based rewards. The \textit{PIMAEX} reward,
short for Peer-Incentivized Multi-Agent Exploration, aims to improve
exploration in the multi-agent setting by encouraging agents to exert influence
over each other to increase the likelihood of encountering novel states. We
evaluate the \textit{PIMAEX} reward in conjunction with
\textit{PIMAEX-Communication}, a multi-agent training algorithm that employs a
communication channel for agents to influence one another. The evaluation is
conducted in the \textit{Consume/Explore} environment, a partially observable
environment with deceptive rewards, specifically designed to challenge the
exploration vs.\ exploitation dilemma and the credit-assignment problem. The
results empirically demonstrate that agents using the \textit{PIMAEX} reward
with \textit{PIMAEX-Communication} outperform those that do not.
|