| id | title | categories | abstract |
|---|---|---|---|
2501.13461
|
Knowledge-Informed Multi-Agent Trajectory Prediction at Signalized
Intersections for Infrastructure-to-Everything
|
cs.RO cs.CV cs.MA
|
Multi-agent trajectory prediction at signalized intersections is crucial for
developing efficient intelligent transportation systems and safe autonomous
driving systems. Due to the complexity of intersection scenarios and the
limitations of single-vehicle perception, the performance of vehicle-centric
prediction methods has reached a plateau. Furthermore, most works underutilize
critical intersection information, including traffic signals, and behavior
patterns induced by road structures. Therefore, we propose a multi-agent
trajectory prediction framework at signalized intersections dedicated to
Infrastructure-to-Everything (I2XTraj). Our framework leverages dynamic graph
attention to integrate knowledge from traffic signals and driving behaviors. A
continuous signal-informed mechanism is proposed to adaptively process
real-time traffic signals from infrastructure devices. Additionally, leveraging
the prior knowledge of the intersection topology, we propose a driving strategy
awareness mechanism to model the joint distribution of goal intentions and
maneuvers. To the best of our knowledge, I2XTraj represents the first
multi-agent trajectory prediction framework explicitly designed for
infrastructure deployment, supplying subscribable prediction services to all
vehicles at intersections. I2XTraj demonstrates state-of-the-art performance on
both the Vehicle-to-Infrastructure dataset V2X-Seq and the aerial-view dataset
SinD for signalized intersections. Quantitative evaluations show that our
approach outperforms existing methods by more than 30% in both multi-agent and
single-agent scenarios.
|
2501.13462
|
Generalized graph codes and their minimum distances
|
math.CO cs.IT math.IT
|
A graph code is a linear code obtained from a linear code $C$ and a certain
bipartite graph $G$. In this paper, I propose an extension of the definition of
graph codes to general $l$-partite graphs and give a lower bound on their
minimum distance. I also give an example of a generalized graph code and
calculate its parameters $[n, k, d]$.
|
2501.13467
|
Multi-Level Attention and Contrastive Learning for Enhanced Text
Classification with an Optimized Transformer
|
cs.CL
|
This paper studies a text classification algorithm based on an improved
Transformer to improve the performance and efficiency of the model in text
classification tasks. Aiming at the shortcomings of the traditional Transformer
model in capturing deep semantic relationships and optimizing computational
complexity, this paper introduces a multi-level attention mechanism and a
contrastive learning strategy. The multi-level attention mechanism effectively
models the global semantics and local features in the text by combining global
attention with local attention; the contrastive learning strategy enhances the
model's ability to distinguish between different categories by constructing
positive and negative sample pairs while improving the classification effect.
In addition, in order to improve the training and inference efficiency of the
model on large-scale text data, this paper designs a lightweight module to
optimize the feature transformation process and reduce the computational cost.
Experimental results on the dataset show that the improved Transformer model
outperforms the comparative models such as BiLSTM, CNN, standard Transformer,
and BERT in terms of classification accuracy, F1 score, and recall rate,
showing stronger semantic representation ability and generalization
performance. The method proposed in this paper provides a new idea for
algorithm optimization in the field of text classification and has good
application potential and practical value. Future work will focus on studying
the performance of this model in multi-category imbalanced datasets and
cross-domain tasks and explore the integration wi
|
2501.13468
|
Streaming Video Understanding and Multi-round Interaction with
Memory-enhanced Knowledge
|
cs.CV cs.AI
|
Recent advances in Large Language Models (LLMs) have enabled the development
of Video-LLMs, advancing multimodal learning by bridging video data with
language tasks. However, current video understanding models struggle with
processing long video sequences, supporting multi-turn dialogues, and adapting
to real-world dynamic scenarios. To address these issues, we propose
StreamChat, a training-free framework for streaming video reasoning and
conversational interaction. StreamChat leverages a novel hierarchical memory
system to efficiently process and compress video features over extended
sequences, enabling real-time, multi-turn dialogue. Our framework incorporates
a parallel system scheduling strategy that enhances processing speed and
reduces latency, ensuring robust performance in real-world applications.
Furthermore, we introduce StreamBench, a versatile benchmark that evaluates
streaming video understanding across diverse media types and interactive
scenarios, including multi-turn interactions and complex reasoning tasks.
Extensive evaluations on StreamBench and other public benchmarks demonstrate
that StreamChat significantly outperforms existing state-of-the-art models in
terms of accuracy and response times, confirming its effectiveness for
streaming video understanding. Code is available at StreamChat:
https://github.com/hmxiong/StreamChat.
|
2501.13470
|
Leveraging Textual Anatomical Knowledge for Class-Imbalanced
Semi-Supervised Multi-Organ Segmentation
|
cs.CV
|
Annotating 3D medical images demands substantial time and expertise, driving
the adoption of semi-supervised learning (SSL) for segmentation tasks. However,
the complex anatomical structures of organs often lead to significant class
imbalances, posing major challenges for deploying SSL in real-world scenarios.
Despite the availability of valuable prior information, such as inter-organ
relative positions and organ shape priors, existing SSL methods have yet to
fully leverage these insights. To address this gap, we propose a novel approach
that integrates textual anatomical knowledge (TAK) into the segmentation model.
Specifically, we use GPT-4o to generate textual descriptions of anatomical
priors, which are then encoded using a CLIP-based model. These encoded priors
are injected into the segmentation model as parameters of the segmentation
head. Additionally, contrastive learning is employed to enhance the alignment
between textual priors and visual features. Extensive experiments demonstrate
the superior performance of our method, significantly surpassing
state-of-the-art approaches. The source code will be available at:
https://github.com/Lunn88/TAK-Semi.
|
2501.13472
|
Radio Map Estimation via Latent Domain Plug-and-Play Denoising
|
eess.SP cs.LG
|
Radio map estimation (RME), also known as spectrum cartography, aims to
reconstruct the strength of radio interference across different domains (e.g.,
space and frequency) from sparsely sampled measurements. To tackle this typical
inverse problem, state-of-the-art RME methods rely on handcrafted or
data-driven structural information of radio maps. However, the former often
struggles to model complex radio frequency (RF) environments and the latter
requires excessive training -- making it hard to quickly adapt to in situ
sensing tasks. This work presents a spatio-spectral RME approach based on
plug-and-play (PnP) denoising, a technique from computational imaging. The idea
is to leverage the observation that the denoising operations of signals like
natural images and radio maps are similar -- despite the nontrivial differences
of the signals themselves. Hence, sophisticated denoisers designed for or
learned from natural images can be directly employed to assist RME, avoiding
using radio map data for training. Unlike conventional PnP methods that operate
directly in the data domain, the proposed method exploits the underlying
physical structure of radio maps via an ADMM algorithm that denoises
in a latent domain. This design significantly improves computational efficiency
and enhances noise robustness. Theoretical aspects, e.g., recoverability of the
complete radio map and convergence of the ADMM algorithm are analyzed.
Synthetic and real data experiments are conducted to demonstrate the
effectiveness of our approach.
|
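The data-consistency-plus-denoiser split described in this abstract is the standard plug-and-play ADMM template from computational imaging. The sketch below is illustrative only: a separable Gaussian blur stands in for the sophisticated natural-image denoiser, the latent-domain refinement is omitted, and `pnp_admm_rme` is a hypothetical name, not the authors' implementation.

```python
import numpy as np

def gaussian_denoise(x, sigma=1.0):
    # Simple separable Gaussian blur standing in for a learned image denoiser.
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, x)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)

def pnp_admm_rme(y, mask, rho=1.0, iters=50):
    """Reconstruct a 2-D radio map from sparse samples y observed where mask==1.

    Generic PnP-ADMM: the x-update enforces consistency with the measurements,
    the denoiser plays the role of the prior's proximal operator."""
    x = np.where(mask == 1, y, y[mask == 1].mean())  # crude initialization
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(iters):
        # x-update: prox of 0.5*||mask*(x-y)||^2 + 0.5*rho*||x-z+u||^2
        x = (mask * y + rho * (z - u)) / (mask + rho)
        # z-update: plug-and-play denoising step
        z = gaussian_denoise(x + u)
        # dual update
        u = u + x - z
    return z
```

Swapping `gaussian_denoise` for any stronger denoiser changes nothing else in the loop, which is exactly the "avoid training on radio map data" appeal of the PnP approach.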
2501.13473
|
Risk and Vulnerability Assessment of Energy-Transportation
Infrastructure Systems to Extreme Weather
|
eess.SY cs.SY
|
The interaction between extreme weather events and interdependent critical
infrastructure systems involves complex spatiotemporal dynamics. Multi-type
emergency decisions within energy-transportation infrastructures significantly
influence system performance throughout the extreme weather process. A
comprehensive assessment of these factors faces challenges in model complexity
and heterogeneity between energy and transportation systems. This paper
proposes an assessment framework that accommodates multiple types of emergency
decisions. It integrates the heterogeneous energy and transportation
infrastructures in the form of a network flow model to simulate and quantify
the impact of extreme weather events on the energy-transportation
infrastructure system. Based on this framework, a targeted method for
identifying system vulnerabilities is further introduced, utilizing a neural
network surrogate that achieves privacy protection and evaluation acceleration
while maintaining consideration of system interdependencies. Numerical
experiments demonstrate that the proposed framework and method can reveal the
risk levels faced by urban infrastructure systems, identify weak points that
should be prioritized for reinforcement, and strike a balance between accuracy
and evaluation speed.
|
2501.13474
|
Leveraging Digital Twin and Machine Learning Techniques for Anomaly
Detection in Power Electronics Dominated Grid
|
eess.SY cs.SY
|
Modern power grids are transitioning towards power electronics-dominated
grids (PEDG) due to the increasing integration of renewable energy sources and
energy storage systems. This shift introduces complexities in grid operation
and increases vulnerability to cyberattacks. This research explores the
application of digital twin (DT) technology and machine learning (ML)
techniques for anomaly detection in PEDGs. A DT can accurately track and
simulate the behavior of the physical grid in real-time, providing a platform
for monitoring and analyzing grid operations, with an extensive amount of data
about dynamic power flows across the whole power system. By integrating ML
algorithms, the DT can learn normal grid behavior and effectively identify
anomalies that deviate from established patterns, enabling early detection of
potential cyberattacks or system faults. This approach offers a comprehensive
and proactive strategy for enhancing cybersecurity and ensuring the stability
and reliability of PEDGs.
|
2501.13475
|
LDR-Net: A Novel Framework for AI-generated Image Detection via
Localized Discrepancy Representation
|
cs.CV
|
With the rapid advancement of generative models, the visual quality of
generated images has become nearly indistinguishable from the real ones, posing
challenges to content authenticity verification. Existing methods for detecting
AI-generated images primarily focus on specific forgery clues, which are often
tailored to particular generative models like GANs or diffusion models. These
approaches struggle to generalize across architectures. Building on the
observation that generative images often exhibit local anomalies, such as
excessive smoothness, blurred textures, and unnatural pixel variations in small
regions, we propose the localized discrepancy representation network (LDR-Net),
a novel approach for detecting AI-generated images. LDR-Net captures smoothing
artifacts and texture irregularities, which are common but often overlooked. It
integrates two complementary modules: local gradient autocorrelation (LGA),
which models local smoothing anomalies, and local
variation pattern (LVP) which captures unnatural regularities by modeling the
complexity of image patterns. By merging LGA and LVP features, a comprehensive
representation of localized discrepancies can be provided. Extensive
experiments demonstrate that our LDR-Net achieves state-of-the-art performance
in detecting generated images and exhibits satisfactory generalization across
unseen generative models. The code will be released upon acceptance of this
paper.
|
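The intuition behind an LGA-style feature can be made concrete with a very loose sketch: over-smoothed patches have gradients that are strongly correlated with their neighbors, while natural texture does not. The code below is a hypothetical simplification, not the paper's module; `local_gradient_autocorrelation` and all parameters are illustrative.

```python
import numpy as np

def local_gradient_autocorrelation(img, patch=8, shifts=((0, 1), (1, 0))):
    """Patch-wise autocorrelation of gradient magnitudes at small shifts; a
    crude stand-in for an LGA-style smoothing-anomaly feature. Over-smoothed
    patches yield strongly correlated neighboring gradients."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)
    h, w = g.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = g[i:i + patch, j:j + patch]
            p = p - p.mean()
            denom = (p * p).sum() + 1e-8
            corr = [(p[dy:, dx:] * p[:patch - dy, :patch - dx]).sum() / denom
                    for dy, dx in shifts]
            feats.append(np.mean(corr))
    return np.array(feats)
```

A detector would feed such per-patch statistics, together with LVP-style features, into a classifier rather than thresholding them directly.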
2501.13479
|
Adaptive Few-Shot Learning (AFSL): Tackling Data Scarcity with
Stability, Robustness, and Versatility
|
cs.LG cs.AI
|
Few-shot learning (FSL) enables machine learning models to generalize
effectively with minimal labeled data, making it crucial for data-scarce
domains such as healthcare, robotics, and natural language processing. Despite
its potential, FSL faces challenges including sensitivity to initialization,
difficulty in adapting to diverse domains, and vulnerability to noisy datasets.
To address these issues, this paper introduces Adaptive Few-Shot Learning
(AFSL), a framework that integrates advancements in meta-learning, domain
alignment, noise resilience, and multi-modal integration. AFSL consists of four
key modules: a Dynamic Stability Module for performance consistency, a
Contextual Domain Alignment Module for domain adaptation, a Noise-Adaptive
Resilience Module for handling noisy data, and a Multi-Modal Fusion Module for
integrating diverse modalities. This work also explores strategies such as
task-aware data augmentation, semi-supervised learning, and explainable AI
techniques to enhance the applicability and robustness of FSL. AFSL provides
scalable, reliable, and impactful solutions for real-world, high-stakes
domains.
|
2501.13480
|
Adaptive Testing for LLM-Based Applications: A Diversity-based Approach
|
cs.SE cs.AI
|
The recent surge of building software systems powered by Large Language
Models (LLMs) has led to the development of various testing frameworks,
primarily focused on treating prompt templates as the unit of testing. Despite
the significant costs associated with test input execution and output
assessment, the curation of optimized test suites is still overlooked in these
tools, which calls for tailored test selection or prioritization strategies. In
this paper, we show that diversity-based testing techniques, such as Adaptive
Random Testing (ART) with appropriate string distance metrics, can be
effectively applied to the testing of prompt templates. Our proposed adaptive
testing approach adjusts the conventional ART process to this context by
selecting new test inputs based on scores derived from the existing test suite
and its labelling results. Our results, obtained using various implementations
that explore several string-based distances, confirm that our approach enables
the discovery of failures with reduced testing budgets and promotes the
generation of more varied outputs.
|
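The selection loop described here follows the classic fixed-size-candidate-set variant of Adaptive Random Testing: sample a handful of candidates and execute the one farthest, under a string distance, from every input already executed. The sketch below uses `difflib.SequenceMatcher` as one possible normalized string metric; all function names are illustrative, not the paper's tooling.

```python
import random
from difflib import SequenceMatcher

def string_distance(a: str, b: str) -> float:
    # 1 - similarity ratio; any normalized string metric would do here.
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def art_select(executed, candidates):
    """Fixed-size-candidate-set ART: pick the candidate whose minimum string
    distance to the already-executed inputs is largest."""
    if not executed:
        return random.choice(candidates)
    return max(candidates,
               key=lambda c: min(string_distance(c, e) for e in executed))

def art_test_loop(input_pool, run_test, budget=10, k=5, seed=0):
    random.seed(seed)
    pool = list(input_pool)
    executed, failures = [], []
    for _ in range(min(budget, len(pool))):
        candidates = random.sample(pool, min(k, len(pool)))
        nxt = art_select(executed, candidates)
        pool.remove(nxt)
        executed.append(nxt)
        if not run_test(nxt):  # run_test returns False on a failing output
            failures.append(nxt)
    return executed, failures
```

Because each new input is pushed away from its predecessors, the executed suite spreads over the input space, which is what drives the earlier failure discovery under a fixed budget.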
2501.13481
|
A Polynomial-Time Algorithm for EFX Orientations of Chores
|
cs.GT cs.AI cs.DM
|
This paper addresses the problem of finding EFX orientations of graphs of
chores, in which each vertex corresponds to an agent, each edge corresponds to
a chore, and a chore has zero marginal utility to an agent if its corresponding
edge is not incident to the vertex corresponding to the agent. Recently,
Zhou et al. (IJCAI, 2024) analyzed the complexity of deciding whether graphs
containing a mixture of goods and chores admit EFX orientations, and
conjectured that deciding whether graphs containing only chores admit EFX
orientations is NP-complete. In this paper, we resolve this conjecture by
exhibiting a polynomial-time algorithm that finds an EFX orientation of a graph
containing only chores if one exists, even if the graph contains self-loops.
Remarkably, our first result demonstrates a separation between the case of
goods and the case of chores, because deciding whether graphs containing only
goods admit EFX orientations was shown to be NP-complete by Christodoulou et
al. (EC, 2023). In addition, we show the
analogous decision problem for multigraphs to be NP-complete.
|
2501.13483
|
Robust Amortized Bayesian Inference with Self-Consistency Losses on
Unlabeled Data
|
stat.ML cs.LG
|
Neural amortized Bayesian inference (ABI) can solve probabilistic inverse
problems orders of magnitude faster than classical methods. However, neural ABI
is not yet sufficiently robust for widespread and safe applicability. In
particular, when performing inference on observations outside of the scope of
the simulated data seen during training, for example, because of model
misspecification, the posterior approximations are likely to become highly
biased. Due to the poor pre-asymptotic behavior of current neural posterior
estimators in the out-of-simulation regime, the resulting estimation biases
cannot be fixed in acceptable time by just simulating more training data. In
this proof-of-concept paper, we propose a semi-supervised approach that enables
training not only on (labeled) simulated data generated from the model, but
also on unlabeled data originating from any source, including real-world data.
To achieve the latter, we exploit Bayesian self-consistency properties that can
be transformed into strictly proper losses without requiring knowledge of true
parameter values, that is, without requiring data labels. The results of our
initial experiments show remarkable improvements in the robustness of ABI on
out-of-simulation data. Even if the observed data is far away from both labeled
and unlabeled training data, inference remains highly accurate. If our findings
also generalize to other scenarios and model classes, we believe that our new
method represents a major breakthrough in neural ABI.
|
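The Bayesian self-consistency property referenced in this abstract is that, for the exact posterior, $\log p(\theta) + \log p(y\mid\theta) - \log p(\theta\mid y) = \log p(y)$ is constant in $\theta$, so the variance of that quantity across posterior draws is a label-free loss. The following toy conjugate-Gaussian sketch illustrates the idea; it is not the paper's loss, and the model choice is an assumption.

```python
import numpy as np

def log_norm(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mu) ** 2 / var

def self_consistency_loss(y, q_mu, q_var, num_draws=100, seed=0):
    """Variance of log p(theta) + log p(y|theta) - log q(theta|y) across draws
    from q. For the exact posterior this equals log p(y) for every theta, so
    the variance is zero; a biased q makes it positive.

    Toy conjugate model: theta ~ N(0, 1), y | theta ~ N(theta, 1)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(q_mu, np.sqrt(q_var), size=num_draws)
    vals = (log_norm(theta, 0.0, 1.0)        # prior
            + log_norm(y, theta, 1.0)        # likelihood
            - log_norm(theta, q_mu, q_var))  # approximate posterior
    return vals.var()
```

Crucially, evaluating this loss needs only observations `y` and the simulator's prior and likelihood, never the true parameter values, which is what allows training on unlabeled real-world data.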
2501.13484
|
MambaQuant: Quantizing the Mamba Family with Variance Aligned Rotation
Methods
|
cs.LG cs.AI cs.CL
|
Mamba is an efficient sequence model that rivals Transformers and
demonstrates significant potential as a foundational architecture for various
tasks. Quantization is commonly used in neural networks to reduce model size
and computational latency. However, applying quantization to Mamba remains
underexplored, and existing quantization methods, which have been effective for
CNN and Transformer models, appear inadequate for Mamba models (e.g., Quarot
suffers a 21% accuracy drop on Vim-T$^\dagger$ even under W8A8). We have
pioneered the exploration of this issue and identified several key challenges.
First, significant outliers are present in gate projections, output
projections, and matrix multiplications. Second, Mamba's unique parallel scan
further amplifies these outliers, leading to uneven and heavy-tailed data
distributions. Third, even with the application of the Hadamard transform, the
variance across channels in weights and activations still remains inconsistent.
To these ends, we propose MambaQuant, a post-training quantization (PTQ)
framework consisting of: 1) Karhunen-Loeve Transformation (KLT) enhanced
rotation, rendering the rotation matrix adaptable to diverse channel
distributions. 2) Smooth-Fused rotation, which equalizes channel variances and
can merge additional parameters into model weights. Experiments show that
MambaQuant can quantize both weights and activations into 8-bit with less than
1% accuracy loss for Mamba-based vision and language tasks. To the best of our
knowledge, MambaQuant is the first comprehensive PTQ design for the Mamba
family, paving the way for further advancements in its application.
|
2501.13491
|
RECALL: Library-Like Behavior In Language Models is Enhanced by
Self-Referencing Causal Cycles
|
cs.CL cs.AI
|
We introduce the concept of the self-referencing causal cycle (abbreviated
RECALL) - a mechanism that enables large language models (LLMs) to bypass the
limitations of unidirectional causality, which underlies a phenomenon known as
the reversal curse. When an LLM is prompted with sequential data, it often
fails to recall preceding context. For example, when we ask an LLM to recall
the line preceding "O say does that star-spangled banner yet wave" in the U.S.
National Anthem, it often fails to correctly return "Gave proof through the
night that our flag was still there" - this is due to the reversal curse. It
occurs because language models such as ChatGPT and Llama generate text based on
preceding tokens, requiring facts to be learned and reproduced in a consistent
token order. While the reversal curse is often viewed as a limitation, we offer
evidence of an alternative view: it is not always an obstacle in practice. We
find that RECALL is driven by what we designate as cycle tokens - sequences
that connect different parts of the training data, enabling recall of preceding
tokens from succeeding ones. Through rigorous probabilistic formalization and
controlled experiments, we demonstrate how the cycles they induce influence a
model's ability to reproduce information. To facilitate reproducibility, we
provide our code and experimental details at
https://anonymous.4open.science/r/remember-B0B8/.
|
2501.13492
|
Quantized Spike-driven Transformer
|
cs.CV
|
Spiking neural networks are emerging as a promising energy-efficient
alternative to traditional artificial neural networks due to their spike-driven
paradigm. However, recent research in the SNN domain has mainly focused on
enhancing accuracy by designing large-scale Transformer structures, which
typically rely on substantial computational resources, limiting their
deployment on resource-constrained devices. To overcome this challenge, we
propose a quantized spike-driven Transformer baseline (QSD-Transformer), which
achieves reduced resource demands by utilizing a low bit-width parameter.
Regrettably, the QSD-Transformer often suffers from severe performance
degradation. In this paper, we first conduct empirical analysis and find that
the bimodal distribution of quantized spike-driven self-attention (Q-SDSA)
leads to spike information distortion (SID) during quantization, causing
significant performance degradation. To mitigate this issue, we take
inspiration from mutual information entropy and propose a bi-level optimization
strategy to rectify the information distribution in Q-SDSA. Specifically, at
the lower level, we introduce an information-enhanced LIF to rectify the
information distribution in Q-SDSA. At the upper level, we propose a
fine-grained distillation scheme for the QSD-Transformer to align the
distribution in Q-SDSA with that in the counterpart ANN. By integrating the
bi-level optimization strategy, the QSD-Transformer can attain enhanced energy
efficiency without sacrificing its high-performance advantage. For instance,
when compared to the prior SNN benchmark on ImageNet, the QSD-Transformer
achieves 80.3% top-1 accuracy, accompanied by significant reductions of
6.0$\times$ and 8.1$\times$ in power consumption and model size, respectively.
Code is available at https://github.com/bollossom/QSD-Transformer.
|
2501.13493
|
GCAD: Anomaly Detection in Multivariate Time Series from the Perspective
of Granger Causality
|
cs.LG cs.AI
|
Multivariate time series anomaly detection has numerous real-world
applications and is being extensively studied. Modeling pairwise correlations
between variables is crucial. Existing methods employ learnable graph
structures and graph neural networks to explicitly model the spatial
dependencies between variables. However, these methods are primarily based on
prediction or reconstruction tasks, which can only learn similarity
relationships between sequence embeddings and lack interpretability in how
graph structures affect time series evolution. In this paper, we designed a
framework that models spatial dependencies using interpretable causal
relationships and detects anomalies through changes in causal patterns.
Specifically, we propose a method to dynamically discover Granger causality
using gradients in nonlinear deep predictors and employ a simple sparsification
strategy to obtain a Granger causality graph, detecting anomalies from a causal
perspective. Experiments on real-world datasets demonstrate that the proposed
model achieves more accurate anomaly detection compared to baseline methods.
|
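The pipeline this abstract describes, estimate variable-to-variable influence from a predictor, sparsify it into a Granger causality graph, and score anomalies by changes in that graph, can be sketched with a linear VAR(1) fit standing in for gradient-based attribution in the nonlinear deep predictor. All names and the quantile sparsification rule are illustrative assumptions.

```python
import numpy as np

def granger_matrix(x, lam=1e-3):
    """Estimate a (d x d) influence matrix from a ridge VAR(1) fit: entry
    [i, j] ~ how strongly variable j's past predicts variable i. A stand-in
    for gradient magnitudes of a nonlinear deep predictor."""
    past, future = x[:-1], x[1:]
    d = x.shape[1]
    a = np.linalg.solve(past.T @ past + lam * np.eye(d), past.T @ future).T
    return np.abs(a)

def sparsify(g, keep=0.2):
    # Keep only the strongest `keep` fraction of edges as the causal graph.
    thresh = np.quantile(g, 1.0 - keep)
    return (g >= thresh).astype(int)

def causal_anomaly_score(ref_window, test_window, keep=0.2):
    # Anomaly score = fraction of edges whose presence changed between windows.
    g_ref = sparsify(granger_matrix(ref_window), keep)
    g_test = sparsify(granger_matrix(test_window), keep)
    return np.mean(g_ref != g_test)
```

Scoring structural change in the causal graph, rather than raw prediction error, is what gives this family of detectors its interpretability: a flagged window comes with the specific edges that appeared or vanished.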
2501.13497
|
DQ-Data2vec: Decoupling Quantization for Multilingual Speech Recognition
|
cs.SD cs.CL eess.AS
|
Data2vec is a self-supervised learning (SSL) approach that employs a
teacher-student architecture for contextual representation learning via masked
prediction, demonstrating remarkable performance in monolingual ASR. Previous
studies have revealed that data2vec's shallow layers capture speaker and
language information, middle layers encode phoneme and word features, while
deep layers are responsible for reconstruction. Language and phoneme features
are crucial for multilingual ASR. However, data2vec's masked representation
generation relies on multi-layer averaging, inevitably coupling these features.
To address this limitation, we propose a decoupling quantization based data2vec
(DQ-Data2vec) for multilingual ASR, which includes a data2vec backbone and two
improved online K-means quantizers. Our core idea is using the K-means
quantizer with specified cluster numbers to decouple language and phoneme
information for masked prediction. Specifically, in the language quantization,
considering that the number of languages differs significantly from the counts
of other, irrelevant features (e.g., speakers), we assign the cluster number to
match the number of languages, explicitly decoupling the shallow layers' language-related
information from irrelevant features. This strategy is also applied to
decoupling middle layers' phoneme and word features. In a self-supervised
scenario, experiments on the CommonVoice dataset demonstrate that DQ-Data2vec
achieves a relative reduction of 9.51% in phoneme error rate (PER) and 11.58%
in word error rate (WER) compared to data2vec and UniData2vec. Moreover, in a
weakly-supervised scenario incorporating language labels and high-resource
language text labels, the relative reduction is 18.09% and 1.55%, respectively.
|
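The core mechanism, an online K-means quantizer whose cluster count is pinned to the number of languages (or phonemes), can be sketched minimally as below. This is a generic running-mean K-means update, not the DQ-Data2vec quantizer itself; the class name and interface are assumptions.

```python
import numpy as np

class OnlineKMeansQuantizer:
    """Online K-means vector quantizer with a fixed cluster count, e.g. set to
    the number of target languages so that cluster ids act as decoupled,
    language-like discrete targets for masked prediction."""
    def __init__(self, num_clusters, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(num_clusters, dim))
        self.counts = np.zeros(num_clusters)

    def quantize(self, frames, update=True):
        # frames: (T, dim) layer representations; returns (T,) cluster ids.
        dists = np.linalg.norm(frames[:, None, :] - self.centroids[None], axis=-1)
        ids = dists.argmin(axis=1)
        if update:
            # Running-mean centroid update (online K-means).
            for f, i in zip(frames, ids):
                self.counts[i] += 1
                self.centroids[i] += (f - self.centroids[i]) / self.counts[i]
        return ids
```

Fixing `num_clusters` to the number of languages is the decoupling device: the quantizer cannot spend clusters on speaker-like nuisance variation, so the resulting ids separate the language factor from the rest of the representation.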
2501.13503
|
Benchmark Study of Transient Stability during Power-Hardware-in-the-Loop
and Fault-Ride-Through capabilities of PV inverters
|
eess.SY cs.SY
|
The deployment of PV inverters is rapidly expanding across Europe, where
these devices must increasingly comply with stringent grid requirements. This
study presents a benchmark analysis of four PV inverter manufacturers, focusing
on their Fault-Ride-Through capabilities under varying grid strengths, voltage
dips, and fault durations, which are parameters critical for grid operators
during fault conditions. The findings highlight the influence of different
inverter controls on key metrics such as total harmonic distortion of current
and voltage signals, as well as system stability following grid faults.
Additionally, the study evaluates transient stability using two distinct
testing approaches. The first approach employs the current standard method,
which is testing with an ideal voltage source. The second utilizes a
Power-Hardware-in-the-Loop methodology with a benchmark CIGRE grid model. The
results reveal that while testing with an ideal voltage source is
cost-effective and convenient in the short term, it lacks the ability to
capture the dynamic interactions and feedback loops of physical grid
components. This limitation can obscure critical real-world factors,
potentially leading to unexpected inverter behavior and operational challenges
in grids with high PV penetration. This study underscores the importance of
re-evaluating conventional testing methods and incorporating
Power-Hardware-in-the-Loop structures to achieve test results that more closely
align with real-world conditions.
|
2501.13504
|
Continuous signal sparse encoding using analog neuromorphic variability
|
cs.NE
|
Achieving fast and reliable temporal signal encoding is crucial for
low-power, always-on systems. While current spike-based encoding algorithms
rely on complex networks or precise timing references, simple and robust
encoding models can be obtained by leveraging the intrinsic properties of
analog hardware substrates. We propose an encoding framework inspired by
biological principles that leverages intrinsic neuronal variability to robustly
encode continuous stimuli into spatio-temporal patterns, using at most one
spike per neuron. The encoder has low model complexity, relying on a shallow
network of heterogeneous neurons. It relies on an internal time reference,
allowing for continuous processing. Moreover, stimulus parameters can be
linearly decoded from the spiking patterns, granting fast information
retrieval. Our approach, validated on both analog neuromorphic hardware and
simulation, demonstrates high robustness to noise, spike jitter, and reduced
heterogeneity. Consistent with biological observations, we observed the
spontaneous emergence of patterns with stereotyped spiking order. The proposed
encoding scheme facilitates fast, robust and continuous information processing,
making it well-suited for low-power, low-latency processing of temporal data on
analog neuromorphic substrates.
|
2501.13507
|
Iterative Shaping of Multi-Particle Aggregates based on Action Trees and
VLM
|
cs.RO
|
In this paper, we address the problem of manipulating multi-particle
aggregates using a bimanual robotic system. Our approach enables the autonomous
transport of dispersed particles through a series of shaping and pushing
actions using robotically-controlled tools. Achieving this advanced
manipulation capability presents two key challenges: high-level task planning
and trajectory execution. For task planning, we leverage Vision Language Models
(VLMs) to enable primitive actions such as tool affordance grasping and
non-prehensile particle pushing. For trajectory execution, we represent the
evolving particle aggregate's contour using truncated Fourier series, providing
efficient parametrization of its closed shape. We adaptively compute trajectory
waypoints based on group cohesion and the geometric centroid of the aggregate,
accounting for its spatial distribution and collective motion. Through
real-world experiments, we demonstrate the effectiveness of our methodology in
actively shaping and manipulating multi-particle aggregates while maintaining
high system cohesion.
|
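The truncated-Fourier-series contour representation mentioned here is the classic complex Fourier descriptor of a closed curve: encode boundary points as $x + iy$, keep only the low-frequency coefficients, and evaluate the series to get waypoints. The sketch below is illustrative; function names and the harmonic count are assumptions, not the paper's code.

```python
import numpy as np

def fourier_contour(points, num_harmonics=8):
    """Fit a truncated Fourier series to a closed 2-D contour given as an
    (N, 2) array of boundary points; returns the complex coefficient vector
    with all but the lowest `num_harmonics` frequencies zeroed."""
    z = points[:, 0] + 1j * points[:, 1]        # encode (x, y) as x + iy
    coeffs = np.fft.fft(z) / len(z)
    keep = np.zeros_like(coeffs)
    keep[:num_harmonics + 1] = coeffs[:num_harmonics + 1]  # low positive freqs
    keep[-num_harmonics:] = coeffs[-num_harmonics:]        # low negative freqs
    return keep

def contour_waypoints(coeffs, num_points=64):
    # Evaluate the truncated series at num_points phases along the contour.
    n = len(coeffs)
    k = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies
    t = np.linspace(0.0, 1.0, num_points, endpoint=False)
    z = (coeffs[None, :] * np.exp(2j * np.pi * t[:, None] * k[None, :])).sum(axis=1)
    return np.column_stack([z.real, z.imag])

def centroid(points):
    # Geometric centroid used as the aggregate's spatial reference.
    return points.mean(axis=0)
```

The appeal of the representation is its compactness: a handful of coefficients smoothly parametrize the aggregate's closed shape, and waypoints at any density fall out of a single series evaluation.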
2501.13514
|
Self-Supervised Diffusion MRI Denoising via Iterative and Stable
Refinement
|
eess.IV cs.CV
|
Magnetic Resonance Imaging (MRI), including diffusion MRI (dMRI), serves as a
"microscope" for anatomical structures and routinely mitigates the influence
of low signal-to-noise ratio scans by compromising temporal or spatial
resolution. However, these compromises fail to meet clinical demands for both
efficiency and precision. Consequently, denoising is a vital preprocessing
step, particularly for dMRI, where clean data is unavailable. In this paper, we
introduce Di-Fusion, a fully self-supervised denoising method that leverages
the latter diffusion steps and an adaptive sampling process. Unlike previous
approaches, our single-stage framework achieves efficient and stable training
without extra noise model training and offers adaptive and controllable results
in the sampling process. Our thorough experiments on real and simulated data
demonstrate that Di-Fusion achieves state-of-the-art performance in
microstructure modeling, tractography tracking, and other downstream tasks.
|
2501.13516
|
Communication-Efficient Stochastic Distributed Learning
|
cs.LG cs.SY eess.SY math.OC
|
We address distributed learning problems, both nonconvex and convex, over
undirected networks. In particular, we design a novel algorithm based on the
distributed Alternating Direction Method of Multipliers (ADMM) to address the
challenges of high communication costs and large datasets. Our design tackles
these challenges (i) by enabling the agents to perform multiple local training
steps between communication rounds; and (ii) by allowing the agents to
employ stochastic gradients while carrying out local computations. We show that
the proposed algorithm converges to a neighborhood of a stationary point, for
nonconvex problems, and of an optimal point, for convex problems. We also
propose a variant of the algorithm to incorporate variance reduction thus
achieving exact convergence. We show that the resulting algorithm indeed
converges to a stationary (or optimal) point, and moreover that local training
accelerates convergence. We thoroughly compare the proposed algorithms with the
state of the art, both theoretically and through numerical results.
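The local-training idea in (i) above — several gradient steps between communication rounds — can be sketched with a simple gossip scheme. This is a decentralized-gradient stand-in for illustration only, not the paper's ADMM-based algorithm; all names are ours.

```python
def decentralized_local_training(grads, x0, neighbors, rounds, local_steps, lr):
    """Each agent i takes `local_steps` gradient steps on its own objective,
    then averages its iterate with its graph neighbors (self included).
    Illustrates local training between communication rounds; the paper's
    method is ADMM-based and also supports stochastic gradients.
    """
    x = list(x0)
    for _ in range(rounds):
        for _ in range(local_steps):                     # local computation
            x = [xi - lr * g(xi) for g, xi in zip(grads, x)]
        x = [sum(x[j] for j in nbrs) / len(nbrs)         # one communication
             for nbrs in neighbors]
    return x
```

For scalar quadratics f_i(x) = (x - a_i)^2 / 2 on a complete graph, all agents converge to the minimizer of the sum, i.e., the mean of the a_i.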
|
2501.13517
|
Propensity-driven Uncertainty Learning for Sample Exploration in
Source-Free Active Domain Adaptation
|
cs.CV cs.LG
|
Source-free active domain adaptation (SFADA) addresses the challenge of
adapting a pre-trained model to new domains without access to source data while
minimizing the need for target domain annotations. This scenario is
particularly relevant in real-world applications where data privacy, storage
limitations, or labeling costs are significant concerns. Key challenges in
SFADA include selecting the most informative samples from the target domain for
labeling, effectively leveraging both labeled and unlabeled target data, and
adapting the model without relying on source domain information. Additionally,
existing methods often struggle with noisy or outlier samples and may require
impractical progressive labeling during training. To effectively select more
informative samples without frequently requesting human annotations, we propose
the Propensity-driven Uncertainty Learning (ProULearn) framework. ProULearn
utilizes a novel homogeneity propensity estimation mechanism combined with
correlation index calculation to evaluate feature-level relationships. This
approach enables the identification of representative and challenging samples
while avoiding noisy outliers. Additionally, we develop a central correlation
loss to refine pseudo-labels and create compact class distributions during
adaptation. In this way, ProULearn effectively bridges the domain gap and
maximizes adaptation performance. The principles of informative sample
selection underlying ProULearn have broad implications beyond SFADA, offering
benefits across various deep learning tasks where identifying key data points
or features is crucial. Extensive experiments on four benchmark datasets
demonstrate that ProULearn outperforms state-of-the-art methods in domain
adaptation scenarios.
|
2501.13518
|
Text-driven Online Action Detection
|
cs.CV
|
Detecting actions as they occur is essential for applications like video
surveillance, autonomous driving, and human-robot interaction. Known as online
action detection, this task requires classifying actions in streaming videos,
handling background noise, and coping with incomplete actions. Transformer
architectures are the current state-of-the-art, yet the potential of recent
advancements in computer vision, particularly vision-language models (VLMs),
remains largely untapped for this problem, partly due to high computational
costs. In this paper, we introduce TOAD: a Text-driven Online Action Detection
architecture that supports zero-shot and few-shot learning. TOAD leverages CLIP
(Contrastive Language-Image Pretraining) textual embeddings, enabling efficient
use of VLMs without significant computational overhead. Our model achieves
82.46% mAP on the THUMOS14 dataset, outperforming existing methods, and sets
new baselines for zero-shot and few-shot performance on the THUMOS14 and
TVSeries datasets.
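The CLIP-style use of textual embeddings mentioned above amounts to scoring each action class by similarity between a visual embedding and the class's text embedding. The sketch below is a generic zero-shot scoring rule, not TOAD's actual architecture; the embeddings are placeholders.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def zero_shot_scores(frame_embedding, class_text_embeddings):
    """Score each action class by cosine similarity between a frame
    embedding and that class's text embedding (CLIP-style zero-shot
    classification; an illustrative sketch, not TOAD itself)."""
    return {name: cosine(frame_embedding, emb)
            for name, emb in class_text_embeddings.items()}
```

Because the text embeddings are computed once per class label, this kind of scoring adds almost no per-frame overhead, which is the efficiency argument the abstract makes for text-driven detection.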
|
2501.13528
|
Diffusion-based Perceptual Neural Video Compression with Temporal
Diffusion Information Reuse
|
cs.CV cs.LG eess.IV
|
Recently, foundational diffusion models have attracted considerable attention
in image compression tasks, whereas their application to video compression
remains largely unexplored. In this article, we introduce DiffVC, a
diffusion-based perceptual neural video compression framework that effectively
integrates a foundational diffusion model with the video conditional coding
paradigm. This framework uses temporal context from the previously decoded
frame and the reconstructed latent representation of the current frame to guide
the diffusion model in generating high-quality results. To accelerate the
iterative inference process of the diffusion model, we propose the Temporal Diffusion
Information Reuse (TDIR) strategy, which significantly enhances inference
efficiency with minimal performance loss by reusing the diffusion information
from previous frames. Additionally, to address the challenges posed by
distortion differences across various bitrates, we propose the Quantization
Parameter-based Prompting (QPP) mechanism, which utilizes quantization
parameters as prompts fed into the foundational diffusion model to explicitly
modulate intermediate features, thereby enabling a robust variable bitrate
diffusion-based neural compression framework. Experimental results demonstrate
that our proposed solution delivers excellent performance in both perception
metrics and visual quality.
|
2501.13529
|
Overcoming Support Dilution for Robust Few-shot Semantic Segmentation
|
cs.CV cs.LG
|
Few-shot Semantic Segmentation (FSS) is a challenging task that utilizes
limited support images to segment associated unseen objects in query images.
However, recent FSS methods are observed to perform worse when the number of
shots is enlarged. As the support set grows, existing FSS networks struggle to
concentrate on the high-contributing supports and are easily overwhelmed by
low-contributing supports that can severely impair the mask predictions.
In this work, we study this challenging issue, called support dilution; our
goal is to recognize, select, preserve, and enhance the high-contributing
supports in the raw support pool. Technically, our method contains three novel
parts. First, we propose a contribution index to quantitatively estimate
whether a high-contributing support is diluted. Second, we develop the
Symmetric Correlation (SC) module to preserve and enhance the high-contributing
support features, minimizing distraction by the low-contributing ones. Third,
we design the Support Image Pruning operation to retrieve a compact,
high-quality subset by discarding low-contributing supports. We conduct
extensive experiments on two FSS benchmarks, COCO-20i and PASCAL-5i; the
segmentation results demonstrate the compelling performance of our solution
over state-of-the-art FSS approaches. We further apply our solution to online
and real-world segmentation, where convincing results show its practical
ability in real-world scenarios.
|
2501.13533
|
Towards a Theory of AI Personhood
|
cs.AI cs.LG
|
I am a person and so are you. Philosophically we sometimes grant personhood
to non-human animals, and entities such as sovereign states or corporations can
legally be considered persons. But when, if ever, should we ascribe personhood
to AI systems? In this paper, we outline necessary conditions for AI
personhood, focusing on agency, theory-of-mind, and self-awareness. We discuss
evidence from the machine learning literature regarding the extent to which
contemporary AI systems, such as language models, satisfy these conditions,
finding the evidence surprisingly inconclusive.
If AI systems can be considered persons, then typical framings of AI
alignment may be incomplete. Whereas agency has been discussed at length in the
literature, other aspects of personhood have been relatively neglected. AI
agents are often assumed to pursue fixed goals, but AI persons may be
self-aware enough to reflect on their aims, values, and positions in the world
and thereby induce their goals to change. We highlight open research directions
to advance the understanding of AI personhood and its relevance to alignment.
Finally, we reflect on the ethical considerations surrounding the treatment of
AI systems. If AI systems are persons, then seeking control and alignment may
be ethically untenable.
|
2501.13534
|
A New Construction of Non-Binary Deletion Correcting Codes and their
Decoding
|
cs.IT math.CO math.IT
|
Non-binary codes correcting multiple deletions have recently attracted a lot
of attention. In this work, we focus on multiplicity-free codes, a family of
non-binary codes where all symbols are distinct. Our main contribution is a new
explicit construction of such codes, based on set and permutation codes. We
show that our multiplicity-free codes can correct multiple deletions and
provide a decoding algorithm. We also show that, for a certain regime of
parameters, our constructed codes have size larger than all the previously
known non-binary codes correcting multiple deletions.
|
2501.13535
|
LITE: Efficiently Estimating Gaussian Probability of Maximality
|
stat.ML cs.LG stat.CO stat.ME
|
We consider the problem of computing the probability of maximality (PoM) of a
Gaussian random vector, i.e., the probability for each dimension to be maximal.
This is a key challenge in applications ranging from Bayesian optimization to
reinforcement learning, where the PoM not only helps with finding an optimal
action but also yields a fine-grained analysis of the action domain, crucial in
tasks such as drug discovery. Existing techniques are costly, scaling
polynomially in computation and memory with the vector size. We introduce LITE,
the first approach for estimating Gaussian PoM with almost-linear time and
memory complexity. LITE achieves SOTA accuracy on a number of tasks, while
being in practice several orders of magnitude faster than the baselines. This
also translates to a better performance on downstream tasks such as entropy
estimation and optimal control of bandits. Theoretically, we cast LITE as
entropy-regularized UCB and connect it to prior PoM estimators.
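For intuition, the probability of maximality can be estimated by naive Monte Carlo, the polynomially scaling baseline that methods like LITE aim to beat. The sketch below handles the independent-Gaussian case only and is NOT the paper's algorithm; names and defaults are ours.

```python
import random

def pom_monte_carlo(means, stds, n_samples=10000, seed=0):
    """Naive Monte Carlo estimate of the probability of maximality (PoM)
    for an independent Gaussian vector: P(X_i = max_j X_j) per dimension.
    This is the costly baseline; LITE replaces it with an almost-linear
    time and memory estimator."""
    rng = random.Random(seed)
    wins = [0] * len(means)
    for _ in range(n_samples):
        x = [rng.gauss(m, s) for m, s in zip(means, stds)]
        wins[x.index(max(x))] += 1
    return [w / n_samples for w in wins]
```

Note the estimates form a probability vector over dimensions (they sum to one), which is what makes PoM useful for fine-grained analysis of an action domain.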
|
2501.13536
|
ReasVQA: Advancing VideoQA with Imperfect Reasoning Process
|
cs.CV cs.CL
|
Video Question Answering (VideoQA) is a challenging task that requires
understanding complex visual and temporal relationships within videos to answer
questions accurately. In this work, we introduce \textbf{ReasVQA}
(Reasoning-enhanced Video Question Answering), a novel approach that leverages
reasoning processes generated by Multimodal Large Language Models (MLLMs) to
improve the performance of VideoQA models. Our approach consists of three
phases: reasoning generation, reasoning refinement, and learning from
reasoning. First, we generate detailed reasoning processes using additional
MLLMs, and second refine them via a filtering step to ensure data quality.
Finally, we use the reasoning data, which might be in an imperfect form, to
guide the VideoQA model via multi-task learning, on how to interpret and answer
questions based on a given video. We evaluate ReasVQA on three popular
benchmarks, and our results establish new state-of-the-art performance with
significant improvements of +2.9 on NExT-QA, +7.3 on STAR, and +5.9 on
IntentQA. Our findings demonstrate the benefits of supervising VideoQA models
with reasoning processes. Further studies validate each component of our method
across different backbones and MLLMs, again highlighting the advantages of this
simple but effective approach. We offer a new perspective on
enhancing VideoQA performance by utilizing advanced reasoning techniques,
setting a new benchmark in this research field.
|
2501.13545
|
LLMs Can Plan Only If We Tell Them
|
cs.CL cs.AI
|
Large language models (LLMs) have demonstrated significant capabilities in
natural language processing and reasoning, yet their effectiveness in
autonomous planning has been under debate. While existing studies have utilized
LLMs with external feedback mechanisms or in controlled environments for
planning, these approaches often involve substantial computational and
development resources due to the requirement for careful design and iterative
backprompting. Moreover, even the most advanced LLMs like GPT-4 struggle to
match human performance on standard planning benchmarks, such as the
Blocksworld, without additional support. This paper investigates whether LLMs
can independently generate long-horizon plans that rival human baselines. Our
novel enhancements to Algorithm-of-Thoughts (AoT), which we dub AoT+, help
achieve state-of-the-art results on planning benchmarks, out-competing prior
methods and human baselines, all autonomously.
|
2501.13551
|
Minimizing Queue Length Regret for Arbitrarily Varying Channels
|
cs.IT cs.LG cs.NI math.IT
|
We consider an online channel scheduling problem for a single
transmitter-receiver pair equipped with $N$ arbitrarily varying wireless
channels. The transmission rates of the channels might be non-stationary and
could be controlled by an oblivious adversary. At every slot, incoming data
arrives at an infinite-capacity data queue located at the transmitter. A
scheduler, which is oblivious to the current channel rates, selects one of the
$N$ channels for transmission. At the end of the slot, the scheduler only gets
to know the transmission rate of the selected channel. The objective is to
minimize the queue length regret, defined as the difference between the queue
length at some time $T$ achieved by an online policy and the queue length
obtained by always transmitting over the single best channel in hindsight. We
propose a weakly adaptive Multi-Armed Bandit (MAB) algorithm for minimizing the
queue length regret in this setup. Unlike previous works, we do not make any
stability assumptions about the queue or the arrival process. Hence, our result
holds even when the queueing process is unstable. Our main observation is that
the queue length regret can be upper bounded by the regret of a MAB policy that
competes against the best channel in hindsight uniformly over all sub-intervals
of $[T]$. As a technical contribution of independent interest, we then propose
a weakly adaptive adversarial MAB policy which achieves
$\tilde{O}(\sqrt{N}T^{\frac{3}{4}})$ regret with high probability, implying the
same bound for queue length regret.
|
2501.13552
|
Explainable AI-aided Feature Selection and Model Reduction for DRL-based
V2X Resource Allocation
|
eess.SP cs.AI cs.LG cs.MA
|
Artificial intelligence (AI) is expected to significantly enhance radio
resource management (RRM) in sixth-generation (6G) networks. However, the lack
of explainability in complex deep learning (DL) models poses a challenge for
practical implementation. This paper proposes a novel explainable AI (XAI)-
based framework for feature selection and model complexity reduction in a
model-agnostic manner. Applied to a multi-agent deep reinforcement learning
(MADRL) setting, our approach addresses the joint sub-band assignment and power
allocation problem in cellular vehicle-to-everything (V2X) communications. We
propose a novel two-stage systematic explainability framework leveraging
feature relevance-oriented XAI to simplify the DRL agents. While the former
stage generates a state feature importance ranking of the trained models using
Shapley additive explanations (SHAP)-based importance scores, the latter stage
exploits these importance-based rankings to simplify the state space of the
agents by removing the least important features from the model input.
Simulation results demonstrate that the XAI-assisted methodology achieves 97%
of the original MADRL sum-rate performance while reducing optimal state
features by 28%, average training time by 11%, and trainable weight parameters
by 46% in a network with eight vehicular pairs.
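The two-stage idea above — rank state features by importance, then prune the least important from the agents' input — can be sketched model-agnostically. The paper uses SHAP-based scores; the stand-in below uses permutation importance with a deterministic cyclic shift, and all names are illustrative.

```python
def rank_features(model, X, y, metric):
    """Rank input features by permutation importance: displace one feature
    column at a time (here via a deterministic cyclic shift) and measure the
    performance drop. A lightweight, model-agnostic stand-in for the
    SHAP-based importance scores used in the paper.
    """
    base = metric(model, X, y)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        shifted = col[1:] + col[:1]          # break the feature/target link
        X_perm = [row[:j] + [shifted[i]] + row[j + 1:]
                  for i, row in enumerate(X)]
        scores.append((base - metric(model, X_perm, y), j))
    return [j for _, j in sorted(scores, reverse=True)]  # most important first
```

Dropping the tail of this ranking shrinks the state space, which is what yields the reported savings in training time and trainable parameters.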
|
2501.13554
|
One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation
Using a Single Prompt
|
cs.CV cs.AI cs.LG
|
Text-to-image generation models can create high-quality images from input
prompts. However, they struggle to maintain consistent, identity-preserving
generation for storytelling. Existing approaches to this
problem typically require extensive training on large datasets or additional
modifications to the original model architectures. This limits their
applicability across different domains and diverse diffusion model
configurations. In this paper, we first observe the inherent capability of
language models, coined context consistency, to comprehend identity through
context with a single prompt. Drawing inspiration from the inherent context
consistency, we propose a novel training-free method for consistent
text-to-image (T2I) generation, termed "One-Prompt-One-Story" (1Prompt1Story).
Our approach 1Prompt1Story concatenates all prompts into a single input for T2I
diffusion models, initially preserving character identities. We then refine the
generation process using two novel techniques: Singular-Value Reweighting and
Identity-Preserving Cross-Attention, ensuring better alignment with the input
description for each frame. In our experiments, we compare our method against
various existing consistent T2I generation approaches to demonstrate its
effectiveness through quantitative metrics and qualitative assessments. Code is
available at https://github.com/byliutao/1Prompt1Story.
|
2501.13555
|
Instantaneous Core Loss -- Cycle-by-cycle Modeling of Power Magnetics in
PWM DC-AC Converters
|
eess.SY cs.SY
|
Nowadays, PWM excitation is one of the most common waveforms seen by magnetic
components in power electronic converters. Core loss modeling approaches such
as the improved Generalized Steinmetz Equation (iGSE) or the loss map based on
the composite waveform hypothesis (CWH) generally process the PWM excitation
piecewise, which has been proven effective for DC-DC converters. However, an
additional challenge arises in DC-AC converters, i.e., the fundamental-frequency
sinewave component induces the major loop loss on top of the piecewise
high-frequency segments. This major loop loss cannot be modeled on a switching
cycle basis by any existing methods. To address this gap, this paper proposes a
novel fundamental concept, instantaneous core loss, which is observed
empirically for the first time. Enabled by the reactive voltage cancellation
method, the instantaneous core loss, which only contains the real power loss,
can be measured as a function of time. Building on this concept, a modeling
method is proposed to break down the major loop core loss, typically
represented as an average value in the literature, into the time domain,
enabling cycle-by-cycle modeling as a practical workflow for predicting core
losses in PWM converters. Through experiments, the existence of the major loop
loss is verified, and generic instantaneous core loss models are extracted for
several magnetic components. An example workflow is proposed to extract the
cycle-by-cycle core loss at the design stage of a PWM DC-AC converter. This
work enhances the fundamental understanding of the core loss process in real
time, contributing to the advancement of scientific knowledge.
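The piecewise processing that the abstract attributes to the iGSE can be made concrete. The sketch below implements the standard iGSE (average loss of a piecewise-linear flux waveform from Steinmetz parameters k, alpha, beta), i.e., the established baseline the paper builds on, not its new instantaneous-loss model; by construction it reduces to the classical Steinmetz equation P = k f^alpha B_pk^beta for sinusoidal excitation.

```python
import math

def igse_ki(k, alpha, beta, n=10000):
    """Numerically evaluate the iGSE coefficient k_i from the Steinmetz
    parameters, k_i = k / ((2*pi)^(alpha-1) * Int_0^{2pi} |cos t|^alpha
    * 2^(beta-alpha) dt)."""
    dt = 2 * math.pi / n
    integral = sum(abs(math.cos(i * dt)) ** alpha * 2 ** (beta - alpha)
                   for i in range(n)) * dt
    return k / ((2 * math.pi) ** (alpha - 1) * integral)

def igse_loss(times, B, k, alpha, beta):
    """Per-unit-volume core loss of a piecewise-linear flux waveform B(t)
    over one period, processed segment by segment as the iGSE prescribes."""
    ki = igse_ki(k, alpha, beta)
    T = times[-1] - times[0]
    dB_pkpk = max(B) - min(B)                 # peak-to-peak flux excursion
    loss = 0.0
    for i in range(len(B) - 1):
        dt = times[i + 1] - times[i]
        dBdt = (B[i + 1] - B[i]) / dt
        loss += ki * abs(dBdt) ** alpha * dB_pkpk ** (beta - alpha) * dt
    return loss / T
```

Note this returns only a period-averaged value; distributing the major-loop contribution over time, cycle by cycle, is exactly the gap the proposed instantaneous core loss concept addresses.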
|
2501.13558
|
GoDe: Gaussians on Demand for Progressive Level of Detail and Scalable
Compression
|
cs.CV
|
3D Gaussian Splatting enhances real-time performance in novel view synthesis
by representing scenes with mixtures of Gaussians and utilizing differentiable
rasterization. However, it typically requires large storage capacity and high
VRAM, demanding the design of effective pruning and compression techniques.
Existing methods, while effective in some scenarios, struggle with scalability
and fail to adapt models to critical factors such as computing capabilities or
bandwidth, requiring the model to be re-trained for each configuration. In this
work, we propose a novel, model-agnostic technique that
organizes Gaussians into several hierarchical layers, enabling a progressive
Level of Detail (LoD) strategy. This method, combined with a recent 3DGS
compression approach, allows a single model to instantly scale across several
compression ratios, with minimal to no impact on quality compared to a single
non-scalable model and without requiring re-training. We validate our approach
on typical datasets and benchmarks, showcasing low distortion and substantial
gains in terms of scalability and adaptability.
|
2501.13561
|
TROPIC - Trustworthiness Rating of Online Publishers through online
Interactions Calculation
|
cs.SI
|
Existing methods for assessing the trustworthiness of news publishers face
high costs and scalability issues. The tool presented in this paper supports
the efforts of specialized organizations by providing a solution that, starting
from an online discussion, delivers (i) trustworthiness ratings for previously
unclassified news publishers and (ii) an interactive platform to guide
annotation efforts and improve the robustness of the ratings. The system
implements a novel framework for assessing the trustworthiness of online news
publishers based on user interactions on social media platforms.
|
2501.13563
|
Black-Box Adversarial Attack on Vision Language Models for Autonomous
Driving
|
cs.CV cs.AI
|
Vision-language models (VLMs) have significantly advanced autonomous driving
(AD) by enhancing reasoning capabilities; however, these models remain highly
susceptible to adversarial attacks. While existing research has explored
white-box attacks to some extent, the more practical and challenging black-box
scenarios remain largely underexplored due to their inherent difficulty. In
this paper, we take the first step toward designing black-box adversarial
attacks specifically targeting VLMs in AD. We identify two key challenges for
achieving effective black-box attacks in this context: the effectiveness across
driving reasoning chains in AD systems and the dynamic nature of driving
scenarios. To address this, we propose Cascading Adversarial Disruption (CAD).
It first introduces Decision Chain Disruption, which targets low-level
reasoning breakdown by generating and injecting deceptive semantics, ensuring
the perturbations remain effective across the entire decision-making chain.
Building on this, we present Risky Scene Induction, which addresses dynamic
adaptation by leveraging a surrogate VLM to understand and construct high-level
risky scenarios that are likely to result in critical errors in the current
driving contexts. Extensive experiments conducted on multiple AD VLMs and
benchmarks demonstrate that CAD achieves state-of-the-art attack effectiveness,
significantly outperforming existing methods (+13.43% on average). Moreover, we
validate its practical applicability through real-world attacks on AD vehicles
powered by VLMs, where the route completion rate drops by 61.11% and the
vehicle crashes directly into the obstacle vehicle with adversarial patches.
Finally, we release CADA dataset, comprising 18,808 adversarial
visual-question-answer pairs, to facilitate further evaluation and research in
this critical domain. Our code and dataset will be made available upon the
paper's acceptance.
|
2501.13564
|
ARCADE: An interactive playground for real-time immersed topology
optimization
|
cs.CE
|
Topology optimization (TO) has found applications across a wide range of
disciplines but remains underutilized in practice. Key barriers to broader
adoption include the absence of versatile commercial software, the need for
specialized expertise, and high computational demands. Additionally, challenges
such as ensuring manufacturability, optimizing hyper-parameters, and
integrating subjective design elements like aesthetics further hinder its
widespread use.
Emerging technologies like augmented reality (AR) and virtual reality (VR)
offer transformative potential for TO. By enabling intuitive, gesture-based
human-computer interactions, these immersive tools bridge the gap between human
intuition and computational processes. They provide the means to integrate
subjective human judgment into optimization workflows in real time, creating a
paradigm shift toward interactive and immersive design.
Here we introduce the concept of immersive topology optimization (ITO) as a
novel design paradigm that leverages AR environments for TO. To demonstrate
this concept, we present ARCADE: Augmented Reality Computational Analysis and
Design Environment. Developed in Swift for the Apple Vision Pro mixed reality
headset, ARCADE enables users to define, manipulate, and solve structural
optimization problems within an augmented reality setting. By incorporating
real-time human interaction and visualization of the design in its intended
target location, ARCADE has the potential to reduce lead times, enhance
manufacturability, and improve design integration. Although initially developed
for structural optimization, ARCADE's framework could be extended to other
disciplines, paving the way for a new era of interactive and immersive
computational design.
|
2501.13567
|
K-COMP: Retrieval-Augmented Medical Domain Question Answering With
Knowledge-Injected Compressor
|
cs.CL cs.AI
|
Retrieval-augmented question answering (QA) integrates external information
and thereby increases the QA accuracy of reader models that lack domain
knowledge. However, documents retrieved for closed domains require high
expertise, so the reader model may have difficulty fully comprehending the
text. Moreover, the retrieved documents contain thousands of tokens, some
unrelated to the question. As a result, the documents include some inaccurate
information, which could lead the reader model to mistrust the passages and
could result in hallucinations. To solve these problems, we propose K-comp
(Knowledge-injected compressor) which provides the knowledge required to answer
correctly. The compressor automatically generates the prior knowledge necessary
to facilitate the answer process prior to compression of the retrieved
passages. Subsequently, the passages are compressed autoregressively, with the
generated knowledge being integrated into the compression process. This process
ensures alignment between the question intent and the compressed context.
Augmented with this prior knowledge and a concise context, reader models are
guided toward relevant answers and can better trust the context.
|
2501.13573
|
Improving Contextual Faithfulness of Large Language Models via Retrieval
Heads-Induced Optimization
|
cs.CL
|
Ensuring contextual faithfulness in retrieval-augmented large language models
(LLMs) is crucial for building trustworthy information-seeking systems,
particularly in long-form question-answering (LFQA) scenarios. In this work, we
identify a salient correlation between LFQA faithfulness and retrieval heads, a
set of attention heads responsible for retrieving contextual information.
Leveraging this insight, we propose RHIO, a framework designed to teach LLMs to
explicitly discriminate between faithful and unfaithful generations. RHIO first
augments unfaithful samples that simulate realistic model-intrinsic errors by
selectively masking retrieval heads. Then, these samples are incorporated into
joint training, enabling the model to distinguish unfaithful outputs from
faithful ones conditioned on control tokens. Furthermore, these control tokens
are leveraged to self-induce contrastive outputs, amplifying their difference
through contrastive decoding. Additionally, to facilitate the evaluation of
contextual faithfulness, we introduce GroundBench, a comprehensive
benchmark compiled from five existing LFQA datasets. Extensive experimental
results on GroundBench demonstrate that RHIO significantly improves
faithfulness, even outperforming GPT-4o.
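The contrastive-decoding step mentioned above combines two next-token score vectors so that their difference is amplified. The sketch below uses a generic contrastive-decoding rule; RHIO's exact formulation (conditioned on control tokens) may differ, and the name and logits are illustrative.

```python
def contrastive_logits(faithful, unfaithful, alpha=1.0):
    """Amplify the gap between faithful- and unfaithful-conditioned
    next-token logits: (1 + alpha) * faithful - alpha * unfaithful.
    A generic contrastive-decoding rule, not RHIO's exact formulation."""
    return [(1 + alpha) * f - alpha * u
            for f, u in zip(faithful, unfaithful)]
```

A token that both generation modes rank highly gets suppressed relative to one the faithful mode favors but the unfaithful mode does not, which is how contrast steers decoding toward context-grounded tokens.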
|
2501.13576
|
Federated Conformance Checking
|
cs.IR
|
Conformance checking is a crucial aspect of process mining, where the main
objective is to compare the actual execution of a process, as recorded in an
event log, with a reference process model, e.g., in the form of a Petri net or
a BPMN. Conformance checking enables identifying deviations, anomalies, or
non-compliance instances. It offers different perspectives on problems in
processes, bottlenecks, or process instances that are not compliant with the
model. Performing conformance checking in federated (inter-organizational)
settings allows organizations to gain insights into the overall process
execution and to identify compliance issues across organizational boundaries,
which facilitates process improvement efforts among collaborating entities. In
this paper, we propose a privacy-aware federated conformance-checking approach
that allows for evaluating the correctness of overall cross-organizational
process models, identifying miscommunications, and quantifying their costs. For
evaluation, we design and simulate a supply chain process with three
organizations engaged in purchase-to-pay, order-to-cash, and shipment
processes. We generate synthetic event logs for each organization as well as
the complete process, and we apply our approach to identify and evaluate the
cost of pre-injected miscommunications.
|
2501.13579
|
MixRec: Individual and Collective Mixing Empowers Data Augmentation for
Recommender Systems
|
cs.IR
|
The core of the general recommender systems lies in learning high-quality
embedding representations of users and items to investigate their positional
relations in the feature space. Unfortunately, data sparsity caused by
difficult-to-access interaction data severely limits the effectiveness of
recommender systems. Faced with such a dilemma, various types of
self-supervised learning methods have been introduced into recommender systems
in an attempt to alleviate the data sparsity through distribution modeling or
data augmentation. However, most data augmentation relies on elaborate manual
design, which is not only non-universal but also introduces a bloated,
redundant augmentation process that can significantly slow down model training. To
tackle these limitations, we propose a novel Dual Mixing-based Recommendation
Framework (MixRec) to empower data augmentation as we wish. Specifically, we
propose individual mixing and collective mixing, respectively. The former aims
to provide a new positive sample that is unique to the target (user or item)
and to make the pair-wise recommendation loss benefit from it, while the latter
aims to portray a new sample that captures group properties within a batch. The
two mixing mechanisms enable data augmentation with a single parameter that
needs to be set only once, and they run in linear time. Besides, we propose
dual-mixing contrastive learning to
maximize the utilization of these new-constructed samples to enhance the
consistency between pairs of positive samples. Experimental results on four
real-world datasets demonstrate the advantages of MixRec in terms of
effectiveness, simplicity, efficiency, and scalability.
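The individual and collective mixing described above are both convex (mixup-style) combinations of embeddings governed by a single coefficient. The sketch below is our illustrative reading of the two mechanisms, not MixRec's exact rules; function names are ours.

```python
def individual_mix(target, other, lam):
    """Mix a target embedding with one sampled embedding (mixup-style) to
    form a positive sample unique to the target; an illustrative sketch of
    'individual mixing', not MixRec's exact rule."""
    return [lam * t + (1 - lam) * o for t, o in zip(target, other)]

def collective_mix(batch, lam):
    """Pull every embedding in a batch toward the batch mean, injecting
    group properties; an illustrative sketch of 'collective mixing'."""
    dim = len(batch[0])
    mean = [sum(row[d] for row in batch) / len(batch) for d in range(dim)]
    return [[lam * row[d] + (1 - lam) * mean[d] for d in range(dim)]
            for row in batch]
```

Both operations touch each embedding coordinate exactly once, consistent with the abstract's claim of a single mixing parameter and linear time complexity.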
|
2501.13582
|
Nonasymptotic Oblivious Relaying and Variable-Length Noisy Lossy Source
Coding
|
cs.IT math.IT
|
The information bottleneck channel (or the oblivious relay channel) concerns
a channel coding setting where the decoder does not directly observe the
channel output. Rather, the channel output is relayed to the decoder by an
oblivious relay (which does not know the codebook) via a rate-limited link. The
capacity is known to be given by the information bottleneck. We study
finite-blocklength achievability results of the channel, where the relay
communicates to the decoder via fixed-length or variable-length codes. These
two cases give rise to two different second-order versions of the information
bottleneck. Our proofs utilize the nonasymptotic noisy lossy source coding
results by Kostina and Verd\'{u}, the strong functional representation lemma,
and the Poisson matching lemma. Moreover, we also give a novel nonasymptotic
variable-length noisy lossy source coding result.
|
2501.13584
|
Towards Robust Incremental Learning under Ambiguous Supervision
|
cs.LG
|
Traditional Incremental Learning (IL) targets to handle sequential
fully-supervised learning problems where novel classes emerge from time to
time. However, due to inherent annotation uncertainty and ambiguity, collecting
high-quality annotated data in a dynamic learning system can be extremely
expensive. To mitigate this problem, we propose a novel weakly-supervised
learning paradigm called Incremental Partial Label Learning (IPLL), where the
sequentially arrived data relate to a set of candidate labels rather than the
ground truth. Technically, we develop the Prototype-Guided Disambiguation and
Replay Algorithm (PGDR) which leverages the class prototypes as a proxy to
mitigate two intertwined challenges in IPLL, i.e., label ambiguity and
catastrophic forgetting. To handle the former, PGDR encapsulates a
momentum-based pseudo-labeling algorithm along with prototype-guided
initialization, resulting in a balanced perception of classes. To alleviate
forgetting, we develop a memory replay technique that collects
well-disambiguated samples while maintaining representativeness and diversity.
By jointly distilling knowledge from curated memory data, our framework
exhibits a great disambiguation ability for samples of new tasks and achieves
less forgetting of knowledge. Extensive experiments demonstrate that PGDR
achieves superior performance.
|
2501.13587
|
Contrastive Representation Learning Helps Cross-institutional Knowledge
Transfer: A Study in Pediatric Ventilation Management
|
cs.LG cs.AI
|
Clinical machine learning deployment across institutions faces significant
challenges when patient populations and clinical practices differ
substantially. We present a systematic framework for cross-institutional
knowledge transfer in clinical time series, demonstrated through pediatric
ventilation management between a general pediatric intensive care unit (PICU)
and a cardiac-focused unit. Using contrastive predictive coding (CPC) for
representation learning, we investigate how different data regimes and
fine-tuning strategies affect knowledge transfer across institutional
boundaries. Our results show that while direct model transfer performs poorly,
CPC with appropriate fine-tuning enables effective knowledge sharing between
institutions, with benefits particularly evident in limited data scenarios.
Analysis of transfer patterns reveals an important asymmetry: temporal
progression patterns transfer more readily than point-of-care decisions,
suggesting practical pathways for cross-institutional deployment. Through a
systematic evaluation of fine-tuning approaches and transfer patterns, our work
provides insights for developing more generalizable clinical decision support
systems while enabling smaller specialized units to leverage knowledge from
larger centers.
|
2501.13592
|
WFCRL: A Multi-Agent Reinforcement Learning Benchmark for Wind Farm
Control
|
cs.LG cs.MA cs.SY eess.SY
|
The wind farm control problem is challenging, since conventional model-based
control strategies require tractable models of complex aerodynamical
interactions between the turbines and suffer from the curse of dimension when
the number of turbines increases. Recently, model-free and multi-agent
reinforcement learning approaches have been used to address this challenge. In
this article, we introduce WFCRL (Wind Farm Control with Reinforcement
Learning), the first open suite of multi-agent reinforcement learning
environments for the wind farm control problem. WFCRL frames a cooperative
Multi-Agent Reinforcement Learning (MARL) problem: each turbine is an agent and
can learn to adjust its yaw, pitch or torque to maximize the common objective
(e.g. the total power production of the farm). WFCRL also offers turbine load
observations that allow optimizing farm performance while limiting turbine
structural damage. Interfaces with two state-of-the-art farm
simulators are implemented in WFCRL: a static simulator (FLORIS) and a dynamic
simulator (FAST.Farm). For each simulator, $10$ wind layouts are provided,
including $5$ real wind farms. Two state-of-the-art online MARL algorithms are
implemented to illustrate the scaling challenges. As learning online on
FAST.Farm is highly time-consuming, WFCRL offers the possibility of designing
transfer learning strategies from FLORIS to FAST.Farm.
|
2501.13594
|
Text-to-SQL based on Large Language Models and Database Keyword Search
|
cs.DB cs.AI
|
Text-to-SQL prompt strategies based on Large Language Models (LLMs) achieve
remarkable performance on well-known benchmarks. However, when applied to
real-world databases, their performance is significantly less than for these
benchmarks, especially for Natural Language (NL) questions requiring complex
filters and joins to be processed. This paper then proposes a strategy to
compile NL questions into SQL queries that incorporates a dynamic few-shot
examples strategy and leverages the services provided by a database keyword
search (KwS) platform. The paper details how the precision and recall of the
schema-linking process are improved with the help of the examples provided and
the keyword-matching service that the KwS platform offers. Then, it shows how
the KwS platform can be used to synthesize a view that captures the joins
required to process an input NL question and thereby simplify the SQL query
compilation step. The paper includes experiments with a real-world relational
database to assess the performance of the proposed strategy. The experiments
suggest that the strategy achieves an accuracy on the real-world relational
database that surpasses state-of-the-art approaches. The paper concludes by
discussing the results obtained.
|
2501.13597
|
A Comprehensive Survey on Spectral Clustering with Graph Structure
Learning
|
cs.LG
|
Spectral clustering is a powerful technique for clustering high-dimensional
data, utilizing graph-based representations to detect complex, non-linear
structures and non-convex clusters. The construction of a similarity graph is
essential for ensuring accurate and effective clustering, making graph
structure learning (GSL) central for enhancing spectral clustering performance
in response to the growing demand for scalable solutions. Despite advancements
in GSL, there is a lack of comprehensive surveys specifically addressing its
role within spectral clustering. To bridge this gap, this survey presents a
comprehensive review of spectral clustering methods, emphasizing the
critical role of GSL. We explore various graph construction techniques,
including pairwise, anchor, and hypergraph-based methods, in both fixed and
adaptive settings. Additionally, we categorize spectral clustering approaches
into single-view and multi-view frameworks, examining their applications within
one-step and two-step clustering processes. We also discuss multi-view
information fusion techniques and their impact on clustering data. By
addressing current challenges and proposing future research directions, this
survey provides valuable insights for advancing spectral clustering
methodologies and highlights the pivotal role of GSL in tackling large-scale
and high-dimensional data clustering tasks.
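The pipeline surveyed above (graph construction, spectral embedding, then clustering) can be sketched as follows. This is a generic illustration with a fixed RBF similarity graph, not one of the GSL methods from the survey; the `sigma` bandwidth and the farthest-point k-means initialization are illustrative choices.

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, iters=50):
    # 1) Graph construction: dense RBF similarity (the step GSL methods refine).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # 2) Symmetric normalized Laplacian and its k smallest eigenvectors.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    U = vecs[:, :k]
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    # 3) K-means on the spectral embedding (tiny Lloyd's loop with
    #    deterministic farthest-point initialization).
    C = [U[0]]
    for _ in range(1, k):
        dist = ((U[:, None, :] - np.array(C)[None]) ** 2).sum(-1).min(axis=1)
        C.append(U[dist.argmax()])
    C = np.array(C)
    for _ in range(iters):
        labels = ((U[:, None, :] - C[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = U[labels == j].mean(axis=0)
    return labels
```

On data with well-separated groups, the rows of the embedding `U` collapse to near-identical points per group, which is why a simple k-means suffices in step 3.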
|
2501.13598
|
A Transformer-based Autoregressive Decoder Architecture for Hierarchical
Text Classification
|
cs.LG
|
Recent approaches in hierarchical text classification (HTC) rely on the
capabilities of a pre-trained transformer model and exploit the label semantics
and a graph encoder for the label hierarchy. In this paper, we introduce an
effective hierarchical text classifier RADAr (Transformer-based Autoregressive
Decoder Architecture) that is based only on an off-the-shelf RoBERTa
transformer to process the input and a custom autoregressive decoder with two
decoder layers for generating the classification output. Thus, unlike existing
approaches for HTC, the encoder of RADAr has no explicit encoding of the label
hierarchy and the decoder solely relies on the label sequences of the samples
observed during training. We demonstrate on three benchmark datasets that RADAr
achieves results competitive with the state of the art with less training and
inference time. Our model consistently performs better when organizing the
label sequences from children to parents versus the inverse, as done in
existing HTC approaches. Our experiments show that neither the label semantics
nor an explicit graph encoder for the hierarchy is needed. This has strong
practical implications for HTC as the architecture has fewer requirements and
provides a speed-up by a factor of 2 at inference time. Moreover, training a
separate decoder from scratch in conjunction with fine-tuning the encoder
allows future researchers and practitioners to exchange the encoder part as new
models arise. The source code is available at
https://github.com/yousef-younes/RADAr.
|
2501.13599
|
Learning under Commission and Omission Event Outliers
|
stat.ML cs.LG
|
Event streams are an important data format in real life. The events are usually
expected to follow some regular patterns over time. However, the patterns could
be contaminated by unexpected absences or occurrences of events. In this paper,
we adopt the temporal point process framework for learning event stream and we
provide a simple-but-effective method to deal with both commission and omission
event outliers.In particular, we introduce a novel weight function to
dynamically adjust the importance of each observed event so that the final
estimator could offer multiple statistical merits. We compare the proposed
method with the vanilla one in the classification problems, where event streams
can be clustered into different groups. Both theoretical and numerical results
confirm the effectiveness of our new approach. To our knowledge, our method is
the first one to provably handle both commission and omission outliers
simultaneously.
|
2501.13604
|
FedPref: Federated Learning Across Heterogeneous Multi-objective
Preferences
|
cs.LG cs.DC
|
Federated Learning (FL) is a distributed machine learning strategy, developed
for settings where training data is owned by distributed devices and cannot be
shared. FL circumvents this constraint by carrying out model training in a
distributed manner. The parameters of these local models are shared intermittently
among participants and aggregated to enhance model accuracy. This strategy has
been rapidly adopted by the industry in efforts to overcome privacy and
resource constraints in model training. However, the application of FL to
real-world settings brings additional challenges associated with heterogeneity
between participants. Research into mitigating these difficulties in FL has
largely focused on only two types of heterogeneity: the unbalanced distribution
of training data, and differences in client resources. Yet more types of
heterogeneity are becoming relevant as the capability of FL expands to cover
more complex problems, from the tuning of LLMs to enabling machine learning on
edge devices. In this work, we discuss a novel type of heterogeneity that is
likely to become increasingly relevant in future applications: this is
preference heterogeneity, emerging when clients learn under multiple
objectives, with different importance assigned to each objective on different
clients. In this work, we discuss the implications of this type of
heterogeneity and propose FedPref, a first algorithm designed to facilitate
personalised FL in this setting. We demonstrate the effectiveness of the
algorithm across different problems, preference distributions and model
architectures. In addition, we introduce a new analytical point of view, based
on multi-objective metrics, for evaluating the performance of FL algorithms in
this setting beyond the traditional client-focused metrics. We perform a second
experimental analysis based on this view, and show that FedPref outperforms
compared algorithms.
|
2501.13606
|
Two Step SOVA-Based Decoding Algorithm for Tailbiting Codes
|
cs.IT cs.NI math.IT
|
In this work we propose a novel decoding algorithm for tailbiting
convolutional codes and evaluate its performance over different channels. The
proposed method consists of a fixed two-step Viterbi decoding of the received
data. In the first step, an estimation of the most likely state is performed
based on a SOVA decoding. The second step consists of a conventional Viterbi
decoding that employs the state estimated in the previous step as the initial
and final states of the trellis. Simulation results show a performance close
to that of maximum-likelihood decoding.
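The two-step structure described above can be sketched on a toy tailbiting code. Note the assumptions: the paper's first step uses SOVA (soft outputs), whereas this sketch substitutes a hard-decision free-start Viterbi pass; the rate-1/2, memory-2 (7,5) code and Hamming branch metric are illustrative choices, not the paper's setup.

```python
G = (0b111, 0b101)  # generator polynomials of a toy rate-1/2, memory-2 code
M = 2               # encoder memory (4 trellis states)

def conv_out(state, bit):
    # Output pair for input `bit` with register contents `state` (= b_{k-1} b_{k-2}).
    reg = (bit << M) | state
    return [bin(reg & g).count("1") & 1 for g in G]

def next_state(state, bit):
    return ((bit << M) | state) >> 1

def tb_encode(bits):
    # Tailbiting: initialize the register with the last M message bits,
    # so the trellis starts and ends in the same state.
    state = (bits[-1] << 1) | bits[-2]
    out = []
    for b in bits:
        out += conv_out(state, b)
        state = next_state(state, b)
    return out

def viterbi_pass(rx, start=None):
    INF = float("inf")
    metric = [0.0 if start in (None, s) else INF for s in range(1 << M)]
    paths = [[] for _ in range(1 << M)]
    for k in range(0, len(rx), len(G)):
        sym = rx[k:k + len(G)]
        new_m = [INF] * (1 << M)
        new_p = [None] * (1 << M)
        for s in range(1 << M):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                ns = next_state(s, b)
                m = metric[s] + sum(x != y for x, y in zip(conv_out(s, b), sym))
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [b]
        metric, paths = new_m, new_p
    return metric, paths

def tb_decode(rx):
    # Step 1: free-start pass to estimate the most likely ending state
    # (the paper does this via SOVA; a hard Viterbi pass stands in here).
    metric, _ = viterbi_pass(rx)
    s_hat = min(range(1 << M), key=lambda s: metric[s])
    # Step 2: conventional Viterbi with s_hat as both initial and final state.
    _, paths = viterbi_pass(rx, start=s_hat)
    return paths[s_hat]
```

On a noiseless channel the free-start pass recovers the true tailbiting state exactly, so the second pass returns the transmitted message.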
|
2501.13607
|
Optimal Multi-Objective Best Arm Identification with Fixed Confidence
|
cs.LG cs.AI cs.IT math.IT stat.ML
|
We consider a multi-armed bandit setting with finitely many arms, in which
each arm yields an $M$-dimensional vector reward upon selection. We assume that
the reward of each dimension (a.k.a. {\em objective}) is generated
independently of the others. The best arm of any given objective is the arm
with the largest component of mean corresponding to the objective. The end goal
is to identify the best arm of {\em every} objective in the shortest (expected)
time subject to an upper bound on the probability of error (i.e.,
fixed-confidence regime). We establish a problem-dependent lower bound on the
limiting growth rate of the expected stopping time, in the limit of vanishing
error probabilities. This lower bound, we show, is characterised by a max-min
optimisation problem that is computationally expensive to solve at each time
step. We propose an algorithm that uses the novel idea of {\em surrogate
proportions} to sample the arms at each time step, eliminating the need to
solve the max-min optimisation problem at each step. We demonstrate
theoretically that our algorithm is asymptotically optimal. In addition, we
provide extensive empirical studies to substantiate the efficiency of our
algorithm. While existing works on pure exploration with multi-objective
multi-armed bandits predominantly focus on {\em Pareto frontier
identification}, our work fills the gap in the literature by conducting a
formal investigation of the multi-objective best arm identification problem.
|
2501.13608
|
AirTOWN: A Privacy-Preserving Mobile App for Real-time Pollution-Aware
POI Suggestion
|
cs.IR
|
This demo paper presents AirTOWN, a privacy-preserving mobile application
that provides real-time, pollution-aware recommendations for points of interest
(POIs) in urban environments. By combining real-time Air Quality Index (AQI)
data with user preferences, the proposed system aims to help users make
health-conscious decisions about the locations they visit. The application
utilizes collaborative filtering for personalized suggestions and federated
learning for privacy protection, and integrates AQI data from sensor networks
in cities such as Bari, Italy, and Cork, UK. In areas with sparse sensor
coverage, interpolation techniques approximate AQI values, ensuring broad
applicability. This system offers a promising, health-oriented POI
recommendation solution that adapts dynamically to current urban air quality
conditions while safeguarding user privacy.
|
2501.13609
|
Domain-Specific Machine Translation to Translate Medicine Brochures in
English to Sorani Kurdish
|
cs.CL
|
Access to Kurdish medicine brochures is limited, depriving Kurdish-speaking
communities of critical health information. To address this problem, we
developed a specialized Machine Translation (MT) model to translate English
medicine brochures into Sorani Kurdish using a parallel corpus of 22,940
aligned sentence pairs from 319 brochures, sourced from two pharmaceutical
companies in the Kurdistan Region of Iraq (KRI). We trained a Statistical
Machine Translation (SMT) model using the Moses toolkit, conducting seven
experiments that resulted in BLEU scores ranging from 22.65 to 48.93. We
translated three new brochures to improve the evaluation process and
encountered unknown words. We addressed unknown words through post-processing
with a medical dictionary, resulting in BLEU scores of 56.87, 31.05, and 40.01.
Human evaluation by native Kurdish-speaking pharmacists, physicians, and
medicine users showed that 50% of professionals found the translations
consistent, while 83.3% rated them accurate. Among users, 66.7% considered the
translations clear and felt confident using the medications.
|
2501.13610
|
Efficient Synaptic Delay Implementation in Digital Event-Driven AI
Accelerators
|
cs.NE cs.AI
|
Synaptic delay parameterization of neural network models has remained
largely unexplored, but recent literature has shown promising results,
suggesting that delay-parameterized models are simpler, smaller, sparser, and
thus more energy efficient than similarly performing (e.g. in task accuracy)
non-delay-parameterized ones. We introduce Shared Circular Delay Queue (SCDQ),
a novel hardware structure for supporting synaptic delays on digital
neuromorphic accelerators. Our analysis and hardware results show that it
scales better in terms of memory than currently common approaches and is
more amenable to algorithm-hardware co-optimization; in fact, its memory
scaling is modulated by model sparsity and not merely network size. Besides
memory, we also report performance in terms of latency, area, and energy per inference.
|
2501.13620
|
Cognitive Paradigms for Evaluating VLMs on Visual Reasoning Task
|
cs.CV cs.AI
|
Advancing machine visual reasoning requires a deeper understanding of how
Vision-Language Models (VLMs) process and interpret complex visual patterns.
This work introduces a novel, cognitively-inspired evaluation framework to
systematically analyze VLM reasoning on natural image-based Bongard Problems.
We propose three structured paradigms -- Direct Visual Rule Learning, Deductive
Rule Learning, and Componential Analysis -- designed to progressively enforce
step-wise reasoning and disentangle the interplay between perception and
reasoning. Our evaluation shows that advanced, closed-source VLMs (GPT-4o and
Gemini 2.0) achieve near-superhuman performance, particularly when provided
with high-quality image descriptions, while open-source models exhibit a
significant performance bottleneck due to deficiencies in perception. An
ablation study further confirms that perception, rather than reasoning, is the
primary limiting factor, as open-source models apply extracted rules
effectively when given accurate descriptions. These findings underscore the
critical role of robust multimodal perception in enhancing generalizable visual
reasoning and highlight the importance of structured, step-wise reasoning
paradigms for advancing machine intelligence.
|
2501.13622
|
Coarse-to-Fine Process Reward Modeling for Mathematical Reasoning
|
cs.AI
|
The Process Reward Model (PRM) plays a crucial role in mathematical reasoning
tasks, requiring high-quality supervised process data. However, we observe that
reasoning steps generated by Large Language Models (LLMs) often fail to exhibit
strictly incremental information, leading to redundancy that can hinder
effective reasoning. To address this issue, we propose CFPRM, a simple yet
effective coarse-to-fine strategy. Instead of focusing on the detection of
redundant steps, our approach first establishes a coarse-grained window to
merge adjacent reasoning steps into unified, holistic steps. The window size is
then progressively reduced to extract fine-grained reasoning steps, enabling
data collection at multiple granularities for training. By leveraging this
hierarchical refinement process, CFPRM mitigates redundancy while preserving
essential fine-grained knowledge. Extensive experiments on two reasoning
datasets across three loss criteria validate CFPRM's effectiveness and
versatility.
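The coarse-to-fine merging described above can be sketched in a few lines. The shrinking window schedule and the simple non-overlapping merge rule here are illustrative assumptions, not necessarily CFPRM's exact procedure.

```python
def coarse_to_fine_samples(steps, window_sizes=(3, 2, 1)):
    """Merge adjacent reasoning steps at progressively finer granularities.

    `steps` is a list of reasoning-step strings; each window size produces
    one training sample whose (merged) steps are coarser for larger windows.
    """
    samples = []
    for w in window_sizes:
        # Non-overlapping windows of w adjacent steps, joined into one step.
        merged = [" ".join(steps[i:i + w]) for i in range(0, len(steps), w)]
        samples.append(merged)
    return samples
```

Shrinking the window from coarse to fine yields multiple granularities of the same trace, which is the multi-granularity data collection the abstract refers to.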
|
2501.13623
|
Targeting heuristics for cost-optimized institutional incentives in
heterogeneous networked populations
|
physics.soc-ph cs.SI
|
The world is currently grappling with challenges on both local and global
scales, many of which demand coordinated behavioral changes. However, breaking
away from the status quo is often difficult due to deeply ingrained social norms.
In such cases, social systems may require seemingly exogenous interventions to
set off endogenous, largely irreversible processes that drive change -- social
tipping. While studies have looked at targeted interventions, real-life
constraints faced by policymakers, like minimizing costs while ensuring a quick
and fair transition, remain understudied. To address this complexity, we
introduce a game-theoretic framework that accounts for individual heterogeneity
and networks of local influence. We implement various heuristics based on
information about individual preferences and commonly used local network
properties. Results show that where the change is initiated in the population
and the direction in which it propagates is essential to the effectiveness of
interventions. We identify optimal strategies under different scenarios, such
as varying levels of resistance to change, preference heterogeneity, and
homophily. These results provide insights that can be experimentally tested and
help policymakers to better direct incentives.
|
2501.13624
|
QMamba: Post-Training Quantization for Vision State Space Models
|
cs.CV
|
State Space Models (SSMs), as key components of Mamba, have gained
increasing attention for vision models recently, thanks to their efficient long
sequence modeling capability. Given the computational cost of deploying SSMs on
resource-limited edge devices, Post-Training Quantization (PTQ) is a technique
with the potential for efficient deployment of SSMs. In this work, we propose
QMamba, to our knowledge one of the first PTQ frameworks designed for vision
SSMs based on the analysis of the activation distributions in SSMs. We reveal
that the distribution of discrete parameters exhibits long-tailed skewness and
the distribution of the hidden state sequence exhibits highly dynamic
variations. Correspondingly, we design Long-tailed Skewness Quantization (LtSQ)
to quantize discrete parameters and Temporal Group Quantization (TGQ) to
quantize hidden states, which reduces the quantization errors. Extensive
experiments demonstrate that QMamba outperforms advanced PTQ methods on vision
models across multiple model sizes and architectures. Notably, QMamba surpasses
existing methods by 21.0% on ImageNet classification with 4-bit activations.
|
2501.13625
|
Information-theoretic limits and approximate message-passing for
high-dimensional time series
|
cs.IT cond-mat.dis-nn math.IT math.ST stat.TH
|
High-dimensional time series appear in many scientific setups, demanding a
nuanced approach to model and analyze the underlying dependence structure.
However, theoretical advancements so far often rely on stringent assumptions
regarding the sparsity of the underlying signals. In this contribution, we
expand the scope by investigating a high-dimensional time series model wherein
the number of features grows proportionally to the number of sampling points,
without assuming sparsity in the signal. Specifically, we consider the
stochastic regression model and derive a single-letter formula for the
normalized mutual information between observations and the signal. We also
empirically study the vector approximate message passing (VAMP) algorithm and
show that, despite a lack of theoretical guarantees, its performance for
inference in our time series model is robust and often statistically optimal.
|
2501.13629
|
Sigma: Differential Rescaling of Query, Key and Value for Efficient
Language Models
|
cs.CL
|
We introduce Sigma, an efficient large language model specialized for the
system domain, empowered by a novel architecture including DiffQKV attention,
and pre-trained on our meticulously collected system domain data. DiffQKV
attention significantly enhances the inference efficiency of Sigma by
optimizing the Query (Q), Key (K), and Value (V) components in the attention
mechanism differentially, based on their varying impacts on the model
performance and efficiency indicators. Specifically, we (1) conduct extensive
experiments that demonstrate the model's varying sensitivity to the compression
of K and V components, leading to the development of differentially compressed
KV, and (2) propose augmented Q to expand the Q head dimension, which enhances
the model's representation capacity with minimal impacts on the inference
speed. Rigorous theoretical and empirical analyses reveal that DiffQKV
attention significantly enhances efficiency, achieving up to a 33.36%
improvement in inference speed over the conventional grouped-query attention
(GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various
sources, including 19.5B system domain data that we carefully collect and 1T
tokens of synthesized and rewritten data. In general domains, Sigma achieves
comparable performance to other state-of-the-art models. In the system domain, we
introduce the first comprehensive benchmark AIMicius, where Sigma demonstrates
remarkable performance across all tasks, significantly outperforming GPT-4 with
an absolute improvement up to 52.5%.
|
2501.13633
|
Representation of Molecules via Algebraic Data Types : Advancing Beyond
SMILES & SELFIES
|
cs.PL cs.LG
|
We introduce a novel molecular representation through Algebraic Data Types
(ADTs) - composite data structures formed through the combination of simpler
types that obey algebraic laws. By explicitly considering how the datatype of a
representation constrains the operations which may be performed, we ensure
meaningful inference can be performed over generative models (programs with
sample and score operations). This stands in contrast to string-based
representations where string-type operations may only indirectly correspond to
chemical and physical molecular properties, and at worst produce nonsensical
output. The ADT presented implements the Dietz representation for molecular
constitution via multigraphs and bonding systems, and uses atomic coordinate
data to represent 3D information and stereochemical features. This creates a
general digital molecular representation which surpasses the limitations of the
string-based representations and the 2D-graph based models on which they are
based. In addition, we present novel support for quantum information through
representation of shells, subshells, and orbitals, greatly expanding the
representational scope beyond current approaches, for instance in Molecular
Orbital theory. The framework's capabilities are demonstrated through key
applications: Bayesian probabilistic programming is demonstrated through
integration with LazyPPL, a lazy probabilistic programming library; molecules
are made instances of a group under rotation, necessary for geometric learning
techniques which exploit the invariance of molecular properties under different
representations; and the framework's flexibility is demonstrated through an
extension to model chemical reactions. After critiquing previous
representations, we provide an open-source solution in Haskell - a type-safe,
purely functional programming language.
|
2501.13638
|
Quantification via Gaussian Latent Space Representations
|
cs.LG
|
Quantification, or prevalence estimation, is the task of predicting the
prevalence of each class within an unknown bag of examples. Most existing
quantification methods in the literature rely on prior probability shift
assumptions to create a quantification model that uses the predictions of an
underlying classifier to make optimal prevalence estimates. In this work, we
present an end-to-end neural network that uses Gaussian distributions in latent
spaces to obtain invariant representations of bags of examples. This approach
addresses the quantification problem using deep learning, enabling the
optimization of specific loss functions relevant to the problem and avoiding
the need for an intermediate classifier, tackling the quantification problem as
a direct optimization problem. Our method achieves state-of-the-art results,
both against traditional quantification methods and other deep learning
approaches for quantification. The code needed to reproduce all our experiments
is publicly available at https://github.com/AICGijon/gmnet.
|
2501.13641
|
The Road to Learning Explainable Inverse Kinematic Models: Graph Neural
Networks as Inductive Bias for Symbolic Regression
|
cs.RO cs.LG
|
This paper shows how a Graph Neural Network (GNN) can be used to learn an
Inverse Kinematics (IK) model based on an automatically generated dataset. The
learned IK model generalizes to a family of manipulators with the same
Degrees of Freedom (DOF) but varying link length configurations. The
results indicate a position error of less than 1.0 cm for 3 DOF and 4.5 cm for
5 DOF, and orientation error of 2$^\circ$ for 3 DOF and 8.2$^\circ$ for 6 DOF,
which allows deployment to certain real-world problems. However,
out-of-domain errors and lack of extrapolation can be observed in the resulting
GNN. An extensive analysis of these errors indicates potential for enhancement
in the future. Consequently, the generated GNNs are tailored to be used in
future work as an inductive bias to generate analytical equations through
symbolic regression.
|
2501.13643
|
Enhancing Medical Image Analysis through Geometric and Photometric
transformations
|
eess.IV cs.CV
|
Medical image analysis suffers from a lack of labeled data due to several
challenges, including patient privacy and a lack of experts. Since many AI
models only perform well with large amounts of data, we turn to data
augmentation, which offers a way to improve model performance and increase
the dataset size through traditional or advanced techniques. In
this paper, we evaluate the effectiveness of data augmentation techniques on
two different medical image datasets. In the first step, we applied some
transformation techniques to the skin cancer dataset containing benign and
malignant classes. Then, we trained the convolutional neural network (CNN) on
the dataset before and after augmentation, which significantly improved test
accuracy from 90.74% to 96.88% and decreased test loss from 0.7921 to 0.1468
after augmentation. In the second step, we used the Mixup technique by mixing
two random images and their corresponding masks using the retina and blood
vessels dataset, then we trained the U-net model and obtained the Dice
coefficient which increased from 0 before augmentation to 0.4163 after
augmentation. The results show the effect of using data augmentation to
increase the dataset size on classification and segmentation performance.
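The Mixup step used for the retina dataset above can be sketched as follows. Applying the same mixing coefficient to image and mask yields soft segmentation targets; the Beta(alpha, alpha) sampling with alpha=0.4 is the standard Mixup choice and an assumption here, not necessarily the paper's exact setting.

```python
import numpy as np

def mixup_pair(img1, mask1, img2, mask2, alpha=0.4, rng=None):
    # Blend two image/mask pairs with a coefficient lambda ~ Beta(alpha, alpha),
    # as in standard Mixup; the mask blend produces a soft target.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    img = lam * img1 + (1.0 - lam) * img2
    mask = lam * mask1 + (1.0 - lam) * mask2
    return img, mask, lam
```

In training, the U-net would then be fit on the blended image against the blended mask with a loss that accepts soft targets.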
|
2501.13648
|
Revisiting Online Learning Approach to Inverse Linear Optimization: A
Fenchel$-$Young Loss Perspective and Gap-Dependent Regret Analysis
|
cs.LG
|
This paper revisits the online learning approach to inverse linear
optimization studied by B\"armann et al. (2017), where the goal is to infer an
unknown linear objective function of an agent from sequential observations of
the agent's input-output pairs. First, we provide a simple understanding of the
online learning approach through its connection to online convex optimization
of \emph{Fenchel--Young losses}. As a byproduct, we present an offline
guarantee on the \emph{suboptimality loss}, which measures how well predicted
objectives explain the agent's choices, without assuming the optimality of the
agent's choices. Second, assuming that there is a gap between optimal and
suboptimal objective values in the agent's decision problems, we obtain an
upper bound independent of the time horizon $T$ on the sum of suboptimality and
\emph{estimate losses}, where the latter measures the quality of solutions
recommended by predicted objectives. Interestingly, our gap-dependent analysis
achieves a faster rate than the standard $O(\sqrt{T})$ regret bound by
exploiting structures specific to inverse linear optimization, even though
neither the loss functions nor their domains enjoy desirable properties, such
as strong convexity.
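The online-learning idea can be sketched in miniature: each observed
agent choice yields a subgradient of the Fenchel-Young (suboptimality) loss,
which drives an online gradient step on the unknown objective. The discrete
action set and step size below are illustrative assumptions, not the paper's
setting:

```python
import numpy as np

def online_inverse_lp(action_sets, observed, eta=0.1, dim=2):
    """Online subgradient descent on the Fenchel-Young (suboptimality) loss.

    Round-t loss: max_{a in A_t} <theta, a> - <theta, a_t>, whose
    subgradient is a_star - a_t, with a_star the predicted-optimal action.
    """
    theta = np.zeros(dim)
    losses = []
    for A, a_t in zip(action_sets, observed):
        vals = A @ theta
        a_star = A[int(np.argmax(vals))]
        losses.append(float(vals.max() - theta @ a_t))
        theta -= eta * (a_star - a_t)  # subgradient step
    return theta, losses
```

With a fixed action set and a consistent agent, the suboptimality loss drops to
zero once theta ranks the agent's choice highest.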
|
2501.13650
|
Performance Analysis of Turbo Decoding Algorithms in Wireless OFDM
Systems
|
cs.IT math.IT
|
Turbo codes are well known to be among the error correction techniques that
achieve results closest to the Shannon limit. Nevertheless, the actual
performance of the code depends strongly on the particular decoding algorithm
used at the receiver. In this sense, the selection of the decoding algorithm
involves a trade-off between the gain introduced by the code and the complexity
of the decoding process. In this work we perform a thorough analysis of the
different iterative decoding techniques and assess their suitability for
implementation in the user terminals of new cellular and broadcast systems
based on orthogonal frequency division multiplexing (OFDM). The analyzed
iterative decoding algorithms are the Max-Log-MAP and the soft output Viterbi
algorithm (SOVA), since both have relatively low computational complexity,
simplifying their implementation in cost-efficient terminals. Simulation
results have been obtained for different encoder structures, block sizes, and
realistic channel conditions (an OFDM transmission over a wireless channel).
|
2501.13652
|
LVPruning: An Effective yet Simple Language-Guided Vision Token Pruning
Approach for Multi-modal Large Language Models
|
cs.CL
|
Multi-modal Large Language Models (MLLMs) have achieved remarkable success by
integrating visual and textual modalities. However, they incur significant
computational overhead due to the large number of vision tokens processed,
limiting their practicality in resource-constrained environments. We introduce
Language-Guided Vision Token Pruning (LVPruning) for MLLMs, an effective yet
simple method that significantly reduces the computational burden while
preserving model performance. LVPruning employs cross-attention modules to
compute the importance of vision tokens based on their interaction with
language tokens, determining which to prune. Importantly, LVPruning can be
integrated without modifying the original MLLM parameters, which makes
LVPruning simple to apply or remove. Our experiments show that LVPruning can
effectively reduce up to 90% of vision tokens by the middle layer of LLaVA-1.5,
resulting in a 62.1% decrease in inference Tera Floating-Point Operations Per
Second (TFLOPs), with an average performance loss of just 0.45% across nine
multi-modal benchmarks.
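The core scoring idea, ranking vision tokens by the cross-attention they
receive from language tokens, can be sketched as follows. This is a simplified
single-head version without learned projections; the actual method operates
inside the MLLM layers:

```python
import numpy as np

def vision_token_importance(vision_tokens, language_tokens):
    """Score each vision token by the average cross-attention it receives
    from language-token queries (single-head, no learned projections)."""
    d = vision_tokens.shape[1]
    scores = language_tokens @ vision_tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn.mean(axis=0)                     # one weight per vision token

def prune_vision_tokens(vision_tokens, language_tokens, keep_ratio=0.1):
    imp = vision_token_importance(vision_tokens, language_tokens)
    k = max(1, int(round(len(vision_tokens) * keep_ratio)))
    keep = np.sort(np.argsort(imp)[-k:])         # keep top-k, original order
    return vision_tokens[keep], keep
```

Because the scoring is computed from activations alone, such a module can be
attached or detached without touching the backbone's parameters, matching the
plug-in property the abstract describes.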
|
2501.13667
|
MPG-SAM 2: Adapting SAM 2 with Mask Priors and Global Context for
Referring Video Object Segmentation
|
cs.CV
|
Referring video object segmentation (RVOS) aims to segment objects in a video
according to textual descriptions, which requires the integration of multimodal
information and temporal dynamics perception. The Segment Anything Model 2 (SAM
2) has shown great effectiveness across various video segmentation tasks.
However, its application to offline RVOS is challenged by the translation of
the text into effective prompts and a lack of global context awareness. In this
paper, we propose a novel RVOS framework, termed MPG-SAM 2, to address these
challenges. Specifically, MPG-SAM 2 employs a unified multimodal encoder to
jointly encode video and textual features, generating semantically aligned
video and text embeddings, along with multimodal class tokens. A mask prior
generator utilizes the video embeddings and class tokens to create pseudo masks
of target objects and global context. These masks are fed into the prompt
encoder as dense prompts along with multimodal class tokens as sparse prompts
to generate accurate prompts for SAM 2. To provide the online SAM 2 with a
global view, we introduce a hierarchical global-historical aggregator, which
allows SAM 2 to aggregate global and historical information of target objects
at both pixel and object levels, enhancing the target representation and
temporal consistency. Extensive experiments on several RVOS benchmarks
demonstrate the superiority of MPG-SAM 2 and the effectiveness of our proposed
modules.
|
2501.13669
|
How to Alleviate Catastrophic Forgetting in LLMs Finetuning?
Hierarchical Layer-Wise and Element-Wise Regularization
|
cs.CL cs.AI
|
Large Language Models (LLMs) exhibit strong general language capabilities.
However, fine-tuning these models on domain-specific tasks often leads to
catastrophic forgetting, where the model overwrites or loses essential
knowledge acquired during pretraining. This phenomenon significantly limits the
broader applicability of LLMs. To address this challenge, we propose a novel
approach to compute the element-wise importance of model parameters crucial for
preserving general knowledge during fine-tuning. Our method utilizes a
dual-objective optimization strategy: (1) regularization loss based on
element-wise parameter importance, which constrains the updates to parameters
crucial for general knowledge; (2) cross-entropy loss to adapt to
domain-specific tasks. Additionally, we introduce layer-wise coefficients to
account for the varying contributions of different layers, dynamically
balancing the dual-objective optimization. Extensive experiments on scientific,
medical, and physical tasks using GPT-J and LLaMA-3 demonstrate that our
approach mitigates catastrophic forgetting while enhancing model adaptability.
Compared to previous methods, our solution is approximately 20 times faster and
requires only 10-15% of the storage, highlighting the practical efficiency. The
code will be released.
|
2501.13676
|
Certified Robustness Under Bounded Levenshtein Distance
|
cs.LG cs.AI cs.CL
|
Text classifiers suffer from small perturbations that, if chosen
adversarially, can dramatically change the output of the model. Verification
methods can provide robustness certificates against such adversarial
perturbations by computing a sound lower bound on the robust accuracy.
Nevertheless, existing verification methods incur prohibitive costs and
cannot practically handle Levenshtein distance constraints. We propose the
first method for computing the Lipschitz constant of convolutional classifiers
with respect to the Levenshtein distance. We use these Lipschitz constant
estimates for training 1-Lipschitz classifiers. This enables computing the
certified radius of a classifier in a single forward pass. Our method, LipsLev,
is able to obtain $38.80$% and $13.93$% verified accuracy at distance $1$ and
$2$ respectively in the AG-News dataset, while being $4$ orders of magnitude
faster than existing approaches. We believe our work can open the door to more
efficient verification in the text domain.
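For a classifier whose logit gaps are Lipschitz with respect to an input
metric, the standard certification argument gives a certified radius of half
the top-two logit margin divided by the Lipschitz constant. The sketch below
shows this generic bound, not necessarily the paper's exact formula:

```python
def certified_radius(logits, lipschitz=1.0):
    """Largest distance r such that no perturbation within r can flip the
    top class, for a classifier with a Lipschitz-bounded logit gap."""
    top, runner_up = sorted(logits, reverse=True)[:2]
    return max(0.0, (top - runner_up) / (2.0 * lipschitz))
```

This is what makes the single-forward-pass certification possible: the radius
is read off the logits of the unperturbed input.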
|
2501.13677
|
HumorReject: Decoupling LLM Safety from Refusal Prefix via A Little
Humor
|
cs.LG cs.CR
|
Large Language Models (LLMs) commonly rely on explicit refusal prefixes for
safety, making them vulnerable to prefix injection attacks. We introduce
HumorReject, a novel data-driven approach that reimagines LLM safety by
decoupling it from refusal prefixes through humor as an indirect refusal
strategy. Rather than explicitly rejecting harmful instructions, HumorReject
responds with contextually appropriate humor that naturally defuses potentially
dangerous requests. Our approach effectively addresses common "over-defense"
issues while demonstrating superior robustness against various attack vectors.
Our findings suggest that improvements in training data design can be as
important as the alignment algorithm itself in achieving effective LLM safety.
|
2501.13682
|
Collective Memory and Narrative Cohesion: A Computational Study of
Palestinian Refugee Oral Histories in Lebanon
|
cs.CL
|
This study uses the Palestinian Oral History Archive (POHA) to investigate
how Palestinian refugee groups in Lebanon sustain a cohesive collective memory
of the Nakba through shared narratives. Grounded in Halbwachs' theory of group
memory, we employ statistical analysis of pairwise similarity of narratives,
focusing on the influence of shared gender and location. We use textual
representation and semantic embeddings of narratives to represent the
interviews themselves. Our analysis demonstrates that shared origin is a
powerful determinant of narrative similarity across thematic keywords,
landmarks, and significant figures, as well as in semantic embeddings of the
narratives. Meanwhile, shared residence fosters cohesion, with its impact
significantly amplified when paired with shared origin. Additionally, women's
narratives exhibit heightened thematic cohesion, particularly in recounting
experiences of the British occupation, underscoring the gendered dimensions of
memory formation. This research deepens the understanding of collective memory
in diasporic settings, emphasizing the critical role of oral histories in
safeguarding Palestinian identity and resisting erasure.
|
2501.13683
|
Unlearning Clients, Features and Samples in Vertical Federated Learning
|
cs.LG cs.AI
|
Federated Learning (FL) has emerged as a prominent distributed learning
paradigm. Within the scope of privacy preservation, information privacy
regulations such as GDPR entitle users to request the removal (or unlearning)
of their contribution from a service that is hosting the model. For this
purpose, a server hosting an ML model must be able to unlearn certain
information in cases such as copyright infringement or security issues that can
make the model vulnerable or impact the performance of a service based on that
model. While most unlearning approaches in FL focus on Horizontal FL (HFL),
where clients share the feature space and the global model, Vertical FL (VFL)
has received less attention from the research community. VFL involves clients
(passive parties) sharing the sample space among them while not having access
to the labels. In this paper, we explore unlearning in VFL from three
perspectives: unlearning clients, unlearning features, and unlearning samples.
To unlearn clients and features, we introduce VFU-KD, which is based on
knowledge distillation (KD); to unlearn samples, we introduce VFU-GA, which is
based on gradient ascent. To provide evidence of approximate unlearning, we utilize
Membership Inference Attack (MIA) to audit the effectiveness of our unlearning
approach. Our experiments across six tabular datasets and two image datasets
demonstrate that VFU-KD and VFU-GA achieve performance comparable to or better
than both retraining from scratch and the benchmark R2S method in many cases,
with improvements of $(0-2\%)$. In the remaining cases, utility scores remain
comparable, with a modest utility loss ranging from $1-5\%$. Unlike existing
methods, VFU-KD and VFU-GA require no communication between active and passive
parties during unlearning. However, they do require the active party to store
the previously communicated embeddings.
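The gradient-ascent idea behind VFU-GA, ascending the loss on the samples to be
forgotten, can be sketched in miniature on a logistic-regression head; the step
size and step count are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad(w, X, y):
    """Gradient of the mean binary cross entropy w.r.t. the weights."""
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def unlearn_by_ascent(w, X_forget, y_forget, eta=0.5, steps=20):
    """Push the model away from fitting the forget set by ascending its loss."""
    for _ in range(steps):
        w = w + eta * logistic_grad(w, X_forget, y_forget)
    return w
```

In practice the ascent must be bounded or regularized so that utility on the
retained data is preserved, which is what the paper's utility comparisons
measure.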
|
2501.13686
|
Learning in Conjectural Stackelberg Games
|
cs.GT cs.MA
|
We extend the formalism of Conjectural Variations games to Stackelberg games
involving multiple leaders and a single follower. To solve these nonconvex
games, a common assumption is that the leaders compute their strategies having
perfect knowledge of the follower's best response. However, in practice, the
leaders may have little to no knowledge about the other players' reactions. To
deal with this lack of knowledge, we assume that each leader can form
conjectures about the other players' best responses, and update its strategy
relying on these conjectures. Our contributions are twofold: (i) On the
theoretical side, we introduce the concept of Conjectural Stackelberg
Equilibrium -- keeping our formalism conjecture agnostic -- with Stackelberg
Equilibrium being a refinement of it. (ii) On the algorithmic side, we
introduce a two-stage algorithm with guarantees of convergence, which allows
the leaders to first learn conjectures on a training data set, and then update
their strategies. Theoretical results are illustrated numerically.
|
2501.13687
|
Question Answering on Patient Medical Records with Private Fine-Tuned
LLMs
|
cs.CL cs.AI
|
Healthcare systems continuously generate vast amounts of electronic health
records (EHRs), commonly stored in the Fast Healthcare Interoperability
Resources (FHIR) standard. Despite the wealth of information in these records,
their complexity and volume make it difficult for users to retrieve and
interpret crucial health insights. Recent advances in Large Language Models
(LLMs) offer a solution, enabling semantic question answering (QA) over medical
data, allowing users to interact with their health records more effectively.
However, ensuring privacy and compliance requires edge and private deployments
of LLMs.
This paper proposes a novel approach to semantic QA over EHRs by first
identifying the most relevant FHIR resources for a user query (Task1) and
subsequently answering the query based on these resources (Task2). We explore
the performance of privately hosted, fine-tuned LLMs, evaluating them against
benchmark models such as GPT-4 and GPT-4o. Our results demonstrate that
fine-tuned LLMs, while 250x smaller in size, outperform GPT-4 family models by
0.55% in F1 score on Task1 and by 42% in Meteor score on Task2. Additionally, we
examine advanced aspects of LLM usage, including sequential fine-tuning, model
self-evaluation (narcissistic evaluation), and the impact of training data size
on performance. The models and datasets are available here:
https://huggingface.co/genloop
|
2501.13690
|
Variational U-Net with Local Alignment for Joint Tumor Extraction and
Registration (VALOR-Net) of Breast MRI Data Acquired at Two Different Field
Strengths
|
eess.IV cs.CV
|
Background: Multiparametric breast MRI data might improve tumor diagnostics,
characterization, and treatment planning. Accurate alignment and delineation of
images acquired at different field strengths such as 3T and 7T, remain
challenging research tasks. Purpose: To address alignment challenges and enable
consistent tumor segmentation across different MRI field strengths. Study type:
Retrospective. Subjects: Nine female subjects with breast tumors were involved:
six histologically proven invasive ductal carcinomas (IDC) and three
fibroadenomas. Field strength/sequence: Imaging was performed at 3T and 7T
scanners using post-contrast T1-weighted three-dimensional time-resolved
angiography with stochastic trajectories (TWIST) sequence. Assessments: The
method's performance for joint image registration and tumor segmentation was
evaluated using several quantitative metrics, including peak signal-to-noise
ratio (PSNR), structural similarity index (SSIM), normalized cross-correlation (NCC),
Dice coefficient, F1 score, and relative sum of squared differences (rel SSD).
Statistical tests: The Pearson correlation coefficient was used to test the
relationship between the registration and segmentation metrics. Results: When
calculated for each subject individually, the PSNR was in a range from 27.5 to
34.5 dB, and the SSIM was from 82.6 to 92.8%. The model achieved an NCC from
96.4 to 99.3% and a Dice coefficient of 62.9 to 95.3%. The F1 score was between
55.4 and 93.2%, and the rel SSD was in the range of 2.0 to 7.5%. The
segmentation metrics Dice and F1 score are highly correlated (0.995), while a
moderate correlation between NCC and SSIM (0.681) was found for registration.
Data conclusion: Initial results demonstrate that the proposed method may be
feasible in providing joint tumor segmentation and registration of MRI data
acquired at different field strengths.
|
2501.13692
|
Training-Free Consistency Pipeline for Fashion Repose
|
cs.CV cs.AI cs.SE
|
Recent advancements in diffusion models have significantly broadened the
possibilities for editing images of real-world objects. However, performing
non-rigid transformations, such as changing the pose of objects or image-based
conditioning, remains challenging. Maintaining object identity during these
edits is difficult, and current methods often fall short of the precision
needed for industrial applications, where consistency is critical.
Additionally, fine-tuning diffusion models requires custom training data, which
is not always accessible in real-world scenarios. This work introduces
FashionRepose, a training-free pipeline for non-rigid pose editing specifically
designed for the fashion industry. The approach integrates off-the-shelf models
to adjust poses of long-sleeve garments, maintaining identity and branding
attributes. FashionRepose uses a zero-shot approach to perform these edits in
near real-time, eliminating the need for specialized training. consistent image
editing. The solution holds potential for applications in the fashion industry
and other fields demanding identity preservation in image editing.
|
2501.13697
|
Safety in safe Bayesian optimization and its ramifications for control
|
eess.SY cs.SY stat.ML
|
A recurring and important task in control engineering is parameter tuning
under constraints, which conceptually amounts to optimization of a blackbox
function accessible only through noisy evaluations. For example, in control
practice parameters of a pre-designed controller are often tuned online in
feedback with a plant, and only safe parameter values should be tried, avoiding
for example instability. Recently, machine learning methods have been deployed
for this important problem, in particular, Bayesian optimization (BO). To
handle safety constraints, algorithms from safe BO have been utilized,
especially SafeOpt-type algorithms, which enjoy considerable popularity in
learning-based control, robotics, and adjacent fields. However, we identify two
significant obstacles to practical safety. First, SafeOpt-type algorithms rely
on quantitative uncertainty bounds, and most implementations replace these by
theoretically unsupported heuristics. Second, the theoretically valid
uncertainty bounds crucially depend on a quantity - the reproducing kernel
Hilbert space norm of the target function - that at present is impossible to
reliably bound using established prior engineering knowledge. By careful
numerical experiments we show that these issues can indeed cause safety
violations. To overcome these problems, we propose Lipschitz-only Safe Bayesian
Optimization (LoSBO), a safe BO algorithm that relies only on a known Lipschitz
bound for its safety. Furthermore, we propose a variant (LoS-GP-UCB) that
avoids gridding of the search space and is therefore applicable even for
moderately high-dimensional problems.
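The Lipschitz-only safety certificate underlying LoSBO can be sketched as
follows; this is an illustrative version, and the exact noise handling in the
paper may differ:

```python
import numpy as np

def certified_safe(candidates, X_eval, y_eval, L, threshold, noise_bound=0.0):
    """A candidate x is certified safe if some evaluated point x_i guarantees
    f(x) >= (y_i - noise_bound) - L * ||x - x_i|| >= threshold."""
    flags = []
    for x in candidates:
        dists = np.linalg.norm(X_eval - x, axis=1)
        lower = np.max(y_eval - noise_bound - L * dists)
        flags.append(bool(lower >= threshold))
    return np.array(flags)
```

Unlike RKHS-norm-based bounds, this certificate requires only a known Lipschitz
constant and a bound on the observation noise, which is the point the abstract
makes about practical safety.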
|
2501.13698
|
The First Indoor Pathloss Radio Map Prediction Challenge
|
eess.SP cs.LG
|
To encourage further research and to facilitate fair comparisons in the
development of deep learning-based radio propagation models, in the less
explored case of directional radio signal emissions in indoor propagation
environments, we have launched the ICASSP 2025 First Indoor Pathloss Radio Map
Prediction Challenge. This overview paper describes the indoor path loss
prediction problem, the datasets used, the Challenge tasks, and the evaluation
methodology. Finally, the results of the Challenge and a summary of the
submitted methods are presented.
|
2501.13699
|
DI-BENCH: Benchmarking Large Language Models on Dependency Inference
with Testable Repositories at Scale
|
cs.CL cs.SE
|
Large Language Models have advanced automated software development; however,
it remains a challenge to correctly infer dependencies, namely, to identify the
internal components and external packages required for a repository to
run successfully. Existing studies highlight that dependency-related issues
cause over 40\% of observed runtime errors in generated repositories. To
address this, we introduce DI-BENCH, a large-scale benchmark and evaluation
framework specifically designed to assess LLMs' capability on dependency
inference. The benchmark features 581 repositories with testing environments
across Python, C#, Rust, and JavaScript. Extensive experiments with textual and
execution-based metrics reveal that the current best-performing model achieves
only a 42.9% execution pass rate, indicating significant room for improvement.
DI-BENCH establishes a new viewpoint for evaluating LLM performance on
repositories, paving the way for more robust end-to-end software synthesis.
|
2501.13703
|
GenTL: A General Transfer Learning Model for Building Thermal Dynamics
|
eess.SY cs.LG cs.SY
|
Transfer Learning (TL) is an emerging field in modeling building thermal
dynamics. This method reduces the data required for a data-driven model of a
target building by leveraging knowledge from a source building. Consequently,
it enables the creation of data-efficient models that can be used for advanced
control and fault detection & diagnosis. A major limitation of the TL approach
is its inconsistent performance across different sources. Although accurate
source-building selection for a target is crucial, it remains a persistent
challenge.
We present GenTL, a general transfer learning model for single-family houses
in Central Europe. GenTL can be efficiently fine-tuned to a large variety of
target buildings. It is pretrained on a Long Short-Term Memory (LSTM) network
with data from 450 different buildings. The general transfer learning model
eliminates the need for source-building selection by serving as a universal
source for fine-tuning. Comparative analysis with conventional single-source to
single-target TL demonstrates the efficacy and reliability of the general
pretraining approach. Testing GenTL on 144 target buildings for fine-tuning
reveals an average prediction error (RMSE) reduction of 42.1 % compared to
fine-tuning single-source models.
|
2501.13704
|
A real-time battle situation intelligent awareness system based on
Meta-learning & RNN
|
cs.LG cs.NA math.NA
|
In modern warfare, real-time and accurate battle situation analysis is
crucial for making strategic and tactical decisions. The proposed real-time
battle situation intelligent awareness system (BSIAS) combines meta-learning
analysis with stepwise RNN (recurrent neural network) modeling: the former
carries out the basic processing and analysis of battlefield data, including
steps such as data cleansing, data fusion, data mining, and continuous
updating, while the latter optimizes the battlefield model by stepwise
capturing the temporal dependencies in the dataset. Taking a simulated battle
as an example, BSIAS can predict possible movements and attack routes from
either side, serving as an intelligent support platform for commanders to make
scientific decisions during wartime. This work demonstrates the potential
application of an integrated BSIAS in the field of battlefield command and
analysis engineering.
|
2501.13706
|
Analysis of Eccentric Coaxial Waveguides Filled with Lossy Anisotropic
Media via Finite Difference
|
cs.CE physics.comp-ph
|
This study presents a finite difference method (FDM) to model the
electromagnetic field propagation in eccentric coaxial waveguides filled with
lossy uniaxially anisotropic media. The formulation utilizes conformal
transformation to map the eccentric circular waveguide into an equivalent
concentric one. In the concentric problem, we introduce a novel normalized
Helmholtz equation to decouple TM and TE modes, and we solve this
non-homogeneous partial differential equation using the finite difference in
cylindrical coordinates. The proposed approach was validated against
perturbation-based, spectral element-based, and finite-integration-based
numerical solutions. Preliminary results show that our solution is superior in
terms of computational time. Furthermore, our FDM formulation can be extended with
minimal adaptations to model complex media problems, such as metamaterial
devices, optical fibers, and geophysical exploration sensors.
|
2501.13707
|
EventVL: Understand Event Streams via Multimodal Large Language Model
|
cs.CV cs.AI
|
Event-based Vision-Language Models (VLMs) have recently made good progress
on practical vision tasks. However, most of these works merely utilize CLIP and
focus on traditional perception tasks, which prevents the models from
explicitly understanding the rich semantics and context of event streams. To
address this deficiency, we propose EventVL, the first generative event-based
MLLM (Multimodal Large Language Model) framework for explicit semantic
understanding. Specifically, to bridge the data gap in connecting the semantics
of different modalities, we first annotate a large event-image/video-text
dataset containing almost 1.4 million high-quality data pairs, which enables
effective learning across various scenes, e.g., driving scenes or human motion.
After that, we design Event Spatiotemporal Representation to fully explore the
comprehensive information by diversely aggregating and segmenting the event
stream. To further promote a compact semantic space, Dynamic Semantic Alignment
is introduced to improve and complete sparse semantic spaces of events.
Extensive experiments show that our EventVL can significantly surpass existing
MLLM baselines in event captioning and scene description generation tasks. We
hope our research could contribute to the development of the event vision
community.
|
2501.13709
|
Regularizing cross entropy loss via minimum entropy and K-L divergence
|
cs.CV cs.LG
|
I introduce two novel loss functions for classification in deep learning. The
two loss functions extend standard cross entropy loss by regularizing it with
minimum entropy and Kullback-Leibler (K-L) divergence terms. The first of the
two novel loss functions is termed mixed entropy loss (MIX-ENT for short),
while the second one is termed minimum entropy regularized cross-entropy loss
(MIN-ENT for short). The MIX-ENT function introduces a regularizer that can be
shown to be equivalent to the sum of a minimum entropy term and a K-L
divergence term. However, it should be noted that the K-L divergence term here
is different from that in the standard cross-entropy loss function, in the
sense that it swaps the roles of the target probability and the hypothesis
probability. The MIN-ENT function simply adds a minimum entropy regularizer to
the standard cross entropy loss function. In both MIX-ENT and MIN-ENT, the
minimum entropy regularizer minimizes the entropy of the hypothesis probability
distribution which is output by the neural network. Experiments on the
EMNIST-Letters dataset show that my implementation of MIX-ENT and MIN-ENT lets
the VGG model climb from its previous 3rd position on the paperswithcode
leaderboard to reach the 2nd position on the leaderboard, outperforming the
Spinal-VGG model in so doing. Specifically, using standard cross-entropy, VGG
achieves 95.86% while Spinal-VGG achieves 95.88% classification accuracies,
whereas using VGG (without Spinal-VGG) our MIN-ENT achieved 95.933%, while our
MIX-ENT achieved 95.927% accuracies. The pre-trained models for both MIX-ENT
and MIN-ENT are at https://github.com/rahmanoladi/minimum entropy project.
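The MIN-ENT idea, standard cross entropy plus an entropy penalty on the
network's output distribution, can be sketched as follows; the regularization
weight is an assumed hyperparameter, not a value from the abstract:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def min_ent_loss(logits, targets, lam=0.1, eps=1e-12):
    """Cross entropy plus lam * entropy of the predicted distribution."""
    q = softmax(logits)
    ce = -np.mean(np.log(q[np.arange(len(targets)), targets] + eps))
    ent = -np.mean(np.sum(q * np.log(q + eps), axis=1))
    return ce + lam * ent
```

MIX-ENT differs in that its regularizer is equivalent to the minimum entropy
term plus a K-L divergence with the roles of target and hypothesis
probabilities swapped.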
|
2501.13710
|
YOLO11-JDE: Fast and Accurate Multi-Object Tracking with Self-Supervised
Re-ID
|
cs.CV cs.AI
|
We introduce YOLO11-JDE, a fast and accurate multi-object tracking (MOT)
solution that combines real-time object detection with self-supervised
Re-Identification (Re-ID). By incorporating a dedicated Re-ID branch into
YOLO11s, our model performs Joint Detection and Embedding (JDE), generating
appearance features for each detection. The Re-ID branch is trained in a fully
self-supervised setting while simultaneously training for detection,
eliminating the need for costly identity-labeled datasets. The triplet loss,
with hard positive and semi-hard negative mining strategies, is used for
learning discriminative embeddings. Data association is enhanced with a custom
tracking implementation that successfully integrates motion, appearance, and
location cues. YOLO11-JDE achieves competitive results on MOT17 and MOT20
benchmarks, surpassing existing JDE methods in terms of FPS while using up to
ten times fewer parameters, making our method a highly attractive solution
for real-world applications.
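The mining strategy described, hardest positives and semi-hard negatives for
the triplet loss, can be sketched as follows; the distance metric and margin
value are illustrative assumptions:

```python
import numpy as np

def mined_triplet_loss(anchor, positives, negatives, margin=0.3):
    """Triplet loss using the hardest (farthest) positive and a semi-hard
    negative: farther than the positive but within the margin band."""
    d_pos = np.linalg.norm(positives - anchor, axis=1).max()
    d_neg_all = np.linalg.norm(negatives - anchor, axis=1)
    band = d_neg_all[(d_neg_all > d_pos) & (d_neg_all < d_pos + margin)]
    d_neg = band.min() if band.size else d_neg_all.min()  # fall back to hardest
    return max(0.0, d_pos - d_neg + margin)
```

Semi-hard negatives keep gradients informative without collapsing training the
way purely hardest-negative mining sometimes does.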
|
2501.13712
|
Formally Verified Neurosymbolic Trajectory Learning via Tensor-based
Linear Temporal Logic on Finite Traces
|
cs.AI cs.LG cs.LO
|
We present a novel formalisation of tensor semantics for linear temporal
logic on finite traces (LTLf), with formal proofs of correctness carried out in
the theorem prover Isabelle/HOL. We demonstrate that this formalisation can be
integrated into a neurosymbolic learning process by defining and verifying a
differentiable loss function for the LTLf constraints, and automatically
generating an implementation that integrates with PyTorch. We show that, by
using this loss, the process learns to satisfy pre-specified logical
constraints. Our approach offers a fully rigorous framework for constrained
training, eliminating many of the inherent risks of ad-hoc, manual
implementations of logical aspects directly in an "unsafe" programming language
such as Python, while retaining efficiency in implementation.
|
2501.13713
|
Skin Disease Detection and Classification of Actinic Keratosis and
Psoriasis Utilizing Deep Transfer Learning
|
cs.CV cs.AI
|
Skin diseases can arise from infections, allergies, genetic factors,
autoimmune disorders, hormonal imbalances, or environmental triggers such as
sun damage and pollution. Some skin diseases, such as Actinic Keratosis and
Psoriasis, can be fatal if not treated in time. Early identification is
crucial, but the diagnostic methods for these conditions are often expensive
and not widely accessible. In this study, we propose a novel and efficient
method for diagnosing skin diseases using deep learning techniques. This
approach employs a modified VGG16 Convolutional Neural Network (CNN) model. The
model includes several convolutional layers and utilizes ImageNet weights with
modified top layers. The top layer is updated with fully connected layers and a
final softmax activation layer to classify skin diseases. The dataset used,
titled "Skin Disease Dataset," is publicly available. While the VGG16
architecture does not include data augmentation by default, preprocessing
techniques such as rotation, shifting, and zooming were applied to augment the
data prior to model training. The proposed methodology achieved 90.67% accuracy
using the modified VGG16 model, demonstrating its reliability in classifying
skin diseases. The promising results highlight the potential of this approach
for real-world applications.
|
2501.13718
|
A Mutual Information Perspective on Multiple Latent Variable Generative
Models for Positive View Generation
|
cs.CV
|
In image generation, Multiple Latent Variable Generative Models (MLVGMs)
employ multiple latent variables to gradually shape the final images, from
global characteristics to finer and local details (e.g., StyleGAN, NVAE),
emerging as powerful tools for diverse applications. Yet their generative
dynamics and latent variable utilization remain only empirically observed. In
this work, we propose a novel framework to systematically quantify the impact
of each latent variable in MLVGMs, using Mutual Information (MI) as a guiding
metric. Our analysis reveals underutilized variables and can guide the use of
MLVGMs in downstream applications.
With this foundation, we introduce a method for generating synthetic data for
Self-Supervised Contrastive Representation Learning (SSCRL). By leveraging the
hierarchical and disentangled variables of MLVGMs, and guided by the previous
analysis, we apply tailored latent perturbations to produce diverse views for
SSCRL without relying on real data at all.
Additionally, we introduce a Continuous Sampling (CS) strategy, where the
generator dynamically creates new samples during SSCRL training, greatly
increasing data variability. Our comprehensive experiments demonstrate the
effectiveness of these contributions, showing that MLVGMs' generated views
compete on par with or even surpass views generated from real data.
This work establishes a principled approach to understanding and exploiting
MLVGMs, advancing both generative modeling and self-supervised learning.
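The view-generation idea above can be sketched in a few lines: an anchor sample's per-level latent codes are perturbed with level-specific strengths. The three-level layout and the scale values below are illustrative assumptions, standing in for strengths that the paper's MI analysis would supply.

```python
import numpy as np

def perturbed_view(latents, scales, rng):
    """Return a perturbed copy of a list of per-level latent codes."""
    return [z + s * rng.standard_normal(z.shape)
            for z, s in zip(latents, scales)]

rng = np.random.default_rng(0)
anchor = [rng.standard_normal(4) for _ in range(3)]   # coarse -> fine levels
# Perturb coarse levels gently (preserving global content) and fine levels
# more strongly (varying details), producing a positive view for SSCRL.
view = perturbed_view(anchor, scales=[0.05, 0.2, 0.5], rng=rng)
```

Feeding `view` through the generator would yield an image that shares global semantics with the anchor, which is exactly what a contrastive positive pair requires.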
|
2501.13720
|
Musical ethnocentrism in Large Language Models
|
cs.CL cs.AI cs.SD eess.AS
|
Large Language Models (LLMs) reflect the biases in their training data and,
by extension, those of the people who created this training data. Detecting,
analyzing, and mitigating such biases is becoming a focus of research. One type
of bias that has been understudied so far is geocultural bias, which can be
caused by an imbalance in the representation of different geographic regions
and cultures in the training data, but also by value judgments contained
therein. In this paper, we make a first step towards analyzing musical biases
in LLMs, particularly ChatGPT and Mixtral. We conduct two experiments. In the
first, we prompt LLMs to provide lists of the "Top 100" musical contributors of
various categories and analyze their countries of origin. In the second
experiment, we ask the LLMs to numerically rate various aspects of the musical
cultures of different countries. Our results indicate a strong preference of
the LLMs for Western music cultures in both experiments.
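The first experiment reduces to counting countries of origin in an elicited "Top 100" list. The five-entry answer below is a made-up stand-in for a real model response, used only to show the tallying step.

```python
from collections import Counter

# Hypothetical parsed LLM answer to "List the Top 100 composers", mapped to
# each contributor's country of origin as in the paper's first experiment.
answer = [("Beethoven", "Germany"), ("Miles Davis", "USA"),
          ("Ravi Shankar", "India"), ("Mozart", "Austria"),
          ("Bob Dylan", "USA")]

counts = Counter(country for _, country in answer)
western = {"Germany", "USA", "Austria"}               # illustrative grouping
western_share = sum(counts[c] for c in western) / len(answer)
```

On real model outputs, `western_share` is the kind of summary statistic behind the paper's finding of a strong Western preference.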
|
2501.13724
|
Dual-Domain Exponent of Maximum Mutual Information Decoding
|
cs.IT math.IT math.PR
|
This paper provides a dual domain derivation of the error exponent of maximum
mutual information (MMI) decoding with constant composition codes, showing it
coincides with that of maximum likelihood decoding for discrete memoryless
channels. The analysis is further extended to joint source-channel coding,
demonstrating that the generalized MMI decoder achieves the same random coding
error exponent as the maximum a posteriori decoder.
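For context, the maximum-likelihood random coding exponent that MMI decoding is shown to match is, in Gallager's classical form (a standard result stated here for reference, not taken from the paper):

```latex
E_r(R) = \max_{P} \max_{0 \le \rho \le 1} \bigl[ E_0(\rho, P) - \rho R \bigr],
\qquad
E_0(\rho, P) = -\log \sum_{y} \Bigl( \sum_{x} P(x)\, W(y \mid x)^{1/(1+\rho)} \Bigr)^{1+\rho},
```

where $W$ is the discrete memoryless channel law, $P$ an input distribution, and $R$ the rate.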
|
2501.13725
|
You Only Crash Once v2: Perceptually Consistent Strong Features for
One-Stage Domain Adaptive Detection of Space Terrain
|
cs.CV cs.AI cs.LG cs.RO
|
The in-situ detection of planetary, lunar, and small-body surface terrain is
crucial for autonomous spacecraft applications, where learning-based computer
vision methods are increasingly employed to enable intelligence without prior
information or human intervention. However, many of these methods remain
computationally expensive for spacecraft processors and prevent real-time
operation. Training of such algorithms is additionally complex due to the
scarcity of labeled data and reliance on supervised learning approaches.
Unsupervised Domain Adaptation (UDA) offers a promising solution by
facilitating model training with disparate data sources such as simulations or
synthetic scenes, although UDA is difficult to apply to celestial environments
where challenging feature spaces are paramount. To alleviate such issues, You
Only Crash Once (YOCOv1) has studied the integration of Visual Similarity-based
Alignment (VSA) into lightweight one-stage object detection architectures to
improve space terrain UDA. Although proven effective, the approach faces
notable limitations, including performance degradations in multi-class and
high-altitude scenarios. Building upon the foundation of YOCOv1, we propose
novel additions to the VSA scheme that enhance terrain detection capabilities
under UDA, and our approach is evaluated across both simulated and real-world
data. Our second YOCO rendition, YOCOv2, is capable of achieving
state-of-the-art UDA performance on surface terrain detection, where we
showcase improvements upwards of 31% compared with YOCOv1 and terrestrial
state-of-the-art. We demonstrate the practical utility of YOCOv2 with
spacecraft flight hardware performance benchmarking and qualitative evaluation
of NASA mission data.
|
2501.13726
|
RPO: Retrieval Preference Optimization for Robust Retrieval-Augmented
Generation
|
cs.CL
|
While Retrieval-Augmented Generation (RAG) has exhibited promise in utilizing
external knowledge, its generation process heavily depends on the quality and
accuracy of the retrieved context. Large language models (LLMs) struggle to
evaluate the correctness of non-parametric knowledge retrieved externally when
it differs from internal memorization, leading to knowledge conflicts during
response generation. To this end, we introduce Retrieval Preference
Optimization (RPO), a lightweight and effective alignment method to adaptively
leverage multi-source knowledge based on retrieval relevance. An implicit
representation of retrieval relevance is derived and incorporated into the
reward model to integrate retrieval evaluation and response generation into a
single model, eliminating the additional procedure that previous methods
require to assess retrieval quality. Notably, RPO is the only
RAG-dedicated alignment approach that quantifies the awareness of retrieval
relevance in training, overcoming mathematical obstacles. Experiments on four
datasets demonstrate that RPO outperforms RAG by 4-10% in accuracy without any
extra component, exhibiting its robust generalization.
|
2501.13727
|
Scalable Safe Multi-Agent Reinforcement Learning for Multi-Agent Systems
|
cs.MA cs.AI
|
Safety and scalability are two critical challenges faced by practical
Multi-Agent Systems (MAS). However, existing Multi-Agent Reinforcement Learning
(MARL) algorithms that rely solely on reward shaping are ineffective in
ensuring safety, and their scalability is rather limited due to the fixed-size
network output. To address these issues, we propose a novel framework, Scalable
Safe MARL (SS-MARL), to enhance the safety and scalability of MARL methods.
Leveraging the inherent graph structure of MAS, we design a multi-layer message
passing network to aggregate local observations and communications of varying
sizes. Furthermore, we develop a constrained joint policy optimization method
in the setting of local observation to improve safety. Simulation experiments
demonstrate that SS-MARL achieves a better trade-off between optimality and
safety compared to baselines, and its scalability significantly outperforms the
latest methods in scenarios with a large number of agents. The feasibility of
our method is also verified by hardware implementation with Mecanum-wheeled
vehicles.
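The key structural property, aggregation over neighbourhoods of varying size, can be shown with a toy mean-aggregation step; the real SS-MARL network is multi-layer and learned, so this single untrained layer only illustrates the size-invariance that avoids a fixed-size output.

```python
import numpy as np

def aggregate(obs, neighbors):
    """Combine each agent's observation with its neighbours' by averaging,
    handling neighbourhoods of any size (including empty ones)."""
    return np.stack([
        np.mean([obs[i]] + [obs[j] for j in neighbors[i]], axis=0)
        for i in range(len(obs))
    ])

obs = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]])
messages = aggregate(obs, neighbors={0: [1], 1: [0, 2], 2: []})
```

Because the output per agent has a fixed dimension regardless of how many neighbours contributed, the same policy head scales to any number of agents.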
|
2501.13731
|
Pseudocode-Injection Magic: Enabling LLMs to Tackle Graph Computational
Tasks
|
cs.CL cs.AI
|
Graph computational tasks are inherently challenging and often demand the
development of advanced algorithms for effective solutions. With the emergence
of large language models (LLMs), researchers have begun investigating their
potential to address these tasks. However, existing approaches are constrained
by LLMs' limited capability to comprehend complex graph structures and their
high inference costs, rendering them impractical for handling large-scale
graphs. Inspired by human approaches to graph problems, we introduce a novel
framework, PIE (Pseudocode-Injection-Enhanced LLM Reasoning for Graph
Computational Tasks), which consists of three key steps: problem understanding,
prompt design, and code generation. In this framework, LLMs are tasked with
understanding the problem and extracting relevant information to generate
correct code. The responsibility for analyzing the graph structure and
executing the code is delegated to the interpreter. We inject task-related
pseudocode into the prompts to further assist the LLMs in generating efficient
code. We also employ cost-effective trial-and-error techniques to ensure that
the LLM-generated code executes correctly. Unlike other methods that require
invoking LLMs for each individual test case, PIE only calls the LLM during the
code generation phase, allowing the generated code to be reused and
significantly reducing inference costs. Extensive experiments demonstrate that
PIE outperforms existing baselines in terms of both accuracy and computational
efficiency.
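The cost-saving structure of PIE, one LLM call to produce code, then reuse of that code across all test cases, can be sketched as follows. The `llm_generate` function is a hypothetical stand-in stubbed with a canned answer; the pseudocode hint and edge-counting task are illustrative.

```python
# Sketch of the PIE idea: the LLM is consulted once for code (stubbed here),
# and the compiled function is reused on every test graph, so no per-case
# LLM calls are needed.

def llm_generate(prompt):                       # hypothetical one-shot LLM call
    return (
        "def solve(adj):\n"
        "    # number of edges in an undirected graph\n"
        "    return sum(len(nbrs) for nbrs in adj.values()) // 2\n"
    )

PSEUDOCODE = "for each vertex v: count += deg(v); return count / 2"
code = llm_generate(f"Task: count edges.\nPseudocode hint:\n{PSEUDOCODE}")

namespace = {}
exec(code, namespace)                           # compile once ...
solve = namespace["solve"]

graphs = [{0: [1], 1: [0, 2], 2: [1]},          # ... reuse on every test case
          {0: [1, 2], 1: [0, 2], 2: [0, 1]}]
edge_counts = [solve(g) for g in graphs]
```

In the full framework the interpreter also runs trial-and-error checks on the generated code before it is trusted for reuse.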
|
2501.13732
|
A dimensionality reduction technique based on the Gromov-Wasserstein
distance
|
stat.ML cs.LG
|
Analyzing relationships between objects is a pivotal problem within data
science. In this context, dimensionality reduction (DR) techniques are employed
to generate smaller and more manageable data representations. This paper
proposes a new method for dimensionality reduction, based on optimal
transportation theory and the Gromov-Wasserstein distance. We offer a new
probabilistic view of the classical Multidimensional Scaling (MDS) algorithm
and of Isomap (Isometric Feature Mapping), the nonlinear DR algorithm that
extends classical MDS, in which we use the Gromov-Wasserstein distance between
the probability measure of the high-dimensional data and that of its
low-dimensional representation. Through gradient
descent, our method embeds high-dimensional data into a lower-dimensional
space, providing a robust and efficient solution for analyzing complex
high-dimensional datasets.
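A stripped-down version of the gradient-descent embedding can be written with a fixed identity coupling, which reduces the Gromov-Wasserstein objective to an MDS-style stress between squared pairwise distances; the paper's full method also optimizes the coupling, so this is a simplification under that assumption.

```python
import numpy as np

def sq_dists(Y):
    """Matrix of squared pairwise Euclidean distances."""
    G = Y @ Y.T
    n = np.diag(G)
    return n[:, None] + n[None, :] - 2 * G

def gw_embed(X, dim=2, steps=500, lr=1e-3, seed=0):
    """Embed X into `dim` dimensions by gradient descent on the discrepancy
    between squared distance matrices (fixed-coupling special case)."""
    rng = np.random.default_rng(seed)
    DX = sq_dists(X)
    Y = 0.1 * rng.standard_normal((X.shape[0], dim))
    for _ in range(steps):
        R = sq_dists(Y) - DX                      # residual per pair
        # gradient of sum_{ij} R_ij^2 with respect to the rows of Y
        grad = 8 * (R.sum(axis=1)[:, None] * Y - R @ Y)
        Y -= lr * grad
    return Y
```

The loss is a quartic in `Y`, so a small fixed learning rate keeps plain gradient descent stable on modest point sets.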
|