| id | title | categories | abstract |
|---|---|---|---|
2501.11132
|
Advanced technology in railway track monitoring using the GPR Technique:
A Review
|
cs.LG cs.CV eess.IV
|
Subsurface evaluation of railway tracks is crucial for safe operation, as it
allows for the early detection and remediation of potential structural
weaknesses or defects that could lead to accidents or derailments. Ground
Penetrating Radar (GPR) is an electromagnetic survey technique and an advanced
non-destructive testing (NDT) technology that can be used to monitor railway tracks.
This technology is well-suited for railway applications due to the sub-layered
composition of the track, which includes ties, ballast, sub-ballast, and
subgrade regions. It can detect defects such as ballast pockets, fouled
ballast, poor drainage, and subgrade settlement. This paper reviews recent work
on advanced technology and on the interpretation of GPR data collected for
different layers. It further surveys current techniques that use synthetic
modeling to calibrate real-world GPR data, enhancing accuracy in identifying
subsurface features such as ballast conditions and structural anomalies, and
that apply various algorithms to refine GPR data analysis. These
include Support Vector Machine (SVM) for classifying railway ballast types,
Fuzzy C-means, and Generalized Regression Neural Networks for high-accuracy
defect classification. Deep learning techniques, particularly Convolutional
Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are also
highlighted for their effectiveness in recognizing patterns associated with
defects in GPR images. The article specifically focuses on the development of a
Convolutional Recurrent Neural Network (CRNN) model, which combines CNN and RNN
architectures for efficient processing of GPR data. This model demonstrates
enhanced detection capabilities and faster processing compared to traditional
object detection models like Faster R-CNN.
|
2501.11133
|
A Simultaneous Decoding Approach to Joint State and Message
Communications
|
cs.IT math.IT
|
The capacity-distortion (C-D) trade-offs for joint state and message
communications (JSMC) over single- and multi-user channels are investigated,
where the transmitters have access to generalized state information and
feedback while the receivers jointly decode the messages and estimate the
channel state. A coding scheme is proposed based on backward simultaneous
decoding of messages and compressed state descriptions without the need for the
Wyner-Ziv random binning technique. For the point-to-point channel, the
proposed scheme results in the optimal C-D function. For the state-dependent
discrete memoryless degraded broadcast channel (SD-DMDBC), the successive
refinement method is adopted for designing multi-stage state descriptions. With
the simultaneous decoding approach, the derived achievable region is shown to
be larger than the region obtained by the sequential decoding approach that is
utilized in existing works. As for the state-dependent discrete memoryless
multiple access channel (SD-DMMAC), in addition to the proposed method,
Willems' coding strategy is applied to enable partial collaboration between
transmitters through the feedback links. Moreover, the state descriptions are
shown to enhance both communication and state estimation performance. Examples
are provided for the derived results to verify the analysis, either numerically
or analytically. With particular focus, simple but representative integrated
sensing and communications (ISAC) systems are also considered, and their
fundamental performance limits are studied.
|
2501.11135
|
Playing the Lottery With Concave Regularizers for Sparse Trainable
Neural Networks
|
cs.LG cs.AI
|
The design of sparse neural networks, i.e., of networks with a reduced number
of parameters, has been attracting increasing research attention in the last
few years. The use of sparse models may significantly reduce the computational
and storage footprint in the inference phase. In this context, the lottery
ticket hypothesis (LTH) constitutes a breakthrough result that addresses not
only the performance of the inference phase but also that of the training phase. It
states that it is possible to extract effective sparse subnetworks, called
winning tickets, that can be trained in isolation. The development of effective
methods to play the lottery, i.e., to find winning tickets, is still an open
problem. In this article, we propose a novel class of methods to play the
lottery. The key point is the use of concave regularization to promote the
sparsity of a relaxed binary mask, which represents the network topology. We
theoretically analyze the effectiveness of the proposed method in the convex
framework. Then, we present extensive numerical tests on various datasets and
architectures, showing that the proposed method can improve on the performance
of state-of-the-art algorithms.
|
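As a minimal, hypothetical illustration of the idea in this abstract (not the authors' algorithm): a concave penalty such as `log(1 + |m|/eps)` applied to a relaxed binary mask penalizes small mask entries almost like L1, but its gradient flattens for large entries, so important weights are barely shrunk while near-zero entries are driven exactly to zero. All values below are toy data.

```python
import numpy as np

def concave_penalty(m, eps=0.1):
    """Concave sparsity penalty sum(log(1 + |m|/eps)); flatter than L1 for large m."""
    return np.sum(np.log(1.0 + np.abs(m) / eps))

def penalty_grad(m, eps=0.1):
    """Gradient of the concave penalty w.r.t. the relaxed mask m."""
    return np.sign(m) / (eps + np.abs(m))

def sparsify_mask(m, lam=0.05, lr=0.1, steps=20, eps=0.1):
    """Gradient descent on the penalty alone: small entries collapse to zero
    while large (presumably important) entries shrink only slightly."""
    m = m.astype(float).copy()
    for _ in range(steps):
        g = lam * penalty_grad(m, eps)
        # clip so a step never overshoots through zero
        step = np.clip(lr * g, -np.abs(m), np.abs(m))
        m -= step
    return m

m0 = np.array([0.9, 0.02, -0.01, 0.7])   # toy relaxed mask
m = sparsify_mask(m0)
```

After a few steps the tiny entries are exactly zero while the large ones retain most of their magnitude, which is the qualitative behavior that distinguishes concave regularizers from plain L1 shrinkage.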
2501.11136
|
A Novel Switch-Type Policy Network for Resource Allocation Problems:
Technical Report
|
cs.LG cs.SY eess.SY
|
Deep Reinforcement Learning (DRL) has become a powerful tool for developing
control policies in queueing networks, but the common use of Multi-layer
Perceptron (MLP) neural networks in these applications has significant
drawbacks. MLP architectures, while versatile, often suffer from poor sample
efficiency and a tendency to overfit training environments, leading to
suboptimal performance on new, unseen networks. In response to these issues, we
introduce a switch-type neural network (STN) architecture designed to improve
the efficiency and generalization of DRL policies in queueing networks. The STN
leverages structural patterns from traditional non-learning policies, ensuring
consistent action choices across similar states. This design not only
streamlines the learning process but also fosters better generalization by
reducing the tendency to overfit. Our work presents three key contributions:
first, the development of the STN as a more effective alternative to MLPs;
second, empirical evidence showing that STNs achieve superior sample efficiency
in various training scenarios; and third, experimental results demonstrating
that STNs match MLP performance in familiar environments and significantly
outperform them in new settings. By embedding domain-specific knowledge, the
STN enhances the Proximal Policy Optimization (PPO) algorithm's effectiveness
without compromising performance, suggesting its suitability for a wide range
of queueing network control problems.
|
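A hedged sketch of what a "switch-type" structure can mean in queueing control (the STN's internal design is not specified in this abstract; the function and parameters below are hypothetical): the policy serves queue 1 exactly when its length exceeds a monotone switching curve in the other queue's length, so action choices are consistent across similar states by construction.

```python
import numpy as np

def switch_policy(q1, q2, theta):
    """Serve queue 1 iff q1 exceeds a monotone threshold in q2.
    theta holds non-negative increments, so the switching curve
    t(q2) = cumsum(|theta|)[q2] is non-decreasing by construction."""
    thresholds = np.cumsum(np.abs(theta))       # monotone switching curve
    t = thresholds[min(q2, len(thresholds) - 1)]
    return 1 if q1 > t else 2                   # action: which queue to serve

theta = np.array([1.0, 0.5, 0.5, 1.0])          # hypothetical learned increments
a_low = switch_policy(q1=1, q2=0, theta=theta)  # below the curve: serve queue 2
a_high = switch_policy(q1=5, q2=0, theta=theta) # above the curve: serve queue 1
```

Parameterizing the curve rather than the raw state-to-action map is one way such an architecture can embed the structural prior of classical non-learning policies.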
2501.11139
|
Community Detection for Contextual-LSBM: Theoretical Limitations of
Misclassification Rate and Efficient Algorithms
|
stat.ML cs.LG
|
The integration of network information and node attribute information has
recently gained significant attention in the community detection literature. In
this work, we consider community detection in the Contextual Labeled Stochastic
Block Model (CLSBM), where the network follows an LSBM and node attributes
follow a Gaussian Mixture Model (GMM). Our primary focus is the
misclassification rate, which measures the expected number of nodes
misclassified by community detection algorithms. We first establish a lower
bound on the optimal misclassification rate that holds for any algorithm. When
we specialize our setting to the LSBM (which preserves only network
information) or the GMM (which preserves only node attribute information), our
lower bound recovers prior results. Moreover, we present an efficient
spectral-based algorithm tailored for the CLSBM and derive an upper bound on
its misclassification rate. Although the algorithm does not attain the lower
bound, it serves as a reliable starting point for designing more accurate
community detection algorithms (as many algorithms use a spectral method as an
initial step, followed by refinement procedures to enhance accuracy).
|
2501.11140
|
CLOFAI: A Dataset of Real And Fake Image Classification Tasks for
Continual Learning
|
cs.CV cs.AI
|
The rapid advancement of generative AI models capable of creating realistic
media has led to a need for classifiers that can accurately distinguish between
genuine and artificially-generated images. A significant challenge for these
classifiers emerges when they encounter images from generative models that are
not represented in their training data, usually resulting in diminished
performance. A typical approach is to periodically update the classifier's
training data with images from the new generative models and then retrain the
classifier on the updated dataset. However, in some real-life scenarios,
storage, computational, or privacy constraints render this approach
impractical. Additionally, models used in security applications may be required
to rapidly adapt. In these circumstances, continual learning provides a
promising alternative, as the classifier can be updated without retraining on
the entire dataset. In this paper, we introduce a new dataset called CLOFAI
(Continual Learning On Fake and Authentic Images), which takes the form of a
domain-incremental image classification problem. Moreover, we showcase the
applicability of this dataset as a benchmark for evaluating continual learning
methodologies. In doing this, we set a baseline on our novel dataset using
three foundational continual learning methods -- EWC, GEM, and Experience
Replay -- and find that EWC performs poorly, while GEM and Experience Replay
show promise, performing significantly better than a Naive baseline. The
dataset and code to run the experiments can be accessed from the following
GitHub repository: https://github.com/Will-Doherty/CLOFAI.
|
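Of the three baselines named above, Experience Replay is the simplest to sketch. The buffer below is a generic reservoir-sampling memory, offered as an assumption-laden illustration rather than the CLOFAI experiment code: it keeps a bounded, roughly uniform sample of past tasks' examples for rehearsal alongside new data.

```python
import random

class ReplayBuffer:
    """Reservoir-style memory: keeps a bounded uniform sample of everything
    seen so far, so the classifier can rehearse old tasks without storing
    the entire dataset."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # reservoir sampling: each seen example survives with
            # probability capacity / seen
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        return self.rng.sample(self.memory, min(k, len(self.memory)))

buf = ReplayBuffer(capacity=10)
for i in range(100):            # stream of (hypothetical) labeled images
    buf.add(("img", i))
batch = buf.sample(4)           # rehearsal batch mixed into each training step
```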
2501.11141
|
Kilometer-Scale E3SM Land Model Simulation over North America
|
cs.CE
|
The development of a kilometer-scale E3SM Land Model (km-scale ELM) is an
integral part of the E3SM project, which seeks to advance energy-related Earth
system science research with state-of-the-art modeling and simulation
capabilities on exascale computing systems. Through the utilization of
high-fidelity data products, such as atmospheric forcing and soil properties,
the km-scale ELM plays a critical role in accurately modeling geographical
characteristics and extreme weather occurrences. The model is vital for
enhancing our comprehension and prediction of climate patterns, as well as
their effects on ecosystems and human activities.
This study showcases the first set of full-capability, km-scale ELM
simulations over various computational domains, including simulations
encompassing 21.6 million land gridcells, reflecting approximately 21.5 million
square kilometers of North America at a 1 km x 1 km resolution. We present the
largest km-scale ELM simulation using up to 100,800 CPU cores across 2,400
nodes. This continental-scale simulation is 300 times larger than any previous
study, and the computational resources used are about 400 times greater than
those used in prior efforts. Both strong and weak scaling tests have been
conducted, revealing exceptional performance efficiency and resource
utilization.
The km-scale ELM uses the common E3SM modeling infrastructure and a general
data toolkit known as KiloCraft. Consequently, it can be readily adapted for
both fully-coupled E3SM simulations and data-driven simulations over specific
areas, ranging from a single gridcell to all of North America.
|
2501.11145
|
Blockchain and Stablecoin Integration for Crowdfunding: A framework for
enhanced efficiency, security, and liquidity
|
cs.CE
|
Crowdfunding platforms face high transaction fees, a lack of transparency,
and trust deficits. These issues deter contributors and
entrepreneurs from effectively leveraging crowdfunding for innovation and
growth. Blockchain technology introduces decentralization, security, and
efficiency to address these limitations (1). This paper proposes a
blockchain-based crowdfunding framework that integrates stablecoins such as
USDT and USDC to mitigate cryptocurrency volatility and ensure seamless fund
management. Smart contracts automate compliance processes, including Know Your
Customer (KYC) / Anti-Money Laundering (AML) checks, and enhance operational
efficiency (2). Furthermore, tokenization enables liquidity by allowing
fractional ownership and secondary market trading, which must be effectively
implemented on any global market platform. A comparative analysis highlights
the superiority of the framework over traditional platforms in terms of cost
reduction, transparency, and investor trust. A case study focused on the
Turkish market illustrates the practical benefits of blockchain adoption in
equity crowdfunding, particularly in navigating local regulatory and financial
complexities. This approach provides a scalable, secure, and accessible
solution for modern crowdfunding ecosystems, while reducing the costs of
platforms and increasing the trust of investors and backers in crowdfunding
projects.
Keywords: blockchain, stablecoins, crowdfunding, tokenization, compliance
|
2501.11149
|
CART-MPC: Coordinating Assistive Devices for Robot-Assisted Transferring
with Multi-Agent Model Predictive Control
|
cs.RO
|
Bed-to-wheelchair transferring is a ubiquitous activity of daily living
(ADL), but especially challenging for caregiving robots with limited payloads.
We develop a novel algorithm that leverages the presence of other assistive
devices: a Hoyer sling and a wheelchair for coarse manipulation of heavy loads,
alongside a robot arm for fine-grained manipulation of deformable objects
(Hoyer sling straps). We instrument the Hoyer sling and wheelchair with
actuators and sensors so that they can become intelligent agents in the
algorithm. We then focus on one subtask of the transferring ADL -- tying Hoyer
sling straps to the sling bar -- that exemplifies the challenges of transfer:
multi-agent planning, deformable object manipulation, and generalization to
varying hook shapes, sling materials, and care recipient bodies. To address
these challenges, we propose CART-MPC, a novel algorithm based on turn-taking
multi-agent model predictive control that uses a learned neural dynamics model
for a keypoint-based representation of the deformable Hoyer sling strap, and a
novel cost function that leverages linking numbers from knot theory and neural
amortization to accelerate inference. We validate it in both RCareWorld
simulation and real-world environments. In simulation, CART-MPC successfully
generalizes across diverse hook designs, sling materials, and care recipient
body shapes. In the real world, we show zero-shot sim-to-real generalization
capabilities to tie deformable Hoyer sling straps on a sling bar towards
transferring a manikin from a hospital bed to a wheelchair. See our website for
supplementary materials: https://emprise.cs.cornell.edu/cart-mpc/.
|
2501.11153
|
Efficient Frame Extraction: A Novel Approach Through Frame Similarity
and Surgical Tool Tracking for Video Segmentation
|
cs.CV
|
The interest in leveraging Artificial Intelligence (AI) for surgical
procedures to automate analysis has witnessed a significant surge in recent
years. One of the primary tools for recording surgical procedures and
conducting subsequent analyses, such as performance assessment, is through
videos. However, these operative videos tend to be notably lengthy compared to
those in other fields, spanning from thirty minutes to several hours, which poses a
challenge for AI models to effectively learn from them. Despite this challenge,
the foreseeable increase in the volume of such videos in the near future
necessitates the development and implementation of innovative techniques to
tackle this issue effectively. In this article, we propose a novel technique
called Kinematics Adaptive Frame Recognition (KAFR) that can efficiently
eliminate redundant frames to reduce dataset size and computation time while
retaining useful frames to improve accuracy. Specifically, we compute the
similarity between consecutive frames by tracking the movement of surgical
tools. Our approach follows these steps: i) Tracking phase: a YOLOv8 model is
utilized to detect tools presented in the scene, ii) Similarity phase:
Similarities between consecutive frames are computed by estimating variation in
the spatial positions and velocities of the tools, iii) Classification phase: an
X3D CNN is trained to classify video segments. We evaluate the effectiveness of
our approach by analyzing datasets obtained through retrospective reviews of
cases at two referral centers. The Gastrojejunostomy (GJ) dataset covers
procedures performed between 2017 and 2021, while the Pancreaticojejunostomy
(PJ) dataset spans from 2011 to 2022 at the same centers. By adaptively
selecting relevant frames, we achieve a tenfold reduction in the number of
frames while improving accuracy by 4.32% (from 0.749 to 0.7814).
|
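The similarity phase described above (step ii) can be sketched as follows. This is a loose reconstruction under stated assumptions, not the KAFR implementation: tool positions are taken as given (standing in for YOLOv8 detections), velocities are finite differences, and a frame is kept only when it is sufficiently dissimilar from the last kept frame. The weights and threshold are hypothetical.

```python
import numpy as np

def frame_similarity(pos_a, pos_b, vel_a, vel_b, w_pos=1.0, w_vel=1.0):
    """Inverse-distance similarity between two frames, from per-tool
    positions (x, y) and velocities. Higher means more redundant."""
    d = w_pos * np.linalg.norm(pos_a - pos_b) + w_vel * np.linalg.norm(vel_a - vel_b)
    return 1.0 / (1.0 + d)

def select_frames(positions, threshold=0.5):
    """Keep a frame only if the tools moved enough since the last kept frame."""
    velocities = np.diff(positions, axis=0, prepend=positions[:1])
    kept = [0]
    for i in range(1, len(positions)):
        j = kept[-1]
        s = frame_similarity(positions[j], positions[i], velocities[j], velocities[i])
        if s < threshold:      # dissimilar enough: keep it
            kept.append(i)
    return kept

# two tools with (x, y) coordinates; frames 0-2 are nearly static, frame 3 jumps
pos = np.array([[[0.0, 0.0], [5.0, 5.0]],
                [[0.0, 0.0], [5.0, 5.0]],
                [[0.1, 0.0], [5.0, 5.0]],
                [[3.0, 4.0], [9.0, 9.0]]])
kept = select_frames(pos)
```

On this toy clip the near-static frames are dropped and only the frames bracketing real tool motion survive, which is the mechanism behind the tenfold frame reduction the abstract reports.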
2501.11154
|
Modelling of automotive steel fatigue lifetime by machine learning
method
|
cs.LG cs.NE
|
In the current study, the fatigue life of QSTE340TM steel was modelled using
a machine learning method, namely, a neural network. This problem was solved by
a Multi-Layer Perceptron (MLP) neural network with a 3-75-1 architecture, which
allows the prediction of the crack length based on the number of load cycles N,
the stress ratio R, and the overload ratio Rol. The proposed model showed high
accuracy, with mean absolute percentage error (MAPE) ranging from 0.02% to
4.59% for different R and Rol. The neural network effectively reveals the
nonlinear relationships between input parameters and fatigue crack growth,
providing reliable predictions for different loading conditions.
|
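The error metric quoted above is standard and easy to state precisely. The sketch below computes MAPE on hypothetical crack-length values (the numbers are illustrative, not from the study):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# hypothetical crack lengths (mm): measured vs. model-predicted
measured = [10.0, 12.0, 15.0, 20.0]
predicted = [10.1, 11.9, 15.3, 19.8]
err = mape(measured, predicted)
```

Note that MAPE is undefined when a true value is zero, so crack lengths (strictly positive) are a natural fit for it.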
2501.11159
|
LiFT: Lightweight, FPGA-tailored 3D object detection based on LiDAR data
|
cs.CV cs.AR eess.IV
|
This paper presents LiFT, a lightweight, fully quantized 3D object detection
algorithm for LiDAR data, optimized for real-time inference on FPGA platforms.
Through an in-depth analysis of FPGA-specific limitations, we identify a set of
FPGA-induced constraints that shape the algorithm's design. These include a
computational complexity limit of 30 GMACs (billion multiply-accumulate
operations), INT8 quantization for weights and activations, 2D cell-based
processing instead of 3D voxels, and minimal use of skip connections. To meet
these constraints while maximizing performance, LiFT combines novel mechanisms
with state-of-the-art techniques such as reparameterizable convolutions and
fully sparse architecture. Key innovations include the Dual-bound Pillar
Feature Net, which boosts performance without increasing complexity, and an
efficient scheme for INT8 quantization of input features. With a computational
cost of just 20.73 GMACs, LiFT stands out as one of the few algorithms
targeting minimal-complexity 3D object detection. Among comparable methods,
LiFT ranks first, achieving an mAP of 51.84% and an NDS of 61.01% on the
challenging NuScenes validation dataset. The code will be available at
https://github.com/vision-agh/lift.
|
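INT8 quantization of weights and activations, one of the FPGA constraints listed above, can be sketched generically. This is the common symmetric per-tensor scheme, offered as background rather than LiFT's exact input-feature scheme:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map floats to [-127, 127]
    with a single scale factor; returns quantized values and the scale."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from INT8 codes."""
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)  # toy activations
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
```

The round trip is lossy in general; the quantization error is bounded by half the scale, which is why the dynamic range of the input features matters so much on INT8 hardware.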
2501.11161
|
Modeling Attention during Dimensional Shifts with Counterfactual and
Delayed Feedback
|
cs.LG
|
Attention can be used to inform choice selection in contextual bandit tasks
even when context features have not been previously experienced. One example of
this is in dimensional shifts, where additional feature values are introduced
and the relationship between features and outcomes can either be static or
variable. Attentional mechanisms have been extensively studied in contextual
bandit tasks where the feedback of choices is provided immediately, but less
research has been done on tasks where feedback is delayed or in counterfactual
feedback cases. Some methods have successfully modeled human attention with
immediate feedback based on reward prediction errors (RPEs), though recent
research raises questions of the applicability of RPEs onto more general
attentional mechanisms. Alternative models suggest that information theoretic
metrics can be used to model human attention, with broader applications to
novel stimuli. In this paper, we compare two different methods for modeling how
humans attend to specific features of decision making tasks, one that is based
on calculating an information theoretic metric using a memory of past
experiences, and another that is based on iteratively updating attention from
reward prediction errors. We compare these models using simulations in a
contextual bandit task with both intradimensional and extradimensional domain
shifts, as well as immediate, delayed, and counterfactual feedback. We find
that calculating an information theoretic metric over a history of experiences
is best able to account for human-like behavior in tasks that shift dimensions
and alter feedback presentation. These results indicate that information
theoretic metrics of attentional mechanisms may be better suited than RPEs to
predict human attention in decision making, though further studies of human
behavior are necessary to support these results.
|
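A minimal sketch of the information-theoretic alternative described above, under strong simplifying assumptions (discrete features, a flat memory of past trials; this is not the paper's model): attention weights are proportional to the empirical mutual information between each feature dimension and the observed outcome.

```python
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * np.log(pj / ((px[x] / n) * (py[y] / n)))
    return mi

def attention_weights(memory):
    """Attend to the feature dimensions most informative about reward."""
    feats = [m[0] for m in memory]
    rewards = [m[1] for m in memory]
    mis = np.array([mutual_information([f[d] for f in feats], rewards)
                    for d in range(len(feats[0]))])
    return mis / mis.sum() if mis.sum() > 0 else np.full(len(mis), 1.0 / len(mis))

# toy memory: dimension 0 ('color') determines reward; dimension 1 ('shape') is noise
memory = [(('red', 'sq'), 1), (('red', 'tr'), 1),
          (('blue', 'sq'), 0), (('blue', 'tr'), 0)]
w = attention_weights(memory)
```

Because the metric is computed from a memory of experiences rather than from reward prediction errors, it assigns weight to a never-rewarded but informative dimension immediately after a dimensional shift, which matches the qualitative finding reported above.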
2501.11162
|
Query Repairs
|
cs.DB
|
We formalize and study the problem of repairing database queries based on
user feedback in the form of a collection of labeled examples. We propose a
framework based on the notion of a proximity pre-order, and we investigate and
compare query repairs for conjunctive queries (CQs) using different such
pre-orders. The proximity pre-orders we consider are based on query containment
and on distance metrics for CQs.
|
2501.11165
|
Structure and Context of Retweet Coordination in the 2022 U.S. Midterm
Elections
|
cs.SI cs.CY
|
The ability to detect coordinated activity in communication networks is an
ongoing challenge. Prior approaches typically treat any activity exceeding a
specific similarity threshold as coordinated. However, such a threshold is
often arbitrary, and the flagged activity can be difficult to distinguish from
grassroots organized behavior. In this paper, we investigate a
set of Twitter retweeting data collected around the 2022 US midterm elections,
using a latent sharing-space model, in which we identify the main components of
an association network, thresholded with a k-nearest neighbor criterion. This
approach identifies a distribution of association values with different roles
in the network at different ranges, where the shape of the distribution
suggests a natural place to threshold for coordinated user candidates. We find
coordination candidates belonging to two broad categories, one involving music
awards and promotion of Korean pop or Taylor Swift, the other being users
engaged in political mobilization. In addition, the latent space suggests
common motivations for coordinated groups that would otherwise be fragmented
by an appropriately high threshold criterion for coordination.
|
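A simplified sketch of the k-nearest-neighbor thresholding step mentioned above (the latent sharing-space model itself is omitted; cosine similarity on raw sharing vectors is a stand-in, and all data are hypothetical): each user keeps its k most similar neighbors, but only where the association value clears a threshold, and the graph is symmetrized.

```python
import numpy as np

def association_graph(X, k=1, tau=0.5):
    """Cosine-similarity association network between users' sharing vectors,
    keeping each user's k nearest neighbours only where similarity exceeds tau,
    then symmetrizing."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)          # no self-loops
    A = np.zeros_like(S, dtype=bool)
    for i in range(len(S)):
        nn = np.argsort(S[i])[-k:]        # indices of the k most similar users
        A[i, nn] = S[i, nn] > tau         # kNN criterion plus association threshold
    return A | A.T

# users 0-2 retweet near-identical item sets; user 3 shares disjoint items
X = np.array([[5.0, 5.0, 0.0, 0.0],
              [4.0, 6.0, 0.0, 0.0],
              [5.0, 4.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 7.0]])
A = association_graph(X, k=1, tau=0.5)
```

Combining the kNN criterion with a value threshold avoids the failure mode of pure kNN graphs, where every user is forced to connect to someone even when no real association exists.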
2501.11166
|
AIMA at SemEval-2024 Task 10: History-Based Emotion Recognition in
Hindi-English Code-Mixed Conversations
|
cs.CL cs.AI cs.LG
|
In this study, we introduce a solution to subtask 1 of SemEval-2024 Task 10,
dedicated to Emotion Recognition in Conversation (ERC) in code-mixed
Hindi-English conversations. ERC in code-mixed conversations presents unique
challenges, as existing models are typically trained on monolingual datasets
and may not perform well on code-mixed data. To address this, we propose a
series of models that incorporate both the previous and future context of the
current utterance, as well as the sequential information of the conversation.
To facilitate the processing of code-mixed data, we developed a
Hinglish-to-English translation pipeline to translate the code-mixed
conversations into English. We designed four different base models, each
utilizing powerful pre-trained encoders to extract features from the input but
with varying architectures. By ensembling all of these models, we developed a
final model that outperforms all other baselines.
|
2501.11167
|
Federated Testing (FedTest): A New Scheme to Enhance Convergence and
Mitigate Adversarial Attacks in Federating Learning
|
cs.LG cs.IT math.IT
|
Federated Learning (FL) has emerged as a significant paradigm for training
machine learning models, owing to its data-privacy-preserving property and its
efficient exploitation of distributed computational resources, achieved by
conducting the training process in parallel at distributed users.
However, traditional FL strategies grapple with difficulties in evaluating the
quality of received models, handling unbalanced models, and reducing the impact
of detrimental models. To resolve these problems, we introduce a novel
federated learning framework, which we call federated testing for federated
learning (FedTest). In the FedTest method, the local data of a specific user is
used to train the model of that user and test the models of the other users.
This approach enables users to test each other's models and determine an
accurate score for each. This score can then be used to aggregate the models
efficiently and identify any malicious ones. Our numerical results reveal that
the proposed method not only accelerates convergence rates but also diminishes
the potential influence of malicious users. This significantly enhances the
overall efficiency and robustness of FL systems.
|
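The cross-testing idea above can be sketched concretely. This is an assumption-laden toy (the scoring and weighting rules below are illustrative, not the paper's exact aggregation): each user's model is scored by its mean accuracy on the *other* users' local test data, and the global model is a score-weighted average, so a poisoned model that only performs well on its own data is down-weighted.

```python
import numpy as np

def fedtest_aggregate(models, accuracy_matrix):
    """accuracy_matrix[i, j] = accuracy of user j's model on user i's local data.
    Score each model by its mean accuracy on other users' data (self-testing
    excluded), then average models with scores as weights."""
    A = np.asarray(accuracy_matrix, dtype=float)
    n = A.shape[0]
    mask = ~np.eye(n, dtype=bool)                  # drop the self-test diagonal
    scores = np.array([A[:, j][mask[:, j]].mean() for j in range(n)])
    weights = scores / scores.sum()
    global_model = sum(w * m for w, m in zip(weights, models))
    return global_model, scores, weights

# three users' model parameters; user 2 is malicious (poisoned weights)
models = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([9.0, -9.0])]
A = np.array([[0.90, 0.88, 0.10],
              [0.85, 0.91, 0.12],
              [0.87, 0.86, 0.95]])   # the malicious model only "works" on its own data
g, scores, w = fedtest_aggregate(models, A)
```

In practice one would also clip or drop models whose score falls below a cutoff rather than merely down-weighting them.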
2501.11168
|
DeepEyeNet: Adaptive Genetic Bayesian Algorithm Based Hybrid
ConvNeXtTiny Framework For Multi-Feature Glaucoma Eye Diagnosis
|
cs.CV eess.SP
|
Glaucoma is a leading cause of irreversible blindness worldwide, emphasizing
the critical need for early detection and intervention. In this paper, we
present DeepEyeNet, a novel and comprehensive framework for automated glaucoma
detection using retinal fundus images. Our approach integrates advanced image
standardization through dynamic thresholding, precise optic disc and cup
segmentation via a U-Net model, and comprehensive feature extraction
encompassing anatomical and texture-based features. We employ a customized
ConvNeXtTiny based Convolutional Neural Network (CNN) classifier, optimized
using our Adaptive Genetic Bayesian Optimization (AGBO) algorithm. This
proposed AGBO algorithm balances exploration and exploitation in hyperparameter
tuning, leading to significant performance improvements. Experimental results
on the EyePACS-AIROGS-light-V2 dataset demonstrate that DeepEyeNet achieves a
high classification accuracy of 95.84%, outperforming existing methods thanks
to the effective optimization provided by the AGBO algorithm. The integration
of sophisticated image processing techniques, deep
learning, and optimized hyperparameter tuning through our proposed AGBO
algorithm positions DeepEyeNet as a promising tool for early glaucoma detection
in clinical settings.
|
2501.11170
|
AIMA at SemEval-2024 Task 3: Simple Yet Powerful Emotion Cause Pair
Analysis
|
cs.CL cs.AI cs.LG
|
The SemEval-2024 Task 3 presents two subtasks focusing on emotion-cause pair
extraction within conversational contexts. Subtask 1 revolves around the
extraction of textual emotion-cause pairs, where causes are defined and
annotated as textual spans within the conversation. Conversely, Subtask 2
extends the analysis to encompass multimodal cues, including language, audio,
and vision, acknowledging instances where causes may not be exclusively
represented in the textual data. Our proposed model for emotion-cause analysis
is meticulously structured into three core segments: (i) embedding extraction,
(ii) cause-pair extraction & emotion classification, and (iii) cause extraction
using QA after finding pairs. Leveraging state-of-the-art techniques and
fine-tuning on task-specific datasets, our model effectively unravels the
intricate web of conversational dynamics and extracts subtle cues signifying
causality in emotional expressions. Our team, AIMA, demonstrated strong
performance in the SemEval-2024 Task 3 competition, ranking 10th in subtask 1
and 6th in subtask 2 out of 23 teams.
|
2501.11171
|
Counteracting temporal attacks in Video Copy Detection
|
cs.CV cs.AI cs.IR cs.LG cs.MM
|
Video Copy Detection (VCD) plays a crucial role in copyright protection and
content verification by identifying duplicates and near-duplicates in
large-scale video databases. The META AI Challenge on video copy detection
provided a benchmark for evaluating state-of-the-art methods, with the
Dual-level detection approach emerging as a winning solution. This method
integrates Video Editing Detection and Frame Scene Detection to handle
adversarial transformations and large datasets efficiently. However, our
analysis reveals significant limitations in the VED component, particularly in
its ability to handle exact copies. Moreover, Dual-level detection shows
vulnerability to temporal attacks. To address this, we propose an improved frame
selection strategy based on local maxima of interframe differences, which
enhances robustness against adversarial temporal modifications while
significantly reducing computational overhead. Our method achieves an increase
of 1.4 to 5.8 times in efficiency over the standard 1 FPS approach. Compared to
Dual-level detection method, our approach maintains comparable micro-average
precision ($\mu$AP) while also demonstrating improved robustness against
temporal attacks. With a 56\% smaller representation size and inference more
than twice as fast, our approach is better suited to real-world resource
constraints.
|
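The frame selection strategy described above can be sketched in a few lines. This is a hedged reconstruction, not the authors' code: sample frames at local maxima of the interframe difference signal rather than on a fixed 1 FPS grid, since such extrema track content changes and survive temporal edits (speed-ups, trims) that shift a fixed grid.

```python
import numpy as np

def select_keyframes(frames):
    """Keep frames at strict local maxima of the interframe difference signal."""
    frames = np.asarray(frames, dtype=float)
    # mean absolute pixel difference between consecutive frames
    d = np.abs(np.diff(frames, axis=0)).reshape(len(frames) - 1, -1).mean(axis=1)
    # strict local maxima of the difference signal (interior points only)
    idx = [i for i in range(1, len(d) - 1) if d[i] > d[i - 1] and d[i] > d[i + 1]]
    return [i + 1 for i in idx]   # +1 because d[i] is the change INTO frame i+1

# toy "video": 8 frames of 4 pixels each, with content jumps at frames 3 and 6
video = np.array([[0, 0, 0, 0]] * 3 + [[9, 9, 9, 9]] * 3 + [[2, 2, 2, 2]] * 2)
keys = select_keyframes(video)
```

Because only difference peaks are retained, a temporal speed-up changes the spacing of the selected frames but not which scene changes they capture.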
2501.11175
|
ProKeR: A Kernel Perspective on Few-Shot Adaptation of Large
Vision-Language Models
|
cs.CV cs.AI cs.LG
|
The growing popularity of Contrastive Language-Image Pretraining (CLIP) has
led to its widespread application in various visual downstream tasks. To
enhance CLIP's effectiveness and versatility, efficient few-shot adaptation
techniques have been widely adopted. Among these approaches, training-free
methods, particularly caching methods exemplified by Tip-Adapter, have gained
attention for their lightweight adaptation without the need for additional
fine-tuning. In this paper, we revisit Tip-Adapter from a kernel perspective,
showing that caching methods function as local adapters and are connected to a
well-established kernel literature. Drawing on this insight, we offer a
theoretical understanding of how these methods operate and suggest multiple
avenues for enhancing the Tip-Adapter baseline. Notably, our analysis shows the
importance of incorporating global information in local adapters. Therefore, we
subsequently propose a global method that learns a proximal regularizer in a
reproducing kernel Hilbert space (RKHS) using CLIP as a base learner. Our
method, which we call ProKeR (Proximal Kernel ridge Regression), has a closed
form solution and achieves state-of-the-art performances across 11 datasets in
the standard few-shot adaptation benchmark.
|
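The closed-form solution mentioned above is that of kernel ridge regression, which can be sketched generically. The data, kernel bandwidth, and regularizer below are hypothetical stand-ins for CLIP features and are not the ProKeR method itself:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    """Closed-form kernel ridge regression: alpha = (K + lam * I)^{-1} y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    """Predict with f(x) = sum_i alpha_i k(x, x_i)."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# few-shot setup: 4 "image features" with binary class targets
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, np.array([[1.05, 1.0], [0.05, 0.0]]))
```

The closed form is what makes this family of adapters training-free: fitting reduces to one linear solve over the few-shot support set.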
2501.11178
|
Conditional Feature Importance with Generative Modeling Using
Adversarial Random Forests
|
stat.ML cs.LG
|
This paper proposes a method for measuring conditional feature importance via
generative modeling. In explainable artificial intelligence (XAI), conditional
feature importance assesses the impact of a feature on a prediction model's
performance given the information of other features. Model-agnostic post hoc
methods to do so typically evaluate changes in the predictive performance under
on-manifold feature value manipulations. Such procedures require creating
feature values that respect conditional feature distributions, which can be
challenging in practice. Recent advancements in generative modeling can
facilitate this. For tabular data, which may consist of both categorical and
continuous features, the adversarial random forest (ARF) stands out as a
generative model that can generate on-manifold data points without requiring
intensive tuning efforts or computational resources, making it a promising
candidate model for subroutines in XAI methods. This paper proposes cARFi
(conditional ARF feature importance), a method for measuring conditional
feature importance through feature values sampled from ARF-estimated
conditional distributions. cARFi requires little tuning to yield robust
importance scores that can flexibly adapt to conditional or marginal notions
of feature importance, including straightforward extensions to condition on
feature subsets, and allows the significance of feature importances to be
inferred through statistical tests.
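The replace-and-remeasure idea can be illustrated in a few lines. Here a plain marginal resampler stands in for the ARF-estimated conditional sampler, and an exact linear model stands in for a fitted one; both are simplifying assumptions, not cARFi itself:

```python
import random

random.seed(0)

def model(x1, x2):
    # A toy "fitted" model that only uses the informative feature x1.
    return 2.0 * x1

# Toy data: y depends on x1 only; x2 is pure noise.
data = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(2000)]
ys = [2.0 * x1 for x1, _ in data]

def mse(preds):
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

base = mse([model(x1, x2) for x1, x2 in data])

def importance(feature):
    """Loss increase when one feature is replaced by freshly sampled values.
    cARFi would draw these from ARF-estimated conditional distributions;
    here a marginal resampler is used purely for illustration."""
    perturbed = []
    for x1, x2 in data:
        s = random.uniform(0, 1)
        perturbed.append((s, x2) if feature == 0 else (x1, s))
    return mse([model(a, b) for a, b in perturbed]) - base

imp_x1, imp_x2 = importance(0), importance(1)
```

The informative feature's importance is large, while the noise feature's is zero, since replacing it leaves every prediction unchanged.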
|
2501.11183
|
Can Safety Fine-Tuning Be More Principled? Lessons Learned from
Cybersecurity
|
cs.CR cs.AI cs.LG
|
As LLMs develop increasingly advanced capabilities, there is an increased
need to minimize the harm that could be caused to society by certain model
outputs; hence, most LLMs have safety guardrails added, for example via
fine-tuning. In this paper, we argue the position that current safety
fine-tuning is very similar to a traditional cat-and-mouse game (or arms race)
between attackers and defenders in cybersecurity. Model jailbreaks and attacks
are patched with band-aid fixes targeting the specific attack mechanism, but many
similar attack vectors might remain. When defenders are not proactively coming
up with principled mechanisms, it becomes very easy for attackers to sidestep
any new defenses. We show how current defenses are insufficient to prevent new
adversarial jailbreak attacks, reward hacking, and loss of control problems. In
order to learn from past mistakes in cybersecurity, we draw analogies with
historical examples and develop lessons learned that can be applied to LLM
safety. These arguments support the need for new and more principled approaches
to designing safe models, which are architected for security from the
beginning. We describe several such approaches from the AI literature.
|
2501.11188
|
Global Attitude Synchronization for Multi-agent Systems on SO(3)
|
eess.SY cs.SY
|
In this paper, we address the problem of attitude synchronization for a group
of rigid body systems evolving on SO(3). The interaction among these systems is
modeled through an undirected, connected, and acyclic graph topology. First, we
present an almost global continuous distributed attitude synchronization scheme
with rigorously proven stability guarantees. Thereafter, we propose two global
distributed hybrid attitude synchronization schemes on SO(3). The first scheme
is a hybrid control law that leverages angular velocities and relative
orientations to achieve global alignment to a common orientation. The second
scheme eliminates the dependence on angular velocities by introducing dynamic
auxiliary variables, while ensuring global asymptotic attitude synchronization.
This velocity-free control scheme relies exclusively on attitude information.
Simulation results are provided to illustrate the effectiveness of the proposed
distributed attitude synchronization schemes.
|
2501.11190
|
Reinforcement Learning Based Goodput Maximization with Quantized
Feedback in URLLC
|
cs.IT cs.LG eess.SP math.IT
|
This paper presents a comprehensive system model for goodput maximization
with quantized feedback in Ultra-Reliable Low-Latency Communication (URLLC),
focusing on dynamic channel conditions and feedback schemes. The study
investigates a communication system, where the receiver provides quantized
channel state information to the transmitter. The system adapts its feedback
scheme based on reinforcement learning, aiming to maximize goodput while
accommodating varying channel statistics. We introduce a novel Rician-$K$
factor estimation technique to enable the communication system to optimize the
feedback scheme. This dynamic approach increases the overall performance,
making it well-suited for practical URLLC applications where channel statistics
vary over time.
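The abstract does not spell out the estimator; one classical option consistent with it is the moment-based Rician-K estimator, which uses gamma = Var[G]/E[G]^2 of the power gain G and the identity sqrt(1 - gamma) = K/(K + 1). A sketch under that assumption:

```python
import math
import random

random.seed(42)

def simulate_rician_power(K, n):
    """Power samples |h|^2 for Rician fading with K-factor K, E[|h|^2] = K + 1."""
    nu, s = math.sqrt(K), math.sqrt(0.5)  # LOS amplitude, per-dim scatter std
    out = []
    for _ in range(n):
        x = nu + random.gauss(0.0, s)
        y = random.gauss(0.0, s)
        out.append(x * x + y * y)
    return out

def estimate_k(power):
    """Moment-based estimate: gamma = Var[G]/E[G]^2, then
    sqrt(1 - gamma) = K/(K + 1), hence K = t/(1 - t) with t = sqrt(1 - gamma)."""
    n = len(power)
    m = sum(power) / n
    v = sum((g - m) ** 2 for g in power) / n
    t = math.sqrt(max(0.0, 1.0 - v / (m * m)))
    return t / (1.0 - t)

K_true = 5.0
K_hat = estimate_k(simulate_rician_power(K_true, 100_000))
```

With enough samples the estimate concentrates around the true K, which is what lets a receiver track slowly varying channel statistics and adapt its feedback scheme.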
|
2501.11196
|
Enhancing Brain Tumor Segmentation Using Channel Attention and Transfer
learning
|
eess.IV cs.CV
|
Accurate and efficient segmentation of brain tumors is critical for
diagnosis, treatment planning, and monitoring in clinical practice. In this
study, we present an enhanced ResUNet architecture for automatic brain tumor
segmentation, integrating an EfficientNetB0 encoder, a channel attention
mechanism, and an Atrous Spatial Pyramid Pooling (ASPP) module. The
EfficientNetB0 encoder leverages pre-trained features to improve feature
extraction efficiency, while the channel attention mechanism enhances the
model's focus on tumor-relevant features. ASPP enables multiscale contextual
learning, crucial for handling tumors of varying sizes and shapes. The proposed
model was evaluated on two benchmark datasets: TCGA LGG and BraTS 2020.
Experimental results demonstrate that our method consistently outperforms the
baseline ResUNet and its EfficientNet variant, achieving Dice coefficients of
0.903 and 0.851 and HD95 scores of 9.43 and 3.54 for whole tumor and tumor core
regions on the BraTS 2020 dataset, respectively. Compared with state-of-the-art
methods, our approach shows competitive performance, particularly in whole
tumor and tumor core segmentation. These results indicate that combining a
powerful encoder with attention mechanisms and ASPP can significantly enhance
brain tumor segmentation performance. The proposed approach holds promise for
further optimization and application in other medical image segmentation tasks.
|
2501.11197
|
Q-RESTORE: Quantum-Driven Framework for Resilient and Equitable
Transportation Network Restoration
|
cs.MA cs.ET
|
Efficient and socially equitable restoration of transportation networks post
disasters is crucial for community resilience and access to essential services.
The ability to rapidly recover critical infrastructure can significantly
mitigate the impacts of disasters, particularly in underserved communities
where prolonged isolation exacerbates vulnerabilities. Traditional restoration
methods prioritize functionality over computational efficiency and equity,
leaving low-income communities at a disadvantage during recovery. To address
this gap, this research introduces a novel framework that combines quantum
computing technology with an equity-focused approach to network restoration.
Optimization of road link recovery within budget constraints is achieved by
leveraging D-Wave's hybrid quantum solver, which targets the connectivity needs
of low-, average-, and high-income communities. This framework combines
computational speed with equity, ensuring priority support for underserved
populations. Findings demonstrate that this hybrid quantum solver achieves
near-instantaneous computation times of approximately 8.7 seconds across various
budget scenarios, significantly outperforming the widely used genetic
algorithm. It offers targeted restoration by first aiding low-income
communities and expanding aid as budgets increase, aligning with equity goals.
This work showcases quantum computing's potential in disaster recovery
planning, providing a rapid and equitable solution that elevates urban
resilience and social sustainability by aiding vulnerable populations in
disasters.
|
2501.11199
|
Embedding-Driven Diversity Sampling to Improve Few-Shot Synthetic Data
Generation
|
cs.CL
|
Accurate classification of clinical text often requires fine-tuning
pre-trained language models, a process that is costly and time-consuming due to
the need for high-quality data and expert annotators. Synthetic data generation
offers an alternative, though pre-trained models may not capture the syntactic
diversity of clinical notes. We propose an embedding-driven approach that uses
diversity sampling from a small set of real clinical notes to guide large
language models in few-shot prompting, generating synthetic text that better
reflects clinical syntax. We evaluated this method using the CheXpert dataset
on a classification task, comparing it to random few-shot and zero-shot
approaches. Using cosine similarity and a Turing test, our approach produced
synthetic notes that more closely align with real clinical text. Our pipeline
reduced the data needed to reach the 0.85 AUC cutoff by 40% for AUROC and 30%
for AUPRC, while augmenting models with synthetic data improved AUROC by 57%
and AUPRC by 68%. Additionally, our synthetic data was 0.9 times as effective
as real data, a 60% improvement in value.
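One standard way to realize embedding-driven diversity sampling (not necessarily this paper's exact procedure) is k-center greedy selection, also known as farthest-point sampling: each new exemplar is the point farthest from everything already chosen, so the few-shot prompt covers the embedding space.

```python
import random

random.seed(1)

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def diversity_sample(embeddings, k):
    """k-center greedy: repeatedly pick the point farthest from those chosen."""
    chosen = [0]  # start from an arbitrary point
    while len(chosen) < k:
        best, best_d = None, -1.0
        for i, e in enumerate(embeddings):
            if i in chosen:
                continue
            d = min(dist(e, embeddings[c]) for c in chosen)
            if d > best_d:
                best, best_d = i, d
        chosen.append(best)
    return chosen

# Two tight clusters far apart; a diverse sample should hit both.
cluster_a = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(20)]
cluster_b = [(random.gauss(10, 0.1), random.gauss(10, 0.1)) for _ in range(20)]
emb = cluster_a + cluster_b
picked = diversity_sample(emb, 2)
```

Random few-shot sampling would draw both exemplars from one cluster about half the time; the greedy rule guarantees coverage of both.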
|
2501.11202
|
Online Hybrid-Belief POMDP with Coupled Semantic-Geometric Models and
Semantic Safety Awareness
|
cs.RO
|
Robots operating in complex and unknown environments frequently require
geometric-semantic representations of the environment to safely perform their
tasks. While inferring the environment, they must account for many possible
scenarios when planning future actions. Since objects' class types are discrete
and the robot's self-pose and the objects' poses are continuous, the
environment can be represented by a hybrid discrete-continuous belief which is
updated according to models and incoming data. Prior probabilities and
observation models representing the environment can be learned from data using
deep learning algorithms. Such models often couple environmental semantic and
geometric properties. As a result, semantic variables are interconnected,
causing semantic state space dimensionality to increase exponentially. In this
paper, we consider planning under uncertainty using partially observable Markov
decision processes (POMDPs) with hybrid semantic-geometric beliefs. The models
and priors consider the coupling between semantic and geometric variables.
Within POMDP, we introduce the concept of semantically aware safety. Obtaining
representative samples of the theoretical hybrid belief, required for
estimating the value function, is very challenging. As a key contribution, we
develop a novel form of the hybrid belief and leverage it to sample
representative samples. We show that under certain conditions, the value
function and probability of safety can be calculated efficiently with an
explicit expectation over all possible semantic mappings. Our simulations show
that our estimates of the objective function and probability of safety achieve
similar levels of accuracy compared to estimators that run exhaustively on the
entire semantic state-space using samples from the theoretical hybrid belief.
Nevertheless, the complexity of our estimators is polynomial rather than
exponential.
|
2501.11203
|
Advancing Oyster Phenotype Segmentation with Multi-Network Ensemble and
Multi-Scale mechanism
|
cs.CV
|
Phenotype segmentation is pivotal in analysing visual features of living
organisms, enhancing our understanding of their characteristics. In the context
of oysters, meat quality assessment is paramount, focusing on shell, meat,
gonad, and muscle components. Traditional manual inspection methods are
time-consuming and subjective, prompting the adoption of machine vision
technology for efficient and objective evaluation. We explore machine vision's
capacity for segmenting oyster components, leading to the development of a
multi-network ensemble approach with a global-local hierarchical attention
mechanism. This approach integrates predictions from diverse models and
addresses challenges posed by varying scales, ensuring robust instance
segmentation across components. Finally, we provide a comprehensive evaluation
of the proposed method's performance using different real-world datasets,
highlighting its efficacy and robustness in enhancing oyster phenotype
segmentation.
|
2501.11211
|
Ditto: Accelerating Diffusion Model via Temporal Value Similarity
|
cs.AR cs.CV cs.LG
|
Diffusion models achieve superior performance in image generation tasks.
However, they incur significant computation overheads due to their iterative
structure. To address these overheads, we analyze this iterative structure and
observe that adjacent time steps in diffusion models exhibit high value
similarity, leading to narrower differences between consecutive time steps. We
adapt these characteristics to a quantized diffusion model and reveal that the
majority of these differences can be represented with reduced bit-width, and
even zero. Based on our observations, we propose the Ditto algorithm, a
difference processing algorithm that leverages temporal similarity with
quantization to enhance the efficiency of diffusion models. By exploiting the
narrower differences and the distributive property of layer operations, it
performs full bit-width operations for the initial time step and processes
subsequent steps with temporal differences. In addition, Ditto execution flow
optimization is designed to mitigate the memory overhead of temporal difference
processing, further boosting the efficiency of the Ditto algorithm. We also
design the Ditto hardware, a specialized hardware accelerator, fully exploiting
the dynamic characteristics of the proposed algorithm. As a result, the Ditto
hardware achieves up to 1.5x speedup and 17.74% energy saving compared to other
accelerators.
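The core observation can be sketched in a few lines: adjacent-step activations differ by small values that survive aggressive quantization, and by the distributive property a linear layer satisfies W*x_{t+1} = W*x_t + W*delta, so later steps can process cheap quantized differences. The numbers and scale below are toy values, not Ditto's actual quantizer:

```python
def quantize(values, scale):
    """Uniform quantization to integers at a given scale."""
    return [round(v / scale) for v in values]

def dequantize(q, scale):
    return [v * scale for v in q]

# Activations at two adjacent diffusion time steps: highly similar values,
# so their difference needs far fewer quantization levels (and is often zero).
act_t  = [0.81, -0.42, 0.05, 1.30, -0.77]
act_t1 = [0.80, -0.40, 0.05, 1.28, -0.76]

scale = 0.01
diff_q = quantize([a - b for a, b in zip(act_t1, act_t)], scale)

# Reconstruct step t+1 from step t plus the cheap quantized difference.
recon = [a + d for a, d in zip(act_t, dequantize(diff_q, scale))]

max_err = max(abs(r - a) for r, a in zip(recon, act_t1))
zeros = sum(1 for d in diff_q if d == 0)
```

In the full algorithm only the initial time step runs at full bit-width; every subsequent step reuses the previous activations plus such narrow differences.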
|
2501.11213
|
Risk Analysis of Flowlines in the Oil and Gas Sector: A GIS and Machine
Learning Approach
|
cs.LG
|
This paper presents a risk analysis of flowlines in the oil and gas sector
using Geographic Information Systems (GIS) and machine learning (ML).
Flowlines, vital conduits transporting oil, gas, and water from wellheads to
surface facilities, often face under-assessment compared to transmission
pipelines. This study addresses this gap using advanced tools to predict and
mitigate failures, improving environmental safety and reducing human exposure.
Extensive datasets from the Colorado Energy and Carbon Management Commission
(ECMC) were processed through spatial matching, feature engineering, and
geometric extraction to build robust predictive models. Various ML algorithms,
including logistic regression, support vector machines, gradient boosting
decision trees, and K-Means clustering, were used to assess and classify risks,
with ensemble classifiers showing superior accuracy, especially when paired
with Principal Component Analysis (PCA) for dimensionality reduction. Finally,
a thorough data analysis highlighted spatial and operational factors
influencing risks, identifying high-risk zones for focused monitoring. Overall,
the study demonstrates the transformative potential of integrating GIS and ML
in flowline risk management, proposing a data-driven approach that emphasizes
the need for accurate data and refined models to improve safety in petroleum
extraction.
|
2501.11214
|
Mitigating Spatial Disparity in Urban Prediction Using Residual-Aware
Spatiotemporal Graph Neural Networks: A Chicago Case Study
|
cs.LG
|
Urban prediction tasks, such as forecasting traffic flow, temperature, and
crime rates, are crucial for efficient urban planning and management. However,
existing Spatiotemporal Graph Neural Networks (ST-GNNs) often rely solely on
accuracy, overlooking spatial and demographic disparities in their predictions.
This oversight can lead to imbalanced resource allocation and exacerbate
existing inequities in urban areas. This study introduces a Residual-Aware
Attention (RAA) Block and an equality-enhancing loss function to address these
disparities. By adapting the adjacency matrix during training and incorporating
spatial disparity metrics, our approach aims to reduce local segregation of
residuals and errors. We applied our methodology to urban prediction tasks in
Chicago, utilizing a travel demand dataset as an example. Our model achieved a
significant 48% improvement in fairness metrics with only a 9% increase in
error metrics. Spatial analysis of residual distributions revealed that models
with RAA Blocks produced more equitable prediction results, particularly by
reducing errors clustered in central regions. Attention maps demonstrated the
model's ability to dynamically adjust focus, leading to more balanced
predictions. Case studies of various community areas in Chicago further
illustrated the effectiveness of our approach in addressing spatial and
demographic disparities, supporting more balanced and equitable urban planning
and policy-making.
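The abstract does not give the equality-enhancing loss explicitly; a generic loss of this flavor adds a penalty on the dispersion of per-group (per-region) errors to the usual accuracy term. A sketch under that assumption, with hypothetical group labels:

```python
def equity_aware_loss(preds, targets, groups, lam=1.0):
    """MSE plus a penalty on the variance of per-group mean absolute errors,
    discouraging errors that concentrate in particular regions."""
    n = len(preds)
    mse = sum((p - t) ** 2 for p, t in zip(preds, targets)) / n
    by_group = {}
    for p, t, g in zip(preds, targets, groups):
        by_group.setdefault(g, []).append(abs(p - t))
    means = [sum(v) / len(v) for v in by_group.values()]
    mu = sum(means) / len(means)
    disparity = sum((m - mu) ** 2 for m in means) / len(means)
    return mse + lam * disparity

targets = [1.0, 1.0, 1.0, 1.0]
groups  = ["north", "north", "south", "south"]
# Same-magnitude errors spread evenly vs. concentrated in one region:
balanced = equity_aware_loss([1.1, 0.9, 1.1, 0.9], targets, groups)
lopsided = equity_aware_loss([1.0, 1.0, 1.2, 0.8], targets, groups)
```

The lopsided predictions incur the larger loss even though their total error is comparable, which is the pressure that pushes residuals away from clustering in a few community areas.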
|
2501.11216
|
TigerVector: Supporting Vector Search in Graph Databases for Advanced
RAGs
|
cs.DB
|
In this paper, we introduce TigerVector, a system that integrates vector
search and graph query within TigerGraph, a Massively Parallel Processing (MPP)
native graph database. We extend the vertex attribute type with the embedding
type. To support fast vector search, we devise an MPP index framework that
interoperates efficiently with the graph engine. The graph query language GSQL
is enhanced to support vector type expressions and enable query compositions
between vector search results and graph query blocks. These advancements
elevate the expressive power and analytical capabilities of graph databases,
enabling seamless fusion of unstructured and structured data in ways previously
unattainable. Through extensive experiments, we demonstrate TigerVector's
hybrid search capability, scalability, and superior performance compared to
other graph databases (including Neo4j and Amazon Neptune) and a highly
optimized specialized vector database (Milvus). TigerVector was integrated into
TigerGraph v4.2, the latest release of TigerGraph, in December 2024.
|
2501.11218
|
Leveraging GANs For Active Appearance Models Optimized Model Fitting
|
cs.CV cs.AI cs.LG
|
Generative Adversarial Networks (GANs) have gained prominence in refining
model fitting tasks in computer vision, particularly in domains involving
deformable models like Active Appearance Models (AAMs). This paper explores the
integration of GANs to enhance the AAM fitting process, addressing challenges
in optimizing nonlinear parameters associated with appearance and shape
variations. By leveraging GANs' adversarial training framework, the aim is to
minimize fitting errors and improve convergence rates, achieving robust
performance even in cases with high appearance variability and occlusions. Our
approach demonstrates significant improvements in accuracy and computational
efficiency compared to traditional optimization techniques, thus establishing
GANs as a potent tool for advanced image model fitting.
|
2501.11219
|
Zero-determinant strategies in repeated continuously-relaxed games
|
physics.soc-ph cs.MA
|
Mixed extension has played an important role in game theory, especially in
the proof of the existence of Nash equilibria in strategic form games. Mixed
extension can be regarded as continuous relaxation of a strategic form game.
Recently, in repeated games, a class of behavior strategies, called
zero-determinant strategies, was introduced. Zero-determinant strategies
unilaterally enforce linear relations between payoffs, and are used to control
payoffs of players. There are many attempts to extend zero-determinant
strategies so as to apply them to broader situations. Here, we extend
zero-determinant strategies to repeated games where action sets of players in
the stage game are continuously relaxed. We see that continuous relaxation broadens
the range of possible zero-determinant strategies, compared to the original
repeated games. Furthermore, we introduce a special type of zero-determinant
strategies, called one-point zero-determinant strategies, which repeat only one
continuously-relaxed action in all rounds. By investigating several examples,
we show that some property of mixed-strategy Nash equilibria can be
reinterpreted as a payoff-control property of one-point zero-determinant
strategies.
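For the classical (non-relaxed) prisoner's dilemma, the payoff-control property can be checked numerically. With standard payoffs T=5, R=3, P=1, S=0, the Press-Dyson equalizer (p1-1, p2-1, p3, p4) = phi*(S_Y - W*1), taking W = 2 and phi = -1/5, pins the opponent's stationary payoff to 2 regardless of their strategy. This is a sketch of the classical construction, not of this paper's continuous relaxation:

```python
def stationary(P, iters=20_000):
    """Stationary distribution of an ergodic 4-state chain by power iteration."""
    pi = [0.25] * 4
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
    return pi

def transition(p, q):
    """States (X's move, Y's move): CC, CD, DC, DD. p[s] and q[s] are the
    cooperation probabilities after outcome s; Y sees the states with roles
    swapped, i.e. in the order CC, DC, CD, DD."""
    q_swapped = [q[0], q[2], q[1], q[3]]
    P = []
    for s in range(4):
        px, py = p[s], q_swapped[s]
        P.append([px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)])
    return P

# Equalizer ZD strategy: (p1-1, p2-1, p3, p4) = phi*(S_Y - W) with W=2, phi=-1/5.
p_zd = [4 / 5, 2 / 5, 2 / 5, 1 / 5]
S_Y = [3, 5, 0, 1]  # Y's payoffs in states CC, CD, DC, DD

payoffs = []
for q in ([0.9, 0.1, 0.8, 0.2], [0.5, 0.5, 0.5, 0.5]):
    pi = stationary(transition(p_zd, q))
    payoffs.append(sum(pi[s] * S_Y[s] for s in range(4)))
```

Both opponent strategies, though very different, earn exactly the enforced payoff W = 2 in the long run; the paper's contribution is extending this enforcement to continuously relaxed action sets.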
|
2501.11221
|
Finding Reproducible and Prognostic Radiomic Features in Variable Slice
Thickness Contrast Enhanced CT of Colorectal Liver Metastases
|
eess.IV cs.CV
|
Establishing the reproducibility of radiomic signatures is a critical step in
the path to clinical adoption of quantitative imaging biomarkers; however,
radiomic signatures must also be meaningfully related to an outcome of clinical
importance to be of value for personalized medicine. In this study, we analyze
both the reproducibility and prognostic value of radiomic features extracted
from the liver parenchyma and largest liver metastases in contrast enhanced CT
scans of patients with colorectal liver metastases (CRLM). A prospective cohort
of 81 patients from two major US cancer centers was used to establish the
reproducibility of radiomic features extracted from images reconstructed with
different slice thicknesses. A publicly available, single-center cohort of 197
preoperative scans from patients who underwent hepatic resection for treatment
of CRLM was used to evaluate the prognostic value of features and models to
predict overall survival. A standard set of 93 features was extracted from all
images, with a set of eight different extractor settings. The feature
extraction settings producing the most reproducible, as well as the most
prognostically discriminative feature values were highly dependent on both the
region of interest and the specific feature in question. While the best overall
predictive model was produced using features extracted with a particular
setting, without accounting for reproducibility (C-index = 0.630
(0.603--0.649)), an equivalent-performing model (C-index = 0.629
(0.605--0.645)) was produced by pooling features from all extraction settings
and thresholding features on reproducibility ($\mathrm{CCC} \geq 0.85$) prior
to feature selection. Our findings support a data-driven approach to feature extraction
and selection, preferring the inclusion of many features, and narrowing feature
selection based on reproducibility when relevant data is available.
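The reproducibility threshold here is Lin's concordance correlation coefficient (CCC). A minimal sketch of computing CCC between feature values from two reconstruction settings and keeping features with CCC >= 0.85 (toy numbers and hypothetical feature names):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement sets:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Feature values from thin-slice vs. thick-slice reconstructions (toy data).
features = {
    "stable_feature":   ([1.0, 2.0, 3.0, 4.0], [1.02, 1.97, 3.05, 3.96]),
    "unstable_feature": ([1.0, 2.0, 3.0, 4.0], [3.1, 0.4, 4.2, 1.9]),
}

reproducible = [name for name, (a, b) in features.items() if ccc(a, b) >= 0.85]
```

Unlike Pearson correlation, CCC also penalizes systematic offsets between the two settings, which matters when slice thickness shifts feature values rather than merely scrambling them.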
|
2501.11222
|
An Imbalanced Learning-based Sampling Method for Physics-informed Neural
Networks
|
cs.LG stat.ML
|
This paper introduces Residual-based Smote (RSmote), an innovative local
adaptive sampling technique tailored to improve the performance of
Physics-Informed Neural Networks (PINNs) through imbalanced learning
strategies. Traditional residual-based adaptive sampling methods, while
effective in enhancing PINN accuracy, often struggle with efficiency and high
memory consumption, particularly in high-dimensional problems. RSmote addresses
these challenges by targeting regions with high residuals and employing
oversampling techniques from imbalanced learning to refine the sampling
process. Our approach is underpinned by a rigorous theoretical analysis that
supports the effectiveness of RSmote in managing computational resources more
efficiently. Through extensive evaluations, we benchmark RSmote against the
state-of-the-art Residual-based Adaptive Distribution (RAD) method across a
variety of dimensions and differential equations. The results demonstrate that
RSmote not only achieves or exceeds the accuracy of RAD but also significantly
reduces memory usage, making it particularly advantageous in high-dimensional
scenarios. These contributions position RSmote as a robust and
resource-efficient solution for solving complex partial differential equations,
especially when computational constraints are a critical consideration.
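A minimal sketch of the residual-targeted oversampling idea: treat high-residual collocation points as the "minority class" and densify them with SMOTE-style interpolation. The residual function below is a toy stand-in for a PINN's PDE residual, not RSmote's actual pipeline:

```python
import random

random.seed(3)

def residual(x):
    """Toy PDE residual surrogate: large near x = 0.5, small elsewhere."""
    return 1.0 / (1.0 + 100.0 * (x - 0.5) ** 2)

# Candidate collocation points ranked by residual magnitude.
points = [i / 100.0 for i in range(101)]
scored = sorted(points, key=residual, reverse=True)
minority = scored[:10]  # high-residual "minority" region

def smote_like(samples, n_new):
    """SMOTE-style interpolation: new points between random pairs of
    high-residual samples, densifying the hard region."""
    new = []
    for _ in range(n_new):
        a, b = random.sample(samples, 2)
        t = random.random()
        new.append(a + t * (b - a))
    return new

extra = smote_like(minority, 20)
```

Because new points are convex combinations of existing high-residual samples, they all land inside the hard region, which is what keeps the memory footprint low compared to resampling the whole domain at higher density.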
|
2501.11223
|
Reasoning Language Models: A Blueprint
|
cs.AI cs.CL
|
Reasoning language models (RLMs), also known as Large Reasoning Models
(LRMs), such as OpenAI's o1 and o3, DeepSeek-V3, and Alibaba's QwQ, have
redefined AI's problem-solving capabilities by extending LLMs with advanced
reasoning mechanisms. Yet, their high costs, proprietary nature, and complex
architectures - uniquely combining Reinforcement Learning (RL), search
heuristics, and LLMs - present accessibility and scalability challenges. To
address these, we propose a comprehensive blueprint that organizes RLM
components into a modular framework, based on a survey and analysis of all RLM
works. This blueprint incorporates diverse reasoning structures (chains, trees,
graphs, and nested forms), reasoning strategies (e.g., Monte Carlo Tree Search,
Beam Search), RL concepts (policy, value models and others), supervision
schemes (Outcome-Based and Process-Based Supervision), and other related
concepts (e.g., Test-Time Compute, Retrieval-Augmented Generation, agent
tools). We also provide detailed mathematical formulations and algorithmic
specifications to simplify RLM implementation. By showing how schemes like
LLaMA-Berry, QwQ, Journey Learning, and Graph of Thoughts fit as special cases,
we demonstrate the blueprint's versatility and unifying potential. To
illustrate its utility, we introduce x1, a modular implementation for rapid RLM
prototyping and experimentation. Using x1 and a literature review, we provide
key insights, such as multi-phase training for policy and value models, and the
importance of familiar training distributions. Finally, we discuss scalable RLM
cloud deployments and we outline how RLMs can integrate with a broader LLM
ecosystem. Our work demystifies RLM construction, democratizes advanced
reasoning capabilities, and fosters innovation, aiming to mitigate the gap
between "rich AI" and "poor AI" by lowering barriers to RLM design and
experimentation.
|
2501.11225
|
CNN-based TEM image denoising from first principles
|
cond-mat.mtrl-sci cs.CV eess.IV
|
Transmission electron microscope (TEM) images are often corrupted by noise,
hindering their interpretation. To address this issue, we propose a deep
learning-based approach using simulated images. Using density functional theory
calculations with a set of pseudo-atomic orbital basis sets, we generate highly
accurate ground truth images. We introduce four types of noise into these
simulations to create realistic training datasets. Each type of noise is then
used to train a separate convolutional neural network (CNN) model. Our results
show that these CNNs are effective in reducing noise, even when applied to
images with different noise levels than those used during training. However, we
observe limitations in some cases, particularly in preserving the integrity of
circular shapes and avoiding visible artifacts between image patches. To
overcome these challenges, we propose alternative training strategies and
future research directions. This study provides a valuable framework for
training deep learning models for TEM image denoising.
|
2501.11226
|
Local Limits of Small World Networks
|
math.PR cs.DS cs.SI math.CO
|
Small-world networks, known for their high local clustering and short average
path lengths, are a fundamental structure in many real-world systems, including
social, biological, and technological networks. We apply the theory of local
convergence (Benjamini-Schramm convergence) to derive the limiting behavior of
the local structures for two of the most commonly studied small-world network
models: the Watts-Strogatz model and the Kleinberg model. Establishing local
convergence enables us to show that key network measures, such as PageRank,
clustering coefficients, and maximum matching size, converge as network size
increases with their limits determined by the graph's local structure.
Additionally, this framework facilitates the estimation of global phenomena,
such as information cascades, using local information from small neighborhoods.
As an additional outcome of our results, we observe a critical change in the
behavior of the limit exactly when the parameter governing long-range
connections in the Kleinberg model crosses the threshold where decentralized
search remains efficient, offering a new perspective on why decentralized
algorithms fail in certain regimes.
|
2501.11229
|
Successive Interference Cancellation-aided Diffusion Models for Joint
Channel Estimation and Data Detection in Low Rank Channel Scenarios
|
cs.CV cs.IT eess.SP math.IT
|
This paper proposes a novel joint channel-estimation and source-detection
algorithm using successive interference cancellation (SIC)-aided generative
score-based diffusion models. Prior work in this area focuses on massive MIMO
scenarios, which are typically characterized by full-rank channels, and fails in
low-rank channel scenarios. The proposed algorithm outperforms existing methods
in joint source-channel estimation, especially in low-rank scenarios where the
number of users exceeds the number of antennas at the access point (AP). The
proposed score-based iterative diffusion process estimates the gradient of the
prior distribution on partial channels, and recursively updates the estimated
channel parts as well as the source. Extensive simulation results show that the
proposed method outperforms the baseline methods in terms of normalized mean
squared error (NMSE) and symbol error rate (SER) in both full-rank and low-rank
channel scenarios, while having a more dominant effect in the latter, at
various signal-to-noise ratios (SNR).
|
2501.11230
|
Optimum Power-Subcarrier Allocation and Time-Sharing in Multicarrier
NOMA Uplink
|
eess.SP cs.IT math.IT
|
Currently used resource allocation methods for uplink multicarrier
non-orthogonal multiple access (MC-NOMA) systems have multiple shortcomings.
Current approaches either allocate the same power across all subcarriers to a
user, or use heuristic near-far (strong-channel vs. weak-channel) user
grouping to assign the decoding order for successive interference cancellation
(SIC). This paper proposes a novel optimal power-subcarrier allocation for
uplink MC-NOMA. This new allocation achieves the optimal power-subcarrier
allocation as well as the optimal SIC decoding order. Furthermore, the proposed
method includes a time-sharing algorithm that dynamically alters the decoding
orders of the participating users to achieve the required data rates, even in
cases where any single decoding order fails to do so. Extensive experimental
evaluations show that the new method achieves higher sum data rates and lower
power consumption compared to current NOMA methods.
|
2501.11231
|
KPL: Training-Free Medical Knowledge Mining of Vision-Language Models
|
cs.CV
|
Visual Language Models such as CLIP excel in image recognition due to
extensive image-text pre-training. However, applying the CLIP inference in
zero-shot classification, particularly for medical image diagnosis, faces
challenges due to: 1) the inadequacy of representing image classes solely with
single category names; 2) the modal gap between the visual and text spaces
generated by CLIP encoders. Despite attempts to enrich disease descriptions
with large language models, the lack of class-specific knowledge often leads to
poor performance. In addition, empirical evidence suggests that existing proxy
learning methods for zero-shot image classification on natural image datasets
exhibit instability when applied to medical datasets. To tackle these
challenges, we introduce the Knowledge Proxy Learning (KPL) to mine knowledge
from CLIP. KPL is designed to leverage CLIP's multimodal understandings for
medical image classification through Text Proxy Optimization and Multimodal
Proxy Learning. Specifically, KPL retrieves image-relevant knowledge
descriptions from the constructed knowledge-enhanced base to enrich semantic
text proxies. It then harnesses input images and these descriptions, encoded
via CLIP, to stably generate multimodal proxies that boost the zero-shot
classification performance. Extensive experiments conducted on both medical and
natural image datasets demonstrate that KPL enables effective zero-shot image
classification, outperforming all baselines. These findings highlight the great
potential in this paradigm of mining knowledge from CLIP for medical image
classification and broader areas.
|
2501.11233
|
PlotEdit: Natural Language-Driven Accessible Chart Editing in PDFs via
Multimodal LLM Agents
|
cs.IR cs.CL cs.MA
|
Chart visualizations, while essential for data interpretation and
communication, are predominantly accessible only as images in PDFs, lacking
source data tables and stylistic information. To enable effective editing of
charts in PDFs or digital scans, we present PlotEdit, a novel multi-agent
framework for natural language-driven end-to-end chart image editing via
self-reflective LLM agents. PlotEdit orchestrates five LLM agents: (1)
Chart2Table for data table extraction, (2) Chart2Vision for style attribute
identification, (3) Chart2Code for retrieving rendering code, (4) Instruction
Decomposition Agent for parsing user requests into executable steps, and (5)
Multimodal Editing Agent for implementing nuanced chart component modifications
- all coordinated through multimodal feedback to maintain visual fidelity.
PlotEdit outperforms existing baselines on the ChartCraft dataset across style,
layout, format, and data-centric edits, enhancing accessibility for visually
challenged users and improving novice productivity.
|
2501.11236
|
A New Formulation of Lipschitz Constrained With Functional Gradient
Learning for GANs
|
cs.CV cs.LG
|
This paper introduces a promising alternative method for training Generative
Adversarial Networks (GANs) on large-scale datasets with clear theoretical
guarantees. GANs are typically learned through a minimax game between a
generator and a discriminator, which is known to be empirically unstable.
Previous learning paradigms have encountered mode collapse issues without a
theoretical solution. To address these challenges, we propose a novel
Lipschitz-constrained Functional Gradient GANs learning (Li-CFG) method to
stabilize the training of GAN and provide a theoretical foundation for
effectively increasing the diversity of synthetic samples by reducing the
neighborhood size of the latent vector. Specifically, we demonstrate that the
neighborhood size of the latent vector can be reduced by increasing the norm of
the discriminator gradient, resulting in enhanced diversity of synthetic
samples. To efficiently enlarge the norm of the discriminator gradient, we
introduce a novel {\epsilon}-centered gradient penalty that amplifies the norm
of the discriminator gradient using the hyper-parameter {\epsilon}. In
comparison to other constraints, our method enlarges the discriminator norm,
thus obtaining the smallest neighborhood size of the latent vector. Extensive
experiments on benchmark datasets for image generation demonstrate the efficacy
of the Li-CFG method and the {\epsilon}-centered gradient penalty. The results
showcase improved stability and increased diversity of synthetic samples.
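As a hedged numerical illustration of the centering idea (our reading of the abstract, not the paper's exact formulation): a gradient penalty centered at a hyper-parameter epsilon rewards discriminator gradient norms close to epsilon, so a larger epsilon encourages a larger norm.

```python
import numpy as np

# Sketch of an epsilon-centered gradient penalty (illustrative reading of the
# abstract): instead of pushing the discriminator gradient norm toward 0
# (zero-centered GP) or 1 (WGAN-GP), the penalty is centered at a
# hyper-parameter eps, so a larger eps encourages a larger gradient norm.
def epsilon_centered_penalty(grad, eps):
    norm = np.linalg.norm(grad)
    return (norm - eps) ** 2

g = np.array([0.3, -0.4])  # toy discriminator gradient, ||g|| = 0.5
p0 = epsilon_centered_penalty(g, eps=0.0)  # zero-centered: (0.5 - 0)^2 = 0.25
p2 = epsilon_centered_penalty(g, eps=2.0)  # eps-centered: (0.5 - 2)^2 = 2.25
```

With eps = 0 the penalty is minimized by a vanishing gradient norm, whereas a positive eps makes a larger norm optimal, matching the abstract's claim that the penalty amplifies the discriminator gradient norm.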
|
2501.11238
|
WSSM: Geographic-enhanced hierarchical state-space model for global
station weather forecast
|
cs.LG cs.AI physics.ao-ph
|
Global Station Weather Forecasting (GSWF), a prominent meteorological
research area, is pivotal in providing timely localized weather predictions.
Despite the progress existing models have made in the overall accuracy of the
GSWF, executing high-precision extreme event prediction still presents a
substantial challenge. The recent emergence of state-space models, with their
ability to efficiently capture continuous-time dynamics and latent states,
offers potential solutions. However, early investigations indicated that Mamba
underperforms in the context of GSWF, suggesting that further adaptation and
optimization are needed. To tackle this problem, in this paper, we introduce Weather
State-space Model (WSSM), a novel Mamba-based approach tailored for GSWF.
Geographical knowledge is integrated in addition to the widely-used positional
encoding to represent the absolute spatio-temporal position. Multi-scale
time-frequency features are synthesized from coarse to fine to model dynamics
ranging from seasonal patterns to extreme weather. Our method effectively improves the
overall prediction accuracy and addresses the challenge of forecasting extreme
weather events. The state-of-the-art results obtained on the Weather-5K subset
underscore the efficacy of WSSM.
|
2501.11240
|
Fast instance-specific algorithm configuration with graph neural network
|
cs.LG
|
Combinatorial optimization (CO) problems are pivotal across various
industrial applications, where the speed of solving these problems is crucial.
Improving the performance of CO solvers across diverse input instances requires
fine-tuning solver parameters for each instance. However, this tuning process
is time-consuming, and the time required increases with the number of
instances. To address this, a method called instance-specific algorithm
configuration (ISAC) has been devised. This approach involves two main steps:
training and execution. During the training step, features are extracted from
various instances and then grouped into clusters. For each cluster, parameters
are fine-tuned. This cluster-specific tuning process results in a set of
generalized parameters for instances belonging to each class. In the execution
step, features are extracted from an unknown instance to determine its cluster,
and the corresponding pre-tuned parameters are applied. Generally, the running
time of a solver is evaluated by the time to solution ($TTS$). However, methods
like ISAC require preprocessing. Therefore, the total execution time is
$T_{tot}=TTS+T_{tune}$, where $T_{tune}$ represents the tuning time. While the
goal is to minimize $T_{tot}$, it is important to note that extracting features
in the ISAC method requires a certain amount of computational time. The
extracted features include summary statistics of the solver execution logs,
which take several tens of seconds to compute. This research presents a method to
significantly reduce the time of the ISAC execution step by streamlining
feature extraction and class determination with a graph neural network.
Experimental results show that $T_{tune}$ in the execution step, which takes
several tens of seconds in the original ISAC approach, can be reduced to
under a second.
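The timing decomposition above can be sketched numerically; the values below are hypothetical, chosen only to mirror the orders of magnitude mentioned in the abstract.

```python
# T_tot = TTS + T_tune: total time is the time to solution plus the
# preprocessing/tuning time spent on feature extraction and class lookup.
def total_time(tts_seconds, tune_seconds):
    return tts_seconds + tune_seconds

# Hypothetical values: log-statistics feature extraction takes tens of
# seconds, while GNN-based feature extraction runs in well under a second.
t_original = total_time(tts_seconds=5.0, tune_seconds=30.0)
t_gnn = total_time(tts_seconds=5.0, tune_seconds=0.5)
# The tuning overhead, not the solver itself, dominates the original T_tot.
```

Because $TTS$ is unchanged, shrinking $T_{tune}$ is the only lever the execution step offers, which is exactly what the GNN-based feature extraction targets.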
|
2501.11241
|
Irony in Emojis: A Comparative Study of Human and LLM Interpretation
|
cs.CL cs.CV cs.SI
|
Emojis have become a universal language in online communication, often
carrying nuanced and context-dependent meanings. Among these, irony poses a
significant challenge for Large Language Models (LLMs) due to its inherent
incongruity between appearance and intent. This study examines the ability of
GPT-4o to interpret irony in emojis. By prompting GPT-4o to evaluate the
likelihood of specific emojis being used to express irony on social media and
comparing its interpretations with human perceptions, we aim to bridge the gap
between machine and human understanding. Our findings reveal nuanced insights
into GPT-4o's interpretive capabilities, highlighting areas of alignment with
and divergence from human behavior. Additionally, this research underscores the
importance of demographic factors, such as age and gender, in shaping emoji
interpretation and evaluates how these factors influence GPT-4o's performance.
|
2501.11246
|
Unlocking the Potential: A Novel Tool for Assessing Untapped
Micro-Pumped Hydro Energy Storage Systems in Michigan
|
eess.SY cs.SY
|
This study presents an innovative tool designed to unlock the potential of
Michigan's lakes and dams for applications such as water resource management
and renewable energy generation. Given Michigan's relatively flat landscape,
the focus is on systems that could serve as micro-hydro energy storage
solutions. To ensure accuracy and reliability, the tool incorporates extensive
data gathered from authorized sources, covering more than 420 water facilities
and potential reservoirs in the state. These data are used as part of a case
study to evaluate the tool's capabilities. Key parameters assessed include
horizontal and vertical distances (head), volume, and the total storage
capacity of each reservoir, measured in GWh. By analyzing these factors, the
tool determines the suitability of various lakes and dams for hydroelectric
power generation, and other uses based on the horizontal and vertical threshold
distances. Its robust assessment framework integrates these metrics to
comprehensively evaluate each site's potential. The tool's user-friendly interface
and advanced data visualization features make the findings easy to interpret,
facilitating optimal resource utilization and informed decision-making for
state authorities. Hence, this tool represents a meaningful advancement in
managing Michigan's water resources sustainably, promoting environmentally
friendly practices, and supporting economic development.
|
2501.11247
|
Multivariate Wireless Link Quality Prediction Based on Pre-trained Large
Language Models
|
cs.LG cs.NI
|
Accurate and reliable link quality prediction (LQP) is crucial for optimizing
network performance, ensuring communication stability, and enhancing user
experience in wireless communications. However, LQP faces significant
challenges due to the dynamic and lossy nature of wireless links, which are
influenced by interference, multipath effects, fading, and blockage. In this
paper, we propose GAT-LLM, a novel multivariate wireless link quality
prediction model that combines Large Language Models (LLMs) with Graph
Attention Networks (GAT) to enable accurate and reliable multivariate LQP of
wireless communications. By framing LQP as a time series prediction task and
appropriately preprocessing the input data, we leverage LLMs to improve the
accuracy of link quality prediction. To address the limitations of LLMs in
multivariate prediction due to typically handling one-dimensional data, we
integrate GAT to model interdependencies among multiple variables across
different protocol layers, enhancing the model's ability to handle complex
dependencies. Experimental results demonstrate that GAT-LLM significantly
improves the accuracy and robustness of link quality prediction, particularly
in multi-step prediction scenarios.
|
2501.11249
|
Enhancing SAR Object Detection with Self-Supervised Pre-training on
Masked Auto-Encoders
|
cs.CV
|
Supervised fine-tuning (SFT) methods are highly effective for automated
interpretation of SAR images, leveraging the powerful representation knowledge
of pre-trained models. Due to the lack of domain-specific pre-trained backbones
for SAR images, the traditional strategy is to load foundation models
pre-trained on natural scenes such as ImageNet, whose image characteristics
differ greatly from those of SAR images. This may hinder model performance on
downstream tasks when adopting SFT on small-scale annotated SAR data. In this
paper, a self-supervised learning (SSL) method of masked image modeling based
on Masked Auto-Encoders (MAE) is proposed to learn feature representations of
SAR images during pre-training and to benefit the SFT-based object detection
task in SAR images. The
evaluation experiments on the large-scale SAR object detection benchmark named
SARDet-100k verify that the proposed method captures proper latent
representations of SAR images and improves the model generalization in
downstream tasks by converting the pre-trained domain from natural scenes to
SAR images through SSL. The proposed method achieves an improvement of 1.3 mAP
on the SARDet-100k benchmark compared to only the SFT strategies.
|
2501.11252
|
Constant Optimization Driven Database System Testing
|
cs.SE cs.DB cs.PL
|
Logic bugs are bugs that can cause database management systems (DBMSs) to
silently produce incorrect results for given queries. Such bugs are severe,
because they can easily be overlooked by both developers and users, and can
cause applications that rely on the DBMSs to malfunction. In this work, we
propose Constant-Optimization-Driven Database Testing (CODDTest) as a novel
approach for detecting logic bugs in DBMSs. This method draws inspiration from
two well-known optimizations in compilers: constant folding and constant
propagation. Our key insight is that for a certain database state and query
containing a predicate, we can apply constant folding on the predicate by
replacing an expression in the predicate with a constant, anticipating that the
results of this predicate remain unchanged; any discrepancy indicates a bug in
the DBMS. We evaluated CODDTest on five mature and extensively-tested
DBMSs-SQLite, MySQL, CockroachDB, DuckDB, and TiDB-and found 45 unique,
previously unknown bugs in them. Out of these, 24 are unique logic bugs. Our
manual analysis of the state-of-the-art approaches indicates that 11 logic bugs
are detectable only by CODDTest. We believe that CODDTest is easy to implement,
and can be widely adopted in practice.
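The constant-folding oracle described above can be sketched against a real DBMS. The following is a minimal illustration of the idea using Python's built-in SQLite bindings, with a toy table and predicate of our own choosing (not a query from the paper):

```python
import sqlite3

# CODDTest-style check: for a fixed database state, replace a sub-expression
# in a predicate with the constant it evaluates to, and verify the query
# result is unchanged. Any discrepancy would indicate a logic bug.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

# The sub-expression (1 + 1) folds to the constant 2, so both queries must
# return identical result sets on this database state.
original = conn.execute("SELECT a FROM t WHERE b > (1 + 1) * 10").fetchall()
folded = conn.execute("SELECT a FROM t WHERE b > 2 * 10").fetchall()
assert original == folded  # → [(3,)], the only row with b > 20
```

In the actual approach this comparison is automated across generated database states and queries; the example only shows the oracle on a single hand-written pair.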
|
2501.11253
|
How Well Do Supervised 3D Models Transfer to Medical Imaging Tasks?
|
eess.IV cs.CV
|
The pre-training and fine-tuning paradigm has become prominent in transfer
learning. For example, if the model is pre-trained on ImageNet and then
fine-tuned to PASCAL, it can significantly outperform that trained on PASCAL
from scratch. While ImageNet pre-training has shown enormous success, it is
formed in 2D, and the learned features are for classification tasks; when
transferring to more diverse tasks, like 3D image segmentation, its performance
is inevitably compromised due to the deviation from the original ImageNet
context. A significant challenge lies in the lack of large, annotated 3D
datasets rivaling the scale of ImageNet for model pre-training. To overcome
this challenge, we make two contributions. Firstly, we construct AbdomenAtlas
1.1 that comprises 9,262 three-dimensional computed tomography (CT) volumes
with high-quality, per-voxel annotations of 25 anatomical structures and pseudo
annotations of seven tumor types. Secondly, we develop a suite of models that
are pre-trained on our AbdomenAtlas 1.1 for transfer learning. Our preliminary
analyses indicate that the model trained only with 21 CT volumes, 672 masks,
and 40 GPU hours has a transfer learning ability similar to the model trained
with 5,050 (unlabeled) CT volumes and 1,152 GPU hours. More importantly, the
transfer learning ability of supervised models can further scale up with larger
annotated datasets, achieving significantly better performance than preexisting
pre-trained models, irrespective of their pre-training methodologies or data
sources. We hope this study can facilitate collective efforts in constructing
larger 3D medical datasets and more releases of supervised pre-trained models.
|
2501.11255
|
Bounding the Settling Time of Finite-Time Stable Systems using Sum of
Squares
|
math.OC cs.SY eess.SY
|
Finite-time stability (FTS) of a differential equation guarantees that
solutions reach a given equilibrium point in finite time, where the time of
convergence depends on the initial state of the system. For traditional
stability notions such as exponential stability, the convex optimization
framework of Sum-of-Squares (SoS) enables the computation of polynomial
Lyapunov functions to certify stability. However, finite-time stable systems
are characterized by non-Lipschitz, non-polynomial vector fields, rendering
standard SoS methods inapplicable. To this end, in this paper, we show that the
computation of a non-polynomial Lyapunov function certifying finite-time
stability can be reformulated as computation of a polynomial one under a
particular transformation that we develop in this work. As a result, SoS can be
utilized to compute a Lyapunov function for FTS. This Lyapunov function can
then be used to obtain a bound on the settling time. We first present this
approach for the scalar case and then extend it to the multivariate case.
Numerical examples demonstrate the effectiveness of our approach in both
certifying finite-time stability and computing accurate settling time bounds.
This work represents the first combination of SoS programming with settling
time bounds for finite-time stable systems.
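A standard textbook example (not taken from the paper) illustrates both the finite-time property and the non-Lipschitz obstacle: for the scalar system below with $0<\alpha<1$, separation of variables yields an explicit settling time, while the vector field fails to be Lipschitz at the origin, which is why polynomial SoS certificates cannot be applied directly.

```latex
\dot{x} = -\operatorname{sign}(x)\,|x|^{\alpha}, \quad 0 < \alpha < 1,
\qquad\Longrightarrow\qquad
T(x_0) = \frac{|x_0|^{1-\alpha}}{1-\alpha}.
```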
|
2501.11258
|
Enhancing Uncertainty Estimation in Semantic Segmentation via
Monte-Carlo Frequency Dropout
|
cs.CV cs.LG eess.IV stat.ML
|
Monte-Carlo (MC) Dropout provides a practical solution for estimating
predictive distributions in deterministic neural networks. Traditional dropout,
applied within the signal space, may fail to account for frequency-related
noise common in medical imaging, leading to biased predictive estimates. A
novel approach extends Dropout to the frequency domain, allowing stochastic
attenuation of signal frequencies during inference. This creates diverse global
textural variations in feature maps while preserving structural integrity -- a
factor we hypothesize, and empirically show, contributes to accurately
estimating uncertainties in semantic segmentation. We evaluated traditional
MC-Dropout and the MC-frequency Dropout in three segmentation tasks involving
different imaging modalities: (i) prostate zones in biparametric MRI, (ii)
liver tumors in contrast-enhanced CT, and (iii) lungs in chest X-ray scans. Our
results show that MC-Frequency Dropout improves calibration, convergence, and
semantic uncertainty, thereby improving prediction scrutiny, boundary
delineation, and has the potential to enhance medical decision-making.
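The frequency-domain dropout idea can be sketched as follows; this is an illustration under our own assumptions (mask shape, rescaling convention), not the authors' implementation.

```python
import numpy as np

# Frequency-domain dropout sketch: transform a feature map to the frequency
# domain, randomly attenuate frequency components, and transform back. Kept
# components are rescaled so the expected spectrum is unchanged (the usual
# inverted-dropout convention).
def mc_frequency_dropout(feature_map, drop_rate=0.3, rng=None):
    rng = np.random.default_rng(rng)
    spectrum = np.fft.fft2(feature_map)
    mask = rng.random(spectrum.shape) >= drop_rate
    spectrum = spectrum * mask / (1.0 - drop_rate)
    return np.fft.ifft2(spectrum).real

# Multiple stochastic forward passes at inference time: their mean
# approximates the prediction and their spread estimates uncertainty.
x = np.random.default_rng(0).normal(size=(8, 8))
samples = [mc_frequency_dropout(x, 0.3, rng=i) for i in range(16)]
mean = np.mean(samples, axis=0)
```

Dropping frequency components rather than spatial activations perturbs global texture while leaving coarse structure largely intact, which is the property the abstract attributes to the method.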
|
2501.11260
|
A Survey of World Models for Autonomous Driving
|
cs.RO cs.CV
|
Recent breakthroughs in autonomous driving have been propelled by advances in
robust world modeling, fundamentally transforming how vehicles interpret
dynamic scenes and execute safe decision-making. In particular, world models
have emerged as a linchpin technology, offering high-fidelity representations
of the driving environment that integrate multi-sensor data, semantic cues, and
temporal dynamics. This paper systematically reviews recent advances in world
models for autonomous driving, proposing a three-tiered taxonomy: 1) Generation
of Future Physical World, covering image-, BEV-, OG-, and PC-based generation
methods that enhance scene evolution modeling through diffusion models and 4D
occupancy forecasting; 2) Behavior Planning for Intelligent Agents, combining
rule-driven and learning-based paradigms with cost map optimization and
reinforcement learning for trajectory generation in complex traffic conditions;
3) Interaction Between Prediction and Planning, achieving multi-agent
collaborative decision-making through latent space diffusion and
memory-augmented architectures. The study further analyzes training paradigms
including self-supervised learning, multimodal pretraining, and generative data
augmentation, while evaluating world models' performance in scene understanding
and motion prediction tasks. Future research must address key challenges in
self-supervised representation learning, long-tail scenario generation, and
multimodal fusion to advance the practical deployment of world models in
complex urban environments. Overall, our comprehensive analysis provides a
theoretical framework and technical roadmap for harnessing the transformative
potential of world models in advancing safe and reliable autonomous driving
solutions.
|
2501.11263
|
Towards Loss-Resilient Image Coding for Unstable Satellite Networks
|
cs.CV eess.IV
|
Geostationary Earth Orbit (GEO) satellite communication demonstrates
significant advantages in emergency short burst data services. However,
unstable satellite networks, particularly those with frequent packet loss,
present a severe challenge to accurate image transmission. To address it, we
propose a loss-resilient image coding approach that leverages end-to-end
optimization in learned image compression (LIC). Our method builds on the
channel-wise progressive coding framework, incorporating Spatial-Channel
Rearrangement (SCR) on the encoder side and Mask Conditional Aggregation (MCA)
on the decoder side to improve reconstruction quality with unpredictable
errors. By integrating the Gilbert-Elliot model into the training process, we
enhance the model's ability to generalize in real-world network conditions.
Extensive evaluations show that our approach outperforms traditional and deep
learning-based methods in terms of compression performance and stability under
diverse packet loss, offering robust and efficient progressive transmission
even in challenging environments. Code is available at
https://github.com/NJUVISION/LossResilientLIC.
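The Gilbert-Elliott channel used during training can be simulated with a two-state Markov chain; the sketch below uses illustrative parameter values of our own choosing, not values from the paper.

```python
import random

# Gilbert-Elliott packet-loss channel: a two-state Markov chain in which the
# "good" state rarely drops packets and the "bad" state drops them often,
# producing the bursty losses typical of unstable satellite links.
def gilbert_elliott(n_packets, p_gb=0.05, p_bg=0.3,
                    loss_good=0.01, loss_bad=0.7, seed=0):
    rng = random.Random(seed)
    state, losses = "good", []
    for _ in range(n_packets):
        loss_prob = loss_good if state == "good" else loss_bad
        losses.append(rng.random() < loss_prob)
        # State transitions: good -> bad with p_gb, bad -> good with p_bg.
        if state == "good" and rng.random() < p_gb:
            state = "bad"
        elif state == "bad" and rng.random() < p_bg:
            state = "good"
    return losses

losses = gilbert_elliott(10_000)
loss_rate = sum(losses) / len(losses)  # bursty, correlated losses
```

Training the codec against such correlated loss traces, rather than i.i.d. erasures, is what the abstract credits for the model's generalization to real network conditions.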
|
2501.11264
|
Code Readability in the Age of Large Language Models: An Industrial Case
Study from Atlassian
|
cs.SE cs.AI cs.CL
|
Programmers spend a significant amount of time reading code during the
software development process. This trend is amplified by the emergence of large
language models (LLMs) that automatically generate code. However, little is
known about the readability of the LLM-generated code and whether it is still
important from practitioners' perspectives in this new era. In this paper, we
conduct a survey to explore the practitioners' perspectives on code readability
in the age of LLMs and investigate the readability of our LLM-based software
development agents framework, HULA, by comparing its generated code with
human-written code in real-world scenarios. Overall, the findings underscore
that (1) readability remains a critical aspect of software development; and (2)
the readability of our LLM-generated code is comparable to that of human-written code,
fostering the establishment of appropriate trust and driving the broad adoption
of our LLM-powered software development platform.
|
2501.11265
|
A Metric Topology of Deep Learning for Data Classification
|
cs.LG stat.ML
|
Empirically, Deep Learning (DL) has demonstrated unprecedented success in
practical applications. However, DL remains by and large a mysterious
"black-box", spurring recent theoretical research to build its mathematical
foundations. In this paper, we investigate DL for data classification through
the prism of metric topology. Considering that conventional Euclidean metric
over the network parameter space typically fails to discriminate DL networks
according to their classification outcomes, we propose from a probabilistic
point of view a meaningful distance measure, whereby DL networks yielding
similar classification performances are close. The proposed distance measure
defines an equivalence relation among network parameter vectors under which
networks performing equally well belong to the same equivalence class.
Interestingly, our proposed distance measure can provably serve as a metric on
the quotient set modulo this equivalence relation. Then, under quite mild
conditions it is shown that, apart from a vanishingly small subset of networks
likely to predict non-unique labels, our proposed metric space is compact, and
coincides with the well-known quotient topological space. Our study contributes
to fundamental understanding of DL, and opens up new ways of studying DL using
fruitful metric space theory.
|
2501.11267
|
Communication-Efficient Federated Learning by Quantized Variance
Reduction for Heterogeneous Wireless Edge Networks
|
cs.DC cs.LG
|
Federated learning (FL) has been recognized as a viable solution for
local-privacy-aware collaborative model training in wireless edge networks, but
its practical deployment is hindered by the high communication overhead caused
by frequent and costly server-device synchronization. Notably, most existing
communication-efficient FL algorithms fail to reduce the significant
inter-device variance resulting from the prevalent issue of device
heterogeneity. This variance severely decelerates algorithm convergence,
increasing communication overhead and making it more challenging to achieve a
well-performed model. In this paper, we propose a novel communication-efficient
FL algorithm, named FedQVR, which relies on a sophisticated variance-reduced
scheme to achieve heterogeneity-robustness in the presence of quantized
transmission and heterogeneous local updates among active edge devices.
Comprehensive theoretical analysis justifies that FedQVR is inherently
resilient to device heterogeneity and has a comparable convergence rate even
with a small number of quantization bits, yielding significant communication
savings. Besides, considering non-ideal wireless channels, we propose FedQVR-E
which enhances the convergence of FedQVR by performing joint allocation of
bandwidth and quantization bits across devices under constrained transmission
delays. Extensive experimental results are also presented to demonstrate the
superior performance of the proposed algorithms over their counterparts in
terms of both communication efficiency and application performance.
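Quantized transmission with few bits, as used by the devices above, can be sketched with an unbiased stochastic quantizer; this is our own illustration of the general technique, not the FedQVR algorithm itself.

```python
import numpy as np

# Unbiased stochastic quantization to b bits: the kind of compressed update
# a device would upload in place of a full-precision vector.
def stochastic_quantize(v, bits=4, rng=None):
    rng = np.random.default_rng(rng)
    levels = 2 ** bits - 1
    vmax = np.abs(v).max()
    if vmax == 0:
        return v.copy()
    scaled = np.abs(v) / vmax * levels  # map magnitudes to [0, levels]
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part -> E[q] = scaled,
    # so the quantizer is unbiased.
    q = lower + (rng.random(v.shape) < (scaled - lower))
    return np.sign(v) * q / levels * vmax

v = np.linspace(-1.0, 1.0, 9)
q = stochastic_quantize(v, bits=4, rng=0)
```

Each element is off by at most one quantization level (here `vmax / 15`), and with `bits = 4` the payload shrinks from 32 or 64 bits per coordinate to 4 plus a shared scale, which is the communication saving the abstract refers to.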
|
2501.11268
|
Sparse L0-norm based Kernel-free Quadratic Surface Support Vector
Machines
|
cs.LG stat.ML
|
Kernel-free quadratic surface support vector machine (SVM) models have gained
significant attention in machine learning. However, introducing a quadratic
classifier increases the model's complexity by quadratically expanding the
number of parameters relative to the dimensionality of the data, exacerbating
overfitting. To address this, we propose sparse $\ell_0$-norm based Kernel-free
quadratic surface SVMs, designed to mitigate overfitting and enhance
interpretability. Given the intractable nature of these models, we present a
penalty decomposition algorithm to efficiently obtain first-order optimality
points. Our analysis shows that the subproblems in this framework either admit
closed-form solutions or can leverage duality theory to improve computational
efficiency. Through empirical evaluations on real-world datasets, we
demonstrate the efficacy and robustness of our approach, showcasing its
potential to advance Kernel-free quadratic surface SVMs in practical
applications while addressing overfitting concerns. All the implemented models
and experiment codes are available at
\url{https://github.com/raminzandvakili/L0-QSVM}.
|
2501.11269
|
Can xLLMs Understand the Structure of Dialog? Exploring Multilingual
Response Generation in Complex Scenarios
|
cs.CL
|
Multilingual research has garnered increasing attention, especially in the
domain of dialogue systems. The rapid advancements in large language models
(LLMs) have fueled the demand for high-performing multilingual models. However,
two major challenges persist: the scarcity of high-quality multilingual
datasets and the limited complexity of existing datasets in capturing realistic
dialogue scenarios. To address these gaps, we introduce XMP, a high-quality
parallel Multilingual dataset sourced from Multi-party Podcast dialogues. Each
sample in the dataset features at least three participants discussing a wide
range of topics, including society, culture, politics, and
entertainment. Through extensive experiments, we uncover significant limitations
in previously recognized multilingual capabilities of LLMs when applied to such
complex dialogue scenarios. For instance, the widely accepted multilingual
complementary ability of LLMs is notably impacted. By conducting further
experiments, we explore the mechanisms of LLMs in multilingual environments
from multiple perspectives, shedding new light on their performance in
real-world, diverse conversational contexts.
|
2501.11270
|
Spatiotemporal Air Quality Mapping in Urban Areas Using Sparse Sensor
Data, Satellite Imagery, Meteorological Factors, and Spatial Features
|
cs.LG cs.AI cs.CV
|
Monitoring air pollution is crucial for protecting human health from exposure
to harmful substances. Traditional methods of air quality monitoring, such as
ground-based sensors and satellite-based remote sensing, face limitations due
to high deployment costs, sparse sensor coverage, and environmental
interferences. To address these challenges, this paper proposes a framework for
high-resolution spatiotemporal Air Quality Index (AQI) mapping using sparse
sensor data, satellite imagery, and various spatiotemporal factors. By
leveraging Graph Neural Networks (GNNs), we estimate AQI values at unmonitored
locations based on both spatial and temporal dependencies. The framework
incorporates a wide range of environmental features, including meteorological
data, road networks, points of interest (PoIs), population density, and urban
green spaces, which enhance prediction accuracy. We illustrate the use of our
approach through a case study in Lahore, Pakistan, where multi-resolution data
is used to generate the air quality index map at a fine spatiotemporal scale.
|
2501.11273
|
Multi-round, Chain-of-thought Post-editing for Unfaithful Summaries
|
cs.CL
|
Recent large language models (LLMs) have demonstrated a remarkable ability to
perform natural language understanding and generation tasks. In this work, we
investigate the use of LLMs for evaluating faithfulness in news summarization,
finding that it achieves a strong correlation with human judgments. We further
investigate LLMs' capabilities as a faithfulness post-editor, experimenting
with different chain-of-thought prompts for locating and correcting factual
inconsistencies between a generated summary and the source news document and
are able to achieve a higher editing success rate than was reported in prior
work. We perform both automated and human evaluations of the post-edited
summaries, finding that prompting LLMs using chain-of-thought reasoning about
factual error types is an effective faithfulness post-editing strategy,
performing comparably to fine-tuned post-editing models. We also demonstrate
that multiple rounds of post-editing, which has not previously been explored,
can be used to gradually improve the faithfulness of summaries whose errors
cannot be fully corrected in a single round.
|
2501.11275
|
Higher Order Approximation Rates for ReLU CNNs in Korobov Spaces
|
cs.LG cs.NA math.NA
|
This paper investigates the $L_p$ approximation error for higher order
Korobov functions using deep convolutional neural networks (CNNs) with ReLU
activation. For target functions having a mixed derivative of order m+1 in each
direction, we improve classical approximation rate of second order to (m+1)-th
order (modulo a logarithmic factor) in terms of the depth of CNNs. The key
ingredient in our analysis is approximate representation of high-order sparse
grid basis functions by CNNs. The results suggest that higher order
expressivity of CNNs does not severely suffer from the curse of dimensionality.
|
2501.11276
|
ITCFN: Incomplete Triple-Modal Co-Attention Fusion Network for Mild
Cognitive Impairment Conversion Prediction
|
eess.IV cs.CV
|
Alzheimer's disease (AD) is a common neurodegenerative disease among the
elderly. Early prediction and timely intervention of its prodromal stage, mild
cognitive impairment (MCI), can decrease the risk of advancing to AD. Combining
information from various modalities can significantly improve predictive
accuracy. However, challenges such as missing data and heterogeneity across
modalities complicate multimodal learning methods as adding more modalities can
worsen these issues. Current multimodal fusion techniques often fail to adapt
to the complexity of medical data, hindering the ability to identify
relationships between modalities. To address these challenges, we propose an
innovative multimodal approach for predicting MCI conversion, focusing
specifically on the issues of missing positron emission tomography (PET) data
and integrating diverse medical information. The proposed incomplete
triple-modal MCI conversion prediction network is tailored for this purpose.
Through the missing modal generation module, we synthesize the missing PET data
from magnetic resonance imaging and extract features using specifically
designed encoders. We also develop a channel aggregation module and a
triple-modal co-attention fusion module to reduce feature redundancy and
achieve effective multimodal data fusion. Furthermore, we design a loss
function to handle missing modality issues and align cross-modal features.
These components collectively harness multimodal data to boost network
performance. Experimental results on the ADNI1 and ADNI2 datasets show that our
method significantly surpasses existing unimodal and other multimodal models.
Our code is available at https://github.com/justinhxy/ITFC.
|
2501.11280
|
Empirical Bayes Estimation for Lasso-Type Regularizers: Analysis of
Automatic Relevance Determination
|
math.ST cs.IT cs.LG math.IT stat.TH
|
This paper focuses on linear regression models with non-conjugate
sparsity-inducing regularizers such as lasso and group lasso. Although
the empirical Bayes approach enables us to estimate the regularization parameter,
little is known about the properties of the estimators. In particular, there are
many unexplained aspects regarding the specific conditions under which the
mechanism of automatic relevance determination (ARD) occurs. In this paper, we
derive the empirical Bayes estimators for the group lasso regularized linear
regression models with a limited number of parameters. It is shown that the
estimators diverge under a certain condition, giving rise to the ARD mechanism.
We also prove that empirical Bayes methods can produce the ARD mechanism in
general regularized linear regression models and clarify the conditions under
which models such as ridge, lasso, and group lasso can produce it.
|
2501.11282
|
Several classes of linear codes with few weights derived from Weil sums
|
cs.IT math.IT
|
Linear codes with few weights have applications in secret sharing,
authentication codes, association schemes and strongly regular graphs. In this
paper, several classes of $t$-weight linear codes over ${\mathbb F}_{q}$ are
presented with the defining sets given by the intersection, difference and
union of two certain sets, where $t=3,4,5,6$ and $q$ is an odd prime power. By
using Weil sums and Gauss sums, the parameters and weight distributions of
these codes are determined completely. Moreover, three classes of optimal codes
meeting the Griesmer bound are obtained, and computer experiments show that
many (almost) optimal codes can be derived from our constructions.
|
2501.11283
|
Large Language Model Agents for Radio Map Generation and Wireless
Network Planning
|
cs.IT math.IT
|
Using commercial software for radio map generation and wireless network
planning often requires complex manual operations, posing significant
challenges in terms of scalability, adaptability, and user-friendliness. To
address these issues, we propose an automated solution
that employs large language model (LLM) agents. These agents are designed to
autonomously generate radio maps and facilitate wireless network planning for
specified areas, thereby minimizing the necessity for extensive manual
intervention. To validate the effectiveness of our proposed solution, we
develop a software platform that integrates LLM agents. Experimental results
demonstrate that a large number of manual operations can be saved via the
proposed LLM agents, and the automated solution can achieve enhanced coverage and
signal-to-interference-noise ratio (SINR), especially in urban environments.
|
2501.11284
|
RedStar: Does Scaling Long-CoT Data Unlock Better Slow-Reasoning
Systems?
|
cs.LG cs.AI cs.CL
|
Can scaling transform reasoning? In this work, we explore the untapped
potential of scaling Long Chain-of-Thought (Long-CoT) data to 1000k samples,
pioneering the development of a slow-thinking model, RedStar. Through extensive
experiments with various LLMs and different sizes, we uncover the ingredients
for specialization and scale for Long-CoT training. Surprisingly, even smaller
models show significant performance gains with limited data, revealing the
sample efficiency of Long-CoT and the critical role of sample difficulty in the
learning process. Our findings demonstrate that Long-CoT reasoning can be
effectively triggered with just a few thousand examples, while larger models
achieve unparalleled improvements. We also introduce reinforcement learning
(RL)-scale training as a promising direction for advancing slow-thinking
systems. RedStar shines across domains: on the MATH-Hard benchmark,
RedStar-code-math boosts performance from 66.2\% to 81.6\%, and on the USA Math
Olympiad (AIME), it solves 46.7\% of problems using only 21k mixed-code-math
datasets. In multimodal tasks like GeoQA and MathVista-GEO, RedStar-Geo
achieves competitive results with minimal Long-CoT data, outperforming other
slow-thinking systems like QvQ-Preview. Compared to QwQ, RedStar strikes the
perfect balance between reasoning and generalizability. Our work highlights
that, with careful tuning, scaling Long-CoT can unlock extraordinary reasoning
capabilities, even with limited data, and set a new standard for slow-thinking
models across diverse challenges. Our data and models are released at
https://huggingface.co/RedStar-Reasoning.
|
2501.11288
|
PD-SORT: Occlusion-Robust Multi-Object Tracking Using Pseudo-Depth Cues
|
cs.CV
|
Multi-object tracking (MOT) is a rising topic in video processing
technologies and has important application value in consumer electronics.
Currently, tracking-by-detection (TBD) is the dominant paradigm for MOT, which
performs target detection and association frame by frame. However, the
association performance of TBD methods degrades in complex scenes with heavy
occlusions, which hinders the application of such methods in real-world
scenarios. To this end, we incorporate pseudo-depth cues to enhance the
association performance and propose Pseudo-Depth SORT (PD-SORT). First, we
extend the Kalman filter state vector with pseudo-depth states. Second, we
introduce a novel depth volume IoU (DVIoU) by combining the conventional 2D IoU
with pseudo-depth. Furthermore, we develop a quantized pseudo-depth measurement
(QPDM) strategy for more robust data association. Besides, we also integrate
camera motion compensation (CMC) to handle dynamic camera situations. With the
above designs, PD-SORT significantly alleviates the occlusion-induced ambiguous
associations and achieves leading performances on DanceTrack, MOT17, and MOT20.
Note that the improvement is especially obvious on DanceTrack, where objects
show complex motions, similar appearances, and frequent occlusions. The code is
available at https://github.com/Wangyc2000/PD_SORT.
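The exact DVIoU formula is defined in PD-SORT's released code; as a hedged illustration of the idea only (treating the pseudo-depth state as a third interval so that overlap becomes a volume), a sketch might look like:

```python
def interval_overlap(a_lo, a_hi, b_lo, b_hi):
    """Length of the overlap of two 1-D intervals (zero if disjoint)."""
    return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

def dv_iou(box_a, box_b):
    """Illustrative depth-volume IoU. Boxes are (x1, y1, x2, y2, d1, d2),
    where (d1, d2) is a pseudo-depth interval; this is a plausible sketch,
    not necessarily PD-SORT's exact DVIoU definition."""
    ix = interval_overlap(box_a[0], box_a[2], box_b[0], box_b[2])
    iy = interval_overlap(box_a[1], box_a[3], box_b[1], box_b[3])
    iz = interval_overlap(box_a[4], box_a[5], box_b[4], box_b[5])
    inter = ix * iy * iz
    vol = lambda b: (b[2] - b[0]) * (b[3] - b[1]) * (b[5] - b[4])
    union = vol(box_a) + vol(box_b) - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes score 1.0 and disjoint ones 0.0, as with 2D IoU, but boxes that overlap in the image plane while lying at different pseudo-depths are penalized, which is how a depth cue can disambiguate occlusions.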
|
2501.11292
|
Advancing Multi-Party Dialogue Systems with Speaker-ware Contrastive
Learning
|
cs.CL
|
Dialogue response generation has made significant progress, but most research
has focused on dyadic dialogue. In contrast, multi-party dialogues involve more
participants, each potentially discussing different topics, making the task
more complex. Current methods often rely on graph neural networks to model
dialogue context, which helps capture the structural dynamics of multi-party
conversations. However, these methods are heavily dependent on intricate graph
structures and dataset annotations, and they often overlook the distinct
speaking styles of participants. To address these challenges, we propose CMR, a
Contrastive learning-based Multi-party dialogue Response generation model. CMR
uses self-supervised contrastive learning to better distinguish "who says
what." Additionally, by comparing speakers within the same conversation, the
model captures differences in speaking styles and thematic transitions. To the
best of our knowledge, this is the first approach to apply contrastive learning
in multi-party dialogue generation. Experimental results show that CMR
significantly outperforms state-of-the-art models in multi-party dialogue
response tasks.
|
2501.11293
|
A Machine Learning Framework for Handling Unreliable Absence Label and
Class Imbalance for Marine Stinger Beaching Prediction
|
cs.LG cs.AI stat.ML
|
Bluebottles (\textit{Physalia} spp.) are marine stingers resembling
jellyfish, whose presence on Australian beaches poses a significant public risk
due to their venomous nature. Understanding the environmental factors driving
bluebottles ashore is crucial for mitigating their impact, and machine learning
tools remain, to date, relatively unexplored. We use bluebottle marine stinger
presence/absence data from beaches in Eastern Sydney, Australia, and compare
machine learning models (Multilayer Perceptron, Random Forest, and XGBoost) to
identify factors influencing their presence. We address challenges such as
class imbalance, class overlap, and unreliable absence data by employing data
augmentation techniques, including the Synthetic Minority Oversampling
Technique (SMOTE), Random Undersampling, and a Synthetic Negative Approach that
excludes the negative class. Our results show that SMOTE failed to resolve
class overlap, but the presence-focused approach effectively handled imbalance,
class overlap, and ambiguous absence data. The data attributes such as the wind
direction, which is a circular variable, emerged as a key factor influencing
bluebottle presence, confirming previous inference studies. However, in the
absence of population dynamics, biological behaviours, and life cycles, the
best predictive model appears to be Random Forests combined with Synthetic
Negative Approach. This research contributes to mitigating the risks posed by
bluebottles to beachgoers and provides insights into handling class overlap and
unreliable negative class in environmental modelling.
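For reference, SMOTE (one of the augmentation techniques compared above) synthesizes minority samples by interpolating between a minority point and a random one of its k nearest minority neighbors; a minimal NumPy sketch, not the authors' implementation:

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by linear interpolation
    between a randomly chosen minority point and a random one of its
    k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # Pairwise distances within the minority class (self-distances excluded).
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors per point
    samples = []
    for _ in range(n_new):
        i = rng.integers(n)                    # random minority point
        j = nn[i, rng.integers(k)]             # one of its k neighbors
        lam = rng.random()                     # interpolation factor in [0, 1)
        samples.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(samples)

# Four minority points at the corners of the unit square.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_syn = smote(X_min, n_new=8)
```

Because synthetic points lie on segments between real minority samples, SMOTE cannot separate classes whose supports already overlap, consistent with the failure the abstract reports.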
|
2501.11299
|
MIFNet: Learning Modality-Invariant Features for Generalizable
Multimodal Image Matching
|
cs.CV
|
Many keypoint detection and description methods have been proposed for image
matching or registration. While these methods demonstrate promising performance
for single-modality image matching, they often struggle with multimodal data
because the descriptors trained on single-modality data tend to lack robustness
against the non-linear variations present in multimodal data. Extending such
methods to multimodal image matching often requires well-aligned multimodal
data to learn modality-invariant descriptors. However, acquiring such data is
often costly and impractical in many real-world scenarios. To address this
challenge, we propose a modality-invariant feature learning network (MIFNet) to
compute modality-invariant features for keypoint descriptions in multimodal
image matching using only single-modality training data. Specifically, we
propose a novel latent feature aggregation module and a cumulative hybrid
aggregation module to enhance the base keypoint descriptors trained on
single-modality data by leveraging pre-trained features from Stable Diffusion
models. We validate our method with recent keypoint detection and description
methods in three multimodal retinal image datasets (CF-FA, CF-OCT, EMA-OCTA)
and two remote sensing datasets (Optical-SAR and Optical-NIR). Extensive
experiments demonstrate that the proposed MIFNet is able to learn
modality-invariant features for multimodal image matching without accessing the
targeted modality and has good zero-shot generalization ability. The source
code will be made publicly available.
|
2501.11301
|
Question-to-Question Retrieval for Hallucination-Free Knowledge Access:
An Approach for Wikipedia and Wikidata Question Answering
|
cs.CL cs.AI
|
This paper introduces an approach to question answering over knowledge bases
like Wikipedia and Wikidata by performing "question-to-question" matching and
retrieval from a dense vector embedding store. Instead of embedding document
content, we generate a comprehensive set of questions for each logical content
unit using an instruction-tuned LLM. These questions are vector-embedded and
stored, mapping to the corresponding content. Vector embeddings of user queries
are then matched against this question vector store. The highest similarity
score leads to direct retrieval of the associated article content, eliminating
the need for answer generation. Our method achieves high cosine similarity
(>0.9) for relevant question pairs, enabling highly precise retrieval. This
approach offers several advantages including computational efficiency, rapid
response times, and increased scalability. We demonstrate its effectiveness on
Wikipedia and Wikidata, including multimedia content through structured fact
retrieval from Wikidata, opening up new pathways for multimodal question
answering.
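The retrieval step can be sketched as follows; the `embed` function here is a deterministic toy stand-in (an assumption for illustration, since a real system would use a trained sentence encoder, with the stored questions generated per content unit by an instruction-tuned LLM):

```python
import numpy as np

def embed(text, dim=64):
    """Toy hash-seeded embedding standing in for a real sentence encoder.
    Unit-normalized, so a dot product equals cosine similarity."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Question store: generated question -> content unit it maps to (hypothetical ids).
store = {
    "Who discovered penicillin?": "article:penicillin",
    "When was the Eiffel Tower built?": "article:eiffel_tower",
}
questions = list(store)
index = np.stack([embed(q) for q in questions])   # (num_questions, dim)

def retrieve(user_query):
    """Return the content unit whose stored question best matches the query."""
    sims = index @ embed(user_query)              # cosine similarities
    best = int(np.argmax(sims))
    return store[questions[best]], float(sims[best])
```

The highest-scoring stored question maps directly to its article content, which is returned verbatim; skipping answer generation entirely is what removes the hallucination risk.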
|
2501.11305
|
Generalizable Spectral Embedding with an Application to UMAP
|
cs.LG stat.ML
|
Spectral Embedding (SE) is a popular method for dimensionality reduction,
applicable across diverse domains. Nevertheless, its current implementations
face three prominent drawbacks which curtail its broader applicability:
generalizability (i.e., out-of-sample extension), scalability, and eigenvectors
separation. In this paper, we introduce GrEASE: Generalizable and Efficient
Approximate Spectral Embedding, a novel deep-learning approach designed to
address these limitations. GrEASE incorporates an efficient post-processing
step to achieve eigenvector separation, while ensuring both generalizability
and scalability, allowing for the computation of the Laplacian's eigenvectors
on unseen data. This method expands the applicability of SE to a wider range of
tasks and can enhance its performance in existing applications. We empirically
demonstrate GrEASE's ability to consistently approximate and generalize SE,
while ensuring scalability. Additionally, we show how GrEASE can be leveraged
to enhance existing methods. Specifically, we focus on UMAP, a leading
visualization technique, and introduce NUMAP, a generalizable version of UMAP
powered by GrEASE. Our code is publicly available.
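For context, the classical SE computation that GrEASE approximates is sketched below; the eigenvectors exist only for the points used to build the graph, which is precisely the out-of-sample limitation GrEASE targets (a minimal NumPy sketch, not GrEASE itself):

```python
import numpy as np

def spectral_embedding(X, n_components=2, sigma=1.0):
    """Classical spectral embedding: Gaussian affinity graph, symmetric
    normalized Laplacian, eigenvectors of the smallest non-trivial
    eigenvalues. No out-of-sample extension: a new point requires
    rebuilding the graph and recomputing the eigenvectors."""
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma**2))                 # affinity matrix
    D = W.sum(axis=1)
    L = np.eye(len(X)) - (W / np.sqrt(D)[:, None]) / np.sqrt(D)[None, :]
    vals, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    return vecs[:, 1:1 + n_components]                 # skip trivial eigenvector

rng = np.random.default_rng(0)
emb = spectral_embedding(rng.standard_normal((10, 3)))
```

The dense eigendecomposition is also the scalability bottleneck (cubic in the number of points), the second limitation a learned approximation sidesteps.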
|
2501.11306
|
Collaborative Imputation of Urban Time Series through Cross-city
Meta-learning
|
cs.LG cs.AI
|
Urban time series, such as mobility flows, energy consumption, and pollution
records, encapsulate complex urban dynamics and structures. However, data
collection in each city is impeded by technical challenges such as budget
limitations and sensor failures, necessitating effective data imputation
techniques that can enhance data quality and reliability. Existing imputation
models, categorized into learning-based and analytics-based paradigms, grapple
with the trade-off between capacity and generalizability. Collaborative
learning to reconstruct data across multiple cities holds the promise of
breaking this trade-off. Nevertheless, urban data's inherent irregularity and
heterogeneity issues exacerbate challenges of knowledge sharing and
collaboration across cities. To address these limitations, we propose a novel
collaborative imputation paradigm leveraging meta-learned implicit neural
representations (INRs). INRs offer a continuous mapping from domain coordinates
to target values, integrating the strengths of both paradigms. By imposing
embedding theory, we first employ continuous parameterization to handle
irregularity and reconstruct the dynamical system. We then introduce a
cross-city collaborative learning scheme through model-agnostic meta learning,
incorporating hierarchical modulation and normalization techniques to
accommodate multiscale representations and reduce variance in response to
heterogeneity. Extensive experiments on a diverse urban dataset from 20 global
cities demonstrate our model's superior imputation performance and
generalizability, underscoring the effectiveness of collaborative imputation in
resource-constrained settings.
|
2501.11309
|
Finer-CAM: Spotting the Difference Reveals Finer Details for Visual
Explanation
|
cs.CV cs.AI
|
Class activation map (CAM) has been widely used to highlight image regions
that contribute to class predictions. Despite its simplicity and computational
efficiency, CAM often struggles to identify discriminative regions that
distinguish visually similar fine-grained classes. Prior efforts address this
limitation by introducing more sophisticated explanation processes, but at the
cost of extra complexity. In this paper, we propose Finer-CAM, a method that
retains CAM's efficiency while achieving precise localization of discriminative
regions. Our key insight is that the deficiency of CAM lies not in "how" it
explains, but in "what" it explains. Specifically, previous methods attempt to
identify all cues contributing to the target class's logit value, which
inadvertently also activates regions predictive of visually similar classes. By
explicitly comparing the target class with similar classes and spotting their
differences, Finer-CAM suppresses features shared with other classes and
emphasizes the unique, discriminative details of the target class. Finer-CAM is
easy to implement, compatible with various CAM methods, and can be extended to
multi-modal models for accurate localization of specific concepts.
Additionally, Finer-CAM allows adjustable comparison strength, enabling users
to selectively highlight coarse object contours or fine discriminative details.
Quantitatively, we show that masking out the top 5% of activated pixels by
Finer-CAM results in a larger relative confidence drop compared to baselines.
The source code and demo are available at
https://github.com/Imageomics/Finer-CAM.
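The comparative idea can be sketched in the classic CAM setting (global-average-pooled conv features with a linear classifier); this is an illustrative reading of the abstract, not the paper's exact implementation:

```python
import numpy as np

def cam(features, weights):
    """Vanilla CAM: ReLU of the classifier-weighted sum of the final
    conv feature maps. features: (K, H, W); weights: (K,) for one class."""
    return np.maximum(0.0, np.tensordot(weights, features, axes=1))

def finer_cam(features, w_target, w_similar, alpha=1.0):
    """Explain the *difference* between the target class and a visually
    similar class: shared cues cancel, discriminative ones remain.
    alpha plays the role of the adjustable comparison strength."""
    return cam(features, w_target - alpha * w_similar)

# Two feature maps: map 0 fires at pixel (0,0), map 1 fires at pixel (0,1).
features = np.array([[[1.0, 0.0], [0.0, 0.0]],
                     [[0.0, 1.0], [0.0, 0.0]]])
w_target = np.array([1.0, 1.0])     # target class uses both maps
w_similar = np.array([0.0, 1.0])    # similar class shares map 1 only
```

Here vanilla CAM highlights both locations, while the comparative map suppresses the shared map-1 cue and keeps only the discriminative map-0 region.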
|
2501.11310
|
Anomaly Detection for Industrial Applications, Its Challenges,
Solutions, and Future Directions: A Review
|
cs.CV
|
Anomaly detection from images captured using camera sensors is one of the
mainstream applications at the industrial level. Particularly, it maintains the
quality and optimizes the efficiency in production processes across diverse
industrial tasks, including advanced manufacturing and aerospace engineering.
The traditional anomaly detection workflow is based on manual inspection by human
operators, which is a tedious task. Advances in intelligent automated
inspection systems have revolutionized the Industrial Anomaly Detection (IAD)
process. Recent vision-based approaches can automatically extract, process, and
interpret features using computer vision and align with the goals of automation
in industrial operations. In light of the shift in inspection methodologies,
this survey reviews studies published since 2019, with a specific focus on
vision-based anomaly detection. The components of an IAD pipeline that are
overlooked in existing surveys are presented, including areas related to data
acquisition, preprocessing, learning mechanisms, and evaluation. In addition to
the collected publications, several scientific and industry-related challenges
and their perspective solutions are highlighted. Popular and relevant
industrial datasets are also summarized, providing further insight into
inspection applications. Finally, future directions of vision-based IAD are
discussed, offering researchers insight into the state-of-the-art of industrial
inspection.
|
2501.11311
|
A2SB: Audio-to-Audio Schrodinger Bridges
|
cs.SD cs.LG eess.AS
|
Audio in the real world may be perturbed due to numerous factors, causing the
audio quality to be degraded. The following work presents an audio restoration
model tailored for high-res music at 44.1kHz. Our model, Audio-to-Audio
Schrodinger Bridges (A2SB), is capable of both bandwidth extension (predicting
high-frequency components) and inpainting (re-generating missing segments).
Critically, A2SB is end-to-end without need of a vocoder to predict waveform
outputs, able to restore hour-long audio inputs, and trained on permissively
licensed music data. A2SB is capable of achieving state-of-the-art bandwidth
extension and inpainting quality on several out-of-distribution music test
sets. Our demo website is https://research.nvidia.com/labs/adlr/A2SB/.
|
2501.11313
|
Asymptotically Optimal Aperiodic and Periodic Sequence Sets with Low
Ambiguity Zone Through Locally Perfect Nonlinear Functions
|
cs.IT math.IT
|
Low ambiguity zone (LAZ) sequences play a crucial role in modern integrated
sensing and communication (ISAC) systems. In this paper, we introduce a novel
class of functions known as locally perfect nonlinear functions (LPNFs). By
utilizing LPNFs and interleaving techniques, we propose three new classes of
both periodic and aperiodic LAZ sequence sets with flexible parameters. The
proposed periodic LAZ sequence sets are asymptotically optimal in relation to
the periodic Ye-Zhou-Liu-Fan-Lei-Tang bound. Notably, the aperiodic LAZ
sequence sets also asymptotically satisfy the aperiodic
Ye-Zhou-Liu-Fan-Lei-Tang bound, marking the first construction in the
literature. Finally, we demonstrate that the proposed sequence sets are
cyclically distinct.
|
2501.11318
|
Nested Annealed Training Scheme for Generative Adversarial Networks
|
cs.CV cs.LG
|
Recently, researchers have proposed many deep generative models, including
generative adversarial networks (GANs) and denoising diffusion models. Although
significant breakthroughs have been made and empirical success has been
achieved with the GAN, its mathematical underpinnings remain relatively
unknown. This paper focuses on a rigorous mathematical theoretical framework:
the composite-functional-gradient GAN (CFG) [1]. Specifically, we reveal the
theoretical connection between the CFG model and score-based models. We find
that the training objective of the CFG discriminator is equivalent to finding
an optimal D(x). The optimal gradient of D(x) differentiates the integral of
the differences between the score functions of real and synthesized samples.
Conversely, training the CFG generator involves finding an optimal G(x) that
minimizes this difference. In this paper, we aim to derive an annealed weight
preceding the weight of the CFG discriminator. This new explicit theoretical
explanation model is called the annealed CFG method. To overcome the limitation
of the annealed CFG method, which is not readily applicable to SOTA GAN
models, we propose a nested annealed training scheme (NATS). This scheme
keeps the annealed weight from the CFG method and can be seamlessly adapted to
various GAN models, no matter their structural, loss, or regularization
differences. We conduct thorough experimental evaluations on various benchmark
datasets for image generation. The results show that our annealed CFG and NATS
methods significantly improve the quality and diversity of the synthesized
samples. This improvement is clear when comparing the CFG method and the SOTA
GAN models.
|
2501.11319
|
StyleSSP: Sampling StartPoint Enhancement for Training-free
Diffusion-based Method for Style Transfer
|
cs.CV
|
Training-free diffusion-based methods have achieved remarkable success in
style transfer, eliminating the need for extensive training or fine-tuning.
However, due to the lack of targeted training for style information extraction
and constraints on the content image layout, training-free methods often suffer
from layout changes of original content and content leakage from style images.
Through a series of experiments, we discovered that an effective startpoint in
the sampling stage significantly enhances the style transfer process. Based on
this discovery, we propose StyleSSP, which focuses on obtaining a better
startpoint to address layout changes of original content and content leakage
from style image. StyleSSP comprises two key components: (1) Frequency
Manipulation: To improve content preservation, we reduce the low-frequency
components of the DDIM latent, allowing the sampling stage to pay more
attention to the layout of content images; and (2) Negative Guidance via
Inversion: To mitigate the content leakage from style image, we employ negative
guidance in the inversion stage to ensure that the startpoint of the sampling
stage is distanced from the content of style image. Experiments show that
StyleSSP surpasses previous training-free style transfer baselines,
particularly in preserving original content and minimizing the content leakage
from style image.
|
2501.11323
|
Physics-Informed Machine Learning for Efficient Reconfigurable
Intelligent Surface Design
|
cs.LG eess.SP physics.app-ph stat.ML
|
Reconfigurable intelligent surface (RIS) is a two-dimensional periodic
structure integrated with a large number of reflective elements, which can
manipulate electromagnetic waves in a digital way, offering great potential
for wireless communication and radar detection applications. However,
conventional RIS designs rely heavily on extensive full-wave EM simulations that
are extremely time-consuming. To address this challenge, we propose a
machine-learning-assisted approach for efficient RIS design. An accurate and
fast model to predict the reflection coefficient of RIS element is developed by
combining a multi-layer perceptron neural network (MLP) and a dual-port
network, which can significantly reduce tedious EM simulations in the network
training. An RIS has been designed in practice based on the proposed method. To
verify the proposed method, the RIS has also been fabricated and measured. The
experimental results are in good agreement with the simulation results, which
validates the efficacy of the proposed method in RIS design.
|
2501.11325
|
CatV2TON: Taming Diffusion Transformers for Vision-Based Virtual Try-On
with Temporal Concatenation
|
cs.CV cs.AI
|
Virtual try-on (VTON) technology has gained attention due to its potential to
transform online retail by enabling realistic clothing visualization in images
and videos. However, most existing methods struggle to achieve high-quality
results across image and video try-on tasks, especially in long video
scenarios. In this work, we introduce CatV2TON, a simple and effective
vision-based virtual try-on (V2TON) method that supports both image and video
try-on tasks with a single diffusion transformer model. By temporally
concatenating garment and person inputs and training on a mix of image and
video datasets, CatV2TON achieves robust try-on performance across static and
dynamic settings. For efficient long-video generation, we propose an
overlapping clip-based inference strategy that uses sequential frame guidance
and Adaptive Clip Normalization (AdaCN) to maintain temporal consistency with
reduced resource demands. We also present ViViD-S, a refined video try-on
dataset, achieved by filtering back-facing frames and applying 3D mask
smoothing for enhanced temporal consistency. Comprehensive experiments
demonstrate that CatV2TON outperforms existing methods in both image and video
try-on tasks, offering a versatile and reliable solution for realistic virtual
try-ons across diverse scenarios.
|
2501.11326
|
The "Law" of the Unconscious Contrastive Learner: Probabilistic
Alignment of Unpaired Modalities
|
cs.LG stat.ML
|
While internet-scale data often comes in pairs (e.g., audio/image,
image/text), we often want to perform inferences over modalities unseen
together in the training data (e.g., audio/text). Empirically, this can often
be addressed by learning multiple contrastive embedding spaces between existing
modality pairs, implicitly hoping that unseen modality pairs will end up being
aligned. This theoretical paper proves that this hope is well founded, under
certain assumptions. Starting with the proper Bayesian approach of integrating
out intermediate modalities, we show that directly comparing the
representations of data from unpaired modalities can recover the same
likelihood ratio. Our analysis builds on prior work on the geometry and
probabilistic interpretation of contrastive representations, showing how these
representations can answer many of the same inferences as probabilistic
graphical models. Our analysis suggests two new ways of using contrastive
representations: in settings with pre-trained contrastive models, and for
handling language ambiguity in reinforcement learning. Our numerical
experiments study the importance of our assumptions and demonstrate these new
applications.
|
2501.11333
|
A Dynamic Improvement Framework for Vehicular Task Offloading
|
eess.SY cs.NI cs.SY
|
In this paper, task offloading from vehicles with random velocities is
optimized via a novel dynamic improvement framework. Particularly, in a
vehicular network with multiple vehicles and base stations (BSs), computing
tasks of vehicles are offloaded via BSs to an edge server. Due to the random
velocities, the exact trajectories of vehicles cannot be predicted in advance.
Hence, instead of deterministic optimization, the cell association, uplink time
and throughput allocation of multiple vehicles in a period of task offloading
are formulated as a finite-horizon Markov decision process. In the proposed
solution framework, we first obtain a reference scheduling scheme of cell
association, uplink time and throughput allocation via deterministic
optimization at the very beginning. The reference scheduling scheme is then
used to approximate the value functions of the Bellman equations, and the
actual scheduling action is determined in each time slot according to the
current system state and approximate value functions. Thus, the intensive
computation for value iteration in the conventional solution is eliminated.
Moreover, a non-trivial average cost upper bound is provided for the proposed
solution framework. In the simulation, the random trajectories of vehicles are
generated from a high-fidelity traffic simulator. It is shown that the
performance gain of the proposed scheduling framework over the baselines is
significant.
|
2501.11335
|
Few-shot Policy (de)composition in Conversational Question Answering
|
cs.CL cs.AI
|
The task of policy compliance detection (PCD) is to determine if a scenario
is in compliance with respect to a set of written policies. In a conversational
setting, the results of PCD can indicate if clarifying questions must be asked
to determine compliance status. Existing approaches usually claim to have
reasoning capabilities that are latent or require a large amount of annotated
data. In this work, we propose logical decomposition for policy compliance
(LDPC): a neuro-symbolic framework to detect policy compliance using large
language models (LLMs) in a few-shot setting. By selecting only a few exemplars
alongside recently developed prompting techniques, we demonstrate that our
approach soundly reasons about policy compliance conversations by extracting
sub-questions to be answered, assigning truth values from contextual
information, and explicitly producing a set of logic statements from the given
policies. The formulation of explicit logic graphs can in turn help answer
PCD-related questions with increased transparency and explainability. We apply
this approach to the popular PCD and conversational machine reading benchmark,
ShARC, and show competitive performance with no task-specific finetuning. We
also leverage the inherently interpretable architecture of LDPC to understand
where errors occur, revealing ambiguities in the ShARC dataset and highlighting
the challenges involved with reasoning for conversational question answering.
|
2501.11338
|
Driver Behavior Soft-Sensor Based on Neurofuzzy Systems and Weighted
Projection on Principal Components
|
eess.SY cs.SY
|
This work has as main objective the development of a soft-sensor to classify,
in real time, the behaviors of drivers when they are at the controls of a
vehicle. Efficient classification of drivers' behavior while driving, using
only the measurements of the sensors already incorporated in the vehicles and
without the need to add extra hardware (smart phones, cameras, etc.), is a
challenge. The main advantage of using only the central data signals of modern
vehicles is economic: classifying driving behavior and warning the driver of
dangerous behaviors without adding extra hardware (and its software) to the
vehicle would allow these classifiers to be integrated directly into current
vehicles without increasing manufacturing cost, and would therefore be an added
value. In this work, the classification is obtained based only on speed,
acceleration and inertial measurements which are already present in many modern
vehicles. The proposed algorithm is based on a structure made of several
neurofuzzy systems combined with data projected onto the components of several
Principal Component Analyses. A comparison with several types of classical
classification algorithms has been made.
|
2501.11340
|
GenVidBench: A Challenging Benchmark for Detecting AI-Generated Video
|
cs.CV
|
The rapid advancement of video generation models has made it increasingly
challenging to distinguish AI-generated videos from real ones. This issue
underscores the urgent need for effective AI-generated video detectors to
prevent the dissemination of false information through such videos. However,
the development of high-performance generative video detectors is currently
impeded by the lack of large-scale, high-quality datasets specifically designed
for generative video detection. To this end, we introduce GenVidBench, a
challenging AI-generated video detection dataset with several key advantages:
1) Cross Source and Cross Generator: The cross-generation source mitigates the
interference of video content on the detection. The cross-generator ensures
diversity in video attributes between the training and test sets, preventing
them from being overly similar. 2) State-of-the-Art Video Generators: The
dataset includes videos from 8 state-of-the-art AI video generators, ensuring
that it covers the latest advancements in the field of video generation. 3)
Rich Semantics: The videos in GenVidBench are analyzed from multiple dimensions
and classified into various semantic categories based on their content. This
classification ensures that the dataset is not only large but also diverse,
aiding in the development of more generalized and effective detection models.
We conduct a comprehensive evaluation of different advanced video generators
and present a challenging setting. Additionally, we present rich experimental
results including advanced video classification models as baselines. With the
GenVidBench, researchers can efficiently develop and evaluate AI-generated
video detection models. Datasets and code are available at
https://genvidbench.github.io.
|
2501.11341
|
Lee and Seung (2000)'s Algorithms for Non-negative Matrix Factorization:
A Supplementary Proof Guide
|
math.NA cs.LG cs.NA
|
Lee and Seung (2000) introduced numerical solutions for non-negative matrix
factorization (NMF) using iterative multiplicative update algorithms. These
algorithms have been actively utilized as dimensionality reduction tools for
high-dimensional non-negative data and learning algorithms for artificial
neural networks. Despite a considerable amount of literature on the
applications of the NMF algorithms, detailed explanations about their
formulation and derivation are lacking. This report provides supplementary
details to help understand the formulation and derivation of the proofs as used
in the original paper.
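For reference, the two multiplicative update rules from the original paper (for the Frobenius-norm objective $\|V - WH\|_F^2$) can be sketched in a few lines of NumPy; the small epsilon and the random initialization are implementation choices, not part of the derivation.

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=1000, eps=1e-9, seed=0):
    """Lee & Seung (2000) multiplicative updates minimizing ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# Non-negative test matrix with exact rank-2 structure.
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 15))
W, H = nmf_multiplicative(V, r=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")
```

Because the updates are multiplicative and the factors start positive, W and H remain non-negative throughout.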
|
2501.11342
|
Disentangled Modeling of Preferences and Social Influence for Group
Recommendation
|
cs.IR
|
Group recommendation (GR) aims to suggest items to a group of users in
social networks. Existing work typically considers individual preferences as
the sole factor when aggregating group preferences. In practice, social
influence is also an important factor in modeling users' contributions to the
final group decision. However, existing methods either neglect the social influence of
individual members or bundle preferences and social influence together as a
unified representation. As a result, these models emphasize the preferences of
the majority within the group rather than the actual interaction items, which
we refer to as the preference bias issue in GR. Moreover, the self-supervised
learning (SSL) strategies they designed to address the issue of group data
sparsity fail to account for users' contextual social weights when regulating
group representations, leading to suboptimal results. To tackle these issues,
we propose a novel model based on Disentangled Modeling of Preferences and
Social Influence for Group Recommendation (DisRec). Concretely, we first design
a user-level disentangling network to disentangle the preferences and social
influence of group members with separate embedding propagation schemes based on
(hyper)graph convolution networks. We then introduce a social-based contrastive
learning strategy, selectively excluding user nodes based on their social
importance to enhance group representations and alleviate the group-level data
sparsity issue. The experimental results demonstrate that our model
significantly outperforms state-of-the-art methods on two real-world datasets.
|
2501.11347
|
EndoChat: Grounded Multimodal Large Language Model for Endoscopic
Surgery
|
cs.CV
|
Recently, Multimodal Large Language Models (MLLMs) have demonstrated their
immense potential in computer-aided diagnosis and decision-making. In the
context of robotic-assisted surgery, MLLMs can serve as effective tools for
surgical training and guidance. However, there is still a lack of MLLMs
specialized for surgical scene understanding in clinical applications. In this
work, we introduce EndoChat to address various dialogue paradigms and subtasks
in surgical scene understanding that surgeons encounter. To train our EndoChat,
we construct the Surg-396K dataset through a novel pipeline that systematically
extracts surgical information and generates structured annotations based on
collected large-scale endoscopic surgery datasets. Furthermore, we introduce a
multi-scale visual token interaction mechanism and a visual contrast-based
reasoning mechanism to enhance the model's representation learning and
reasoning capabilities. Our model achieves state-of-the-art performance across
five dialogue paradigms and eight surgical scene understanding tasks.
Additionally, we conduct evaluations with professional surgeons, most of whom
provide positive feedback on collaborating with EndoChat. Overall, these
results demonstrate that our EndoChat has great potential to significantly
advance training and automation in robotic-assisted surgery.
|
2501.11350
|
Adaptive parameters identification for nonlinear dynamics using deep
permutation invariant networks
|
cs.LG
|
The promising outcomes of dynamical system identification techniques, such as
SINDy [Brunton et al. 2016], highlight their advantages in providing
qualitative interpretability and extrapolation compared to non-interpretable
deep neural networks [Rudin 2019]. However, these techniques struggle to
update parameters in real-time use cases, especially when the system
parameters are likely to change during or between processes. Recently, the OASIS [Bhadriraju
et al. 2020] framework introduced a data-driven technique to address the
limitations of real-time dynamical system parameters updating, yielding
interesting results. Nevertheless, we show in this work that superior
performance can be achieved using more advanced model architectures. We present
an innovative encoding approach, based mainly on the use of Set Encoding
methods for sequence data, which gives accurate adaptive model identification
for complex dynamic systems with variable input time series lengths. Two Set
Encoding methods are used: the first is Deep Set [Zaheer et al. 2017], and the
second is Set Transformer [Lee et al. 2019]. Comparing the Set Transformer to
the OASIS framework on Lotka-Volterra for real-time local dynamical system
identification and time series forecasting, we find that the Set Transformer
architecture is well adapted to learning relationships within data sets. We
then compare the two Set Encoding methods on the Lorenz system for online
global dynamical system identification. Finally, we train a Deep Set model to
perform identification and characterization of abnormalities for a 1D
heat-transfer problem.
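The permutation-invariance property that makes Set Encoding suitable for variable-length inputs can be illustrated with a minimal, untrained Deep Set sketch, computing f(X) = rho(sum_i phi(x_i)); the weights and layer sizes below are arbitrary placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained, illustrative weights; the layer sizes are arbitrary choices.
W_phi = rng.standard_normal((3, 8))      # per-element encoder phi
W_rho = rng.standard_normal((8, 4))      # set-level decoder rho

def deep_set(X):
    """Permutation-invariant encoding: rho(sum_i phi(x_i))."""
    phi = np.tanh(X @ W_phi)             # apply phi to every set element
    pooled = phi.sum(axis=0)             # sum pooling discards ordering
    return np.tanh(pooled @ W_rho)

X = rng.standard_normal((5, 3))          # a set of 5 elements, 3 features each
out1 = deep_set(X)
out2 = deep_set(X[rng.permutation(5)])   # same set, shuffled order
print(np.allclose(out1, out2))           # permutation leaves output unchanged
```

Because the pooling is a sum, the encoding also accepts sets of any size, which is what allows variable input time series lengths.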
|
2501.11351
|
Automatic Labelling & Semantic Segmentation with 4D Radar Tensors
|
cs.CV eess.SP
|
In this paper, an automatic labelling process is presented for automotive
datasets, leveraging complementary information from LiDAR and camera. The
generated labels are then used as ground truth with the corresponding 4D radar
data as inputs to a proposed semantic segmentation network, to associate a
class label to each spatial voxel. Promising results are shown by applying both
approaches to the publicly shared RaDelft dataset, with the proposed network
achieving over 65% of the LiDAR detection performance, improving the vehicle
detection probability by 13.2%, and reducing the Chamfer distance by 0.54 m,
compared to variants inspired by the literature.
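Chamfer distance has several common variants (mean vs. sum, squared vs. unsquared distances); the sketch below uses the symmetric mean-of-nearest-neighbour form, which may differ from the paper's exact definition, and the point clouds are synthetic stand-ins.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point clouds P (N,3) and Q (M,3):
    mean nearest-neighbour distance from P to Q plus from Q to P."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
lidar = rng.uniform(0, 50, size=(200, 3))          # reference detections
radar = lidar + rng.normal(0, 0.2, size=(200, 3))  # noisy predictions
print(f"Chamfer distance: {chamfer_distance(lidar, radar):.3f} m")
```

A lower value means the predicted point cloud sits closer to the reference, which is why a 0.54 m reduction indicates improvement.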
|
2501.11353
|
Accelerating Data Access for Single Node in Distributed Storage Systems
via MDS Codes
|
cs.IT math.IT
|
Maximum distance separable (MDS) array codes are widely employed in modern
distributed storage systems to provide high data reliability with small storage
overhead. Compared with the data access latency of the entire file, the data
access latency of a single node in a distributed storage system is equally
important. In this paper, we propose two algorithms to effectively reduce the
data access latency on a single node in different scenarios for MDS codes. We
show theoretically that our algorithms have an expected reduction ratio of
$\frac{(n-k)(n-k+1)}{n(n+1)}$ and $\frac{n-k}{n}$ for the data access latency
of a single node when it obeys uniform distribution and shifted-exponential
distribution, respectively, where $n$ is the total number of nodes and $k$ is
the number of data nodes. In the worst-case analysis, we show that
our algorithms have a reduction ratio of more than $60\%$ when $(n,k)=(3,2)$.
Furthermore, in simulation experiments, we use the Monte Carlo simulation
algorithm to demonstrate less data access latency compared with the baseline
algorithm.
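The expected reduction ratio can be checked with a small Monte Carlo sketch, under our assumption (not necessarily the paper's exact algorithm) that a single-node read takes the faster of direct access and MDS decoding from $k$ of the remaining $n-1$ nodes; for $(n,k)=(3,2)$ with uniform latencies this matches the stated ratio $\frac{(n-k)(n-k+1)}{n(n+1)} = \frac{1}{6}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 3, 2, 200_000

# Node latencies i.i.d. Uniform(0, 1); node 0 holds the requested data.
lat = rng.uniform(0, 1, size=(trials, n))
baseline = lat[:, 0]                               # direct read of node 0
# MDS property: node 0's block is recoverable from any k of the other nodes,
# so decoding finishes once the k-th fastest of the remaining n-1 responds.
decode = np.sort(lat[:, 1:], axis=1)[:, k - 1]
best = np.minimum(baseline, decode)                # take the faster route

reduction = 1 - best.mean() / baseline.mean()
expected = (n - k) * (n - k + 1) / (n * (n + 1))   # 1/6 for (n, k) = (3, 2)
print(f"simulated {reduction:.3f} vs theoretical {expected:.3f}")
```
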
|
2501.11354
|
Towards Advancing Code Generation with Large Language Models: A Research
Roadmap
|
cs.SE cs.AI
|
Recently, we have witnessed the rapid development of large language models,
which have demonstrated excellent capabilities in the downstream task of code
generation. However, despite their potential, LLM-based code generation still
faces numerous technical and evaluation challenges, particularly when embedded
in real-world development. In this paper, we present our vision for current
research directions, and provide an in-depth analysis of existing studies on
this task. We propose a six-layer vision framework that categorizes the code
generation process into distinct phases, namely the Input Phase, Orchestration
Phase, Development Phase, and Validation Phase. Additionally, we outline our
vision workflow, which reflects on the currently prevalent frameworks. We
systematically analyse the challenges faced by large language models, including
LLM-based agent frameworks, in code generation tasks. Building on this analysis, we
offer various perspectives and actionable recommendations in this area. Our aim
is to provide guidelines for improving the reliability, robustness and
usability of LLM-based code generation systems. Ultimately, this work seeks to
address persistent challenges and to provide practical suggestions for a more
pragmatic LLM-based solution for future code generation endeavors.
|
2501.11357
|
On the Dimension of Pullback Attractors in Recurrent Neural Networks
|
math.DS cs.AI cs.LG
|
Recurrent Neural Networks (RNNs) are high-dimensional state space models
capable of learning functions on sequence data. Recently, it has been
conjectured that reservoir computers, a particular class of RNNs, trained on
observations of a dynamical system can be interpreted as embeddings. This
result has been established for the case of linear reservoir systems. In this
work, we use a nonautonomous dynamical systems approach to establish an upper
bound on the fractal dimension of the subset of reservoir state space
approximated during the training and prediction phases. We prove that when the
input sequence comes from an Nin-dimensional invertible dynamical system, the
fractal dimension of this set is bounded above by Nin. The results obtained
here are useful for dimensionality reduction of computation in RNNs, as well
as for estimating fractal dimensions of dynamical systems from limited observations of
their time series. It is also a step towards understanding embedding properties
of reservoir computers.
|
2501.11360
|
Federated Learning with Sample-level Client Drift Mitigation
|
cs.LG cs.AI
|
Federated Learning (FL) suffers from severe performance degradation due to
the data heterogeneity among clients. Existing works reveal that the
fundamental reason is that data heterogeneity can cause client drift where the
local model update deviates from the global one, and thus they usually tackle
this problem from the perspective of calibrating the obtained local update.
Despite effectiveness, existing methods substantially lack a deep understanding
of how heterogeneous data samples contribute to the formation of client drift.
In this paper, we bridge this gap by identifying that the drift can be viewed
as a cumulative manifestation of the biases present in all local samples, and
that these biases differ across samples. Moreover, the biases change
dynamically as the FL training progresses. Motivated by this, we propose
FedBSS, which first
mitigates the heterogeneity issue in a sample-level manner, orthogonal to
existing methods. Specifically, the core idea of our method is to adopt a
bias-aware sample selection scheme that dynamically selects samples from small
to large bias, epoch by epoch, to progressively train the local model in each
round. To ensure the stability of training, we set the
diversified knowledge acquisition stage as the warm-up stage to avoid the local
optimality caused by knowledge deviation in the early stage of the model.
Evaluation results show that FedBSS outperforms state-of-the-art baselines. In
addition, we also achieve effective results in feature-distribution-skew and
noisy-label dataset settings, which shows that FedBSS not only reduces
heterogeneity but also offers scalability and robustness.
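One possible reading of the bias-aware selection schedule is sketched below; the random bias scores and the linear growth of the selection pool are our illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_epochs = 100, 5

# Hypothetical per-sample bias scores; the paper derives these from the
# client drift, here they are random stand-ins for illustration.
bias = rng.random(n_samples)
order = np.argsort(bias)                 # small-bias samples first

schedule = []
for epoch in range(1, n_epochs + 1):
    # Curriculum: grow the training pool from small to large bias.
    selected = order[: n_samples * epoch // n_epochs]
    schedule.append((len(selected), bias[selected].max()))
    print(f"epoch {epoch}: {len(selected)} samples, "
          f"max bias {bias[selected].max():.2f}")
```

By the final epoch every sample participates, so the schedule only controls when the high-bias samples enter training, not whether they do.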
|