| id | title | categories | abstract |
|---|---|---|---|
2501.12194
|
An End-to-End Approach for Korean Wakeword Systems with Speaker
Authentication
|
cs.SD cs.AI cs.LG eess.AS
|
Wakeword detection plays a critical role in enabling AI assistants to listen
to user voices and interact effectively. However, for languages other than
English, there is a significant lack of pre-trained wakeword models.
Additionally, systems that merely determine the presence of a wakeword can pose
serious privacy concerns. In this paper, we propose an end-to-end approach that
trains wakewords for non-English languages, particularly Korean, and uses this
to develop a voice authentication model that protects user privacy. Our
implementation employs the open-source platform OpenWakeWord, which performs
wakeword detection using an FCN (Fully-Connected Network) architecture. Once a
wakeword is detected, our custom-developed code calculates cosine similarity
for robust user authentication. Experimental results demonstrate the
effectiveness of our approach, achieving Equal Error Rates (EER) of 16.79% for
wakeword detection and 6.6% for voice authentication. These findings highlight
the model's potential to provide secure and accurate wakeword detection and
authentication for Korean users.
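The cosine-similarity authentication step described above can be sketched as follows; the embedding vectors, the 0.7 acceptance threshold, and the function names are illustrative assumptions, not the paper's implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two speaker-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authenticate(enrolled, probe, threshold=0.7):
    """Accept the probe utterance if its embedding is close enough to the
    enrolled voiceprint; the threshold would be tuned to the desired EER."""
    return cosine_similarity(enrolled, probe) >= threshold
```

In practice the embeddings would come from a speaker-encoder network run after the wakeword fires; only the comparison logic is shown here.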
|
2501.12198
|
Opinion dynamics in bounded confidence models with manipulative agents:
Moving the Overton window
|
cs.SI physics.soc-ph stat.AP
|
This paper focuses on opinion dynamics under the influence of manipulative
agents. Such agents are characterized by opinions that follow a trajectory
independent of the model's dynamics, while still influencing the remaining
normal agents. Simulations were carried out to study how a single manipulative
group modifies the natural dynamics of several bounded confidence opinion
models. We examine which strategies, based on the number of manipulative agents
and their common opinion trajectory, a manipulative group can employ to
influence normal agents and attract them to its opinions. In certain weighted
models, some
effects are observed in which normal agents move in the opposite direction to
the manipulator group. Moreover, the conditions which ensure the influence of a
manipulative group on a group of normal agents over time are also established
for the Hegselmann-Krause model.
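A minimal sketch of one Hegselmann-Krause update with a single manipulative agent whose opinion is externally prescribed (the caller supplies the manipulator's opinion at each step, so any trajectory can be driven); the confidence bound and the values below are illustrative assumptions:

```python
def hk_step(opinions, manipulator, eps=0.2):
    """One Hegselmann-Krause update: each normal agent moves to the average
    of all opinions (including the manipulator's) within confidence bound eps.
    The manipulator's own opinion is not updated by the dynamics."""
    everyone = opinions + [manipulator]
    new = []
    for x in opinions:
        neighbors = [y for y in everyone if abs(y - x) <= eps]
        new.append(sum(neighbors) / len(neighbors))
    return new
```

Iterating `hk_step` while sliding `manipulator` slowly lets one reproduce the "moving the Overton window" effect: agents within the confidence bound are dragged along the manipulator's trajectory.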
|
2501.12199
|
Experience-replay Innovative Dynamics
|
cs.LG cs.GT cs.MA
|
Despite its groundbreaking success, multi-agent reinforcement learning (MARL)
still suffers from instability and nonstationarity. Replicator dynamics, the
most well-known model from evolutionary game theory (EGT), provide a
theoretical framework for the convergence of the trajectories to Nash
equilibria and, as a result, have been used to ensure formal guarantees for
MARL algorithms in stable game settings. However, they exhibit the opposite
behavior in other settings, which poses the problem of finding alternatives to
ensure convergence. In contrast, innovative dynamics, such as the Brown-von
Neumann-Nash (BNN) or Smith, result in periodic trajectories with the potential
to approximate Nash equilibria. Yet, no MARL algorithms based on these dynamics
have been proposed. In response to this challenge, we develop a novel
experience replay-based MARL algorithm that incorporates revision protocols as
tunable hyperparameters. We demonstrate, by appropriately adjusting the
revision protocols, that the behavior of our algorithm mirrors the trajectories
resulting from these dynamics. Importantly, our contribution provides a
framework capable of extending the theoretical guarantees of MARL algorithms
beyond replicator dynamics. Finally, we corroborate our theoretical findings
with empirical results.
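The Brown-von Neumann-Nash (BNN) dynamics mentioned above can be sketched as a simple Euler step on a mixed strategy; the step size and payoff vector are illustrative, and the paper's experience-replay algorithm is not reproduced here:

```python
def bnn_step(x, payoffs, dt=0.01):
    """One Euler step of BNN dynamics on a mixed strategy x:
    xdot_i = gamma_i - x_i * sum_j gamma_j, with excess payoff
    gamma_i = max(u_i - ubar, 0) and ubar the average payoff under x."""
    ubar = sum(p * xi for p, xi in zip(payoffs, x))
    excess = [max(p - ubar, 0.0) for p in payoffs]
    total = sum(excess)
    return [xi + dt * (g - xi * total) for xi, g in zip(x, excess)]
```

Note that the update preserves the simplex (the components still sum to one), which is the property that makes such revision protocols usable as tunable components of an MARL algorithm.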
|
2501.12202
|
Hunyuan3D 2.0: Scaling Diffusion Models for High Resolution Textured 3D
Assets Generation
|
cs.CV
|
We present Hunyuan3D 2.0, an advanced large-scale 3D synthesis system for
generating high-resolution textured 3D assets. This system includes two
foundation components: a large-scale shape generation model -- Hunyuan3D-DiT,
and a large-scale texture synthesis model -- Hunyuan3D-Paint. The shape
generative model, built on a scalable flow-based diffusion transformer, aims to
create geometry that properly aligns with a given condition image, laying a
solid foundation for downstream applications. The texture synthesis model,
benefiting from strong geometric and diffusion priors, produces high-resolution
and vibrant texture maps for either generated or hand-crafted meshes.
Furthermore, we build Hunyuan3D-Studio -- a versatile, user-friendly production
platform that simplifies the re-creation process of 3D assets. It allows both
professional and amateur users to manipulate or even animate their meshes
efficiently. We systematically evaluate our models, showing that Hunyuan3D 2.0
outperforms previous state-of-the-art models, both open-source and
closed-source, in geometry detail, condition alignment, and texture quality.
Hunyuan3D 2.0 is publicly released in order to fill the gaps
in the open-source 3D community for large-scale foundation generative models.
The code and pre-trained weights of our models are available at:
https://github.com/Tencent/Hunyuan3D-2
|
2501.12203
|
Explainability for Vision Foundation Models: A Survey
|
cs.CV
|
As artificial intelligence systems become increasingly integrated into daily
life, the field of explainability has gained significant attention. This trend
is particularly driven by the complexity of modern AI models and their
decision-making processes. The advent of foundation models, characterized by
their extensive generalization capabilities and emergent uses, has further
complicated this landscape. Foundation models occupy an ambiguous position in
the explainability domain: their complexity makes them inherently challenging
to interpret, yet they are increasingly leveraged as tools to construct
explainable models. In this survey, we explore the intersection of foundation
models and eXplainable AI (XAI) in the vision domain. We begin by compiling a
comprehensive corpus of papers that bridge these fields. Next, we categorize
these works based on their architectural characteristics. We then discuss the
challenges faced by current research in integrating XAI within foundation
models. Furthermore, we review common evaluation methodologies for these
combined approaches. Finally, we present key observations and insights from our
survey, offering directions for future research in this rapidly evolving field.
|
2501.12204
|
Score Combining for Contrastive OOD Detection
|
cs.LG
|
In out-of-distribution (OOD) detection, one is asked to classify whether a
test sample comes from a known inlier distribution or not. We focus on the case
where the inlier distribution is defined by a training dataset and there exists
no additional knowledge about the novelties that one is likely to encounter.
This problem is also referred to as novelty detection, one-class
classification, and unsupervised anomaly detection. The current literature
suggests that contrastive learning techniques are state-of-the-art for OOD
detection. We aim to improve on those techniques by combining/ensembling their
scores using the framework of null hypothesis testing and, in particular, a
novel generalized likelihood ratio test (GLRT). We demonstrate that our
proposed GLRT-based technique outperforms the state-of-the-art CSI and SupCSI
techniques from Tack et al. 2020 in dataset-vs-dataset experiments with
CIFAR-10, SVHN, LSUN, ImageNet, and CIFAR-100, as well as leave-one-class-out
experiments with CIFAR-10. We also demonstrate that our GLRT outperforms the
score-combining methods of Fisher, Bonferroni, Simes, Benjamini-Hochberg, and
Stouffer in our application.
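Two of the classical score-combining baselines named above, Fisher's and Stouffer's methods, can be sketched generically as p-value combination rules; this is a textbook sketch, not the paper's GLRT:

```python
import math
from statistics import NormalDist

def fisher_combine(pvals):
    """Fisher's method: T = -2 * sum(log p_i), distributed as chi-squared
    with 2k degrees of freedom under the null hypothesis."""
    return -2.0 * sum(math.log(p) for p in pvals)

def stouffer_combine(pvals):
    """Stouffer's method: average the z-scores z_i = Phi^{-1}(1 - p_i)
    and rescale so the combined statistic is standard normal under H0."""
    nd = NormalDist()
    z = [nd.inv_cdf(1.0 - p) for p in pvals]
    return sum(z) / math.sqrt(len(z))
```

Smaller p-values yield larger combined statistics under both rules, so either can be thresholded as an ensemble OOD score.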
|
2501.12206
|
Fixing Imbalanced Attention to Mitigate In-Context Hallucination of
Large Vision-Language Model
|
cs.CV cs.CL
|
Large Vision Language Models (LVLMs) have demonstrated remarkable
capabilities in understanding and describing visual content, achieving
state-of-the-art performance across various vision-language tasks. However,
these models frequently exhibit hallucination behavior, where they generate
descriptions containing objects or details absent in the input image. Our work
investigates this phenomenon by analyzing attention patterns across transformer
layers and heads, revealing that hallucinations often stem from progressive
degradation of visual grounding in deeper layers. We propose a novel attention
modification approach that combines selective token emphasis and head-specific
modulation to maintain visual grounding throughout the generation process. Our
method introduces two key components: (1) a dual-stream token selection
mechanism that identifies and prioritizes both locally informative and
spatially significant visual tokens, and (2) an attention head-specific
modulation strategy that differentially amplifies visual information processing
based on measured visual sensitivity of individual attention heads. Through
extensive experimentation on the MSCOCO dataset, we demonstrate that our
approach reduces hallucination rates by up to 62.3% compared to baseline
models while maintaining comparable task performance. Our analysis reveals that
selectively modulating tokens across attention heads with varying levels of
visual sensitivity can significantly improve visual grounding without requiring
model retraining.
|
2501.12208
|
Community Discovery Algorithm Based on Spatio-temporal Graph Embedding
in Dynamic Social Networks
|
cs.SI
|
Community discovery is one of the key issues in the study of dynamic social
networks. Traditional community discovery algorithms only focus on the
establishment and disconnection of connections between nodes, failing to
capture deeper factors. To address this limitation, in this work, we propose a
community discovery algorithm based on spatiotemporal graph embedding
(CDA-SGE), which integrates spatial information and evolutions of nodes to
comprehensively capture the dynamic features of networks. Specifically, this
algorithm employs Graph Convolutional Neural Networks (GCN) to aggregate latent
spatial information, effectively representing the embedding of nodes in space.
Temporal evolutions of the nodes are then modeled using Gated Recurrent Units
(GRU), thereby solving problems such as node dynamism and relationship
transmission. Finally, a Self-Organizing Map (SOM) is applied to cluster
dynamic network representations and identify community affiliations of nodes.
We then perform simulations on four types of dynamic networks and show that the
CDA-SGE outperforms traditional community discovery algorithms in terms of
purity, normalized mutual information, heterogeneity, and homogeneity. These
results demonstrate the algorithm's superior ability to accurately uncover
community structures hidden in dynamic social networks.
|
2501.12212
|
Quantitative Error Bounds for Scaling Limits of Stochastic Iterative
Algorithms
|
stat.ML cs.LG math.PR math.ST stat.TH
|
Stochastic iterative algorithms, including stochastic gradient descent (SGD)
and stochastic gradient Langevin dynamics (SGLD), are widely utilized for
optimization and sampling in large-scale and high-dimensional problems in
machine learning, statistics, and engineering. Numerous works have bounded the
parameter error in, and characterized the uncertainty of, these approximations.
One common approach has been to use scaling limit analyses to relate the
distribution of algorithm sample paths to a continuous-time stochastic process
approximation, particularly in asymptotic setups. Focusing on the univariate
setting, in this paper, we build on previous work to derive non-asymptotic
functional approximation error bounds between the algorithm sample paths and
the Ornstein-Uhlenbeck approximation using an infinite-dimensional version of
Stein's method of exchangeable pairs. We show that this bound implies weak
convergence under modest additional assumptions and leads to a bound on the
error of the variance of the iterate averages of the algorithm. Furthermore, we
use our main result to construct error bounds in terms of two common metrics:
the Lévy-Prokhorov and bounded Wasserstein distances. Our results provide a
foundation for developing similar error bounds for the multivariate setting and
for more sophisticated stochastic approximation algorithms.
|
2501.12214
|
Improving robot understanding using conversational AI: demonstration and
feasibility study
|
cs.RO cs.HC
|
Explanations constitute an important aspect of successful human robot
interactions and can enhance robot understanding. To improve the understanding
of the robot, we have developed four levels of explanation (LOE) based on two
questions: what needs to be explained, and why the robot has made a particular
decision. An understandable robot requires a communicative action when there
is a disparity between the human's mental model of the robot and the robot's
state of mind. This communicative action was generated using a conversational
AI platform. An adaptive dialog was implemented to transition from one LOE to
another. Here, we demonstrate the adaptive dialog in
a collaborative task with errors and provide results of a feasibility study
with users.
|
2501.12215
|
Automatic selection of the best neural architecture for time series
forecasting via multi-objective optimization and Pareto optimality conditions
|
cs.LG
|
Time series forecasting plays a pivotal role in a wide range of applications,
including weather prediction, healthcare, structural health monitoring,
predictive maintenance, energy systems, and financial markets. While models
such as LSTM, GRU, Transformers, and State-Space Models (SSMs) have become
standard tools in this domain, selecting the optimal architecture remains a
challenge. Performance comparisons often depend on evaluation metrics and the
datasets under analysis, making the choice of a universally optimal model
controversial. In this work, we introduce a flexible automated framework for
time series forecasting that systematically designs and evaluates diverse
network architectures by integrating LSTM, GRU, multi-head Attention, and SSM
blocks. Using a multi-objective optimization approach, our framework determines
the number, sequence, and combination of blocks to align with specific
requirements and evaluation objectives. From the resulting Pareto-optimal
architectures, the best model for a given context is selected via a
user-defined preference function. We validate our framework across four
distinct real-world applications. Results show that a single-layer GRU or LSTM
is usually optimal when minimizing training time alone. However, when
maximizing accuracy or balancing multiple objectives, the best architectures
are often composite designs incorporating multiple block types in specific
configurations. By employing a weighted preference function, users can resolve
trade-offs between objectives, revealing novel, context-specific optimal
architectures. Our findings underscore that no single neural architecture is
universally optimal for time series forecasting. Instead, the best-performing
model emerges as a data-driven composite architecture tailored to user-defined
criteria and evaluation objectives.
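The user-defined preference function over Pareto-optimal architectures can be sketched as a weighted scalarization; the model names, objective values (error, training time), and weights below are illustrative assumptions, not the paper's experiments:

```python
def pareto_front(models):
    """Keep models not dominated in (error, train_time); lower is better in both."""
    front = {}
    for name, (err, t) in models.items():
        dominated = any(e2 <= err and t2 <= t and (e2 < err or t2 < t)
                        for n2, (e2, t2) in models.items() if n2 != name)
        if not dominated:
            front[name] = (err, t)
    return front

def pick(models, weights):
    """User-defined preference: minimize a weighted sum over the Pareto front."""
    front = pareto_front(models)
    return min(front, key=lambda n: sum(w * v for w, v in zip(weights, front[n])))
```

Shifting the weights between objectives selects different points on the front, matching the observation that the "best" architecture depends on the chosen preference.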
|
2501.12216
|
RL-RC-DoT: A Block-level RL agent for Task-Aware Video Compression
|
cs.LG cs.CV eess.IV
|
Video encoders optimize compression for human perception by minimizing
reconstruction error under bit-rate constraints. In many modern applications
such as autonomous driving, an overwhelming majority of videos serve as input
for AI systems performing tasks like object recognition or segmentation, rather
than being watched by humans. It is therefore useful to optimize the encoder
for a downstream task instead of for perceptual image quality. However, a major
challenge is how to combine such downstream optimization with existing standard
video encoders, which are highly efficient and popular. Here, we address this
challenge by controlling the Quantization Parameters (QPs) at the macro-block
level to optimize the downstream task. This granular control allows us to
prioritize encoding for task-relevant regions within each frame. We formulate
this optimization problem as a Reinforcement Learning (RL) task, where the
agent learns to balance long-term implications of choosing QPs on both task
performance and bit-rate constraints. Notably, our policy does not require the
downstream task as an input during inference, making it suitable for streaming
applications and edge devices such as vehicles. We demonstrate significant
improvements in two tasks: car detection and ROI (saliency) encoding. Our
approach improves task performance for a given bit rate compared to traditional
task agnostic encoding methods, paving the way for more efficient task-aware
video compression.
|
2501.12217
|
Early Detection and Classification of Breast Cancer Using Deep Learning
Techniques
|
cs.CV cs.LG
|
Breast cancer is among the deadliest cancers, causing a massive number of
deaths annually worldwide according to the WHO. It is a cancer that develops
when breast tissue grows rapidly and uncontrollably. Fatalities can be
prevented if the cancer is detected before it becomes malignant. Artificial
Intelligence and Machine Learning technologies can automate early detection of
breast cancer for the best outcome. In this study, we use the Breast Cancer
Image Classification dataset collected from the Kaggle repository, which
comprises 9,248 breast ultrasound images classified into three categories:
Benign, Malignant, and Normal, referring to non-cancerous, cancerous, and
normal images. This research introduces three pretrained models with custom
classifiers, namely ResNet50, MobileNet, and VGG16, along with a custom CNN
model using the ReLU activation function. ResNet50, MobileNet, VGG16, and the
custom CNN achieved accuracies of 98.41%, 97.91%, 98.19%, and 92.94% on the
dataset, respectively, with ResNet50 achieving the highest accuracy of 98.41%.
This model, with its deep and powerful architecture, is particularly successful
in detecting aberrant cells as well as cancerous and non-cancerous tumors.
These accuracies show that machine learning methods are well suited to the
classification and early detection of breast cancer.
|
2501.12218
|
Exploring Temporally-Aware Features for Point Tracking
|
cs.CV
|
Point tracking in videos is a fundamental task with applications in robotics,
video editing, and more. While many vision tasks benefit from pre-trained
feature backbones to improve generalizability, point tracking has primarily
relied on simpler backbones trained from scratch on synthetic data, which may
limit robustness in real-world scenarios. Additionally, point tracking requires
temporal awareness to ensure coherence across frames, but using
temporally-aware features is still underexplored. Most current methods often
employ a two-stage process: an initial coarse prediction followed by a
refinement stage to inject temporal information and correct errors from the
coarse stage. This approach, however, is computationally expensive and
potentially redundant if the feature backbone itself captures sufficient
temporal information.
In this work, we introduce Chrono, a feature backbone specifically designed
for point tracking with built-in temporal awareness. Leveraging pre-trained
representations from the self-supervised learner DINOv2 and enhanced with a
temporal adapter, Chrono effectively captures long-term temporal context,
enabling precise prediction even without the refinement stage. Experimental
results demonstrate that Chrono achieves state-of-the-art performance in a
refiner-free setting on the TAP-Vid-DAVIS and TAP-Vid-Kinetics datasets,
outperforming common feature backbones used in point tracking as well as
DINOv2, with exceptional efficiency. Project page: https://cvlab-kaist.github.io/Chrono/
|
2501.12222
|
Strong phonon-mediated high temperature superconductivity in
Li$_2$AuH$_6$ under ambient pressure
|
cond-mat.supr-con cond-mat.mtrl-sci cs.AI physics.comp-ph
|
We used our AI search engine, InvDesFlow, to perform extensive
investigations regarding ambient stable superconducting hydrides. A cubic
structure Li$_2$AuH$_6$ with Au-H octahedral motifs is identified to be a
candidate. After performing thermodynamical analysis, we provide a feasible
route to experimentally synthesize this material via the known LiAu and LiH
compounds under ambient pressure. The further first-principles calculations
suggest that Li$_2$AuH$_6$ shows a high superconducting transition temperature
($T_c$) $\sim$ 140 K under ambient pressure. The H-1$s$ electrons couple
strongly with phonon modes from vibrations of the Au-H octahedrons as well as
vibrations of the Li atoms; the latter have been overlooked in previous similar
cases. Hence, in contrast to previous proposals to find high-$T_c$
superconductors by searching for metallic covalent bonds, we emphasize the
importance of phonon modes with strong electron-phonon coupling (EPC). We
suggest that one can intercalate atoms into binary or ternary hydrides to
introduce more potential phonon modes with strong EPC, which is an
effective approach to find high-$T_c$ superconductors within multicomponent
compounds.
|
2501.12224
|
TokenVerse: Versatile Multi-concept Personalization in Token Modulation
Space
|
cs.CV
|
We present TokenVerse -- a method for multi-concept personalization,
leveraging a pre-trained text-to-image diffusion model. Our framework can
disentangle complex visual elements and attributes from as little as a single
image, while enabling seamless plug-and-play generation of combinations of
concepts extracted from multiple images. As opposed to existing works,
TokenVerse can handle multiple images with multiple concepts each, and supports
a wide-range of concepts, including objects, accessories, materials, pose, and
lighting. Our work exploits a DiT-based text-to-image model, in which the input
text affects the generation through both attention and modulation (shift and
scale). We observe that the modulation space is semantic and enables localized
control over complex concepts. Building on this insight, we devise an
optimization-based framework that takes as input an image and a text
description, and finds for each word a distinct direction in the modulation
space. These directions can then be used to generate new images that combine
the learned concepts in a desired configuration. We demonstrate the
effectiveness of TokenVerse in challenging personalization settings, and
showcase its advantages over existing methods. Project webpage:
https://token-verse.github.io/
|
2501.12226
|
CDW-CoT: Clustered Distance-Weighted Chain-of-Thoughts Reasoning
|
cs.LG
|
Large Language Models (LLMs) have recently achieved impressive results in
complex reasoning tasks through Chain of Thought (CoT) prompting. However, most
existing CoT methods rely on using the same prompts, whether manually designed
or automatically generated, to handle the entire dataset. This
one-size-fits-all approach may fail to meet the specific needs arising from the
diversities within a single dataset. To solve this problem, we propose the
Clustered Distance-Weighted Chain of Thought (CDW-CoT) method, which
dynamically constructs prompts tailored to the characteristics of each data
instance by integrating clustering and prompt optimization techniques. Our
method employs clustering algorithms to categorize the dataset into distinct
groups, from which a candidate pool of prompts is selected to reflect the
inherent diversity within the dataset. For each cluster, CDW-CoT trains the
optimal prompt probability distribution tailored to its specific
characteristics. Finally, it dynamically constructs a unique prompt probability
distribution for each test instance, based on its proximity to cluster centers,
from which prompts are selected for reasoning. CDW-CoT consistently outperforms
traditional CoT methods across six datasets, including commonsense, symbolic,
and mathematical reasoning tasks. Specifically, when compared to manual CoT,
CDW-CoT achieves an average accuracy improvement of 25.34% on LLaMA2 (13B) and
15.72% on LLaMA3 (8B).
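The distance-based prompt weighting can be sketched as a softmax over negative distances to the cluster centers; the exact weighting and prompt-selection scheme used by CDW-CoT may differ, and all names and values here are illustrative:

```python
import math

def distance_weighted_mixture(test_point, centers, cluster_dists):
    """Blend per-cluster prompt probability distributions, weighting each
    cluster by a softmax over the negative distance from the test instance
    to that cluster's center (closer clusters get higher weight)."""
    d = [math.dist(test_point, c) for c in centers]
    w = [math.exp(-x) for x in d]
    s = sum(w)
    w = [x / s for x in w]
    k = len(cluster_dists[0])
    return [sum(w[i] * cluster_dists[i][j] for i in range(len(w)))
            for j in range(k)]
```

Each `cluster_dists[i]` is that cluster's trained prompt probability distribution; the mixture returned for a test instance remains a valid probability distribution from which prompts can be sampled.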
|
2501.12227
|
Multi-terminal Strong Coordination over Noisy Channels with Encoder
Cooperation
|
cs.IT math.IT
|
We investigate the problem of strong coordination over a multiple-access
channel (MAC) with cribbing encoders. In this configuration, two encoders
observe independent and identically distributed (i.i.d.) samples of a source
random variable each and encode the inputs to the MAC. The decoder, which
observes the output of the MAC together with side information, must generate
approximately i.i.d. samples of another random variable which is jointly
distributed with the two sources and the side information. We also allow for
possible encoder cooperation, where one of the encoders can non-causally crib
from the other encoder's input. Independent pairwise shared randomness is
assumed between each encoder and the decoder at limited rates. Firstly, in the
presence of cribbing, we derive an achievable region based on joint
source-channel coding. We also prove that in the absence of cribbing, our inner
bound is tight for the special case when the MAC is composed of deterministic
links, and the sources are conditionally independent given the side
information. We then explicitly compute the regions for an example both with
and without cribbing between the encoders, and demonstrate that cribbing
strictly improves upon the achievable region.
|
2501.12231
|
InsTALL: Context-aware Instructional Task Assistance with Multi-modal
Large Language Models
|
cs.CV cs.AI cs.CL
|
The improved competence of generative models can help build multi-modal
virtual assistants that leverage modalities beyond language. By observing
humans performing multi-step tasks, one can build assistants that have
situational awareness of actions and tasks being performed, enabling them to
cater assistance based on this understanding. In this paper, we develop a
Context-aware Instructional Task Assistant with Multi-modal Large Language
Models (InsTALL) that leverages an online visual stream (e.g. a user's screen
share or video recording) and responds in real-time to user queries related to
the task at hand. To enable useful assistance, InsTALL 1) trains a multi-modal
model on task videos and paired textual data, and 2) automatically extracts
a task graph from video data and leverages it at training and inference time. We
show InsTALL achieves state-of-the-art performance across proposed sub-tasks
considered for multimodal activity understanding -- task recognition (TR),
action recognition (AR), next action prediction (AP), and plan prediction (PP)
-- and outperforms existing baselines on two novel sub-tasks related to
automatic error identification.
|
2501.12234
|
Multi-Agent Feedback Motion Planning using Probably Approximately
Correct Nonlinear Model Predictive Control
|
cs.RO
|
For many tasks, multi-robot teams often provide greater efficiency,
robustness, and resiliency. However, multi-robot collaboration in real-world
scenarios poses a number of major challenges, especially when dynamic robots
must balance competing objectives like formation control and obstacle avoidance
in the presence of stochastic dynamics and sensor uncertainty. In this paper,
we propose a distributed, multi-agent receding-horizon feedback motion planning
approach using Probably Approximately Correct Nonlinear Model Predictive
Control (PAC-NMPC) that is able to reason about both model and measurement
uncertainty to achieve robust multi-agent formation control while navigating
cluttered obstacle fields and avoiding inter-robot collisions. Our approach
relies not only on the underlying PAC-NMPC algorithm but also on a terminal
cost function derived from gyroscopic obstacle avoidance. Through numerical
simulation, we show that our distributed approach performs on par with a
centralized formulation, that it offers improved performance in the case of
significant measurement noise, and that it can scale to more complex dynamical
systems.
|
2501.12235
|
DLEN: Dual Branch of Transformer for Low-Light Image Enhancement in Dual
Domains
|
cs.CV eess.IV
|
Low-light image enhancement (LLE) aims to improve the visual quality of
images captured in poorly lit conditions, which often suffer from low
brightness, low contrast, noise, and color distortions. These issues hinder the
performance of computer vision tasks such as object detection, facial
recognition, and autonomous driving. Traditional enhancement techniques, such as
multi-scale fusion and histogram equalization, fail to preserve fine details
and often struggle with maintaining the natural appearance of enhanced images
under complex lighting conditions. Although the Retinex theory provides a
foundation for image decomposition, it often amplifies noise, leading to
suboptimal image quality. In this paper, we propose the Dual Light Enhance
Network (DLEN), a novel architecture that incorporates two distinct attention
mechanisms, considering both spatial and frequency domains. Our model
introduces a learnable wavelet transform module in the illumination estimation
phase, preserving high- and low-frequency components to enhance edge and
texture details. Additionally, we design a dual-branch structure that leverages
the power of the Transformer architecture to enhance both the illumination and
structural components of the image. Through extensive experiments, our model
outperforms state-of-the-art methods on standard benchmarks. Code is available
here: https://github.com/LaLaLoXX/DLEN
|
2501.12236
|
Fast sparse optimization via adaptive shrinkage
|
math.OC cs.LG cs.SY eess.SY
|
The need for fast sparse optimization is emerging, e.g., to deal with
large-dimensional data-driven problems and to track time-varying systems. In
the framework of linear sparse optimization, the iterative
shrinkage-thresholding algorithm is a valuable method to solve Lasso, which is
particularly appreciated for its ease of implementation. Nevertheless, it
converges slowly. In this paper, we develop a proximal method, based on
logarithmic regularization, which turns out to be an iterative
shrinkage-thresholding algorithm with adaptive shrinkage hyperparameter. This
adaptivity substantially enhances the trajectory of the algorithm, in a way
that yields faster convergence, while keeping the simplicity of the original
method. Our contribution is twofold: on the one hand, we derive and analyze the
proposed algorithm; on the other hand, we validate its fast convergence via
numerical experiments and we discuss the performance with respect to
state-of-the-art algorithms.
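The underlying iterative shrinkage-thresholding update can be sketched as follows; the paper's contribution, the adaptive choice of the shrinkage hyperparameter derived from logarithmic regularization, is not reproduced here, and the fixed `lam` below is a placeholder for it:

```python
def soft_threshold(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def ista_step(x, grad, step, lam):
    """One iterative shrinkage-thresholding update for Lasso: a gradient step
    on the smooth least-squares part, then soft-thresholding with step * lam.
    The paper adapts the shrinkage hyperparameter across iterations."""
    return [soft_threshold(xi - step * gi, step * lam)
            for xi, gi in zip(x, grad)]
```

Making `lam` iteration-dependent (as the paper does) changes only the threshold passed to `soft_threshold`, which is why the method keeps the simplicity of the original algorithm.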
|
2501.12239
|
Investigating Market Strength Prediction with CNNs on Candlestick Chart
Images
|
cs.CV
|
This paper investigates predicting market strength solely from candlestick
chart images to assist investment decisions. The core research problem is
developing an effective computer vision-based model using raw candlestick
visuals without time-series data. We specifically analyze the impact of
incorporating candlestick patterns that were detected by YOLOv8. The study
implements two approaches: pure CNN on chart images and a Decomposer
architecture detecting patterns. Experiments utilize diverse financial datasets
spanning stocks, cryptocurrencies, and forex assets. Key findings demonstrate
that candlestick patterns do not improve model performance over raw image data
in our experiments, illuminating the limitations of candlestick image signals.
Performance peaked at approximately 0.7 accuracy, below more complex
time-series models. Outcomes reveal challenges in distilling sufficient
predictive power from visual shapes alone, motivating the incorporation of
other data modalities. This research clarifies how purely image-based models
can inform trading while confirming patterns add little value over raw charts.
The paper is organized into distinct sections, each providing a standalone
contribution while maintaining cohesive linkage. Note that the examples
discussed herein are not limited to the scope, applicability, or knowledge
outlined in the paper.
|
2501.12243
|
FOCUS: First Order Concentrated Updating Scheme
|
cs.LG cs.CL math.OC
|
Large language models (LLMs) demonstrate remarkable performance, and
improving their pre-training process appears to be key to enhancing their
capabilities further. Based on the documented success of Adam, learning rate
decay, and weight decay, we hypothesize that the pre-training loss landscape
features a narrowing valley structure. Through experiments with synthetic loss
functions, we discover that when gradient query noise is high relative to the
valley's sharpness, Adam's performance falls behind that of Signum because Adam
reduces the effective step size too drastically. This observation led us to
develop FOCUS, an optimizer that enhances Signum by incorporating attraction
toward moving averaged parameters, allowing it to handle noise better while
maintaining larger step sizes. In training GPT-2, FOCUS proves to be more
stable than Signum and faster than Adam. These results suggest that gradient
noise may be an underappreciated limiting factor in LLM training, and FOCUS
offers promising solutions.
|
2501.12244
|
Zero-shot Bias Correction: Efficient MR Image Inhomogeneity Reduction
Without Any Data
|
eess.IV cs.CV
|
In recent years, deep neural networks for image inhomogeneity reduction have
shown promising results. However, current methods with (un)supervised solutions
require preparing a training dataset, which is expensive and laborious for data
collection. In this work, we demonstrate a novel zero-shot deep neural
network that requires no data for pre-training and no dedicated assumption
about the bias field. The designed lightweight CNN enables efficient zero-shot
adaptation for bias-corrupted image correction. Our method offers a novel
solution that mitigates bias corruption through iterative homogeneity
refinement, ensuring the problem can be solved more easily with stable
convergence of the zero-shot optimization. Extensive comparisons on different
datasets show that the proposed method outperforms current data-free N4
methods in both efficiency and accuracy.
|
2501.12245
|
Quality Enhancement of Radiographic X-ray Images by Interpretable
Mapping
|
eess.IV cs.CV
|
X-ray imaging is the most widely used medical imaging modality. However, in
common practice, inconsistency in the initial presentation of X-ray images
is a frequent complaint among radiologists. Different patient positions, patient
habitus and scanning protocols can lead to differences in image presentations,
e.g., differences in brightness and contrast globally or regionally. To
compensate for this, clinical experts must perform additional work to adjust
the images to the desired presentation, which can be time-consuming.
Existing deep-learning-based end-to-end solutions can automatically correct
images with promising performance. Nevertheless, these methods are hard to
interpret and difficult for clinical experts to understand. In this
manuscript, a novel interpretable mapping method by deep learning is proposed,
which automatically enhances the image brightness and contrast globally and
locally. Meanwhile, because the model is inspired by the workflow of the
brightness and contrast manipulation, it can provide interpretable pixel maps
for explaining the motivation of the image enhancement. Experiments on
clinical datasets show that the proposed method provides consistent brightness
and contrast correction on X-ray images, achieving 24.75 dB PSNR and
0.8431 SSIM.
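The reported 24.75 dB figure uses the standard peak signal-to-noise ratio; a minimal sketch of the metric's definition (assuming images normalized to a unit data range):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```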
|
2501.12246
|
Video Deblurring by Sharpness Prior Detection and Edge Information
|
cs.CV
|
Video deblurring is an essential task for autonomous driving, facial
recognition, and security surveillance. Traditional methods directly estimate
motion blur kernels, often introducing artifacts and leading to poor results.
Recent approaches utilize the detection of sharp frames within video sequences
to enhance deblurring. However, existing datasets rely on a fixed number of sharp
frames, which may be too restrictive for some applications and may introduce a
bias during model training. To address these limitations and enhance domain
adaptability, this work first introduces GoPro Random Sharp (GoProRS), a new
dataset where the frequency of sharp frames within the sequence is
customizable, allowing more diverse training and testing scenarios.
Furthermore, it presents a novel video deblurring model, called SPEINet, that
integrates sharp frame features into blurry frame reconstruction through an
attention-based encoder-decoder architecture, a lightweight yet robust sharp
frame detection and an edge extraction phase. Extensive experimental results
demonstrate that SPEINet outperforms state-of-the-art methods across multiple
datasets, achieving an average of +3.2% PSNR improvement over recent
techniques. Given such promising results, we believe that both the proposed
model and dataset pave the way for future advancements in video deblurring
based on the detection of sharp frames.
|
2501.12251
|
Solar Panel Selection using Extended WASPAS with Disc Intuitionistic
Fuzzy Choquet Integral Operators: CASPAS Methodology
|
cs.IT math.IT
|
Renewable energy is crucial for addressing the growing energy demands of
modern society while mitigating the adverse effects of climate change. Unlike
fossil fuels, renewable energy sources such as solar, wind, hydro, geothermal,
and biomass are abundant, sustainable, and environmentally friendly. This study
focuses on addressing a critical challenge in renewable energy decision-making
by developing a novel framework for optimal solar panel selection, a key
component of sustainable energy solutions. Solar panel selection involves
evaluating multiple interdependent criteria, such as efficiency, cost,
durability, and environmental impact. Traditional multi-criteria
decision-making (MCDM) methods often fail to account for the interdependencies
among these criteria, leading to suboptimal outcomes. To overcome this
limitation, the study introduces the Choquet Aggregated Sum Product Assessment
(CASPAS) method, a Choquet integral-based MCDM approach that incorporates fuzzy
measures to model interactions among criteria. CASPAS generalizes the Weighted
Aggregated Sum Product Assessment (WASPAS) method, thereby enhancing
decision-making accuracy and reliability. This study also introduces the
concept of the disc intuitionistic fuzzy set (D-IFS), a generalization of the
circular intuitionistic fuzzy set, which employs a radius function
capable of assigning varying values to individual elements instead of relying
on a fixed radius. Recognizing that traditional weighted aggregation operators
neglect the interaction among criteria, this study proposes disc intuitionistic
fuzzy Choquet integral operators by incorporating the concept of fuzzy
measures, which are effective in modeling such interactions. The proposed
method is applied to a renewable energy problem on selecting optimal solar
panels.
|
2501.12254
|
Memory Storyboard: Leveraging Temporal Segmentation for Streaming
Self-Supervised Learning from Egocentric Videos
|
cs.CV cs.LG
|
Self-supervised learning holds the promise to learn good representations from
real-world continuous uncurated data streams. However, most existing works in
visual self-supervised learning focus on static images or artificial data
streams. Towards exploring a more realistic learning substrate, we investigate
streaming self-supervised learning from long-form real-world egocentric video
streams. Inspired by the event segmentation mechanism in human perception and
memory, we propose "Memory Storyboard" that groups recent past frames into
temporal segments for more effective summarization of the past visual streams
for memory replay. To accommodate efficient temporal segmentation, we propose a
two-tier memory hierarchy: the recent past is stored in a short-term memory,
and the storyboard temporal segments are then transferred to a long-term
memory. Experiments on real-world egocentric video datasets including SAYCam
and KrishnaCam show that contrastive learning objectives on top of storyboard
frames result in semantically meaningful representations which outperform those
produced by state-of-the-art unsupervised continual learning methods.
|
2501.12255
|
HAC++: Towards 100X Compression of 3D Gaussian Splatting
|
cs.CV
|
3D Gaussian Splatting (3DGS) has emerged as a promising framework for novel
view synthesis, boasting rapid rendering speed with high fidelity. However, the
substantial Gaussians and their associated attributes necessitate effective
compression techniques. Nevertheless, the sparse and unorganized nature of the
point cloud of Gaussians (or anchors in our paper) presents challenges for
compression. To achieve a compact size, we propose HAC++, which leverages the
relationships between unorganized anchors and a structured hash grid, utilizing
their mutual information for context modeling. Additionally, HAC++ captures
intra-anchor contextual relationships to further enhance compression
performance. To facilitate entropy coding, we utilize Gaussian distributions to
precisely estimate the probability of each quantized attribute, where an
adaptive quantization module is proposed to enable high-precision quantization
of these attributes for improved fidelity restoration. Moreover, we incorporate
an adaptive masking strategy to eliminate invalid Gaussians and anchors.
Overall, HAC++ achieves a remarkable size reduction of over 100X compared to
vanilla 3DGS when averaged on all datasets, while simultaneously improving
fidelity. It also delivers more than 20X size reduction compared to
Scaffold-GS. Our code is available at
https://github.com/YihangChen-ee/HAC-plus.
|
2501.12256
|
Lie-Bracket Nash Equilibrium Seeking with Bounded Update Rates for
Noncooperative Games
|
math.OC cs.SY eess.SY
|
This paper proposes a novel approach for local convergence to Nash
equilibrium in quadratic noncooperative games based on a distributed
Lie-bracket extremum seeking control scheme. This is the first instance of
noncooperative games being tackled in a model-free fashion integrated with the
extremum seeking method of bounded update rates. In particular, the stability
analysis is carried out using Lie-bracket approximation and Lyapunov's direct
method. We quantify the size of the ultimate small residual sets around the
Nash equilibrium and illustrate the theoretical results numerically on an
example in an oligopoly setting.
|
2501.12263
|
mmCooper: A Multi-agent Multi-stage Communication-efficient and
Collaboration-robust Cooperative Perception Framework
|
cs.CV
|
Collaborative perception significantly enhances individual vehicle perception
performance through the exchange of sensory information among agents. However,
real-world deployment faces challenges due to bandwidth constraints and
inevitable calibration errors during information exchange. To address these
issues, we propose mmCooper, a novel multi-agent, multi-stage,
communication-efficient, and collaboration-robust cooperative perception
framework. Our framework leverages a multi-stage collaboration strategy that
dynamically and adaptively balances intermediate- and late-stage information to
share among agents, enhancing perceptual performance while maintaining
communication efficiency. To support robust collaboration despite potential
misalignments and calibration errors, our framework captures multi-scale
contextual information for robust fusion in the intermediate stage and
calibrates the received detection results to improve accuracy in the late
stage. We validate the effectiveness of mmCooper through extensive experiments
on real-world and simulated datasets. The results demonstrate the superiority
of our proposed framework and the effectiveness of each component.
|
2501.12266
|
CBVLM: Training-free Explainable Concept-based Large Vision Language
Models for Medical Image Classification
|
cs.CV cs.AI cs.CL
|
The main challenges limiting the adoption of deep learning-based solutions in
medical workflows are the availability of annotated data and the lack of
interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the
latter by constraining the final disease prediction on a set of predefined and
human-interpretable concepts. However, the increased interpretability achieved
through these concept-based explanations implies a higher annotation burden.
Moreover, if a new concept needs to be added, the whole system needs to be
retrained. Inspired by the remarkable performance shown by Large
Vision-Language Models (LVLMs) in few-shot settings, we propose a simple, yet
effective, methodology, CBVLM, which tackles both of the aforementioned
challenges. First, for each concept, we prompt the LVLM to answer if the
concept is present in the input image. Then, we ask the LVLM to classify the
image based on the previous concept predictions. Moreover, in both stages, we
incorporate a retrieval module responsible for selecting the best examples for
in-context learning. By grounding the final diagnosis on the predicted
concepts, we ensure explainability, and by leveraging the few-shot capabilities
of LVLMs, we drastically lower the annotation cost. We validate our approach
with extensive experiments across four medical datasets and twelve LVLMs (both
generic and medical) and show that CBVLM consistently outperforms CBMs and
task-specific supervised methods without requiring any training and using just
a few annotated examples. More information on our project page:
https://cristianopatricio.github.io/CBVLM/.
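The two-stage prompting loop described above can be sketched as follows. Here `ask_lvlm` is a hypothetical callback standing in for any LVLM backend, and the prompt wording is illustrative, not the paper's exact prompts:

```python
def cbvlm_predict(image, concepts, classes, ask_lvlm):
    # Sketch of the two-stage CBVLM pipeline from the abstract.
    # `ask_lvlm` is a hypothetical (image, prompt) -> str callback.
    present = {}
    for c in concepts:  # stage 1: query the LVLM per concept
        reply = ask_lvlm(image, f"Is the concept '{c}' present in the image? Answer yes or no.")
        present[c] = reply.strip().lower().startswith("y")
    summary = ", ".join(f"{c}: {'yes' if v else 'no'}" for c, v in present.items())
    # stage 2: classification grounded on the predicted concepts
    label = ask_lvlm(image, f"Given these findings ({summary}), classify the image as one of {classes}.")
    return present, label
```

Because the final label is produced from the concept answers, the diagnosis stays explainable; the retrieval module for in-context examples is omitted from this sketch.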
|
2501.12267
|
VipDiff: Towards Coherent and Diverse Video Inpainting via Training-free
Denoising Diffusion Models
|
cs.CV
|
Recent video inpainting methods have achieved encouraging improvements by
leveraging optical flow to guide pixel propagation from reference frames either
in the image space or feature space. However, they would produce severe
artifacts in the mask center when the masked area is too large and no pixel
correspondences can be found for the center. Recently, diffusion models have
demonstrated impressive performance in generating diverse and high-quality
images, and have been exploited in a number of works for image inpainting.
These methods, however, cannot be applied directly to videos to produce
temporal-coherent inpainting results. In this paper, we propose a training-free
framework, named VipDiff, for conditioning diffusion model on the reverse
diffusion process to produce temporal-coherent inpainting results without
requiring any training data or fine-tuning the pre-trained diffusion models.
VipDiff takes optical flow as guidance to extract valid pixels from reference
frames to serve as constraints in optimizing the randomly sampled Gaussian
noise, and uses the generated results for further pixel propagation and
conditional generation. VipDiff also allows for generating diverse video
inpainting results over different sampled noise. Experiments demonstrate that
VipDiff can largely outperform state-of-the-art video inpainting methods in
terms of both spatial-temporal coherence and fidelity.
|
2501.12269
|
Benchmarking Image Perturbations for Testing Automated Driving
Assistance Systems
|
cs.SE cs.CV
|
Advanced Driver Assistance Systems (ADAS) based on deep neural networks
(DNNs) are widely used in autonomous vehicles for critical perception tasks
such as object detection, semantic segmentation, and lane recognition. However,
these systems are highly sensitive to input variations, such as noise and
changes in lighting, which can compromise their effectiveness and potentially
lead to safety-critical failures.
This study offers a comprehensive empirical evaluation of image
perturbations, techniques commonly used to assess the robustness of DNNs, to
validate and improve the robustness and generalization of ADAS perception
systems. We first conducted a systematic review of the literature, identifying
38 categories of perturbations. Next, we evaluated their effectiveness in
revealing failures in two different ADAS, both at the component and at the
system level. Finally, we explored the use of perturbation-based data
augmentation and continuous learning strategies to improve ADAS adaptation to
new operational design domains. Our results demonstrate that all categories of
image perturbations successfully expose robustness issues in ADAS and that the
use of dataset augmentation and continuous learning significantly improves ADAS
performance in novel, unseen environments.
|
2501.12271
|
Faithful Simulation of Distributed Quantum Measurement with Coding for
Computing
|
cs.IT math.IT
|
This paper considers a two-terminal problem in which Alice and Bob jointly want
to perform a measurement on a bipartite quantum system $\rho^{AB}$. Alice can
transmit the results of her measurements to Bob over a classical channel, and
Alice and Bob share common randomness. The question is the minimum
amount of communication and common randomness needed for faithful simulation.
The paper derives an achievable rate region.
|
2501.12272
|
A Lightweight Approach for User and Keyword Classification in
Controversial Topics
|
cs.SI
|
Classifying the stance of individuals on controversial topics and uncovering
their concerns is crucial for social scientists and policymakers. Data from
Online Social Networks (OSNs), which serve as a proxy to a representative
sample of society, offers an opportunity to classify these stances, discover
society's concerns regarding controversial topics, and track the evolution of
these concerns over time. Consequently, stance classification in OSNs has
garnered significant attention from researchers. However, most existing methods
for this task often rely on labelled data and utilise the text of users' posts
or the interactions between users, necessitating large volumes of data,
considerable processing time, and access to information that is not readily
available (e.g. users' followers/followees). This paper proposes a lightweight
approach for the stance classification of users and keywords in OSNs, aiming at
understanding the collective opinion of individuals and their concerns. Our
approach employs a tailored random walk model, requiring just one keyword
representing each stance, using solely the keywords in social media posts.
Experimental results demonstrate the superior performance of our method
compared to the baselines, excelling in stance classification of users and
keywords, with a running time that, while not the fastest, remains competitive.
|
2501.12273
|
Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and
Refinement
|
cs.CL cs.AI
|
The quality of Supervised Fine-Tuning (SFT) data plays a critical role in
enhancing the conversational capabilities of Large Language Models (LLMs).
However, as LLMs become more advanced, the availability of high-quality
human-annotated SFT data has become a significant bottleneck, necessitating a
greater reliance on synthetic training data. In this work, we introduce Condor,
a novel two-stage synthetic data generation framework that incorporates World
Knowledge Tree and Self-Reflection Refinement to produce high-quality SFT data
at scale. Our experimental results demonstrate that a base model fine-tuned on
only 20K Condor-generated samples achieves superior performance compared to
counterparts. The additional refinement stage in Condor further enables
iterative self-improvement for LLMs at various scales (up to 72B), validating
the effectiveness of our approach. Furthermore, our investigation into the
scaling for synthetic data in post-training reveals substantial unexplored
potential for performance improvements, opening promising avenues for future
research.
|
2501.12274
|
Making it to First: The Random Access Problem in DNA Storage
|
cs.IT math.IT
|
We study the Random Access Problem in DNA storage, which addresses the
challenge of retrieving a specific information strand from a DNA-based storage
system. In this setting, $k$ information strands, representing the data, are encoded
into $n$ strands using a code. The goal under this paradigm is to identify and
analyze codes that minimize the expected number of reads required to retrieve
any of the $k$ information strands, where in each read one of the $n$ encoded
strands is read uniformly at random. We fully solve the case when $k=2$,
showing that the best possible code attains a random access expectation of
$0.914 \cdot 2$. Moreover, we generalize a construction from \cite{GMZ24},
specific to $k=3$, to any value of $k$. Our construction uses $B_{k-1}$
sequences over $\mathbb{Z}_{q-1}$, which always exist over sufficiently large finite fields.
For $k=4$, we show that this generalized construction outperforms all previous
constructions in terms of reducing the random access expectation.
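As a toy illustration of the random access expectation (not the paper's optimal construction), consider a single-parity code with $k=2$, $n=3$: strand 1 is recoverable either directly or by combining the other two strands. A Monte Carlo sketch estimates its expected read count, which one can show equals exactly 2 for this particular code:

```python
import random

def expected_reads_parity(trials=100000, seed=0):
    # Estimate the expected number of uniform random reads needed to recover
    # information strand 1 when (m1, m2) is encoded as three strands:
    # s0 = m1, s1 = m2, s2 = m1 XOR m2 (single parity).
    # m1 is recoverable once s0 is read, or once both s1 and s2 are read.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        seen, reads = set(), 0
        while True:
            seen.add(rng.randrange(3))  # one uniform read with replacement
            reads += 1
            if 0 in seen or {1, 2} <= seen:
                break
        total += reads
    return total / trials
```

This simple code does not beat the uncoded expectation of 2; achieving the optimal $0.914 \cdot 2$ requires the more refined constructions studied in the paper.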
|
2501.12275
|
With Great Backbones Comes Great Adversarial Transferability
|
cs.CV cs.AI cs.CR cs.LG cs.MA
|
Advances in self-supervised learning (SSL) for machine vision have improved
representation robustness and model performance, giving rise to pre-trained
backbones like \emph{ResNet} and \emph{ViT} models tuned with SSL methods such
as \emph{SimCLR}. Due to the computational and data demands of pre-training,
reusing such backbones has become a practical necessity. However,
models built on these backbones may inherit their vulnerabilities to adversarial attacks.
While adversarial robustness has been studied under \emph{white-box} and
\emph{black-box} settings, the robustness of models tuned on pre-trained
backbones remains largely unexplored. Additionally, the role of tuning
meta-information in mitigating exploitation risks is unclear. This work
systematically evaluates the adversarial robustness of such models across
$20,000$ combinations of tuning meta-information, including fine-tuning
techniques, backbone families, datasets, and attack types. We propose using
proxy models to transfer attacks, simulating varying levels of target knowledge
by fine-tuning these proxies with diverse configurations. Our findings reveal
that proxy-based attacks approach the effectiveness of \emph{white-box}
methods, even with minimal tuning knowledge. We also introduce a naive
"backbone attack," leveraging only the backbone to generate adversarial
samples, which outperforms \emph{black-box} attacks and rivals \emph{white-box}
methods, highlighting critical risks in model-sharing practices. Finally, our
ablations reveal how increasing tuning meta-information impacts attack
transferability, measuring each meta-information combination.
|
2501.12279
|
Spatial exponential decay of perturbations in optimal control of general
evolution equations
|
math.OC cs.SY eess.SY math.AP
|
We analyze the robustness of optimally controlled evolution equations with
respect to spatially localized perturbations. We prove that if the involved
operators are domain-uniformly stabilizable and detectable, then these
localized perturbations only have a local effect on the optimal solution. We
characterize this domain-uniform stabilizability and detectability for the
transport equation with constant transport velocity, showing that even for
unitary semigroups, optimality implies exponential damping. Finally, we extend
our result to the case of a space-dependent transport velocity. Numerical
examples in one space dimension complement the theoretical results.
|
2501.12280
|
Bounds and Codes for General Phased Burst Errors
|
cs.IT math.IT
|
Phased burst errors (PBEs) are bursts of errors occurring at one or more
known locations. The correction of PBEs is a classical topic in coding theory,
with prominent applications such as the design of array codes for memory
systems or distributed storage. We propose a general yet fine-grained approach
to this problem, accounting not only for the number of bursts but also the
error structure in each burst. By modeling PBEs as an error set in an
adversarial channel, we investigate bounds on the maximal size of codes that
can correct them. The PBE-correction capability of generalized concatenated
codes is analyzed, and asymptotically good PBE-correcting codes are
constructed, recovering a classical construction in a specific problem
instance.
|
2501.12281
|
MoGERNN: An Inductive Traffic Predictor for Unobserved Locations in
Dynamic Sensing Networks
|
cs.LG
|
Given a partially observed road network, how can we predict the traffic state
of unobserved locations? While deep learning approaches show exceptional
performance in traffic prediction, most assume sensors at all locations of
interest, which is impractical due to financial constraints. Furthermore, these
methods typically require costly retraining when sensor configurations change.
We propose MoGERNN, an inductive spatio-temporal graph representation model, to
address these challenges. Inspired by the Mixture of Experts approach in Large
Language Models, we introduce a Mixture of Graph Expert (MoGE) block to model
complex spatial dependencies through multiple graph message aggregators and a
sparse gating network. This block estimates initial states for unobserved
locations, which are then processed by a GRU-based Encoder-Decoder that
integrates a graph message aggregator to capture spatio-temporal dependencies
and predict future states. Experiments on two real-world datasets show MoGERNN
consistently outperforms baseline methods for both observed and unobserved
locations. MoGERNN can accurately predict congestion evolution even in areas
without sensors, offering valuable information for traffic management.
Moreover, MoGERNN is adaptable to dynamic sensing networks, maintaining
competitive performance even compared to its retrained counterpart. Tests with
different numbers of available sensors confirm its consistent superiority, and
ablation studies validate the effectiveness of its key modules.
|
2501.12285
|
Implementation of an Asymmetric Adjusted Activation Function for Class
Imbalance Credit Scoring
|
cs.LG cs.AI q-fin.RM
|
Credit scoring is a systematic approach to evaluate a borrower's probability
of default (PD) on a bank loan. The data associated with such scenarios are
characteristically imbalanced, complicating binary classification owing to the
often-underestimated cost of misclassification during the classifier's learning
process. Considering the high imbalance ratio (IR) of these datasets, we
introduce an innovative yet straightforward optimized activation function by
incorporating an IR-dependent asymmetric adjusted factor embedded Sigmoid
activation function (ASIG). The embedding of ASIG makes the sensitive margin of
the Sigmoid function auto-adjustable, depending on the degree of imbalance in
the dataset distribution, thereby giving the activation function an asymmetric
characteristic that prevents the underrepresentation of the minority class
(positive samples) during the classifier's learning process. The experimental
results show that the ASIG-embedded-classifier outperforms traditional
classifiers on datasets across wide-ranging IRs in the downstream
credit-scoring task. The algorithm also shows robustness and stability, even
when the IR is ultra-high. Therefore, the algorithm provides a competitive
alternative in the financial industry, especially in credit scoring, possessing
the ability to effectively process highly imbalanced distribution data.
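The abstract does not give ASIG's closed form; the following is a purely hypothetical illustration of an IR-dependent asymmetric sigmoid (a generalized logistic whose exponent shrinks as the imbalance ratio grows, boosting sensitivity to the minority class):

```python
import numpy as np

def asig(x, imbalance_ratio):
    # Hypothetical asymmetric sigmoid: a generalized logistic whose exponent
    # depends on the imbalance ratio (IR = negatives / positives). The exact
    # functional form used in the paper is not specified in the abstract.
    gamma = 1.0 / np.log2(1.0 + imbalance_ratio)  # illustrative IR-dependent factor
    return 1.0 / (1.0 + np.exp(-x)) ** gamma
```

For a balanced dataset (IR = 1) this reduces to the standard sigmoid; as IR grows, outputs near the decision boundary are pushed upward, counteracting the underrepresentation of positive samples.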
|
2501.12286
|
A Linear Programming Approach to Private Information Retrieval
|
cs.IT math.IT
|
This work presents an algorithmic framework that uses linear programming to
construct \emph{addition-based Private Information Retrieval (AB-PIR)} schemes,
where retrieval is performed by downloading only linear combinations of message
symbols with coefficients set to 0 or 1. The AB-PIR schemes generalize several
existing capacity-achieving PIR schemes and are of practical interest because
they use only addition operations -- avoiding multiplication and other complex
operations -- and are compatible with any finite field, including binary. Our
framework broadens the search space to include all feasible solutions and can
be used to construct optimal AB-PIR schemes for the entire range of problem
parameters, including the number of servers, the total number of messages, and
the number of messages that need to be retrieved. The framework enables us to
identify schemes that outperform the previously proposed PIR schemes in certain
cases and, in other cases, achieve performance on par with the best-known
AB-PIR solutions. Additionally, the schemes generated by our framework can be
integrated into existing solutions for several related PIR scenarios, improving
their overall performance.
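As a concrete instance of an addition-based scheme (the classic two-server XOR construction, not one of the paper's LP-derived schemes), a client can retrieve message $i$ by downloading two 0/1-coefficient combinations whose XOR cancels everything except the target:

```python
import secrets
from functools import reduce

def two_server_pir(messages, i):
    # Classic 2-server XOR-based PIR: each server stores all messages and
    # returns the XOR of a requested subset; the client recovers message i
    # from the two answers. Only 0/1-coefficient combinations are downloaded,
    # i.e. an addition-based scheme in the sense of the abstract.
    n = len(messages)
    S1 = {j for j in range(n) if secrets.randbelow(2)}  # uniformly random subset
    S2 = S1 ^ {i}                                       # symmetric difference with {i}
    answer = lambda S: reduce(lambda a, b: a ^ b, (messages[j] for j in S), 0)
    return answer(S1) ^ answer(S2)                      # equals messages[i]
```

Each server individually sees a uniformly random subset, so neither learns which index was queried.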
|
2501.12288
|
Microgrid Operation Control with State-of-Charge- Dependent Storage
Power Constraints
|
eess.SY cs.SY
|
The microgrid concept offers high flexibility and resilience due to the
possibility of switching between grid-connected and stand-alone operation. This
renders microgrids an auspicious solution for rural areas and critical
infrastructure. In standalone or islanded mode, the main objective is cost
minimization while ensuring a safe and reliable operation. Optimal operation
schemes for microgrids usually assume fixed power limits for energy storage
units. This, however, is not sufficient for lithium-ion energy storage systems,
which often come with dynamic power limits that depend on the state of charge.
These limits are especially prominent when the state of charge is close to its
boundaries. In this paper, dynamic constraints for energy storage units are modelled
using convex polytopes and fitted to experimental data acquired from an 11.6
kWh lithium-ion energy storage system. The polytopic constraints are integrated
in a model predictive control scheme that was designed for a standalone
microgrid composed of a fuel cell, a photovoltaic generator and a lithium-ion
energy storage system. To evaluate the advantages, a case study with two
configurations is performed. The model predictive controller without polytopic
constraints led to constraint violations in 11.77 % of the simulation time
steps, with a maximum deviation of 118 % above the power limits. In contrast,
the configuration with polytopic constraints led to no violations over
the entire simulation horizon.
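A minimal sketch of the kind of state-of-charge-dependent limit being modelled; the breakpoint and rated power here are hypothetical, not the polytopes fitted to the paper's 11.6 kWh system:

```python
def charge_power_limit(soc, p_rated=5.0, knee=0.8):
    # Hypothetical SoC-dependent charge limit (piecewise linear, i.e. expressible
    # as a convex polytope in (soc, power)): full rated power below the knee,
    # tapering linearly to zero as the state of charge approaches 1.
    if soc < knee:
        return p_rated
    return p_rated * (1.0 - soc) / (1.0 - knee)
```

A fixed-limit controller would permit `p_rated` at any SoC, which is exactly the kind of violation the polytopic constraints eliminate near the charge boundaries.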
|
2501.12289
|
Regressor-Guided Image Editing Regulates Emotional Response to Reduce
Online Engagement
|
cs.CV cs.AI cs.HC
|
Emotions are known to mediate the relationship between users' content
consumption and their online engagement, with heightened emotional intensity
leading to increased engagement. Building on this insight, we propose three
regressor-guided image editing approaches aimed at diminishing the emotional
impact of images. These include (i) a parameter optimization approach based on
global image transformations known to influence emotions, (ii) an optimization
approach targeting the style latent space of a generative adversarial network,
and (iii) a diffusion-based approach employing classifier guidance and
classifier-free guidance. Our findings demonstrate that these approaches can
effectively alter the emotional properties of images while maintaining high
visual quality. Optimization-based methods primarily adjust low-level
properties like color hues and brightness, whereas the diffusion-based approach
introduces semantic changes, such as altering appearance or facial expressions.
Notably, results from a behavioral study reveal that only the diffusion-based
approach successfully elicits changes in viewers' emotional responses while
preserving high perceived image quality. In future work, we will investigate
the impact of these image adaptations on internet user behavior.
|
2501.12293
|
Improved Decoding of Tanner Codes
|
cs.IT cs.CC cs.DS math.IT
|
In this paper, we present improved decoding algorithms for expander-based
Tanner codes.
We begin by developing a randomized linear-time decoding algorithm that,
under the condition that $ \delta d_0 > 2 $, corrects up to $ \alpha n $ errors
for a Tanner code $ T(G, C_0) $, where $ G $ is a $ (c, d, \alpha, \delta)
$-bipartite expander with $n$ left vertices, and $ C_0 \subseteq \mathbb{F}_2^d
$ is a linear inner code with minimum distance $ d_0 $. This result improves
upon the previous work of Cheng, Ouyang, Shangguan, and Shen (RANDOM 2024),
which required $ \delta d_0 > 3 $.
We further derandomize the algorithm to obtain a deterministic linear-time
decoding algorithm with the same decoding radius. Our algorithm improves upon
the previous deterministic algorithm of Cheng et al.\ by achieving a decoding
radius of $ \alpha n $, compared with the previous radius of $
\frac{2\alpha}{d_0(1 + 0.5c\delta) }n$.
Additionally, we investigate the size-expansion trade-off introduced by the
recent work of Chen, Cheng, Li, and Ouyang (IEEE TIT 2023), and use it to
provide new bounds on the minimum distance of Tanner codes. Specifically, we
prove that the minimum distance of a Tanner code $T(G,C_0)$ is approximately
$f_\delta^{-1} \left( \frac{1}{d_0} \right) \alpha n $, where $ f_\delta(\cdot)
$ is the Size-Expansion Function. As another application, we improve the
decoding radius of our decoding algorithms from $\alpha n$ to approximately
$f_\delta^{-1}(\frac{2}{d_0})\alpha n$.
|
2501.12294
|
Wrap-Decoding in Asynchronous Unsourced Multiple Access With and Without
Delay Information
|
cs.IT math.IT
|
An asynchronous $K_a$-active-user unsourced multiple access channel (AUMAC)
is a key model for uncoordinated massive access in future networks. We focus on
a scenario where each transmission is subject to a maximal delay constraint
($d_{\max}$), and the precise delay of each user is unknown at the receiver. The
combined effects of asynchronicity and uncertain delays require analysis over
all possible delay-codeword combinations, making the complexity of the analysis
grow exponentially with $d_{\max}$ and $K_a$. To overcome this complexity, we
employ a wrap-decoder for the AUMAC and derive a uniform upper bound on the per-user
probability of error (PUPE). The numerical result shows the trade-off between
energy per bit and the number of active users under various delay constraints.
Furthermore, in our considered AUMAC, decoding without explicit delay
information is shown to achieve nearly the same energy efficiency as decoding
with perfect delay knowledge.
|
2501.12295
|
Towards Accurate Unified Anomaly Segmentation
|
cs.CV
|
Unsupervised anomaly detection (UAD) from images strives to model normal data
distributions, creating discriminative representations to distinguish and
precisely localize anomalies. Despite recent advancements in the efficient and
unified one-for-all scheme, challenges persist in accurately segmenting
anomalies for further monitoring. Moreover, this problem is obscured by the
widely-used AUROC metric under imbalanced UAD settings. This motivates us to
emphasize the significance of precise segmentation of anomaly pixels using pAP
and DSC as metrics. To address this unsolved segmentation task, we introduce the
Unified Anomaly Segmentation framework (UniAS). UniAS presents a multi-level hybrid
pipeline that progressively enhances normal information from coarse to fine,
incorporating a novel multi-granularity gated CNN (MGG-CNN) into Transformer
layers to explicitly aggregate local details from different granularities.
UniAS achieves state-of-the-art anomaly segmentation performance, attaining
65.12/59.33 and 40.06/32.50 in pAP/DSC on the MVTec-AD and VisA datasets,
respectively, surpassing previous methods significantly. The codes are shared
at https://github.com/Mwxinnn/UniAS.
|
2501.12296
|
RALAD: Bridging the Real-to-Sim Domain Gap in Autonomous Driving with
Retrieval-Augmented Learning
|
cs.CV cs.AI
|
In the pursuit of robust autonomous driving systems, models trained on
real-world datasets often struggle to adapt to new environments, particularly
when confronted with corner cases such as extreme weather conditions.
Collecting these corner cases in the real world is non-trivial, which
necessitates the use of simulators for validation. However, the high
computational cost and the domain gap in data distribution have hindered the
seamless transition between real and simulated driving scenarios. To tackle
this challenge, we propose Retrieval-Augmented Learning for Autonomous Driving
(RALAD), a novel framework designed to bridge the real-to-sim gap at a low
cost. RALAD features three primary designs, including (1) domain adaptation via
an enhanced Optimal Transport (OT) method that accounts for both individual and
grouped image distances, (2) a simple and unified framework that can be applied
to various models, and (3) efficient fine-tuning techniques that freeze the
computationally expensive layers while maintaining robustness. Experimental
results demonstrate that RALAD compensates for the performance degradation in
simulated environments while maintaining accuracy in real-world scenarios
across three different models. Taking Cross View as an example, the mIOU and
mAP metrics in real-world scenarios remain stable before and after RALAD
fine-tuning, while in simulated environments, the mIOU and mAP metrics are
improved by 10.30% and 12.29%, respectively. Moreover, the re-training cost of
our approach is reduced by approximately 88.1%. Our code is available at
https://github.com/JiachengZuo/RALAD.git.
|
2501.12299
|
Sublinear Variational Optimization of Gaussian Mixture Models with
Millions to Billions of Parameters
|
stat.ML cs.CV cs.LG
|
Gaussian Mixture Models (GMMs) range among the most frequently used machine
learning models. However, training large, general GMMs becomes computationally
prohibitive for datasets with many data points $N$ of high-dimensionality $D$.
For GMMs with arbitrary covariances, we here derive a highly efficient
variational approximation, which is integrated with mixtures of factor
analyzers (MFAs). For GMMs with $C$ components, our proposed algorithm
significantly reduces runtime complexity per iteration from
$\mathcal{O}(NCD^2)$ to a complexity scaling linearly with $D$ and remaining
constant w.r.t. $C$. Numerical validation of this theoretical complexity
reduction then shows the following: the distance evaluations required for the
entire GMM optimization process scale sublinearly with $NC$. On large-scale
benchmarks, this sublinearity results in speed-ups of an order-of-magnitude
compared to the state-of-the-art. As a proof of concept, we train GMMs with
over 10 billion parameters on about 100 million images, and observe training
times of approximately nine hours on a single state-of-the-art CPU.
|
2501.12300
|
LLM-Assisted Knowledge Graph Completion for Curriculum and Domain
Modelling in Personalized Higher Education Recommendations
|
cs.HC cs.AI
|
While learning personalization offers great potential for learners, modern
practices in higher education require a deeper consideration of domain models
and learning contexts, to develop effective personalization algorithms. This
paper introduces an innovative approach to higher education curriculum
modelling that utilizes large language models (LLMs) for knowledge graph (KG)
completion, with the goal of creating personalized learning-path
recommendations. Our research focuses on modelling university subjects and
linking their topics to corresponding domain models, enabling the integration
of learning modules from different faculties and institutions in the student's
learning path. Central to our approach is a collaborative process, where LLMs
assist human experts in extracting high-quality, fine-grained topics from
lecture materials. We develop domain, curriculum, and user models for
university modules and stakeholders. We implement this model to create the KG
from two study modules: Embedded Systems and Development of Embedded Systems
Using FPGA. The resulting KG structures the curriculum and links it to the
domain models. We evaluate our approach through qualitative expert feedback and
quantitative graph quality metrics. Domain experts validated the relevance and
accuracy of the model, while the graph quality metrics measured the structural
properties of our KG. Our results show that the LLM-assisted graph completion
approach enhances the ability to connect related courses across disciplines to
personalize the learning experience. Expert feedback also showed high
acceptance of the proposed collaborative approach for concept extraction and
classification.
|
2501.12309
|
A Hybrid Supervised and Self-Supervised Graph Neural Network for
Edge-Centric Applications
|
cs.LG q-bio.MN
|
This paper presents a novel graph-based deep learning model for tasks
involving relations between two nodes (edge-centric tasks), where the focus
lies on predicting relationships and interactions between pairs of nodes rather
than node properties themselves. The model combines supervised and
self-supervised learning, with a loss function that accounts for both the
learned embeddings and patterns observed with and without ground truth.
Additionally, it incorporates an attention mechanism that leverages both node
and edge features.
The architecture, trained end-to-end, comprises two primary components:
embedding generation and prediction. First, a graph neural network (GNN)
transforms raw node features into dense, low-dimensional embeddings,
incorporating edge attributes. Then, a feedforward neural model processes the
node embeddings to produce the final output. Experiments demonstrate that our
model matches or exceeds existing methods for protein-protein interactions
prediction and Gene Ontology (GO) terms prediction. The model also performs
effectively with one-hot encoding for node features, providing a solution for
the previously unsolved problem of predicting similarity between compounds with
unknown structures.
|
2501.12310
|
Optimizing Leaky Private Information Retrieval Codes to Achieve
${O}(\log K)$ Leakage Ratio Exponent
|
cs.IR cs.IT math.IT
|
We study the problem of leaky private information retrieval (L-PIR), where
the amount of privacy leakage is measured by the pure differential privacy
parameter, referred to as the leakage ratio exponent. Unlike the previous L-PIR
scheme proposed by Samy et al., which only adjusted the probability allocation
to the clean (low-cost) retrieval pattern, we optimize the probabilities
assigned to all the retrieval patterns jointly. It is demonstrated that the
optimal retrieval pattern probability distribution is quite sophisticated and
has a layered structure: the retrieval patterns associated with the random key
values of lower Hamming weights should be assigned higher probabilities. This
new scheme provides a significant improvement, leading to an ${O}(\log K)$
leakage ratio exponent with fixed download cost $D$ and number of servers $N$,
in contrast to the previous art that only achieves a $\Theta(K)$ exponent,
where $K$ is the number of messages.
|
2501.12314
|
Uncertainty Quantification With Noise Injection in Neural Networks: A
Bayesian Perspective
|
stat.ML cs.LG
|
Model uncertainty quantification involves measuring and evaluating the
uncertainty linked to a model's predictions, helping assess their reliability
and confidence. Noise injection is a technique used to enhance the robustness
of neural networks by introducing randomness. In this paper, we establish a
connection between noise injection and uncertainty quantification from a
Bayesian standpoint. We theoretically demonstrate that injecting noise into the
weights of a neural network is equivalent to Bayesian inference on a deep
Gaussian process. Consequently, we introduce a Monte Carlo Noise Injection
(MCNI) method, which involves injecting noise into the parameters during
training and performing multiple forward propagations during inference to
estimate the uncertainty of the prediction. Through simulation and experiments
on regression and classification tasks, our method demonstrates superior
performance compared to the baseline model.
|
2501.12318
|
BlanketGen2-Fit3D: Synthetic Blanket Augmentation Towards Improving
Real-World In-Bed Blanket Occluded Human Pose Estimation
|
cs.CV
|
Human Pose Estimation (HPE) from monocular RGB images is crucial for clinical
in-bed skeleton-based action recognition; however, it poses unique challenges
for HPE models due to the frequent presence of blankets occluding the person,
while labeled HPE data in this scenario is scarce. To address this, we introduce
BlanketGen2-Fit3D (BG2-Fit3D), an augmentation of the Fit3D dataset that contains
1,217,312 frames with synthetic photo-realistic blankets. To generate it we
used BlanketGen2, an improved version of our BlanketGen pipeline that
simulates synthetic blankets using ground-truth Skinned Multi-Person Linear
model (SMPL) meshes and then renders them as transparent images that can be
layered on top of the original frames. This dataset was used in combination
with the original Fit3D to finetune the ViTPose-B HPE model, to evaluate
synthetic blanket augmentation effectiveness. The trained models were further
evaluated on a real-world blanket occluded in-bed HPE dataset (SLP dataset).
Comparing architectures trained only on Fit3D with those trained with our
synthetic blanket augmentation, the latter significantly improved pose
estimation performance on BG2-Fit3D, the synthetic blanket-occluded dataset,
reaching 0.977 Percentage of Correct Keypoints (PCK) and 0.149 Normalized Mean
Error (NME), an absolute 4.4% PCK increase. Furthermore, the test results on SLP
demonstrated the utility of synthetic data augmentation by improving
performance by an absolute 2.3% PCK on real-world images with the poses
occluded by real blankets. These results show synthetic blanket augmentation
has the potential to improve in-bed blanket occluded HPE from RGB images. The
dataset as well as the code will be made available to the public.
|
2501.12319
|
Metric for Evaluating Performance of Reference-Free Demorphing Methods
|
cs.CV
|
A facial morph is an image created by combining two (or more) face images
pertaining to two (or more) distinct identities. Reference-free face demorphing
inverts the process and tries to recover the face images constituting a facial
morph without using any other information. However, there is no consensus on
the evaluation metrics to be used to evaluate and compare such demorphing
techniques. In this paper, we first analyze the shortcomings of the demorphing
metrics currently used in the literature. We then propose a new metric called
biometrically cross-weighted IQA that overcomes these issues and extensively
benchmark current methods on the proposed metric to show its efficacy.
Experiments on three existing demorphing methods and six datasets on two
commonly used face matchers validate the efficacy of our proposed metric.
|
2501.12322
|
A General Achievable Scheme for Linear Computation Broadcast Channel
|
cs.IT math.IT
|
This paper presents a new achievable scheme for the Linear Computation
Broadcast Channel (LCBC), which is based on a generalized subspace
decomposition derived from representable polymatroid space. This decomposition
enables the server to serve user demands with an approach of effective
multicast and interference elimination. We extend existing results by
introducing a linear programming framework to optimize multicast opportunities
across an arbitrary number of users.
|
2501.12323
|
Deep Learning Based Segmentation of Blood Vessels from H&E Stained
Oesophageal Adenocarcinoma Whole-Slide Images
|
eess.IV cs.CV
|
Blood vessels (BVs) play a critical role in the Tumor Micro-Environment
(TME), potentially influencing cancer progression and treatment response.
However, manually quantifying BVs in Hematoxylin and Eosin (H&E) stained images
is challenging and labor-intensive due to their heterogeneous appearances. We
propose a novel approach of constructing guiding maps to improve the
performance of state-of-the-art segmentation models for BV segmentation; the
guiding maps encourage the models to learn representative features of BVs. This
is particularly beneficial for computational pathology, where labeled training
data is often limited and large models are prone to overfitting. We present
quantitative and qualitative results that demonstrate the efficacy of our
approach in improving segmentation accuracy. In future work, we plan to validate
this method to segment BVs across various tissue types and investigate the role
of cellular structures in relation to BVs in the TME.
|
2501.12326
|
UI-TARS: Pioneering Automated GUI Interaction with Native Agents
|
cs.AI cs.CL cs.CV cs.HC
|
This paper introduces UI-TARS, a native GUI agent model that solely perceives
the screenshots as input and performs human-like interactions (e.g., keyboard
and mouse operations). Unlike prevailing agent frameworks that depend on
heavily wrapped commercial models (e.g., GPT-4o) with expert-crafted prompts
and workflows, UI-TARS is an end-to-end model that outperforms these
sophisticated frameworks. Experiments demonstrate its superior performance:
UI-TARS achieves SOTA performance in 10+ GUI agent benchmarks evaluating
perception, grounding, and GUI task execution. Notably, in the OSWorld
benchmark, UI-TARS achieves scores of 24.6 with 50 steps and 22.7 with 15
steps, outperforming Claude (22.0 and 14.9 respectively). In AndroidWorld,
UI-TARS achieves 46.6, surpassing GPT-4o (34.5). UI-TARS incorporates several
key innovations: (1) Enhanced Perception: leveraging a large-scale dataset of
GUI screenshots for context-aware understanding of UI elements and precise
captioning; (2) Unified Action Modeling, which standardizes actions into a
unified space across platforms and achieves precise grounding and interaction
through large-scale action traces; (3) System-2 Reasoning, which incorporates
deliberate reasoning into multi-step decision making, involving multiple
reasoning patterns such as task decomposition, reflective thinking, and
milestone recognition; and (4) Iterative Training with Reflective Online Traces, which
addresses the data bottleneck by automatically collecting, filtering, and
reflectively refining new interaction traces on hundreds of virtual machines.
Through iterative training and reflection tuning, UI-TARS continuously learns
from its mistakes and adapts to unforeseen situations with minimal human
intervention. We also analyze the evolution path of GUI agents to guide the
further development of this domain.
|
2501.12327
|
VARGPT: Unified Understanding and Generation in a Visual Autoregressive
Multimodal Large Language Model
|
cs.CV
|
We present VARGPT, a novel multimodal large language model (MLLM) that
unifies visual understanding and generation within a single autoregressive
framework. VARGPT employs a next-token prediction paradigm for visual
understanding and a next-scale prediction paradigm for visual autoregressive
generation. VARGPT innovatively extends the LLaVA architecture, achieving
efficient scale-wise autoregressive visual generation within MLLMs while
seamlessly accommodating mixed-modal input and output within a single model
framework. Our VARGPT undergoes a three-stage unified training process on
specially curated datasets, comprising a pre-training phase and two mixed
visual instruction-tuning phases. The three stages are designed to
achieve alignment between visual and textual features, enhance instruction
following for both understanding and generation, and improve visual generation
quality, respectively. Despite its LLaVA-based architecture for multimodal
understanding, VARGPT significantly outperforms LLaVA-1.5 across various
vision-centric benchmarks, such as visual question-answering and reasoning
tasks. Notably, VARGPT naturally supports capabilities in autoregressive visual
generation and instruction-to-image synthesis, showcasing its versatility in
both visual understanding and generation tasks. Project page is at:
\url{https://vargpt-1.github.io/}
|
2501.12330
|
The Gap Between Principle and Practice of Lossy Image Coding
|
cs.IT cs.LG math.IT
|
Lossy image coding is the art of computing that is principally bounded by the
image's rate-distortion function. This bound, though never accurately
characterized, has been approached practically via deep learning technologies
in recent years. Indeed, learned image coding schemes allow direct optimization
of the joint rate-distortion cost, thereby outperforming the handcrafted image
coding schemes by a large margin. Still, it is observed that there is room for
further improvement in the rate-distortion performance of learned image coding.
In this article, we identify the gap between the ideal rate-distortion function
forecasted by Shannon's information theory and the empirical rate-distortion
function achieved by the state-of-the-art learned image coding schemes,
revealing that the gap is incurred by five different effects: modeling effect,
approximation effect, amortization effect, digitization effect, and asymptotic
effect. We design simulations and experiments to quantitatively evaluate the last
three effects, demonstrating the high potential of future lossy image
coding technologies.
|
2501.12331
|
Cinepro: Robust Training of Foundation Models for Cancer Detection in
Prostate Ultrasound Cineloops
|
eess.IV cs.CV cs.LG q-bio.TO
|
Prostate cancer (PCa) detection using deep learning (DL) models has shown
potential for enhancing real-time guidance during biopsies. However, prostate
ultrasound images lack pixel-level cancer annotations, introducing label noise.
Current approaches often focus on limited regions of interest (ROIs),
disregarding anatomical context necessary for accurate diagnosis. Foundation
models can overcome this limitation by analyzing entire images to capture
global spatial relationships; however, they still encounter challenges stemming
from the weak labels associated with coarse pathology annotations in ultrasound
data. We introduce Cinepro, a novel framework that strengthens foundation
models' ability to localize PCa in ultrasound cineloops. Cinepro adapts robust
training by integrating the proportion of cancer tissue reported by pathology
in a biopsy core into its loss function to address label noise, providing a
more nuanced supervision. Additionally, it leverages temporal data across
multiple frames to apply robust augmentations, enhancing the model's ability to
learn stable cancer-related features. Cinepro demonstrates superior performance
on a multi-center prostate ultrasound dataset, achieving an AUROC of 77.1% and
a balanced accuracy of 83.8%, surpassing current benchmarks. These findings
underscore Cinepro's promise in advancing foundation models for weakly labeled
ultrasound data.
|
2501.12332
|
Automatic Labelling with Open-source LLMs using Dynamic Label Schema
Integration
|
cs.CL cs.AI cs.LG
|
Acquiring labelled training data that meets quantity and quality requirements
remains a costly task in real-world machine learning projects. Recently, Large
Language Models (LLMs), notably GPT-4, have shown great promise in labelling
data with high accuracy. However, privacy and cost concerns prevent the
ubiquitous use of GPT-4. In this work, we explore effectively leveraging
open-source models for automatic labelling. We identify integrating label
schema as a promising technique but find that naively using the label
description for classification leads to poor performance on high-cardinality
tasks. To address this, we propose Retrieval Augmented Classification (RAC), in
which the LLM performs inference for one label at a time using the
corresponding label schema; we start with the most related label and iterate
until a label is chosen by the LLM. We show that our method, which dynamically integrates label
description, leads to performance improvements in labelling tasks. We further
show that by focusing only on the most promising labels, RAC can trade off
between label quality and coverage - a property we leverage to automatically
label our internal datasets.
|
2501.12336
|
FuocChuVIP123 at CoMeDi Shared Task: Disagreement Ranking with
XLM-Roberta Sentence Embeddings and Deep Neural Regression
|
cs.CL cs.AI
|
This paper presents the results of our system for the CoMeDi Shared Task, focusing on
Subtask 2: Disagreement Ranking. Our system leverages sentence embeddings
generated by the paraphrase-xlm-r-multilingual-v1 model, combined with a deep
neural regression model incorporating batch normalization and dropout for
improved generalization. By predicting the mean of pairwise judgment
differences between annotators, our method explicitly targets disagreement
ranking, diverging from traditional "gold label" aggregation approaches. We
optimized our system with a customized architecture and training procedure,
achieving competitive performance in Spearman correlation against mean
disagreement labels. Our results highlight the importance of robust embeddings,
effective model architecture, and careful handling of judgment differences for
ranking disagreement in multilingual contexts. These findings provide insights
into the use of contextualized representations for ordinal judgment tasks and
open avenues for further refinement of disagreement prediction models.
|
2501.12337
|
Understanding User Preference -- Comparison between Linear and
Directional Top-K Query results
|
cs.DB
|
This paper investigates user preferences for Linear Top-k Queries and
Directional Top-k Queries, two methods for ranking results in multidimensional
datasets. While Linear Queries prioritize weighted sums of attributes,
Directional Queries aim to deliver more balanced results by incorporating the
spatial relationship between data points and a user-defined preference line.
The study explores how preferences for these methods vary across different
contexts by focusing on two real-world topics: used cars (e-commerce domain)
and football players (personal interest domain). A user survey involving 106
participants was conducted to evaluate preferences, with results visualized as
scatter plots for comparison. The findings reveal a significant preference for
directional queries in the used cars topic, where balanced results align better
with user goals. In contrast, preferences in the football players topic were
more evenly distributed, influenced by user expertise and familiarity with the
domain. Additionally, the study demonstrates that the two specific topics
selected for this research exhibit significant differences in their impact on
user preferences. This research reveals authentic user preferences,
highlighting the practical utility of Directional Queries for lifestyle-related
applications and the subjective nature of preferences in specialized domains.
These insights contribute to advancing personalized database technologies,
guiding the development of more user-centric ranking systems.
|
2501.12339
|
Treefix: Enabling Execution with a Tree of Prefixes
|
cs.SE cs.AI
|
The ability to execute code is a prerequisite for various dynamic program
analyses. Learning-guided execution has been proposed as an approach to enable
the execution of arbitrary code snippets by letting a neural model predict
likely values for any missing variables. Although state-of-the-art
learning-guided execution approaches, such as LExecutor, can enable the
execution of a relatively large amount of code, they are limited to predicting a
restricted set of possible values and do not use any feedback from previous
executions to execute even more code. This paper presents Treefix, a novel
learning-guided execution approach that leverages LLMs to iteratively create
code prefixes that enable the execution of a given code snippet. The approach
addresses the problem in a multi-step fashion, where each step uses feedback
about the code snippet and its execution to instruct an LLM to improve a
previously generated prefix. This process iteratively creates a tree of
prefixes, a subset of which is returned to the user as prefixes that maximize
the number of executed lines in the code snippet. In our experiments with two
datasets of Python code snippets, Treefix achieves 25% and 7% more coverage
relative to the current state of the art in learning-guided execution, covering
a total of 84% and 82% of all lines in the code snippets.
|
2501.12344
|
CYCle: Choosing Your Collaborators Wisely to Enhance Collaborative
Fairness in Decentralized Learning
|
cs.LG cs.DC
|
Collaborative learning (CL) enables multiple participants to jointly train
machine learning (ML) models on decentralized data sources without raw data
sharing. While the primary goal of CL is to maximize the expected accuracy gain
for each participant, it is also important to ensure that the gains are fairly
distributed. Specifically, no client should be negatively impacted by the
collaboration, and the individual gains must ideally be commensurate with the
contributions. Most existing CL algorithms require central coordination and
focus on the gain maximization objective while ignoring collaborative fairness.
In this work, we first show that the existing measure of collaborative fairness
based on the correlation between accuracy values without and with collaboration
has drawbacks because it does not account for negative collaboration gain. We
argue that maximizing mean collaboration gain (MCG) while simultaneously
minimizing the collaboration gain spread (CGS) is a fairer alternative. Next,
we propose the CYCle protocol that enables individual participants in a private
decentralized learning (PDL) framework to achieve this objective through a
novel reputation scoring method based on gradient alignment between the local
cross-entropy and distillation losses. Experiments on the CIFAR-10, CIFAR-100,
and Fed-ISIC2019 datasets empirically demonstrate the effectiveness of the
CYCle protocol to ensure positive and fair collaboration gain for all
participants, even in cases where the data distributions of participants are
highly skewed. For the simple mean estimation problem with two participants, we
also theoretically show that CYCle performs better than standard FedAvg,
especially when there is large statistical heterogeneity.
|
2501.12348
|
Rate-Distortion-Perception Function of Bernoulli Vector Sources
|
cs.IT math.IT
|
In this paper, we consider the rate-distortion-perception (RDP) trade-off for
the lossy compression of a Bernoulli vector source, which is a finite
collection of independent binary random variables. The RDP function quantifies
the most efficient compression of a source when we impose both a distortion
constraint that limits the dissimilarity between the source and the
reconstruction and a perception constraint that restricts the distributional
discrepancy between the source and the reconstruction. In this work, we obtain an
exact characterization of the RDP function of a Bernoulli vector source with
the Hamming distortion function and a single-letter perception function that
measures the closeness of the distributions of the components of the source.
The solution can be described by partitioning the set of distortion and
perception levels $(D,P)$ into three regions, where in each region the optimal
distortion and perception levels we allot to the components have a similar
nature. Finally, we introduce the RDP function for graph sources and apply our
result to the Erd\H{o}s-R\'enyi graph model.
|
2501.12349
|
General Field Evaluation in High-Order Meshes on GPUs
|
cs.MS cs.CE
|
Robust and scalable function evaluation at any arbitrary point in the
finite/spectral element mesh is required for querying the partial differential
equation solution at points of interest, comparison of solution between
different meshes, and Lagrangian particle tracking. This is a challenging
problem, particularly for high-order unstructured meshes partitioned in
parallel with MPI, as it requires identifying the element that overlaps a given
point and computing the corresponding reference space coordinates. We present a
robust and efficient technique for general field evaluation in large-scale
high-order meshes with quadrilaterals and hexahedra. In the proposed method, a
combination of globally partitioned and processor-local maps are used to first
determine a list of candidate MPI ranks, and then locally candidate elements
that could contain a given point. Next, element-wise bounding boxes further
reduce the list of candidate elements. Finally, Newton's method with trust
region is used to determine the overlapping element and corresponding reference
space coordinates. Since GPU-based architectures have become popular for
accelerating computational analyses using meshes with tensor-product elements,
specialized kernels have been developed to utilize the proposed methodology on
GPUs. The method is also extended to enable general field evaluation on surface
meshes. The paper concludes by demonstrating the use of the proposed method in
various applications ranging from mesh-to-mesh transfer during r-adaptivity to
Lagrangian particle tracking.
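The element-local step of the method above, recovering reference-space coordinates by Newton's method, can be sketched for the simplest case. This is a hedged illustration on a bilinear quadrilateral without the trust-region safeguard the paper uses on high-order elements; all names are illustrative:

```python
import numpy as np

def shape_funcs(xi, eta):
    # Bilinear shape functions on the reference square [-1, 1]^2,
    # ordered counter-clockwise from the (-1, -1) corner.
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta), (1 - xi) * (1 + eta)])

def shape_grads(xi, eta):
    # Derivatives of the shape functions w.r.t. xi and eta
    dxi = 0.25 * np.array([-(1 - eta), (1 - eta), (1 + eta), -(1 + eta)])
    deta = 0.25 * np.array([-(1 - xi), -(1 + xi), (1 + xi), (1 - xi)])
    return dxi, deta

def invert_map(verts, point, tol=1e-12, max_iter=25):
    """Find (xi, eta) such that sum_i N_i(xi, eta) * verts[i] == point."""
    ref = np.zeros(2)                          # start at the element center
    for _ in range(max_iter):
        residual = shape_funcs(*ref) @ verts - point
        if np.linalg.norm(residual) < tol:
            break
        dxi, deta = shape_grads(*ref)
        jac = np.column_stack((dxi @ verts, deta @ verts))  # 2x2 Jacobian
        ref -= np.linalg.solve(jac, residual)  # plain Newton update
    return ref
```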
|
2501.12352
|
Test-time regression: a unifying framework for designing sequence models
with associative memory
|
cs.LG cs.AI cs.NE stat.ML
|
Sequences provide a remarkably general way to represent and process
information. This powerful abstraction has placed sequence modeling at the
center of modern deep learning applications, inspiring numerous architectures
from transformers to recurrent networks. While this fragmented development has
yielded powerful models, it has left us without a unified framework to
understand their fundamental similarities and explain their effectiveness. We
present a unifying framework motivated by an empirical observation: effective
sequence models must be able to perform associative recall. Our key insight is
that memorizing input tokens through an associative memory is equivalent to
performing regression at test-time. This regression-memory correspondence
provides a framework for deriving sequence models that can perform associative
recall, offering a systematic lens to understand seemingly ad-hoc architectural
choices. We show that numerous recent architectures -- including linear attention
models, their gated variants, state-space models, online learners, and softmax
attention -- emerge naturally as specific approaches to test-time regression.
Each architecture corresponds to three design choices: the relative importance
of each association, the regressor function class, and the optimization
algorithm. This connection leads to new understanding: we provide theoretical
justification for QKNorm in softmax attention, and we motivate higher-order
generalizations of softmax attention. Beyond unification, our work unlocks
decades of rich statistical tools that can guide future development of more
powerful yet principled sequence models.
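The regression-memory correspondence can be made concrete in a few lines of linear algebra: storing (key, value) pairs and recalling a value amounts to solving a least-squares problem at query time. A toy sketch under the paper's framing, not any specific architecture:

```python
import numpy as np

# Toy illustration of the regression-memory correspondence: an associative
# memory W is fit by ridge least squares over stored (key, value) pairs, and
# recall is W @ query. Linear attention corresponds to the cruder estimator
# W = sum_t v_t k_t^T; solving the regression exactly (as below) recovers
# stored values far more faithfully. All names here are illustrative.
rng = np.random.default_rng(0)
d = 8
keys = rng.normal(size=(d, d))       # d stored associations in d dimensions
values = rng.normal(size=(d, d))

lam = 1e-8                           # tiny ridge term for numerical safety
# W minimizes sum_t ||W k_t - v_t||^2 + lam * ||W||_F^2
W = values.T @ keys @ np.linalg.inv(keys.T @ keys + lam * np.eye(d))

recalled = W @ keys[3]               # querying with a stored key
```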
|
2501.12354
|
Diffusion-aware Censored Gaussian Processes for Demand Modelling
|
cs.LG stat.ME stat.ML
|
Inferring the true demand for a product or a service from aggregate data is
often challenging due to the limited available supply, thus resulting in
observations that are censored and correspond to the realized demand, thereby
not accounting for the unsatisfied demand. Censored regression models are able
to account for the effect of censoring due to the limited supply, but they
do not consider the effect of substitutions, which may cause the demand for
similar alternative products or services to increase. This paper proposes
Diffusion-aware Censored Demand Models, which combine a Tobit likelihood with a
graph diffusion process in order to model the latent process of transfer of
unsatisfied demand between similar products or services. We instantiate this
new class of models under the framework of GPs and, based on both simulated and
real-world data for modeling sales, bike-sharing demand, and EV charging
demand, demonstrate its ability to better recover the true demand and produce
more accurate out-of-sample predictions.
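The Tobit ingredient of the likelihood can be sketched directly: an observation at the supply cap is treated as "latent demand >= cap" rather than "demand == cap". This is a hedged stand-in (symbols mu, sigma, cap are illustrative); the paper couples such a likelihood with a GP prior and a graph diffusion process:

```python
import math

def _norm_logpdf(z):
    # log density of a standard normal at z
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def _norm_logsf(z):
    # log P(Z >= z) for a standard normal, via the complementary error fn
    return math.log(0.5 * math.erfc(z / math.sqrt(2.0)))

def tobit_loglik(y, mu, sigma, cap):
    """Log-likelihood of demand observations y with means mu, censored at cap."""
    total = 0.0
    for yi, mi in zip(y, mu):
        if yi >= cap:   # supply exhausted: we only know latent demand >= cap
            total += _norm_logsf((cap - mi) / sigma)
        else:           # uncensored: ordinary Gaussian density
            total += _norm_logpdf((yi - mi) / sigma) - math.log(sigma)
    return total
```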
|
2501.12356
|
Vision-Language Models for Automated Chest X-ray Interpretation:
Leveraging ViT and GPT-2
|
cs.CV
|
Radiology plays a pivotal role in modern medicine due to its non-invasive
diagnostic capabilities. However, the manual generation of unstructured medical
reports is time-consuming and error-prone, creating a significant bottleneck in
clinical workflows. Despite advancements in AI-generated
radiology reports, challenges remain in achieving detailed and accurate report
generation. In this study, we evaluated different combinations of
multimodal models that integrate Computer Vision and Natural Language
Processing to generate comprehensive radiology reports. We employed a
pretrained Vision Transformer (ViT-B16) and a SWIN Transformer as the image
encoders. The BART and GPT-2 models serve as the textual decoders. We used
Chest X-ray images and reports from the IU-Xray dataset to evaluate the
usability of the SWIN Transformer-BART, SWIN Transformer-GPT-2, ViT-B16-BART
and ViT-B16-GPT-2 models for report generation. Our goal was to identify the
best-performing combination. The SWIN-BART model performed best among the four,
achieving strong results on almost all evaluation metrics, including ROUGE,
BLEU, and BERTScore.
|
2501.12359
|
Measured Hockey-Stick Divergence and its Applications to Quantum
Pufferfish Privacy
|
quant-ph cs.CR cs.IT cs.LG math.IT
|
The hockey-stick divergence is a fundamental quantity characterizing several
statistical privacy frameworks that ensure privacy for classical and quantum
data. In such quantum privacy frameworks, the adversary is allowed to perform
all possible measurements. However, in practice, there are typically
limitations to the set of measurements that can be performed. To this end,
here, we comprehensively analyze the measured hockey-stick divergence under
several practically relevant classes of measurements. We prove several
of its properties, including data processing and convexity. We show that it is
efficiently computable by semi-definite programming for some classes of
measurements and can be analytically evaluated for Werner and isotropic states.
Notably, we show that the measured hockey-stick divergence characterizes
optimal privacy parameters in the quantum pufferfish privacy framework. With
this connection and the developed technical tools, we enable methods to
quantify and audit privacy for several practically relevant settings. Lastly,
we introduce the measured hockey-stick divergence of channels and explore its
applications in ensuring privacy for channels.
|
2501.12362
|
ARM-IRL: Adaptive Resilience Metric Quantification Using Inverse
Reinforcement Learning
|
eess.SY cs.SY
|
Resilience of safety-critical systems is gaining importance as cyber-physical
threats become increasingly prevalent, with digital systems now ubiquitous in
critical infrastructure. The challenge with determining the resilience of
cyber-physical systems is identifying a set of resilience metrics that can
adapt to the changing states of the system. A static resilience metric can lead
to an inaccurate estimation of system state, and can result in unintended
consequences against cyber threats. In this work, we propose a data-driven
method for adaptive resilience metric learning. The primary goal is to learn a
single resilience metric by formulating an inverse reinforcement learning
problem that learns a reward or objective from a set of control actions from an
expert. It learns the structure or parameters of the reward function based on
information provided by expert demonstrations. Most prior work has considered
static weights or theories from fuzzy logic to formulate a single resilience
metric. Instead, this work learns the resilience metric, represented as a
reward function, using adversarial inverse reinforcement learning, determining
the optimal policy by training the generator and discriminator in parallel. We
evaluate our proposed technique in scenarios such as optimal communication
network rerouting, power distribution network reconfiguration, and a combined
cyber-physical restoration of critical load using the IEEE 123-bus system.
|
2501.12365
|
Efficient Algorithm for Sparse Fourier Transform of Generalized q-ary
Functions
|
cs.CC cs.DM cs.IT cs.LG math.IT
|
Computing the Fourier transform of a $q$-ary function
$f:\mathbb{Z}_{q}^n\rightarrow \mathbb{R}$, which maps $q$-ary sequences to
real numbers, is an important problem in mathematics with wide-ranging
applications in biology, signal processing, and machine learning. Previous
studies have shown that, under the sparsity assumption, the Fourier transform
can be computed efficiently using fast and sample-efficient algorithms.
However, in many practical settings, the function is defined over a more
general space -- the space of generalized $q$-ary sequences $\mathbb{Z}_{q_1}
\times \mathbb{Z}_{q_2} \times \cdots \times \mathbb{Z}_{q_n}$ -- where each
$\mathbb{Z}_{q_i}$ corresponds to integers modulo $q_i$. A naive approach
involves setting $q=\max_i{q_i}$ and treating the function as $q$-ary, which
results in heavy computational overheads. Herein, we develop GFast, an
algorithm that computes the $S$-sparse Fourier transform of $f$ with a sample
complexity of $O(Sn)$, computational complexity of $O(Sn \log N)$, and a
failure probability that approaches zero as $N=\prod_{i=1}^n q_i \rightarrow
\infty$ with $S = N^\delta$ for some $0 \leq \delta < 1$. In the presence of
noise, we further demonstrate that a robust version of GFast computes the
transform with a sample complexity of $O(Sn^2)$ and computational complexity of
$O(Sn^2 \log N)$ under the same high probability guarantees. Using large-scale
synthetic experiments, we demonstrate that GFast computes the sparse Fourier
transform of generalized $q$-ary functions using $16\times$ fewer samples and
running $8\times$ faster than existing algorithms. In real-world protein
fitness datasets, GFast explains the predictive interactions of a neural
network with $>25\%$ smaller normalized mean-squared error compared to existing
algorithms.
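The object GFast computes efficiently can be written down naively: the Fourier transform of a function over a mixed-radix domain evaluates every product-of-roots-of-unity character, at O(N^2) cost. A hedged brute-force baseline (names illustrative), useful only for checking on tiny domains:

```python
import itertools
import cmath

def naive_fourier(f, qs):
    """Fourier transform of f: dict mapping tuples in prod_i Z_{q_i} -> float."""
    domain = list(itertools.product(*[range(q) for q in qs]))
    N = len(domain)
    F = {}
    for k in domain:
        acc = 0j
        for x in domain:
            # mixed-radix character: each coordinate contributes its own
            # q_i-th root of unity
            phase = sum(2 * cmath.pi * ki * xi / qi
                        for ki, xi, qi in zip(k, x, qs))
            acc += f[x] * cmath.exp(-1j * phase)
        F[k] = acc / N
    return F
```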
|
2501.12367
|
Budget-constrained Collaborative Renewable Energy Forecasting Market
|
cs.LG
|
Accurate power forecasting from renewable energy sources (RES) is crucial for
integrating additional RES capacity into the power system and realizing
sustainability goals. This work emphasizes the importance of integrating
decentralized spatio-temporal data into forecasting models. However,
decentralized data ownership presents a critical obstacle to the success of
such spatio-temporal models, and incentive mechanisms to foster data-sharing
need to be considered. The main contributions are a) a comparative analysis of
the forecasting models, advocating for efficient and interpretable spline LASSO
regression models, and b) a bidding mechanism within the data/analytics market
to ensure fair compensation for data providers and enable both buyers and
sellers to express their data price requirements. Furthermore, an incentive
mechanism for time series forecasting is proposed, effectively incorporating
price constraints and preventing redundant feature allocation. Results show
significant accuracy improvements and potential monetary gains for data
sellers. For wind power data, forecasts generated by the proposed approach
achieved an average root mean squared error improvement of over 10% compared
with locally generated ones.
|
2501.12368
|
InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward
Model
|
cs.CV cs.CL
|
Despite the promising performance of Large Vision Language Models (LVLMs) in
visual understanding, they occasionally generate incorrect outputs. While
reward models (RMs) with reinforcement learning or test-time scaling offer the
potential for improving generation quality, a critical gap remains: publicly
available multi-modal RMs for LVLMs are scarce, and the implementation details
of proprietary models are often unclear. We bridge this gap with
InternLM-XComposer2.5-Reward (IXC-2.5-Reward), a simple yet effective
multi-modal reward model that aligns LVLMs with human preferences. To ensure
the robustness and versatility of IXC-2.5-Reward, we set up a high-quality
multi-modal preference corpus spanning text, image, and video inputs across
diverse domains, such as instruction following, general understanding,
text-rich documents, mathematical reasoning, and video understanding.
IXC-2.5-Reward achieves excellent results on the latest multi-modal reward
model benchmark and shows competitive performance on text-only reward model
benchmarks. We further demonstrate three key applications of IXC-2.5-Reward:
(1) Providing a supervisory signal for RL training. Integrating IXC-2.5-Reward
with Proximal Policy Optimization (PPO) yields IXC-2.5-Chat, which shows
consistent improvements in instruction following and multi-modal open-ended
dialogue; (2) Selecting the best response from candidate responses for
test-time scaling; and (3) Filtering outlier or noisy samples from existing
image and video instruction tuning training data. To ensure reproducibility and
facilitate further research, we have open-sourced all model weights and
training recipes at https://github.com/InternLM/InternLM-XComposer
|
2501.12369
|
DARB-Splatting: Generalizing Splatting with Decaying Anisotropic Radial
Basis Functions
|
cs.CV cs.AI cs.GR
|
Splatting-based 3D reconstruction methods have gained popularity with the
advent of 3D Gaussian Splatting, efficiently synthesizing high-quality novel
views. These methods commonly resort to using exponential family functions,
such as the Gaussian function, as reconstruction kernels due to their
anisotropic nature, ease of projection, and differentiability in rasterization.
However, the field remains restricted to variations within the exponential
family, leaving generalized reconstruction kernels largely underexplored,
partly due to the lack of easy integrability in 3D to 2D projections. In this
light, we show that a class of decaying anisotropic radial basis functions
(DARBFs), which are non-negative functions of the Mahalanobis distance,
supports splatting by approximating the Gaussian function's closed-form
integration advantage. With this fresh perspective, we demonstrate up to 34%
faster convergence during training and a 15% reduction in memory consumption
across various DARB reconstruction kernels, while maintaining comparable PSNR,
SSIM, and LPIPS results. We will make the code available.
|
2501.12370
|
Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for
Mixture-of-Experts Language Models
|
cs.LG cs.AI
|
Scaling the capacity of language models has consistently proven to be a
reliable approach for improving performance and unlocking new capabilities.
Capacity can be primarily defined by two dimensions: the number of model
parameters and the compute per example. While scaling typically involves
increasing both, the precise interplay between these factors and their combined
contribution to overall capacity remains not fully understood. We explore this
relationship in the context of sparse Mixture-of-Experts (MoEs), which allow
scaling the number of parameters without proportionally increasing the FLOPs
per example. We investigate how varying the sparsity level, i.e., the fraction
of inactive parameters, impacts model performance during pretraining and
downstream few-shot evaluation. We find that under different constraints (e.g.,
parameter size and total training compute), there is an optimal level of
sparsity that improves both training efficiency and model performance. These
results provide a better understanding of the impact of sparsity in scaling
laws for MoEs and complement existing works in this area, offering insights for
designing more efficient architectures.
|
2501.12371
|
CAT and DOG: Improved Codes for Private Distributed Matrix
Multiplication
|
cs.IT math.IT
|
We present novel constructions of polynomial codes for private distributed
matrix multiplication (PDMM/SDMM) using outer product partitioning (OPP). We
extend the degree table framework from the literature to cyclic-addition degree
tables (CATs). By using roots of unity as evaluation points, we enable
modulo-addition in the table. Based on CATs, we present an explicit
construction, called CATx, that requires fewer workers than existing schemes in
the low-privacy regime. Additionally, we present new families of schemes based
on conventional degree tables, called GASPrs and DOGrs, that outperform the
state-of-the-art for a wide range of parameters.
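The cyclic-addition mechanism rests on a standard fact about roots of unity, which a two-line check makes concrete (a hedged illustration of the underlying identity, not the paper's construction):

```python
import cmath

# With w a primitive q-th root of unity, monomial degrees combine modulo q:
# w**a * w**b == w**((a + b) % q). This is what lets a degree table over
# such evaluation points perform cyclic (modulo) addition of exponents.
q = 5
w = cmath.exp(2j * cmath.pi / q)
a, b = 3, 4
lhs = w ** a * w ** b
rhs = w ** ((a + b) % q)
```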
|
2501.12372
|
Is Long Context All You Need? Leveraging LLM's Extended Context for
NL2SQL
|
cs.DB cs.AI
|
Large Language Models (LLMs) have demonstrated impressive capabilities across
a range of natural language processing tasks. In particular, improvements in
reasoning abilities and the expansion of context windows have opened new
avenues for leveraging these powerful models. NL2SQL is challenging in that the
natural language question is inherently ambiguous, while the SQL generation
requires a precise understanding of complex data schema and semantics. One
approach to resolving this semantic ambiguity is to provide richer contextual
information.
In this work, we explore the performance and the latency trade-offs of the
extended context window (a.k.a., long context) offered by Google's
state-of-the-art LLM (\textit{gemini-1.5-pro}). We study the impact of various
contextual information, including column example values, question and SQL query
pairs, user-provided hints, SQL documentation, and schema. To the best of our
knowledge, this is the first work to study how the extended context window and
extra contextual information can help NL2SQL generation with respect to both
accuracy and latency cost. We show that long context LLMs are robust and do not
get lost in the extended contextual information. Additionally, our long-context
NL2SQL pipeline based on Google's \textit{gemini-pro-1.5} achieves strong
performance on various benchmark datasets without finetuning or expensive
self-consistency-based techniques.
|
2501.12374
|
Expertise elevates AI usage: experimental evidence comparing laypeople
and professional artists
|
cs.HC cs.AI cs.CY
|
Novel capacities of generative AI to analyze and generate cultural artifacts
raise inevitable questions about the nature and value of artistic education and
human expertise. Has AI already leveled the playing field between professional
artists and laypeople, or do trained artistic expressive capacity, curation
skills and experience instead enhance the ability to use these new tools? In
this pre-registered study, we conduct experimental comparisons between 50
active artists and a demographically matched sample of laypeople. We designed
two tasks to approximate artistic practice for testing their capabilities in
both faithful and creative image creation: replicating a reference image, and
moving as far away as possible from it. We developed a bespoke platform where
participants used a modern text-to-image model to complete both tasks. We also
collected and compared participants' sentiments towards AI. On average, artists
produced more faithful and creative outputs than their lay counterparts,
although only by a small margin. While AI may ease content creation,
professional expertise is still valuable - even within the confined space of
generative AI itself. Finally, we also explored how well an exemplary
vision-capable large language model (GPT-4o) would complete the same tasks, if
given the role of an image generation agent, and found it performed on par in
copying but outperformed even artists in the creative task. The very best
results were still produced by humans in both tasks. These outcomes highlight
the importance of integrating artistic skills with AI training to prepare
artists and other visual professionals for a technologically evolving
landscape. We see a potential in collaborative synergy with generative AI,
which could reshape creative industries and education in the arts.
|
2501.12375
|
Video Depth Anything: Consistent Depth Estimation for Super-Long Videos
|
cs.CV cs.AI
|
Depth Anything has achieved remarkable success in monocular depth estimation
with strong generalization ability. However, it suffers from temporal
inconsistency in videos, hindering its practical applications. Various methods
have been proposed to alleviate this issue by leveraging video generation
models or introducing priors from optical flow and camera poses. Nonetheless,
these methods are only applicable to short videos (< 10 seconds) and require a
trade-off between quality and computational efficiency. We propose Video Depth
Anything for high-quality, consistent depth estimation in super-long videos
(over several minutes) without sacrificing efficiency. We base our model on
Depth Anything V2 and replace its head with an efficient spatial-temporal head.
We design a straightforward yet effective temporal consistency loss by
constraining the temporal depth gradient, eliminating the need for additional
geometric priors. The model is trained on a joint dataset of video depth and
unlabeled images, similar to Depth Anything V2. Moreover, a novel
key-frame-based strategy is developed for long video inference. Experiments
show that our model can be applied to arbitrarily long videos without
compromising quality, consistency, or generalization ability. Comprehensive
evaluations on multiple video benchmarks demonstrate that our approach sets a
new state-of-the-art in zero-shot video depth estimation. We offer models of
different scales to support a range of scenarios, with our smallest model
capable of real-time performance at 30 FPS.
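A temporal depth-gradient consistency loss in the spirit described above can be sketched very compactly; the paper's exact formulation (e.g. any masking or scale alignment) may differ, so this is illustrative only:

```python
import numpy as np

def temporal_gradient_loss(pred, gt):
    """pred, gt: (T, H, W) stacks of per-frame depth maps.

    Penalizes frame-to-frame changes in predicted depth that are not
    matched by the target, without requiring optical flow or pose priors.
    """
    pred_dt = np.diff(pred, axis=0)   # temporal gradient of the prediction
    gt_dt = np.diff(gt, axis=0)       # temporal gradient of the target
    return float(np.abs(pred_dt - gt_dt).mean())
```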
|
2501.12379
|
Constant Weight Polar Codes through Periodic Markov Processes
|
cs.IT math.IT
|
Constant weight codes can arise from an input process sampled from a periodic
Markov chain. A previous result showed that, in general, polarization does not
occur for input-output processes with an underlying periodic Markov chain. In
this work, we show that if we fix the initial state of an underlying periodic
Markov chain, polarization does occur. Fixing the initial state is aligned with
ensuring a constant weight code.
|
2501.12380
|
MMVU: Measuring Expert-Level Multi-Discipline Video Understanding
|
cs.CV cs.AI cs.CL
|
We introduce MMVU, a comprehensive expert-level, multi-discipline benchmark
for evaluating foundation models in video understanding. MMVU includes 3,000
expert-annotated questions spanning 27 subjects across four core disciplines:
Science, Healthcare, Humanities & Social Sciences, and Engineering. Compared to
prior benchmarks, MMVU features three key advancements. First, it challenges
models to apply domain-specific knowledge and perform expert-level reasoning to
analyze specialized-domain videos, moving beyond the basic visual perception
typically assessed in current video benchmarks. Second, each example is
annotated by human experts from scratch. We implement strict data quality
controls to ensure the high quality of the dataset. Finally, each example is
enriched with expert-annotated reasoning rationales and relevant domain
knowledge, facilitating in-depth analysis. We conduct an extensive evaluation
of 32 frontier multimodal foundation models on MMVU. The latest
System-2-capable models, o1 and Gemini 2.0 Flash Thinking, achieve the highest
performance among the tested models. However, they still fall short of matching
human expertise. Through in-depth error analyses and case studies, we offer
actionable insights for future advancements in expert-level,
knowledge-intensive video understanding for specialized domains.
|
2501.12381
|
Parallel Sequence Modeling via Generalized Spatial Propagation Network
|
cs.CV cs.LG
|
We present the Generalized Spatial Propagation Network (GSPN), a new
attention mechanism optimized for vision tasks that inherently captures 2D
spatial structures. Existing attention models, including transformers, linear
attention, and state-space models like Mamba, process multi-dimensional data as
1D sequences, compromising spatial coherence and efficiency. GSPN overcomes
these limitations by directly operating on spatially coherent image data and
forming dense pairwise connections through a line-scan approach. Central to
GSPN is the Stability-Context Condition, which ensures stable, context-aware
propagation across 2D sequences and reduces the effective sequence length to
$\sqrt{N}$ for a square map with N elements, significantly enhancing
computational efficiency. With learnable, input-dependent weights and no
reliance on positional embeddings, GSPN achieves superior spatial fidelity and
state-of-the-art performance in vision tasks, including ImageNet
classification, class-guided image generation, and text-to-image generation.
Notably, GSPN accelerates SD-XL with softmax-attention by over $84\times$ when
generating 16K images.
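A line-scan propagation of this flavor can be sketched as a row-by-row recurrence; normalizing each pixel's weights over its three upper neighbors to sum to one is a stand-in for the Stability-Context Condition, keeping the recurrence bounded. Details of the real GSPN layer will differ:

```python
import numpy as np

def line_scan(x, w):
    """x: (H, W) input; w: (H, W, 3) positive weights over upper neighbors."""
    h = np.zeros_like(x, dtype=float)
    w = w / w.sum(axis=-1, keepdims=True)      # stabilize: weights sum to 1
    h[0] = x[0]
    for i in range(1, x.shape[0]):
        prev = np.pad(h[i - 1], 1)             # zero-pad left/right ends
        # each pixel sees its upper-left, upper, and upper-right neighbors
        neigh = np.stack([prev[:-2], prev[1:-1], prev[2:]], axis=-1)
        h[i] = (w[i] * neigh).sum(axis=-1) + x[i]   # propagate, inject input
    return h
```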
|
2501.12382
|
DiffDoctor: Diagnosing Image Diffusion Models Before Treating
|
cs.CV
|
In spite of the recent progress, image diffusion models still produce
artifacts. A common solution is to refine an established model with a quality
assessment system, which generally rates an image in its entirety. In this
work, we believe problem-solving starts with identification, which requires
that the model be aware not just of the presence of defects in an image, but
of their specific locations. Motivated by this, we propose
DiffDoctor, a two-stage pipeline to assist image diffusion models in generating
fewer artifacts. Concretely, the first stage targets developing a robust
artifact detector, for which we collect a dataset of over 1M flawed synthesized
images and set up an efficient human-in-the-loop annotation process,
incorporating a carefully designed class-balance strategy. The learned artifact
detector is then involved in the second stage to tune the diffusion model
by assigning a per-pixel confidence map to each synthesis. Extensive
experiments on text-to-image diffusion models demonstrate the effectiveness of
our artifact detector as well as the soundness of our diagnose-then-treat
design.
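Mechanically, the second stage amounts to weighting a per-pixel training loss by the detector's confidence map. A minimal sketch; the weighting direction and loss form here are assumptions, not the paper's exact objective:

```python
import numpy as np

def artifact_weighted_loss(pred, target, artifact_map):
    """artifact_map: per-pixel detector confidence in [0, 1].

    Pixels flagged with high artifact confidence contribute more to the
    loss, steering the model away from reproducing them.
    """
    per_pixel = (pred - target) ** 2          # plain per-pixel error
    return float((artifact_map * per_pixel).mean())
```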
|
2501.12384
|
CCESAR: Coastline Classification-Extraction From SAR Images Using
CNN-U-Net Combination
|
cs.CV cs.LG eess.IV
|
In this article, we improve the deep learning solution for coastline
extraction from Synthetic Aperture Radar (SAR) images by proposing a two-stage
model involving image classification followed by segmentation. We hypothesize
that a single segmentation model usually used for coastline detection is
insufficient to characterize different coastline types. We demonstrate that the
need for a two-stage workflow prevails through different compression levels of
these images. Our results from experiments using a combination of CNN and U-Net
models on Sentinel-1 images show that the two-stage workflow, coastline
classification-extraction from SAR images (CCESAR) outperforms a single U-Net
segmentation model.
|
2501.12385
|
Audio Texture Manipulation by Exemplar-Based Analogy
|
cs.SD cs.LG eess.AS
|
Audio texture manipulation involves modifying the perceptual characteristics
of a sound to achieve specific transformations, such as adding, removing, or
replacing auditory elements. In this paper, we propose an exemplar-based
analogy model for audio texture manipulation. Instead of conditioning on
text-based instructions, our method uses paired speech examples, where one clip
represents the original sound and another illustrates the desired
transformation. The model learns to apply the same transformation to new input,
allowing for the manipulation of sound textures. We construct a quadruplet
dataset representing various editing tasks, and train a latent diffusion model
in a self-supervised manner. We show through quantitative evaluations and
perceptual studies that our model outperforms text-conditioned baselines and
generalizes to real-world, out-of-distribution, and non-speech scenarios.
Project page: https://berkeley-speech-group.github.io/audio-texture-analogy/
|
2501.12386
|
InternVideo2.5: Empowering Video MLLMs with Long and Rich Context
Modeling
|
cs.CV
|
This paper aims to improve the performance of video multimodal large language
models (MLLM) via long and rich context (LRC) modeling. As a result, we develop
a new version of InternVideo2.5 with a focus on enhancing the original MLLMs'
ability to perceive fine-grained details and capture long-form temporal
structure in videos. Specifically, our approach incorporates dense vision task
annotations into MLLMs using direct preference optimization and develops
compact spatiotemporal representations through adaptive hierarchical token
compression. Experimental results demonstrate this unique design of LRC greatly
improves the results of video MLLM in mainstream video understanding benchmarks
(short & long), enabling the MLLM to memorize significantly longer video inputs
(at least 6x longer than the original), and master specialized vision
capabilities like object tracking and segmentation. Our work highlights the
importance of multimodal context richness (length and fineness) in empowering
MLLM's innate abilities (focus and memory), providing new insights for future
research on video MLLM. Code and models are available at
https://github.com/OpenGVLab/InternVideo/tree/main/InternVideo2.5
|
2501.12387
|
Continuous 3D Perception Model with Persistent State
|
cs.CV
|
We present a unified framework capable of solving a broad range of 3D tasks.
Our approach features a stateful recurrent model that continuously updates its
state representation with each new observation. Given a stream of images, this
evolving state can be used to generate metric-scale pointmaps (per-pixel 3D
points) for each new input in an online fashion. These pointmaps reside within
a common coordinate system, and can be accumulated into a coherent, dense scene
reconstruction that updates as new images arrive. Our model, called CUT3R
(Continuous Updating Transformer for 3D Reconstruction), captures rich priors
of real-world scenes: not only can it predict accurate pointmaps from image
observations, but it can also infer unseen regions of the scene by probing at
virtual, unobserved views. Our method is simple yet highly flexible, naturally
accepting varying lengths of images that may be either video streams or
unordered photo collections, containing both static and dynamic content. We
evaluate our method on various 3D/4D tasks and demonstrate competitive or
state-of-the-art performance in each. Project Page: https://cut3r.github.io/
|
2501.12389
|
Taming Teacher Forcing for Masked Autoregressive Video Generation
|
cs.CV
|
We introduce MAGI, a hybrid video generation framework that combines masked
modeling for intra-frame generation with causal modeling for next-frame
generation. Our key innovation, Complete Teacher Forcing (CTF), conditions
masked frames on complete observation frames rather than masked ones (namely
Masked Teacher Forcing, MTF), enabling a smooth transition from token-level
(patch-level) to frame-level autoregressive generation. CTF significantly
outperforms MTF, achieving a +23% improvement in FVD scores on first-frame
conditioned video prediction. To address issues like exposure bias, we employ
targeted training strategies, setting a new benchmark in autoregressive video
generation. Experiments show that MAGI can generate long, coherent video
sequences exceeding 100 frames, even when trained on as few as 16 frames,
highlighting its potential for scalable, high-quality video generation.
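The CTF/MTF distinction (conditioning next-frame generation on complete versus masked previous frames) can be sketched with a deliberately simplified function; this is not MAGI's actual architecture, and the token layout below is a hypothetical stand-in:

```python
import numpy as np

def conditioning_frames(frames, mask, complete=True):
    """Build the per-frame conditioning context for next-frame prediction.

    frames: (T, N) integer token ids, one row per frame.
    mask:   (T, N) boolean, True where tokens are masked out.
    complete=True  -> CTF-style: condition on the full previous frame.
    complete=False -> MTF-style: condition on the masked previous frame.
    """
    prev = frames[:-1].copy()
    if not complete:
        prev = np.where(mask[:-1], 0, prev)  # zero out masked tokens
    return prev  # context used when generating frames[1:]

rng = np.random.default_rng(0)
frames = rng.integers(1, 10, size=(3, 4))
mask = np.zeros((3, 4), dtype=bool)
mask[0, :2] = True                     # first two tokens of frame 0 masked
ctf = conditioning_frames(frames, mask, complete=True)
mtf = conditioning_frames(frames, mask, complete=False)
```

Under CTF the context matches the clean previous frames exactly, while under MTF the masked positions are missing from the context, which is the gap the abstract credits for the FVD improvement.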
|
2501.12390
|
GPS as a Control Signal for Image Generation
|
cs.CV
|
We show that the GPS tags contained in photo metadata provide a useful
control signal for image generation. We train GPS-to-image models and use them
for tasks that require a fine-grained understanding of how images vary within a
city. In particular, we train a diffusion model to generate images conditioned
on both GPS and text. The learned model generates images that capture the
distinctive appearance of different neighborhoods, parks, and landmarks. We
also extract 3D models from 2D GPS-to-image models through score distillation
sampling, using GPS conditioning to constrain the appearance of the
reconstruction from each viewpoint. Our evaluations suggest that our
GPS-conditioned models successfully learn to generate images that vary based on
location, and that GPS conditioning improves estimated 3D structure.
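Feeding raw coordinates to a conditional model typically requires some encoding; a common generic choice is a sinusoidal embedding of the (lat, lon) pair. The function below is a hypothetical illustration of that idea, not the paper's conditioning scheme; the normalization and frequencies are illustrative choices:

```python
import numpy as np

def gps_embedding(lat, lon, dim=8):
    """Sinusoidal embedding of a (lat, lon) pair for use as a
    conditioning vector. `dim` must be divisible by 4."""
    coords = np.array([lat / 90.0, lon / 180.0])   # scale to [-1, 1]
    freqs = 2.0 ** np.arange(dim // 4)             # geometric frequencies
    angles = np.pi * np.outer(coords, freqs).ravel()
    return np.concatenate([np.sin(angles), np.cos(angles)])

emb = gps_embedding(37.77, -122.42)                # e.g. San Francisco
```

Nearby locations map to nearby embeddings at low frequencies while the higher frequencies distinguish fine-grained position, which is what a model needs to vary its output within a single city.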
|
2501.12391
|
Physics of Skill Learning
|
cs.LG cs.AI physics.data-an stat.ML
|
We aim to understand physics of skill learning, i.e., how skills are learned
in neural networks during training. We start by observing the Domino effect,
i.e., skills are learned sequentially, and notably, some skills kick off
learning right after others complete learning, similar to the sequential fall
of domino cards. To understand the Domino effect and relevant behaviors of
skill learning, we take physicists' approach of abstraction and simplification.
We propose three models with varying complexities -- the Geometry model, the
Resource model, and the Domino model, trading between reality and simplicity.
The Domino effect can be reproduced in the Geometry model, whose resource
interpretation inspires the Resource model, which can be further simplified to
the Domino model. These models present different levels of abstraction and
simplification; each is useful to study some aspects of skill learning. The
Geometry model provides interesting insights into neural scaling laws and
optimizers; the Resource model sheds light on the learning dynamics of
compositional tasks; the Domino model reveals the benefits of modularity. These
models are not only conceptually interesting -- e.g., we show how Chinchilla
scaling laws can emerge from the Geometry model, but also are useful in
practice by inspiring algorithmic development -- e.g., we show how simple
algorithmic changes, motivated by these toy models, can speed up the training
of deep learning models.
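The qualitative picture of the Domino effect (each skill beginning to learn only once the previous one completes) can be captured in a one-line toy schedule; this is a sketch of the sequential-learning intuition, not the paper's actual model:

```python
def domino_schedule(n_skills, learn_time):
    """Toy 'Domino' picture: skill k starts only when skill k-1
    finishes, so completion times form an arithmetic sequence."""
    finish, t = [], 0
    for _ in range(n_skills):
        t += learn_time          # each skill takes learn_time to learn
        finish.append(t)
    return finish
```

For example, `domino_schedule(4, 5)` yields completion times 5, 10, 15, 20: total training time grows linearly with the number of skills, which is the kind of scaling behavior the toy models are used to reason about.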
|
2501.12392
|
Learning segmentation from point trajectories
|
cs.CV cs.AI cs.LG
|
We consider the problem of segmenting objects in videos based on their motion
and no other forms of supervision. Prior work has often approached this problem
by using the principle of common fate, namely the fact that the motion of
points that belong to the same object is strongly correlated. However, most
authors have only considered instantaneous motion from optical flow. In this
work, we present a way to train a segmentation network using long-term point
trajectories as a supervisory signal to complement optical flow. The key
difficulty is that long-term motion, unlike instantaneous motion, is difficult
to model -- any parametric approximation is unlikely to capture complex motion
patterns over long periods of time. We instead draw inspiration from subspace
clustering approaches, proposing a loss function that seeks to group the
trajectories into low-rank matrices where the motion of object points can be
approximately explained as a linear combination of other point tracks. Our
method outperforms the prior art on motion-based segmentation, which shows the
utility of long-term motion and the effectiveness of our formulation.
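The idea of explaining each track as a linear combination of other tracks is the self-expressiveness principle from subspace clustering; a minimal sketch of such a residual (an illustration of the principle, not the authors' loss or implementation) follows:

```python
import numpy as np

def self_expressive_residual(T, C):
    """||T - C @ T||_F^2 with diag(C) zeroed: how well each point
    track is explained as a linear combination of the OTHER tracks."""
    C = C - np.diag(np.diag(C))   # a track may not explain itself
    return float(np.sum((T - C @ T) ** 2))

# Toy example: three tracks of one rigid object lie in a rank-2
# subspace, so each track is a linear combination of the other two.
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 8))               # 2 motion modes, 4 frames (x, y)
T = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]]) @ basis            # (3 tracks, 8 coordinates)
C = np.array([[0.0, -1.0, 2.0],               # t0 = 2*t2 - t1
              [-1.0, 0.0, 2.0],               # t1 = 2*t2 - t0
              [0.5, 0.5, 0.0]])               # t2 = (t0 + t1) / 2
res = self_expressive_residual(T, C)          # ~0 for a correct grouping
```

Tracks from different objects generally cannot express one another this way, so a low residual within a group (and a rank penalty on the grouped matrix) serves as the motion-only supervisory signal.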
|
2501.12393
|
Towards Affordance-Aware Articulation Synthesis for Rigged Objects
|
cs.CV
|
Rigged objects are commonly used in artist pipelines, as they can flexibly
adapt to different scenes and postures. However, articulating the rigs into
realistic affordance-aware postures (e.g., following the context, respecting
the physics and the personalities of the object) remains time-consuming and
heavily relies on human labor from experienced artists. In this paper, we
tackle the novel problem and design A3Syn. With a given context, such as the
environment mesh and a text prompt of the desired posture, A3Syn synthesizes
articulation parameters for arbitrary and open-domain rigged objects obtained
from the Internet. The task is incredibly challenging due to the lack of
training data, and we do not make any topological assumptions about the
open-domain rigs. We propose using a 2D inpainting diffusion model and several
control techniques to synthesize in-context affordance information. Then, we
develop an efficient bone correspondence alignment using a combination of
differentiable rendering and semantic correspondence. A3Syn has stable
convergence, completes in minutes, and synthesizes plausible affordance on
different combinations of in-the-wild object rigs and scenes.
|
2501.12394
|
The ELEVATE-AI LLMs Framework: An Evaluation Framework for Use of Large
Language Models in HEOR: an ISPOR Working Group Report
|
cs.CY cs.LG
|
Introduction. Generative Artificial Intelligence, particularly large language
models (LLMs), offers transformative potential for Health Economics and
Outcomes Research (HEOR). However, evaluating the quality, transparency, and
rigor of LLM-assisted research lacks standardized guidance. This article
introduces the ELEVATE AI LLMs framework and checklist, designed to support
researchers and reviewers in assessing LLM use in HEOR.
Methods. The ELEVATE AI LLMs framework was developed through a targeted
review of existing guidelines and evaluation frameworks. The framework
comprises ten evaluation domains, including model characteristics, accuracy,
comprehensiveness, and fairness. The accompanying checklist operationalizes the
framework. To validate the framework, we applied it to two published studies,
demonstrating its usability across different HEOR tasks.
Results. The ELEVATE AI LLMs framework provides a comprehensive structure for
evaluating LLM-assisted research, while the checklist facilitates practical
application. Validation of the framework and checklist on studies of systematic
literature reviews and health economic modeling highlighted their ability to
identify strengths and gaps in reporting.
Limitations. While the ELEVATE AI LLMs framework provides robust guidance,
its broader generalizability and applicability to diverse HEOR tasks require
further empirical testing. Additionally, several metrics adapted from computer
science need further validation in HEOR contexts.
Conclusion. The ELEVATE AI LLMs framework and checklist fill a critical gap
in HEOR by offering structured guidance for evaluating LLM-assisted research.
By promoting transparency, accuracy, and reproducibility, they aim to
standardize and improve the integration of LLMs into HEOR, ensuring their
outputs meet the field's rigorous standards.
|
2501.12399
|
FinSphere: A Conversational Stock Analysis Agent Equipped with
Quantitative Tools based on Real-Time Database
|
cs.AI cs.CL cs.IR q-fin.CP
|
Current financial Large Language Models (LLMs) struggle with two critical
limitations: a lack of depth in stock analysis, which impedes their ability to
generate professional-grade insights, and the absence of objective evaluation
metrics to assess the quality of stock analysis reports. To address these
challenges, this paper introduces FinSphere, a conversational stock analysis
agent, along with three major contributions: (1) Stocksis, a dataset curated by
industry experts to enhance LLMs' stock analysis capabilities, (2) AnalyScore,
a systematic evaluation framework for assessing stock analysis quality, and (3)
FinSphere, an AI agent that can generate high-quality stock analysis reports in
response to user queries. Experiments demonstrate that FinSphere achieves
superior performance compared to both general and domain-specific LLMs, as well
as existing agent-based systems, even when they are enhanced with real-time
data access and few-shot guidance. The integrated framework, which combines
real-time data feeds, quantitative tools, and an instruction-tuned LLM, yields
substantial improvements in both analytical quality and practical applicability
for real-world stock analysis.
|