| id | title | categories | abstract |
|---|---|---|---|
| 2501.13236 | Time-Constrained Model Predictive Control for Autonomous Satellite Rendezvous, Proximity Operations, and Docking | eess.SY cs.SY | This paper presents a time-constrained model predictive control strategy for the six degree-of-freedom autonomous rendezvous, proximity operations, and docking problem between a controllable "deputy" satellite and an uncontrolled "chief" satellite. The objective is to achieve a docking configuration defined by both the translational and attitudinal states of the deputy relative to the chief, whose dynamics are governed by the Clohessy-Wiltshire equations and Euler's second law of motion, respectively. The proposed control strategy explicitly addresses the computational time constraints common to state-of-the-art space vehicles, and the time-constrained model predictive controller is implemented on a space-grade processor. Although suboptimal with regard to energy consumption compared to conventional optimal RPO trajectories, numerical simulations empirically demonstrate that the deputy spacecraft still achieves a successful docking configuration while subject to computational time constraints. |
| 2501.13241 | State Combinatorial Generalization In Decision Making With Conditional Diffusion Models | cs.LG | Many real-world decision-making problems are combinatorial in nature, where states (e.g., the surrounding traffic of a self-driving car) can be seen as a combination of basic elements (e.g., pedestrians, trees, and other cars). Due to combinatorial complexity, observing all combinations of basic elements in the training set is infeasible, which leads to an essential yet understudied problem of zero-shot generalization to states that are unseen combinations of previously seen elements. In this work, we first formalize this problem and then demonstrate how existing value-based reinforcement learning (RL) algorithms struggle due to unreliable value predictions in unseen states. We argue that this problem cannot be addressed with exploration alone, but requires more expressive and generalizable models. We demonstrate that behavior cloning with a conditioned diffusion model trained on expert trajectories generalizes better to states formed by new combinations of seen elements than traditional RL methods. Through experiments in maze, driving, and multiagent environments, we show that conditioned diffusion models outperform traditional RL techniques and highlight the broad applicability of our problem formulation. |
| 2501.13242 | Distributed Multiple Testing with False Discovery Rate Control in the Presence of Byzantines | eess.SP cs.IT math.IT stat.ME | This work studies distributed multiple testing with false discovery rate (FDR) control in the presence of Byzantine attacks, where an adversary captures a fraction of the nodes and corrupts their reported p-values. We focus on two baseline attack models: an oracle model with full knowledge of which hypotheses are true nulls, and a practical attack model that applies the Benjamini-Hochberg (BH) procedure locally to classify which p-values follow the true null hypotheses. We provide a thorough characterization of how both attack models affect the global FDR, which in turn motivates counter-attack strategies and stronger attack models. Our extensive simulation studies confirm the theoretical results, highlight key design trade-offs under attacks and countermeasures, and provide insights into more sophisticated attacks. |
| 2501.13247 | Multimodal AI on Wound Images and Clinical Notes for Home Patient Referral | cs.LG cs.CV eess.IV | Chronic wounds affect 8.5 million Americans, particularly the elderly and patients with diabetes. These wounds can take up to nine months to heal, making regular care essential to ensure healing and prevent severe outcomes like limb amputations. Many patients receive care at home from visiting nurses with varying levels of wound expertise, leading to inconsistent care. Problematic, non-healing wounds should be referred to wound specialists, but referral decisions in non-clinical settings are often erroneous, delayed, or unnecessary. This paper introduces the Deep Multimodal Wound Assessment Tool (DM-WAT), a machine learning framework designed to assist visiting nurses in deciding whether to refer chronic wound patients. DM-WAT analyzes smartphone-captured wound images and clinical notes from Electronic Health Records (EHRs). It uses DeiT-Base-Distilled, a Vision Transformer (ViT), to extract visual features from images and DeBERTa-base to extract text features from clinical notes, combining the two using an intermediate fusion approach. To address the challenges posed by a small and imbalanced dataset, it integrates image and text augmentation with transfer learning to achieve high performance. In evaluations, DM-WAT achieved an accuracy of 77% (std 3%) and an F1 score of 70% (std 2%), outperforming prior approaches. The Score-CAM and Captum interpretation algorithms provide insights into the specific parts of the image and text inputs that influence recommendations, enhancing interpretability and trust. |
| 2501.13252 | Exploring the Technology Landscape through Topic Modeling, Expert Involvement, and Reinforcement Learning | cs.LG cs.CR quant-ph | In today's rapidly evolving technological landscape, organizations face the challenge of integrating external insights into their decision-making processes to stay competitive. To address this issue, this study proposes a method that combines topic modeling, expert knowledge inputs, and reinforcement learning (RL) to enhance the detection of technological changes. The method has four main steps: (1) Build a relevant topic model, starting with textual data such as documents and reports to find key themes. (2) Create aspect-based topic models: experts use curated keywords to build models that showcase key domain-specific aspects. (3) Iterative analysis and RL-driven refinement: we examine metrics such as topic magnitude, similarity, and entropy shifts, as well as how models change over time, and optimize topic selection with RL using a reward function that balances the diversity and similarity of the topics. (4) Synthesis and operational integration: each iteration provides insights; in the final phase, the experts check these insights and reach new conclusions designed for use in the firm's operational processes. The approach is tested by forecasting trends in quantum communication. Results demonstrate the method's effectiveness in identifying, ranking, and tracking trends that align with expert input, providing a robust tool for exploring evolving technological landscapes. This research offers a scalable and adaptive solution for organizations to make informed strategic decisions in dynamic environments. |
| 2501.13255 | Stochastic Deep Learning Surrogate Models for Uncertainty Propagation in Microstructure-Properties of Ceramic Aerogels | cs.CE | Deep learning surrogate models have become pivotal in enabling model-driven materials discovery to achieve exceptional properties. However, ensuring the accuracy and reliability of predictions from these models, trained on limited and sparse material datasets, remains a significant challenge. This study introduces an integrated deep learning framework for predicting the synthesis, microstructure, and mechanical properties of ceramic aerogels, leveraging physics-based models such as Lattice Boltzmann simulations for microstructure formation and stochastic finite element methods for mechanical property calculations. To address the computational demands of the repeated physics-based simulations required for experimental calibration and material design, a linked surrogate model is developed, leveraging Convolutional Neural Networks (CNNs) for stochastic microstructure generation and microstructure-to-mechanical property mapping. To overcome challenges associated with limited training datasets from expensive physical modeling, CNN training is formulated within a Bayesian inference framework, enabling robust uncertainty quantification in predictions. Numerical results highlight the strengths and limitations of the linked surrogate framework, demonstrating its effectiveness in predicting properties of aerogels with pore sizes and morphologies similar to the training data (in-distribution) and its ability to interpolate to new microstructural features between training data (out-of-distribution). |
| 2501.13261 | Exploring GPT's Ability as a Judge in Music Understanding | cs.IR cs.SD eess.AS | Recent progress in text-based Large Language Models (LLMs) and their extended ability to process multi-modal sensory data have led us to explore their applicability to music information retrieval (MIR) challenges. In this paper, we use a systematic prompt engineering approach for LLMs to solve MIR problems. We convert the music data to symbolic inputs and evaluate LLMs' ability to detect annotation errors in three key MIR tasks: beat tracking, chord extraction, and key estimation. A concept augmentation method is proposed to evaluate LLMs' music reasoning consistency with the music concepts provided in the prompts. Our experiments tested the MIR capabilities of Generative Pre-trained Transformers (GPT). Results show that GPT has an error detection accuracy of 65.20%, 64.80%, and 59.72% on the beat tracking, chord extraction, and key estimation tasks, respectively, all exceeding the random baseline. Moreover, we observe a positive correlation between GPT's error-finding accuracy and the amount of concept information provided. The current findings based on symbolic music input provide a solid ground for future LLM-based MIR research. |
| 2501.13264 | RAG-Reward: Optimizing RAG with Reward Modeling and RLHF | cs.CL | Retrieval-augmented generation (RAG) enhances Large Language Models (LLMs) with relevant and up-to-date knowledge, improving their ability to answer knowledge-intensive questions. It has been shown to enhance both generation quality and trustworthiness. While numerous works have focused on improving retrieval, generation, and evaluation, the role of reward models in reinforcement learning for optimizing RAG remains underexplored. In this paper, we introduce **RAG-Reward**, a framework designed to develop reward models that enable *hallucination-free, comprehensive, reliable, and efficient RAG*. We define four key metrics to assess generation quality and develop an automated benchmarking pipeline to evaluate the outputs of multiple LLMs across a variety of RAG scenarios. Using **RAG-Reward**, we train reward models and apply reinforcement learning with human feedback (RLHF) to improve LLMs' effectiveness in RAG. Experimental results demonstrate that our reward model achieves state-of-the-art performance in automatic benchmarking and aligns closely with human evaluations. Furthermore, the improved generation quality of the trained policy model highlights the feasibility and efficiency of using RLHF to enhance RAG outputs. |
| 2501.13268 | Threat-based Security Controls to Protect Industrial Control Systems | cs.CR cs.SY eess.SY | This paper analyzes the reported threats to Industrial Control Systems (ICS)/Operational Technology (OT) and identifies common tactics, techniques, and procedures (TTP) used by threat actors. The paper then uses the MITRE ATT&CK framework to map the common TTPs and provide an understanding of the security controls needed to defend against the reported ICS threats. The paper also includes a review of ICS testbeds and ideas for future research using the identified controls. |
| 2501.13271 | Hybrid Two-Stage Reconstruction of Multiscale Subsurface Flow with Physics-informed Residual Connected Neural Operator | cs.LG | Novel neural networks show great potential in solving partial differential equations. For single-phase flow problems in subsurface porous media with high-contrast coefficients, the key is to develop neural operators with accurate reconstruction capability and strict adherence to physical laws. In this study, we propose a hybrid two-stage framework that uses multiscale basis functions and physics-guided deep learning to solve the Darcy flow problem in high-contrast fractured porous media. In the first stage, a data-driven model is used to reconstruct the multiscale basis functions based on the permeability field, achieving effective dimensionality reduction while preserving the necessary multiscale features. In the second stage, a physics-informed neural network, together with a Transformer-based global information extractor, is used to reconstruct the pressure field by integrating the physical constraints derived from the Darcy equation, ensuring consistency with the physical laws of the real world. The model was evaluated on datasets with different combinations of permeability and basis functions and performed well in terms of reconstruction accuracy. Specifically, the framework achieves $R^2$ values above 0.9 for both basis function fitting and pressure reconstruction, and the residual indicator is on the order of $1\times 10^{-4}$. These results validate the ability of the proposed framework to achieve accurate reconstruction while maintaining physical consistency. |
| 2501.13272 | PCSI -- The Platform for Content-Structure Inference | cs.IR cs.CY | The Platform for Content-Structure Inference (PCSI, pronounced "pixie") facilitates the sharing of information about the process of converting Web resources into structured content objects that conform to a predefined format. PCSI records encode methods for deriving structured content from classes of URLs, and report the results of applying particular methods to particular URLs. The methods are scripts written in Hex, a variant of Awk with facilities for traversing the HTML DOM. |
| 2501.13273 | Enhancing Robust Fairness via Confusional Spectral Regularization | cs.LG | Recent research has highlighted a critical issue known as "robust fairness", where robust accuracy varies significantly across different classes, undermining the reliability of deep neural networks (DNNs). A common approach to address this has been to dynamically reweight classes during training, giving more weight to those with lower empirical robust performance. However, we find there is a divergence in class-wise robust performance between the training and test sets, which limits the effectiveness of these explicit reweighting methods and indicates the need for a principled alternative. In this work, we derive a robust generalization bound for the worst-class robust error within the PAC-Bayesian framework, accounting for unknown data distributions. Our analysis shows that the worst-class robust error is influenced by two main factors: the spectral norm of the empirical robust confusion matrix and the information embedded in the model and training set. While the latter has been extensively studied, we propose a novel regularization technique targeting the spectral norm of the robust confusion matrix to improve worst-class robust accuracy and enhance robust fairness. We validate our approach through comprehensive experiments on various datasets and models, demonstrating its effectiveness in enhancing robust fairness. |
| 2501.13274 | T-Graphormer: Using Transformers for Spatiotemporal Forecasting | cs.LG | Multivariate time series data is ubiquitous, and forecasting it has important applications in many domains. However, its complex spatial dependencies and non-linear temporal dynamics can be challenging for traditional techniques. Existing methods tackle these challenges by learning the two dimensions separately. Here, we introduce Temporal Graphormer (T-Graphormer), a Transformer-based approach capable of modelling spatiotemporal correlations simultaneously. By incorporating temporal dynamics in the Graphormer architecture, each node attends to all other nodes within the graph sequence. Our design enables the model to capture rich spatiotemporal patterns with minimal reliance on predefined spacetime inductive biases. We validate the effectiveness of T-Graphormer on real-world traffic prediction benchmark datasets. Compared to state-of-the-art methods, T-Graphormer reduces root mean squared error (RMSE) and mean absolute percentage error (MAPE) by up to 10%. |
| 2501.13277 | MEDFORM: A Foundation Model for Contrastive Learning of CT Imaging and Clinical Numeric Data in Multi-Cancer Analysis | cs.CV | Computed tomography (CT) and clinical numeric data are essential modalities for cancer evaluation, but building large-scale multimodal training datasets for developing medical foundation models remains challenging due to the structural complexity of multi-slice CT data and the high cost of expert annotation. In this study, we propose MEDFORM, a multimodal pre-training strategy that guides CT image representation learning using complementary information from clinical data for medical foundation model development. MEDFORM efficiently processes CT slices through multiple instance learning (MIL) and adopts a dual pre-training strategy: first pretraining the CT slice feature extractor using SimCLR-based self-supervised learning, then aligning the CT and clinical modalities through cross-modal contrastive learning. Our model was pre-trained on three different cancer types: lung cancer (141,171 slices), breast cancer (8,100 slices), and colorectal cancer (10,393 slices). The experimental results demonstrate that this dual pre-training strategy improves cancer classification performance and maintains robust performance in few-shot learning scenarios. Code is available at https://github.com/DigitalHealthcareLab/25MultiModalFoundationModel.git |
| 2501.13278 | On Subset Retrieval and Group Testing Problems with Differential Privacy Constraints | cs.IT math.IT | This paper focuses on the design and analysis of privacy-preserving techniques for group testing and infection status retrieval. Our work is motivated by the need to provide accurate information on the status of disease spread among a group of individuals while protecting the privacy of the infection status of any single individual involved. The paper is motivated by practical scenarios, such as controlling the spread of infectious diseases, where individuals might be reluctant to participate in testing if their outcomes are not kept confidential. The paper makes the following contributions. First, we present a differential privacy framework for the subset retrieval problem, which focuses on sharing the infection status of individuals with administrators and decision-makers. We characterize the trade-off between the accuracy of subset retrieval and the degree of privacy guaranteed to the individuals. In particular, we establish tight lower and upper bounds on the achievable level of accuracy subject to the differential privacy constraints. We then formulate the differential privacy framework for the noisy group testing problem, in which noise is added either before or after the pooling process. We establish a reduction between the private subset retrieval and noisy group testing problems and show that the converse and achievability schemes for subset retrieval carry over to differentially private group testing. |
| 2501.13282 | Experience with GitHub Copilot for Developer Productivity at Zoominfo | cs.SE cs.AI | This paper presents a comprehensive evaluation of GitHub Copilot's deployment and impact on developer productivity at Zoominfo, a leading Go-To-Market (GTM) Intelligence Platform. We describe our systematic four-phase approach to evaluating and deploying GitHub Copilot across our engineering organization, involving over 400 developers. Our analysis combines quantitative metrics, focusing on the acceptance rates of suggestions given by GitHub Copilot, with qualitative feedback gathered from developers through developer satisfaction surveys. The results show an average acceptance rate of 33% for suggestions and 20% for lines of code, with high developer satisfaction scores of 72%. We also discuss language-specific performance variations, limitations, and lessons learned from this medium-scale enterprise deployment. Our findings contribute to the growing body of knowledge about AI-assisted software development in enterprise settings. |
| 2501.13284 | Toyteller: AI-powered Visual Storytelling Through Toy-Playing with Character Symbols | cs.HC cs.AI cs.CL | We introduce Toyteller, an AI-powered storytelling system where users generate a mix of story text and visuals by directly manipulating character symbols as if playing with toys. Anthropomorphized symbol motions can convey rich and nuanced social interactions; Toyteller leverages these motions (1) to let users steer story text generation and (2) as a visual output format that accompanies story text. We enabled motion-steered text generation and text-steered motion generation by mapping motions and text onto a shared semantic space so that large language models and motion generation models can use it as a translational layer. Technical evaluations showed that Toyteller outperforms a competitive baseline, GPT-4o. Our user study identified that toy-playing helps express intentions that are difficult to verbalize. However, motion alone could not express all user intentions, suggesting that it be combined with other modalities such as language. We discuss the design space of toy-playing interactions and implications for technical HCI research on human-AI interaction. |
| 2501.13288 | Task-Oriented Automatic Fact-Checking with Frame-Semantics | cs.CL | We propose a novel paradigm for automatic fact-checking that leverages frame semantics to enhance the structured understanding of claims and guide the process of fact-checking them. To support this, we introduce a pilot dataset of real-world claims extracted from PolitiFact, specifically annotated for large-scale structured data. This dataset underpins two case studies: the first investigates voting-related claims using the Vote semantic frame, while the second explores various semantic frames based on data sources from the Organisation for Economic Co-operation and Development (OECD). Our findings demonstrate the effectiveness of frame semantics in improving evidence retrieval and explainability for fact-checking. Finally, we conducted a survey of frames evoked in fact-checked claims, identifying high-impact frames to guide future work in this direction. |
| 2501.13295 | Parallel Belief Contraction via Order Aggregation | cs.AI | The standard "serial" (aka "singleton") model of belief contraction models the manner in which an agent's corpus of beliefs responds to the removal of a single item of information. One salient extension of this model introduces the idea of "parallel" (aka "package" or "multiple") change, in which an entire set of items of information is simultaneously removed. Existing research on the latter has largely focussed on single-step parallel contraction: understanding the behaviour of beliefs after a single parallel contraction. It has also focussed on generalisations to the parallel case of serial contraction operations whose characteristic properties are extremely weak. Here we consider how to extend serial contraction operations that obey stronger properties. Potentially more importantly, we also consider the iterated case: the behaviour of beliefs after a sequence of parallel contractions. We propose a general method for extending serial iterated belief change operators to handle parallel change, based on an n-ary generalisation of Booth & Chandler's TeamQueue binary order aggregators. |
| 2501.13296 | Exploring Variance Reduction in Importance Sampling for Efficient DNN Training | cs.LG stat.ML | Importance sampling is widely used to improve the efficiency of deep neural network (DNN) training by reducing the variance of gradient estimators. However, efficiently assessing the variance reduction relative to uniform sampling remains challenging due to computational overhead. This paper proposes a method for estimating variance reduction during DNN training using only minibatches sampled under importance sampling. By leveraging the proposed method, the paper also proposes an effective minibatch size to enable automatic learning rate adjustment. An absolute metric to quantify the efficiency of importance sampling is also introduced, as well as an algorithm for real-time estimation of importance scores based on moving gradient statistics. Theoretical analysis and experiments on benchmark datasets demonstrate that the proposed algorithm consistently reduces variance, improves training efficiency, and enhances model accuracy compared with current importance-sampling approaches while maintaining minimal computational overhead. |
| 2501.13297 | RAMQA: A Unified Framework for Retrieval-Augmented Multi-Modal Question Answering | cs.CL cs.AI cs.IR cs.LG | Multi-modal retrieval-augmented Question Answering (MRAQA), integrating text and images, has gained significant attention in information retrieval (IR) and natural language processing (NLP). Traditional ranking methods rely on small encoder-based language models, which are incompatible with modern decoder-based generative large language models (LLMs) that have advanced various NLP tasks. To bridge this gap, we propose RAMQA, a unified framework combining learning-to-rank methods with generative permutation-enhanced ranking techniques. We first train a pointwise multi-modal ranker using LLaVA as the backbone. Then, we apply instruction tuning to train a LLaMA model for re-ranking the top-k documents using an innovative autoregressive multi-task learning approach. Our generative ranking model generates re-ranked document IDs and specific answers from document candidates in various permutations. Experiments on two MRAQA benchmarks, WebQA and MultiModalQA, show significant improvements over strong baselines, highlighting the effectiveness of our approach. Code and data are available at: https://github.com/TonyBY/RAMQA |
| 2501.13298 | Collaborative Coded Caching for Partially Connected Networks | cs.IT math.IT | Coded caching leverages the differences in user cache memories to achieve gains that scale with the total cache size, alleviating network congestion due to high-quality content requests. Additionally, distributing transmitters over a wide area can mitigate the adverse effects of path loss. In this work, we consider a partially connected network where the channel between distributed transmitters (helpers) and users is modeled as a distributed MIMO Gaussian broadcast channel. We propose a novel delivery scheme consisting of two phases: partitioning and transmission. In the partitioning phase, users with identical cache profiles are partitioned into the minimum number of sets, such that users within each set can successfully decode their desired message from a joint transmission enabled by MIMO precoding. To optimally partition the users, we employ the branch and bound method. In the transmission phase, each partition is treated as a single entity, and codewords are multicast to partitions with distinct cache profiles. The proposed delivery scheme is applicable to any partially connected network, and while the partitioning is optimal, the overall delivery scheme, including transmission, is heuristic. Interestingly, simulation results show that its performance closely approximates that of the fully connected optimal solution. |
| 2501.13299 | Hypothesis Generation for Materials Discovery and Design Using Goal-Driven and Constraint-Guided LLM Agents | cs.CL | Materials discovery and design are essential for advancing technology across various industries by enabling the development of application-specific materials. Recent research has leveraged Large Language Models (LLMs) to accelerate this process. We explore the potential of LLMs to generate viable hypotheses that, once validated, can expedite materials discovery. Collaborating with materials science experts, we curated a novel dataset from recent journal publications, featuring real-world goals, constraints, and methods for designing practical applications. Using this dataset, we test LLM-based agents that generate hypotheses for achieving given goals under specific constraints. To assess the relevance and quality of these hypotheses, we propose a novel scalable evaluation metric that emulates the process a materials scientist would use to evaluate a hypothesis critically. Our curated dataset, proposed method, and evaluation framework aim to advance future research in accelerating materials discovery and design with LLMs. |
| 2501.13302 | Watching the AI Watchdogs: A Fairness and Robustness Analysis of AI Safety Moderation Classifiers | cs.CL cs.AI | AI Safety Moderation (ASM) classifiers are designed to moderate content on social media platforms and to serve as guardrails that prevent Large Language Models (LLMs) from being fine-tuned on unsafe inputs. Owing to their potential for disparate impact, it is crucial to ensure that these classifiers: (1) do not unfairly classify content belonging to users from minority groups as unsafe compared to those from majority groups, and (2) behave robustly and consistently across similar inputs. In this work, we thus examine the fairness and robustness of four widely-used, closed-source ASM classifiers: the OpenAI Moderation API, Perspective API, Google Cloud Natural Language (GCNL) API, and Clarifai API. We assess fairness using metrics such as demographic parity and conditional statistical parity, comparing their performance against ASM models and a fair-only baseline. Additionally, we analyze robustness by testing the classifiers' sensitivity to small and natural input perturbations. Our findings reveal potential fairness and robustness gaps, highlighting the need to mitigate these issues in future versions of these models. |
| 2501.13306 | OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia | cs.SD cs.CL eess.AS | Large Language Models (LLMs) have made significant progress in various downstream tasks, inspiring the development of Speech Understanding Language Models (SULMs) to enable comprehensive speech-based interactions. However, most advanced SULMs are developed by industry, leveraging large-scale datasets and computational resources that are not readily available to the academic community. Moreover, the lack of transparency in training details creates additional barriers to further innovation. In this study, we present OSUM, an Open Speech Understanding Model designed to explore the potential of training SULMs under constrained academic resources. The OSUM model combines a Whisper encoder with a Qwen2 LLM and supports a wide range of speech tasks, including speech recognition (ASR), speech recognition with timestamps (SRWT), vocal event detection (VED), speech emotion recognition (SER), speaking style recognition (SSR), speaker gender classification (SGC), speaker age prediction (SAP), and speech-to-text chat (STTC). By employing an ASR+X training strategy, OSUM achieves efficient and stable multi-task training by simultaneously optimizing ASR alongside target tasks. Beyond delivering strong performance, OSUM emphasizes transparency by providing openly available data preparation and training methodologies, offering valuable insights and practical guidance for the academic community. By doing so, we aim to accelerate research and innovation in advanced SULM technologies. |
2501.13307
|
From Cross-Modal to Mixed-Modal Visible-Infrared Re-Identification
|
cs.CV
|
Visible-infrared person re-identification (VI-ReID) aims to match individuals
across different camera modalities, a critical task in modern surveillance
systems. While current VI-ReID methods focus on cross-modality matching,
real-world applications often involve mixed galleries containing both visible (V) and infrared (I)
images, where state-of-the-art methods show significant performance limitations
due to large domain shifts and low discrimination across mixed modalities. This
is because gallery images from the same modality may have lower domain gaps but
correspond to different identities. This paper introduces a novel mixed-modal
ReID setting, where galleries contain data from both modalities. To address the
inter-modal domain shift and the low intra-modal discrimination capacity, we
propose the Mixed Modality-Erased and -Related (MixER) method. The
MixER learning approach disentangles modality-specific and modality-shared
identity information through orthogonal decomposition, modality-confusion, and
ID-modality-related objectives. MixER enhances feature robustness across
modalities, improving performance in both cross-modal and mixed-modal settings.
Our extensive experiments on the SYSU-MM01, RegDB, and LLCM datasets indicate that
our approach can provide state-of-the-art results using a single backbone, and
showcase the flexibility of our approach in mixed gallery applications.
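The orthogonal decomposition at the heart of MixER can be illustrated with a toy projection: a feature splits into a component inside a given subspace and its orthogonal remainder. This is a generic sketch, not the paper's learned decomposition; the function name and the assumption of an orthonormal basis U are ours.

```python
import numpy as np

def orthogonal_decompose(f, U):
    """Split feature f into the component lying in the subspace spanned by
    the orthonormal columns of U (modality-shared, in MixER's terminology)
    and the orthogonal remainder (modality-specific)."""
    shared = U @ (U.T @ f)      # projection onto span(U)
    specific = f - shared       # orthogonal complement
    return shared, specific
```

With U the first standard basis vector in R^2, the feature [3, 4] splits into [3, 0] and [0, 4], and the two parts are orthogonal by construction.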
|
2501.13312
|
Tensor-Var: Variational Data Assimilation in Tensor Product Feature
Space
|
cs.LG
|
Variational data assimilation estimates the dynamical system states by
minimizing a cost function that fits the numerical models with observational
data. The widely used method, four-dimensional variational assimilation
(4D-Var), has two primary challenges: (1) computationally demanding for complex
nonlinear systems and (2) relying on state-observation mappings, which are
often not perfectly known. Deep learning (DL) has been used as a more
expressive class of efficient model approximators to address these challenges.
However, integrating such models into 4D-Var remains challenging due to their
inherent nonlinearities and the lack of theoretical guarantees for consistency
in assimilation results. In this paper, we propose Tensor-Var to address these
challenges using kernel Conditional Mean Embedding (CME). Tensor-Var improves
optimization efficiency by characterizing system dynamics and state-observation
mappings as linear operators, leading to a convex cost function in the feature
space. Furthermore, our method provides a new perspective to incorporate CME
into 4D-Var, offering theoretical guarantees of consistent assimilation results
between the original and feature spaces. To improve scalability, we propose a
method to learn deep features (DFs) using neural networks within the Tensor-Var
framework. Experiments on chaotic systems and global weather prediction with
real-time observations show that Tensor-Var outperforms conventional and DL
hybrid 4D-Var baselines in accuracy while achieving efficiency comparable to
the static 3D-Var method.
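The convexity claim can be seen in a toy setting: when the dynamics and the observation map are linear scalars in a one-dimensional feature space, the 4D-Var cost is quadratic in the initial state and has a closed-form minimiser. The following sketch is ours, not the Tensor-Var implementation.

```python
def linear_4dvar_scalar(zb, b_var, ys, A, H, r_var):
    """Closed-form minimiser of the convex 4D-Var cost when dynamics (A)
    and observation map (H) are linear scalars, as in a 1-D feature space:
        J(z0) = (z0 - zb)^2 / b_var + sum_t (H * A^t * z0 - y_t)^2 / r_var
    """
    num = zb / b_var
    den = 1.0 / b_var
    for t, y in enumerate(ys):
        g = H * (A ** t)          # sensitivity of y_t to the initial state
        num += g * y / r_var
        den += g * g / r_var
    return num / den
```

With a weak prior the estimate is driven by the observations; with a strong prior it stays near the background value zb.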
|
2501.13320
|
Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives
|
cs.CY cs.AI
|
As Artificial Intelligence (AI) systems become increasingly integrated into
various aspects of daily life, concerns about privacy and ethical
accountability are gaining prominence. This study explores stakeholder
perspectives on privacy in AI systems, focusing on educators, parents, and AI
professionals. Using qualitative analysis of survey responses from 227
participants, the research identifies key privacy risks, including data
breaches, ethical misuse, and excessive data collection, alongside perceived
benefits such as personalized services, enhanced efficiency, and educational
advancements. Stakeholders emphasized the need for transparency,
privacy-by-design, user empowerment, and ethical oversight to address privacy
concerns effectively. The findings provide actionable insights into balancing
the benefits of AI with robust privacy protections, catering to the diverse
needs of stakeholders. Recommendations include implementing selective data use,
fostering transparency, promoting user autonomy, and integrating ethical
principles into AI development. This study contributes to the ongoing discourse
on ethical AI, offering guidance for designing privacy-centric systems that
align with societal values and build trust among users. By addressing privacy
challenges, this research underscores the importance of developing AI
technologies that are not only innovative but also ethically sound and
responsive to the concerns of all stakeholders.
|
2501.13321
|
Investigation of the Privacy Concerns in AI Systems for Young Digital
Citizens: A Comparative Stakeholder Analysis
|
cs.CY cs.AI
|
The integration of Artificial Intelligence (AI) systems into technologies
used by young digital citizens raises significant privacy concerns. This study
investigates these concerns through a comparative analysis of stakeholder
perspectives. A total of 252 participants were surveyed, with the analysis
focusing on 110 valid responses from parents/educators and 100 from AI
professionals after data cleaning. Quantitative methods, including descriptive
statistics and Partial Least Squares Structural Equation Modeling, examined
five validated constructs: Data Ownership and Control, Parental Data Sharing,
Perceived Risks and Benefits, Transparency and Trust, and Education and
Awareness. Results showed Education and Awareness significantly influenced data
ownership and risk assessment, while Data Ownership and Control strongly
impacted Transparency and Trust. Transparency and Trust, along with Perceived
Risks and Benefits, showed minimal influence on Parental Data Sharing,
suggesting other factors may play a larger role. The study underscores the need
for user-centric privacy controls, tailored transparency strategies, and
targeted educational initiatives. Incorporating diverse stakeholder
perspectives offers actionable insights into ethical AI design and governance,
balancing innovation with robust privacy protections to foster trust in a
digital age.
|
2501.13324
|
Comparative Withholding Behavior Analysis of Historical Energy Storage
Bids in California
|
eess.SY cs.SY econ.TH
|
The rapid growth of battery energy storage in wholesale electricity markets
calls for a deeper understanding of storage operators' bidding strategies and
their market impacts. This study examines energy storage bidding data from the
California Independent System Operator (CAISO) between July 1, 2023, and
October 1, 2024, with a primary focus on economic withholding strategies. Our
analysis reveals that storage bids are closely aligned with day-ahead and
real-time market clearing prices, with notable bid inflation during price
spikes. Statistical tests demonstrate a strong correlation between price spikes
and capacity withholding, indicating that operators can anticipate price surges
and use market volatility to increase profitability. Comparisons with optimal
hindsight bids further reveal a clear daily periodic bidding pattern,
highlighting extensive economic withholding. These results underscore potential
market inefficiencies and highlight the need for refined regulatory measures to
address economic withholding as storage capacity in the market continues to
grow.
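The kind of spike-withholding correlation check described above can be illustrated with a plain Pearson statistic on toy data. The numbers below are entirely synthetic; CAISO bid data and the paper's actual statistical tests are not reproduced here.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

# Toy example: flag price spikes, then correlate with withheld capacity (MW).
prices = [30, 32, 250, 31, 240, 29]
withheld = [5, 6, 40, 5, 38, 4]
spikes = [1 if p > 100 else 0 for p in prices]
r = pearson_r(spikes, withheld)
```

A strongly positive r on real bid data would be consistent with operators withholding more capacity in spike hours.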
|
2501.13329
|
Sparse identification of nonlinear dynamics and Koopman operators with
Shallow Recurrent Decoder Networks
|
cs.LG cs.AI math.DS
|
Spatiotemporal modeling of real-world data poses a challenging problem due to
inherent high dimensionality, measurement noise, and expensive data collection
procedures. In this paper, we present Sparse Identification of Nonlinear
Dynamics with SHallow REcurrent Decoder networks (SINDy-SHRED), a method to
jointly solve the sensing and model identification problems with simple
implementation, efficient computation, and robust performance. SINDy-SHRED uses
Gated Recurrent Units (GRUs) to model the temporal sequence of sensor
measurements along with a shallow decoder network to reconstruct the full
spatiotemporal field from the latent state space using only a few available
sensors. Our proposed algorithm introduces a SINDy-based regularization;
beginning with an arbitrary latent state space, the dynamics of the latent
space progressively converge to a SINDy-class functional, provided the
projection remains within the set. By restricting SINDy to a linear model, the
architecture produces a Koopman-SHRED model that enforces linear latent
space dynamics. We conduct a systematic experimental study including synthetic
PDE data, real-world sensor measurements for sea surface temperature, and
direct video data. With no explicit encoder, SINDy-SHRED and Koopman-SHRED
enable efficient training with minimal hyperparameter tuning and laptop-level
computing; further, they demonstrate robust generalization in a variety of
applications with minimal to no hyperparameter adjustments. Finally, the
interpretable SINDy and Koopman models of the latent state dynamics enable
accurate long-term video predictions, achieving state-of-the-art performance
and outperforming all baseline methods considered, including Convolutional
LSTM, PredRNN, ResNet, and SimVP.
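The SINDy-style sparse regression underlying the regularization can be sketched with sequential thresholded least squares (STLSQ), the standard SINDy solver: fit a library of candidate terms, zero out small coefficients, and refit on the surviving terms. This is a generic illustration, not the SINDy-SHRED training loop.

```python
import numpy as np

def stlsq(Theta, dzdt, threshold=0.1, n_iter=10):
    """Sequential thresholded least squares, the core sparse-regression
    step of SINDy: repeatedly solve least squares, then zero out small
    coefficients so only the active library terms remain."""
    xi = np.linalg.lstsq(Theta, dzdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dzdt, rcond=None)[0]
    return xi
```

On data generated by dz/dt = -2z with a library [z, z^2, z^3], the procedure recovers the single active term.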
|
2501.13331
|
Qrazor: Reliable and Effortless 4-bit LLM Quantization by Significant
Data Razoring
|
cs.LG
|
Large-scale language models (LLMs) excel in language processing tasks but
face deployment challenges due to high memory and computational demands. While
low-bit quantization, such as 4-bit techniques, offers a potential solution,
these methods often suffer from significant accuracy loss or require
considerable effort for implementation such as reordering, rotation, etc. To
address these challenges, we propose QRazor, a simple yet effective
quantization scheme that enables 4-bit quantization of weights, activations,
and KV cache in transformer-based LLMs. QRazor operates in two stages: first,
quantizing data using 8 or 16-bit integers as a basis with absolute max scaling
to preserve accuracy close to full-precision models, and second, compressing
the quantized data to 4-bit using our significant data razoring (SDR)
technique, which retains only the four most salient bits. Without requiring any
fine-tuning or additional training, QRazor achieves performance similar to or
better than state-of-the-art 4-bit quantization methods, surpassing SmoothQuant
and QLLM by over 12 points and QuaRot (RTN) by more than 2.9 points in
zero-shot reasoning task accuracy on the
LLaMA2-7B model. Additionally, we introduce an integer-based arithmetic unit
optimized for QRazor, allowing direct low-precision operations on SDR data
without decompression.
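The "keep only the most salient bits" idea can be sketched for a single integer: locate the most significant set bit and truncate everything below a 4-bit window starting there. The function name, sign handling, and bit layout below are our assumptions, not QRazor's actual data format or hardware unit.

```python
def sdr_compress(x, n_keep=4):
    """Significant-data-razoring sketch: keep only the n_keep bits starting
    at the most significant set bit of |x|; zero all lower-order bits."""
    if x == 0:
        return 0
    sign = -1 if x < 0 else 1
    mag = abs(x)
    msb = mag.bit_length() - 1          # position of the leading 1 bit
    shift = max(msb - (n_keep - 1), 0)  # low-order bits to discard
    razored = (mag >> shift) << shift   # truncate below the 4-bit window
    return sign * razored
```

For example, 0b10110111 (183) keeps its top four bits and becomes 0b10110000 (176), while values that already fit in four bits pass through unchanged.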
|
2501.13332
|
Co-Learning Bayesian Optimization
|
cs.LG stat.ML
|
Bayesian optimization (BO) is well known to be sample-efficient for solving
black-box problems. However, BO algorithms can sometimes get stuck in
suboptimal solutions even with plenty of samples. Intrinsically, this
suboptimality can be attributed to the poor surrogate accuracy of the trained
Gaussian process (GP), particularly in the regions where the optimal solutions
lie. Hence, we propose to build multiple GP models, rather than a single GP
surrogate, that complement each other and thus resolve the suboptimality
problem of BO. Nevertheless, according to the bias-variance tradeoff, the
individual prediction errors can increase with increasing model diversity,
which may lead to even worse overall surrogate accuracy. On the other hand,
based on the theory of Rademacher complexity, it has been proved that
exploiting the agreement of models on unlabeled information can reduce the
complexity of the hypothesis space, and therefore achieve the required
surrogate accuracy with fewer samples. The value of model agreement has been
extensively demonstrated in co-training style algorithms, which boost model
accuracy with a small portion of labeled samples. Inspired by the above, we
propose a novel BO algorithm, termed co-learning BO (CLBO), which exploits
both model diversity and agreement on unlabeled information to improve the
overall surrogate accuracy with limited samples, and therefore achieve more
efficient global optimization. Tests on five numerical toy problems and three
engineering benchmarks demonstrate the effectiveness of the proposed CLBO.
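The role of model agreement on unlabeled points can be illustrated with a simple disagreement measure: the average variance of the surrogates' predictions across unlabeled inputs. This is our sketch of the idea only, not the CLBO acquisition function or its GP machinery.

```python
def model_agreement_penalty(models, unlabeled_xs):
    """Average variance of the surrogates' predictions on unlabeled points.
    Lower values mean the diverse models agree, which co-training-style
    methods use as a proxy for surrogate reliability."""
    total = 0.0
    for x in unlabeled_xs:
        preds = [m(x) for m in models]
        mean = sum(preds) / len(preds)
        total += sum((p - mean) ** 2 for p in preds) / len(preds)
    return total / len(unlabeled_xs)
```

Identical surrogates give a penalty of zero; surrogates that contradict each other on the unlabeled set are penalized.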
|
2501.13333
|
AgentRec: Agent Recommendation Using Sentence Embeddings Aligned to
Human Feedback
|
cs.LG cs.AI cs.CL cs.MA
|
Multi-agent systems must decide which agent is the most appropriate for a
given task. We propose a novel architecture for recommending which LLM agent
out of many should perform a task given a natural language prompt by extending
the Sentence-BERT (SBERT) encoder model. On test data, we are able to achieve a
top-1 accuracy of 92.2% with each classification taking less than 300
milliseconds. In contrast to traditional classification methods, our
architecture is computationally cheap, adaptive to new classes, interpretable,
and controllable with arbitrary metrics through reinforcement learning. By
encoding natural language prompts into sentence embeddings, our model captures
the semantic content relevant to recommending an agent. The distance between
sentence embeddings that belong to the same agent is then minimized through
fine-tuning and aligned to human values through reinforcement learning from
human feedback. This allows the classification of natural language prompts
based on their nearest neighbors by measuring the cosine similarity between
embeddings. This work is made possible through the generation of a synthetic
dataset for agent recommendation, which we have open-sourced to the public
along with the code for AgentRec recommendation system at
https://github.com/joshprk/agentrec.
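The nearest-neighbour step of this pipeline reduces to a cosine-similarity argmax over per-agent embeddings. A minimal sketch follows; AgentRec uses fine-tuned SBERT sentence embeddings, whereas the toy vectors and agent names here are ours.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recommend_agent(prompt_emb, agent_centroids):
    """Return the agent whose centroid embedding is most cosine-similar
    to the prompt embedding (nearest-neighbour classification)."""
    return max(agent_centroids, key=lambda a: cosine(prompt_emb, agent_centroids[a]))
```

A prompt embedding close to an agent's centroid is routed to that agent, which is why fine-tuning to pull same-agent embeddings together directly improves top-1 accuracy.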
|
2501.13335
|
Deblur-Avatar: Animatable Avatars from Motion-Blurred Monocular Videos
|
cs.CV
|
We introduce Deblur-Avatar, a novel framework for modeling high-fidelity,
animatable 3D human avatars from motion-blurred monocular video inputs. Motion
blur is prevalent in real-world dynamic video capture, especially due to human
movements in 3D human avatar modeling. Existing methods either (1) assume sharp
image inputs, failing to address the detail loss introduced by motion blur, or
(2) mainly consider blur caused by camera movement, neglecting the human motion
blur that is more common in animatable avatars. Our proposed approach integrates a
human movement-based motion blur model into 3D Gaussian Splatting (3DGS). By
explicitly modeling human motion trajectories during exposure time, we jointly
optimize the trajectories and 3D Gaussians to reconstruct sharp, high-quality
human avatars. We employ a pose-dependent fusion mechanism to distinguish
moving body regions, optimizing both blurred and sharp areas effectively.
Extensive experiments on synthetic and real-world datasets demonstrate that
Deblur-Avatar significantly outperforms existing methods in rendering quality
and quantitative metrics, producing sharp avatar reconstructions and enabling
real-time rendering under challenging motion blur conditions.
|
2501.13336
|
Gradient-Free Adversarial Purification with Diffusion Models
|
cs.CV eess.IV
|
Adversarial training and adversarial purification are two effective and
practical defense methods to enhance a model's robustness against adversarial
attacks. However, adversarial training necessitates additional training, while
adversarial purification suffers from low time efficiency. More critically,
current defenses are designed under the perturbation-based adversarial threat
model, which is ineffective against the recently proposed unrestricted
adversarial attacks. In this paper, we propose an effective and efficient
adversarial defense method that counters both perturbation-based and
unrestricted adversarial attacks. Our defense is inspired by the observation
that adversarial attacks are typically located near the decision boundary and
are sensitive to pixel changes. To address this, we introduce adversarial
anti-aliasing to mitigate adversarial modifications. Additionally, we propose
adversarial super-resolution, which leverages prior knowledge from clean
datasets to benignly recover images. These approaches do not require additional
training and are computationally efficient without calculating gradients.
Extensive experiments against both perturbation-based and unrestricted
adversarial attacks demonstrate that our defense method outperforms
state-of-the-art adversarial purification methods.
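The gradient-free flavour of the defense can be illustrated with the simplest possible anti-aliasing operation, a moving-average (box) filter on a 1-D signal: smoothing suppresses the high-frequency, pixel-level changes adversarial examples rely on, without computing any gradients. The actual method operates on images and adds a super-resolution recovery stage, which this sketch omits.

```python
def box_blur_1d(signal, k=3):
    """Moving-average (box) filter: a gradient-free way to smooth out
    high-frequency, pixel-level perturbations. Edges use a shrunken window."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

A single-pixel spike is spread out and attenuated, while smooth regions pass through unchanged.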
|
2501.13337
|
Generative Multi-Form Bayesian Optimization
|
cs.CE
|
Many real-world problems, such as airfoil design, involve optimizing a
black-box expensive objective function over complex structured input space
(e.g., discrete space or non-Euclidean space). By mapping the complex
structured input space into a latent space of dozens of variables, a two-stage
procedure, labeled generative model based optimization (GMO) in this paper,
shows promise in solving such problems. However, the latent dimension of GMO is
hard to determine, which may trigger a conflict between desirable solution
accuracy and convergence rate. To address this issue, we propose a multi-form
GMO approach, namely generative multi-form optimization (GMFoO), which conducts
optimization over multiple latent spaces simultaneously so that they complement
each other. More specifically, we devise a generative model that promotes
positive correlation between latent spaces to facilitate effective knowledge
transfer in GMFoO. Further, using Bayesian optimization (BO) as the optimizer,
we propose two strategies to continuously exchange information between these
latent spaces. Experimental results on airfoil and corbel design problems, as
well as an area maximization problem, demonstrate that our proposed GMFoO
converges to better designs on a limited computational budget.
|
2501.13338
|
CuriousBot: Interactive Mobile Exploration via Actionable 3D Relational
Object Graph
|
cs.RO cs.CV cs.LG
|
Mobile exploration is a longstanding challenge in robotics, yet current
methods primarily focus on active perception instead of active interaction,
limiting the robot's ability to interact with and fully explore its
environment. Existing robotic exploration approaches via active interaction are
often restricted to tabletop scenes, neglecting the unique challenges posed by
mobile exploration, such as large exploration spaces, complex action spaces,
and diverse object relations. In this work, we introduce a 3D relational object
graph that encodes diverse object relations and enables exploration through
active interaction. We develop a system based on this representation and
evaluate it across diverse scenes. Our qualitative and quantitative results
demonstrate the system's effectiveness and generalization capabilities,
outperforming methods that rely solely on vision-language models (VLMs).
|
2501.13340
|
Retrievals Can Be Detrimental: A Contrastive Backdoor Attack Paradigm on
Retrieval-Augmented Diffusion Models
|
cs.CV
|
Diffusion models (DMs) have recently demonstrated remarkable generation
capability. However, their training generally requires huge computational
resources and large-scale datasets. To address these issues, recent studies empower DMs
with the advanced Retrieval-Augmented Generation (RAG) technique and propose
retrieval-augmented diffusion models (RDMs). By incorporating rich knowledge
from an auxiliary database, RAG enhances diffusion models' generation and
generalization ability while significantly reducing model parameters. Despite
the great success, RAG may introduce novel security issues that warrant further
investigation. In this paper, we reveal that the RDM is susceptible to backdoor
attacks by proposing a multimodal contrastive attack approach named BadRDM. Our
framework fully considers RAG's characteristics and is devised to manipulate
the retrieved items for given text triggers, thereby further controlling the
generated contents. Specifically, we first insert a tiny portion of images into
the retrieval database as target toxicity surrogates. Subsequently, a malicious
variant of contrastive learning is adopted to inject backdoors into the
retriever, which builds shortcuts from triggers to the toxicity surrogates.
Furthermore, we enhance the attacks through novel entropy-based selection and
generative augmentation strategies that can derive better toxicity surrogates.
Extensive experiments on two mainstream tasks demonstrate the proposed BadRDM
achieves outstanding attack effects while preserving the model's benign
utility.
|
2501.13341
|
Multi-aspect Knowledge Distillation with Large Language Model
|
cs.CV
|
Recent advancements in deep learning have significantly improved performance
on computer vision tasks. Previous image classification methods primarily
modify model architectures or add features, and they optimize models using
cross-entropy loss on class logits. Since they focus on classifying images
based only on class labels, these methods may struggle to learn various
\emph{aspects} of classes (e.g., natural positions and shape changes).
Rethinking the previous approach from a novel view, we propose a multi-aspect
knowledge distillation method using Multimodal Large Language Models (MLLMs).
Our approach involves: 1) querying a Multimodal Large Language Model with
multi-aspect questions relevant to the knowledge we want to transfer to the
model, 2) extracting the corresponding logits from the MLLM, and 3) expanding
the model's output
dimensions to distill these multi-aspect logits. We then apply cross-entropy
loss to class logits and binary cross-entropy loss to multi-aspect logits.
Through our method, the model can learn not only the knowledge about visual
aspects but also the abstract and complex aspects that require a deeper
understanding. We primarily apply our method to image classification, and to
explore the potential for extending our model, we expand it to other tasks,
such as object detection. In all experimental results, our method improves the
performance of the baselines. Additionally, we analyze the effect of
multi-aspect knowledge distillation. These results demonstrate that our method
can transfer knowledge about various aspects to the model and the aspect
knowledge can enhance model performance in computer vision tasks. This paper
demonstrates the great potential of multi-aspect knowledge distillation, and we
believe it offers a promising direction for future research in computer vision
and beyond.
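The combined objective, cross-entropy on the class logits plus binary cross-entropy on the expanded multi-aspect logits, can be written down directly. The weighting factor alpha and the per-aspect averaging below are our assumptions, not values from the paper.

```python
import math

def ce_loss(class_logits, label):
    """Softmax cross-entropy on the class logits."""
    m = max(class_logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in class_logits))
    return log_z - class_logits[label]

def bce_loss(aspect_logits, aspect_targets):
    """Mean binary cross-entropy over the expanded multi-aspect logits."""
    total = 0.0
    for l, t in zip(aspect_logits, aspect_targets):
        p = 1.0 / (1.0 + math.exp(-l))
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(aspect_logits)

def multi_aspect_kd_loss(class_logits, label, aspect_logits, aspect_targets, alpha=1.0):
    """Class CE plus (weighted) multi-aspect BCE, as described in the text."""
    return ce_loss(class_logits, label) + alpha * bce_loss(aspect_logits, aspect_targets)
```

The aspect targets would come from the MLLM's logits on the multi-aspect questions; here they are treated as given.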
|
2501.13343
|
YOLOSCM: An improved YOLO algorithm for cars detection
|
cs.CV
|
Detecting objects in urban traffic images presents considerable difficulties
for the following reasons: 1) These images are typically immense in
size, encompassing millions or even hundreds of millions of pixels, yet
computational resources are constrained. 2) The small size of vehicles in
certain scenarios leads to insufficient information for accurate detection. 3)
The uneven distribution of vehicles causes inefficient use of computational
resources. To address these issues, we propose YOLOSCM (You Only Look Once with
Segmentation Clustering Module), an efficient and effective framework. To
address the challenges of large-scale images and the non-uniform distribution
of vehicles, we propose a Segmentation Clustering Module (SCM). This module
adaptively identifies clustered regions, enabling the model to focus on these
areas for more precise detection. Additionally, we propose a new training
strategy to optimize the detection of small vehicles and densely packed targets
in complex urban traffic scenes. We perform extensive experiments on urban
traffic datasets to demonstrate the effectiveness and superiority of our
proposed approach.
|
2501.13344
|
Full-Stack Optimized Large Language Models for Lifelong Sequential
Behavior Comprehension in Recommendation
|
cs.IR cs.AI
|
In this paper, we address the lifelong sequential behavior incomprehension
problem in large language models (LLMs) for recommendation, where LLMs struggle
to extract useful information from long user behavior sequences, even within
their context limits. To tackle this, we propose ReLLaX (Retrieval-enhanced
Large Language models Plus), a framework offering optimization across data,
prompt, and parameter levels. At the data level, we introduce Semantic User
Behavior Retrieval (SUBR) to reduce sequence heterogeneity, making it easier
for LLMs to extract key information. For prompt-level enhancement, we employ
Soft Prompt Augmentation (SPA) to inject collaborative knowledge, aligning item
representations with recommendation tasks and improving LLMs' exploration of
item relationships. Finally, at the parameter level, we propose Component
Fully-interactive LoRA (CFLoRA), which enhances LoRA's expressiveness by
enabling interactions between its components, allowing better capture of
sequential information. Moreover, we present new perspectives to compare
current LoRA-based LLM4Rec methods, i.e., from both a composite and a decomposed
view. We theoretically demonstrate that the ways they employ LoRA for
recommendation are degraded versions of our CFLoRA, with different constraints
on atom component interactions. Extensive experiments on three public datasets
demonstrate ReLLaX's superiority over existing baselines and its ability to
mitigate lifelong sequential behavior incomprehension effectively.
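The composite-versus-decomposed comparison can be made concrete with the update matrices: standard LoRA applies a low-rank update BA, while a fully-interactive variant inserts an r-by-r interaction matrix between the components, and setting that matrix to the identity recovers plain LoRA. This is a sketch with our own naming, not the paper's CFLoRA code.

```python
import numpy as np

def lora_delta(A, B):
    """Standard LoRA low-rank update: dW = B @ A, with A (r x d_in), B (d_out x r)."""
    return B @ A

def cflora_delta(A, B, C):
    """Fully-interactive variant (sketch): an r x r matrix C lets the atom
    components of A and B interact; C = identity degrades to plain LoRA."""
    return B @ C @ A
```

This mirrors the paper's claim that existing LoRA usages are constrained special cases of the fully-interactive form.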
|
2501.13347
|
One Fits All: General Mobility Trajectory Modeling via Masked
Conditional Diffusion
|
cs.LG cs.AI
|
Trajectory data play a crucial role in many applications, ranging from
network optimization to urban planning. Existing studies on trajectory data are
task-specific, and their applicability is limited to the specific tasks on
which they have been trained, such as generation, recovery, or prediction.
However, the potential of a unified model has not yet been fully explored in
trajectory modeling. Although various trajectory tasks differ in inputs,
outputs, objectives, and conditions, they share common mobility patterns. Based
on these common patterns, we can construct a general framework that enables a
single model to address different tasks. However, building a trajectory
task-general framework faces two critical challenges: 1) the diversity in the
formats of different tasks and 2) the complexity of the conditions imposed on
different tasks. In this work, we propose a general trajectory modeling
framework via masked conditional diffusion (named GenMove). Specifically, we
utilize mask conditions to unify diverse formats. To adapt to complex
conditions associated with different tasks, we utilize historical trajectory
data to obtain contextual trajectory embeddings, which include rich contexts
such as spatiotemporal characteristics and user preferences. Integrating the
contextual trajectory embedding into diffusion models through a classifier-free
guidance approach allows the model to flexibly adjust its outputs based on
different conditions. Extensive experiments on mainstream tasks demonstrate
that our model significantly outperforms state-of-the-art baselines, with the
highest performance improvement exceeding 13% in generation tasks.
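The "mask conditions unify diverse formats" idea can be sketched as one mask constructor per task, where 1 marks trajectory points the model conditions on and 0 marks points the diffusion model must generate. The task names and the 50/50 prediction split below are our illustrative choices, not GenMove's exact scheme.

```python
def task_mask(length, task, observed=None):
    """Unified mask-condition view (sketch): 1 = condition on the point,
    0 = the diffusion model must generate it."""
    if task == "generation":
        return [0] * length                      # synthesize the whole trajectory
    if task == "prediction":
        cut = length // 2
        return [1] * cut + [0] * (length - cut)  # past observed, future generated
    if task == "recovery":
        return [1 if i in observed else 0 for i in range(length)]
    raise ValueError(task)
```

One model trained on masked inputs can then serve all three tasks simply by swapping the mask.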
|
2501.13349
|
MSF: Efficient Diffusion Model Via Multi-Scale Latent Factorize
|
cs.CV
|
Diffusion-based generative models have achieved remarkable progress in visual
content generation. However, traditional diffusion models directly denoise the
entire image from noisy inputs, disregarding the hierarchical structure present
in visual signals. This method is computationally intensive, especially for
high-resolution image generation. Signal processing often leverages
hierarchical decompositions; for instance, Fourier analysis decomposes signals
by frequency, while wavelet analysis captures localized frequency components,
reflecting both spatial and frequency information simultaneously. Inspired by
these principles, we propose a multiscale diffusion framework that generates
hierarchical visual representations, which are subsequently integrated to form
the final output. The diffusion model target, whether raw RGB pixels or latent
features from a Variational Autoencoder, is divided into multiple components
that each capture distinct spatial levels. The low-resolution component
contains the primary informative signal, while higher-resolution components add
high-frequency details, such as texture. This approach divides image generation
into two stages: producing a low-resolution base signal, followed by a
high-resolution residual signal. Both stages can be effectively modeled using
simpler, lightweight transformer architectures compared to full-resolution
generation. This decomposition is conceptually similar to wavelet decomposition
but offers a more streamlined and intuitive design. Our method, termed
MSF (short for Multi-Scale Factorization), achieves an FID of 2.2 and an IS of
255.4 on the ImageNet 256x256 benchmark, reducing computational costs by 50%
compared to baseline methods.
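The two-stage base-plus-residual decomposition can be illustrated on a 1-D signal with average-pool downsampling and nearest-neighbour upsampling; by construction the decomposition is exactly invertible. This is our toy analogue, not the MSF latent factorization.

```python
def downsample(x, factor=2):
    """Average-pool: the low-resolution base signal."""
    return [sum(x[i:i + factor]) / factor for i in range(0, len(x), factor)]

def upsample(base, factor=2):
    """Nearest-neighbour expansion back to full resolution."""
    return [v for v in base for _ in range(factor)]

def factorize(x, factor=2):
    """Split a signal into a low-resolution base and a full-resolution residual."""
    base = downsample(x, factor)
    residual = [a - b for a, b in zip(x, upsample(base, factor))]
    return base, residual

def reconstruct(base, residual, factor=2):
    """Exact inverse of factorize: upsample the base and add the residual."""
    return [b + r for b, r in zip(upsample(base, factor), residual)]
```

The base carries the primary signal and the residual carries the high-frequency detail, mirroring the two generation stages described above.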
|
2501.13350
|
DoMINO: A Decomposable Multi-scale Iterative Neural Operator for
Modeling Large Scale Engineering Simulations
|
cs.LG physics.comp-ph
|
Numerical simulations play a critical role in design and development of
engineering products and processes. Traditional computational methods, such as
CFD, can provide accurate predictions but are computationally expensive,
particularly for complex geometries. Several machine learning (ML) models have
been proposed in the literature to significantly reduce computation time while
maintaining acceptable accuracy. However, ML models often face limitations in
terms of accuracy and scalability and depend on significant mesh downsampling,
which can negatively affect prediction accuracy and generalization. In this
work, we propose a novel ML model architecture, DoMINO (Decomposable
Multi-scale Iterative Neural Operator) developed in NVIDIA Modulus to address
the various challenges of machine learning based surrogate modeling of
engineering simulations. DoMINO is a point cloud-based ML model that uses local
geometric information to predict flow fields on discrete points. The DoMINO
model is validated for the automotive aerodynamics use case using the DrivAerML
dataset. Through our experiments we demonstrate the scalability, performance,
accuracy and generalization of our model to both in-distribution and
out-of-distribution testing samples. Moreover, the results are analyzed using a
range of engineering specific metrics important for validating numerical
simulations.
|
2501.13352
|
Polyhedra Encoding Transformers: Enhancing Diffusion MRI Analysis Beyond
Voxel and Volumetric Embedding
|
eess.IV cs.CV cs.LG
|
Diffusion-weighted Magnetic Resonance Imaging (dMRI) is an essential tool in
neuroimaging. It is arguably the sole noninvasive technique for examining the
microstructural properties and structural connectivity of the brain. Recent
years have seen the emergence of machine learning and data-driven approaches
that enhance the speed, accuracy, and consistency of dMRI data analysis.
However, traditional deep learning models often fall short, as they typically
utilize pixel-level or volumetric patch-level embeddings similar to those used
in structural MRI, and do not account for the unique distribution of various
gradient encodings. In this paper, we propose a novel method called Polyhedra
Encoding Transformer (PE-Transformer) for dMRI, designed specifically to handle
spherical signals. Our approach involves projecting an icosahedral polygon onto
a unit sphere to resample signals from predetermined directions. These
resampled signals are then transformed into embeddings, which are processed by
a transformer encoder that incorporates orientational information reflective of
the icosahedral structure. Through experimental validation with various
gradient encoding protocols, our method demonstrates superior accuracy in
estimating multi-compartment models and Fiber Orientation Distributions (FOD),
outperforming both conventional CNN architectures and standard transformers.
|
2501.13353
|
Contrast: A Hybrid Architecture of Transformers and State Space Models
for Low-Level Vision
|
cs.CV
|
Transformers have become increasingly popular for image super-resolution (SR)
tasks due to their strong global context modeling capabilities. However, their
quadratic computational complexity necessitates the use of window-based
attention mechanisms, which restricts the receptive field and limits effective
context expansion. Recently, the Mamba architecture has emerged as a promising
alternative with linear computational complexity, allowing it to avoid window
mechanisms and maintain a large receptive field. Nevertheless, Mamba faces
challenges in handling long-context dependencies when high pixel-level
precision is required, as in SR tasks. This is due to its hidden state
mechanism, which can compress and store a substantial amount of context but
only in an approximate manner, leading to inaccuracies that transformers do not
suffer from. In this paper, we propose \textbf{Contrast}, a hybrid SR model
that combines \textbf{Con}volutional, \textbf{Tra}nsformer, and \textbf{St}ate
Space components, effectively blending the strengths of transformers and Mamba
to address their individual limitations. By integrating transformer and state
space mechanisms, \textbf{Contrast} compensates for the shortcomings of each
approach, enhancing both global context modeling and pixel-level accuracy. We
demonstrate that combining these two architectures allows us to mitigate the
problems inherent in each, resulting in improved performance on image
super-resolution tasks.
|
2501.13354
|
NUDT4MSTAR: A Large Dataset and Benchmark Towards Remote Sensing Object
Recognition in the Wild
|
cs.CV
|
As an indispensable sensor for remote sensing, Synthetic Aperture Radar (SAR)
has a unique capability for all-day imaging. Nevertheless, in a data-driven
era, the scarcity of large-scale datasets poses a significant bottleneck to
advancing SAR automatic target recognition (ATR) technology. This paper
introduces NUDT4MSTAR, a large-scale SAR dataset for remote sensing target
recognition in the wild, including 40 vehicle target types and various imaging
conditions across 5 realistic scenes. NUDT4MSTAR represents a significant leap
forward in dataset scale, containing over 190,000 images, tenfold the size of
its predecessors. We meticulously annotate each image with detailed target
information and imaging conditions. Data are also provided in both processed
magnitude images and original complex formats. We then construct a
comprehensive benchmark consisting of 7 experiments with 15 recognition methods
focusing on stable and effective ATR issues. In addition, we conduct transfer
learning experiments, training various models on NUDT4MSTAR and applying
them to three other target datasets, demonstrating its substantial potential
for the broader field of ground object ATR. Finally, we discuss this dataset's
application value and ATR's significant challenges. To the best of our
knowledge, this work marks the first-ever endeavor to create a large-scale
dataset benchmark for fine-grained SAR recognition in the wild, featuring an
extensive collection of exhaustively annotated vehicle images. We expect that
open-sourcing NUDT4MSTAR will facilitate the development of SAR ATR and
attract a wider community of researchers.
|
2501.13357
|
A light-weight model to generate NDWI from Sentinel-1
|
cs.CV eess.IV
|
The use of Sentinel-2 images to compute Normalized Difference Water Index
(NDWI) has many applications, including water body area detection. However,
cloud cover poses significant challenges in this regard, which hampers the
effectiveness of Sentinel-2 images in this context. In this paper, we present a
deep learning model that can generate NDWI given Sentinel-1 images, thereby
overcoming this cloud barrier. We show the effectiveness of our model, where it
demonstrates a high accuracy of 0.9134 and an AUC of 0.8656 to predict the
NDWI. Additionally, we observe promising results with an R2 score of 0.4984
(for regressing the NDWI values) and a Mean IoU of 0.4139 (for the underlying
segmentation task). In conclusion, our model offers a first, robust solution
for generating NDWI images directly from Sentinel-1 images for subsequent use
in various applications, even under challenging conditions such as cloud cover
and nighttime.
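As background for the target this model regresses, NDWI is conventionally computed from Sentinel-2's green (B3) and NIR (B8) bands; the following is a generic sketch with illustrative toy reflectances, not the paper's pipeline:

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """NDWI = (Green - NIR) / (Green + NIR), from Sentinel-2 bands B3 and B8."""
    return (green - nir) / (green + nir + eps)

def water_mask(ndwi_map: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Common heuristic: NDWI above 0 indicates open water."""
    return ndwi_map > threshold

# Toy reflectance patches: water reflects more green than NIR.
green = np.array([[0.10, 0.30], [0.25, 0.05]])
nir = np.array([[0.30, 0.05], [0.05, 0.40]])
mask = water_mask(ndwi(green, nir))  # → [[False, True], [True, False]]
```

A model of the kind described would learn this map from Sentinel-1 SAR backscatter instead, since radar is unaffected by cloud cover.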
|
2501.13358
|
Learning to Bid in Non-Stationary Repeated First-Price Auctions
|
cs.LG cs.GT cs.IT math.IT stat.ML
|
First-price auctions have recently gained significant traction in digital
advertising markets, exemplified by Google's transition from second-price to
first-price auctions. Unlike in second-price auctions, where bidding one's
private valuation is a dominant strategy, determining an optimal bidding
strategy in first-price auctions is more complex. From a learning perspective,
the learner (a specific bidder) can interact with the environment (other
bidders) sequentially to infer their behaviors. Existing research often assumes
specific environmental conditions and benchmarks performance against the best
fixed policy (static benchmark). While this approach ensures strong learning
guarantees, the static benchmark can deviate significantly from the optimal
strategy in environments with even mild non-stationarity. To address such
scenarios, a dynamic benchmark, which represents the sum of the best possible
rewards at each time step, offers a more suitable objective. However, achieving
no-regret learning with respect to the dynamic benchmark requires additional
constraints. By inspecting reward functions in online first-price auctions, we
introduce two metrics to quantify the regularity of the bidding sequence, which
serve as measures of non-stationarity. We provide a minimax-optimal
characterization of the dynamic regret when either of these metrics is
sub-linear in the time horizon.
|
2501.13364
|
Task Allocation in Customer-led Two-sided Markets with Satellite
Constellation Services
|
cs.GT cs.MA
|
Multi-agent systems (MAS) are increasingly applied to complex task allocation
in two-sided markets, where agents such as companies and customers interact
dynamically. Traditional company-led Stackelberg game models, where companies
set service prices, and customers respond, struggle to accommodate diverse and
personalised customer demands in emerging markets like crowdsourcing. This
paper proposes a customer-led Stackelberg game model for cost-efficient task
allocation, where customers initiate tasks as leaders, and companies create
their strategies as followers to meet these demands. We prove the existence of
Nash Equilibrium for the follower game and Stackelberg Equilibrium for the
leader game while discussing their uniqueness under specific conditions,
ensuring cost-efficient task allocation and improved market performance. Using
the satellite constellation services market as a real-world case, experimental
results show a 23% reduction in customer payments and a 6.7-fold increase in
company revenues, demonstrating the model's effectiveness in emerging markets.
|
2501.13365
|
Enhanced Extractor-Selector Framework and Symmetrization Weighted Binary
Cross-Entropy for Edge Detections
|
cs.CV cs.AI
|
Recent advancements have demonstrated the effectiveness of the
extractor-selector (E-S) framework in edge detection (ED) tasks, which achieves
state-of-the-art (SOTA) performance in both quantitative metrics and perceptual
quality. However, this method still falls short of fully exploiting the
potential of feature extractors, as selectors only operate on highly compressed
feature maps that lack diversity and suffer from substantial information loss.
Additionally, while union training can improve perceptual quality, the highest
evaluation scores are typically obtained without it, creating a trade-off
between quantitative accuracy and perceptual fidelity. To address these
limitations, we propose an enhanced E-S architecture, which utilizes richer,
less lossy feature representations and incorporates auxiliary features during
the selection process, thereby improving the effectiveness of the feature
selection mechanism. Additionally, we introduce a novel loss function, the
Symmetrization Weighted Binary Cross-Entropy (SWBCE), which simultaneously
emphasizes both the recall of edge pixels and the suppression of erroneous edge
predictions, thereby improving both perceptual quality and prediction
accuracy. The effectiveness and superiority of our
approaches over baseline models, the standard E-S framework, and the standard
Weighted Binary Cross-Entropy (WBCE) loss function are demonstrated by extensive
experiments. For example, our enhanced E-S architecture trained with SWBCE loss
function achieves average improvements of 8.25$\%$, 8.01$\%$, and 33.25$\%$ in
ODS, OIS, and AP, measured on BIPED2 compared with the baseline models,
significantly outperforming the standard E-S method. The results set new
benchmarks for ED tasks and highlight the methods' potential beyond edge detection.
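For reference, the standard WBCE baseline that SWBCE builds on up-weights the rare edge pixels by the non-edge frequency; a generic numpy sketch of that baseline (not the paper's SWBCE) is:

```python
import numpy as np

def wbce(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Weighted binary cross-entropy: edge pixels (target == 1) are rare,
    so they are up-weighted by the non-edge frequency beta, and vice versa."""
    pred = np.clip(pred, eps, 1 - eps)
    beta = 1.0 - target.mean()  # fraction of non-edge pixels
    weights = np.where(target == 1, beta, 1.0 - beta)
    ce = target * np.log(pred) + (1 - target) * np.log(1 - pred)
    return float(-(weights * ce).mean())

target = np.array([0.0, 0.0, 0.0, 1.0])  # sparse edge map
good = np.array([0.1, 0.1, 0.1, 0.9])    # recovers the edge pixel
bad = np.array([0.1, 0.1, 0.1, 0.1])     # misses the edge pixel
loss_good, loss_bad = wbce(good, target), wbce(bad, target)
```

The weighting prevents a predictor that outputs "no edge" everywhere from scoring well, which is the failure mode SWBCE's symmetrized variant further penalizes.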
|
2501.13368
|
Meta-Feature Adapter: Integrating Environmental Metadata for Enhanced
Animal Re-identification
|
cs.CV cs.LG
|
Identifying individual animals within large wildlife populations is essential
for effective wildlife monitoring and conservation efforts. Recent advancements
in computer vision have shown promise in animal re-identification (Animal ReID)
by leveraging data from camera traps. However, existing methods rely
exclusively on visual data, neglecting environmental metadata that ecologists
have identified as highly correlated with animal behavior and identity, such as
temperature and circadian rhythms. To bridge this gap, we propose the
Meta-Feature Adapter (MFA), a lightweight module designed to integrate
environmental metadata into vision-language foundation models, such as CLIP, to
enhance Animal ReID performance. Our approach translates environmental metadata
into natural language descriptions, encodes them into metadata-aware text
embeddings, and incorporates these embeddings into image features through a
cross-attention mechanism. Furthermore, we introduce a Gated Cross-Attention
mechanism that dynamically adjusts the weights of metadata contributions,
further improving performance. To validate our approach, we constructed the
Metadata Augmented Animal Re-identification (MAAR) dataset, encompassing six
species from New Zealand and featuring paired image data and environmental
metadata. Extensive experiments demonstrate that MFA consistently improves
Animal ReID performance across multiple baseline models.
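A gated cross-attention layer of this kind can be sketched generically; the dimensions, sigmoid gate placement, and residual blend below are illustrative assumptions, not the exact MFA design:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(img, meta, Wq, Wk, Wv, gate_w, gate_b=0.0):
    """Image tokens attend to metadata-text tokens; a learned sigmoid gate
    controls how much metadata context is blended into the image features."""
    q, k, v = img @ Wq, meta @ Wk, meta @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))         # (n_img, n_meta)
    ctx = attn @ v                                         # metadata context per token
    gate = 1.0 / (1.0 + np.exp(-(img @ gate_w + gate_b)))  # (n_img, 1), in (0, 1)
    return img + gate * ctx                                # gated residual blend

rng = np.random.default_rng(0)
d = 8
img = rng.normal(size=(4, d))    # 4 image patch features
meta = rng.normal(size=(3, d))   # 3 metadata-derived text embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
out = gated_cross_attention(img, meta, Wq, Wk, Wv, rng.normal(size=(d, 1)) * 0.1)
```

The gate lets the model fall back to purely visual features when the metadata is uninformative for a given image.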
|
2501.13369
|
A review on development of eco-friendly filters in Nepal for use in
cigarettes and masks and Air Pollution Analysis with Machine Learning and
SHAP Interpretability
|
cs.LG cs.AI
|
In Nepal, air pollution is a serious public health concern, especially in
cities like Kathmandu where particulate matter (PM2.5 and PM10) has a major
influence on respiratory health and air quality. The Air Quality Index (AQI) is
predicted in this work using a Random Forest Regressor, and the model's
predictions are interpreted using SHAP (SHapley Additive exPlanations)
analysis. With the lowest testing RMSE (0.23) and a perfect R2 score (1.00),
CatBoost performs better than other models, demonstrating its greater accuracy
and generalization, which is validated using a nested cross-validation
approach. According to the SHAP analysis, NowCast Concentration and Raw
Concentration are the most important features influencing AQI values,
indicating that the machine learning results are highly accurate. Their significance as major
contributors to air pollution is highlighted by the fact that high values of
these characteristics significantly raise the AQI. This study investigates the
Hydrogen-Alpha (HA) biodegradable filter as a novel way to reduce the related
health hazards. With removal efficiency of more than 98% for PM2.5 and 99.24%
for PM10, the HA filter offers exceptional defense against dangerous airborne
particles. These devices, biodegradable face masks and cigarette filters,
address the environmental issues associated with the non-biodegradable waste
of traditional filters while also lowering exposure to air contaminants.
|
2501.13370
|
Unraveling Normal Anatomy via Fluid-Driven Anomaly Randomization
|
eess.IV cs.CV
|
Data-driven machine learning has made significant strides in medical image
analysis. However, most existing methods are tailored to specific modalities
and assume a particular resolution (often isotropic). This limits their
generalizability in clinical settings, where variations in scan appearance
arise from differences in sequence parameters, resolution, and orientation.
Furthermore, most general-purpose models are designed for healthy subjects and
suffer from performance degradation when pathology is present. We introduce UNA
(Unraveling Normal Anatomy), the first modality-agnostic learning approach for
normal brain anatomy reconstruction that can handle both healthy scans and
cases with pathology. We propose a fluid-driven anomaly randomization method
that generates an unlimited number of realistic pathology profiles on-the-fly.
UNA is trained on a combination of synthetic and real data, and can be applied
directly to real images with potential pathology without the need for
fine-tuning. We demonstrate UNA's effectiveness in reconstructing healthy brain
anatomy and showcase its direct application to anomaly detection, using both
simulated and real images from 3D healthy and stroke datasets, including CT and
MRI scans. By bridging the gap between healthy and diseased images, UNA enables
the use of general-purpose models on diseased images, opening up new
opportunities for large-scale analysis of uncurated clinical images in the
presence of pathology. Code is available at https://github.com/peirong26/UNA.
|
2501.13372
|
Generative Data Augmentation Challenge: Zero-Shot Speech Synthesis for
Personalized Speech Enhancement
|
eess.AS cs.AI
|
This paper presents a new challenge that calls for zero-shot text-to-speech
(TTS) systems to augment speech data for the downstream task, personalized
speech enhancement (PSE), as part of the Generative Data Augmentation workshop
at ICASSP 2025. Collecting high-quality personalized data is challenging due to
privacy concerns and technical difficulties in recording audio from the test
scene. To address these issues, synthetic data generation using generative
models has gained significant attention. In this challenge, participants are
tasked first with building zero-shot TTS systems to augment personalized data.
Subsequently, participants train PSE systems with this augmented
personalized dataset. Through this challenge, we aim to investigate how the
quality of augmented data generated by zero-shot TTS models affects PSE model
performance. We also provide baseline experiments using open-source zero-shot
TTS models to encourage participation and benchmark advancements. Our baseline
code implementation and checkpoints are available online.
|
2501.13373
|
Advancing Carbon Capture using AI: Design of permeable membrane and
estimation of parameters for Carbon Capture using linear regression and
membrane-based equations
|
physics.chem-ph cs.LG
|
This study focuses on membrane-based systems for CO$_2$ separation,
addressing the urgent need for efficient carbon capture solutions to mitigate
climate change. Linear regression models, based on membrane equations, were
utilized to estimate key parameters, including porosity ($\epsilon$) of 0.4805,
Kozeny constant (K) of 2.9084, specific surface area ($\sigma$) of 105.3272
m$^2$/m$^3$, mean pressure (Pm) of 6.2166 MPa, viscosity ($\mu$) of 0.1997
Ns/m$^2$, and gas flux (Jg) of 3.2559 kg m$^{-2}$ s$^{-1}$. These parameters
were derived from the analysis of synthetic datasets using linear regression.
The study also provides insights into the performance of the membrane, with a
flow rate (Q) of 9.8778 $\times$ 10$^{-4}$ m$^3$/s, an injection pressure
(P$_1$) of 2.8219 MPa, and an exit pressure (P$_2$) of 2.5762 MPa. The
permeability value of 0.045 for CO$_2$ indicates the potential for efficient
separation. Optimizing membrane properties to selectively block CO$_2$ while
allowing other gases to pass is crucial for improving carbon capture
efficiency. By integrating these technologies into industrial processes,
significant reductions in greenhouse gas emissions can be achieved, fostering a
circular carbon economy and contributing to global climate goals. This study
also explores how artificial intelligence (AI) can aid in designing membranes
for carbon capture, addressing the global climate change challenge and
supporting the Sustainable Development Goals (SDGs) set by the United Nations.
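The linear-regression parameter estimation described above can be illustrated with a least-squares fit of a simple membrane flux relation; the equation form, units, and numbers below are illustrative assumptions for a sketch, not the paper's model:

```python
import numpy as np

# Illustrative membrane relation: gas flux J = k * (P1 - P2), where k lumps
# permeability over membrane thickness (a Darcy-type form, assumed here).
rng = np.random.default_rng(0)
dp = np.linspace(0.5, 3.0, 20)                       # pressure drop P1 - P2 (MPa)
k_true = 0.045                                       # assumed lumped permeability
flux = k_true * dp + rng.normal(0.0, 1e-3, dp.size)  # synthetic flux measurements

# Least-squares estimate of k (regression through the origin).
k_hat = float(dp @ flux / (dp @ dp))
```

Fitting on synthetic pressure-flux pairs, as in the study, recovers the permeability-like coefficient from the slope of the regression.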
|
2501.13375
|
Bridging The Multi-Modality Gaps of Audio, Visual and Linguistic for
Speech Enhancement
|
cs.SD cs.LG cs.MM eess.AS
|
Speech Enhancement (SE) aims to improve the quality of noisy speech. It has
been shown that additional visual cues can further improve performance. Given
that speech communication involves audio, visual, and linguistic modalities, it
is natural to expect another performance boost by incorporating linguistic
information. However, bridging the modality gaps to efficiently incorporate
linguistic information, along with audio and visual modalities during knowledge
transfer, is a challenging task. In this paper, we propose a novel
multi-modality learning framework for SE. In this framework, a
state-of-the-art diffusion model backbone is utilized for Audio-Visual Speech
Enhancement (AVSE) modeling, where both audio and visual information are
directly captured by microphones and video cameras. On top of this AVSE model,
the linguistic modality employs a pre-trained language model (PLM) to transfer
linguistic knowledge to the visual-acoustic modality through a process termed
Cross-Modal Knowledge Transfer (CMKT) during AVSE model training. After
training, linguistic knowledge is expected to be encoded in the AVSE model's
feature processing via CMKT, so the PLM is not involved during the inference
stage. We carry out SE experiments to evaluate the proposed model framework.
Experimental results demonstrate that our proposed AVSE system significantly
enhances speech quality and reduces generative artifacts, such as phonetic
confusion, compared to the state-of-the-art. Moreover, our visualization results
demonstrate that our Cross-Modal Knowledge Transfer method further improves the
generated speech quality of our AVSE system. These findings not only suggest
that Diffusion Model-based techniques hold promise for advancing the
state-of-the-art in AVSE but also justify the effectiveness of incorporating
linguistic information to improve the performance of Diffusion-based AVSE
systems.
|
2501.13376
|
Scalable Evaluation Framework for Foundation Models in Musculoskeletal
MRI Bridging Computational Innovation with Clinical Utility
|
eess.IV cs.CV
|
Foundation models hold transformative potential for medical imaging, but
their clinical utility requires rigorous evaluation to address their strengths
and limitations. This study introduces an evaluation framework for assessing
the clinical impact and translatability of SAM, MedSAM, and SAM2, using
musculoskeletal MRI as a case study. We tested these models across zero-shot
and finetuned paradigms to assess their ability to process diverse anatomical
structures and effectuate clinically reliable biomarkers, including cartilage
thickness, muscle volume, and disc height. We engineered a modular pipeline
emphasizing scalability, clinical relevance, and workflow integration, reducing
manual effort and aligning validation with end-user expectations. Hierarchical
modeling revealed how dataset mixing, anatomical complexity, and MRI
acquisition parameters influence performance, providing insights into the role
of imaging refinements in improving segmentation accuracy. This work
demonstrates how clinically focused evaluations can connect computational
advancements with tangible applications, creating a pathway for foundation
models to address medical challenges. By emphasizing interdisciplinary
collaboration and aligning technical innovation with clinical priorities, our
framework provides a roadmap for advancing machine learning technologies into
scalable and impactful biomedical solutions.
|
2501.13377
|
Concentration in Governance Control Across Decentralised Finance
Protocols
|
cs.CE
|
Blockchain-based systems are frequently governed through tokens that grant
their holders voting rights over core protocol functions and funds. The
centralisation occurring in Decentralised Finance (DeFi) protocols' token-based
voting systems is typically analysed by examining token holdings' distribution
across addresses. In this paper, we expand this perspective by exploring shared
token holdings of addresses across multiple DeFi protocols. We construct a
Statistically Validated Network (SVN) based on shared governance token holdings
among addresses. Using the links within the SVN, we identify influential
addresses that shape these connections and we conduct a post-hoc analysis to
examine their characteristics and behaviour. Our findings reveal persistent
influential links over time, predominantly involving addresses associated with
institutional investors who maintain significant token supplies across the
sampled protocols. Finally, we observe that token holding patterns and
concentrations tend to shift in response to speculative market cycles.
|
2501.13380
|
On the Massive MIMO Channel Polarization
|
cs.IT math.IT
|
In this work, we demonstrate that an $n \times n$ massive multiple-input
multiple-output (MIMO) channel can be polarized using common matrix
decomposition techniques: singular value decomposition (SVD) and QR
decomposition. With full channel state information (CSI), we show that channel
capacity is always attained by freezing a certain number of the worst
subchannels, given a total power constraint and sufficiently large $n$. We further prove
that the capacity obtained through channel polarization is always greater than
that achieved through channel equalization. Finally, we propose a
low-complexity precoding scheme based on channel polarization, which
outperforms the lattice-reduction-aided precoding scheme, in terms of capacity,
decoding error rate, encoding complexity, and CSIT cost.
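The behaviour described here, freezing the worst subchannels under a total power budget, is what standard water-filling over SVD subchannels produces; the gains below are illustrative, not drawn from the paper:

```python
import numpy as np

def waterfill(gains: np.ndarray, total_power: float) -> np.ndarray:
    """Water-filling power allocation over parallel subchannels.
    Subchannels too weak for the water level get zero power ("frozen")."""
    inv = 1.0 / gains
    order = np.argsort(inv)  # strongest subchannels first
    for k in range(gains.size, 0, -1):
        active = order[:k]
        mu = (total_power + inv[active].sum()) / k  # candidate water level
        p = mu - inv[active]
        if np.all(p > 0):
            power = np.zeros_like(gains)
            power[active] = p
            return power
    return np.zeros_like(gains)

# Subchannel gains, e.g. squared singular values of an SVD-polarized channel.
g = np.array([4.0, 2.0, 1.0, 0.01])
P = waterfill(g, total_power=1.0)  # → [0.625, 0.375, 0.0, 0.0]: two frozen
```

With a tight power budget the two weakest subchannels receive zero power, mirroring the frozen-subchannel structure the paper exploits for its precoder.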
|
2501.13381
|
Do as We Do, Not as You Think: the Conformity of Large Language Models
|
cs.CL
|
Recent advancements in large language models (LLMs) revolutionize the field
of intelligent agents, enabling collaborative multi-agent systems capable of
tackling complex problems across various domains. However, the potential for
conformity within these systems, analogous to phenomena like conformity bias
and groupthink in human group dynamics, remains largely unexplored, raising
concerns about their collective problem-solving capabilities and possible
ethical implications. This paper presents a comprehensive study on conformity
in LLM-driven multi-agent systems, focusing on three aspects: the existence of
conformity, the factors influencing conformity, and potential mitigation
strategies. In particular, we introduce BenchForm, a new conformity-oriented
benchmark, featuring reasoning-intensive tasks and five distinct interaction
protocols designed to probe LLMs' behavior in collaborative scenarios. Several
representative LLMs are evaluated on BenchForm, using metrics such as
conformity rate and independence rate to quantify conformity's impact. Our
analysis delves into factors influencing conformity, including interaction time
and majority size, and examines how the subject agent rationalizes its
conforming behavior. Furthermore, we explore two strategies to mitigate
conformity effects, i.e., developing enhanced personas and implementing a
reflection mechanism. Several interesting findings regarding LLMs' conformity
are derived from empirical results and case studies. We hope that these
insights can pave the way for more robust and ethically-aligned collaborative
AI systems. Our benchmark and code are available at BenchForm.
|
2501.13385
|
Fast and Provable Tensor-Train Format Tensor Completion via
Preconditioned Riemannian Gradient Descent
|
cs.LG cs.NA math.NA math.OC
|
Low-rank tensor completion aims to recover a tensor from partially observed
entries, and it is widely applicable in fields such as quantum computing and
image processing. Due to the significant advantages of the tensor train (TT)
format in handling structured high-order tensors, this paper investigates the
low-rank tensor completion problem based on the TT-format. We propose a
preconditioned Riemannian gradient descent algorithm (PRGD) to solve low
TT-rank tensor completion and establish its linear convergence. Experimental
results on both simulated and real datasets demonstrate the effectiveness of
the PRGD algorithm. On the simulated dataset, the PRGD algorithm reduced the
computation time by two orders of magnitude compared to existing classical
algorithms. In practical applications such as hyperspectral image completion
and quantum state tomography, the PRGD algorithm significantly reduced the
number of iterations, thereby substantially reducing the computational time.
|
2501.13387
|
From Images to Point Clouds: An Efficient Solution for Cross-media Blind
Quality Assessment without Annotated Training
|
cs.CV eess.IV
|
We present a novel quality assessment method, the Distribution-Weighted
Image-Transferred Point Cloud Quality Assessment (DWIT-PCQA), which predicts
the perceptual quality of point clouds from new scenes without available
annotations by leveraging the rich prior knowledge in images. Recognizing the
human visual system (HVS) as the decision-maker in quality assessment
regardless of media types, we can emulate the evaluation criteria for human
perception via neural networks and further transfer the capability of quality
prediction from images to point clouds by leveraging the prior knowledge in the
images. Specifically, domain adaptation (DA) can be leveraged to bridge the
images and point clouds by aligning feature distributions of the two media in
the same feature space. However, the different manifestations of distortions in
images and point clouds make feature alignment a difficult task. To reduce the
alignment difficulty and consider the different distortion distribution during
alignment, we have derived formulas to decompose the optimization objective of
the conventional DA into two suboptimization functions with distortion as a
transition. Specifically, through network implementation, we propose the
distortion-guided biased feature alignment which integrates existing/estimated
distortion distribution into the adversarial DA framework, emphasizing common
distortion patterns during feature alignment. Besides, we propose the
quality-aware feature disentanglement to mitigate the destruction of the
mapping from features to quality during alignment with biased distortions.
Experimental results demonstrate that our proposed method exhibits reliable
performance compared to general blind PCQA methods without needing point cloud
annotations.
|
2501.13389
|
AEON: Adaptive Estimation of Instance-Dependent In-Distribution and
Out-of-Distribution Label Noise for Robust Learning
|
cs.CV
|
Robust training with noisy labels is a critical challenge in image
classification, offering the potential to reduce reliance on costly clean-label
datasets. Real-world datasets often contain a mix of in-distribution (ID) and
out-of-distribution (OOD) instance-dependent label noise, a challenge that is
rarely addressed simultaneously by existing methods and is further compounded
by the lack of comprehensive benchmarking datasets. Furthermore, even though
current noisy-label learning approaches attempt to find noisy-label samples
during training, these methods do not estimate ID and OOD noise rates, which
would improve the selection of such noisy-label samples, and they often rely
on inefficient multi-stage learning algorithms. We
propose the Adaptive Estimation of Instance-Dependent In-Distribution and
Out-of-Distribution Label Noise (AEON) approach to address these research gaps.
AEON is an efficient one-stage noisy-label learning methodology that
dynamically estimates instance-dependent ID and OOD label noise rates to
enhance robustness to complex noise settings. Additionally, we introduce a new
benchmark reflecting real-world ID and OOD noise scenarios. Experiments
demonstrate that AEON achieves state-of-the-art performance on both synthetic
and real-world datasets.
|
2501.13390
|
Beyond Task Diversity: Provable Representation Transfer for Sequential
Multi-Task Linear Bandits
|
cs.LG
|
We study lifelong learning in linear bandits, where a learner interacts with
a sequence of linear bandit tasks whose parameters lie in an $m$-dimensional
subspace of $\mathbb{R}^d$, thereby sharing a low-rank representation. Current
literature typically assumes that the tasks are diverse, i.e., their parameters
uniformly span the $m$-dimensional subspace. This assumption allows the
low-rank representation to be learned before all tasks are revealed, which can
be unrealistic in real-world applications. In this work, we present the first
nontrivial result for sequential multi-task linear bandits without the task
diversity assumption. We develop an algorithm that efficiently learns and
transfers low-rank representations. When facing $N$ tasks, each played over
$\tau$ rounds, our algorithm achieves a regret guarantee of $\tilde{O}\big (Nm
\sqrt{\tau} + N^{\frac{2}{3}} \tau^{\frac{2}{3}} d m^{\frac{1}{3}} + Nd^2 + \tau m
d \big)$ under the ellipsoid action set assumption. This result can
significantly improve upon the baseline of $\tilde{O} \left (Nd
\sqrt{\tau}\right)$ that does not leverage the low-rank structure when the
number of tasks $N$ is sufficiently large and $m \ll d$. We also demonstrate
empirically on synthetic data that our algorithm outperforms baseline
algorithms, which rely on the task diversity assumption.
|
2501.13391
|
Can Large Language Models Understand Preferences in Personalized
Recommendation?
|
cs.CL
|
Large Language Models (LLMs) excel in various tasks, including personalized
recommendations. Existing evaluation methods often focus on rating prediction,
relying on regression errors between actual and predicted ratings. However,
user rating bias and item quality, two influential factors behind rating
scores, can obscure personal preferences in user-item pair data. To address
this, we introduce PerRecBench, disassociating the evaluation from these two
factors and assessing recommendation techniques on capturing the personal
preferences in a grouped ranking manner. We find that the LLM-based
recommendation techniques that are generally good at rating prediction fail to
identify users' favored and disfavored items when the user rating bias and item
quality are eliminated by grouping users. With PerRecBench and 19 LLMs, we find
that while larger models generally outperform smaller ones, they still struggle
with personalized recommendation. Our findings reveal the superiority of
pairwise and listwise ranking approaches over pointwise ranking, PerRecBench's
low correlation with traditional regression metrics, the importance of user
profiles, and the role of pretraining data distributions. We further explore
three supervised fine-tuning strategies, finding that merging weights from
single-format training is promising but improving LLMs' understanding of user
preferences remains an open research problem. Code and data are available at
https://github.com/TamSiuhin/PerRecBench
|
2501.13392
|
Time Series Embedding Methods for Classification Tasks: A Review
|
cs.LG
|
Time series analysis has become crucial in various fields, from engineering
and finance to healthcare and social sciences. In this paper, we present a
comprehensive review and evaluation of time series embedding methods for
effective representations in machine learning and deep learning models. We
introduce a taxonomy of embedding techniques, categorizing them based on their
theoretical foundations and application contexts. Unlike previous surveys, our
work provides a quantitative evaluation of representative methods from each
category by assessing their performance on downstream classification tasks
across diverse real-world datasets. Our experimental results demonstrate that
the performance of embedding methods varies significantly depending on the
dataset and classification algorithm used, highlighting the importance of
careful model selection and extensive experimentation for specific
applications, including engineering systems. To facilitate further research and
practical applications, we provide an open-source code repository implementing
these embedding methods. This study contributes to the field by offering a
systematic comparison of time series embedding techniques, guiding
practitioners in selecting appropriate methods for their specific applications,
and providing a foundation for future advancements in time series analysis.
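As one minimal representative of the feature-based embedding category (an illustrative sketch, not a specific method evaluated in the survey), a time series can be mapped to a fixed-length vector of summary statistics and classified by nearest neighbor:

```python
import math

def stats_embed(series):
    """A minimal fixed-length embedding: mean, standard deviation, and
    mean absolute first difference of the series."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    diff = sum(abs(series[i + 1] - series[i]) for i in range(n - 1)) / (n - 1)
    return (mean, math.sqrt(var), diff)

def nearest_neighbor_label(train, query_embedding):
    """Classify by the label of the closest training embedding.
    `train` is a list of (embedding, label) pairs."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: d2(item[0], query_embedding))[1]
```

Downstream classification accuracy of such an embedding depends heavily on the dataset, which is the survey's central observation.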
|
2501.13394
|
Concurrent Learning with Aggregated States via Randomized Least Squares
Value Iteration
|
cs.LG cs.AI
|
Designing learning agents that explore efficiently in a complex environment
has been widely recognized as a fundamental challenge in reinforcement
learning. While a number of works have demonstrated the effectiveness of
techniques based on randomized value functions on a single agent, it remains
unclear, from a theoretical point of view, whether injecting randomization can
help a society of agents {\it concurrently} explore an environment. The
theoretical results established in this work tender an affirmative answer to
this question. We adapt the concurrent learning framework to
\textit{randomized least-squares value iteration} (RLSVI) with
\textit{aggregated state representation}. We demonstrate polynomial worst-case
regret bounds in both finite- and infinite-horizon environments. In both setups
the per-agent regret decreases at an optimal rate of
$\Theta\left(\frac{1}{\sqrt{N}}\right)$, highlighting the advantage of
concurrent learning. Our algorithm exhibits significantly lower space complexity
compared to \cite{russo2019worst} and \cite{agrawal2021improved}. We reduce the
space complexity by a factor of $K$ while incurring only a $\sqrt{K}$ increase
in the worst-case regret bound, compared to
\citep{agrawal2021improved,russo2019worst}. Additionally, we conduct numerical
experiments to demonstrate our theoretical findings.
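The core randomization in RLSVI is a regularized least-squares fit to noise-perturbed targets; a one-dimensional sketch under assumed values for `lam` and `sigma`, omitting the aggregated-state and multi-agent machinery:

```python
import random

def rlsvi_step(features, targets, lam=1.0, sigma=0.5, seed=0):
    """One randomized least-squares value iteration update for scalar
    features: ridge regression against Gaussian-perturbed targets.
    The target perturbation is what drives exploration in RLSVI;
    `lam` and `sigma` here are illustrative values."""
    rng = random.Random(seed)
    noisy = [y + rng.gauss(0.0, sigma) for y in targets]
    num = sum(f * y for f, y in zip(features, noisy))
    den = lam + sum(f * f for f in features)
    return num / den
```

With `sigma = 0` this reduces to plain ridge regression; different seeds yield different value estimates, which is the source of randomized exploration.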
|
2501.13396
|
Towards Intelligent Design: A Self-driven Framework for Collocated
Clothing Synthesis Leveraging Fashion Styles and Textures
|
cs.CV
|
Collocated clothing synthesis (CCS) has emerged as a pivotal topic in fashion
technology, primarily concerned with the generation of a clothing item that
harmoniously matches a given item. However, previous investigations have relied
on using paired outfits, such as a pair of matching upper and lower clothing,
to train a generative model for achieving this task. This reliance on the
expertise of fashion professionals in the construction of such paired outfits
has engendered a laborious and time-intensive process. In this paper, we
introduce a new self-driven framework, named style- and texture-guided
generative network (ST-Net), to synthesize collocated clothing without the
necessity for paired outfits, leveraging self-supervised learning. ST-Net is
designed to extrapolate fashion compatibility rules from the style and texture
attributes of clothing, using a generative adversarial network. To facilitate
the training and evaluation of our model, we have constructed a large-scale
dataset specifically tailored for unsupervised CCS. Extensive experiments
substantiate that our proposed method outperforms the state-of-the-art
baselines in terms of both visual authenticity and fashion compatibility.
|
2501.13397
|
ExLM: Rethinking the Impact of [MASK] Tokens in Masked Language Models
|
cs.CL cs.LG
|
Masked Language Models (MLMs) have achieved remarkable success in many
self-supervised representation learning tasks. MLMs are trained by randomly
masking portions of the input sequences with [MASK] tokens and learning to
reconstruct the original content based on the remaining context. This paper
explores the impact of [MASK] tokens on MLMs. Analytical studies show that
masking tokens can introduce the corrupted semantics problem, wherein the
corrupted context may convey multiple, ambiguous meanings. This problem is also
a key factor affecting the performance of MLMs on downstream tasks. Based on
these findings, we propose a novel enhanced-context MLM, ExLM. Our approach
expands [MASK] tokens in the input context and models the dependencies between
these expanded states. This enhancement increases context capacity and enables
the model to capture richer semantic information, effectively mitigating the
corrupted semantics problem during pre-training. Experimental results
demonstrate that ExLM achieves significant performance improvements in both
text modeling and SMILES modeling tasks. Further analysis confirms that ExLM
enriches semantic representations through context enhancement, and effectively
reduces the semantic multimodality commonly observed in MLMs.
|
2501.13400
|
YOLOv8 to YOLO11: A Comprehensive Architecture In-depth Comparative
Review
|
cs.CV cs.AI
|
In the field of deep learning-based computer vision, YOLO is revolutionary.
Among deep learning models, YOLO is also one of the most rapidly evolving.
Unfortunately, not every YOLO model is accompanied by a scholarly publication.
Moreover, there exists a YOLO model that lacks a publicly
accessible official architectural diagram. Naturally, this engenders
challenges, such as complicating the understanding of how the model operates in
practice. Furthermore, the review articles that are presently available do not
delve into the specifics of each model. The objective of this study is to
present a comprehensive and in-depth architecture comparison of the four most
recent YOLO models, specifically YOLOv8 through YOLO11, thereby enabling
readers to quickly grasp not only how each model functions, but also the
distinctions between them. To analyze each YOLO version's architecture, we
meticulously examined the relevant academic papers, documentation, and
scrutinized the source code. The analysis reveals that while each version of
YOLO has improvements in architecture and feature extraction, certain blocks
remain unchanged. The lack of scholarly publications and official diagrams
presents challenges for understanding the model's functionality and future
enhancement. Future developers are encouraged to provide these resources.
|
2501.13402
|
VIGS SLAM: IMU-based Large-Scale 3D Gaussian Splatting SLAM
|
cs.RO cs.CV cs.LG
|
Recently, map representations based on radiance fields, such as 3D Gaussian
Splatting and NeRF, which excel at realistic depiction, have attracted
considerable attention, leading to attempts to combine them with SLAM. While
these approaches can build highly realistic maps, large-scale SLAM still
remains a challenge because they require a large number of Gaussian images for
mapping and adjacent images as keyframes for tracking. We propose a novel 3D
Gaussian Splatting SLAM method, VIGS SLAM, that utilizes sensor fusion of RGB-D
and IMU sensors for large-scale indoor environments. To reduce the
computational load of 3DGS-based tracking, we adopt an ICP-based tracking
framework that combines IMU preintegration to provide a good initial guess for
accurate pose estimation. Our work is the first to show that
Gaussian Splatting-based SLAM can be effectively performed in large-scale
environments by integrating IMU sensor measurements. This proposal not only
enhances the performance of Gaussian Splatting SLAM beyond room-scale scenarios
but also achieves SLAM performance comparable to state-of-the-art methods in
large-scale indoor environments.
|
2501.13403
|
ROMA: ROtary and Movable Antenna
|
eess.SP cs.IT math.IT
|
The rotary and movable antenna (ROMA) architecture represents a
next-generation multi-antenna technology that enables flexible adjustment of
antenna position and array rotation angles of the transceiver. In this letter,
we propose a ROMA-aided multi-user MIMO communication system to fully enhance
the efficiency and reliability of system transmissions. By deploying ROMA
panels at both the transmitter and receiver sides, and jointly optimizing the
three-dimensional (3D) rotation angles of each ROMA panel and the relative
positions of antenna elements based on the spatial distribution of users and
channel state information (CSI), we can achieve the objective of maximizing the
average spectral efficiency (SE). Subsequently, we conduct a detailed analysis
of the average SE performance of the system under the consideration of maximum
ratio (MR) precoding. Due to the non-convexity of the optimization problem in
the ROMA multi-user MIMO system, we propose an efficient solution based on an
alternating optimization (AO) algorithm. Finally, simulation results
demonstrate that the AO-based ROMA architecture can significantly improve the
average SE. Furthermore, the performance improvement becomes more pronounced as
the size of the movable region and the transmission power increase.
|
2501.13405
|
Performance Analysis of Fluid Antenna Multiple Access Assisted Wireless
Powered Communication Network
|
cs.IT eess.SP math.IT
|
This paper investigates a novel fluid antenna multiple access (FAMA)-assisted
wireless powered communication network (WPCN), in which a hybrid access point
(HAP) equipped with multiple fixed position antennas (FPAs) provides integrated
data and energy transfer (IDET) services towards low-power devices that are
equipped with a single fluid antenna (FA), while the low-power devices use
harvested energy to power their own uplink transmission. Using the block
correlation channel model, both the downlink and uplink wireless data transfer
(WDT) outage probabilities are analyzed under specific port selection
strategies, including downlink signal-to-interference ratio-based port
selection (DSPS) strategy, downlink energy harvesting power-based port
selection (DEPS) strategy, uplink signal-to-noise ratio-based port selection
(USPS) strategy, and uplink channel-based port selection (UCPS) strategy. A
step function approximation (SFA) approach is also relied upon to derive
closed-form expressions for the outage probabilities, while the lower bounds
for uplink WDT outage probabilities are also formulated. Numerical results
demonstrate the validity of our theoretical analysis, which also provides
useful guidelines for system design through the analytical framework.
|
2501.13412
|
Load and Renewable Energy Forecasting Using Deep Learning for Grid
Stability
|
cs.LG cs.AI
|
As the energy landscape changes quickly, grid operators face several
challenges, especially when integrating renewable energy sources with the grid.
The most important challenge is to balance supply and demand because the solar
and wind energy are highly unpredictable. When dealing with such uncertainty,
trustworthy short-term load and renewable energy forecasting can help stabilize
the grid, maximize energy storage, and guarantee the effective use of renewable
resources. Physical models and statistical techniques were the previous
approaches employed for this kind of forecasting tasks. In forecasting
renewable energy, machine learning and deep learning techniques have recently
demonstrated encouraging results. More specifically, deep learning techniques
such as CNNs and LSTMs, along with conventional machine learning techniques
such as regression, are the ones most utilized for load and renewable energy
forecasting tasks. In this article, we focus mainly on CNN- and LSTM-based
forecasting methods.
|
2501.13414
|
Physics-Aware Sparse Signal Recovery Through PDE-Governed Measurement
Systems
|
cs.IT math.IT
|
This paper introduces a novel framework for physics-aware sparse signal
recovery in measurement systems governed by partial differential equations
(PDEs). Unlike conventional compressed sensing approaches that treat
measurement systems as simple linear systems, our method explicitly
incorporates the underlying physics through numerical PDE solvers and automatic
differentiation (AD). We present physics-aware iterative shrinkage-thresholding
algorithm (PA-ISTA), which combines the computational efficiency of ISTA with
accurate physical modeling to achieve improved signal reconstruction. Using
optical fiber channels as a concrete example, we demonstrate how the nonlinear
Schr\"odinger equation (NLSE) can be integrated into the recovery process. Our
approach leverages deep unfolding techniques for parameter optimization.
Numerical experiments show that PA-ISTA significantly outperforms conventional
recovery methods. While demonstrated on optical fiber systems, our framework
provides a general methodology for physics-aware signal recovery that can be
adapted to various PDE-governed measurement systems.
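The ISTA backbone that PA-ISTA builds on is compact; a minimal pure-Python sketch for a linear measurement operator (the paper replaces this operator with a numerical PDE solver differentiated via AD, which is omitted here):

```python
def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def ista(A, y, lam=0.1, step=0.1, iters=500):
    """Iterative shrinkage-thresholding for
    min_x 0.5*||A x - y||^2 + lam*||x||_1 with a dense matrix A."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient g = A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by the shrinkage (proximal) step
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

In the physics-aware setting, the matrix-vector products above become forward PDE solves and AD-computed adjoints, but the shrinkage structure is unchanged.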
|
2501.13416
|
M3PT: A Transformer for Multimodal, Multi-Party Social Signal Prediction
with Person-aware Blockwise Attention
|
cs.LG cs.AI cs.RO
|
Understanding social signals in multi-party conversations is important for
human-robot interaction and artificial social intelligence. Social signals
include body pose, head pose, speech, and context-specific activities like
acquiring and taking bites of food when dining. Past work in multi-party
interaction tends to build task-specific models for predicting social signals.
In this work, we address the challenge of predicting multimodal social signals
in multi-party settings in a single model. We introduce M3PT, a causal
transformer architecture with modality and temporal blockwise attention masking
to simultaneously process multiple social cues across multiple participants and
their temporal interactions. We train and evaluate M3PT on the Human-Human
Commensality Dataset (HHCD), and demonstrate that using multiple modalities
improves bite timing and speaking status prediction. Source code:
https://github.com/AbrarAnwar/masked-social-signals/.
|
2501.13417
|
GeomGS: LiDAR-Guided Geometry-Aware Gaussian Splatting for Robot
Localization
|
cs.RO cs.CV cs.LG
|
Mapping and localization are crucial problems in robotics and autonomous
driving. Recent advances in 3D Gaussian Splatting (3DGS) have enabled precise
3D mapping and scene understanding by rendering photo-realistic images.
However, existing 3DGS methods often struggle to accurately reconstruct a 3D
map that reflects the actual scale and geometry of the real world, which
degrades localization performance. To address these limitations, we propose a
novel 3DGS method called Geometry-Aware Gaussian Splatting (GeomGS). This
method fully integrates LiDAR data into 3D Gaussian primitives via a
probabilistic approach, as opposed to approaches that only use LiDAR as initial
points or introduce simple constraints for Gaussian points. To this end, we
introduce a Geometric Confidence Score (GCS), which identifies the structural
reliability of each Gaussian point. The GCS is optimized simultaneously with
Gaussians under probabilistic distance constraints to construct a precise
structure. Furthermore, we propose a novel localization method that fully
utilizes both the geometric and photometric properties of GeomGS. Our GeomGS
demonstrates state-of-the-art geometric and localization performance across
several benchmarks, while also improving photometric performance.
|
2501.13418
|
Rethinking the Sample Relations for Few-Shot Classification
|
cs.CV cs.AI
|
Feature quality is paramount for classification performance, particularly in
few-shot scenarios. Contrastive learning, a widely adopted technique for
enhancing feature quality, leverages sample relations to extract intrinsic
features that capture semantic information and has achieved remarkable success
in Few-Shot Learning (FSL). Nevertheless, current few-shot contrastive learning
approaches often overlook the semantic similarity discrepancies at different
granularities when employing the same modeling approach for different sample
relations, which limits the potential of few-shot contrastive learning. In this
paper, we introduce a straightforward yet effective contrastive learning
approach, Multi-Grained Relation Contrastive Learning (MGRCL), as a
pre-training feature learning model to boost few-shot learning by meticulously
modeling sample relations at different granularities. MGRCL categorizes sample
relations into three types: intra-sample relation of the same sample under
different transformations, intra-class relation of homogenous samples, and
inter-class relation of inhomogeneous samples. In MGRCL, we design
Transformation Consistency Learning (TCL) to ensure the rigorous semantic
consistency of a sample under different transformations by aligning predictions
of input pairs. Furthermore, to preserve discriminative information, we employ
Class Contrastive Learning (CCL) to ensure that a sample is always closer to
its homogenous samples than its inhomogeneous ones, as homogenous samples share
similar semantic content while inhomogeneous samples have different semantic
content. Our method is assessed across four popular FSL benchmarks, showing
that such a simple pre-training feature learning method surpasses a majority of
leading FSL methods. Moreover, our method can be incorporated into other FSL
methods as the pre-trained model and help them obtain significant performance
gains.
|
2501.13419
|
A Survey of Code-switched Arabic NLP: Progress, Challenges, and Future
Directions
|
cs.CL
|
Language in the Arab world presents a complex diglossic and multilingual
setting, involving the use of Modern Standard Arabic, various dialects and
sub-dialects, as well as multiple European languages. This diverse linguistic
landscape has given rise to code-switching, both within Arabic varieties and
between Arabic and foreign languages. The widespread occurrence of
code-switching across the region makes it vital to address these linguistic
needs when developing language technologies. In this paper, we provide a review
of the current literature in the field of code-switched Arabic NLP, offering a
broad perspective on ongoing efforts, challenges, research gaps, and
recommendations for future research directions.
|
2501.13420
|
LVFace: Large Vision Model for Face Recognition
|
cs.CV
|
Recently, large vision models have demonstrated powerful representation
capabilities in the field of computer vision. However, we unexpectedly found
that face recognition research is still mainly focused on CNN-based model
architectures, which may lead to suboptimal state-of-the-art (SOTA) performance
in face recognition. Therefore, we study how to orthogonally combine various
loss functions from prior research to train a new state-of-the-art face
recognition model based on large vision models, called LVFace. On the largest
public face database, WebFace42M, we demonstrated the superiority of LVFace
over other advanced face recognition methods and achieved first place in the
ICCV21 MFR-Ongoing challenge as of the submission of this work (December 30,
2024, academic track).
|
2501.13421
|
Perceived Fairness of the Machine Learning Development Process: Concept
Scale Development
|
cs.HC cs.CY cs.LG
|
In machine learning (ML) applications, unfairness is triggered due to bias in
the data, the data curation process, erroneous assumptions, and implicit bias
rendered during the development process. It is also well-accepted by
researchers that fairness in ML application development is highly subjective,
with a lack of clarity of what it means from an ML development and
implementation perspective. Thus, in this research, we investigate and
formalize the notion of the perceived fairness of ML development from a
sociotechnical lens. Our goal in this research is to understand the
characteristics of perceived fairness in ML applications. We address this
research goal using a three-pronged strategy: 1) conducting virtual focus
groups with ML developers, 2) reviewing existing literature on fairness in ML,
and 3) incorporating aspects of justice theory relating to procedural and
distributive justice. Based on our theoretical exposition, we propose
operational attributes of perceived fairness to be transparency,
accountability, and representativeness. These are described in terms of
multiple concepts that comprise each dimension of perceived fairness. We use
this operationalization to empirically validate the notion of perceived
fairness of machine learning (ML) applications from both the ML practitioners'
and users' perspectives. The multidimensional framework for perceived fairness
offers a comprehensive understanding of perceived fairness, which can guide the
creation of fair ML systems with positive implications for society and
businesses.
|
2501.13422
|
Atmospheric Noise-Resilient Image Classification in a Real-World
Scenario: Using Hybrid CNN and Pin-GTSVM
|
cs.CV
|
Parking space occupation detection using deep learning frameworks has seen
significant advancements over the past few years. While these approaches
effectively detect partial obstructions and adapt to varying lighting
conditions, their performance significantly diminishes when haze is present.
This paper proposes a novel hybrid model with a pre-trained feature extractor
and a Pinball Generalized Twin Support Vector Machine (Pin-GTSVM) classifier,
which removes the need for a dehazing system in current state-of-the-art
hazy parking slot classification systems and is also insensitive to any
atmospheric noise. The proposed system can seamlessly integrate with
conventional smart parking infrastructures, leveraging a minimal number of
cameras to monitor and manage hundreds of parking spaces efficiently. Its
effectiveness has been evaluated against established parking space detection
methods using the CNRPark Patches, PKLot, and a custom dataset specific to hazy
parking scenarios. Furthermore, empirical results indicate a significant
improvement in accuracy on a hazy parking system, thus emphasizing efficient
atmospheric noise handling.
|
2501.13426
|
Auto-Prompting SAM for Weakly Supervised Landslide Extraction
|
cs.CV
|
Weakly supervised landslide extraction aims to identify landslide regions
from remote sensing data using models trained with weak labels, particularly
image-level labels. However, it is often challenged by the imprecise boundaries
of the extracted objects due to the lack of pixel-wise supervision and the
properties of landslide objects. To tackle these issues, we propose a simple
yet effective method by auto-prompting the Segment Anything Model (SAM), i.e.,
APSAM. Instead of depending on high-quality class activation maps (CAMs) for
pseudo-labeling or fine-tuning SAM, our method directly yields fine-grained
segmentation masks from SAM inference through prompt engineering. Specifically,
it adaptively generates hybrid prompts from the CAMs obtained by an object
localization network. To provide sufficient information for SAM prompting, an
adaptive prompt generation (APG) algorithm is designed to fully leverage the
visual patterns of CAMs, enabling the efficient generation of pseudo-masks for
landslide extraction. These informative prompts are able to identify the extent
of landslide areas (box prompts) and denote the centers of landslide objects
(point prompts), guiding SAM in landslide segmentation. Experimental results on
high-resolution aerial and satellite datasets demonstrate the effectiveness of
our method, achieving improvements of at least 3.0\% in F1 score and 3.69\% in
IoU compared to other state-of-the-art methods. The source codes and datasets
will be available at https://github.com/zxk688.
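The idea of deriving box and point prompts from a CAM can be sketched on a toy grid; the paper's APG algorithm is more elaborate, and the threshold here is an assumption:

```python
def cam_to_prompts(cam, thresh=0.5):
    """Turn a 2D class activation map into SAM-style prompts: a bounding
    box over above-threshold cells and a point prompt at their
    activation-weighted centroid. Illustrative only."""
    cells = [(r, c, v) for r, row in enumerate(cam)
             for c, v in enumerate(row) if v >= thresh]
    if not cells:
        return None  # no activation above threshold: nothing to prompt
    rows = [r for r, _, _ in cells]
    cols = [c for _, c, _ in cells]
    box = (min(rows), min(cols), max(rows), max(cols))
    total = sum(v for _, _, v in cells)
    point = (sum(r * v for r, _, v in cells) / total,
             sum(c * v for _, c, v in cells) / total)
    return {"box": box, "point": point}
```

The box delimits the extent of the activated region and the point marks its center, mirroring the two prompt types the abstract describes.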
|
2501.13428
|
Softplus Attention with Re-weighting Boosts Length Extrapolation in
Large Language Models
|
cs.CL cs.AI cs.LG
|
Large language models have achieved remarkable success in recent years,
primarily due to the implementation of self-attention mechanisms. However,
traditional Softmax attention suffers from numerical instability and reduced
performance as the length of inference tokens increases. This paper addresses
these issues by decomposing the Softmax operation into a non-linear
transformation and the $l_1$-norm. We identify the latter as essential for
maintaining model performance. By replacing the non-linear transformation with
the Softplus activation function and introducing a dynamic scale factor for
different token lengths based on invariance entropy, we create a novel
attention mechanism with performance better than conventional Softmax attention
across various inference lengths. To further improve the length extrapolation
ability of the proposed attention mechanism, we introduce a fine-tuning-free
re-weighting mechanism that amplifies significant attention weights while
diminishing weaker ones, enabling the model to concentrate more effectively on
relevant tokens without requiring retraining. When combined with our proposed
attention mechanism, this approach demonstrates significant promise in managing
longer sequences, maintaining nearly constant validation loss even at
16$\times$ the training token length while ensuring numerical stability. Our
code is available at: https://github.com/iminfine/freeatten.
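The described attention variant, Softplus in place of the exponential with the $l_1$ normalization retained, can be sketched as follows (the dynamic entropy-based scale factor and the re-weighting mechanism are omitted; a unit scale is assumed):

```python
import math

def softplus(z):
    """Numerically stable log(1 + e^z)."""
    return math.log1p(math.exp(-abs(z))) + max(z, 0.0)

def softplus_attention(scores):
    """Attention weights using Softplus instead of exp, keeping the
    l_1 normalization the abstract identifies as essential."""
    w = [softplus(s) for s in scores]
    z = sum(w)
    return [x / z for x in w]
```

Because Softplus grows linearly rather than exponentially for large scores, the weights avoid the numerical overflow that plagues Softmax at long inference lengths.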
|
2501.13430
|
Wasserstein-regularized Conformal Prediction under General Distribution
Shift
|
cs.LG stat.ML
|
Conformal prediction yields a prediction set with guaranteed $1-\alpha$
coverage of the true target under the i.i.d. assumption, which may not hold and
lead to a gap between $1-\alpha$ and the actual coverage. Prior studies bound
the gap using total variation distance, which cannot identify the gap changes
under distribution shift at a given $\alpha$. Besides, existing methods are
mostly limited to covariate shift, while general joint distribution shifts are
more common in practice but less researched. In response, we first propose a
Wasserstein distance-based upper bound of the coverage gap and analyze the
bound using probability measure pushforwards between the shifted joint data and
conformal score distributions, enabling a separation of the effect of covariate
and concept shifts over the coverage gap. We exploit the separation to design
an algorithm based on importance weighting and regularized representation
learning (WR-CP) to reduce the Wasserstein bound with a finite-sample error
bound. WR-CP achieves a controllable balance between conformal prediction
accuracy and efficiency. Experiments on six datasets prove that WR-CP can
reduce coverage gaps to $3.1\%$ across different confidence levels and outputs
prediction sets 38$\%$ smaller than the worst-case approach on average.
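For reference, the i.i.d. split conformal baseline whose coverage gap under shift the paper bounds can be sketched in a few lines:

```python
import math

def split_conformal_interval(cal_residuals, alpha):
    """Split conformal prediction: given calibration residuals
    |y_i - f(x_i)|, return the half-width q such that
    [f(x) - q, f(x) + q] covers a fresh i.i.d. point with
    probability >= 1 - alpha."""
    n = len(cal_residuals)
    k = math.ceil((n + 1) * (1 - alpha))   # conformal quantile index
    return sorted(cal_residuals)[min(k, n) - 1]
```

Under covariate or concept shift, the calibration and test residuals are no longer exchangeable, so this quantile loses its guarantee; bounding and reducing that gap is exactly what WR-CP targets.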
|
2501.13431
|
Optimizing the Trade-off Between Throughput and PAoI Outage Exponents
|
cs.NI cs.IT math.IT
|
This paper investigates the trade-off between throughput and peak age of
information (PAoI) outage probability in a multi-sensor information collection
system. Each sensor monitors a physical process, periodically samples its
status, and transmits the updates to a central access point over a shared radio
resource. The trade-off arises from the interplay between each sensor's
sampling frequency and the allocation of the shared resource. To optimize this
trade-off, we formulate a joint optimization problem for each sensor's sampling
delay and resource allocation, aiming to minimize a weighted sum of sampling
delay costs (representing a weighted sum of throughput) while satisfying PAoI
outage probability exponent constraints. We derive an optimal solution and
particularly propose a closed-form approximation for large-scale systems. This
approximation provides an explicit expression for an approximately optimal
trade-off, laying a foundation for designing resource-constrained systems in
applications that demand frequent updates and also stringent statistical
timeliness guarantees.
|
2501.13432
|
Emotion estimation from video footage with LSTM
|
cs.CV cs.LG cs.RO
|
Emotion estimation is a field that has been studied for a long time, and
several machine learning approaches exist. In this paper, we present an LSTM
model that processes the blend-shapes produced by the MediaPipe library for a
face detected in a live camera stream, estimating the main emotion from the
facial expressions. The model is trained on the FER2013 dataset and achieves
71% accuracy and a 62% F1-score, meeting the accuracy benchmark of the FER2013
dataset with significantly reduced computation costs.
https://github.com/Samir-atra/Emotion_estimation_from_video_footage_with_LSTM_ML_algorithm
|
2501.13435
|
GC-ConsFlow: Leveraging Optical Flow Residuals and Global Context for
Robust Deepfake Detection
|
cs.CV
|
The rapid development of Deepfake technology has enabled the generation of
highly realistic manipulated videos, posing severe social and ethical
challenges. Existing Deepfake detection methods primarily focused on either
spatial or temporal inconsistencies, often neglecting the interplay between the
two or suffering from interference caused by natural facial motions. To address
these challenges, we propose the global context consistency flow (GC-ConsFlow),
a novel dual-stream framework that effectively integrates spatial and temporal
features for robust Deepfake detection. The global grouped context aggregation
module (GGCA), integrated into the global context-aware frame flow stream
(GCAF), enhances spatial feature extraction by aggregating grouped global
context information, enabling the detection of subtle, spatial artifacts within
frames. The flow-gradient temporal consistency stream (FGTC) uses optical flow
residuals and gradient-based features, rather than modeling the residuals
directly, to improve the robustness of temporal feature extraction against the
inconsistencies introduced by unnatural facial motion. By combining these two
streams, GC-ConsFlow demonstrates effectiveness and robustness in capturing
complementary spatiotemporal forgery traces. Extensive
experiments show that GC-ConsFlow outperforms existing state-of-the-art methods
in detecting Deepfake videos under various compression scenarios.
|
2501.13439
|
One-cycle Structured Pruning with Stability Driven Structure Search
|
cs.CV cs.AI cs.LG
|
Existing structured pruning typically involves multi-stage training
procedures that often demand heavy computation. Pruning at initialization,
which aims to address this limitation, reduces training costs but struggles
with performance. To address these challenges, we propose an efficient
framework for one-cycle structured pruning without compromising model
performance. In this approach, we integrate pre-training, pruning, and
fine-tuning into a single training cycle, referred to as the `one cycle
approach'. The core idea is to search for the optimal sub-network during the
early stages of network training, guided by norm-based group saliency criteria
and structured sparsity regularization. We introduce a novel pruning indicator
that determines the stable pruning epoch by assessing the similarity between
evolving pruning sub-networks across consecutive training epochs. Group
sparsity regularization further accelerates the pruning process and thus the
entire training pipeline. Extensive experiments on datasets, including
CIFAR-10/100, and ImageNet, using VGGNet, ResNet, MobileNet, and ViT
architectures, demonstrate that our method achieves state-of-the-art accuracy
while being one of the most efficient pruning frameworks in terms of training
time. The source code will be made publicly available.
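The stability indicator idea, comparing pruning masks across consecutive epochs, can be sketched with a Jaccard similarity; the paper's exact similarity measure and threshold are assumptions here:

```python
def jaccard(mask_a, mask_b):
    """Similarity between two binary keep-masks over the same channels."""
    kept_a = {i for i, m in enumerate(mask_a) if m}
    kept_b = {i for i, m in enumerate(mask_b) if m}
    union = kept_a | kept_b
    return len(kept_a & kept_b) / len(union) if union else 1.0

def stable_pruning_epoch(epoch_masks, tau=0.9):
    """First epoch whose keep-mask agrees with the previous epoch's mask
    by at least tau: a simple stand-in for the stability-driven pruning
    indicator. Returns None if the masks never stabilize."""
    for e in range(1, len(epoch_masks)):
        if jaccard(epoch_masks[e - 1], epoch_masks[e]) >= tau:
            return e
    return None
```

Once the mask stops changing between epochs, the sub-network search has converged and pruning can be committed within the single training cycle.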
|
2501.13442
|
Billion-scale Similarity Search Using a Hybrid Indexing Approach with
Advanced Filtering
|
cs.IR cs.DB cs.DC cs.LG
|
This paper presents a novel approach for similarity search with complex
filtering capabilities on billion-scale datasets, optimized for CPU inference.
Our method extends the classical IVF-Flat index structure to integrate
multi-dimensional filters. The proposed algorithm combines dense embeddings
with discrete filtering attributes, enabling fast retrieval in high-dimensional
spaces. Designed specifically for CPU-based systems, our disk-based approach
offers a cost-effective solution for large-scale similarity search. We
demonstrate the effectiveness of our method through a case study, showcasing
its potential for various practical uses.
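A toy illustration of the filtered IVF-Flat idea, assuming a pure-Python in-memory index (the class and method names are hypothetical, and the paper's disk-based, billion-scale CPU implementation is far more involved): vectors are bucketed by their nearest centroid, and a query probes only the closest lists, applying the attribute filter before exact re-ranking.

```python
import math

def l2(a, b):
    # Euclidean distance between two dense vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class FilteredIVFFlat:
    # toy IVF-Flat: each vector is assigned to its nearest centroid's
    # inverted list together with its discrete filtering attributes
    def __init__(self, centroids):
        self.centroids = centroids
        self.lists = [[] for _ in centroids]

    def add(self, vec, attrs):
        c = min(range(len(self.centroids)),
                key=lambda i: l2(vec, self.centroids[i]))
        self.lists[c].append((vec, attrs))

    def search(self, query, k, nprobe=1, filt=lambda a: True):
        # probe the nprobe nearest lists, filter by attributes, then rank
        probes = sorted(range(len(self.centroids)),
                        key=lambda i: l2(query, self.centroids[i]))[:nprobe]
        cands = [(l2(query, v), v, a)
                 for i in probes for v, a in self.lists[i] if filt(a)]
        return sorted(cands, key=lambda t: t[0])[:k]
```

Applying the filter inside the probed lists, before distance ranking, is what keeps filtered queries cheap relative to post-filtering a large unfiltered candidate set.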
|
2501.13444
|
Explicit Construction of Classical and Quantum Quasi-Cyclic Low-Density
Parity-Check Codes with Column Weight 2 and Girth 12
|
cs.IT math.IT quant-ph
|
This study proposes an explicit construction method for classical and quantum
quasi-cyclic low-density parity-check (QC-LDPC) codes with a girth of 12. The
proposed method designs parity-check matrices that maximize the girth while
maintaining an orthogonal structure suitable for quantum error correction. By
utilizing algebraic techniques, short cycles are eliminated, which improves
error correction performance. Additionally, this method is extended to
non-binary LDPC codes and spatially-coupled LDPC codes, demonstrating that both
the girth and orthogonality can be preserved. The results of this study enable
the design of high-performance quantum error correction codes without the need
for random search.
|
2501.13448
|
BMG-Q: Localized Bipartite Match Graph Attention Q-Learning for
Ride-Pooling Order Dispatch
|
cs.MA cs.AI cs.ET cs.LG
|
This paper introduces Localized Bipartite Match Graph Attention Q-Learning
(BMG-Q), a novel Multi-Agent Reinforcement Learning (MARL) algorithm framework
tailored for ride-pooling order dispatch. BMG-Q advances the ride-pooling
decision-making process with a localized bipartite match graph underlying the
Markov Decision Process, enabling the development of a novel Graph Attention
Double Deep Q Network (GATDDQN) as the MARL backbone to capture the dynamic
interactions among ride-pooling vehicles in the fleet. Our approach enriches the
state information for each agent with GATDDQN by leveraging a localized
bipartite interdependence graph and enables a centralized global coordinator to
optimize order matching and agent behavior using Integer Linear Programming
(ILP). Enhanced by gradient clipping and localized graph sampling, our GATDDQN
improves scalability and robustness. Furthermore, the inclusion of a posterior
score function in the ILP captures the online exploration-exploitation
trade-off and reduces the potential overestimation bias of agents, thereby
elevating the quality of the derived solutions. Through extensive experiments
and validation, BMG-Q has demonstrated superior performance in both training
and operations for thousands of vehicle agents, outperforming benchmark
reinforcement learning frameworks by around 10% in cumulative rewards and
showing a significant reduction in overestimation bias by over 50%.
Additionally, it maintains robustness amidst task variations and fleet size
changes, establishing BMG-Q as an effective, scalable, and robust framework for
advancing ride-pooling order dispatch operations.
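The centralized matching step can be illustrated with a toy brute-force solver standing in for the ILP (the function name and score-matrix input are assumptions): given a matrix of agent-to-order Q-scores, it returns the one-to-one assignment that maximizes the total score.

```python
from itertools import permutations

def best_assignment(scores):
    # brute-force the matching ILP: maximize the total Q-score over
    # one-to-one agent-order assignments (assumes orders >= agents);
    # a real solver scales this with ILP or the Hungarian algorithm
    n_agents = len(scores)
    n_orders = len(scores[0])
    best, best_total = None, float("-inf")
    for perm in permutations(range(n_orders), n_agents):
        total = sum(scores[i][j] for i, j in enumerate(perm))
        if total > best_total:
            best, best_total = list(perm), total
    return best, best_total
```

In the full framework the scores would come from GATDDQN, augmented by the posterior score function described above, rather than being given directly.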
|
2501.13449
|
MultiDreamer3D: Multi-concept 3D Customization with Concept-Aware
Diffusion Guidance
|
cs.CV
|
While single-concept customization has been studied in 3D, multi-concept
customization remains largely unexplored. To address this, we propose
MultiDreamer3D that can generate coherent multi-concept 3D content in a
divide-and-conquer manner. First, we generate 3D bounding boxes using an
LLM-based layout controller. Next, a selective point cloud generator creates
coarse point clouds for each concept. These point clouds are placed in the 3D
bounding boxes and initialized into 3D Gaussian Splatting with concept labels,
enabling precise identification of concept attributions in 2D projections.
Finally, we refine 3D Gaussians via concept-aware interval score matching,
guided by concept-aware diffusion. Our experimental results show that
MultiDreamer3D not only ensures object presence and preserves the distinct
identities of each concept but also successfully handles complex cases such as
property changes and interactions. To the best of our knowledge, we are the
first to address multi-concept customization in 3D.
|
2501.13451
|
Deep Modularity Networks with Diversity-Preserving Regularization
|
cs.LG
|
Graph clustering plays a crucial role in graph representation learning but
often faces challenges in achieving feature-space diversity. While Deep
Modularity Networks (DMoN) leverage modularity maximization and collapse
regularization to ensure structural separation, they do not explicitly
encourage diversity in the feature space among clusters. We address this
limitation by proposing Deep Modularity Networks with Diversity-Preserving
Regularization (DMoN-DPR), which introduces three novel regularization terms:
distance-based for inter-cluster separation, variance-based for intra-cluster
diversity, and entropy-based for balanced assignments. Our method enhances
clustering performance on benchmark datasets, namely Cora, CiteSeer, PubMed,
Coauthor CS, and Coauthor Physics, achieving significant improvements in
Normalized Mutual Information (NMI) and F1 scores. These results demonstrate
the effectiveness of incorporating diversity-preserving regularizations in
creating meaningful and interpretable clusters, especially in feature-rich
datasets.
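The three regularization terms can be sketched as follows (a hedged pure-Python illustration; the actual method operates on soft cluster assignments and learned node embeddings): mean pairwise centroid distance measures inter-cluster separation, mean squared distance to the centroid measures intra-cluster diversity, and the entropy of the cluster-size distribution rewards balanced assignments.

```python
import math

def inter_cluster_separation(centroids):
    # distance-based term: mean pairwise distance between cluster centroids
    pairs = [(a, b) for i, a in enumerate(centroids)
             for b in centroids[i + 1:]]
    dists = [math.dist(a, b) for a, b in pairs]
    return sum(dists) / len(dists)

def intra_cluster_variance(cluster):
    # variance-based term: mean squared distance to the cluster centroid
    d = len(cluster[0])
    c = [sum(x[j] for x in cluster) / len(cluster) for j in range(d)]
    return sum(math.dist(x, c) ** 2 for x in cluster) / len(cluster)

def assignment_entropy(sizes):
    # entropy-based term: entropy of the cluster-size distribution,
    # maximized when clusters are equally populated
    n = sum(sizes)
    return -sum((s / n) * math.log(s / n) for s in sizes if s > 0)
```

A combined loss would add these terms (with appropriate signs and weights) to DMoN's modularity and collapse objectives.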
|
2501.13452
|
EchoVideo: Identity-Preserving Human Video Generation by Multimodal
Feature Fusion
|
cs.CV
|
Recent advancements in video generation have significantly impacted various
downstream applications, particularly in identity-preserving video generation
(IPT2V). However, existing methods struggle with "copy-paste" artifacts and low
similarity issues, primarily due to their reliance on low-level facial image
information. This dependence can result in rigid facial appearances and
artifacts reflecting irrelevant details. To address these challenges, we
propose EchoVideo, which employs two key strategies: (1) an Identity Image-Text
Fusion Module (IITF) that integrates high-level semantic features from text,
capturing clean facial identity representations while discarding occlusions,
poses, and lighting variations to avoid the introduction of artifacts; (2) a
two-stage training strategy, incorporating a stochastic method in the second
phase to randomly utilize shallow facial information. The objective is to
balance the fidelity gains provided by shallow features against excessive
reliance on them. This strategy encourages the model to
utilize high-level features during training, ultimately fostering a more robust
representation of facial identities. EchoVideo effectively preserves facial
identities and maintains full-body integrity. Extensive experiments demonstrate
that it achieves excellent results in generating videos with high quality,
controllability, and fidelity.
|
2501.13453
|
Spurious Forgetting in Continual Learning of Language Models
|
cs.LG
|
Recent advancements in large language models (LLMs) reveal a perplexing
phenomenon in continual learning: despite extensive training, models experience
significant performance declines, raising questions about task alignment and
underlying knowledge retention. This study first explores the concept of
"spurious forgetting", proposing that such performance drops often reflect a
decline in task alignment rather than true knowledge loss. Through controlled
experiments with a synthesized dataset, we investigate the dynamics of model
performance during the initial training phases of new tasks, discovering that
early optimization steps can disrupt previously established task alignments.
Our theoretical analysis connects these shifts to orthogonal updates in model
weights, providing a robust framework for understanding this behavior.
Ultimately, we introduce a Freezing strategy that fixes the bottom layers of the
model, leading to substantial improvements in four continual learning
scenarios. Our findings underscore the critical distinction between task
alignment and knowledge retention, paving the way for more effective strategies
in continual learning.
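A minimal sketch of the freezing idea, assuming a toy layered model trained with plain SGD (the function name and layer representation are illustrative; the paper freezes the bottom layers of an LLM): updates for the first `n_frozen` layers are discarded, so the task-alignment mapping they are hypothesized to encode survives the early optimization steps of a new task.

```python
def sgd_step(layers, grads, lr, n_frozen):
    # one SGD step over a list of per-layer parameter lists; the bottom
    # n_frozen layers are left untouched to preserve task alignment
    updated = []
    for i, (params, g) in enumerate(zip(layers, grads)):
        if i < n_frozen:
            updated.append(list(params))  # frozen: no gradient applied
        else:
            updated.append([p - lr * gp for p, gp in zip(params, g)])
    return updated
```

In practice this corresponds to setting `requires_grad = False` on the chosen bottom blocks before training on each new task.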
|
2501.13456
|
KAA: Kolmogorov-Arnold Attention for Enhancing Attentive Graph Neural
Networks
|
cs.LG cs.AI
|
Graph neural networks (GNNs) with attention mechanisms, often referred to as
attentive GNNs, have emerged as a prominent paradigm in advanced GNN models in
recent years. However, our understanding of the critical process of scoring
neighbor nodes remains limited, leading to the underperformance of many
existing attentive GNNs. In this paper, we unify the scoring functions of
current attentive GNNs and propose Kolmogorov-Arnold Attention (KAA), which
integrates the Kolmogorov-Arnold Network (KAN) architecture into the scoring
process. KAA enhances the performance of scoring functions across the board and
can be applied to nearly all existing attentive GNNs. To compare the expressive
power of KAA with other scoring functions, we introduce Maximum Ranking
Distance (MRD) to quantitatively estimate their upper bounds in ranking errors
for node importance. Our analysis reveals that, under limited parameters and
constraints on width and depth, both linear transformation-based and MLP-based
scoring functions exhibit finite expressive power. In contrast, our proposed
KAA, even with a single-layer KAN parameterized by zero-order B-spline
functions, demonstrates nearly infinite expressive power. Extensive experiments
on both node-level and graph-level tasks using various backbone models show
that KAA-enhanced scoring functions consistently outperform their original
counterparts, achieving performance improvements of over 20% in some cases.
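A minimal sketch of a zero-order B-spline scoring function in the spirit of KAA (the uniform grid, shared coefficients, and additive form over concatenated features are assumptions, not the paper's exact parameterization): each input coordinate indexes a piecewise-constant learnable activation, and the attention logit is the sum of these activations.

```python
def spline_eval(x, grid_min, grid_max, coeffs):
    # zero-order B-spline: piecewise-constant lookup over a uniform grid
    n = len(coeffs)
    if x <= grid_min:
        return coeffs[0]
    if x >= grid_max:
        return coeffs[-1]
    i = int((x - grid_min) / (grid_max - grid_min) * n)
    return coeffs[min(i, n - 1)]

def kaa_score(h_src, h_dst, coeffs, grid=(-1.0, 1.0)):
    # attention logit for a node pair: sum of per-dimension spline
    # activations over the concatenated source/destination features,
    # i.e. a single-layer KAN in place of a linear or MLP scorer
    return sum(spline_eval(x, grid[0], grid[1], coeffs)
               for x in h_src + h_dst)
```

The coefficients play the role of learnable parameters; softmax over the logits of a node's neighbors would then yield the attention weights as in standard attentive GNNs.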
|
2501.13457
|
Zero-Shot Trajectory Planning for Signal Temporal Logic Tasks
|
cs.RO cs.AI cs.LG cs.SY eess.SY
|
Signal Temporal Logic (STL) is a powerful specification language for
describing complex temporal behaviors of continuous signals, making it
well-suited for high-level robotic task descriptions. However, generating
executable plans for STL tasks is challenging, as it requires consideration of
the coupling between the task specification and the system dynamics. Existing
approaches either follow a model-based setting that explicitly requires
knowledge of the system dynamics or adopt a task-oriented data-driven approach
to learn plans for specific tasks. In this work, we investigate the problem of
generating executable STL plans for systems whose dynamics are unknown a
priori. We propose a new planning framework that uses only task-agnostic data
during the offline training stage, enabling zero-shot generalization to new STL
tasks. Our framework is hierarchical, involving: (i) decomposing the STL task
into a set of progress and time constraints, (ii) searching for time-aware
waypoints guided by task-agnostic data, and (iii) generating trajectories using
a pre-trained safe diffusion model. Simulation results demonstrate the
effectiveness of our method in achieving zero-shot generalization to various
STL tasks.
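The quantitative semantics underlying STL planning can be sketched for the basic operators (these are the standard textbook robustness definitions, not the paper's planning framework): a predicate's robustness is a signed margin, "eventually" takes the maximum over a time window, and "always" takes the minimum.

```python
def pred_robustness(x, center, radius):
    # signed margin of the predicate |x - center| <= radius:
    # positive when satisfied, negative when violated
    return radius - abs(x - center)

def eventually(signal, a, b, pred):
    # robustness of F_[a,b] pred over a list of (time, value) samples:
    # satisfied if the predicate holds at some time in the window
    return max(pred(x) for t, x in signal if a <= t <= b)

def always(signal, a, b, pred):
    # robustness of G_[a,b] pred: the predicate must hold throughout
    return min(pred(x) for t, x in signal if a <= t <= b)
```

A planner can treat these robustness values as progress constraints: a trajectory satisfies the STL task exactly when the top-level robustness is positive.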
|