| id | title | categories | abstract |
|---|---|---|---|
2501.04120
|
Bridging Impulse Control of Piecewise Deterministic Markov Processes and
Markov Decision Processes: Frameworks, Extensions, and Open Challenges
|
stat.ME cs.SY eess.SY
|
Control theory plays a pivotal role in understanding and optimizing the
behavior of complex dynamical systems across various scientific and engineering
disciplines. Two key frameworks that have emerged for modeling and solving
control problems in stochastic systems are piecewise deterministic Markov
processes (PDMPs) and Markov decision processes (MDPs). Each framework has its
unique strengths, and their intersection offers promising opportunities for
tackling a broad class of problems, particularly in the context of impulse
controls and decision-making in complex systems.
The relationship between PDMPs and MDPs is a natural subject of exploration,
as embedding impulse control problems for PDMPs into the MDP framework could
open new avenues for their analysis and resolution. Specifically, this
integration would allow leveraging the computational and theoretical tools
developed for MDPs to address the challenges inherent in PDMPs. On the other
hand, PDMPs can offer a versatile and simple paradigm for modeling continuous-time
problems that are often described as discrete-time MDPs parametrized by complex
transition kernels. This transformation has the potential to bridge the gap
between the two frameworks, enabling solutions to previously intractable
problems and expanding the scope of both fields. This paper presents a
comprehensive review of two research domains, illustrated through a recurring
medical example. The example is revisited and progressively formalized within
the framework of the various concepts and objects introduced.
|
2501.04121
|
Graph-Based Multimodal and Multi-view Alignment for Keystep Recognition
|
cs.CV
|
Egocentric videos capture scenes from a wearer's viewpoint, resulting in
dynamic backgrounds, frequent motion, and occlusions, posing challenges to
accurate keystep recognition. We propose a flexible graph-learning framework
for fine-grained keystep recognition that is able to effectively leverage
long-term dependencies in egocentric videos, and leverage alignment between
egocentric and exocentric videos during training for improved inference on
egocentric videos. Our approach consists of constructing a graph where each
video clip of the egocentric video corresponds to a node. During training, we
consider each clip of each exocentric video (if available) as additional nodes.
We examine several strategies to define connections across these nodes and pose
keystep recognition as a node classification task on the constructed graphs. We
perform extensive experiments on the Ego-Exo4D dataset and show that our
proposed flexible graph-based framework notably outperforms existing methods by
more than 12 points in accuracy. Furthermore, the constructed graphs are sparse
and computationally efficient. We also present a study examining the use of
several multimodal features, including narrations, depth, and object class
labels, on a heterogeneous graph, and discuss their corresponding contributions
to keystep recognition performance.
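The pipeline this abstract describes (clips as graph nodes, keystep recognition as node classification) can be sketched with a single message-passing round followed by nearest-prototype classification. The features, adjacency, and class prototypes below are invented toy data, not the paper's actual model:

```python
# Toy sketch: keystep recognition as node classification on a clip graph.
# Each node is a video clip with a feature vector; one round of neighbor
# averaging smooths features before a nearest-prototype classification.
# All data here is invented for illustration.

def propagate(features, adjacency):
    """One round of mean aggregation over each node's neighborhood (incl. self)."""
    smoothed = []
    for i, feat in enumerate(features):
        neigh = adjacency[i] + [i]
        dim = len(feat)
        agg = [sum(features[j][d] for j in neigh) / len(neigh) for d in range(dim)]
        smoothed.append(agg)
    return smoothed

def classify(features, prototypes):
    """Assign each node the keystep class whose prototype has the largest dot product."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [max(range(len(prototypes)), key=lambda c: dot(f, prototypes[c]))
            for f in features]

# Three clips: clips 0 and 1 are temporally adjacent, clip 2 is isolated.
features = [[1.0, 0.0], [0.8, 0.3], [0.0, 1.0]]
adjacency = [[1], [0], []]
prototypes = [[1.0, 0.0], [0.0, 1.0]]  # two keystep classes

labels = classify(propagate(features, adjacency), prototypes)
print(labels)  # [0, 0, 1]
```

A real system would use learned clip embeddings and exocentric clips as extra nodes during training; the mechanics of propagate-then-classify are the same.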
|
2501.04126
|
Stochastic Process Learning via Operator Flow Matching
|
cs.LG
|
Expanding on neural operators, we propose a novel framework for stochastic
process learning across arbitrary domains. In particular, we develop operator
flow matching (OFM) for learning stochastic process priors on function spaces.
OFM provides the probability density of the values of any collection of points
and enables mathematically tractable functional regression at new points with
mean and density estimation. Our method outperforms state-of-the-art models in
stochastic process learning, functional regression, and prior learning.
|
2501.04134
|
Mixing Times and Privacy Analysis for the Projected Langevin Algorithm
under a Modulus of Continuity
|
stat.ML cs.LG math.OC math.ST stat.TH
|
We study the mixing time of the projected Langevin algorithm (LA) and the
privacy curve of noisy Stochastic Gradient Descent (SGD), beyond nonexpansive
iterations. Specifically, we derive new mixing time bounds for the projected LA
which are, in some important cases, dimension-free and polylogarithmic in the
accuracy, closely matching the existing results in the smooth convex case.
Additionally, we establish new upper bounds for the privacy curve of the
subsampled noisy SGD algorithm. These bounds show a crucial dependency on the
regularity of gradients, and are useful for a wide range of convex losses
beyond the smooth case. Our analysis relies on a suitable extension of the
Privacy Amplification by Iteration (PABI) framework (Feldman et al., 2018;
Altschuler and Talwar, 2022, 2023) to noisy iterations whose gradient map is
not necessarily nonexpansive. This extension is achieved by designing an
optimization problem which accounts for the best possible R\'enyi divergence
bound obtained by an application of PABI, where the tractability of the problem
is crucially related to the modulus of continuity of the associated gradient
mapping. We show that, in several interesting cases -- including the nonsmooth
convex, weakly smooth, and (strongly) dissipative settings -- this optimization
problem can be solved exactly and explicitly. This yields the tightest possible
PABI-based bounds, where our results are either new or substantially sharper
than those in previous works.
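The projected Langevin update itself is simple: a gradient step, Gaussian noise injection, then projection back onto the constraint set. A minimal sketch on a quadratic loss over an interval (the loss, step size, and step count are illustrative choices, not from the paper):

```python
import random

def projected_langevin(grad, project, x0, eta, n_steps, seed=0):
    """Projected Langevin algorithm: x <- Proj(x - eta*grad(x) + sqrt(2*eta)*Z)."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        noise = rng.gauss(0.0, 1.0)
        x = project(x - eta * grad(x) + (2 * eta) ** 0.5 * noise)
    return x

# Toy target: minimize f(x) = (x - 2)^2 / 2 constrained to the interval [-1, 1].
grad = lambda x: x - 2.0
project = lambda x: max(-1.0, min(1.0, x))

x_final = projected_langevin(grad, project, x0=0.0, eta=0.05, n_steps=1000)
print(-1.0 <= x_final <= 1.0)  # True: iterates always stay in the constraint set
```

The iterates sample (approximately) from a Gibbs distribution restricted to the constraint set; the paper's analysis concerns how fast that distribution is reached and what it implies for privacy of noisy SGD.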
|
2501.04136
|
Implementing Systemic Thinking for Automatic Schema Matching: An
Agent-Based Modeling Approach
|
cs.AI cs.MA
|
Several approaches have been proposed to deal with the problem of Automatic
Schema Matching (ASM). The challenges and difficulties caused by the complexity
and uncertainty characterizing both the process and the outcome of schema
matching motivated us to investigate how an emerging bio-inspired paradigm can
help with understanding, managing, and ultimately overcoming those challenges.
In this paper, we explain how we approached Automatic Schema Matching as a
systemic and Complex Adaptive System (CAS) and how we modeled it using
Agent-Based Modeling and Simulation (ABMS). This effort gave birth
to a prototype tool for schema matching called Reflex-SMAS. A set of
experiments demonstrates the viability of our approach on two main aspects: (i)
effectiveness (increasing the quality of the found matchings) and (ii)
efficiency (reducing the effort required to find them). Our approach
represents a significant paradigm shift in the field of Automatic Schema
Matching.
|
2501.04138
|
"Yeah Right!" -- Do LLMs Exhibit Multimodal Feature Transfer?
|
cs.CL
|
Human communication is a multifaceted and multimodal skill. Communication
requires an understanding of both the surface-level textual content and the
connotative intent of a piece of communication. In humans, learning to go
beyond the surface level starts by learning communicative intent in speech.
Once humans acquire these skills in spoken communication, they transfer those
skills to written communication. In this paper, we assess the ability of
speech+text models and text models trained with special emphasis on
human-to-human conversations to make this multimodal transfer of skill. We
specifically test these models on their ability to detect covert deceptive
communication. We find that, with no special prompting, speech+text LLMs have
an advantage over unimodal LLMs in performing this task. Likewise, we find that
human-to-human conversation-trained LLMs are also advantaged in this skill.
|
2501.04141
|
Hardware-In-The-Loop Training of a 4f Optical Correlator with
Logarithmic Complexity Reduction for CNNs
|
cs.NE
|
This work evaluates a forward-only learning algorithm on the MNIST dataset
with hardware-in-the-loop training of a 4f optical correlator, achieving 87.6%
accuracy with O(n^2) complexity, compared to backpropagation, which achieves
88.8% accuracy with O(n^2 log n) complexity.
|
2501.04142
|
BiasGuard: Guardrailing Fairness in Machine Learning Production Systems
|
cs.LG cs.AI cs.CY
|
As machine learning (ML) systems increasingly impact critical sectors such as
hiring, financial risk assessments, and criminal justice, the imperative to
ensure fairness has intensified due to potential negative implications. While
much ML fairness research has focused on enhancing training data and processes,
addressing the outputs of already deployed systems has received less attention.
This paper introduces 'BiasGuard', a novel approach designed to act as a
fairness guardrail in production ML systems. BiasGuard leverages Test-Time
Augmentation (TTA) powered by a Conditional Tabular Generative Adversarial
Network (CTGAN), a cutting-edge generative AI model, to synthesize data samples
conditioned on inverted protected attribute values, thereby promoting equitable
outcomes across diverse groups. This method aims to provide equal opportunities
for both privileged and unprivileged groups while significantly enhancing the
fairness metrics of deployed systems without the need for retraining. Our
comprehensive experimental analysis across diverse datasets reveals that
BiasGuard enhances fairness by 31% while only reducing accuracy by 0.09%
compared to non-mitigated benchmarks. Additionally, BiasGuard outperforms
existing post-processing methods in improving fairness, positioning it as an
effective tool to safeguard against biases when retraining the model is
impractical.
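The core test-time idea, independent of which generative model produces the counterfactuals, is to score each input twice: once as observed and once with the protected attribute inverted, then combine the two predictions. A minimal sketch with a hand-written biased scorer standing in for the deployed model (no CTGAN involved; the feature names and weights are invented for illustration):

```python
# Toy sketch of attribute-inversion test-time augmentation for fairness.
# A deliberately biased scorer stands in for a deployed model; averaging its
# score on the original input and on the attribute-flipped counterfactual
# removes the direct dependence on the protected attribute.

def biased_model(features, protected):
    """Stand-in deployed model: adds an unfair bonus for the privileged group."""
    base = 0.5 * features["income"] + 0.3 * features["tenure"]
    return base + (0.2 if protected == 1 else 0.0)

def guarded_predict(features, protected):
    """Average the model's score over both values of the protected attribute."""
    return 0.5 * (biased_model(features, protected)
                  + biased_model(features, 1 - protected))

applicant = {"income": 1.0, "tenure": 0.5}
raw_gap = biased_model(applicant, 1) - biased_model(applicant, 0)
guarded_gap = guarded_predict(applicant, 1) - guarded_predict(applicant, 0)
print(raw_gap > 0.1, abs(guarded_gap) < 1e-12)  # True True
```

BiasGuard's contribution is generating realistic attribute-flipped counterfactuals with a CTGAN rather than naively toggling one field, but the guardrail logic wraps the deployed model in the same way.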
|
2501.04144
|
Chirpy3D: Continuous Part Latents for Creative 3D Bird Generation
|
cs.CV cs.GR
|
In this paper, we push the boundaries of fine-grained 3D generation into
truly creative territory. Current methods either lack intricate details or
simply mimic existing objects -- we enable both. By lifting 2D fine-grained
understanding into 3D through multi-view diffusion and modeling part latents as
continuous distributions, we unlock the ability to generate entirely new, yet
plausible parts through interpolation and sampling. A self-supervised feature
consistency loss further ensures stable generation of these unseen parts. The
result is the first system capable of creating novel 3D objects with
species-specific details that transcend existing examples. While we demonstrate
our approach on birds, the underlying framework extends beyond things that can
chirp! Code will be released at https://github.com/kamwoh/chirpy3d.
|
2501.04150
|
Benchmarking Large and Small MLLMs
|
cs.CV
|
Large multimodal language models (MLLMs) such as GPT-4V and GPT-4o have
achieved remarkable advancements in understanding and generating multimodal
content, showcasing superior quality and capabilities across diverse tasks.
However, their deployment faces significant challenges, including slow
inference, high computational cost, and impracticality for on-device
applications. In contrast, the emergence of small MLLMs, exemplified by the
LLaVA-series models and Phi-3-Vision, offers promising alternatives with faster
inference, reduced deployment costs, and the ability to handle domain-specific
scenarios. Despite their growing presence, the capability boundaries between
large and small MLLMs remain underexplored. In this work, we conduct a
systematic and comprehensive evaluation to benchmark both small and large
MLLMs, spanning general capabilities such as object recognition, temporal
reasoning, and multimodal comprehension, as well as real-world applications in
domains like industry and automotive. Our evaluation reveals that small MLLMs
can achieve comparable performance to large models in specific scenarios but
lag significantly in complex tasks requiring deeper reasoning or nuanced
understanding. Furthermore, we identify common failure cases in both small and
large MLLMs, highlighting domains where even state-of-the-art models struggle.
We hope our findings will guide the research community in pushing the quality
boundaries of MLLMs, advancing their usability and effectiveness across diverse
applications.
|
2501.04153
|
Multilingual Open QA on the MIA Shared Task
|
cs.CL cs.LG
|
Cross-lingual information retrieval (CLIR)~\cite{shi2021cross, asai2021one,
jiang2020cross}, for example, can find relevant text in any language, such as
English (high-resource) or Telugu (low-resource), even when the query is posed in
a different, possibly low-resource, language. In this work, we aim to develop
useful CLIR models for this constrained, yet important, setting, where we do not
require any additional supervision or labelled data for the retrieval task
and hence can work effectively for low-resource languages.
\par We propose a simple and effective re-ranking method for improving
passage retrieval in open question answering. The re-ranker re-scores retrieved
passages with a zero-shot multilingual question generation model, which is a
pre-trained language model, to compute the probability of the input question in
the target language conditioned on a retrieved passage, which can be possibly
in a different language. We evaluate our method in a completely zero-shot
setting that does not require any training. The main advantage of our method
is that it can be used to re-rank results obtained by any sparse
retrieval method, such as BM25. This eliminates the need to obtain the
expensive labelled corpora required for retrieval tasks, and hence the method
can be used for low-resource languages.
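The re-ranking step reduces to sorting retrieved passages by the (log-)probability a question-generation model assigns to the question given each passage. Below, a trivial token-overlap scorer stands in for that model's log P(question | passage); the scorer and the candidate passages are invented for illustration:

```python
import math

def overlap_log_score(question, passage):
    """Stand-in for log P(question | passage) from a question-generation
    model. Here: log of smoothed token overlap, purely for illustration."""
    q_tokens = set(question.lower().split())
    p_tokens = set(passage.lower().split())
    return math.log((len(q_tokens & p_tokens) + 1) / (len(q_tokens) + 1))

def rerank(question, retrieved_passages):
    """Re-score BM25-style candidates and sort by descending score."""
    return sorted(retrieved_passages,
                  key=lambda p: overlap_log_score(question, p),
                  reverse=True)

question = "where is the Eiffel Tower"
candidates = [
    "The Great Wall is in China",
    "The Eiffel Tower is in Paris",
    "Paris is the capital of France",
]
print(rerank(question, candidates)[0])  # The Eiffel Tower is in Paris
```

In the actual method the scorer is a pre-trained multilingual language model, which is what lets the question and passage be in different languages.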
|
2501.04155
|
MM-GEN: Enhancing Task Performance Through Targeted Multimodal Data
Curation
|
cs.CV cs.CL cs.LG
|
Vision-language models (VLMs) are highly effective but often underperform on
specialized tasks; for example, Llava-1.5 struggles with chart and diagram
understanding due to scarce task-specific training data. Existing training
data, sourced from general-purpose datasets, fails to capture the nuanced
details needed for these tasks. We introduce MM-Gen, a scalable method that
generates task-specific, high-quality synthetic text for candidate images by
leveraging stronger models. MM-Gen employs a three-stage targeted process:
partitioning data into subgroups, generating targeted text based on task
descriptions, and filtering out redundant and outlier data. Fine-tuning VLMs
with data generated by MM-Gen leads to significant performance gains, including
29% on spatial reasoning and 15% on diagram understanding for Llava-1.5 (7B).
Compared to human-curated caption data, MM-Gen achieves up to 1.6x better
improvements for the original models, proving its effectiveness in enhancing
task-specific VLM performance and bridging the gap between general-purpose
datasets and specialized requirements. Code available at
https://github.com/sjoshi804/MM-Gen.
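MM-Gen's three stages (partition into subgroups, generate targeted text, filter redundant or outlier samples) form a straightforward pipeline. A skeletal sketch in which a stub generator replaces the stronger model; the grouping key, stub caption text, and duplicate-only filter rule are assumptions for illustration:

```python
# Skeletal sketch of a partition -> generate -> filter curation pipeline.
# `stub_generate` stands in for querying a stronger model with a
# task-specific prompt.

def partition(images, key):
    """Stage 1: group candidate images into subgroups."""
    groups = {}
    for img in images:
        groups.setdefault(key(img), []).append(img)
    return groups

def stub_generate(img, task_description):
    """Stage 2 placeholder: a stronger model would produce targeted text."""
    return f"{task_description}: synthetic caption for {img['id']}"

def dedupe_filter(samples):
    """Stage 3: drop exact-duplicate captions; a real filter would also
    remove outliers."""
    seen, kept = set(), []
    for img, caption in samples:
        if caption not in seen:
            seen.add(caption)
            kept.append((img, caption))
    return kept

images = [{"id": "chart_01", "type": "chart"},
          {"id": "chart_01", "type": "chart"},   # duplicate image
          {"id": "diagram_07", "type": "diagram"}]

curated = []
for group, imgs in partition(images, key=lambda im: im["type"]).items():
    task = f"describe this {group} precisely"
    curated.extend(dedupe_filter([(im, stub_generate(im, task)) for im in imgs]))

print(len(curated))  # 2: the duplicate chart caption was filtered out
```

The resulting (image, text) pairs are what the VLM is then fine-tuned on.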
|
2501.04160
|
Collaborative Spacecraft Servicing under Partial Feedback using
Lyapunov-based Deep Neural Networks
|
eess.SY cs.SY math.OC
|
Multi-agent systems are increasingly applied in space missions, including
distributed space systems, resilient constellations, and autonomous rendezvous
and docking operations. A critical emerging application is collaborative
spacecraft servicing, which encompasses on-orbit maintenance, space debris
removal, and swarm-based satellite repositioning. These missions involve
servicing spacecraft interacting with malfunctioning or defunct spacecraft
under challenging conditions, such as limited state information, measurement
inaccuracies, and erratic target behaviors. Existing approaches often rely on
assumptions of full state knowledge or single-integrator dynamics, which are
impractical for real-world applications involving second-order spacecraft
dynamics. This work addresses these challenges by developing a distributed
state estimation and tracking framework that requires only relative position
measurements and operates under partial state information. A novel
$\rho$-filter is introduced to reconstruct unknown states using locally
available information, and a Lyapunov-based deep neural network adaptive
controller is developed that adaptively compensates for uncertainties stemming
from unknown spacecraft dynamics. To ensure the collaborative spacecraft
regulation problem is well-posed, a trackability condition is defined. A
Lyapunov-based stability analysis is provided to ensure exponential convergence
of errors in state estimation and spacecraft regulation to a neighborhood of
the origin under the trackability condition. The developed method eliminates
the need for expensive velocity sensors or extensive pre-training, offering a
practical and robust solution for spacecraft servicing in complex, dynamic
environments.
|
2501.04161
|
KGIF: Optimizing Relation-Aware Recommendations with Knowledge Graph
Information Fusion
|
cs.LG cs.IR
|
While deep-learning-enabled recommender systems demonstrate strong
performance benchmarks, many struggle to adapt effectively in real-world
environments due to limited use of user-item relationship data and insufficient
transparency in recommendation generation. Traditional collaborative filtering
approaches fail to integrate multifaceted item attributes, and although
Factorization Machines account for item-specific details, they overlook broader
relational patterns. Collaborative knowledge graph-based models have progressed
by embedding user-item interactions with item-attribute relationships, offering
a holistic perspective on interconnected entities. However, these models
frequently aggregate attribute and interaction data in an implicit manner,
leaving valuable relational nuances underutilized.
This study introduces the Knowledge Graph Attention Network with Information
Fusion (KGIF), a specialized framework designed to merge entity and relation
embeddings explicitly through a tailored self-attention mechanism. The KGIF
framework integrates reparameterization via dynamic projection vectors,
enabling embeddings to adaptively represent intricate relationships within
knowledge graphs. This explicit fusion enhances the interplay between user-item
interactions and item-attribute relationships, providing a nuanced balance
between user-centric and item-centric representations. An attentive propagation
mechanism further optimizes knowledge graph embeddings, capturing multi-layered
interaction patterns. The contributions of this work include an innovative
method for explicit information fusion, improved robustness for sparse
knowledge graphs, and the ability to generate explainable recommendations
through interpretable path visualization.
|
2501.04164
|
Holographic Metasurface-Based Beamforming for Multi-Altitude LEO
Satellite Networks
|
cs.IT eess.SP math.IT
|
Low Earth Orbit (LEO) satellite networks are capable of improving global
Internet service coverage. In this context, we propose a hybrid beamforming
design for holographic metasurface-based terrestrial users in multi-altitude
LEO satellite networks. Firstly, the holographic beamformer is optimized by
maximizing the downlink channel gain from the serving satellite to the
terrestrial user. Then, the digital beamformer is designed by conceiving a
minimum mean square error (MMSE) based detection algorithm for mitigating the
interference arriving from other satellites. To dispense with excessive
overhead of full channel state information (CSI) acquisition of all satellites,
we propose a low-complexity MMSE beamforming algorithm that only relies on the
distribution of the LEO satellite constellation harnessing stochastic geometry,
which can achieve comparable throughput to that of the algorithm based on the
full CSI in the case of a dense LEO satellite deployment. Furthermore, it
outperforms the maximum ratio combining (MRC) algorithm, thanks to its
inter-satellite interference mitigation capacity. The simulation results show
that our proposed holographic metasurface based hybrid beamforming architecture
is capable of outperforming the state-of-the-art antenna array architecture in
terms of its throughput, given the same physical size of the transceivers.
Moreover, we demonstrate that the beamforming performance attained can be
substantially improved by taking into account the mutual coupling effect,
imposed by the dense placement of the holographic metasurface elements.
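The interference-mitigation advantage of MMSE combining over MRC can be seen in a two-antenna toy receiver: the MMSE weight whitens interference from a second satellite, while MRC only matches the serving channel. The channels and noise level below are invented illustrative values, not from the paper:

```python
# Toy MMSE vs. MRC receive combining with one interfering satellite.

def mat2_inv(m):
    """Inverse of a 2x2 complex matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sinr(w, h, g, sigma2):
    """Post-combining SINR for weight w, serving channel h, interferer g."""
    sig = abs(sum(wi.conjugate() * hi for wi, hi in zip(w, h))) ** 2
    intf = abs(sum(wi.conjugate() * gi for wi, gi in zip(w, g))) ** 2
    noise = sigma2 * sum(abs(wi) ** 2 for wi in w)
    return sig / (intf + noise)

h = [1 + 0j, 0.9 + 0.1j]    # serving-satellite channel (illustrative)
g = [0.8 - 0.2j, 0.7 + 0j]  # interfering-satellite channel (illustrative)
sigma2 = 0.1

# Interference-plus-noise covariance R = g g^H + sigma2*I, then w = R^{-1} h.
R = [[g[0] * g[0].conjugate() + sigma2, g[0] * g[1].conjugate()],
     [g[1] * g[0].conjugate(), g[1] * g[1].conjugate() + sigma2]]
Ri = mat2_inv(R)
w_mmse = [Ri[0][0] * h[0] + Ri[0][1] * h[1],
          Ri[1][0] * h[0] + Ri[1][1] * h[1]]
w_mrc = h  # maximum ratio combining just matches the serving channel

print(sinr(w_mmse, h, g, sigma2) > sinr(w_mrc, h, g, sigma2))  # True
```

The paper's low-complexity variant replaces the per-satellite CSI inside R with statistics derived from the constellation's stochastic-geometry model; the combining structure is unchanged.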
|
2501.04167
|
Reasoning-Enhanced Self-Training for Long-Form Personalized Text
Generation
|
cs.CL cs.AI cs.IR
|
Personalized text generation requires a unique ability of large language
models (LLMs) to learn from context that they often do not encounter during
their standard training. One way to encourage LLMs to better use personalized
context for generating outputs that better align with the user's expectations
is to instruct them to reason over the user's past preferences, background
knowledge, or writing style. To achieve this, we propose Reasoning-Enhanced
Self-Training for Personalized Text Generation (REST-PG), a framework that
trains LLMs to reason over personal data during response generation. REST-PG
first generates reasoning paths to train the LLM's reasoning abilities and then
employs Expectation-Maximization Reinforced Self-Training to iteratively train
the LLM based on its own high-reward outputs. We evaluate REST-PG on the
LongLaMP benchmark, consisting of four diverse personalized long-form text
generation tasks. Our experiments demonstrate that REST-PG achieves significant
improvements over state-of-the-art baselines, with an average relative
performance gain of 14.5% on the benchmark.
|
2501.04169
|
Learning to Transfer Human Hand Skills for Robot Manipulations
|
cs.RO cs.AI cs.LG
|
We present a method for teaching dexterous manipulation tasks to robots from
human hand motion demonstrations. Unlike existing approaches that solely rely
on kinematics information without taking into account the plausibility of robot
and object interaction, our method directly infers plausible robot manipulation
actions from human motion demonstrations. To address the embodiment gap between
the human hand and the robot system, our approach learns a joint motion
manifold that maps human hand movements, robot hand actions, and object
movements in 3D, enabling us to infer one motion component from others. Our key
idea is the generation of pseudo-supervision triplets, which pair human,
object, and robot motion trajectories synthetically. Through real-world
experiments with robot hand manipulation, we demonstrate that our data-driven
retargeting method significantly outperforms conventional retargeting
techniques, effectively bridging the embodiment gap between human and robotic
hands. Website at https://rureadyo.github.io/MocapRobot/.
|
2501.04170
|
A Bayesian Modeling Framework for Estimation and Ground Segmentation of
Cluttered Staircases
|
cs.RO
|
Autonomous robot navigation in complex environments requires robust
perception as well as high-level scene understanding due to perceptual
challenges, such as occlusions, and uncertainty introduced by robot movement.
For example, a robot climbing a cluttered staircase can misinterpret clutter as
a step, misrepresenting the state and compromising safety. This requires robust
state estimation methods capable of inferring the underlying structure of the
environment even from incomplete sensor data. In this paper, we introduce a
novel method for robust state estimation of staircases. To address the
challenge of perceiving occluded staircases extending beyond the robot's
field-of-view, our approach combines an infinite-width staircase representation
with a finite endpoint state to capture the overall staircase structure. This
representation is integrated into a Bayesian inference framework to fuse noisy
measurements enabling accurate estimation of staircase location even with
partial observations and occlusions. Additionally, we present a segmentation
algorithm that works in conjunction with the staircase estimation pipeline to
accurately identify clutter-free regions on a staircase. Our method is
extensively evaluated on a real robot across diverse staircases, demonstrating
significant improvements in estimation accuracy and segmentation performance
compared to baseline approaches.
|
2501.04172
|
Machine Learning for Identifying Grain Boundaries in Scanning Electron
Microscopy (SEM) Images of Nanoparticle Superlattices
|
cond-mat.mtrl-sci cs.CV eess.IV
|
Nanoparticle superlattices consisting of ordered arrangements of
nanoparticles exhibit unique optical, magnetic, and electronic properties
arising from nanoparticle characteristics as well as their collective
behaviors. Understanding how processing conditions influence the nanoscale
arrangement and microstructure is critical for engineering materials with
desired macroscopic properties. Microstructural features such as grain
boundaries, lattice defects, and pores significantly affect these properties
but are challenging to quantify using traditional manual analyses as they are
labor-intensive and prone to errors. In this work, we present a machine
learning workflow for automating grain segmentation in scanning electron
microscopy (SEM) images of nanoparticle superlattices. This workflow integrates
signal processing techniques, such as Radon transforms, with unsupervised
learning methods like agglomerative hierarchical clustering to identify and
segment grains without requiring manually annotated data. In the workflow, we
transform the raw pixel data into an explainable numerical representation of
superlattice orientations for clustering. Benchmarking results demonstrate the
workflow's robustness against noisy images and edge cases, with a processing
speed of four images per minute on standard computational hardware. This
efficiency makes the workflow scalable to large datasets and makes it a
valuable tool for integrating data-driven models into decision-making processes
for material design and analysis. For example, one can use this workflow to
quantify grain size distributions at varying processing conditions, such as
temperature and pressure, and use that knowledge to adjust processing
conditions to achieve the desired superlattice orientations and grain sizes.
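The clustering step in such a workflow can be illustrated on per-patch lattice orientations (the kind of feature a Radon transform would extract): patches whose orientations agree within a tolerance merge into one grain. A minimal single-linkage sketch on 1-D orientation angles with circular distance modulo 180° (the angles and threshold are invented):

```python
# Minimal single-linkage agglomerative grouping of patch orientations.
# Orientations are circular with period 180 degrees (a lattice rotated by
# 180 degrees looks identical), so distances are computed modulo 180.

def circ_dist(a, b, period=180.0):
    d = abs(a - b) % period
    return min(d, period - d)

def cluster_orientations(angles, threshold):
    """Greedy single-linkage: merge an angle into the first cluster that has
    any member within `threshold` degrees; otherwise start a new cluster."""
    clusters = []
    for theta in angles:
        for cl in clusters:
            if any(circ_dist(theta, member) <= threshold for member in cl):
                cl.append(theta)
                break
        else:
            clusters.append([theta])
    return clusters

# Toy patch orientations from three grains (note 178 and 2 are only 4 degrees
# apart on the circle, so they belong to the same grain).
angles = [10.0, 12.0, 95.0, 93.0, 178.0, 2.0]
grains = cluster_orientations(angles, threshold=5.0)
print(len(grains))  # 3
```

The actual workflow clusters richer orientation representations hierarchically, but the circular-distance handling is the detail that is easy to get wrong.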
|
2501.04173
|
Multimodal Multihop Source Retrieval for Web Question Answering
|
cs.CL cs.AI
|
This work deals with the challenge of learning and reasoning over multi-modal
multi-hop question answering (QA). We propose a graph reasoning network based
on the semantic structure of the sentences to learn multi-source reasoning
paths and find the supporting facts across both image and text modalities for
answering the question. In this paper, we investigate the importance of graph
structure for multi-modal multi-hop question answering. Our analysis is
centered on WebQA. We construct a strong baseline model that finds relevant
sources using a pairwise classification task. We establish that, with the
proper use of feature representations from pre-trained models, graph structure
helps in improving multi-modal multi-hop question answering. We point out that
both graph structure and adjacency matrix are task-related prior knowledge, and
graph structure can be leveraged to improve the retrieval performance for the
task. Experiments and visualized analysis demonstrate that message propagation
over graph networks, or the entire graph structure, can replace massive
multimodal transformers with token-wise cross-attention. We demonstrate the
applicability of our method, showing a performance gain of \textbf{4.6$\%$} in
retrieval F1 score over the transformer baselines, despite being a very light
model. We further demonstrate that our model scales to a large-scale
retrieval setting.
|
2501.04179
|
Generation from Noisy Examples
|
stat.ML cs.LG
|
We continue to study the learning-theoretic foundations of generation by
extending the results from Kleinberg and Mullainathan [2024] and Li et al.
[2024] to account for noisy example streams. In the noiseless setting of
Kleinberg and Mullainathan [2024] and Li et al. [2024], an adversary picks a
hypothesis from a binary hypothesis class and provides a generator with a
sequence of its positive examples. The goal of the generator is to eventually
output new, unseen positive examples. In the noisy setting, an adversary still
picks a hypothesis and a sequence of its positive examples. But, before
presenting the stream to the generator, the adversary inserts a finite number
of negative examples. The generator, unaware of which examples are noisy, must
still eventually output new, unseen positive examples. In this
paper, we provide necessary and sufficient conditions for when a binary
hypothesis class can be noisily generatable. We provide such conditions with
respect to various constraints on the number of distinct examples that need to
be seen before perfect generation of positive examples. Interestingly, for
finite and countable classes we show that generatability is largely unaffected
by the presence of a finite number of noisy examples.
|
2501.04180
|
HIVEX: A High-Impact Environment Suite for Multi-Agent Research
(extended version)
|
cs.MA cs.AI cs.GT
|
Games have been vital test beds for the rapid development of agent-based
research. Remarkable progress has been achieved in the past, but it is unclear
whether these findings transfer to real-world problems. While pressure grows,
some of the most critical ecological challenges can find mitigation and
prevention solutions through technology and its applications. Most real-world
domains include multi-agent scenarios and require machine-machine and
human-machine collaboration. Open-source environments have not kept pace and
are often toy scenarios, too abstract or unsuitable for multi-agent research.
By mimicking
real-world problems and increasing the complexity of environments, we hope to
advance state-of-the-art multi-agent research and inspire researchers to work
on immediate real-world problems. Here, we present HIVEX, an environment suite
to benchmark multi-agent research focusing on ecological challenges. HIVEX
includes the following environments: Wind Farm Control, Wildfire Resource
Management, Drone-Based Reforestation, Ocean Plastic Collection, and Aerial
Wildfire Suppression. We provide environments, training examples, and baselines
for the main and sub-tasks. All trained models resulting from the experiments
of this work are hosted on Hugging Face. We also provide a leaderboard on
Hugging Face and encourage the community to submit models trained on our
environment suite.
|
2501.04182
|
Fixed Points of Deep Neural Networks: Emergence, Stability, and
Applications
|
cs.LG cs.AI cs.NA math.NA
|
We present numerical and analytical results on the formation and stability of
a family of fixed points of deep neural networks (DNNs). Such fixed points
appear in a class of DNNs when dimensions of input and output vectors are the
same. We demonstrate examples of applications of such networks in supervised,
semi-supervised and unsupervised learning such as encoding/decoding of images,
restoration of damaged images among others.
We present several numerical and analytical results. First, we show that for
untrained DNNs with weights and biases initialized by normally distributed
random variables, exactly one fixed point exists. This result holds for DNNs
with any depth (number of layers) $L$, any layer width $N$, and sigmoid-type
activation functions. Second, it has been shown that for a DNN whose parameters
(weights and biases) are initialized by a ``light-tailed'' distribution of
weights (e.g., a normal distribution), after training the distribution of these
parameters becomes ``heavy-tailed''. This motivates our study of DNNs with
``heavy-tailed'' initialization. For such DNNs we show numerically
that training leads to the emergence of $Q(N,L)$ fixed points, where
$Q(N,L)$ is a positive integer which depends on the number of layers $L$ and
the layer width $N$. We further observe numerically that for fixed $N = N_0$
the function $Q(N_0, L)$ is non-monotonic: it initially grows as $L$
increases and then decreases to 1.
This non-monotone behavior of $Q(N_0, L)$ is also obtained by analytical
derivation of equation for Empirical Spectral Distribution (ESD) of
input-output Jacobian followed by numerical solution of this equation.
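The uniqueness claim for normally initialized networks can be probed empirically. The following minimal sketch (hypothetical depth, width, and a logistic activation; not the paper's code) runs the fixed-point iteration $x \leftarrow f(x)$ from two independent starting points and checks that both converge to the same point:

```python
import numpy as np

def make_dnn(depth, width, rng):
    """Random DNN R^width -> R^width with normally initialized parameters."""
    Ws = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width)) for _ in range(depth)]
    bs = [rng.normal(0.0, 0.1, width) for _ in range(depth)]
    def f(x):
        for W, b in zip(Ws, bs):
            x = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoid-type activation
        return x
    return f

rng = np.random.default_rng(0)
f = make_dnn(depth=4, width=16, rng=rng)

# Fixed-point iteration x <- f(x) from two independent starting points.
x1, x2 = rng.normal(size=16), rng.normal(size=16)
for _ in range(500):
    x1, x2 = f(x1), f(x2)

residual = np.linalg.norm(f(x1) - x1)  # near zero at a fixed point
gap = np.linalg.norm(x1 - x2)          # both runs land on the same point
```

With this initialization the map is a contraction, so the iteration converges geometrically and the fixed point is unique, consistent with the first result above.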
|
2501.04184
|
MedicalNarratives: Connecting Medical Vision and Language with Localized
Narratives
|
cs.CV
|
We propose MedicalNarratives, a dataset curated from medical pedagogical
videos, similar in nature to data collected in Think-Aloud studies and inspired
by Localized Narratives; it collects grounded image-text data by curating
instructors' speech and mouse cursor movements synchronized in time.
MedicalNarratives enables pretraining of both semantic and dense objectives,
alleviating the need to train medical semantic and dense tasks disparately due
to the lack of reasonably sized datasets. Our dataset contains 4.7M image-text
pairs from videos and articles, with 1M samples containing dense annotations in
the form of traces and bounding boxes. To evaluate the utility of
MedicalNarratives, we train GenMedClip based on the CLIP architecture using our
dataset spanning 12 medical domains and demonstrate that it outperforms
previous state-of-the-art models on a newly constructed medical imaging
benchmark that comprehensively evaluates performance across all modalities.
Data, demo, code and models available at https://medical-narratives.github.io
|
2501.04190
|
Partition Constraints for Conjunctive Queries: Bounds and Worst-Case
Optimal Joins
|
cs.DB
|
In the last decade, various works have used statistics on relations to
improve both the theory and practice of conjunctive query execution. Starting
with the AGM bound which took advantage of relation sizes, later works
incorporated statistics like functional dependencies and degree constraints.
Each new statistic prompted work along two lines: bounding the size of
conjunctive query outputs and designing worst-case optimal join algorithms. In this work,
we continue in this vein by introducing a new statistic called a
\emph{partition constraint}. This statistic captures latent structure within
relations by partitioning them into sub-relations which each have much tighter
degree constraints. We show that this approach can both refine existing
cardinality bounds and improve existing worst-case optimal join algorithms.
|
2501.04193
|
GNN-based Decentralized Perception in Multirobot Systems for Predicting
Worker Actions
|
cs.RO cs.AI cs.MA
|
In industrial environments, predicting human actions is essential for
ensuring safe and effective collaboration between humans and robots. This paper
introduces a perception framework that enables mobile robots to understand and
share information about human actions in a decentralized way. The framework
first allows each robot to build a spatial graph representing its surroundings,
which it then shares with other robots. This shared spatial data is combined
with temporal information to track human behavior over time. A swarm-inspired
decision-making process is used to ensure all robots agree on a unified
interpretation of the human's actions. Results show that adding more robots and
incorporating longer time sequences improve prediction accuracy. Additionally,
the consensus mechanism increases system resilience, making the multi-robot
setup more reliable in dynamic industrial settings.
|
2501.04194
|
STLCG++: A Masking Approach for Differentiable Signal Temporal Logic
Specification
|
cs.RO cs.LG cs.SC
|
Signal Temporal Logic (STL) offers a concise yet expressive framework for
specifying and reasoning about spatio-temporal behaviors of robotic systems.
Attractively, STL admits the notion of robustness, the degree to which an input
signal satisfies or violates an STL specification, thus providing a nuanced
evaluation of system performance. Notably, the differentiability of STL
robustness enables direct integration to robotics workflows that rely on
gradient-based optimization, such as trajectory optimization and deep learning.
However, existing approaches to evaluating and differentiating STL robustness
rely on recurrent computations, which become inefficient with longer sequences,
limiting their use in time-sensitive applications. In this paper, we present
STLCG++, a masking-based approach that parallelizes STL robustness evaluation
and backpropagation across timesteps, achieving more than 1000x faster
computation time than the recurrent approach. We also introduce a smoothing
technique for differentiability through time interval bounds, expanding STL's
applicability in gradient-based optimization tasks over spatial and temporal
variables. Finally, we demonstrate STLCG++'s benefits through three robotics
use cases and provide open-source Python libraries in JAX and PyTorch for
seamless integration into modern robotics workflows.
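To illustrate the masking idea (a toy numpy sketch under our own simplifications, not the STLCG++ library, and using a hard max where the paper also provides smooth approximations): the robustness of "eventually $\phi$" at time $t$ is the maximum of $\phi$'s robustness over the suffix starting at $t$, and a $(T \times T)$ mask evaluates all timesteps in one vectorized operation instead of a backward recurrent scan.

```python
import numpy as np

def robustness_eventually(r):
    """Robustness of 'eventually phi' at every timestep, via masking.

    r: per-timestep robustness of the subformula phi, shape (T,).
    Rather than a backward recurrent scan, a (T, T) mask selects the
    future of each timestep so all T maxima are taken at once.
    """
    T = r.shape[0]
    # mask[t, t'] = 0 where t' >= t (the future of t), -inf elsewhere
    mask = np.where(np.arange(T)[None, :] >= np.arange(T)[:, None], 0.0, -np.inf)
    return np.max(r[None, :] + mask, axis=1)

sig = np.array([-1.0, -0.5, 2.0, 0.5, -0.2])
rho = robustness_eventually(sig)  # suffix maxima: [2., 2., 2., 0.5, -0.2]
```

Swapping the hard max for a log-sum-exp along the same masked axis gives a smooth, differentiable surrogate.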
|
2501.04196
|
Comparison of Neural Models for X-ray Image Classification in COVID-19
Detection
|
eess.IV cs.LG
|
This study presents a comparative analysis of methods for detecting COVID-19
infection in radiographic images. The images, sourced from publicly available
datasets, were categorized into three classes: 'normal,' 'pneumonia,' and
'COVID.' For the experiments, transfer learning was employed using eight
pre-trained networks: SqueezeNet, DenseNet, ResNet, AlexNet, VGG, GoogleNet,
ShuffleNet, and MobileNet. DenseNet achieved the highest accuracy of 97.64%
using the ADAM optimization function in the multiclass approach. In the binary
classification approach, the highest precision was 99.98%, obtained by the VGG,
ResNet, and MobileNet networks. A comparative evaluation was also conducted
using heat maps.
|
2501.04199
|
Unattainability of Common Knowledge in Asymmetric Games with Imperfect
Information
|
cs.MA cs.GT cs.LO
|
In this paper, we present a conceptual model game to examine the dynamics of
asymmetric interactions in games with imperfect information. The game involves
two agents with starkly contrasting capabilities: one agent can take actions
but has no information of the state of the game, whereas the other agent has
perfect information of the state but cannot act or observe the other agent's
actions. This duality manifests an extreme form of asymmetry and illustrates
how differing abilities influence the possibility of attaining common knowledge.
Using Kripke structures and epistemic logic, we demonstrate that, under these conditions,
common knowledge of the current game state becomes unattainable. Our findings
advance the discussion on the strategic limitations of knowledge in
environments where information and action are unevenly distributed.
|
2501.04202
|
Generative Dataset Distillation Based on Self-knowledge Distillation
|
cs.CV cs.AI cs.LG
|
Dataset distillation is an effective technique for reducing the cost and
complexity of model training while maintaining performance by compressing large
datasets into smaller, more efficient versions. In this paper, we present a
novel generative dataset distillation method that can improve the accuracy of
aligning prediction logits. Our approach integrates self-knowledge distillation
to achieve more precise distribution matching between the synthetic and
original data, thereby capturing the overall structure and relationships within
the data. To further improve the accuracy of alignment, we introduce a
standardization step on the logits before performing distribution matching,
ensuring consistency in the range of logits. Through extensive experiments, we
demonstrate that our method outperforms existing state-of-the-art methods,
resulting in superior distillation performance.
|
2501.04204
|
LipGen: Viseme-Guided Lip Video Generation for Enhancing Visual Speech
Recognition
|
cs.CV cs.MM
|
Visual speech recognition (VSR), commonly known as lip reading, has garnered
significant attention due to its wide-ranging practical applications. The
advent of deep learning techniques and advancements in hardware capabilities
have significantly enhanced the performance of lip reading models. Despite
these advancements, existing datasets predominantly feature stable video
recordings with limited variability in lip movements. This limitation results
in models that are highly sensitive to variations encountered in real-world
scenarios. To address this issue, we propose a novel framework, LipGen, which
aims to improve model robustness by leveraging speech-driven synthetic visual
data, thereby mitigating the constraints of current datasets. Additionally, we
introduce an auxiliary task that incorporates viseme classification alongside
attention mechanisms. This approach facilitates the efficient integration of
temporal information, directing the model's focus toward the relevant segments
of speech, thereby enhancing discriminative capabilities. Our method
demonstrates superior performance compared to the current state-of-the-art on
the lip reading in the wild (LRW) dataset and exhibits even more pronounced
advantages under challenging conditions.
|
2501.04206
|
GRAPHITE: Graph-Based Interpretable Tissue Examination for Enhanced
Explainability in Breast Cancer Histopathology
|
eess.IV cs.CV
|
Explainable AI (XAI) in medical histopathology is essential for enhancing the
interpretability and clinical trustworthiness of deep learning models in cancer
diagnosis. However, the black-box nature of these models often limits their
clinical adoption. We introduce GRAPHITE (Graph-based Interpretable Tissue
Examination), a post-hoc explainable framework designed for breast cancer
tissue microarray (TMA) analysis. GRAPHITE employs a multiscale approach,
extracting patches at various magnification levels, constructing a
hierarchical graph, and utilising graph attention networks (GAT) with scalewise
attention (SAN) to capture scale-dependent features. We trained the model on
140 tumour TMA cores and four benign whole slide images from which 140 benign
samples were created, and tested it on 53 pathologist-annotated TMA samples.
GRAPHITE outperformed traditional XAI methods, achieving a mean average
precision (mAP) of 0.56, an area under the receiver operating characteristic
curve (AUROC) of 0.94, and a threshold robustness (ThR) of 0.70, indicating
that the model maintains high performance across a wide range of thresholds. In
clinical utility, GRAPHITE achieved the highest area under the decision curve
(AUDC) of 4.17e+5, indicating reliable decision support across thresholds.
These results highlight GRAPHITE's potential as a clinically valuable tool in
computational pathology, providing interpretable visualisations that align with
the pathologists' diagnostic reasoning and support precision medicine.
|
2501.04210
|
Recognition-Oriented Low-Light Image Enhancement based on Global and
Pixelwise Optimization
|
cs.CV eess.IV
|
In this paper, we propose a novel low-light image enhancement method aimed at
improving the performance of recognition models. Despite recent advances in
deep learning, the recognition of images under low-light conditions remains a
challenge. Although existing low-light image enhancement methods have been
developed to improve image visibility for human vision, they do not
specifically focus on enhancing recognition model performance. Our proposed
low-light image enhancement method consists of two key modules: the Global
Enhance Module, which adjusts the overall brightness and color balance of the
input image, and the Pixelwise Adjustment Module, which refines image features
at the pixel level. These modules are trained to enhance input images to
improve downstream recognition model performance effectively. Notably, the
proposed method can be applied as a frontend filter to improve low-light
recognition performance without requiring retraining of downstream recognition
models. Experimental results demonstrate that our method effectively improves
the performance of pretrained recognition models under low-light conditions.
|
2501.04211
|
CURing Large Models: Compression via CUR Decomposition
|
cs.LG cs.AI
|
Large deep learning models have achieved remarkable success but are
resource-intensive, posing challenges such as memory usage. We introduce
CURing, a novel model compression method based on CUR matrix decomposition,
which approximates weight matrices as the product of selected columns (C) and
rows (R), and a small linking matrix (U). We apply this decomposition to
weights chosen based on the combined influence of their magnitudes and
activations. By identifying and retaining informative rows and columns, CURing
significantly reduces model size with minimal performance loss. For example, it
reduces Llama3.1-8B's parameters to 7.32B (-9%) in just 129 seconds, over 20
times faster than prior compression methods.
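As a rough illustration of the decomposition itself (a numpy sketch with norm-based sampling standing in for the paper's magnitude-and-activation criterion; matrix sizes and the column/row count are hypothetical):

```python
import numpy as np

def cur_approx(W, k, rng):
    """Approximate W by C @ U @ R from k sampled columns and rows.

    Columns/rows are sampled with probability proportional to their
    squared norms; U is the pseudoinverse-based linking matrix.
    """
    col_p = (W ** 2).sum(axis=0); col_p /= col_p.sum()
    row_p = (W ** 2).sum(axis=1); row_p /= row_p.sum()
    cols = rng.choice(W.shape[1], size=k, replace=False, p=col_p)
    rows = rng.choice(W.shape[0], size=k, replace=False, p=row_p)
    C, R = W[:, cols], W[rows, :]
    U = np.linalg.pinv(C) @ W @ np.linalg.pinv(R)  # small linking matrix
    return C, U, R

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))  # exactly rank-3 "weight"
C, U, R = cur_approx(W, k=3, rng=rng)
err = np.linalg.norm(W - C @ U @ R) / np.linalg.norm(W)   # ~0 when rank(W) <= k
```

Unlike SVD-based factorizations, C and R are actual columns and rows of W, which is what allows the selection to be driven by interpretable scores such as magnitudes and activations.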
|
2501.04213
|
UPAQ: A Framework for Real-Time and Energy-Efficient 3D Object Detection
in Autonomous Vehicles
|
cs.CV cs.AI cs.LG
|
To enhance perception in autonomous vehicles (AVs), recent efforts are
concentrating on 3D object detectors, which deliver more comprehensive
predictions than traditional 2D object detectors, at the cost of increased
memory footprint and computational resource usage. We present a novel framework
called UPAQ, which leverages semi-structured pattern pruning and quantization
to improve the efficiency of LiDAR point-cloud and camera-based 3D object
detectors on resource-constrained embedded AV platforms. Experimental results
on the Jetson Orin Nano embedded platform indicate that UPAQ achieves up to
5.62x and 5.13x model compression rates, up to 1.97x and 1.86x boost in
inference speed, and up to 2.07x and 1.87x reduction in energy consumption
compared to state-of-the-art model compression frameworks, on the Pointpillar
and SMOKE models respectively.
|
2501.04216
|
Optimal Oblivious Algorithms for Multi-way Joins
|
cs.DB cs.CR
|
In cloud databases, cloud computation over sensitive data uploaded by clients
inevitably causes concern about data security and privacy. Even when encryption
primitives and trusted computing environments are integrated into query
processing to safeguard the actual contents of the data, access patterns of
algorithms can still leak private information about the data. Oblivious Random
Access Memory (ORAM) and circuits are two generic approaches to address this
issue, ensuring that access patterns of algorithms remain oblivious to the
data. However, deploying these methods on insecure algorithms, particularly for
multi-way join processing, is computationally expensive and inherently
challenging.
In this paper, we propose a novel sorting-based algorithm for multi-way join
processing that operates without relying on ORAM simulations or other security
assumptions. Our algorithm is a non-trivial, provably oblivious composition of
basic primitives, with time complexity matching the insecure worst-case optimal
join algorithm, up to a logarithmic factor. Furthermore, it is cache-agnostic,
with cache complexity matching the insecure lower bound, also up to a
logarithmic factor. This clean and straightforward approach has the potential
to be extended to other security settings and implemented in practical database
systems.
|
2501.04217
|
Continual Self-supervised Learning Considering Medical Domain Knowledge
in Chest CT Images
|
cs.CV cs.AI
|
We propose a novel continual self-supervised learning method (CSSL)
considering medical domain knowledge in chest CT images. Our approach addresses
the challenge of sequential learning by effectively capturing the relationship
between previously learned knowledge and new information at different stages.
By incorporating an enhanced DER into CSSL and maintaining both diversity and
representativeness within the rehearsal buffer of DER, the risk of data
interference during pretraining is reduced, enabling the model to learn
richer and more robust feature representations. In addition, we incorporate a mixup
strategy and feature distillation to further enhance the model's ability to
learn meaningful representations. We validate our method using chest CT images
obtained under two different imaging conditions, demonstrating superior
performance compared to state-of-the-art methods.
|
2501.04222
|
Privacy-Preserving Distributed Online Mirror Descent for Nonconvex
Optimization
|
eess.SY cs.SY
|
We investigate the distributed online nonconvex optimization problem with
differential privacy over time-varying networks. Each node minimizes the sum of
several nonconvex functions while preserving the node's differential privacy.
We propose a privacy-preserving distributed online mirror descent algorithm for
nonconvex optimization, which uses the mirror descent to update decision
variables and the Laplace differential privacy mechanism to protect privacy.
Unlike existing works, the proposed algorithm allows the cost functions to
be nonconvex, making it more broadly applicable. Building on this, we prove that if the
communication network is $B$-strongly connected and the constraint set is
compact, then by choosing the step size properly, the algorithm guarantees
$\epsilon$-differential privacy at each time. Furthermore, we prove that if the
local cost functions are $\beta$-smooth, then the regret over time horizon $T$
grows sublinearly while preserving differential privacy, with an upper bound
$O(\sqrt{T})$. Finally, the effectiveness of the algorithm is demonstrated
through numerical simulations.
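A single node's update can be sketched as follows (our own minimal illustration, assuming the entropic mirror map on the probability simplex; the paper's constraint set, mirror map, and noise calibration may differ):

```python
import numpy as np

def dp_mirror_descent_step(x, grad, step, eps_dp, sensitivity, rng):
    """One privacy-preserving mirror-descent step on the probability simplex.

    Laplace noise with scale sensitivity/eps_dp is added to the gradient
    (the Laplace mechanism); the entropic mirror map then yields a
    multiplicative-weights style update that stays on the simplex.
    """
    noisy_grad = grad + rng.laplace(0.0, sensitivity / eps_dp, size=grad.shape)
    y = x * np.exp(-step * noisy_grad)
    return y / y.sum()

rng = np.random.default_rng(1)
x = np.full(4, 0.25)                    # uniform start on the simplex
grad = np.array([1.0, 0.0, -1.0, 0.0])  # gradient of some (nonconvex) local cost
x = dp_mirror_descent_step(x, grad, step=0.1, eps_dp=1.0, sensitivity=0.1, rng=rng)
```

The exponential-weights form is the closed-form mirror-descent step for the negative-entropy mirror map, so the iterate remains a valid probability vector after every noisy update.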
|
2501.04227
|
Agent Laboratory: Using LLM Agents as Research Assistants
|
cs.HC cs.AI cs.CL cs.LG
|
Historically, scientific discovery has been a lengthy and costly process,
demanding substantial time and resources from initial conception to final
results. To accelerate scientific discovery, reduce research costs, and improve
research quality, we introduce Agent Laboratory, an autonomous LLM-based
framework capable of completing the entire research process. This framework
accepts a human-provided research idea and progresses through three
stages (literature review, experimentation, and report writing) to produce
comprehensive research outputs, including a code repository and a research
report, while enabling users to provide feedback and guidance at each stage. We
deploy Agent Laboratory with various state-of-the-art LLMs and invite multiple
researchers to assess its quality by participating in a survey, providing human
feedback to guide the research process, and then evaluate the final paper. We
found that: (1) Agent Laboratory driven by o1-preview generates the best
research outcomes; (2) The generated machine learning code is able to achieve
state-of-the-art performance compared to existing methods; (3) Human
involvement, providing feedback at each stage, significantly improves the
overall quality of research; (4) Agent Laboratory significantly reduces
research expenses, achieving an 84% decrease compared to previous autonomous
research methods. We hope Agent Laboratory enables researchers to allocate more
effort toward creative ideation rather than low-level coding and writing,
ultimately accelerating scientific discovery.
|
2501.04228
|
Constraints as Rewards: Reinforcement Learning for Robots without Reward
Functions
|
cs.RO cs.AI cs.LG
|
Reinforcement learning has become an essential algorithm for generating
complex robotic behaviors. However, to learn such behaviors, it is necessary to
design a reward function that describes the task, which often consists of
multiple objectives that need to be balanced. This tuning process is known as
reward engineering and typically involves extensive trial-and-error. In this
paper, to avoid this trial-and-error process, we propose the concept of
Constraints as Rewards (CaR). CaR formulates the task objective using multiple
constraint functions instead of a reward function and solves a reinforcement
learning problem with constraints using the Lagrangian method. By adopting this
approach, different objectives are automatically balanced, because the Lagrange
multipliers serve as the weights among the objectives. In addition, we
demonstrate that constraints, expressed as inequalities, provide an intuitive
interpretation of the optimization target designed for the task. We apply the
proposed method to the standing-up motion generation task of a
six-wheeled-telescopic-legged robot and demonstrate that the proposed method
successfully acquires the target behavior, even though it is challenging to
learn with manually designed reward functions.
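The automatic balancing can be seen in the dual update alone (a minimal sketch with made-up numbers, not the robot experiment): multipliers for constraints $g_i \le 0$ rise while a constraint is violated and are clipped at zero once it is satisfied, so they behave like self-tuned reward weights.

```python
import numpy as np

def dual_ascent_step(lam, g, lr=0.1):
    """Projected dual ascent on Lagrange multipliers for constraints g_i <= 0."""
    return np.maximum(0.0, lam + lr * np.asarray(g))

lam = np.zeros(2)
g = [0.5, -0.3]  # constraint 1 violated (g > 0), constraint 2 satisfied (g < 0)
for _ in range(20):
    lam = dual_ascent_step(lam, g)
# lam[0] has grown to weight the violated constraint; lam[1] stays at 0
```

In a full Lagrangian-method loop, the policy would be updated against the reward $-\sum_i \lambda_i g_i$ between these dual steps.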
|
2501.04231
|
Computation and Communication Co-scheduling for Timely Multi-Task
Inference at the Wireless Edge
|
cs.IT cs.NI math.IT
|
In multi-task remote inference systems, an intelligent receiver (e.g.,
command center) performs multiple inference tasks (e.g., target detection)
using data features received from several remote sources (e.g., edge sensors).
Key challenges to facilitating timely inference in these systems arise from (i)
limited computational power of the sources to produce features from their
inputs, and (ii) limited communication resources of the channels to carry
simultaneous feature transmissions to the receiver. We develop a novel
computation and communication co-scheduling methodology which determines
feature generation and transmission scheduling to minimize inference errors
subject to these resource constraints. Specifically, we formulate the
co-scheduling problem as a weakly-coupled Markov decision process with Age of
Information (AoI)-based timeliness gauging the inference errors. To overcome
its PSPACE-hard complexity, we analyze a Lagrangian relaxation of the problem,
which yields gain indices assessing the improvement in inference error for each
potential feature generation-transmission scheduling action. Based on this, we
develop a maximum gain first (MGF) policy which we show is asymptotically
optimal for the original problem as the number of inference tasks increases.
Experiments demonstrate that MGF obtains significant improvements over baseline
policies for varying tasks, channels, and sources.
|
2501.04233
|
A note on the differential spectrum of a class of locally APN functions
|
cs.IT cs.CR math.IT
|
Let $\gf_{p^n}$ denote the finite field containing $p^n$ elements, where $n$
is a positive integer and $p$ is a prime. The function
$f_u(x)=x^{\frac{p^n+3}{2}}+ux^2$ over $\gf_{p^n}[x]$ with
$u\in\gf_{p^n}\setminus\{0,\pm1\}$ was recently studied by Budaghyan and Pal in
\cite{Budaghyan2024ArithmetizationorientedAP}, whose differential uniformity is
at most $5$ when $p^n\equiv3\pmod{4}$. In this paper, we study the differential
uniformity and the differential spectrum of $f_u$ for $u=\pm1$. We first give
some properties of the differential spectrum of any cryptographic function.
Moreover, by solving some systems of equations over finite fields, we express
the differential spectrum of $f_{\pm1}$ in terms of quadratic character sums.
|
2501.04234
|
Statistical Uncertainty Quantification for Aggregate Performance Metrics
in Machine Learning Benchmarks
|
stat.ML cs.LG stat.AP
|
Modern artificial intelligence is supported by machine learning models (e.g.,
foundation models) that are pretrained on a massive data corpus and then
adapted to solve a variety of downstream tasks. To summarize performance across
multiple tasks, evaluation metrics are often aggregated into a summary metric,
e.g., average accuracy across 10 question-answering tasks. When aggregating
evaluation metrics, it is useful to incorporate uncertainty in the aggregate
metric in order to gain a more realistic understanding of model performance.
Our objective in this work is to demonstrate how statistical methodology can be
used for quantifying uncertainty in metrics that have been aggregated across
multiple tasks. The methods we emphasize are bootstrapping, Bayesian
hierarchical (i.e., multilevel) modeling, and the visualization of task
weightings that consider standard errors. These techniques reveal insights such
as the dominance of a specific model for certain types of tasks despite an
overall poor performance. We use a popular ML benchmark, the Visual Task
Adaptation Benchmark (VTAB), to demonstrate the usefulness of our approaches.
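As a concrete instance of the bootstrap approach (a sketch with made-up per-task accuracies; VTAB itself has more structure, e.g. task groups, which the hierarchical model exploits):

```python
import numpy as np

def bootstrap_mean_ci(task_scores, n_boot=10000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a cross-task average.

    Tasks are resampled with replacement and the aggregate metric (the
    mean) is recomputed for each bootstrap replicate.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(task_scores)
    idx = rng.integers(0, len(scores), size=(n_boot, len(scores)))
    boot_means = scores[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (lo, hi)

accs = [0.81, 0.74, 0.92, 0.65, 0.88, 0.79, 0.70, 0.85, 0.77, 0.90]
mean, (lo, hi) = bootstrap_mean_ci(accs)  # point estimate plus a 95% interval
```

Reporting the interval alongside the point estimate makes clear how much of a leaderboard gap between two models could be explained by task sampling alone.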
|
2501.04238
|
A Quasi-deterministic Channel Model for Underwater Acoustic
Communication Systems
|
eess.SY cs.SY
|
In this paper, a quasi-deterministic (Q-D) model for non-stationary
underwater acoustic (UWA) channels is proposed. This model combines the BELLHOP
deterministic model and geometry-based stochastic model (GBSM), which provides
higher accuracy and flexibility. Different propagation components in shallow
water are classified as D-rays, R-rays and F-rays in the proposed model, where
D-rays are modeled by BELLHOP while both R-rays and F-rays are modeled by GBSM.
Some important channel statistical properties, including time-frequency
correlation function (TF-CF), Doppler power spectrum density (PSD), average
Doppler shift, and RMS Doppler spread are derived and simulated. Finally,
simulation results illustrate the correctness of the proposed model.
|
2501.04239
|
Dynamic Localisation of Spatial-Temporal Graph Neural Network
|
cs.LG
|
Spatial-temporal data, fundamental to many intelligent applications, reveals
dependencies indicating causal links between present measurements at specific
locations and historical data at the same or other locations. Within this
context, adaptive spatial-temporal graph neural networks (ASTGNNs) have emerged
as valuable tools for modelling these dependencies, especially through a
data-driven approach rather than pre-defined spatial graphs. While this
approach offers higher accuracy, it presents increased computational demands.
Addressing this challenge, this paper delves into the concept of localisation
within ASTGNNs, introducing an innovative perspective that spatial dependencies
should be dynamically evolving over time. We introduce \textit{DynAGS}, a
localised ASTGNN framework aimed at maximising efficiency and accuracy in
distributed deployment. This framework integrates dynamic localisation,
time-evolving spatial graphs, and personalised localisation, all orchestrated
around the Dynamic Graph Generator, a lightweight central module leveraging
cross attention. The central module can integrate historical information in a
node-independent manner to enhance the feature representation of nodes at the
current moment. This improved feature representation is then used to generate a
dynamic sparse graph without the need for costly data exchanges, and it
supports personalised localisation. Performance assessments across two core
ASTGNN architectures and nine real-world datasets from various applications
reveal that \textit{DynAGS} outshines current benchmarks, underscoring that the
dynamic modelling of spatial dependencies can drastically improve model
expressibility, flexibility, and system efficiency, especially in distributed
settings.
|
2501.04240
|
A Novel Non-Stationary Channel Emulator for 6G MIMO Wireless Channels
|
eess.SY cs.IT cs.SY math.IT
|
The performance evaluation of sixth generation (6G) communication systems is
anticipated to be a controlled and repeatable process in the lab, which brings
up the demand for wireless channel emulators. However, channel emulation for 6G
space-time-frequency (STF) non-stationary channels is missing currently. In
this paper, a non-stationary multiple-input multiple-output (MIMO)
geometry-based stochastic model (GBSM) that accurately characterizes the
channel STF properties is first introduced. Then, a subspace-based method is
proposed for reconstructing the channel fading obtained from the GBSM and a
channel emulator architecture with frequency domain processing is presented for
6G MIMO systems. Moreover, the spatial time-varying channel transfer functions
(CTFs) of the channel simulation and the channel emulation are compared and
analyzed. The Doppler power spectral density (PSD) and delay PSD are further
derived and compared between the channel model simulation and subspace-based
emulation. The results demonstrate that the proposed channel emulator is
capable of reproducing the non-stationary channel characteristics.
|
2501.04242
|
Beam Domain Channel Estimation for Spatial Non-Stationary Massive MIMO
Systems
|
eess.SY cs.IT cs.SY math.IT
|
In massive multiple-input multiple-output (MIMO) systems, the channel
estimation scheme is subject to the spatial non-stationarity and inevitably
power leakage in the beam domain. In this paper, a beam domain channel
estimation scheme is investigated for spatial non-stationary (SNS) massive MIMO
systems considering power leakage.
Specifically, a realistic massive MIMO beam domain channel model (BDCM) is
introduced to capture the spatial non-stationarity considering power leakage by
introducing the illustration of visibility region (VR). Then, a beam domain
structure-based sparsity adaptive matching pursuit (BDS-SAMP) scheme is
proposed based on the cross-block sparse structure and power ratio threshold of
beam domain channel. Finally, the simulation results validate the accuracy of
the proposed BDS-SAMP scheme, with low pilot overhead and reasonable
complexity, by comparison with conventional schemes.
|
2501.04249
|
IOLBENCH: Benchmarking LLMs on Linguistic Reasoning
|
cs.CL
|
Despite the remarkable advancements and widespread applications of deep
neural networks, their ability to perform reasoning tasks remains limited,
particularly in domains requiring structured, abstract thought. In this paper,
we investigate the linguistic reasoning capabilities of state-of-the-art large
language models (LLMs) by introducing IOLBENCH, a novel benchmark derived from
International Linguistics Olympiad (IOL) problems. This dataset encompasses
diverse problems testing syntax, morphology, phonology, and semantics, all
carefully designed to be self-contained and independent of external knowledge.
These tasks challenge models to engage in metacognitive linguistic reasoning,
requiring the deduction of linguistic rules and patterns from minimal examples.
Through extensive benchmarking of leading LLMs, we find that even the most
advanced models struggle to handle the intricacies of linguistic complexity,
particularly in areas demanding compositional generalization and rule
abstraction. Our analysis highlights both the strengths and persistent
limitations of current models in linguistic problem-solving, offering valuable
insights into their reasoning capabilities. By introducing IOLBENCH, we aim to
foster further research into developing models capable of human-like reasoning,
with broader implications for the fields of computational linguistics and
artificial intelligence.
|
2501.04253
|
Integrated Offline and Online Learning to Solve a Large Class of
Scheduling Problems
|
math.OC cs.AI cs.LG
|
In this paper, we develop a unified machine learning (ML) approach to predict
high-quality solutions for single-machine scheduling problems with a
non-decreasing min-sum objective function with or without release times. Our ML
approach is novel in three major aspects. First, our approach is developed for
the entire class of the aforementioned problems. To achieve this, we exploit
the fact that the entire class of the problems considered can be formulated as
a time-indexed formulation in a unified manner. We develop a deep neural
network (DNN) which uses the cost parameters in the time-indexed formulation as
the inputs to effectively predict a continuous solution to this formulation,
based on which a feasible discrete solution is easily constructed. The second
novel aspect of our approach lies in how the DNN model is trained. In view of
the NP-hard nature of the problems, labels (i.e., optimal solutions) are hard
to generate for training. To overcome this difficulty, we generate and utilize
a set of special instances, for which optimal solutions can be found with
little computational effort, to train the ML model offline. The third novel
idea we employ in our approach is that we develop an online single-instance
learning approach to fine-tune the parameters in the DNN for a given online
instance, with the goal of generating an improved solution for the given
instance. To this end, we develop a feasibility surrogate that approximates the
objective value of a given instance as a continuous function of the outputs of
the DNN, which then enables us to derive gradients and update the learnable
parameters in the DNN. Numerical results show that our approach can efficiently
generate high-quality solutions for a variety of single-machine scheduling
min-sum problems with up to 1000 jobs.
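The final rounding step described above can be sketched as follows; the greedy ordering by predicted mean start time is one illustrative heuristic, and `schedule_from_continuous` and its arguments are hypothetical names, not the paper's actual construction:

```python
import numpy as np

def schedule_from_continuous(x, proc_times):
    """Construct a feasible single-machine schedule from a continuous
    (fractional) solution of a time-indexed formulation.

    x[j, t] is a predicted fraction of job j starting at time t (rows
    need not be exactly one-hot); proc_times[j] is job j's processing
    time. Jobs are ordered by their predicted mean start time and then
    scheduled back to back, which always yields a feasible schedule."""
    T = x.shape[1]
    times = np.arange(T)
    # Expected start time of each job under the fractional solution.
    mean_start = (x * times).sum(axis=1) / np.maximum(x.sum(axis=1), 1e-12)
    order = np.argsort(mean_start)
    t = 0
    starts = np.zeros(len(proc_times), dtype=int)
    for j in order:
        starts[j] = t          # schedule jobs consecutively in that order
        t += proc_times[j]
    return order, starts
```

Any min-sum objective can then be evaluated on the resulting discrete start times.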
|
2501.04259
|
Stable Derivative Free Gaussian Mixture Variational Inference for
Bayesian Inverse Problems
|
cs.LG cs.NA math.NA
|
This paper is concerned with the approximation of probability distributions
known up to normalization constants, with a focus on Bayesian inference for
large-scale inverse problems in scientific computing. In this context, key
challenges include costly repeated evaluations of forward models,
multimodality, and inaccessible gradients for the forward model. To address
them, we develop a variational inference framework that combines Fisher-Rao
natural gradient with specialized quadrature rules to enable derivative-free
updates of Gaussian mixture variational families. The resulting method, termed
Derivative Free Gaussian Mixture Variational Inference (DF-GMVI), guarantees
covariance positivity and affine invariance, offering a stable and efficient
framework for approximating complex posterior distributions. The effectiveness
of DF-GMVI is demonstrated through numerical experiments on challenging
scenarios, including distributions with multiple modes, infinitely many modes,
and curved modes in spaces with up to hundreds of dimensions. The method's
practicality is further demonstrated in a large-scale application, where it
successfully recovers the initial conditions of the Navier-Stokes equations
from solution data at positive times.
|
2501.04260
|
Modeling All Response Surfaces in One for Conditional Search Spaces
|
cs.LG
|
Bayesian Optimization (BO) is a sample-efficient black-box optimizer commonly
used in search spaces where hyperparameters are independent. However, in many
practical AutoML scenarios, there are dependencies among hyperparameters,
forming a conditional search space, which can be partitioned into structurally
distinct subspaces. The structure and dimensionality of hyperparameter
configurations vary across these subspaces, challenging the application of BO.
Some previous BO works have proposed solutions to develop multiple Gaussian
Process models in these subspaces. However, these approaches tend to be
inefficient as they require a substantial number of observations to guarantee
each GP's performance and cannot capture relationships between hyperparameters
across different subspaces. To address these issues, this paper proposes a
novel approach to model the response surfaces of all subspaces in one, which
can model the relationships between hyperparameters elegantly via a
self-attention mechanism. Concretely, we design a structure-aware
hyperparameter embedding to preserve the structural information. Then, we
introduce an attention-based deep feature extractor, capable of projecting
configurations with different structures from various subspaces into a unified
feature space, where the response surfaces can be formulated using a single
standard Gaussian Process. The empirical results on a simulation function,
various real-world tasks, and HPO-B benchmark demonstrate that our proposed
approach improves the efficacy and efficiency of BO within conditional search
spaces.
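The unified-embedding idea can be sketched with padding, a presence mask carrying the structural information, and a single self-attention layer; everything below (function names, dimensions, random projections) is illustrative rather than the paper's architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def embed_config(values, active, d=8, rng=None):
    """Map one hyperparameter configuration from any subspace into a
    fixed-size feature vector. `values` holds the configuration padded
    to the maximum dimensionality; `active` is a 0/1 mask marking which
    hyperparameters exist in this subspace (the structural information).
    A single self-attention head mixes the per-hyperparameter tokens,
    ignoring masked positions."""
    rng = np.random.default_rng(0) if rng is None else rng
    active = np.asarray(active)
    # token = value embedding + structure (presence) embedding
    W_val = rng.standard_normal((1, d)) * 0.1
    W_str = rng.standard_normal((2, d)) * 0.1
    tokens = np.outer(values, W_val[0]) + W_str[active]
    # single-head self-attention with masking of inactive positions
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d)
    scores[:, active == 0] = -1e9        # ignore padded slots
    out = softmax(scores, axis=-1) @ V
    # mean-pool the active tokens into one unified feature vector
    mask = active.astype(float)[:, None]
    return (out * mask).sum(axis=0) / mask.sum()
```

All configurations then live in the same d-dimensional space, so one standard Gaussian Process can model the response surface across subspaces.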
|
2501.04262
|
Target Tracking Using the Invariant Extended Kalman Filter with
Numerical Differentiation for Estimating Curvature and Torsion
|
eess.SY cs.SY eess.SP
|
The goal of target tracking is to estimate target position, velocity, and
acceleration in real time using position data. This paper introduces a novel
target-tracking technique that uses adaptive input and state estimation (AISE)
for real-time numerical differentiation to estimate velocity, acceleration, and
jerk from position data. These estimates are used to model the target motion
within the Frenet-Serret (FS) frame. By representing the model in SE(3), the
position and velocity are estimated using the invariant extended Kalman filter
(IEKF). The proposed method, called FS-IEKF-AISE, is illustrated by numerical
examples and compared to prior techniques.
|
2501.04263
|
KN-LIO: Geometric Kinematics and Neural Field Coupled LiDAR-Inertial
Odometry
|
cs.RO cs.AI eess.SP
|
Recent advancements in LiDAR-Inertial Odometry (LIO) have boosted a large
number of applications. However, traditional LIO systems tend to focus more on
localization rather than mapping, with maps consisting mostly of sparse
geometric elements, which is not ideal for downstream tasks. Recent emerging
neural field technology has great potential in dense mapping, but pure LiDAR
mapping struggles on highly dynamic vehicles. To mitigate this
challenge, we present a new solution that tightly couples geometric kinematics
with neural fields to enhance simultaneous state estimation and dense mapping
capabilities. We propose both semi-coupled and tightly coupled Kinematic-Neural
LIO (KN-LIO) systems that leverage online SDF decoding and iterated error-state
Kalman filtering to fuse laser and inertial data. Our KN-LIO minimizes
information loss and improves accuracy in state estimation, while also
accommodating asynchronous multi-LiDAR inputs. Evaluations on diverse
high-dynamic datasets demonstrate that our KN-LIO achieves performance on par
with or superior to existing state-of-the-art solutions in pose estimation and
offers improved dense mapping accuracy over pure LiDAR-based methods. The
relevant code and datasets will be made available at https://**.
|
2501.04266
|
Scaling Large Language Model Training on Frontier with Low-Bandwidth
Partitioning
|
cs.DC cs.AI
|
Scaling up Large Language Model (LLM) training involves fitting a tremendous
number of training parameters across a limited number of workers. However,
methods like ZeRO-3 that drastically reduce GPU memory pressure often incur
heavy communication to ensure global synchronization and consistency.
Established efforts such as ZeRO++ use secondary partitions to avoid inter-node
communications, given that intra-node GPU-GPU transfer generally has more
bandwidth and lower latency than inter-node connections. However, as more
capable infrastructure like Frontier, equipped with AMD GPUs, has emerged with
impressive computing capability, there is a need to investigate the
hardware topology and to develop targeted strategies to improve training
efficiency. In this work, we propose a collection of communication and
optimization strategies for ZeRO++ to reduce communication costs and improve
memory utilization. Specifically, we propose a 3-level hierarchical
partitioning for Frontier, currently the 2nd-ranked supercomputing cluster,
which leverages the various bandwidths across layers of communication
(GCD-GCD, GPU-GPU, and inter-node) to reduce communication overhead. For a
20B GPT model, we observe a 1.71x increase in TFLOPS per GPU compared with
ZeRO++, and a scaling efficiency of 0.94, for up to 384 GCDs.
|
2501.04268
|
Robotic Programmer: Video Instructed Policy Code Generation for Robotic
Manipulation
|
cs.RO cs.CV
|
Zero-shot generalization across various robots, tasks and environments
remains a significant challenge in robotic manipulation. Policy code generation
methods use executable code to connect high-level task descriptions and
low-level action sequences, leveraging the generalization capabilities of large
language models and atomic skill libraries. In this work, we propose Robotic
Programmer (RoboPro), a robotic foundation model capable of
perceiving visual information and following free-form instructions to perform
robotic manipulation with policy code in a zero-shot manner. To address low
efficiency and high cost in collecting runtime code data for robotic tasks, we
devise Video2Code to synthesize executable code from extensive videos
in-the-wild with an off-the-shelf vision-language model and a code-domain large
language model. Extensive experiments show that RoboPro achieves the
state-of-the-art zero-shot performance on robotic manipulation in both
simulators and real-world environments. Specifically, the zero-shot success
rate of RoboPro on RLBench surpasses the state-of-the-art model GPT-4o by
11.6%, which is even comparable to a strong supervised training baseline.
Furthermore, RoboPro is robust to variations on API formats and skill sets.
|
2501.04269
|
Open set label noise learning with robust sample selection and
margin-guided module
|
cs.CV
|
In recent years, the remarkable success of deep neural networks (DNNs) in
computer vision is largely due to large-scale, high-quality labeled datasets.
Training directly on real-world datasets with label noise may result in
overfitting. Traditional methods are limited to dealing with closed-set label
noise, where noisy training data has true class labels within the known label
space. However, some real-world datasets contain open-set label
noise, meaning that some samples belong to an unknown class outside the
known label space. To address the open-set label noise problem, we introduce a
method based on a Robust Sample Selection and Margin-Guided Module (RSS-MGM).
Firstly, unlike prior clean-sample selection approaches, which only select a
limited number of clean samples, a robust sample selection module combines
small-loss selection with high-confidence sample selection to obtain more clean
samples. Secondly, to efficiently distinguish open-set from closed-set label
noise, margin functions are designed to separate open-set and closed-set
data. Thirdly, different processing methods are selected for different types of
samples in order to fully utilize the data's prior information and optimize the
whole model. Furthermore, extensive experimental results with noisy labeled
data from benchmark datasets and real-world datasets, such as CIFAR-100N-C,
CIFAR80N-O, WebFG-469, and Food101N, indicate that our approach outperforms
many state-of-the-art label noise learning methods. In particular, it more
accurately separates open-set label noise samples from closed-set ones.
|
2501.04272
|
On weight and variance uncertainty in neural networks for regression
tasks
|
stat.ML cs.LG
|
We consider the problem of weight uncertainty proposed by [Blundell et al.
(2015). Weight uncertainty in neural network. In International conference on
machine learning, 1613-1622, PMLR.] in neural networks (NNs) specialized for
regression tasks. We further investigate the effect of variance uncertainty
in their model. We show that including the variance uncertainty can improve
the prediction performance of the Bayesian NN. Variance uncertainty enhances
the generalization of the model by considering the posterior distribution
over the variance parameter. We examine the generalization ability of the
proposed model using a function approximation example and further illustrate
it with the riboflavin genetic data set. We explore fully connected dense
networks and dropout NNs with Gaussian and spike-and-slab priors,
respectively, for the network weights.
|
2501.04273
|
Frenet-Serret-Based Trajectory Prediction
|
eess.SY cs.SY eess.SP
|
Trajectory prediction is a crucial element of guidance, navigation, and
control systems. This paper presents two novel trajectory-prediction methods
based on real-time position measurements and adaptive input and state
estimation (AISE). The first method, called AISE/va, uses position measurements
to estimate the target velocity and acceleration. The second method, called
AISE/FS, models the target trajectory as a 3D curve using the Frenet-Serret
formulas, which require estimates of velocity, acceleration, and jerk. To
estimate velocity, acceleration, and jerk in real time, AISE computes first,
second, and third derivatives of the position measurements. AISE does not rely
on assumptions about the target maneuver, measurement noise, or disturbances.
For trajectory prediction, both methods use measurements of the target position
and estimates of its derivatives to extrapolate from the current position. The
performance of AISE/va and AISE/FS is compared numerically with the
$\alpha$-$\beta$-$\gamma$ filter, which shows that AISE/FS provides more
accurate trajectory prediction than AISE/va and traditional methods, especially
for complex target maneuvers.
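The Frenet-Serret quantities that AISE/FS needs follow from the estimated derivatives via standard identities; the sketch below assumes the velocity, acceleration, and jerk estimates are already available (in the paper they come from AISE):

```python
import numpy as np

def frenet_curvature_torsion(v, a, j):
    """Curvature and torsion of a 3D trajectory from its velocity v,
    acceleration a, and jerk j (first, second, and third derivatives of
    position), using the standard Frenet-Serret identities
        kappa = |v x a| / |v|^3,
        tau   = ((v x a) . j) / |v x a|^2."""
    c = np.cross(v, a)
    kappa = np.linalg.norm(c) / np.linalg.norm(v) ** 3
    tau = np.dot(c, j) / np.dot(c, c)
    return kappa, tau
```

As a sanity check, for a helix of radius R and pitch b these return the known constants kappa = R/(R^2+b^2) and tau = b/(R^2+b^2).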
|
2501.04275
|
Adaptive Numerical Differentiation for Extremum Seeking with Sensor
Noise
|
eess.SY cs.SY
|
Extremum-seeking control (ESC) is widely used to optimize performance when
the system dynamics are uncertain. However, sensitivity to sensor noise is an
important issue in ESC implementation due to the use of high-pass filters or
gradient estimators. To reduce the sensitivity of ESC to noise, this paper
investigates the use of adaptive input and state estimation (AISE) for
numerical differentiation. In particular, this paper develops extremum-seeking
control with adaptive input and state estimation (ESC/AISE), where the
high-pass filter of ESC is replaced by AISE to improve performance under sensor
noise. The effectiveness of ESC/AISE is illustrated via numerical examples.
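For context, the classic perturbation-based ESC loop that the paper modifies can be sketched as follows; this shows the high-pass/demodulation structure that AISE replaces, and the parameter values and running-average filter here are illustrative, not the paper's ESC/AISE design:

```python
import numpy as np

def extremum_seeking(f, theta0, amp=0.1, omega=20.0, gain=0.5,
                     dt=0.01, steps=20000):
    """Minimal discrete-time perturbation-based extremum seeking. A
    sinusoidal dither probes the unknown map f; demodulating the
    measured output by the same sinusoid yields a gradient estimate,
    which is integrated to drive theta toward the optimum. The
    high-pass filter is approximated by subtracting a slow running
    average of the output."""
    theta = theta0
    avg = f(theta0)
    for k in range(steps):
        t = k * dt
        y = f(theta + amp * np.sin(omega * t))   # perturbed measurement
        avg += 0.01 * (y - avg)                  # slow average ~ DC component
        grad_est = (y - avg) * np.sin(omega * t) # demodulation
        theta -= gain * grad_est * dt            # gradient-descent step
    return theta
```

On a noise-free quadratic with its minimum at 3, `extremum_seeking(lambda th: (th - 3.0)**2, 0.0)` converges close to 3; the demodulation step is exactly where sensor noise enters, motivating the AISE replacement.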
|
2501.04276
|
Bridging Adaptivity and Safety: Learning Agile Collision-Free Locomotion
Across Varied Physics
|
cs.RO cs.LG
|
Real-world legged locomotion systems often need to reconcile agility and
safety for different scenarios. Moreover, the underlying dynamics are often
unknown and time-variant (e.g., payload, friction). In this paper, we introduce
BAS (Bridging Adaptivity and Safety), which builds upon the pipeline of prior
work Agile But Safe (ABS) (He et al.) and is designed to provide adaptive safety
even in dynamic environments with uncertainties. BAS involves an agile policy
to avoid obstacles rapidly and a recovery policy to prevent collisions, a
physical parameter estimator that is concurrently trained with the agile policy,
and a learned control-theoretic RA (reach-avoid) value network that governs the
policy switch. Also, the agile policy and RA network are both conditioned on
physical parameters to make them adaptive. To mitigate the distribution shift
issue, we further introduce an on-policy fine-tuning phase for the estimator to
enhance its robustness and accuracy. The simulation results show that BAS
achieves 50% better safety than baselines in dynamic environments while
maintaining a higher speed on average. In real-world experiments, BAS shows its
capability in complex environments with unknown physics (e.g., slippery floors
with unknown frictions, unknown payloads up to 8kg), while baselines lack
adaptivity, leading to collisions or degraded agility. As a result, BAS
achieves a 19.8% increase in speed and a 2.36-times lower collision rate
than ABS in the real world. Videos: https://adaptive-safe-locomotion.github.io.
|
2501.04279
|
OpenIN: Open-Vocabulary Instance-Oriented Navigation in Dynamic Domestic
Environments
|
cs.RO
|
In daily domestic settings, frequently used objects like cups often have
unfixed positions and multiple instances within the same category, and their
carriers frequently change as well. As a result, it becomes challenging for a
robot to efficiently navigate to a specific instance. To tackle this challenge,
the robot must continuously capture scene changes and update its plans.
However, current object navigation approaches primarily focus on the semantic
level and lack the ability to dynamically update scene representation. In
contrast, this paper captures the relationships between frequently used objects
and their static carriers. It constructs an open-vocabulary
Carrier-Relationship Scene Graph (CRSG) and updates the carrying status during
robot navigation to reflect the dynamic changes of the scene. Based on the
CRSG, we further propose an instance navigation strategy that models the
navigation process as a Markov Decision Process. At each step, decisions are
informed by the Large Language Model's commonsense knowledge and
visual-language feature similarity. We designed a series of long-sequence
navigation tasks for frequently used everyday items in the Habitat simulator.
The results demonstrate that by updating the CRSG, the robot can efficiently
navigate to moved targets. Additionally, we deployed our algorithm on a real
robot and validated its practical effectiveness. The project page can be found
here: https://OpenIN-nav.github.io.
|
2501.04281
|
Cluster & Disperse: a general air conflict resolution heuristic using
unsupervised learning
|
cs.RO cs.LG physics.soc-ph
|
We provide a general and malleable heuristic for the air conflict resolution
problem. This heuristic is based on a new neighborhood structure for searching
the solution space of trajectories and flight-levels. Using unsupervised
learning, the core idea of our heuristic is to cluster the conflict points and
disperse them in various flight levels. Our first algorithm is called Cluster &
Disperse and in each iteration it assigns the most problematic flights in each
cluster to another flight-level. In effect, we shuffle them between the
flight-levels until we achieve a well-balanced configuration. The Cluster &
Disperse algorithm then uses any horizontal plane conflict resolution algorithm
as a subroutine to solve these well-balanced instances. Nevertheless, we
develop a novel algorithm for the horizontal plane based on a similar idea:
we cluster and disperse the conflict points spatially within the same
flight level using gradient descent and a social force. We use a novel
maneuver, based on the aviation routine of Radius-to-Fix legs, that makes
flights travel on an arc instead of a straight path. Our algorithms can
handle a high density of flights within a reasonable computation time. We put
their performance in context with some notable algorithms from the literature.
Being a general framework, a particular strength of Cluster & Disperse is
its malleability in allowing various constraints regarding the aircraft or the
environment to be integrated with ease. This is in contrast to, for
instance, models based on mixed integer programming.
|
2501.04283
|
Enhancing Scene Classification in Cloudy Image Scenarios: A
Collaborative Transfer Method with Information Regulation Mechanism using
Optical Cloud-Covered and SAR Remote Sensing Images
|
cs.CV cs.AI eess.IV
|
In remote sensing scene classification, leveraging the transfer methods with
well-trained optical models is an efficient way to overcome label scarcity.
However, cloud contamination leads to optical information loss and significant
impacts on feature distribution, challenging the reliability and stability of
transferred target models. Common solutions include cloud removal for optical
data or directly using synthetic aperture radar (SAR) data in the target
domain. However, cloud removal requires substantial auxiliary data for support
and pre-training, while directly using SAR disregards the unobstructed portions
of optical data. This study presents a scene classification transfer method
that synergistically combines multi-modality data, which aims to transfer the
source domain model trained on cloud-free optical data to the target domain that
includes both cloudy optical and SAR data at low cost. Specifically, the
framework incorporates two parts: (1) the collaborative transfer strategy,
based on knowledge distillation, enables the efficient prior knowledge transfer
across heterogeneous data; (2) the information regulation mechanism (IRM) is
proposed to address the modality imbalance issue during transfer. It employs
auxiliary models to measure the contribution discrepancy of each modality, and
automatically balances the information utilization of modalities during the
target model learning process at the sample-level. The transfer experiments
were conducted on simulated and real cloud datasets, demonstrating the superior
performance of the proposed method compared to other solutions in cloud-covered
scenarios. We also verified the importance and limitations of IRM, and further
discussed and visualized the modality imbalance problem during the model
transfer. Codes are available at https://github.com/wangyuze-csu/ESCCS
|
2501.04284
|
ContextMRI: Enhancing Compressed Sensing MRI through Metadata
Conditioning
|
cs.CV cs.LG
|
Compressed sensing MRI seeks to accelerate MRI acquisition processes by
sampling fewer k-space measurements and then reconstructing the missing data
algorithmically. The success of these approaches often relies on strong priors
or learned statistical models. While recent diffusion model-based priors have
shown great potential, previous methods typically ignore clinically available
metadata (e.g. patient demographics, imaging parameters, slice-specific
information). In practice, metadata contains meaningful cues about the anatomy
and acquisition protocol, suggesting it could further constrain the
reconstruction problem. In this work, we propose ContextMRI, a text-conditioned
diffusion model for MRI that integrates granular metadata into the
reconstruction process. We train a pixel-space diffusion model directly on
minimally processed, complex-valued MRI images. During inference, metadata is
converted into a structured text prompt and fed to the model via CLIP text
embeddings. By conditioning the prior on metadata, we unlock more accurate
reconstructions and show consistent gains across multiple datasets,
acceleration factors, and undersampling patterns. Our experiments demonstrate
that increasing the fidelity of metadata, ranging from slice location and
contrast to patient age, sex, and pathology, systematically boosts
reconstruction performance. This work highlights the untapped potential of
leveraging clinical context for inverse problems and opens a new direction for
metadata-driven MRI reconstruction.
|
2501.04285
|
Separate Source Channel Coding Is Still What You Need: An LLM-based
Rethinking
|
cs.IT eess.SP math.IT
|
Along with the proliferating research interest in Semantic Communication
(SemCom), Joint Source Channel Coding (JSCC) has dominated attention due to
its widely assumed advantage in efficiently delivering information semantics.
Nevertheless, this paper challenges the conventional JSCC paradigm
and advocates the adoption of Separate Source Channel Coding (SSCC) to exploit
its greater underlying degrees of freedom for optimization. We demonstrate that
SSCC, by leveraging the strengths of a Large Language Model (LLM) for source
coding complemented by an Error Correction Code Transformer (ECCT) for channel
decoding, offers superior performance over JSCC. Our proposed framework also
effectively highlights the compatibility challenges between SemCom approaches
and digital communication systems, particularly concerning the resource costs
associated with the transmission of high precision floating point numbers.
Through comprehensive evaluations, we establish that empowered by LLM-based
compression and ECCT-enhanced error correction, SSCC remains a viable and
effective solution for modern communication systems. In other words, separate
source and channel coding is still what we need!
|
2501.04286
|
Mapping the Edge of Chaos: Fractal-Like Boundaries in The Trainability
of Decoder-Only Transformer Models
|
cs.LG cs.AI
|
In the realm of fractal geometry, intricate structures emerge from simple
iterative processes that partition parameter spaces into regions of stability
and instability. Likewise, training large language models involves iteratively
applying update functions, such as Adam, where even slight hyperparameter
adjustments can shift the training process from convergence to divergence.
Recent evidence from miniature neural networks suggests that the boundary
separating these outcomes displays fractal characteristics. Building on these
insights, this study extends them to medium-sized, decoder-only transformer
architectures by employing a more consistent convergence measure and examining
the learning rate hyperparameter landscape for attention and fully connected
layers. The results show that the trainability frontier is not a simple
threshold; rather, it forms a self-similar yet seemingly random structure at
multiple scales, with statistically consistent and repeating patterns. Within
this landscape, a region of stable convergence is surrounded by a complex
chaotic border, illustrating the sensitive nature of the underlying training
dynamics.
|
2501.04287
|
ElasticZO: A Memory-Efficient On-Device Learning with Combined Zeroth-
and First-Order Optimization
|
cs.LG
|
Zeroth-order (ZO) optimization is being recognized as a simple yet powerful
alternative to standard backpropagation (BP)-based training. Notably, ZO
optimization allows for training with only forward passes and (almost) the same
memory as inference, making it well-suited for edge devices with limited
computing and memory resources. In this paper, we propose ZO-based on-device
learning (ODL) methods for full-precision and 8-bit quantized deep neural
networks (DNNs), namely ElasticZO and ElasticZO-INT8. ElasticZO lies in the
middle between pure ZO- and pure BP-based approaches, and is based on the idea
of employing BP for the last few layers and ZO for the remaining layers.
ElasticZO-INT8 achieves integer arithmetic-only ZO-based training for the first
time, by incorporating a novel method for computing quantized ZO gradients from
integer cross-entropy loss values. Experimental results on the classification
datasets show that ElasticZO effectively addresses the slow convergence of
vanilla ZO and shrinks the accuracy gap to BP-based training. Compared to
vanilla ZO, ElasticZO achieves 5.2-9.5% higher accuracy with only 0.072-1.7%
memory overhead, and can handle fine-tuning tasks as well as full training.
ElasticZO-INT8 further reduces the memory usage and training time by 1.46-1.60x
and 1.38-1.42x without compromising the accuracy. These results demonstrate a
better tradeoff between accuracy and training cost compared to pure ZO- and
BP-based approaches, and also highlight the potential of ZO optimization in
on-device learning.
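The forward-pass-only gradient estimate at the heart of ZO training can be sketched as follows; this is the generic two-point estimator, not ElasticZO's exact update rule or its INT8 variant:

```python
import numpy as np

def zo_gradient(f, theta, eps=1e-3, n_samples=8, rng=None):
    """Two-point zeroth-order gradient estimate: perturb the parameters
    along random Gaussian directions and use only forward evaluations
    of the loss f, averaging the resulting directional-derivative
    estimates. No backpropagation is required, so memory usage stays
    close to that of inference."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        # directional derivative from two forward passes only
        g += (f(theta + eps * u) - f(theta - eps * u)) / (2 * eps) * u
    return g / n_samples
```

The estimate is unbiased for smooth losses as eps shrinks, but its variance grows with dimension, which is one source of the slow convergence that ElasticZO mitigates by keeping BP for the last few layers.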
|
2501.04288
|
An Analysis of Model Robustness across Concurrent Distribution Shifts
|
cs.LG
|
Machine learning models, meticulously optimized for source data, often fail
to predict target data when faced with distribution shifts (DSs). Previous
benchmarking studies, though extensive, have mainly focused on simple DSs.
Recognizing that DSs often occur in more complex forms in real-world scenarios,
we broadened our study to include multiple concurrent shifts, such as unseen
domain shifts combined with spurious correlations. We evaluated 26 algorithms
that range from simple heuristic augmentations to zero-shot inference using
foundation models, across 168 source-target pairs from eight datasets. Our
analysis of over 100K models reveals that (i) concurrent DSs typically worsen
performance compared to a single shift, with certain exceptions, (ii) if a
model improves generalization for one distribution shift, it tends to be
effective for others, and (iii) heuristic data augmentations achieve the best
overall performance on both synthetic and real-world datasets.
|
2501.04292
|
MADUV: The 1st INTERSPEECH Mice Autism Detection via Ultrasound
Vocalization Challenge
|
cs.SD cs.AI cs.LG eess.AS
|
The Mice Autism Detection via Ultrasound Vocalization (MADUV) Challenge
introduces the first INTERSPEECH challenge focused on detecting autism spectrum
disorder (ASD) in mice through their vocalizations. Participants are tasked
with developing models to automatically classify mice as either wild-type or
ASD models based on recordings with a high sampling rate. Our baseline system
employs a simple CNN-based classification using three different spectrogram
features. Results demonstrate the feasibility of automated ASD detection, with
the considered audible-range features achieving the best performance (UAR of
0.600 for segment-level and 0.625 for subject-level classification). This
challenge bridges speech technology and biomedical research, offering
opportunities to advance our understanding of ASD models through machine
learning approaches. The findings suggest promising directions for vocalization
analysis and highlight the potential value of audible and ultrasound
vocalizations in ASD detection.
|
2501.04293
|
TADFormer : Task-Adaptive Dynamic Transformer for Efficient Multi-Task
Learning
|
cs.CV
|
Transfer learning paradigm has driven substantial advancements in various
vision tasks. However, as state-of-the-art models continue to grow, classical
full fine-tuning often becomes computationally impractical, particularly in
the multi-task learning (MTL) setup, where training complexity increases
proportionally to the number of tasks. Consequently, recent studies have explored
Parameter-Efficient Fine-Tuning (PEFT) for MTL architectures. Despite some
progress, these approaches still exhibit limitations in capturing fine-grained,
task-specific features that are crucial to MTL. In this paper, we introduce
Task-Adaptive Dynamic transFormer, termed TADFormer, a novel PEFT framework
that performs task-aware feature adaptation in a fine-grained manner by
dynamically considering task-specific input contexts. TADFormer proposes the
parameter-efficient prompting for task adaptation and the Dynamic Task Filter
(DTF) to capture task information conditioned on input contexts. Experiments on
the PASCAL-Context benchmark demonstrate that the proposed method achieves
higher accuracy in dense scene understanding tasks, while reducing the number
of trainable parameters by up to 8.4 times when compared to full fine-tuning of
MTL models. TADFormer also demonstrates superior parameter efficiency and
accuracy compared to recent PEFT methods.
|
2501.04299
|
Circuit Complexity Bounds for Visual Autoregressive Model
|
stat.ML cs.AI cs.CC cs.CL cs.LG
|
Understanding the expressive ability of a specific model is essential for
grasping its capacity limitations. Recently, several studies have established
circuit complexity bounds for Transformer architecture. Besides, the Visual
AutoRegressive (VAR) model has risen to be a prominent method in the field of
image generation, outperforming previous techniques, such as Diffusion
Transformers, in generating high-quality images. We investigate the circuit
complexity of the VAR model and establish a bound in this study. Our primary
result demonstrates that the VAR model can be simulated by a
uniform $\mathsf{TC}^0$ threshold circuit with hidden dimension $d \leq O(n)$
and $\mathrm{poly}(n)$ precision. This is the first study to rigorously
highlight the limitations in the expressive power of VAR models despite their
impressive performance. We believe our findings will offer valuable insights
into the inherent constraints of these models and guide the development of more
efficient and expressive architectures in the future.
|
2501.04300
|
Handling Incomplete Heterogeneous Data using a Data-Dependent Kernel
|
cs.LG
|
Handling incomplete data in real-world applications is a critical challenge
due to two key limitations of existing methods: (i) they are primarily designed
for numeric data and struggle with categorical or heterogeneous/mixed datasets;
(ii) they assume that data is missing completely at random, which is often not
the case in practice -- in reality, data is missing in patterns, leading to
biased results if these patterns are not accounted for. To address these two
limitations, this paper presents a novel approach to handling missing values
using the Probability Mass Similarity Kernel (PMK), a data-dependent kernel,
which makes no assumptions about data types or missing mechanisms. It
eliminates the need for prior knowledge or extensive pre-processing steps and
instead leverages the distribution of observed data. Our method unifies the
representation of diverse data types by capturing more meaningful pairwise
similarities and enhancing downstream performance. We evaluated our approach
across over 10 datasets with numerical-only, categorical-only, and mixed
features under different missing mechanisms and rates. Across both
classification and clustering tasks, our approach consistently outperformed
existing techniques, demonstrating its robustness and effectiveness in managing
incomplete heterogeneous data.
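The abstract does not spell out the kernel's formula, but its key idea, similarity driven by the probability mass of observed values, with missing entries integrated out rather than imputed, can be sketched as follows (an illustrative stand-in, not the paper's exact PMK definition; all function names are ours):

```python
import numpy as np
import pandas as pd

def attribute_similarity(a, b, obs: pd.Series) -> float:
    """Data-dependent similarity of two values of a single attribute.

    Illustrative sketch inspired by the abstract (the exact kernel is not
    reproduced): similarity is driven purely by the observed distribution,
    works for numeric and categorical columns alike, and handles NaN
    without imputation.
    """
    obs = obs.dropna()
    # A missing value is integrated out: average over observed values.
    if pd.isna(a) and pd.isna(b):
        sample = obs.iloc[:20]
        return float(np.mean([attribute_similarity(x, y, obs)
                              for x in sample for y in sample]))
    if pd.isna(a):
        a, b = b, a
    if pd.isna(b):
        return float(np.mean([attribute_similarity(a, x, obs)
                              for x in obs.iloc[:20]]))
    if pd.api.types.is_numeric_dtype(obs):
        lo, hi = min(a, b), max(a, b)
        # Less probability mass lying between the two values => more similar.
        return 1.0 - float(((obs >= lo) & (obs <= hi)).mean())
    # Categorical: a match on a rare value counts more than on a common one.
    p = float((obs == a).mean())
    return 1.0 - p if a == b else 0.0

def pmk_kernel(x_row: pd.Series, y_row: pd.Series, df: pd.DataFrame) -> float:
    """Average per-attribute similarity between two rows of a mixed table."""
    sims = [attribute_similarity(x_row[c], y_row[c], df[c]) for c in df.columns]
    return float(np.mean(sims))
```

The resulting pairwise similarities can then feed any kernel-based classifier or clustering method downstream.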
|
2501.04302
|
H-MBA: Hierarchical MamBa Adaptation for Multi-Modal Video Understanding
in Autonomous Driving
|
cs.CV cs.AI
|
With the prevalence of Multimodal Large Language Models(MLLMs), autonomous
driving has encountered new opportunities and challenges. In particular,
multi-modal video understanding is critical to interactively analyze what will
happen in the procedure of autonomous driving. However, videos of such
dynamic scenes often contain complex spatio-temporal movements, which
restricts the generalization capacity of existing MLLMs in this field. To
bridge the gap, we propose a novel Hierarchical Mamba Adaptation (H-MBA)
framework to fit the complicated motion changes in autonomous driving videos.
Specifically, our H-MBA consists of two distinct modules: Context Mamba
(C-Mamba) and Query Mamba (Q-Mamba). First, C-Mamba contains various
types of structure state space models, which can effectively capture
multi-granularity video context for different temporal resolutions. Second,
Q-Mamba flexibly transforms the current frame as the learnable query, and
attentively selects multi-granularity video context into the query. Consequently,
it can adaptively integrate all the video contexts of multi-scale temporal
resolutions to enhance video understanding. Via a plug-and-play paradigm in
MLLMs, our H-MBA shows remarkable performance on multi-modal video tasks in
autonomous driving; e.g., for risk object detection, it outperforms the
previous SOTA method by a 5.5% mIoU improvement.
|
2501.04303
|
Multimodal Graph Contrastive Learning and Prompt for ChartQA
|
cs.CL
|
ChartQA presents significant challenges due to the complex distribution of
chart elements and the implicit patterns embedded within the underlying data.
In this chapter, we have developed a joint multimodal scene graph for charts,
explicitly representing the relationships between chart elements and their
associated patterns.
Our proposed multimodal scene graph consists of two components: a visual
graph and a textual graph, each designed to capture the structural and semantic
information within the chart. To unify representations across these different
modalities, we introduce a multimodal graph contrastive learning approach that
learns unified representations by maximizing similarity between nodes
representing the same object across multimodal graphs. The learned graph
representations can be seamlessly incorporated into a transformer decoder as a
soft prompt.
Additionally, given the growing need for Multimodal Large Language Models
(MLLMs) in zero-shot scenarios, we have designed Chain-of-Thought (CoT) prompts
for MLLMs to reduce hallucinations. We tested both methods on public benchmarks
such as ChartQA, OpenCQA, and ChartX, demonstrating improved performance and
validating the effectiveness of our proposed methods.
|
2501.04304
|
DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion
Models
|
cs.CV cs.LG
|
Despite the widespread use of text-to-image diffusion models across various
tasks, their computational and memory demands limit practical applications. To
mitigate this issue, quantization of diffusion models has been explored. It
reduces memory usage and computational costs by compressing weights and
activations into lower-bit formats. However, existing methods often struggle to
preserve both image quality and text-image alignment, particularly in
lower-bit ($<$8-bit) quantization. In this paper, we analyze the challenges
associated with quantizing text-to-image diffusion models from a distributional
perspective. Our analysis reveals that activation outliers play a crucial role
in determining image quality. Additionally, we identify distinctive patterns in
cross-attention scores, which significantly affect text-image alignment. To
address these challenges, we propose Distribution-aware Group Quantization
(DGQ), a method that identifies and adaptively handles pixel-wise and
channel-wise outliers to preserve image quality. Furthermore, DGQ applies
prompt-specific logarithmic quantization scales to maintain text-image
alignment. Our method demonstrates remarkable performance on datasets such as
MS-COCO and PartiPrompts. We are the first to successfully achieve low-bit
quantization of text-to-image diffusion models without requiring additional
fine-tuning of weight quantization parameters. Code is available at
https://github.com/ugonfor/DGQ.
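The group-quantization idea the abstract builds on, giving each small group of values its own scale so an outlier only degrades the resolution of its own group, plus a simple logarithmic quantizer can be sketched as follows (our simplification; DGQ's exact pixel-wise/channel-wise outlier handling and prompt-specific scales are not reproduced here):

```python
import numpy as np

def group_quantize(x: np.ndarray, group_size: int = 8, bits: int = 8):
    """Per-group absmax quantization (illustrative sketch).

    The flattened tensor is split into contiguous groups; each group gets
    its own scale, so a single outlier only hurts its own group rather
    than the whole tensor.
    """
    qmax = 2 ** (bits - 1) - 1
    flat = x.reshape(-1, group_size)
    scales = np.abs(flat).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0            # avoid division by zero for all-zero groups
    q = np.clip(np.round(flat / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def group_dequantize(q: np.ndarray, scales: np.ndarray, shape) -> np.ndarray:
    """Reconstruct a float tensor from per-group integer codes and scales."""
    return (q.astype(np.float32) * scales).reshape(shape)

def log_quantize(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Logarithmic quantization of scores in (0, 1]: small values keep fine
    resolution, matching heavy-tailed attention statistics (our
    simplification, not DGQ's exact scheme)."""
    qmax = 2 ** bits - 1
    levels = np.clip(np.round(-np.log2(np.maximum(x, 1e-8))), 0, qmax)
    return 2.0 ** (-levels)
```

A smaller `group_size` isolates outliers better at the cost of storing more scales, which is the trade-off group quantization navigates.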
|
2501.04305
|
Physics-Informed Super-Resolution Diffusion for 6D Phase Space
Diagnostics
|
cs.LG math.DS physics.acc-ph
|
Adaptive physics-informed super-resolution diffusion is developed for
non-invasive virtual diagnostics of the 6D phase space density of charged
particle beams. An adaptive variational autoencoder (VAE) embeds initial beam
condition images and scalar measurements to a low-dimensional latent space from
which a $32^6$ pixel 6D tensor representation of the beam's 6D phase space density
is generated. Projecting from a 6D tensor generates physically consistent 2D
projections. Physics-guided super-resolution diffusion transforms
low-resolution images of the 6D density to high resolution 256x256 pixel
images. Unsupervised adaptive latent space tuning enables tracking of
time-varying beams without knowledge of time-varying initial conditions. The
method is demonstrated with experimental data and multi-particle simulations at
the HiRES UED. The general approach is applicable to a wide range of complex
dynamic systems evolving in high-dimensional phase space. The method is shown
to be robust to distribution shift without re-training.
|
2501.04306
|
LLM4SR: A Survey on Large Language Models for Scientific Research
|
cs.CL cs.DL
|
In recent years, the rapid advancement of Large Language Models (LLMs) has
transformed the landscape of scientific research, offering unprecedented
support across various stages of the research cycle. This paper presents the
first systematic survey dedicated to exploring how LLMs are revolutionizing the
scientific research process. We analyze the unique roles LLMs play across four
critical stages of research: hypothesis discovery, experiment planning and
implementation, scientific writing, and peer reviewing. Our review
comprehensively showcases the task-specific methodologies and evaluation
benchmarks. By identifying current challenges and proposing future research
directions, this survey not only highlights the transformative potential of
LLMs, but also aims to inspire and guide researchers and practitioners in
leveraging LLMs to advance scientific inquiry. Resources are available at the
following repository: https://github.com/du-nlp-lab/LLM4SR
|
2501.04307
|
Finite Dimensional Lattice Codes with Self Error-Detection and Retry
Decoding
|
cs.IT math.IT
|
Lattice codes with optimal decoding coefficient are capacity-achieving when
dimension $N \rightarrow \infty$. In communications systems, finite dimensional
lattice codes are considered, where the optimal decoding coefficients may still
fail decoding even when $R< C$. This paper presents a new retry decoding scheme
for finite dimensional lattice-based transmissions. When decoding errors are
detected, the receiver is allowed to adjust the value of decoding coefficients
and retry decoding, instead of requesting a re-transmission immediately which
causes high latency. This scheme is considered for both point-to-point single
user transmission and compute-forward (CF) relaying with power unconstrained
relays, by which a lower word error rate (WER) is achieved than conventional
one-shot decoding with optimal coefficients. A lattice/lattice code
construction, called CRC-embedded lattice/lattice code, is presented to provide
physical layer error detection to enable retry decoding. For CF relaying, a
shaping lattice design is given so that the decoder is able to detect errors
from CF linear combinations without requiring individual users' messages. The
numerical results show gains of up to 1.31 dB and 1.08 dB at error probability
$10^{-5}$ for a 2-user CF relay using 128- and 256-dimensional lattice codes
with optimized CRC length and 2 decoding trials in total.
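A minimal sketch of the retry-decoding loop on a toy binary lattice follows (the paper's CRC-embedded polar-code lattice construction is not reproduced; `encode` and `retry_decode` are our illustrative names):

```python
import numpy as np
import zlib

def encode(msg_bits: np.ndarray) -> np.ndarray:
    """Append a CRC32 of the message to form a toy 'CRC-embedded' codeword
    on the binary lattice {0,1}^n (illustration only)."""
    crc_bytes = zlib.crc32(bytes(msg_bits)).to_bytes(4, "big")
    crc_bits = np.unpackbits(np.frombuffer(crc_bytes, np.uint8))
    return np.concatenate([msg_bits, crc_bits]).astype(float)

def retry_decode(y: np.ndarray, alphas):
    """Retry decoding: scale the received vector by each candidate
    coefficient in turn, decode to the nearest lattice point, and accept
    the first decode whose embedded CRC verifies -- instead of immediately
    requesting a high-latency retransmission."""
    for a in alphas:
        x_hat = np.clip(np.round(a * y), 0, 1).astype(np.uint8)
        msg, crc_bits = x_hat[:-32], x_hat[-32:]
        crc = int.from_bytes(np.packbits(crc_bits).tobytes(), "big")
        if crc == zlib.crc32(bytes(msg)):
            return msg, a
    return None, None
```

In the paper's setting the first candidate would be $\alpha_{MMSE}$ and later candidates perturbations of it; here the CRC check plays the self-error-detection role that licenses each retry.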
|
2501.04308
|
FSC-loss: A Frequency-domain Structure Consistency Learning Approach for
Signal Data Recovery and Reconstruction
|
eess.SP cs.LG
|
A core challenge for signal data recovery is to model the distribution of
signal matrix (SM) data based on measured low-quality data in biomedical
engineering of magnetic particle imaging (MPI). Acquiring a high-resolution
(high-quality) SM requires meticulous measurements at numerous positions in
the field-of-view, which proves time-consuming (measuring a 37x37x37 SM takes
about 32 hours). To improve reconstructed signal quality and shorten SM
measurement time, existing methods explore generating a high-resolution SM
from a time-saving measured low-resolution SM (a 9x9x9 SM takes only about
0.5 hours). However, previous methods show poor performance
for high-frequency signal recovery in SM. To achieve a high-resolution SM
recovery and shorten its acquisition time, we propose a frequency-domain
structure consistency loss function and data component embedding strategy to
model global and local structural information of SM. We adopt a
transformer-based network to evaluate this function and the strategy. We
evaluate our methods and state-of-the-art (SOTA) methods on the two simulation
datasets and four public measured SMs in Open MPI Data. The results show that
our method outperforms the SOTA methods in high-frequency structural signal
recovery. Additionally, our method can recover a high-resolution SM with a
clear high-frequency structure from a down-sampling factor of 16 in less than
15 seconds, accelerating acquisition more than 60 times compared with
measurement-based HR SM acquisition while achieving the minimum error
(nRMSE=0.041). Moreover, our method is applied in our three in-house MPI
systems and boosts their signal reconstruction performance.
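The abstract does not give the loss formula, so here is one plausible frequency-domain structure-consistency term, with high-frequency bins up-weighted so that fine structure dominates the penalty (purely our sketch, not the paper's FSC-loss definition):

```python
import numpy as np

def fsc_loss(pred: np.ndarray, target: np.ndarray, hf_weight: float = 2.0) -> float:
    """Illustrative frequency-domain structure-consistency loss.

    Both images are compared in the 2D Fourier domain; bins farther from
    the DC component (higher spatial frequency) receive a larger weight,
    up to `hf_weight` at the corners.
    """
    F_pred = np.fft.fftshift(np.fft.fft2(pred))
    F_tgt = np.fft.fftshift(np.fft.fft2(target))
    h, w = pred.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)        # distance from DC bin
    weight = 1.0 + (hf_weight - 1.0) * radius / radius.max()
    return float(np.mean(weight * np.abs(F_pred - F_tgt)))
```

Used as an auxiliary term alongside a pixel-space loss, such a penalty pushes the network toward matching the high-frequency content that plain L2 training tends to neglect.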
|
2501.04315
|
RoRA: Efficient Fine-Tuning of LLM with Reliability Optimization for
Rank Adaptation
|
cs.LG cs.AI
|
Fine-tuning helps large language models (LLM) recover degraded information
and enhance task performance. Although Low-Rank Adaptation (LoRA) is widely
used and effective for fine-tuning, we have observed that its scaling factor
can limit or even reduce performance as the rank size increases. To address
this issue, we propose RoRA (Rank-adaptive Reliability Optimization), a simple
yet effective method for optimizing LoRA's scaling factor. By replacing
$\alpha/r$ with $\alpha/\sqrt{r}$, RoRA ensures improved performance as rank
size increases. Moreover, RoRA enhances low-rank adaptation in fine-tuning
uncompressed models and excels in the more challenging task of accuracy
recovery when fine-tuning pruned models. Extensive experiments demonstrate the
effectiveness of RoRA in fine-tuning both uncompressed and pruned models. RoRA
surpasses the state-of-the-art (SOTA) in average accuracy and robustness on
LLaMA-7B/13B, LLaMA2-7B, and LLaMA3-8B, specifically outperforming LoRA and
DoRA by 6.5% and 2.9% on LLaMA-7B, respectively. In pruned model fine-tuning,
RoRA shows significant advantages; for SHEARED-LLAMA-1.3, a LLaMA-7B with 81.4%
pruning, RoRA achieves 5.7% higher average accuracy than LoRA and 3.9% higher
than DoRA.
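The scaling-factor change is small enough to show in a few lines; below is a minimal LoRA-style layer where the only difference between standard LoRA and RoRA is $\alpha/r$ versus $\alpha/\sqrt{r}$ (a sketch; the class name and initialization details are our assumptions):

```python
import numpy as np

class LoRALinear:
    """Minimal low-rank-adapted linear layer illustrating the scaling rule.

    Standard LoRA scales the update B @ A by alpha / r, which shrinks the
    update as the rank r grows; RoRA's alpha / sqrt(r) keeps the update's
    magnitude stable across rank sizes.
    """
    def __init__(self, w, rank, alpha=16.0, use_rora=True, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = w.shape
        self.w = w                                  # frozen pretrained weight
        self.A = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(rank, d_in))
        self.B = np.zeros((d_out, rank))            # zero init: no change at start
        self.scale = alpha / np.sqrt(rank) if use_rora else alpha / rank

    def __call__(self, x):
        return x @ self.w.T + self.scale * (x @ self.A.T @ self.B.T)
```

For example, at rank 64 with alpha 16, LoRA's scale is 0.25 while RoRA's is 2.0, which is why LoRA's update can vanish as the rank grows.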
|
2501.04316
|
Who Does the Giant Number Pile Like Best: Analyzing Fairness in Hiring
Contexts
|
cs.CL
|
Large language models (LLMs) are increasingly being deployed in high-stakes
applications like hiring, yet their potential for unfair decision-making and
outcomes remains understudied, particularly in generative settings. In this
work, we examine the fairness of LLM-based hiring systems through two
real-world tasks: resume summarization and retrieval. By constructing a
synthetic resume dataset and curating job postings, we investigate whether
model behavior differs across demographic groups and is sensitive to
demographic perturbations. Our findings reveal that race-based differences
appear in approximately 10% of generated summaries, while gender-based
differences occur in only 1%. In the retrieval setting, all evaluated models
display non-uniform selection patterns across demographic groups and exhibit
high sensitivity to both gender and race-based perturbations. Surprisingly,
retrieval models demonstrate comparable sensitivity to non-demographic changes,
suggesting that fairness issues may stem, in part, from general brittleness
issues. Overall, our results indicate that LLM-based hiring systems, especially
at the retrieval stage, can exhibit notable biases that lead to discriminatory
outcomes in real-world contexts.
|
2501.04319
|
VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated
Learning
|
cs.CR cs.DC cs.ET cs.LG
|
Blockchain-based Federated Learning (BFL) is an emerging decentralized machine
learning paradigm that enables model training without relying on a central
server. Although some BFL frameworks are considered privacy-preserving, they
are still vulnerable to various attacks, including inference and model
poisoning. Additionally, most of these solutions employ strong trust
assumptions among all participating entities or introduce incentive mechanisms
to encourage collaboration, making them susceptible to multiple security flaws.
This work presents VerifBFL, a trustless, privacy-preserving, and verifiable
federated learning framework that integrates blockchain technology and
cryptographic protocols. By employing zero-knowledge Succinct Non-Interactive
Argument of Knowledge (zk-SNARKs) and incrementally verifiable computation
(IVC), VerifBFL ensures the verifiability of both local training and
aggregation processes. The proofs of training and aggregation are verified
on-chain, guaranteeing the integrity and auditability of each participant's
contributions. To protect training data from inference attacks, VerifBFL
leverages differential privacy. Finally, to demonstrate the efficiency of the
proposed protocols, we built a proof of concept using emerging tools. The
results show that generating proofs for local training and aggregation in
VerifBFL takes less than 81s and 2s, respectively, while verifying them
on-chain takes less than 0.6s.
|
2501.04322
|
Eve: Efficient Multimodal Vision Language Models with Elastic Visual
Experts
|
cs.CV
|
Multimodal vision language models (VLMs) have made significant progress with
the support of continuously increasing model sizes and data volumes. Running
VLMs on edge devices has become a challenge for their widespread application.
There are several efficient VLM efforts, but they often sacrifice linguistic
capabilities to enhance multimodal abilities, or require extensive training. To
address this quandary, we introduce the innovative framework of Efficient Vision
Language Models with Elastic Visual Experts (Eve). By strategically
incorporating adaptable visual expertise at multiple stages of training, Eve
strikes a balance between preserving linguistic abilities and augmenting
multimodal capabilities. This balanced approach results in a versatile model
with only 1.8B parameters that delivers significant improvements in both
multimodal and linguistic tasks. Notably, among configurations below 3B
parameters, Eve distinctly outperforms on language benchmarks and achieves a
state-of-the-art result of 68.87% on VLM benchmarks. Additionally, its multimodal
accuracy outstrips that of the larger 7B LLaVA-1.5 model. Our code is available
at https://github.com/rangmiao/Eve.
|
2501.04323
|
Navigating the Designs of Privacy-Preserving Fine-tuning for Large
Language Models
|
cs.LG cs.CR
|
Instruction tuning has proven effective in enhancing Large Language Models'
(LLMs) performance on downstream tasks. However, real-world fine-tuning faces
inherent conflicts between model providers' intellectual property protection,
clients' data privacy requirements, and tuning costs. While recent approaches
like split learning and offsite tuning demonstrate promising architectures for
privacy-preserving fine-tuning, there is a gap in systematically addressing the
multidimensional trade-offs required for diverse real-world deployments. We
propose several indicative evaluation metrics to guide design trade-offs for
privacy-preserving fine-tuning and a series of example designs, collectively
named GuardedTuning; they result from novel combinations of system
architectures with adapted privacy-enhancement methods and emerging computation
techniques. Each design represents distinct trade-offs across model utility,
privacy guarantees, and costs. Experimental results demonstrate that these
designs protect against data reconstruction attacks while maintaining
competitive fine-tuning performance.
|
2501.04325
|
Edit as You See: Image-guided Video Editing via Masked Motion Modeling
|
cs.CV
|
Recent advancements in diffusion models have significantly facilitated
text-guided video editing. However, there is a relative scarcity of research on
image-guided video editing, a method that empowers users to edit videos by
merely indicating a target object in the initial frame and providing an RGB
image as reference, without relying on the text prompts. In this paper, we
propose a novel Image-guided Video Editing Diffusion model, termed IVEDiff for
the image-guided video editing. IVEDiff is built on top of image editing
models, and is equipped with learnable motion modules to maintain the temporal
consistency of edited video. Inspired by self-supervised learning concepts, we
introduce a masked motion modeling fine-tuning strategy that empowers the
motion module's capabilities for capturing inter-frame motion dynamics, while
preserving the capabilities for intra-frame semantic correlations modeling of
the base image editing model. Moreover, an optical-flow-guided motion reference
network is proposed to ensure the accurate propagation of information between
edited video frames, alleviating the misleading effects of invalid information.
We also construct a benchmark to facilitate further research. The comprehensive
experiments demonstrate that our method is able to generate temporally smooth
edited videos while robustly dealing with various editing objects with high
quality.
|
2501.04328
|
Lower Bound on the Error Rate of Genie-Aided Lattice Decoding
|
cs.IT math.IT
|
A genie-aided decoder for finite dimensional lattice codes is considered. The
decoder may exhaustively search through all possible scaling factors $\alpha
\in \mathbb{R}$. We show that this decoder can achieve lower word error rate
(WER) than the one-shot decoder using $\alpha_{MMSE}$ as a scaling factor. A
lower bound on the WER for the decoder is found by considering the covering
sphere of the lattice Voronoi region. The proposed decoder and the bound are
valid for both power-constrained lattice codes and lattices. If the genie is
applied at the decoder, the E8 lattice code has a 0.5 dB gain and the BW16
lattice code has a 0.4 dB gain at a WER of $10^{-4}$ compared with the
one-shot decoder using
$\alpha_{MMSE}$. A method for estimating the WER of the decoder is provided by
considering the effective sphere of the lattice Voronoi region, which shows an
accurate estimate for E8 and BW16 lattice codes. In the case of per-dimension
power $P \rightarrow \infty$, an asymptotic expression of the bound is given in
a closed form. A practical implementation of a simplified decoder is given by
considering a CRC-embedded $n=128$ polar code lattice.
|
2501.04329
|
An Efficient Adaptive Compression Method for Human Perception and
Machine Vision Tasks
|
cs.CV
|
While most existing neural image compression (NIC) and neural video
compression (NVC) methodologies have achieved remarkable success, their
optimization is primarily focused on human visual perception. However, with the
rapid development of artificial intelligence, many images and videos will be
used for various machine vision tasks. Consequently, such existing compression
methodologies cannot achieve competitive performance in machine vision. In this
work, we introduce an efficient adaptive compression (EAC) method tailored for
both human perception and multiple machine vision tasks. Our method involves
two key modules: 1) an adaptive compression mechanism that adaptively selects
several subsets from latent features to balance the optimizations for multiple
machine vision tasks (e.g., segmentation and detection) and human vision; and
2) a task-specific adapter that uses the parameter-efficient delta-tuning
strategy to stimulate the comprehensive downstream analytical networks for
specific machine vision tasks. By using the above two modules, we can optimize
the bit-rate costs and improve machine vision performance. In general, our
proposed EAC can seamlessly integrate with existing NIC (i.e., Ball\'e2018, and
Cheng2020) and NVC (i.e., DVC, and FVC) methods. Extensive evaluation on
various benchmark datasets (i.e., VOC2007, ILSVRC2012, VOC2012, COCO, UCF101,
and DAVIS) shows that our method enhances performance for multiple machine
vision tasks while maintaining the quality of human vision.
|
2501.04331
|
AutoDFL: A Scalable and Automated Reputation-Aware Decentralized
Federated Learning
|
cs.DC cs.CR cs.ET cs.LG
|
Blockchained federated learning (BFL) combines the concepts of federated
learning and blockchain technology to enhance privacy, security, and
transparency in collaborative machine learning models. However, implementing
BFL frameworks poses challenges in terms of scalability and cost-effectiveness.
Reputation-aware BFL poses even more challenges, as blockchain validators are
tasked with processing federated learning transactions along with the
transactions that evaluate FL tasks and aggregate reputations. This congests
the blockchain more quickly and degrades performance. To improve BFL
efficiency while increasing scalability and reducing on-chain reputation
management costs, this paper proposes AutoDFL, a scalable and automated
reputation-aware decentralized federated learning framework. AutoDFL leverages
zk-Rollups as a Layer-2 scaling solution to boost the performance while
maintaining the same level of security as the underlying Layer-1 blockchain.
Moreover, AutoDFL introduces an automated and fair reputation model designed to
incentivize federated learning actors. We develop a proof of concept of our
framework to enable an accurate evaluation. Tested with various custom workloads,
AutoDFL reaches an average throughput of over 3000 TPS with a gas reduction of
up to 20X.
|
2501.04336
|
Building a Mind Palace: Structuring Environment-Grounded Semantic Graphs
for Effective Long Video Analysis with LLMs
|
cs.CV
|
Long-form video understanding with Large Vision Language Models is challenged
by the need to analyze temporally dispersed yet spatially concentrated key
moments within limited context windows. In this work, we introduce
VideoMindPalace, a new framework inspired by the "Mind Palace", which organizes
critical video moments into a topologically structured semantic graph.
VideoMindPalace organizes key information through (i) hand-object tracking and
interaction, (ii) clustered activity zones representing specific areas of
recurring activities, and (iii) environment layout mapping, allowing natural
language parsing by LLMs to provide grounded insights on spatio-temporal and 3D
context. In addition, we propose the Video MindPalace Benchmark (VMB), to
assess human-like reasoning, including spatial localization, temporal
reasoning, and layout-aware sequential understanding. Evaluated on VMB and
established video QA datasets, including EgoSchema, NExT-QA, IntentQA, and the
Active Memories Benchmark, VideoMindPalace demonstrates notable gains in
spatio-temporal coherence and human-aligned reasoning, advancing long-form
video analysis capabilities in VLMs.
|
2501.04339
|
DCIts -- Deep Convolutional Interpreter for time series
|
stat.ML cs.LG physics.app-ph
|
We introduce an interpretable deep learning model for multivariate time
series forecasting that prioritizes both predictive performance and
interpretability - key requirements for understanding complex physical
phenomena. Our model not only matches but often surpasses existing
interpretability methods, achieving this without compromising accuracy. Through
extensive experiments, we demonstrate its ability to identify the most relevant
time series and lags that contribute to forecasting future values, providing
intuitive and transparent explanations for its predictions. To minimize the
need for manual supervision, the model is designed so one can robustly
determine the optimal window size that captures all necessary interactions
within the smallest possible time frame. Additionally, it effectively
identifies the optimal model order, balancing complexity when incorporating
higher-order terms. These advancements hold significant implications for
modeling and understanding dynamic systems, making the model a valuable tool
for applied and computational physicists.
|
2501.04340
|
On Domain Decomposition for Magnetostatic Problems in 3D
|
cs.CE cs.NA math.NA
|
The simulation of three dimensional magnetostatic problems plays an important
role, for example when simulating synchronous electric machines. Building on
prior work that developed a domain decomposition algorithm using isogeometric
analysis, this paper extends the method to support subdomains composed of
multiple patches. This extension enables load-balancing across available CPUs,
facilitated by graph partitioning tools such as METIS. The proposed approach
enhances scalability and flexibility, making it suitable for large-scale
simulations in diverse industrial contexts.
|
2501.04341
|
Understanding Before Reasoning: Enhancing Chain-of-Thought with
Iterative Summarization Pre-Prompting
|
cs.CL
|
Chain-of-Thought (CoT) Prompting is a dominant paradigm in Large Language
Models (LLMs) to enhance complex reasoning. It guides LLMs to present
multi-step reasoning, rather than generating the final answer directly.
However, CoT encounters difficulties when key information required for
reasoning is implicit or missing. This occurs because CoT emphasizes the
sequence of reasoning steps while overlooking the early extraction of essential
information. We propose a pre-prompting method called Iterative Summarization
Pre-Prompting (ISP^2) to refine LLM reasoning when key information is not
explicitly provided. First, entities and their corresponding descriptions are
extracted to form potential key information pairs. Next, we use a reliability
rating to assess these pairs, then merge the two lowest-ranked pairs into a new
entity description. This process is repeated until a unique key information
pair is obtained. Finally, that pair, along with the original question, is fed
into LLMs to produce the answer. Extensive experiments demonstrate a 7.1%
improvement compared to existing methods. Unlike traditional prompting, ISP^2
adopts an inductive approach with pre-prompting, offering flexible integration
into diverse reasoning frameworks. The code is available at
https://github.com/zdhgreat/ISP-2.
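The iterative merging loop described above can be sketched directly; `summarize` and `rate` stand in for the LLM calls and are hypothetical stubs of ours:

```python
def isp2_preprompt(pairs, summarize, rate):
    """Sketch of the Iterative Summarization Pre-Prompting loop: repeatedly
    merge the two lowest-reliability (entity, description) pairs until a
    unique key-information pair remains. `rate` scores a pair's
    reliability; `summarize` merges two pairs into a new one."""
    pairs = list(pairs)
    while len(pairs) > 1:
        pairs.sort(key=rate)                # lowest reliability first
        a, b = pairs.pop(0), pairs.pop(0)   # take the two lowest-ranked pairs
        pairs.append(summarize(a, b))       # merge into a new entity description
    return pairs[0]                         # the unique key-information pair

# Toy stubs: reliability = description length, merging = concatenation.
key_pair = isp2_preprompt(
    [("Alice", "a teacher"), ("Bob", "a student"), ("school", "where they meet")],
    summarize=lambda a, b: (a[0] + "+" + b[0], a[1] + "; " + b[1]),
    rate=lambda p: len(p[1]),
)
```

The resulting pair, concatenated with the original question, forms the prompt that is finally sent to the LLM.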
|
2501.04343
|
TimelineKGQA: A Comprehensive Question-Answer Pair Generator for
Temporal Knowledge Graphs
|
cs.LO cs.AI cs.CL
|
Question answering over temporal knowledge graphs (TKGs) is crucial for
understanding evolving facts and relationships, yet its development is hindered
by limited datasets and difficulties in generating custom QA pairs. We propose
a novel categorization framework based on timeline-context relationships, along
with \textbf{TimelineKGQA}, a universal temporal QA generator applicable to any
TKGs. The code is available at: \url{https://github.com/PascalSun/TimelineKGQA}
as an open source Python package.
|
2501.04347
|
Keyword Search in the Deep Web
|
cs.DB
|
The Deep Web is constituted by data that are accessible through Web pages,
but not readily indexable by search engines as they are returned in dynamic
pages. In this paper we propose a conceptual framework for answering keyword
queries on Deep Web sources represented as relational tables with so-called
access limitations. We formalize the notion of optimal answer, characterize
queries for which an answer can be found, and present a method for query
processing based on the construction of a query plan that minimizes the
accesses to the data sources.
|
2501.04352
|
Online Gaussian Test-Time Adaptation of Vision-Language Models
|
cs.CV
|
Online test-time adaptation (OTTA) of vision-language models (VLMs) has
recently garnered increased attention to take advantage of data observed along
a stream to improve future predictions. Unfortunately, existing methods rely on
dataset-specific hyperparameters, significantly limiting their adaptability to
unseen tasks. In response, we propose Online Gaussian Adaptation (OGA), a novel
method that models the likelihoods of visual features using Gaussian
distributions and incorporates zero-shot priors into an interpretable Maximum A
Posteriori (MAP) estimation framework with fixed hyper-parameters across all
datasets. We demonstrate that OGA outperforms state-of-the-art methods on most
datasets and runs. Additionally, we show that combining OTTA with popular
few-shot techniques (a practical yet overlooked setting in prior research) is
highly beneficial. Furthermore, our experimental study reveals that common OTTA
evaluation protocols, which average performance over at most three runs per
dataset, are inadequate due to the substantial variability observed across runs
for all OTTA methods. Therefore, we advocate for more rigorous evaluation
practices, including increasing the number of runs and considering additional
quantitative metrics, such as our proposed Expected Tail Accuracy (ETA),
calculated as the average accuracy in the worst 10% of runs. We hope these
contributions will encourage more rigorous and diverse evaluation practices in
the OTTA community. Code is available at https://github.com/cfuchs2023/OGA .
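The proposed Expected Tail Accuracy metric is simple to compute; a sketch following the abstract's definition (how the tail size is rounded is our assumption):

```python
import math

def expected_tail_accuracy(run_accuracies, tail_frac=0.10):
    """Expected Tail Accuracy (ETA): the mean accuracy over the worst
    `tail_frac` fraction of runs (worst 10% by default). We round the
    tail size up so it is never empty."""
    n_tail = max(1, math.ceil(len(run_accuracies) * tail_frac))
    worst = sorted(run_accuracies)[:n_tail]
    return sum(worst) / n_tail
```

Unlike the mean over runs, this statistic penalizes methods whose occasional bad runs would be averaged away under the common three-run protocol.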
|
2501.04353
|
DeFusion: An Effective Decoupling Fusion Network for Multi-Modal
Pregnancy Prediction
|
cs.CV cs.LG
|
Temporal embryo images and parental fertility table indicators are both
valuable for pregnancy prediction in \textbf{in vitro fertilization embryo
transfer} (IVF-ET). However, current machine learning models cannot make full
use of the complementary information between the two modalities to improve
pregnancy prediction performance. In this paper, we propose a Decoupling Fusion
Network called DeFusion to effectively integrate the multi-modal information
for IVF-ET pregnancy prediction. Specifically, we propose a decoupling fusion
module that decouples the information from the different modalities into
related and unrelated information, thereby achieving a more refined fusion.
We fuse temporal embryo images with a spatial-temporal position encoding,
and extract fertility table indicator information with a table transformer. To
evaluate the effectiveness of our model, we use a new dataset including 4046
cases collected from Southern Medical University. The experiments show that our
model outperforms state-of-the-art methods. Meanwhile, the performance on the
eye disease prediction dataset reflects the model's good generalization. Our
code and dataset are available at https://github.com/Ou-Young-1999/DFNet.
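The decoupling idea can be illustrated with a toy sketch: each modality embedding is projected into a "related" and an "unrelated" part, an orthogonality-style penalty keeps the parts distinct, and only then are the parts fused. All dimensions, projections, and the fusion rule below are hypothetical illustrations, not the actual DeFusion architecture:

```python
import numpy as np

def decouple(x, w_rel, w_unr):
    """Project one modality's embedding into 'related' and 'unrelated' parts."""
    return x @ w_rel, x @ w_unr

def orthogonality_penalty(rel, unr):
    """Squared cosine between the two parts; minimizing it encourages
    them to carry distinct information."""
    num = float(rel @ unr)
    den = np.linalg.norm(rel) * np.linalg.norm(unr) + 1e-8
    return (num / den) ** 2

def fuse(img_rel, tab_rel, img_unr, tab_unr):
    """Toy fusion: combine the cross-modal 'related' parts, keep each
    modality's 'unrelated' part as a residual."""
    return np.concatenate([img_rel + tab_rel, img_unr, tab_unr])

rng = np.random.default_rng(0)
d_img, d_tab, d = 64, 32, 16                 # hypothetical embedding sizes
img, tab = rng.normal(size=d_img), rng.normal(size=d_tab)
img_rel, img_unr = decouple(img, rng.normal(size=(d_img, d)), rng.normal(size=(d_img, d)))
tab_rel, tab_unr = decouple(tab, rng.normal(size=(d_tab, d)), rng.normal(size=(d_tab, d)))
fused = fuse(img_rel, tab_rel, img_unr, tab_unr)
print(fused.shape)  # (48,)
```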
|
2501.04359
|
Decoding EEG Speech Perception with Transformers and VAE-based Data
Augmentation
|
eess.AS cs.CL cs.HC cs.LG cs.SD
|
Decoding speech from non-invasive brain signals, such as
electroencephalography (EEG), has the potential to advance brain-computer
interfaces (BCIs), with applications in silent communication and assistive
technologies for individuals with speech impairments. However, EEG-based speech
decoding faces major challenges, such as noisy data, limited datasets, and poor
performance on complex tasks like speech perception. This study attempts to
address these challenges by employing variational autoencoders (VAEs) for EEG
data augmentation to improve data quality and applying a state-of-the-art
(SOTA) sequence-to-sequence deep learning architecture, originally successful
in electromyography (EMG) tasks, to EEG-based speech decoding. Additionally, we
adapt this architecture for word classification tasks. Using the Brennan
dataset, which contains EEG recordings of subjects listening to narrated
speech, we preprocess the data and evaluate both classification and
sequence-to-sequence models for EEG-to-words/sentences tasks. Our experiments
show that VAEs have the potential to reconstruct artificial EEG data for
augmentation. Meanwhile, our sequence-to-sequence model achieves more promising
performance in generating sentences compared to our classification model,
though both remain challenging tasks. These findings lay the groundwork for
future research on EEG speech perception decoding, with possible extensions to
speech production tasks such as silent or imagined speech.
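The VAE augmentation step can be sketched as sampling extra latent codes around each trial's encoded posterior via the reparameterization trick; the latent dimensionality and the omission of a trained encoder/decoder are simplifying assumptions here:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def augment(latents_mu, latents_log_var, n_copies=3):
    """Draw synthetic latent codes around each real EEG trial's posterior;
    a trained decoder (not shown) would map them back to artificial EEG epochs."""
    return np.stack([reparameterize(latents_mu, latents_log_var)
                     for _ in range(n_copies)])

# Hypothetical: 10 EEG trials already encoded into a 32-d latent space.
mu = rng.normal(size=(10, 32))
log_var = rng.normal(scale=0.1, size=(10, 32))
synthetic = augment(mu, log_var, n_copies=3)
print(synthetic.shape)  # (3, 10, 32)
```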
|
2501.04361
|
A Unified Framework for Foreground and Anonymization Area Segmentation
in CT and MRI Data
|
eess.IV cs.CV
|
This study presents an open-source toolkit to address critical challenges in
preprocessing data for self-supervised learning (SSL) for 3D medical imaging,
focusing on data privacy and computational efficiency. The toolkit comprises
two main components: a segmentation network that delineates foreground regions
to optimize data sampling and thus reduce training time, and a segmentation
network that identifies anonymized regions, preventing erroneous supervision in
reconstruction-based SSL methods. Experimental results demonstrate high
robustness, with mean Dice scores exceeding 98.5 across all anonymization
methods and surpassing 99.5 for foreground segmentation tasks, highlighting the
efficacy of the toolkit in supporting SSL applications in 3D medical imaging
for both CT and MRI images. The weights and code are available at
https://github.com/MIC-DKFZ/Foreground-and-Anonymization-Area-Segmentation.
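The reported Dice scores can be reproduced for any predicted/reference mask pair with the standard Dice coefficient (here in percent, matching the abstract's scale); the toy masks below are illustrative only:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks, in percent."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 100.0 * 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4)); a[1:3, 1:3] = 1   # 4 foreground voxels
b = np.zeros((4, 4)); b[1:3, 1:4] = 1   # 6 foreground voxels, 4 overlapping
print(round(dice_score(a, b), 1))  # 80.0
```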
|
2501.04364
|
An innovative data collection method to eliminate the preprocessing
phase in web usage mining
|
cs.IR
|
The underlying data source for web usage mining (WUM) is commonly assumed to
be server logs. However, access log files provide only limited data about the
clients. Identifying sessions from this messy data takes considerable effort,
and the operations performed for this purpose do not always yield good
results. Moreover, this data cannot be used efficiently for web analytics. This
study proposes an innovative method for user tracking, session management, and
collecting web usage data. The method is based on a new approach that uses
the data collected for web analytics as the data source for web usage mining.
An application-based API, following a different strategy from conventional
client-side methods, has been developed to obtain and process the log data.
The log data has been successfully gathered by integrating the technique into
an enterprise web application. The results reveal that the homogeneous,
structured data collected and stored with this method is more convenient to
browse, filter, and process than web server logs. This data, stored in a
relational database, can be used effortlessly as a reliable data source for
high-performance web usage mining activity, real-time web analytics, or a
functional recommendation system.
|
2501.04366
|
DispFormer: Pretrained Transformer for Flexible Dispersion Curve
Inversion from Global Synthesis to Regional Applications
|
physics.geo-ph cs.AI
|
Surface wave dispersion curve inversion is essential for estimating
subsurface Shear-wave velocity ($v_s$), yet traditional methods often struggle
to balance computational efficiency with inversion accuracy. While deep
learning approaches show promise, previous studies typically require large
amounts of labeled data and struggle with real-world datasets that have varying
period ranges, missing data, and low signal-to-noise ratios. This study
proposes DispFormer, a transformer-based neural network for inverting the $v_s$
profile from Rayleigh-wave phase and group dispersion curves. DispFormer
processes dispersion data at each period independently, thereby allowing it to
handle data of varying lengths without requiring network modifications or
alignment between training and testing data. The performance is demonstrated by
pre-training it on a global synthetic dataset and testing it on two regional
synthetic datasets using zero-shot and few-shot strategies. Results indicate
that zero-shot DispFormer, even without any labeled data, produces inversion
profiles that match well with the ground truth, providing a deployable initial
model generator to assist traditional methods. When labeled data is available,
few-shot DispFormer outperforms traditional methods with only a small number of
labels. Furthermore, real-world tests indicate that DispFormer effectively
handles varying-length data and yields lower data residuals than reference
models. These findings demonstrate that DispFormer provides a robust foundation
model for dispersion curve inversion and is a promising approach for broader
applications.
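The per-period, variable-length design can be sketched as tokenizing each period independently and carrying a validity mask for missing entries, so curves of any length pass through the same network; the projection and dimensions below are hypothetical stand-ins, not DispFormer's actual layers:

```python
import numpy as np

def embed_periods(periods, phase_vel, group_vel, d_model=8):
    """Embed each period's (T, c_phase, c_group) triple independently
    (a toy stand-in for per-period tokenization); missing values are
    masked out rather than imputed."""
    x = np.stack([periods, phase_vel, group_vel], axis=-1)  # (n_periods, 3)
    mask = ~np.isnan(x).any(axis=-1)                        # valid-period mask
    x = np.nan_to_num(x)                                    # zero-fill masked entries
    rng = np.random.default_rng(0)
    w = rng.normal(size=(3, d_model))                       # hypothetical projection
    tokens = x @ w                                          # (n_periods, d_model)
    return tokens, mask

# A curve of any length needs no network change, only a mask for gaps.
t_a = np.array([5.0, 10.0, 20.0])
tok_a, m_a = embed_periods(t_a, np.array([3.1, 3.3, np.nan]),
                           np.array([2.9, 3.0, 3.2]))
print(tok_a.shape, m_a.tolist())  # (3, 8) [True, True, False]
```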
|