| id | title | categories | abstract |
|---|---|---|---|
2412.20916 | Low-Light Image Enhancement via Generative Perceptual Priors | cs.CV | Although significant progress has been made in enhancing visibility,
retrieving texture details, and mitigating noise in Low-Light (LL) images,
applying current Low-Light Image Enhancement (LLIE) methods to real-world
scenarios remains challenging, primarily due to the diverse illumination
conditions encountered. Furthermore, generating enhancements that are visually
realistic and attractive remains underexplored. In
response to these challenges, we introduce a novel \textbf{LLIE} framework with
the guidance of \textbf{G}enerative \textbf{P}erceptual \textbf{P}riors
(\textbf{GPP-LLIE}) derived from vision-language models (VLMs). Specifically,
we first propose a pipeline that guides VLMs to assess multiple visual
attributes of the LL image and quantify the assessment to output the global and
local perceptual priors. Subsequently, to incorporate these generative
perceptual priors to benefit LLIE, we introduce a transformer-based backbone in
the diffusion process, and develop a new layer normalization
(\textit{\textbf{GPP-LN}}) and an attention mechanism
(\textit{\textbf{LPP-Attn}}) guided by global and local perceptual priors.
Extensive experiments demonstrate that our model outperforms current SOTA
methods on paired LL datasets and exhibits superior generalization on
real-world data. The code is released at
\url{https://github.com/LowLevelAI/GPP-LLIE}.
|
2412.20918 | Uncertainty-Aware Out-of-Distribution Detection with Gaussian Processes | stat.ML cs.LG | Deep neural networks (DNNs) are often constructed under the closed-world
assumption and may therefore fail to generalize to out-of-distribution (OOD)
data. This leads DNNs to produce overconfident wrong predictions, which can
have disastrous consequences in safety-critical applications. Existing OOD detection
methods mainly rely on curating a set of OOD data for model training or
hyper-parameter tuning to distinguish OOD data from training data (also known
as in-distribution data or InD data). However, OOD samples are not always
available during the training phase in real-world applications, limiting
OOD detection accuracy. To overcome this limitation, we propose a
Gaussian-process-based OOD detection method to establish a decision boundary
based on InD data only. The basic idea is to perform uncertainty quantification
of the unconstrained softmax scores of a DNN via a multi-class Gaussian process
(GP), and then define a score function to separate InD and potential OOD data
based on their fundamental differences in the posterior predictive distribution
from the GP. Two case studies on conventional image classification datasets and
real-world image datasets are conducted to demonstrate that the proposed method
outperforms the state-of-the-art OOD detection methods when OOD samples are not
observed in the training phase.
|
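A minimal sketch of the core idea in the abstract above (2412.20918): fit one Gaussian process per class on in-distribution data only and flag inputs with high posterior predictive variance as potential OOD. The feature/logit interface and the mean-variance score below are our assumptions; the paper's exact multi-class GP and score function may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_gps_on_ind(features_ind, logits_ind):
    """Fit one GP per class on in-distribution (InD) features -> logits."""
    gps = []
    for c in range(logits_ind.shape[1]):
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(features_ind, logits_ind[:, c])
        gps.append(gp)
    return gps

def ood_score(gps, features_test):
    """Average posterior predictive variance; higher suggests OOD."""
    stds = np.stack([gp.predict(features_test, return_std=True)[1] for gp in gps], axis=1)
    return (stds ** 2).mean(axis=1)
```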
2412.20920 | Channel Charting-assisted Non-orthogonal Pilot Allocation for Uplink
XL-MIMO Transmission | cs.IT eess.SP math.IT | Extremely large-scale multiple-input multiple-output (XL-MIMO) is critical to
future wireless networks. The substantial increase in the number of base
station (BS) antennas introduces near-field propagation effects in the wireless
channels, complicating channel parameter estimation and increasing pilot
overhead. Channel charting (CC) has emerged as a potent unsupervised technique
to effectively harness varying high-dimensional channel statistics to enable
non-orthogonal pilot assignment and reduce pilot overhead. In this paper, we
investigate near-field channel estimation with reduced pilot overhead by
developing a CC-assisted pilot scheduling scheme. To this end, we introduce a
polar-domain codebook to capture the power distribution of near-field XL-MIMO
channels. The CC-assisted approach uses such features as inputs to enable an
effective low-dimensional mapping of the inherent correlation patterns in
near-field user terminal (UT) channels. Building upon the mapped channel
correlations, we further propose a near-field CC-assisted pilot allocation
(NCC-PA) algorithm, which efficiently enhances channel orthogonality among
pilot-reusing UTs. Numerical results confirm that the NCC-PA algorithm
substantially elevates the wireless transmission performance, offering a marked
improvement over the conventional far-field CC-PA approach.
|
2412.20924 | HisynSeg: Weakly-Supervised Histopathological Image Segmentation via
Image-Mixing Synthesis and Consistency Regularization | cs.CV cs.AI | Tissue semantic segmentation is one of the key tasks in computational
pathology. To avoid the expensive and laborious acquisition of pixel-level
annotations, a wide range of studies attempt to adopt the class activation map
(CAM), a weakly-supervised learning scheme, to achieve pixel-level tissue
segmentation. However, CAM-based methods are prone to
under-activation and over-activation issues, leading to poor segmentation
performance. To address this problem, we propose a novel weakly-supervised
semantic segmentation framework for histopathological images based on
image-mixing synthesis and consistency regularization, dubbed HisynSeg.
Specifically, synthesized histopathological images with pixel-level masks are
generated for fully-supervised model training, where two synthesis strategies
are proposed based on Mosaic transformation and B\'ezier mask generation.
Besides, an image filtering module is developed to guarantee the authenticity
of the synthesized images. To further prevent the model from overfitting to
occasional synthesis artifacts, we additionally propose a novel
self-supervised consistency regularization, which enables the real images
without segmentation masks to supervise the training of the segmentation model.
By integrating the proposed techniques, the HisynSeg framework successfully
transforms the weakly-supervised semantic segmentation problem into a
fully-supervised one, greatly improving the segmentation accuracy. Experimental
results on three datasets show that the proposed method achieves
state-of-the-art performance. Code is available at
https://github.com/Vison307/HisynSeg.
|
2412.20925 | Active Learning with Variational Quantum Circuits for Quantum Process
Tomography | quant-ph cs.LG | Quantum process tomography (QPT), used for reconstruction of an unknown
quantum process from measurement data, is a fundamental tool for the diagnostic
and full characterization of quantum systems. It relies on querying a set of
quantum states as input to the quantum process. Previous works commonly use a
straightforward strategy to select a set of quantum states randomly,
overlooking differences in informativeness among quantum states. Since querying
the quantum system requires multiple experiments that can be prohibitively
costly, there are typically not enough quantum states for high-quality
reconstruction. In this paper, we propose a general framework for
active learning (AL) to adaptively select a set of informative quantum states
that improves the reconstruction most efficiently. In particular, we introduce
a learning framework that leverages the widely-used variational quantum
circuits (VQCs) to perform the QPT task and integrate our AL algorithms into
the query step. We design and evaluate three types of AL algorithms:
committee-based, uncertainty-based, and diversity-based, each exhibiting
distinct advantages in terms of performance and computational cost.
Additionally, we provide a guideline for selecting algorithms suitable for
different scenarios. Numerical results demonstrate that our algorithms achieve
significantly improved reconstruction compared to the baseline method that
selects a set of quantum states randomly. Moreover, these results suggest that
active-learning-based approaches are applicable to other complex learning
tasks in large-scale quantum information processing.
|
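A schematic of the committee-based query step described above (2412.20925), reduced to its statistical core: pick the candidate input state on which an ensemble of models disagrees most. The `committee_predict` callables and the variance-based disagreement measure are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def committee_query(committee_predict, candidate_states):
    """Return the index of the candidate the committee disagrees on most."""
    # preds: (n_models, n_candidates, output_dim)
    preds = np.stack([predict(candidate_states) for predict in committee_predict])
    disagreement = preds.var(axis=0).mean(axis=-1)  # variance across models
    return int(np.argmax(disagreement))
```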
2412.20927 | Enhanced Multimodal RAG-LLM for Accurate Visual Question Answering | cs.CV | Multimodal large language models (MLLMs), such as GPT-4o, Gemini, LLaVA, and
Flamingo, have made significant progress in integrating visual and textual
modalities, excelling in tasks like visual question answering (VQA), image
captioning, and content retrieval. They can generate coherent and contextually
relevant descriptions of images. However, they still face challenges in
accurately identifying and counting objects and determining their spatial
locations, particularly in complex scenes with overlapping or small objects. To
address these limitations, we propose a novel framework based on multimodal
retrieval-augmented generation (RAG), which introduces structured scene graphs
to enhance object recognition, relationship identification, and spatial
understanding within images. Our framework improves the MLLM's capacity to
handle tasks requiring precise visual descriptions, especially in scenarios
with challenging perspectives, such as aerial views or scenes with dense object
arrangements. Finally, we conduct extensive experiments on the VG-150 dataset
that focuses on first-person visual understanding and the AUG dataset that
involves aerial imagery. The results show that our approach consistently
outperforms existing MLLMs in VQA tasks, which stands out in recognizing,
localizing, and quantifying objects in different spatial contexts and provides
more accurate visual descriptions.
|
2412.20934 | Optimal Diffusion Processes | math.PR cs.IT math.IT | Among stochastic differential equations, diffusion processes have been adopted
in numerous applications as particularly relevant and flexible models. This paper
studies diffusion processes in a different setting, where for a given
stationary distribution and average variance, it seeks the diffusion process
with optimal convergence rate. It is shown that the optimal drift function is a
linear function and the convergence rate of the stochastic process is bounded
by the ratio of the average variance to the variance of the stationary
distribution. Furthermore, the concavity of the optimal relaxation time as a
function of the stationary distribution is proven, and it is shown that all
Pearson diffusion processes of the hypergeometric type whose variance functions
are polynomials of degree at most two are optimal.
|
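The setting described above (2412.20934), restated in notation of our own choosing; symbols and any constant factors in the rate bound may differ from the paper's.

```latex
% Notation is ours; constant factors are omitted (see the abstract of 2412.20934).
\[
  dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t, \qquad X_\infty \sim p,
  \qquad \bar{\sigma}^2 := \mathbb{E}_p\!\left[\sigma^2(X)\right],
\]
\[
  \text{optimal drift: } b(x) = -\lambda\,(x - \mu_p),
  \qquad \text{rate bound: } \lambda \lesssim \frac{\bar{\sigma}^2}{\operatorname{Var}_p(X)}.
\]
```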
2412.20936 | Influence Maximization in Temporal Networks with Persistent and Reactive
Behaviors | cs.SI physics.comp-ph | Influence maximization in temporal social networks presents unique challenges
due to the dynamic interactions that evolve over time. Traditional diffusion
models often fall short in capturing the real-world complexities of
active-inactive transitions among nodes, obscuring the true behavior of
influence spread. In dynamic networks, nodes do not simply transition to an
active state once; rather, they can oscillate between active and inactive
states, with the potential for reactivation and reinforcement over time. This
reactivation allows previously influenced nodes to regain influence potency,
enhancing their ability to spread influence to others and amplifying the
overall diffusion process. Ignoring these transitions can thus conceal the
cumulative impact of influence, making it essential to account for them in any
effective diffusion model. To address these challenges, we introduce the
Continuous Persistent Susceptible-Infected Model with Reinforcement and
Re-activation (cpSI-R), which explicitly incorporates active-inactive
transitions, capturing the progressive reinforcement that makes nodes more
potent spreaders upon reactivation. This model naturally leads to a submodular
and monotone objective function, which supports efficient optimization for seed
selection in influence maximization tasks. Alongside cpSI-R, we propose an
efficient temporal snapshot sampling method, simplifying the analysis of
evolving networks. We then adapt the prior algorithms of seed selection to our
model and sampling strategy, resulting in reduced computational costs and
enhanced seed selection efficiency. Experimental evaluations on diverse
datasets demonstrate substantial improvements in performance over baseline
methods, underscoring the effectiveness of cpSI-R for real-world temporal
networks.
|
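Because the cpSI-R objective above (2412.20936) is stated to be monotone and submodular, the classic greedy algorithm with its $(1-1/e)$ approximation guarantee applies. A generic sketch, where `spread` is any callable estimating the expected influence of a seed set (e.g., via Monte Carlo simulation over temporal snapshots); the estimator itself is outside this sketch.

```python
def greedy_seed_selection(nodes, spread, k):
    """Greedily pick k seeds, each maximizing the marginal spread gain."""
    seeds = set()
    for _ in range(k):
        best = max(
            (n for n in nodes if n not in seeds),
            key=lambda n: spread(seeds | {n}) - spread(seeds),
        )
        seeds.add(best)
    return seeds
```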
2412.20942 | Ontology-grounded Automatic Knowledge Graph Construction by LLM under
Wikidata schema | cs.AI cs.IR | We propose an ontology-grounded approach to Knowledge Graph (KG) construction
using Large Language Models (LLMs) on a knowledge base. An ontology is authored
by generating Competency Questions (CQs) on the knowledge base to discover the
knowledge scope, extracting relations from the CQs, and attempting to replace
equivalent relations with their counterparts in Wikidata. To ensure consistency
and interpretability in the resulting KG, we ground the generation of the KG in
the authored ontology based on the extracted relations. Evaluation on benchmark
datasets demonstrates competitive performance on the knowledge graph
construction task. Our work presents a promising direction for a scalable KG
construction pipeline with minimal human intervention that yields high-quality and
human-interpretable KGs, which are interoperable with Wikidata semantics for
potential knowledge base expansion.
|
2412.20943 | Cluster-Based Time-Variant Channel Characterization and Modeling for
5G-Railways | cs.IT math.IT | With the development of high-speed railways, 5G for Railways (5G-R) is
gradually replacing the Global System for Mobile Communications for Railway
(GSM-R) worldwide to meet increasing demands. The large bandwidth, array
antennas, and non-stationarity caused by high mobility have made 5G-R channel
characterization more complex. Therefore, it is essential to develop an
accurate channel model for 5G-R. However, research on channel characterization
and time-variant models specific to 5G-R frequency bands and scenarios is
scarce. There are virtually no cluster-based time-variant channel models that
capture the statistical properties of the 5G-R channel. In this paper, we
propose a cluster-based time-variant channel model for 5G-R within an enhanced
3GPP framework, which incorporates time evolution features. Extensive channel
measurements are conducted on a 5G-R private network test line in China. We
then extract and analyze typical channel fading characteristics and multipath
cluster characteristics. Furthermore, the birth-death process of the clusters
is modeled using a four-state Markov chain. Finally, a generalized clustered
delay line (CDL) model is established in accordance with the 3GPP standard and
validated by comparing the results of measurements and simulations. This work
enhances the understanding of 5G-R channels and presents a flexible
cluster-based time-variant channel model. The results can be used in the
design, deployment, and optimization of 5G-R networks.
|
2412.20946 | Generalizing in Net-Zero Microgrids: A Study with Federated PPO and TRPO | cs.LG | This work addresses the challenge of optimal energy management in microgrids
through a collaborative and privacy-preserving framework. We propose the
FedTRPO methodology, which integrates Federated Learning (FL) and Trust Region
Policy Optimization (TRPO) to manage distributed energy resources (DERs)
efficiently. Using a customized version of the CityLearn environment and
synthetically generated data, we simulate designed net-zero energy scenarios
for microgrids composed of multiple buildings. Our approach emphasizes reducing
energy costs and carbon emissions while ensuring privacy. Experimental results
demonstrate that FedTRPO is comparable to state-of-the-art federated RL
methodologies without hyperparameter tuning. The proposed framework highlights
the feasibility of collaborative learning for achieving optimal control
policies in energy systems, advancing the goals of sustainable and efficient
smart grids.
|
2412.20953 | GASLITEing the Retrieval: Exploring Vulnerabilities in Dense
Embedding-based Search | cs.CR cs.CL | Dense embedding-based text retrieval$\unicode{x2013}$retrieval of relevant
passages from corpora via deep learning encodings$\unicode{x2013}$has emerged
as a powerful method attaining state-of-the-art search results and popularizing
the use of Retrieval Augmented Generation (RAG). Still, like other search
methods, embedding-based retrieval may be susceptible to search-engine
optimization (SEO) attacks, where adversaries promote malicious content by
introducing adversarial passages to corpora. To faithfully assess and gain
insights into the susceptibility of such systems to SEO, this work proposes the
GASLITE attack, a mathematically principled gradient-based search method for
generating adversarial passages without relying on the corpus content or
modifying the model. Notably, GASLITE's passages (1) carry adversary-chosen
information while (2) achieving high retrieval ranking for a selected query
distribution when inserted into corpora. We use GASLITE to extensively evaluate
retrievers' robustness, testing nine advanced models under varied threat
models, while focusing on realistic adversaries targeting queries on a specific
concept (e.g., a public figure). We found that GASLITE consistently
outperformed baselines by $\geq$140% in success rate across all settings. Notably,
adversaries using GASLITE require minimal effort to manipulate search
results$\unicode{x2013}$by injecting a negligible amount of adversarial
passages ($\leq$0.0001% of the corpus), they could make these passages visible in the
top-10 results for 61-100% of unseen concept-specific queries against most
evaluated models. Inspecting variance in retrievers' robustness, we identify
key factors that may contribute to models' susceptibility to SEO, including
specific properties in the embedding space's geometry.
|
2412.20960 | Rise of Generative Artificial Intelligence in Science | cs.CY cs.AI cs.IR | Generative Artificial Intelligence (GenAI, generative AI) has rapidly become
available as a tool in scientific research. To explore the use of generative AI
in science, we conduct an empirical analysis using OpenAlex. Analyzing GenAI
publications and other AI publications from 2017 to 2023, we profile growth
patterns, the diffusion of GenAI publications across fields of study, and the
geographical spread of scientific research on generative AI. We also
investigate team size and international collaborations to explore whether
GenAI, as an emerging scientific research area, shows different collaboration
patterns compared to other AI technologies. The results indicate that
generative AI has experienced rapid growth and increasing presence in
scientific publications. The use of GenAI now extends beyond computer science
to other scientific research domains. Over the study period, U.S. researchers
contributed nearly two-fifths of global GenAI publications. The U.S. is
followed by China, with several small and medium-sized advanced economies
demonstrating relatively high levels of GenAI deployment in their research
publications. Although scientific research overall is becoming increasingly
specialized and collaborative, our results suggest that GenAI research groups
tend to have slightly smaller team sizes than those in other AI fields.
Furthermore, notwithstanding recent geopolitical tensions, GenAI research
continues to exhibit levels of international collaboration comparable to other
AI technologies.
|
2412.20962 | Conservation-informed Graph Learning for Spatiotemporal Dynamics
Prediction | cs.LG cs.AI | Data-centric methods have shown great potential in understanding and
predicting spatiotemporal dynamics, enabling better design and control of the
object system. However, deep learning models often lack interpretability, fail
to obey intrinsic physics, and struggle to cope with varied domains. While
geometry-based methods, e.g., graph neural networks (GNNs), have been proposed
to further tackle these challenges, they still need to find the implicit
physical laws from large datasets and rely excessively on rich labeled data. In
this paper, we introduce the conservation-informed GNN (CiGNN), an
end-to-end explainable learning framework, to learn spatiotemporal dynamics
based on limited training data. The network is designed to conform to the
general conservation law via symmetry, where conservative and non-conservative
information passes over a multiscale space enhanced by a latent temporal
marching strategy. The efficacy of our model has been verified in various
spatiotemporal systems based on synthetic and real-world datasets, showing
superiority over baseline models. Results demonstrate that CiGNN exhibits
remarkable accuracy and generalizability, and is readily applicable to learning
and predicting various spatiotemporal dynamics in spatial domains with complex
geometry.
|
2412.20964 | Hierarchical Banzhaf Interaction for General Video-Language
Representation Learning | cs.CV | Multimodal representation learning, with contrastive learning, plays an
important role in the artificial intelligence domain. As an important subfield,
video-language representation learning focuses on learning representations
using global semantic interactions between pre-defined video-text pairs.
However, to enhance and refine such coarse-grained global interactions, more
detailed interactions are necessary for fine-grained multimodal learning. In
this study, we introduce a new approach that models video and text as game players
using multivariate cooperative game theory to handle uncertainty during
fine-grained semantic interactions with diverse granularity, flexible
combination, and vague intensity. Specifically, we design the Hierarchical
Banzhaf Interaction to simulate the fine-grained correspondence between video
clips and textual words from hierarchical perspectives. Furthermore, to
mitigate the bias in calculations within Banzhaf Interaction, we propose
reconstructing the representation through a fusion of single-modal and
cross-modal components. This reconstructed representation ensures fine
granularity comparable to that of the single-modal representation, while also
preserving the adaptive encoding characteristics of cross-modal representation.
Additionally, we extend our original structure into a flexible encoder-decoder
framework, enabling the model to adapt to various downstream tasks. Extensive
experiments on commonly used text-video retrieval, video-question answering,
and video captioning benchmarks validate the effectiveness and generalization
of our method, with superior performance throughout.
|
2412.20965 | Practical Implementation and Experimental Validation of an Optimal
Control based Eco-Driving System | eess.SY cs.SY | The main goal of Eco-Driving (ED) is to maximize energy efficiency. This
study evaluates the energy gains of an ED system for an electric vehicle,
obtained from a predictive optimal controller, in a real-world traffic
scenario. To this end, a Visual driver Advisory System (VAS) in the form of a
personal tablet is used to advise the driver to follow a target eco-speed via a
screen. Two Renault Zoe electric cars, one equipped with the different modules
for ED and one without, are used to perform field tests on a route between
Rueil-Malmaison and Bougival in France. Overall, the eco-driven car consumed,
on average, 4.6~$\%$ less energy than the non-eco-driven car, with at most a
2~$\%$ change in average speed.
|
2412.20974 | FPGA-based Acceleration of Neural Network for Image Classification using
Vitis AI | cs.CV eess.IV | In recent years, Convolutional Neural Networks (CNNs) have been widely
adopted in computer vision. Complex CNN architectures running on CPUs or GPUs
suffer from either insufficient throughput or prohibitive power consumption. Hence, there
is a need to have dedicated hardware to accelerate the computation workload to
solve these limitations. In this paper, we accelerate a CNN for image
classification with the CIFAR-10 dataset using Vitis-AI on Xilinx Zynq
UltraScale+ MPSoC ZCU104 FPGA evaluation board. The work achieves 3.33-5.82x
higher throughput and 3.39-6.30x higher energy efficiency than CPU and GPU
baselines. It shows the potential to extract 2D features for downstream tasks,
such as depth estimation and 3D reconstruction.
|
2412.20976 | Hierarchical Pose Estimation and Mapping with Multi-Scale Neural Feature
Fields | cs.RO | Robotic applications require a comprehensive understanding of the scene. In
recent years, neural fields-based approaches that parameterize the entire
environment have become popular. These approaches are promising due to their
continuous nature and their ability to learn scene priors. However, the use of
neural fields in robotics becomes challenging when dealing with unknown sensor
poses and sequential measurements. This paper focuses on the problem of sensor
pose estimation for large-scale neural implicit SLAM. We investigate implicit
mapping from a probabilistic perspective and propose hierarchical pose
estimation with a corresponding neural network architecture. Our method is
well-suited for large-scale implicit map representations. The proposed approach
operates on consecutive outdoor LiDAR scans and achieves accurate pose
estimation, while maintaining stable mapping quality for both short and long
trajectories. We built our method on a structured and sparse implicit
representation suitable for large-scale reconstruction and evaluated it using
the KITTI and MaiCity datasets. Our approach outperforms the baseline in terms
of mapping with unknown poses and achieves state-of-the-art localization
accuracy.
|
2412.20977 | UnrealZoo: Enriching Photo-realistic Virtual Worlds for Embodied AI | cs.AI cs.CV cs.RO | We introduce UnrealZoo, a rich collection of photo-realistic 3D virtual
worlds built on Unreal Engine, designed to reflect the complexity and
variability of open worlds. Additionally, we offer a variety of playable
entities for embodied AI agents. Based on UnrealCV, we provide a suite of
easy-to-use Python APIs and tools for various potential applications, such as
data collection, environment augmentation, distributed training, and
benchmarking. We optimize the rendering and communication efficiency of
UnrealCV to support advanced applications, such as multi-agent interaction. Our
experiments benchmark agents in various complex scenes, focusing on visual
navigation and tracking, which are fundamental capabilities for embodied visual
intelligence. The results yield valuable insights into the advantages of
diverse training environments for reinforcement learning (RL) agents and the
challenges faced by current embodied vision agents, including those based on RL
and large vision-language models (VLMs), in open worlds. These challenges
involve latency in closed-loop control in dynamic scenes and reasoning about 3D
spatial structures in unstructured terrain.
|
2412.20980 | Efficient Parallel Genetic Algorithm for Perturbed Substructure
Optimization in Complex Network | cs.NE cs.SI | Evolutionary computing, particularly genetic algorithm (GA), is a
combinatorial optimization method inspired by natural selection and the
transmission of genetic information, which is widely used to identify optimal
solutions to complex problems through simulated programming and iteration. Due
to its strong adaptability, flexibility, and robustness, GA has shown
significant performance and potential on perturbed substructure optimization
(PSSO), an important graph mining problem that achieves its goals by modifying
network structures. However, the efficiency and practicality of GA-based PSSO
face enormous challenges due to the complexity and diversity of application
scenarios. While some research has explored acceleration frameworks in
evolutionary computing, their performance on PSSO remains limited due to a lack
of scenario generalizability. Motivated by these observations, this paper
presents the first GA-based PSSO Acceleration framework (GAPA), which simplifies the
GA development process and supports distributed acceleration. Specifically, it
reconstructs the genetic operation and designs a development framework for
efficient parallel acceleration. Meanwhile, GAPA includes an extensible library
that optimizes and accelerates 10 PSSO algorithms, covering 4 crucial tasks for
graph mining. Comprehensive experiments on 18 datasets across 4 tasks and 10
algorithms demonstrate the superiority of GAPA, achieving an average 4x speedup
over EvoX. The repository is available at
https://github.com/NetAlsGroup/GAPA.
|
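The kind of bottleneck a framework like GAPA (2412.20980) parallelizes is the per-generation fitness evaluation of the GA population. A bare-bones sketch with Python's standard library; the actual framework reconstructs genetic operators and supports distributed execution, and `fitness` here must be a picklable top-level function.

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_population(fitness, population, workers=8):
    """Score all candidate perturbations concurrently across processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))
```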
2412.20984 | AlignAb: Pareto-Optimal Energy Alignment for Designing Nature-Like
Antibodies | cs.LG | We present a three-stage framework for training deep learning models
specializing in antibody sequence-structure co-design. We first pre-train a
language model using millions of antibody sequence data. Then, we employ the
learned representations to guide the training of a diffusion model for joint
optimization over both sequence and structure of antibodies. During the final
alignment stage, we optimize the model to favor antibodies with low repulsion
and high attraction to the antigen binding site, enhancing the rationality and
functionality of the designs. To mitigate conflicting energy preferences, we
extend AbDPO (Antibody Direct Preference Optimization) to guide the model
towards Pareto optimality under multiple energy-based alignment objectives.
Furthermore, we adopt an iterative learning paradigm with temperature scaling,
enabling the model to benefit from diverse online datasets without requiring
additional data. In practice, our proposed methods achieve high stability and
efficiency in producing a better Pareto front of antibody designs compared to
top samples generated by baselines and previous alignment techniques. Through
extensive experiments, we showcase the superior performance of our methods in
consistently generating nature-like antibodies with high binding affinity.
|
2412.20987 | RobustBlack: Challenging Black-Box Adversarial Attacks on
State-of-the-Art Defenses | cs.LG | Although adversarial robustness has been extensively studied in white-box
settings, recent advances in black-box attacks (including transfer- and
query-based approaches) are primarily benchmarked against weak defenses,
leaving a significant gap in the evaluation of their effectiveness against more
recent and moderately robust models (e.g., those featured in the RobustBench
leaderboard). In this paper, we question this lack of attention from black-box
attacks to robust models. We establish a framework to evaluate the
effectiveness of recent black-box attacks against both top-performing and
standard defense mechanisms, on the ImageNet dataset. Our empirical evaluation
reveals the following key findings: (1) the most advanced black-box attacks
struggle to succeed even against simple adversarially trained models; (2)
robust models that are optimized to withstand strong white-box attacks, such as
AutoAttack, also exhibit enhanced resilience against black-box attacks; and
(3) robustness alignment between the surrogate models and the target model is
a key factor in the success rate of transfer-based attacks.
|
2412.20992 | Verified Lifting of Deep learning Operators | cs.LG cs.PL stat.ML | Deep learning operators are fundamental components of modern deep learning
frameworks. With the growing demand for customized operators, it has become
increasingly common for developers to create their own. However, designing and
implementing operators is complex and error-prone, due to hardware-specific
optimizations and the need for numerical stability. There is a pressing need
for tools that can summarize the functionality of both existing and
user-defined operators. To address this gap, this work introduces a novel
framework for the verified lifting of deep learning operators, which
synthesizes high-level mathematical formulas from low-level implementations.
Our approach combines symbolic execution, syntax-guided synthesis, and
SMT-based verification to produce readable and formally verified mathematical
formulas. In synthesis, we employ a combination of top-down and bottom-up
strategies to explore the vast search space efficiently; in verification, we
design invariant synthesis patterns and leverage SMT solvers to validate the
correctness of the derived summaries; in simplification, we use e-graph-based
techniques with custom rules to restore complex formulas to their natural,
intuitive forms. Evaluated on a dataset of real-world deep learning operators
implemented in Triton, our method demonstrates more effective synthesis and
verification than existing techniques. This framework
bridges the gap between low-level implementations and high-level abstractions,
improving understanding and reliability in deep learning operator development.
|
2412.20993 | Efficiently Serving LLM Reasoning Programs with Certaindex | cs.LG cs.CL | The rapid evolution of large language models (LLMs) has unlocked their
capabilities in advanced reasoning tasks like mathematical problem-solving,
code generation, and legal analysis. Central to this progress are
inference-time reasoning algorithms, which refine outputs by exploring multiple
solution paths, at the cost of increased compute demands and response
latencies. Existing serving systems fail to adapt to the scaling behaviors of
these algorithms or the varying difficulty of queries, leading to inefficient
resource use and unmet latency targets.
We present Dynasor, a system that optimizes inference-time compute for LLM
reasoning queries. Unlike traditional engines, Dynasor tracks and schedules
requests within reasoning queries and uses Certaindex, a proxy that measures
statistical reasoning progress based on model certainty, to guide compute
allocation dynamically. Dynasor co-adapts scheduling with reasoning progress:
it allocates more compute to hard queries, reduces compute for simpler ones,
and terminates unpromising queries early, balancing accuracy, latency, and
cost. On diverse datasets and algorithms, Dynasor reduces compute by up to 50%
in batch processing and sustains 3.3x higher query rates or 4.7x tighter
latency SLOs in online serving.
|
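A toy, self-contained illustration of certainty-guided compute allocation in the spirit of Certaindex (2412.20993): sample reasoning paths one at a time and stop once the majority answer is certain enough. The agreement-based certainty proxy, the minimum of four samples, and the 0.8 threshold are our assumptions; Dynasor measures reasoning progress inside the serving engine itself.

```python
from collections import Counter

def answer_with_budget(sample_answer, max_paths=16, min_paths=4, threshold=0.8):
    """Sample reasoning paths until the majority answer is certain enough."""
    answers = []
    for _ in range(max_paths):
        answers.append(sample_answer())          # one full reasoning path
        answer, count = Counter(answers).most_common(1)[0]
        if len(answers) >= min_paths and count / len(answers) >= threshold:
            return answer, len(answers)          # early exit saves compute
    return Counter(answers).most_common(1)[0][0], len(answers)
```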
2412.20995 | KARPA: A Training-free Method of Adapting Knowledge Graph as References
for Large Language Model's Reasoning Path Aggregation | cs.CL cs.AI | Large language models (LLMs) demonstrate exceptional performance across a
variety of tasks, yet they are often affected by hallucinations and the
timeliness of knowledge. Leveraging knowledge graphs (KGs) as external
knowledge sources has emerged as a viable solution, but existing methods for
LLM-based knowledge graph question answering (KGQA) are often limited by
step-by-step decision-making on KGs, restricting the global planning and
reasoning capabilities of LLMs, or they require fine-tuning or pre-training on
specific KGs. To address these challenges, we propose Knowledge graph Assisted
Reasoning Path Aggregation (KARPA), a novel framework that harnesses the global
planning abilities of LLMs for efficient and accurate KG reasoning. KARPA
operates in three steps: pre-planning relation paths using the LLM's global
planning capabilities, matching semantically relevant paths via an embedding
model, and reasoning over these paths to generate answers. Unlike existing KGQA
methods, KARPA avoids stepwise traversal, requires no additional training, and
is adaptable to various LLM architectures. Extensive experimental results show
that KARPA achieves state-of-the-art performance in KGQA tasks, delivering both
high efficiency and accuracy. Our code will be available on GitHub.
|
2412.20996 | Plug-and-Play Training Framework for Preference Optimization | cs.CL | Recently, preference optimization methods such as DPO have significantly
enhanced large language models (LLMs) across a wide range of tasks, including dialogue and
question-answering. However, current methods fail to account for the varying
difficulty levels of training samples during preference optimization, leading
to mediocre performance in tasks with high accuracy requirements, particularly
in mathematical reasoning. To address this limitation, we propose a novel
training framework, which employs multiple sampling to analyze output
distributions, assign different weights to samples, and incorporate these
weights into the preference optimization process. This plug-and-play approach
enables LLMs to prioritize challenging examples during training, improving
learning efficiency. Experimental results demonstrate that our framework
integrates seamlessly with various preference optimization methods and achieves
consistent improvements in mathematical reasoning tasks.
|
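One way to read the framework above (2412.20996) is as a per-sample weight on a standard preference loss, with the weight derived from multiple sampling. The sketch below scales a DPO-style loss by `1 - pass_rate` so that harder prompts contribute more; this particular weighting is our assumption, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def weighted_dpo_loss(policy_logratio, ref_logratio, pass_rates, beta=0.1):
    """DPO loss scaled per sample by a difficulty weight in [0, 1]."""
    # logratio = log p(chosen) - log p(rejected) under policy / reference model
    weights = 1.0 - pass_rates                 # harder samples weigh more
    logits = beta * (policy_logratio - ref_logratio)
    per_sample = -F.logsigmoid(logits)         # standard DPO objective
    return (weights * per_sample).mean()
```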
2412.20998 | T-DOM: A Taxonomy for Robotic Manipulation of Deformable Objects | cs.RO | Robotic grasp and manipulation taxonomies, inspired by observing human
manipulation strategies, can provide key guidance for tasks ranging from
robotic gripper design to the development of manipulation algorithms. The
existing grasp and manipulation taxonomies, however, often assume object
rigidity, which limits their ability to reason about the complex interactions
in the robotic manipulation of deformable objects. Hence, to assist in tasks
involving deformable objects, taxonomies need to capture more comprehensively
the interactions inherent in deformable object manipulation. To this end, we
introduce T-DOM, a taxonomy that analyses key aspects involved in the
manipulation of deformable objects, such as robot motion, forces, prehensile
and non-prehensile interactions and, for the first time, a detailed
classification of object deformations. To evaluate T-DOM, we curate a dataset
of ten tasks involving a variety of deformable objects, such as garments,
ropes, and surgical gloves, as well as diverse types of deformations. We
analyse the proposed tasks, comparing the T-DOM taxonomy with previous
well-established manipulation taxonomies. Our analysis demonstrates that T-DOM can
effectively distinguish between manipulation skills that were not identified in
other taxonomies, across different deformable objects and manipulation actions,
offering new categories to characterize a skill. The proposed taxonomy
significantly extends past work, providing a more fine-grained classification
that can be used to describe the robotic manipulation of deformable objects.
This work establishes a foundation for advancing deformable object
manipulation, bridging theoretical understanding and practical implementation
in robotic systems.
|
2412.21001 | LEASE: Offline Preference-based Reinforcement Learning with High Sample
Efficiency | cs.LG cs.AI | Offline preference-based reinforcement learning (PbRL) provides an effective
way to overcome the challenges of designing reward and the high costs of online
interaction. However, since preference labeling requires real-time human
feedback, acquiring sufficient preference labels is challenging. To solve this,
this paper proposes an offLine prEference-bAsed RL with high Sample Efficiency
(LEASE) algorithm, where a learned transition model is leveraged to generate
unlabeled preference data. Considering the pretrained reward model may generate
incorrect labels for unlabeled data, we design an uncertainty-aware mechanism
to ensure the performance of the reward model, where only high-confidence and
low-variance data are selected. Moreover, we provide the generalization bound
of the reward model to analyze the factors influencing reward accuracy, and
demonstrate that the policy learned by LEASE has theoretical improvement
guarantee. The developed theory is based on state-action pairs, and can be
easily combined with other offline algorithms. The experimental results show
that LEASE achieves performance comparable to the baseline with fewer
preference data and without online interaction.
|
2412.21004 | Weber-Fechner Law in Temporal Difference learning derived from Control
as Inference | cs.LG cs.RO | This paper investigates a novel nonlinear update rule based on temporal
difference (TD) errors in reinforcement learning (RL). The update rule in the
standard RL states that the degree of update is linearly proportional to the
TD error, treating all rewards equally and without bias. In contrast, recent
biological studies have revealed nonlinearities between the TD error and the
degree of update, biasing policies toward optimism or pessimism. Such
nonlinearity-induced biases are expected to be useful features intentionally
retained in biological learning. Therefore, this
research explores a theoretical framework that can leverage the nonlinearity
between the degree of the update and TD errors. To this end, we focus on a
control as inference framework, since it is known as a generalized formulation
encompassing various RL and optimal control methods. In particular, we
investigate the uncomputable nonlinear term that must be approximately excluded
when deriving standard RL from control as inference. Analyzing this term
reveals the Weber-Fechner law (WFL): perception (i.e., the degree of update) in
response to a stimulus change (i.e., the TD error) is attenuated as the
stimulus intensity (i.e., the value function) increases. To numerically
reveal the utilities of WFL on RL, we then propose a practical implementation
using a reward-punishment framework and modifying the definition of optimality.
Analysis of this implementation reveals that two utilities can be expected: i)
to increase rewards to a certain level early, and ii) to sufficiently suppress
punishment. We finally investigate and discuss the expected utilities through
simulations and robot experiments. As a result, the proposed RL algorithm with
WFL shows the expected utilities that accelerate the reward-maximizing startup
and continue to suppress punishments during learning.
|
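A toy TD(0) update illustrating the Weber-Fechner attenuation described above (2412.21004): the effective step size shrinks as the stored value (the stimulus intensity) grows. The specific attenuation form `1/(1+|V[s]|)` is an illustrative assumption, not the paper's derived rule.

```python
def wfl_td_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """TD(0) with Weber-Fechner-style attenuation of the update."""
    td_error = r + gamma * V[s_next] - V[s]   # stimulus change
    attenuation = 1.0 / (1.0 + abs(V[s]))     # weaker updates at large values
    V[s] += alpha * attenuation * td_error
    return td_error
```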
2412.21006 | Verbosity-Aware Rationale Reduction: Effective Reduction of Redundant
Rationale via Principled Criteria | cs.CL cs.AI | Large Language Models (LLMs) rely on generating extensive intermediate
reasoning units (e.g., tokens, sentences) to enhance final answer quality
across a wide range of complex tasks. While generating multiple reasoning paths
or iteratively refining rationales proves effective for improving performance,
these approaches inevitably result in significantly higher inference costs. In
this work, we propose a novel sentence-level rationale reduction training
framework that leverages a likelihood-based criterion, verbosity, to identify and
remove redundant reasoning sentences. Unlike previous approaches that utilize
token-level reduction, our sentence-level reduction framework maintains model
performance while reducing generation length. This preserves the original
reasoning abilities of LLMs and achieves an average 17.15% reduction in
generation costs across various models and tasks.
|
2412.21009 | Towards Identity-Aware Cross-Modal Retrieval: a Dataset and a Baseline | cs.CV cs.IR cs.MM | Recent advancements in deep learning have significantly enhanced
content-based retrieval methods, notably through models like CLIP that map
images and texts into a shared embedding space. However, these methods often
struggle with domain-specific entities and long-tail concepts absent from their
training data, particularly in identifying specific individuals. In this paper,
we explore the task of identity-aware cross-modal retrieval, which aims to
retrieve images of persons in specific contexts based on natural language
queries. This task is critical in various scenarios, such as for searching and
browsing personalized video collections or large audio-visual archives
maintained by national broadcasters. We introduce a novel dataset, COCO Person
FaceSwap (COCO-PFS), derived from the widely used COCO dataset and enriched
with deepfake-generated faces from VGGFace2. This dataset addresses the lack of
large-scale datasets needed for training and evaluating models for this task.
Our experiments assess the performance of different CLIP variations repurposed
for this task, including our architecture, Identity-aware CLIP (Id-CLIP), which
achieves competitive retrieval performance through targeted fine-tuning. Our
contributions lay the groundwork for more robust cross-modal retrieval systems
capable of recognizing long-tail identities and contextual nuances. Data and
code are available at https://github.com/mesnico/IdCLIP.
|
2412.21015 | MapQaTor: A System for Efficient Annotation of Map Query Datasets | cs.CL cs.HC | Mapping and navigation services like Google Maps, Apple Maps, and
OpenStreetMap are essential for accessing various location-based data, yet they often
struggle to handle natural language geospatial queries. Recent advancements in
Large Language Models (LLMs) show promise in question answering (QA), but
creating reliable geospatial QA datasets from map services remains challenging.
We introduce MapQaTor, a web application that streamlines the creation of
reproducible, traceable map-based QA datasets. With its plug-and-play
architecture, MapQaTor enables seamless integration with any maps API, allowing
users to gather and visualize data from diverse sources with minimal setup. By
caching API responses, the platform ensures consistent ground truth, enhancing
the reliability of the data even as real-world information evolves. MapQaTor
centralizes data retrieval, annotation, and visualization within a single
platform, offering a unique opportunity to evaluate the current state of
LLM-based geospatial reasoning while advancing their capabilities for improved
geospatial understanding. Evaluation metrics show that MapQaTor speeds up the
annotation process by at least 30 times compared to manual methods,
underscoring its potential for developing geospatial resources, such as complex
map reasoning datasets. The website is live at: https://mapqator.github.io/ and
a demo video is available at: https://youtu.be/7_aV9Wmhs6Q.
|
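The response-caching idea MapQaTor (2412.21015) uses to keep ground truth consistent can be sketched in a few lines: identical API requests are keyed by a hash and served from a local store after the first live call. Endpoint handling and the cache layout here are illustrative, not the system's implementation.

```python
import hashlib
import json
import requests

_cache = {}

def cached_get(url, params):
    """Serve repeated identical API requests from a local cache."""
    key = hashlib.sha256(json.dumps([url, sorted(params.items())]).encode()).hexdigest()
    if key not in _cache:                    # first call hits the live API
        _cache[key] = requests.get(url, params=params, timeout=10).json()
    return _cache[key]                       # later calls reuse the stored truth
```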
2412.21022 | Text Classification: Neural Networks VS Machine Learning Models VS
Pre-trained Models | cs.LG | Text classification is a very common task, and there are many
efficient methods and algorithms that can be employed to accomplish it.
Transformers have revolutionized the field of deep learning, particularly in
Natural Language Processing (NLP) and have rapidly expanded to other domains
such as computer vision, time-series analysis and more. The transformer model
was first introduced in the context of machine translation, and its
architecture relies on self-attention mechanisms to capture complex
relationships within data sequences. It is able to handle long-range
dependencies more effectively than traditional neural networks (such as
Recurrent Neural Networks and Multilayer Perceptrons). In this work, we present
a comparison between different techniques to perform text classification. We
take into consideration seven pre-trained models, three standard neural
networks and three machine learning models. For standard neural networks and
machine learning models we also compare two embedding techniques: TF-IDF and
GloVe, with the latter consistently outperforming the former. Finally, we
demonstrate the results from our experiments where pre-trained models such as
BERT and DistilBERT always perform better than standard models/algorithms.
|
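As a concrete example of the classical side of the comparison above (2412.21022), a TF-IDF pipeline feeding a standard classifier can be set up in a few lines with scikit-learn. Logistic regression here stands in for whichever machine learning models the paper evaluates; the vectorizer settings are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_tfidf_baseline(train_texts, train_labels):
    """TF-IDF features into a linear classifier; a classical baseline."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_texts, train_labels)
    return model  # model.predict(texts) yields class labels
```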
2412.21023 | EdgeRAG: Online-Indexed RAG for Edge Devices | cs.LG | Deploying Retrieval Augmented Generation (RAG) on resource-constrained edge
devices is challenging due to limited memory and processing power. In this
work, we propose EdgeRAG, which addresses the memory constraint by pruning
embeddings within clusters and generating embeddings on-demand during
retrieval. To avoid the latency of generating embeddings for large tail
clusters, EdgeRAG pre-computes and stores embeddings for these clusters, while
adaptively caching remaining embeddings to minimize redundant computations and
further optimize latency. Results on the BEIR suite show that EdgeRAG offers
significant latency reduction over the baseline IVF index with similar
generation quality, while allowing all of our evaluated datasets to fit in
memory.
|
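A schematic of the retrieval pattern EdgeRAG (2412.21023) describes: keep cluster centroids resident, recompute a cluster's embeddings on demand at query time, and cache hot clusters to avoid redundant work. The `embed` callable, the inner-product scoring (which assumes normalized vectors), and the LRU policy are illustrative assumptions, not the paper's implementation.

```python
from functools import lru_cache
import numpy as np

class OnDemandIVF:
    def __init__(self, centroids, cluster_texts, embed):
        self.centroids = centroids          # (n_clusters, dim), kept in memory
        self.cluster_texts = cluster_texts  # list of lists of passages
        self.embed = embed                  # text -> normalized vector

    @lru_cache(maxsize=32)                  # cache embeddings of hot clusters
    def _cluster_embeddings(self, cid):
        return np.stack([self.embed(t) for t in self.cluster_texts[cid]])

    def search(self, query, top_k=5):
        q = self.embed(query)
        cid = int(np.argmax(self.centroids @ q))   # nearest centroid
        sims = self._cluster_embeddings(cid) @ q   # embed on demand
        return [self.cluster_texts[cid][i] for i in np.argsort(-sims)[:top_k]]
```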
2412.21030 | Improving Location-based Thermal Emission Side-Channel Analysis Using
Iterative Transfer Learning | cs.LG cs.CR | This paper proposes the use of iterative transfer learning applied to deep
learning models for side-channel attacks. Currently, most of the side-channel
attack methods train a model for each individual byte, without considering the
correlation between bytes. However, since the models' parameters for attacking
different bytes may be similar, we can leverage transfer learning, meaning that
we first train the model for one of the key bytes, then use the trained model
as a pretrained model for the remaining bytes. This technique can be applied
iteratively, a process known as iterative transfer learning. Experimental
results show that when using thermal or power consumption map images as input,
and multilayer perceptron or convolutional neural network as the model, our
method improves average performance, especially when the amount of data is
insufficient.
|
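The iterative transfer learning loop described above (2412.21030) is simple to state: the model trained for key byte i initializes the model for byte i+1. A PyTorch-style sketch, with model construction and the per-byte training routine left as placeholder callables.

```python
import copy

def train_bytes_iteratively(make_model, train_one, data_per_byte):
    """Train one model per key byte, warm-starting from the previous byte."""
    models, prev_state = [], None
    for traces, labels in data_per_byte:
        model = make_model()
        if prev_state is not None:
            model.load_state_dict(prev_state)   # transfer from previous byte
        train_one(model, traces, labels)
        prev_state = copy.deepcopy(model.state_dict())
        models.append(model)
    return models
```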
2412.21033 | Plancraft: an evaluation dataset for planning with LLM agents | cs.CL cs.AI | We present Plancraft, a multi-modal evaluation dataset for LLM agents.
Plancraft has both a text-only and multi-modal interface, based on the
Minecraft crafting GUI. We include the Minecraft Wiki to evaluate tool use and
Retrieval Augmented Generation (RAG), as well as an oracle planner and oracle
RAG information extractor, to ablate the different components of a modern agent
architecture. To evaluate decision-making, Plancraft also includes a subset of
examples that are intentionally unsolvable, providing a realistic challenge
that requires the agent not only to complete tasks but also to decide whether
they are solvable at all. We benchmark both open-source and closed-source LLMs
and strategies on our task and compare their performance to a handcrafted
planner. We find that LLMs and VLMs struggle with the planning problems that
Plancraft introduces, and we offer suggestions on how to improve their
capabilities.
|
2412.21035 | Machine Learning Optimal Ordering in Global Routing Problems in
Semiconductors | cs.LG cs.DM | In this work, we propose a new method for ordering nets during the process of
layer assignment in global routing problems. The global routing problems that
we focus on in this work are based on routing problems that occur in the design
of substrates in multilayered semiconductor packages. The proposed new method
is based on machine learning techniques and we show that the proposed method
outperforms conventional net-ordering techniques based on heuristic score
functions. We perform global routing experiments in multilayered semiconductor
package environments in order to illustrate that the routing order based on our
new proposed technique outperforms previous methods based on heuristics. Our
approach of using machine learning for global routing targets specifically the
net ordering step which we show in this work can be significantly improved by
deep learning.
|
2412.21036 | GePBench: Evaluating Fundamental Geometric Perception for Multimodal
Large Language Models | cs.CL | Multimodal large language models (MLLMs) have made significant progress in
integrating visual and linguistic understanding. Existing benchmarks typically
focus on high-level semantic capabilities, such as scene understanding and
visual reasoning, but often overlook a crucial, foundational ability: geometric
perception. Geometric perception involves understanding geometric shapes,
structures, and spatial relationships, which are essential for supporting
higher-level semantic tasks. Despite its importance, this capability remains
underexplored in current MLLM research. To address this gap, we introduce
GePBench, a novel benchmark designed to assess the geometric perception
abilities of MLLMs. Our extensive evaluations reveal that current
state-of-the-art MLLMs exhibit significant deficiencies in geometric perception
tasks. Furthermore, we show that models trained with GePBench data demonstrate
substantial improvements on a wide range of benchmark tasks, highlighting the
critical role of geometric perception in enabling advanced multimodal
applications. Our code and datasets will be publicly available.
|
2412.21037 | TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow
Matching and Clap-Ranked Preference Optimization | cs.SD cs.AI cs.CL eess.AS | We introduce TangoFlux, an efficient Text-to-Audio (TTA) generative model
with 515M parameters, capable of generating up to 30 seconds of 44.1kHz audio
in just 3.7 seconds on a single A40 GPU. A key challenge in aligning TTA models
lies in the difficulty of creating preference pairs, as TTA lacks structured
mechanisms like verifiable rewards or gold-standard answers available for Large
Language Models (LLMs). To address this, we propose CLAP-Ranked Preference
Optimization (CRPO), a novel framework that iteratively generates and optimizes
preference data to enhance TTA alignment. We demonstrate that the audio
preference dataset generated using CRPO outperforms existing alternatives. With
this framework, TangoFlux achieves state-of-the-art performance across both
objective and subjective benchmarks. We open source all code and models to
support further research in TTA generation.
|
2412.21042 | Visual Style Prompt Learning Using Diffusion Models for Blind Face
Restoration | cs.CV cs.MM | Blind face restoration aims to recover high-quality facial images from
various unidentified sources of degradation, posing significant challenges due
to the minimal information retrievable from the degraded images. Prior
knowledge-based methods, leveraging geometric priors and facial features, have
led to advancements in face restoration but often fall short of capturing fine
details. To address this, we introduce a visual style prompt learning framework
that utilizes diffusion probabilistic models to explicitly generate visual
prompts within the latent space of pre-trained generative models. These prompts
are designed to guide the restoration process. To fully utilize the visual
prompts and enhance the extraction of informative and rich patterns, we
introduce a style-modulated aggregation transformation layer. Extensive
experiments and applications demonstrate the superiority of our method in
achieving high-quality blind face restoration. The source code is available at
\href{https://github.com/LonglongaaaGo/VSPBFR}{https://github.com/LonglongaaaGo/VSPBFR}.
|
2412.21044 | E2EDiff: Direct Mapping from Noise to Data for Enhanced Diffusion Models | cs.CV | Diffusion models have emerged as a powerful framework for generative
modeling, achieving state-of-the-art performance across various tasks. However,
they face several inherent limitations, including a training-sampling gap,
information leakage in the progressive noising process, and the inability to
incorporate advanced loss functions like perceptual and adversarial losses
during training. To address these challenges, we propose an innovative
end-to-end training framework that aligns the training and sampling processes
by directly optimizing the final reconstruction output. Our method eliminates
the training-sampling gap, mitigates information leakage by treating the
training process as a direct mapping from pure noise to the target data
distribution, and enables the integration of perceptual and adversarial losses
into the objective. Extensive experiments on benchmarks such as COCO30K and
HW30K demonstrate that our approach consistently outperforms traditional
diffusion models, achieving superior results in terms of FID and CLIP score,
even with reduced sampling steps. These findings highlight the potential of
end-to-end training to advance diffusion-based generative models toward more
robust and efficient solutions.
|
2412.21046 | Mind the truncation gap: challenges of learning on dynamic graphs with
recurrent architectures | cs.LG | Systems characterized by evolving interactions, prevalent in social,
financial, and biological domains, are effectively modeled as continuous-time
dynamic graphs (CTDGs). To manage the scale and complexity of these graph
datasets, machine learning (ML) approaches have become essential. However,
CTDGs pose challenges for ML because traditional static graph methods do not
naturally account for event timings. Newer approaches, such as graph recurrent
neural networks (GRNNs), are inherently time-aware and offer advantages over
static methods for CTDGs. However, GRNNs face another issue: the short
truncation of backpropagation-through-time (BPTT), whose impact has not been
properly examined until now. In this work, we demonstrate that this truncation
can limit the learning of dependencies beyond a single hop, resulting in
reduced performance. Through experiments on a novel synthetic task and
real-world datasets, we reveal a performance gap between full
backpropagation-through-time (F-BPTT) and the truncated
backpropagation-through-time (T-BPTT) commonly used to train GRNN models. We
term this gap the "truncation gap" and argue that understanding and addressing
it is essential as the importance of CTDGs grows, discussing potential future
directions for research in this area.
|
2412.21049 | Learning Epidemiological Dynamics via the Finite Expression Method | cs.LG cs.NA math.NA | Modeling and forecasting the spread of infectious diseases is essential for
effective public health decision-making. Traditional epidemiological models
rely on expert-defined frameworks to describe complex dynamics, while neural
networks, despite their predictive power, often lack interpretability due to
their ``black-box'' nature. This paper introduces the Finite Expression Method
(FEX), a symbolic learning framework that leverages reinforcement learning to derive
explicit mathematical expressions for epidemiological dynamics. Through
numerical experiments on both synthetic and real-world datasets, FEX
demonstrates high accuracy in modeling and predicting disease spread, while
uncovering explicit relationships among epidemiological variables. These
results highlight FEX as a powerful tool for infectious disease modeling,
combining interpretability with strong predictive performance to support
practical applications in public health.
|
2412.21051 | Toward Intelligent and Secure Cloud: Large Language Model Empowered
Proactive Defense | cs.CR cs.AI cs.NI | The rapid evolution of cloud computing technologies and the increasing number
of cloud applications have provided numerous benefits in daily life.
However, the diversity and complexity of different components pose a
significant challenge to cloud security, especially when dealing with
sophisticated and advanced cyberattacks. Recent advancements in generative
foundation models (GFMs), particularly in the large language models (LLMs),
offer promising solutions for security intelligence. By exploiting the powerful
abilities in language understanding, data analysis, task inference, action
planning, and code generation, we present LLM-PD, a novel proactive defense
architecture that defeats various threats in a proactive manner. LLM-PD can
efficiently make a decision through comprehensive data analysis and sequential
reasoning, as well as dynamically create and deploy actionable defense
mechanisms on the target cloud. Furthermore, it can flexibly self-evolve based
on experience learned from previous interactions and adapt to new attack
scenarios without additional training. The experimental results demonstrate its
remarkable ability in terms of defense effectiveness and efficiency,
particularly highlighting an outstanding success rate when compared with other
existing methods.
|
2412.21052 | Towards Effective Discrimination Testing for Generative AI | cs.LG cs.AI cs.CY | Generative AI (GenAI) models present new challenges in regulating against
discriminatory behavior. In this paper, we argue that GenAI fairness research
still has not met these challenges; instead, a significant gap remains between
existing bias assessment methods and regulatory goals. This leads to
ineffective regulation that can allow deployment of reportedly fair, yet
actually discriminatory, GenAI systems. Towards remedying this problem, we
connect the legal and technical literature around GenAI bias evaluation and
identify areas of misalignment. Through four case studies, we demonstrate how
this misalignment between fairness testing techniques and regulatory goals can
result in discriminatory outcomes in real-world deployments, especially in
adaptive or complex environments. We offer practical recommendations for
improving discrimination testing to better align with regulatory goals and
enhance the reliability of fairness assessments in future deployments.
|
2412.21059 | VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning
for Image and Video Generation | cs.CV | We present a general strategy for aligning visual generation models -- both
image and video generation -- with human preference. To start with, we build
VisionReward -- a fine-grained and multi-dimensional reward model. We decompose
human preferences in images and videos into multiple dimensions, each
represented by a series of judgment questions, linearly weighted and summed to
an interpretable and accurate score. To address the challenges of video quality
assessment, we systematically analyze various dynamic features of videos, which
helps VisionReward surpass VideoScore by 17.2% and achieve top performance for
video preference prediction. Based on VisionReward, we develop a
multi-objective preference learning algorithm that effectively addresses the
issue of confounding factors within preference data. Our approach significantly
outperforms existing image and video scoring methods on both machine metrics
and human evaluation. All code and datasets are provided at
https://github.com/THUDM/VisionReward.
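As a toy illustration of the checklist-style scoring described above --
judgment questions linearly weighted and summed into an interpretable score --
consider the following sketch; the question names, judgments, and weights are
invented for illustration, not those of the released model:

    # Hypothetical illustration of a linearly weighted judgment checklist.
    judgments = {            # 1.0 = "yes" from a per-question judge model
        "is_sharp": 1.0,
        "matches_prompt": 1.0,
        "has_artifacts": 0.0,
        "motion_is_smooth": 1.0,
    }
    weights = {"is_sharp": 0.3, "matches_prompt": 0.4,
               "has_artifacts": -0.2, "motion_is_smooth": 0.3}

    score = sum(weights[q] * judgments[q] for q in judgments)
    print(f"interpretable reward: {score:.2f}")  # each term is inspectable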
|
2412.21061 | BridgePure: Revealing the Fragility of Black-box Data Protection | cs.LG | Availability attacks, or unlearnable examples, are defensive techniques that
allow data owners to modify their datasets in ways that prevent unauthorized
machine learning models from learning effectively while maintaining the data's
intended functionality. This has led to the release of popular black-box tools
for users to upload personal data and receive protected counterparts. In this
work, we show such black-box protections can be substantially bypassed if a
small set of unprotected in-distribution data is available. Specifically, an
adversary can (1) easily acquire (unprotected, protected) pairs by querying the
black-box protections with the unprotected dataset; and (2) train a diffusion
bridge model to build a mapping. This mapping, termed BridgePure, can
effectively remove the protection from any previously unseen data within the
same distribution. Under this threat model, our method demonstrates superior
purification performance on classification and style mimicry tasks, exposing
critical vulnerabilities in black-box data protection.
|
2412.21063 | Varformer: Adapting VAR's Generative Prior for Image Restoration | cs.CV | Generative models trained on extensive high-quality datasets effectively
capture the structural and statistical properties of clean images, rendering
them powerful priors for transforming degraded features into clean ones in
image restoration. VAR, a novel image generative paradigm, surpasses diffusion
models in generation quality by applying a next-scale prediction approach. It
progressively captures both global structures and fine-grained details through
the autoregressive process, consistent with the multi-scale restoration
principle widely acknowledged in the restoration community. Furthermore, we
observe that during the image reconstruction process utilizing VAR, scale
predictions automatically modulate the input, facilitating the alignment of
representations at subsequent scales with the distribution of clean images. To
harness VAR's adaptive distribution alignment capability in image restoration
tasks, we formulate the multi-scale latent representations within VAR as the
restoration prior, thus advancing our delicately designed VarFormer framework.
The strategic application of these priors enables our VarFormer to achieve
remarkable generalization on unseen tasks while also reducing training
computational costs. Extensive experiments underscore that our VarFormer
outperforms existing multi-task image restoration methods across various
restoration tasks.
|
2412.21065 | Efficient Multi-Task Inferencing with a Shared Backbone and Lightweight
Task-Specific Adapters for Automatic Scoring | cs.CL | The integration of Artificial Intelligence (AI) in education requires
scalable and efficient frameworks that balance performance, adaptability, and
cost. This paper addresses these needs by proposing a shared backbone model
architecture enhanced with lightweight LoRA adapters for task-specific
fine-tuning, targeting the automated scoring of student responses across 27
mutually exclusive tasks. By achieving competitive performance (average QWK of
0.848 compared to 0.888 for fully fine-tuned models) while reducing GPU memory
consumption by 60% and inference latency by 40%, the framework demonstrates
significant efficiency gains. This approach aligns with the workshop's focus on
improving language models for educational tasks, creating responsible
innovations for cost-sensitive deployment, and supporting educators by
streamlining assessment workflows. The findings underscore the potential of
scalable AI to enhance learning outcomes while maintaining fairness and
transparency in automated scoring systems.
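The shared-backbone-with-lightweight-adapters pattern can be sketched with a
hand-rolled low-rank (LoRA-style) layer in PyTorch; the dimensions, rank, and
task names below are illustrative assumptions, not the paper's configuration:

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base weight plus a trainable low-rank update: W + (alpha/r) * B @ A."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False          # shared backbone stays frozen
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

    # One lightweight adapter per scoring task, all sharing one frozen layer.
    backbone_layer = nn.Linear(768, 768)
    task_adapters = {t: LoRALinear(backbone_layer) for t in ("task_01", "task_02")}
    h = torch.randn(4, 768)
    print(task_adapters["task_01"](h).shape)     # torch.Size([4, 768])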
|
2412.21069 | Privacy-Aware Multi-Device Cooperative Edge Inference with Distributed
Resource Bidding | eess.SY cs.LG cs.NI cs.SY | Mobile edge computing (MEC) has empowered mobile devices (MDs) in supporting
artificial intelligence (AI) applications through collaborative efforts with
proximal MEC servers. Unfortunately, despite the great promise of device-edge
cooperative AI inference, data privacy becomes an increasing concern. In this
paper, we develop a privacy-aware multi-device cooperative edge inference
system for classification tasks, which integrates a distributed bidding
mechanism for the MEC server's computational resources. Intermediate feature
compression is adopted as a principled approach to minimize data privacy
leakage. To determine the bidding values and feature compression ratios in a
distributed fashion, we formulate a decentralized partially observable Markov
decision process (DEC-POMDP) model, for which, a multi-agent deep deterministic
policy gradient (MADDPG)-based algorithm is developed. Simulation results
demonstrate the effectiveness of the proposed algorithm in privacy-preserving
cooperative edge inference. Specifically, given a sufficient level of data
privacy protection, the proposed algorithm achieves 0.31-0.95% improvements in
classification accuracy compared to the approach being agnostic to the wireless
channel conditions. The performance is further enhanced by 1.54-1.67% when the
difficulty of the inference data is also taken into account.
|
2412.21071 | Investigating layer-selective transfer learning of QAOA parameters for
Max-Cut problem | quant-ph cond-mat.dis-nn cs.LG | Quantum approximate optimization algorithm (QAOA) is a variational quantum
algorithm (VQA) ideal for noisy intermediate-scale quantum (NISQ) processors,
and is highly successful for solving combinatorial optimization problems
(COPs). It has been observed that the optimal variational parameters obtained
from one instance of a COP can be transferred to another instance, producing
sufficiently satisfactory solutions for the latter. In this context, a suitable
method for further improving the solution is to fine-tune a subset of the
transferred parameters. We numerically explore the role of optimizing
individual QAOA layers in improving the approximate solution of the Max-Cut
problem after parameter transfer. We also investigate the trade-off between a
good approximation and the required optimization time when optimizing
transferred QAOA parameters. These studies show that optimizing a subset of
layers can be more effective at a lower time-cost compared to optimizing all
layers.
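A minimal sketch of the layer-selective fine-tuning pattern -- freeze the
transferred parameters, optimize only a chosen subset of layers -- with a
generic cost standing in for the QAOA Max-Cut expectation (the toy cost below
is not a quantum simulation):

    import numpy as np
    from scipy.optimize import minimize

    np.random.seed(0)

    def cost(params):
        # Stand-in for a QAOA Max-Cut expectation value; not a real circuit.
        return np.sum(np.cos(params) ** 2)

    p = 4                                     # number of QAOA layers
    transferred = np.random.uniform(0, np.pi, size=2 * p)   # (gamma_l, beta_l)
    tune_layers = [3]                         # fine-tune only the last layer
    idx = [2 * l for l in tune_layers] + [2 * l + 1 for l in tune_layers]

    def restricted_cost(free):
        params = transferred.copy()
        params[idx] = free                    # all other layers stay frozen
        return cost(params)

    res = minimize(restricted_cost, transferred[idx], method="COBYLA")
    print("cost before:", cost(transferred), "after:", res.fun)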
|
2412.21072 | Enhanced coarsening of charge density waves induced by electron
correlation: Machine-learning enabled large-scale dynamical simulations | cond-mat.str-el cond-mat.stat-mech cs.LG | The phase ordering kinetics of emergent orders in correlated electron systems
is a fundamental topic in non-equilibrium physics, yet it remains largely
unexplored. The intricate interplay between quasiparticles and emergent
order-parameter fields could lead to unusual coarsening dynamics that is beyond
the standard theories. However, accurate treatment of both quasiparticles and
collective degrees of freedom is a multi-scale challenge in dynamical
simulations of correlated electrons. Here we leverage modern machine learning
(ML) methods to achieve a linear-scaling algorithm for simulating the
coarsening of charge density waves (CDWs), one of the fundamental symmetry
breaking phases in functional electron materials. We demonstrate our approach
on the square-lattice Hubbard-Holstein model and uncover an intriguing
enhancement of CDW coarsening which is related to the screening of on-site
potential by electron-electron interactions. Our study provides fresh insights
into the role of electron correlations in non-equilibrium dynamics and
underscores the promise of ML force-field approaches for advancing multi-scale
dynamical modeling of correlated electron systems.
|
2412.21079 | Edicho: Consistent Image Editing in the Wild | cs.CV | Consistent editing across in-the-wild images is a verified need, yet it remains a
technical challenge arising from various unmanageable factors, like object
poses, lighting conditions, and photography environments. Edicho steps in with
a training-free solution based on diffusion models, featuring a fundamental
design principle of using explicit image correspondence to direct editing.
Specifically, the key components include an attention manipulation module and a
carefully refined classifier-free guidance (CFG) denoising strategy, both of
which take into account the pre-estimated correspondence. Such an
inference-time algorithm enjoys a plug-and-play nature and is compatible with
most diffusion-based editing methods, such as ControlNet and BrushNet.
Extensive results demonstrate the efficacy of Edicho in consistent cross-image
editing under diverse settings. We will release the code to facilitate future
studies.
|
2412.21080 | Vinci: A Real-time Embodied Smart Assistant based on Egocentric
Vision-Language Model | cs.CV | We introduce Vinci, a real-time embodied smart assistant built upon an
egocentric vision-language model. Designed for deployment on portable devices
such as smartphones and wearable cameras, Vinci operates in an "always on"
mode, continuously observing the environment to deliver seamless interaction
and assistance. Users can wake up the system and engage in natural
conversations to ask questions or seek assistance, with responses delivered
through audio for hands-free convenience. With its ability to process long
video streams in real-time, Vinci can answer user queries about current
observations and historical context while also providing task planning based on
past interactions. To further enhance usability, Vinci integrates a video
generation module that creates step-by-step visual demonstrations for tasks
that require detailed guidance. We hope that Vinci can establish a robust
framework for portable, real-time egocentric AI systems, empowering users with
contextual and actionable insights. We release the complete implementation for
on-device development, together with a demo web platform for testing uploaded
videos, at https://github.com/OpenGVLab/vinci.
|
2412.21082 | Quantum Diffusion Model for Quark and Gluon Jet Generation | quant-ph cs.LG hep-ph | Diffusion models have demonstrated remarkable success in image generation,
but they are computationally intensive and time-consuming to train. In this
paper, we introduce a novel diffusion model that benefits from quantum
computing techniques in order to mitigate computational challenges and enhance
generative performance within high energy physics data. The fully quantum
diffusion model replaces Gaussian noise with random unitary matrices in the
forward process and incorporates a variational quantum circuit within the U-Net
in the denoising architecture. We run evaluations on the structurally complex
quark and gluon jets dataset from the Large Hadron Collider. The results
demonstrate that the fully quantum and hybrid models are competitive with a
similar classical model for jet generation, highlighting the potential of using
quantum techniques for machine learning problems.
|
2412.21084 | On the Generalizability of Machine Learning-based Ransomware Detection
in Block Storage | cs.CR cs.LG | Ransomware represents a pervasive threat, traditionally countered at the
operating system, file-system, or network levels. However, these approaches
often introduce significant overhead and remain susceptible to circumvention by
attackers. Recent research has started looking into the detection of
ransomware by observing block IO operations. However, this approach exhibits
significant detection challenges. Recognizing these limitations, our research
pivots towards enabling robust ransomware detection in storage systems,
keeping in mind the limited computational resources available. To perform our
studies, we propose a kernel-based framework capable of efficiently extracting
and analyzing IO operations to identify ransomware activity. The framework can
be adapted to storage systems using computational storage devices to improve
security and fully hide detection overheads. Our method employs a refined set
of computationally light features optimized for ML models to accurately discern
malicious from benign activities.
Using this lightweight approach, we study a wide range of generalizability
aspects and analyze the performance of these models across a large space of
setups and configurations covering a wide range of realistic real-world
scenarios. We reveal various trade-offs and provide strong arguments for the
generalizability of storage-based detection of ransomware and show that our
approach outperforms currently available ML-based ransomware detection in
storage. Empirical validation reveals that our decision tree-based models
achieve remarkable effectiveness, evidenced by higher median F1 scores of up to
12.8%, lower false negative rates of up to 10.9% and particularly decreased
false positive rates of up to 17.1% compared to existing storage-based
detection approaches.
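A minimal sketch of the classification step on hypothetical lightweight
block-IO features; the feature set, labels, and data below are invented for
illustration and do not reproduce the paper's kernel framework:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical lightweight block-IO features per time window.
    X = np.column_stack([
        rng.poisson(20, n),        # write ops per window
        rng.random(n),             # fraction of overwrites
        rng.random(n),             # mean entropy of written blocks
    ])
    # Toy labels: high-entropy, overwrite-heavy windows look ransomware-like.
    y = ((X[:, 1] > 0.6) & (X[:, 2] > 0.7)).astype(int)

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5).fit(Xtr, ytr)
    print("F1:", f1_score(yte, clf.predict(Xte)))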
|
2412.21088 | Advances in Multi-agent Reinforcement Learning: Persistent Autonomy and
Robot Learning Lab Report 2024 | cs.MA | Multi-Agent Reinforcement Learning (MARL) approaches have emerged as popular
solutions to address the general challenges of cooperation in multi-agent
environments, where the success of achieving shared or individual goals
critically depends on the coordination and collaboration between agents.
However, existing cooperative MARL methods face several challenges intrinsic to
multi-agent systems, such as the curse of dimensionality, non-stationarity, and
the need for a global exploration strategy. Moreover, the presence of agents
with constraints (e.g., limited battery life, restricted mobility) or distinct
roles further exacerbates these challenges. This document provides an overview
of recent advances in Multi-Agent Reinforcement Learning (MARL) conducted at
the Persistent Autonomy and Robot Learning (PeARL) lab at the University of
Massachusetts Lowell. We briefly discuss various research directions and
present a selection of approaches proposed in our most recent publications. For
each proposed approach, we also highlight potential future directions to
further advance the field.
|
2412.21095 | Lyapunov-Based Deep Neural Networks for Adaptive Control of Stochastic
Nonlinear Systems | eess.SY cs.SY | Controlling nonlinear stochastic dynamical systems involves substantial
challenges when the dynamics contain unknown and unstructured nonlinear
state-dependent terms. For such complex systems, deep neural networks can serve
as powerful black box approximators for the unknown drift and diffusion
processes. Recent developments construct Lyapunov-based deep neural network
(Lb-DNN) controllers to compensate for deterministic uncertainties using
adaptive weight update laws derived from a Lyapunov-based analysis that draws
on insights from the compositional structure of the DNN architecture. However,
these Lb-DNN controllers do not account for non-deterministic uncertainties.
This paper develops Lb-DNNs to adaptively compensate for both the drift and
diffusion uncertainties of nonlinear stochastic dynamic systems. Through a
Lyapunov-based stability analysis, a DNN-based approximation and corresponding
DNN weight adaptation laws are constructed to eliminate the unknown
state-dependent terms resulting from the nonlinear diffusion and drift
processes. The tracking error is shown to be uniformly ultimately bounded in
probability. Simulations are performed on a nonlinear stochastic dynamical
system to show the efficacy of the proposed method.
|
2412.21102 | Exploring and Controlling Diversity in LLM-Agent Conversation | cs.CL cs.AI | Diversity is a critical aspect of multi-agent communication. In this paper,
we focus on controlling and exploring diversity in the context of open-domain
multi-agent conversations, particularly for world simulation applications. We
propose Adaptive Prompt Pruning (APP), a novel method that dynamically adjusts
the content of the utterance generation prompt to control diversity using a
single parameter, lambda. Through extensive experiments, we show that APP
effectively controls the output diversity across models and datasets, with
pruning more information leading to more diverse output. We comprehensively
analyze the relationship between prompt content and conversational diversity.
Our findings reveal that information from all components of the prompt
generally constrains the diversity of the output, with the Memory block
exerting the most significant influence. APP is compatible with established
techniques like temperature sampling and top-p sampling, providing a versatile
tool for diversity management. To address the trade-offs of increased
diversity, such as inconsistencies with omitted information, we incorporate a
post-generation correction step, which effectively balances diversity
enhancement with output consistency. Additionally, we examine how prompt
structure, including component order and length, impacts diversity. This study
addresses key questions surrounding diversity in multi-agent world simulation,
offering insights into its control, influencing factors, and associated
trade-offs. Our contributions lay the foundation for systematically engineering
diversity in LLM-based multi-agent collaborations, advancing their
effectiveness in real-world applications.
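A sketch of the single-parameter pruning idea as described: components whose
(assumed precomputed) relevance scores fall below lambda are dropped from the
utterance prompt. The component names, scores, and texts are illustrative:

    # Hypothetical prompt components with relevance scores in [0, 1].
    components = [
        ("persona",     0.9, "You are a cheerful innkeeper."),
        ("memory",      0.5, "Yesterday a traveler mentioned a storm."),
        ("world_state", 0.3, "It is market day in the village."),
        ("history",     0.2, "Previous turns: ..."),
    ]

    def build_prompt(lam: float) -> str:
        # Larger lambda prunes more context, which the paper links to more
        # diverse output; lam = 0 keeps everything.
        kept = [text for _, score, text in components if score >= lam]
        return "\n".join(kept)

    print(build_prompt(lam=0.4))   # keeps persona and memory only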
|
2412.21104 | On Parallel External-Memory Bidirectional Search | cs.AI | Parallelization and External Memory (PEM) techniques have significantly
enhanced the capabilities of search algorithms when solving large-scale
problems. Previous research on PEM has primarily centered on unidirectional
algorithms, with only one publication on bidirectional PEM that focuses on the
meet-in-the-middle (MM) algorithm. Building upon this foundation, this paper
presents a framework that integrates both uni- and bi-directional best-first
search algorithms. We then develop a PEM variant of the
state-of-the-art bidirectional heuristic search (BiHS) algorithm BAE*
(PEM-BAE*). As previous work on BiHS did not focus on scaling problem sizes,
this work enables us to evaluate bidirectional algorithms on hard problems.
Empirical evaluation shows that PEM-BAE* outperforms the PEM variants of A* and
the MM algorithm, as well as a parallel variant of IDA*. These findings mark a
significant milestone, revealing that bidirectional search algorithms clearly
outperform unidirectional search algorithms across several domains, even when
equipped with state-of-the-art heuristics.
|
2412.21117 | Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D
Scene Generation | cs.CV | In this work, we introduce Prometheus, a 3D-aware latent diffusion model for
text-to-3D generation at both object and scene levels in seconds. We formulate
3D scene generation as multi-view, feed-forward, pixel-aligned 3D Gaussian
generation within the latent diffusion paradigm. To ensure generalizability, we
build our model upon a pre-trained text-to-image generation model with only
minimal adjustments, and further train it using a large number of images from
both single-view and multi-view datasets. Furthermore, we introduce an RGB-D
latent space into 3D Gaussian generation to disentangle appearance and geometry
information, enabling efficient feed-forward generation of 3D Gaussians with
better fidelity and geometry. Extensive experimental results demonstrate the
effectiveness of our method in both feed-forward 3D Gaussian reconstruction and
text-to-3D generation. Project page:
https://freemty.github.io/project-prometheus/
|
2412.21118 | Efficient Approximate Degenerate Ordered Statistics Decoding for Quantum
Codes via Reliable Subset Reduction | quant-ph cs.IT math.IT | Efficient decoding of quantum codes is crucial for achieving high-performance
quantum error correction. In this paper, we introduce the concept of
approximate degenerate decoding and integrate it with ordered statistics
decoding (OSD). Previously, we proposed a reliability metric that leverages
both hard and soft decisions from the output of belief propagation (BP), which
is particularly useful for identifying highly reliable subsets of variables.
Using the approach of reliable subset reduction, we reduce the effective
problem size. Additionally, we identify a degeneracy condition that allows
high-order OSD to be simplified to order-0 OSD. By integrating these
techniques, we present an ADOSD algorithm that significantly improves OSD
efficiency in the code capacity noise model. We demonstrate the effectiveness
of our BP+ADOSD approach through extensive simulations on a variety of quantum
codes, including generalized hypergraph-product codes, topological codes,
lift-connected surface codes, and bivariate bicycle codes. The results indicate
that the BP+ADOSD decoder outperforms existing methods, achieving higher error
thresholds and enhanced performance at low error rates. Additionally, we
validate the efficiency of our approach in terms of computational time,
demonstrating that ADOSD requires, on average, the same amount of time as two
to three BP iterations on surface codes at a depolarizing error rate of around
$1\%$. All the proposed algorithms are compared using single-threaded CPU
implementations.
|
2412.21124 | Adaptive Batch Size Schedules for Distributed Training of Language
Models with Data and Model Parallelism | cs.LG math.OC stat.ML | An appropriate choice of batch sizes in large-scale model training is
crucial, yet it involves an intrinsic yet inevitable dilemma: large-batch
training improves training efficiency in terms of memory utilization, while
generalization performance often deteriorates due to small amounts of gradient
noise. Despite this dilemma, the common practice of choosing batch sizes in
language model training often prioritizes training efficiency -- employing
either constant large sizes with data parallelism or implementing batch size
warmup schedules. However, such batch size schedule designs remain heuristic
and often fail to adapt to training dynamics, presenting the challenge of
designing adaptive batch size schedules. Given the abundance of available
datasets and the data-hungry nature of language models, data parallelism has
become an indispensable distributed training paradigm, enabling the use of
larger batch sizes for gradient computation. However, vanilla data parallelism
requires replicas of model parameters, gradients, and optimizer states at each
worker, which prohibits training larger models with billions of parameters. To
optimize memory usage, more advanced parallelism strategies must be employed.
In this work, we propose general-purpose and theoretically principled adaptive
batch size schedules compatible with data parallelism and model parallelism. We
develop a practical implementation with PyTorch Fully Sharded Data Parallel,
facilitating the pretraining of language models of different sizes. We
empirically demonstrate that our proposed approaches outperform constant batch
sizes and heuristic batch size warmup schedules in the pretraining of models in
the Llama family, with particular focus on smaller models with up to 3 billion
parameters. We also establish theoretical convergence guarantees for such
adaptive batch size schedules with Adam for general smooth nonconvex
objectives.
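One common adaptive heuristic consistent with this framing -- though not
necessarily the paper's exact schedule -- grows the batch size when the
gradient noise scale dominates the gradient signal; a sketch:

    import torch

    def gradient_noise_scale(per_sample_grads: torch.Tensor) -> float:
        """Crude noise/signal estimate from a stack of per-sample gradients."""
        mean_grad = per_sample_grads.mean(dim=0)
        noise = (per_sample_grads - mean_grad).pow(2).sum(dim=1).mean()
        signal = mean_grad.pow(2).sum()
        return (noise / (signal + 1e-12)).item()

    batch_size, max_batch = 64, 4096
    grads = torch.randn(64, 1000)     # stand-in per-sample gradients
    # If the noise scale exceeds the current batch size, gradient noise
    # dominates and a larger batch is worthwhile.
    if gradient_noise_scale(grads) > batch_size:
        batch_size = min(2 * batch_size, max_batch)
    print("next batch size:", batch_size)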
|
2412.21127 | What Makes for a Good Stereoscopic Image? | cs.CV | With rapid advancements in virtual reality (VR) headsets, effectively
measuring stereoscopic quality of experience (SQoE) has become essential for
delivering immersive and comfortable 3D experiences. However, most existing
stereo metrics focus on isolated aspects of the viewing experience such as
visual discomfort or image quality, and have traditionally faced data
limitations. To address these gaps, we present SCOPE (Stereoscopic COntent
Preference Evaluation), a new dataset comprising real and synthetic
stereoscopic images featuring a wide range of common perceptual distortions and
artifacts. The dataset is labeled with preference annotations collected on a VR
headset, with our findings indicating a notable degree of consistency in user
preferences across different headsets. Additionally, we present iSQoE, a new
model for stereo quality of experience assessment trained on our dataset. We
show that iSQoE aligns better with human preferences than existing methods when
comparing mono-to-stereo conversion methods.
|
2412.21132 | DeepF-fNet: a physics-informed neural network for vibration isolation
optimization | physics.comp-ph cs.LG eess.SP | Structural optimization is essential for designing safe, efficient, and
durable components with minimal material usage. Traditional methods for
vibration control often rely on active systems to mitigate unpredictable
vibrations, which may lead to resonance and potential structural failure.
However, these methods face significant challenges when addressing the
nonlinear inverse eigenvalue problems required for optimizing structures
subjected to a wide range of frequencies. As a result, no existing approach has
effectively addressed the need for real-time vibration suppression within this
context, particularly in high-performance environments such as automotive
noise, vibration and harshness, where computational efficiency is crucial.
This study introduces DeepF-fNet, a novel neural network framework designed
to replace traditional active systems in vibration-based structural
optimization. Leveraging DeepONets within the context of physics-informed
neural networks, DeepF-fNet integrates both data and the governing physical
laws. This enables rapid identification of optimal parameters to suppress
critical vibrations at specific frequencies, offering a more efficient and
real-time alternative to conventional methods.
The proposed framework is validated through a case study involving a locally
resonant metamaterial used to isolate structures from user-defined frequency
ranges. The results demonstrate that DeepF-fNet outperforms traditional genetic
algorithms in terms of computational speed while achieving comparable results,
making it a promising tool for vibration-sensitive applications. By replacing
active systems with machine learning techniques, DeepF-fNet paves the way for
more efficient and cost-effective structural optimization in real-world
scenarios.
|
2412.21139 | Training Software Engineering Agents and Verifiers with SWE-Gym | cs.SE cs.CL | We present SWE-Gym, the first environment for training real-world software
engineering (SWE) agents. SWE-Gym contains 2,438 real-world Python task
instances, each comprising a codebase with an executable runtime environment,
unit tests, and a task specified in natural language. We use SWE-Gym to train
language-model-based SWE agents, achieving up to 19% absolute gains in resolve
rate on the popular SWE-Bench Verified and Lite test sets. We also experiment
with inference-time scaling through verifiers trained on agent trajectories
sampled from SWE-Gym. When combined with our fine-tuned SWE agents, we achieve
32.0% and 26.0% on SWE-Bench Verified and Lite, respectively, reflecting a new
state-of-the-art for open-weight SWE agents. To facilitate further research, we
publicly release SWE-Gym, models, and agent trajectories.
|
2412.21140 | Facilitating large language model Russian adaptation with Learned
Embedding Propagation | cs.CL cs.AI | Rapid advancements of large language model (LLM) technologies led to the
introduction of powerful open-source instruction-tuned LLMs that have the same
text generation quality as the state-of-the-art counterparts such as GPT-4.
While the emergence of such models accelerates the adoption of LLM technologies
in sensitive-information environments, the authors of such models do not
disclose the training data necessary for replicating the results, making
the achievements model-exclusive. Since those open-source models are also
multilingual, this in turn reduces the benefits of training language-specific
LLMs, as improved inference computation efficiency becomes the only guaranteed
advantage of such a costly procedure. More cost-efficient options such as
vocabulary extension and subsequent continued pre-training are also inhibited
by the lack of access to high-quality instruction-tuning data, since it is the
major factor behind the resulting LLM task-solving capabilities. To address
these limitations and cut the costs of the language adaptation pipeline, we
propose Learned Embedding Propagation (LEP). Unlike existing approaches, our
method has lower training data requirements due to its minimal impact on
existing LLM knowledge, which we reinforce using a novel ad-hoc embedding
propagation procedure that allows the instruction-tuning step to be skipped
and the new language knowledge to be implanted directly into any existing
instruct-tuned variant. We
evaluated four Russian vocabulary adaptations for LLaMa-3-8B and Mistral-7B,
showing that LEP is competitive with traditional instruction-tuning methods,
achieving performance comparable to OpenChat 3.5 and LLaMa-3-8B-Instruct, with
further improvements via self-calibration and continued tuning enhancing
task-solving capabilities.
|
2412.21149 | Functional Risk Minimization | cs.LG | The field of Machine Learning has changed significantly since the 1970s.
However, its most basic principle, Empirical Risk Minimization (ERM), remains
unchanged. We propose Functional Risk Minimization~(FRM), a general framework
where losses compare functions rather than outputs. This results in better
performance in supervised, unsupervised, and RL experiments. In the FRM
paradigm, for each data point $(x_i,y_i)$ there is a function $f_{\theta_i}$ that
fits it: $y_i = f_{\theta_i}(x_i)$. This allows FRM to subsume ERM for many
common loss functions and to capture more realistic noise processes. We also
show that FRM provides an avenue towards understanding generalization in the
modern over-parameterized regime, as its objective can be framed as finding the
simplest model that fits the training data.
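A toy instance of the per-point construction: with the model family
$f_\theta(x) = \theta x$, each data point is fit exactly by
$\theta_i = y_i / x_i$, and the loss can compare parameters of functions
instead of outputs (the function-space metric below is an assumption):

    import numpy as np

    # Model family: f_theta(x) = theta * x, so each point (x_i, y_i) with
    # x_i != 0 is fit exactly by theta_i = y_i / x_i.
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([1.1, 1.9, 3.3])
    theta_i = y / x                   # per-point functions f_{theta_i}

    theta = 1.0                       # candidate shared model
    erm_loss = np.mean((theta * x - y) ** 2)      # compares outputs
    frm_loss = np.mean((theta - theta_i) ** 2)    # compares functions (toy metric)
    print(f"ERM: {erm_loss:.4f}  FRM-style: {frm_loss:.4f}")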
|
2412.21151 | PyG-SSL: A Graph Self-Supervised Learning Toolkit | cs.LG cs.AI | Graph Self-Supervised Learning (SSL) has emerged as a pivotal area of
research in recent years. By engaging in pretext tasks to learn the intricate
topological structures and properties of graphs using unlabeled data, these
graph SSL models achieve enhanced performance, improved generalization, and
heightened robustness. Despite the remarkable achievements of these graph SSL
methods, their current implementation poses significant challenges for
beginners and practitioners due to the complex nature of graph structures,
inconsistent evaluation metrics, and concerns regarding reproducibility, which
hinder further progress in this field. Recognizing the growing interest within the
research community, there is an urgent need for a comprehensive,
beginner-friendly, and accessible toolkit consisting of the most representative
graph SSL algorithms. To address these challenges, we present a Graph SSL
toolkit named PyG-SSL, which is built upon PyTorch and is compatible with
various deep learning and scientific computing backends. Within the toolkit, we
offer a unified framework encompassing dataset loading, hyper-parameter
configuration, model training, and comprehensive performance evaluation for
diverse downstream tasks. Moreover, we provide beginner-friendly tutorials and
the best hyper-parameters of each graph SSL algorithm on different graph
datasets, facilitating the reproduction of results. The GitHub repository of
the library is https://github.com/iDEA-iSAIL-Lab-UIUC/pyg-ssl.
|
2412.21154 | Aviary: training language agents on challenging scientific tasks | cs.AI cs.CL cs.LG | Solving complex real-world tasks requires cycles of actions and observations.
This is particularly true in science, where tasks require many cycles of
analysis, tool use, and experimentation. Language agents are promising for
automating intellectual tasks in science because they can interact with tools
via natural language or code. Yet their flexibility creates conceptual and
practical challenges for software implementations, since agents may comprise
non-standard components such as internal reasoning, planning, tool usage, as
well as the inherent stochasticity of temperature-sampled language models.
Here, we introduce Aviary, an extensible gymnasium for language agents. We
formalize agents as policies solving language-grounded partially observable
Markov decision processes, which we term language decision processes. We then
implement five environments, including three challenging scientific
environments: (1) manipulating DNA constructs for molecular cloning, (2)
answering research questions by accessing scientific literature, and (3)
engineering protein stability. These environments were selected for their focus
on multi-step reasoning and their relevance to contemporary biology research.
Finally, with online training and scaling inference-time compute, we show that
language agents backed by open-source, non-frontier LLMs can match and exceed
both frontier LLM agents and human experts on multiple tasks at up to 100x
lower inference cost.
|
2412.21156 | Unified dimensionality reduction techniques in chronic liver disease
detection | cs.LG | Globally, chronic liver disease continues to be a major health concern that
requires precise predictive models for prompt detection and treatment. Using
the Indian Liver Patient Dataset (ILPD) from the University of California at
Irvine's UCI Machine Learning Repository, a number of machine learning
algorithms are investigated in this study. The main focus of our research is
this dataset, which includes the medical records of 583 patients, 416 of whom
have been diagnosed with liver disease and 167 of whom have not. There are
several aspects to this work, including feature extraction and dimensionality
reduction methods like Linear Discriminant Analysis (LDA), Factor Analysis
(FA), t-distributed Stochastic Neighbour Embedding (t-SNE), and Uniform
Manifold Approximation and Projection (UMAP). The purpose of the study is to
investigate how well these approaches work for converting high-dimensional
datasets and improving prediction accuracy. To assess the prediction ability of
the improved models, a number of classification methods were used, such as
Multi-layer Perceptron, Random Forest, K-nearest neighbours, and Logistic
Regression. Remarkably, the improved models performed admirably, with Random
Forest having the highest accuracy of 98.31\% in 10-fold cross-validation and
95.79\% in train-test split evaluation. Findings offer important new
perspectives on the choice and use of customized feature extraction and
dimensionality reduction methods, which improve predictive models for patients
with chronic liver disease.
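The described protocol -- dimensionality reduction feeding a classifier under
10-fold cross-validation -- maps directly onto a scikit-learn pipeline; the
ILPD table is stubbed with random data in this sketch:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Stand-in for the 583-patient, 10-feature ILPD table.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(583, 10))
    y = rng.integers(0, 2, size=583)

    pipe = make_pipeline(
        StandardScaler(),
        LinearDiscriminantAnalysis(n_components=1),  # binary task: 1 component
        RandomForestClassifier(n_estimators=200, random_state=0),
    )
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"10-fold accuracy: {scores.mean():.3f}")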
|
2412.21161 | Open RAN-Enabled Deep Learning-Assisted Mobility Management for
Connected Vehicles | cs.NI cs.AI | Connected Vehicles (CVs) can leverage the unique features of 5G and future
6G/NextG networks to enhance Intelligent Transportation System (ITS) services.
However, even with advancements in cellular network generations, CV
applications may experience communication interruptions in high-mobility
scenarios due to frequent changes of serving base station, also known as
handovers (HOs). This paper proposes the adoption of Open Radio Access Network
(Open RAN/O-RAN) and deep learning models for decision-making to prevent
Quality of Service (QoS) degradation due to HOs and to ensure the timely
connectivity needed for CV services. The solution utilizes the O-RAN Software
Community (OSC), an open-source O-RAN platform developed by the collaboration
between the O-RAN Alliance and Linux Foundation, to develop xApps that are
executed in the near-Real-Time RIC of OSC. To demonstrate the proposal's
effectiveness, an integrated framework combining the OMNeT++ simulator and OSC
was created. Evaluations used real-world datasets in urban application
scenarios, such as video streaming transmission and over-the-air (OTA) updates.
Results indicate that the proposal achieved superior performance and reduced
latency compared to the standard 3GPP HO procedure.
|
2412.21164 | Adversarial Attack and Defense for LoRa Device Identification and
Authentication via Deep Learning | cs.NI cs.AI cs.CR cs.LG eess.SP | LoRa provides long-range, energy-efficient communications in Internet of
Things (IoT) applications that rely on Low-Power Wide-Area Network (LPWAN)
capabilities. Despite these merits, concerns persist regarding the security of
LoRa networks, especially in situations where device identification and
authentication are imperative to secure the reliable access to the LoRa
networks. This paper explores a deep learning (DL) approach to tackle these
concerns, focusing on two critical tasks, namely (i) identifying LoRa devices
and (ii) classifying them as legitimate or rogue devices. Deep neural networks
(DNNs), encompassing both convolutional and feedforward neural networks, are
trained for these tasks using actual LoRa signal data. In this setting, the
adversaries may spoof rogue LoRa signals through the kernel density estimation
(KDE) method based on legitimate device signals that are received by the
adversaries. Two cases are considered, (i) training two separate classifiers,
one for each of the two tasks, and (ii) training a multi-task classifier for
both tasks. The vulnerabilities of the resulting DNNs to manipulations in input
samples are studied in the form of untargeted and targeted adversarial attacks
using the Fast Gradient Sign Method (FGSM). Individual and common perturbations
are considered against single-task and multi-task classifiers for the LoRa
signal analysis. To provide resilience against such attacks, a defense approach
is presented by increasing the robustness of classifiers with adversarial
training. Results quantify how vulnerable LoRa signal classification tasks are
to adversarial attacks and emphasize the need to fortify IoT applications
against these subtle yet effective threats.
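The untargeted FGSM perturbation used in such studies is short to express; a
PyTorch sketch with a stub classifier (real inputs would be LoRa I/Q samples):

    import torch
    import torch.nn as nn

    def fgsm(model, x, y, eps):
        """Untargeted FGSM: step in the sign of the input gradient of the loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()

    model = nn.Sequential(nn.Flatten(), nn.Linear(2 * 256, 2))  # stub: 2x256 I/Q -> 2 classes
    x = torch.randn(8, 2, 256)
    y = torch.randint(0, 2, (8,))
    x_adv = fgsm(model, x, y, eps=0.05)
    print((x_adv - x).abs().max())   # perturbation bounded by eps
    # Adversarial training then additionally fits the model on (x_adv, y) pairs.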
|
2412.21171 | Quantum Error Correction near the Coding Theoretical Bound | quant-ph cs.IT math.IT | Recent advancements in quantum computing have led to the realization of
systems comprising tens of reliable logical qubits, constructed from thousands
of noisy physical qubits. However, many of the critical applications that
quantum computers aim to solve require quantum computations involving millions
or more logical qubits. This necessitates highly efficient quantum error
correction capable of handling large numbers of logical qubits. Classical error
correction theory is well-developed, with low-density parity-check (LDPC) codes
achieving performance limits by encoding large classical bits. Despite more
than two decades of effort, no efficiently decodable quantum error-correcting
code that approaches the hashing bound, which is a fundamental lower bound on
quantum capacity, had been discovered. Here, we present quantum
error-correcting codes constructed from classical LDPC codes that approach the
hashing bound while maintaining linear computational complexity in the number
of physical qubits. This result establishes a pathway toward realizing
large-scale, fault-tolerant quantum computers. By integrating our quantum error
correction scheme with devices capable of managing vast numbers of qubits, the
prospect of solving critical real-world problems through quantum computation is
brought significantly closer.
|
2412.21178 | Two-component spatiotemporal template for activation-inhibition of
speech in ECoG | q-bio.NC cs.CL cs.LG eess.AS eess.SP | I compute the average trial-by-trial power of band-limited speech activity
across epochs of multi-channel high-density electrocorticography (ECoG)
recorded from multiple subjects during a consonant-vowel speaking task. I show
that previously seen anti-correlations of average beta frequency activity
(12-35 Hz) to high-frequency gamma activity (70-140 Hz) during speech movement
are observable between individual ECoG channels in the sensorimotor cortex
(SMC). With this I fit a variance-based model using principal component
analysis to the band-powers of individual channels of session-averaged ECoG
data in the SMC and project SMC channels onto their lower-dimensional principal
components.
Spatiotemporal relationships between speech-related activity and principal
components are identified by correlating the principal components of both
frequency bands to individual ECoG channels over time using windowed
correlation. Correlations of principal component areas to sensorimotor areas
reveal a distinct two-component activation-inhibition-like representation for
speech that resembles distinct local sensorimotor areas recently shown to have
complex interplay in whole-body motor control, inhibition, and posture. Notably
the third principal component shows insignificant correlations across all
subjects, suggesting two components of ECoG are sufficient to represent SMC
activity during speech movement.
|
2412.21180 | STITCHER: Real-Time Trajectory Planning with Motion Primitive Search | cs.RO | Autonomous high-speed navigation through large, complex environments requires
real-time generation of agile trajectories that are dynamically feasible,
collision-free, and satisfy state or actuator constraints. Most modern
trajectory planning techniques rely on numerical optimization because
high-quality, expressive trajectories that satisfy various constraints can be
systematically computed. However, meeting computation time constraints and the
potential for numerical instabilities can limit the use of optimization-based
planners in safety-critical scenarios. This work presents an optimization-free
planning framework that stitches short trajectory segments together with graph
search to compute long range, expressive, and near-optimal trajectories in
real-time. Our STITCHER algorithm is shown to outperform modern
optimization-based planners through our innovative planning architecture and
several algorithmic developments that make real-time planning possible.
Extensive simulation testing is conducted to analyze the algorithmic components
that make up STITCHER, and a thorough comparison with two state-of-the-art
optimization planners is performed. It is shown that STITCHER can generate
trajectories through complex environments over long distances (tens of meters)
with low computation times (milliseconds).
|
2412.21181 | Causal Hangover Effects | econ.EM cs.IT math.IT stat.AP | It's not unreasonable to think that in-game sporting performance can be
affected partly by what takes place off the court. We can't observe what
happens between games directly. Instead, we proxy for the possibility of
athletes partying by looking at play following games in party cities. We are
interested to see if teams exhibit a decline in performance the day following a
game in a city with active nightlife; we call this a "hangover effect". Part of
the question is determining a reasonable way to measure levels of nightlife,
and correspondingly which cities are notorious for it; we colloquially refer to
such cities as "party cities". To carry out this study, we exploit data on
bookmaker spreads: the expected score differential between two teams after
conditioning on observable performance in past games and expectations about the
upcoming game. We expect a team to meet the spread half the time, since this is
one of the easiest ways for bookmakers to guarantee a profit. We construct a
model which attempts to estimate the causal effect of visiting a "party city"
on subsequent day performance as measured by the odds of beating the spread. In
particular, we only consider the hangover effect on games played back-to-back
within 24 hours of each other. To the extent that odds of beating the spread
against next day opponent is uncorrelated with playing in a party city the day
before, which should be the case under an efficient betting market, we have
identification in our variable of interest. We find that visiting a city with
active nightlife the day prior to a game does have a statistically significant
negative effect on a team's likelihood of meeting bookmakers' expectations for
both NBA and MLB.
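The identification strategy reduces to regressing a beat-the-spread indicator
on a party-city-previous-day dummy for back-to-back games; a sketch on
synthetic data (the injected effect size below is invented for illustration):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000                                # synthetic back-to-back games
    party_prev = rng.integers(0, 2, n)      # 1 = played in a "party city" yesterday
    # Under an efficient betting market the baseline beat-spread rate is 0.5;
    # inject a small negative hangover effect for illustration.
    p_beat = 0.5 - 0.03 * party_prev
    beat_spread = (rng.random(n) < p_beat).astype(float)

    X = sm.add_constant(party_prev.astype(float))
    fit = sm.Logit(beat_spread, X).fit(disp=0)
    print(fit.summary2().tables[1])         # expected: negative coefficient on party_prev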
|
2412.21187 | Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs | cs.CL | The remarkable performance of models like the OpenAI o1 can be attributed to
their ability to emulate human-like long-time thinking during inference. These
models employ extended chain-of-thought (CoT) processes, exploring multiple
strategies to enhance problem-solving capabilities. However, a critical
question remains: how can computational resources be scaled intelligently and
efficiently during testing? This paper presents the first comprehensive study on
the prevalent issue of overthinking in these models, where excessive
computational resources are allocated for simple problems with minimal benefit.
We introduce novel efficiency metrics from both outcome and process
perspectives to evaluate the rational use of computational resources by o1-like
models. Using a self-training paradigm, we propose strategies to mitigate
overthinking, streamlining reasoning processes without compromising accuracy.
Experimental results show that our approach successfully reduces computational
overhead while preserving model performance across a range of test sets with
varying difficulty levels, such as GSM8K, MATH500, GPQA, and AIME.
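The abstract does not define its efficiency metrics; one plausible
outcome-side proxy, included here purely as an assumption, is the fraction of
generated tokens needed before the first correct solution appears:

    def outcome_efficiency(solution_spans, total_tokens):
        """Hypothetical metric: tokens up to the first correct solution / all tokens.

        solution_spans: list of (token_index_of_solution_end, is_correct)
        pairs in generation order. A stand-in, not the paper's definition.
        """
        for end_idx, correct in solution_spans:
            if correct:
                return end_idx / total_tokens
        return 1.0   # never correct: all tokens were "spent"

    # 2+3=? answered correctly after 40 tokens, then 460 tokens of re-checking.
    print(outcome_efficiency([(40, True), (230, True), (500, True)], 500))  # 0.08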
|
2412.21188 | Sparse chaos in cortical circuits | q-bio.NC cond-mat.dis-nn cs.LG nlin.CD | Nerve impulses, the currency of information flow in the brain, are generated
by an instability of the neuronal membrane potential dynamics. Neuronal
circuits exhibit collective chaos that appears essential for learning, memory,
sensory processing, and motor control. However, the factors controlling the
nature and intensity of collective chaos in neuronal circuits are not well
understood. Here we use computational ergodic theory to demonstrate that basic
features of nerve impulse generation profoundly affect collective chaos in
neuronal circuits. Numerically exact calculations of Lyapunov spectra,
Kolmogorov-Sinai-entropy, and upper and lower bounds on attractor dimension
show that changes in nerve impulse generation in individual neurons moderately
impact information encoding rates but qualitatively transform phase space
structure. Specifically, we find a drastic reduction in the number of unstable
manifolds, Kolmogorov-Sinai entropy, and attractor dimension. Beyond a critical
point, marked by the simultaneous breakdown of the diffusion approximation, a
peak in the largest Lyapunov exponent, and a localization transition of the
leading covariant Lyapunov vector, networks exhibit sparse chaos: prolonged
periods of near stable dynamics interrupted by short bursts of intense chaos.
Analysis of large, more realistically structured networks supports the
generality of these findings. In cortical circuits, biophysical properties
appear tuned to this regime of sparse chaos. Our results reveal a close link
between fundamental aspects of single-neuron biophysics and the collective
dynamics of cortical circuits, suggesting that nerve impulse generation
mechanisms are adapted to enhance circuit controllability and information flow.
|
2412.21197 | A Large-Scale Study on Video Action Dataset Condensation | cs.CV | Dataset condensation has made significant progress in the image domain.
Unlike images, videos possess an additional temporal dimension, which harbors
considerable redundant information, making condensation even more crucial.
However, video dataset condensation still remains an underexplored area. We aim
to bridge this gap by providing a large-scale empirical study with systematic
design and fair comparison. Specifically, our work delves into three key
aspects to provide valuable empirical insights: (1) temporal processing of
video data, (2) establishing a comprehensive evaluation protocol for video
dataset condensation, and (3) adaptation of condensation methods to the
space-time domain and fair comparisons among them. From this study, we derive
several intriguing observations: (i) sample diversity appears to be more
crucial than temporal diversity for video dataset condensation, (ii) simple
sliding-window sampling proves to be effective, and (iii) sample selection
currently outperforms dataset distillation in most cases. Furthermore, we
conduct experiments on three prominent action recognition datasets (HMDB51,
UCF101 and Kinetics-400) and achieve state-of-the-art results on all of them.
Our code is available at https://github.com/MCG-NJU/Video-DC.
|
2412.21199 | HumanEval Pro and MBPP Pro: Evaluating Large Language Models on
Self-invoking Code Generation | cs.SE cs.CL | We introduce self-invoking code generation, a new task designed to evaluate
the progressive reasoning and problem-solving capabilities of LLMs. In this
task, models are presented with a base problem and a related, more complex
problem. They must solve the base problem and then utilize its solution to
address the more complex one. This work features three key contributions.
First, we propose a general recipe for generating more challenging versions of
existing benchmarks, resulting in three new benchmarks: HumanEval Pro, MBPP
Pro, and BigCodeBench-Lite Pro, specifically designed to assess LLMs on
self-invoking code generation. Second, from the analysis of experimental
results over twenty LLMs on our benchmarks, we have two important observations:
(i) Most LLMs excel in traditional code generation benchmarks like HumanEval
and MBPP, but their performance declines on self-invoking tasks. For example,
o1-mini achieves 96.2% pass@1 on HumanEval but only 76.2% on HumanEval Pro.
(ii) On the self-invoking code generation task, the instruction-tuned models
demonstrate only marginal improvements compared to the base models. Third, we
disclose the types of failure modes that exist in our evaluation results. All
these results underscore the need for further advancements in self-invoking
code generation tasks and provide a new direction for future research on
enhancing LLMs' code reasoning capabilities.
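A self-invoking pair has the following shape (an invented example in the
spirit of the benchmark, not an actual HumanEval Pro item); the harder problem
must call a correct solution to the base problem:

    # Base problem: return the sum of the digits of a non-negative integer.
    def digit_sum(n: int) -> int:
        return sum(int(c) for c in str(n))

    # Self-invoking problem: repeatedly apply digit_sum until one digit remains.
    def digital_root(n: int) -> int:
        while n >= 10:
            n = digit_sum(n)       # the base solution is invoked here
        return n

    assert digital_root(9875) == 2   # 9+8+7+5=29 -> 2+9=11 -> 1+1=2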
|
2412.21200 | Distributed Mixture-of-Agents for Edge Inference with Large Language
Models | cs.IT cs.CL cs.DC cs.LG cs.NI math.IT | Mixture-of-Agents (MoA) has recently been proposed as a method to enhance
performance of large language models (LLMs), enabling multiple individual LLMs
to work together for collaborative inference. This collaborative approach
results in improved responses to user prompts compared to relying on a single
LLM. In this paper, we consider such an MoA architecture in a distributed
setting, where LLMs operate on individual edge devices, each uniquely
associated with a user and equipped with its own distributed computing power.
These devices exchange information using decentralized gossip algorithms,
allowing different device nodes to talk without the supervision of a
centralized server. In the considered setup, different users have their own LLM
models to address user prompts. Additionally, the devices gossip either their
own user-specific prompts or augmented prompts to generate more refined answers
to certain queries. User prompts are temporarily stored in the device queues
when their corresponding LLMs are busy. Given the memory limitations of edge
devices, it is crucial to ensure that the average queue sizes in the system
remain bounded. In this paper, we address this by theoretically calculating the
queuing stability conditions for the device queues under reasonable
assumptions, which we validate experimentally as well. Further, we demonstrate
through experiments, leveraging open-source LLMs for the implementation of
distributed MoA, that certain MoA configurations produce higher-quality
responses compared to others, as evaluated on AlpacaEval 2.0 benchmark. The
implementation is available at:
https://github.com/purbeshmitra/distributed_moa.
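A toy discrete-time simulation (not the paper's protocol or its exact stability conditions) illustrates the queuing claim: per-device queues remain bounded only when the prompt arrival rate stays below the service rate.

```python
import random

# Toy single-device queue: a prompt arrives with probability lam per
# slot, the LLM finishes one queued prompt with probability mu per
# slot. The time-averaged queue length stays small only when lam < mu.
def simulate(lam, mu, slots=100_000, seed=0):
    rng = random.Random(seed)
    q, total = 0, 0
    for _ in range(slots):
        q += rng.random() < lam        # new prompt arrives
        if q and rng.random() < mu:    # LLM finishes a prompt
            q -= 1
        total += q
    return total / slots               # time-averaged queue length

print(simulate(lam=0.3, mu=0.5))  # stable: small average queue
print(simulate(lam=0.6, mu=0.5))  # unstable: average grows with horizon
```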
|
2412.21203 | SoS Certificates for Sparse Singular Values and Their Applications:
Robust Statistics, Subspace Distortion, and More | cs.DS cs.LG | We study $\textit{sparse singular value certificates}$ for random rectangular
matrices. If $M$ is an $n \times d$ matrix with independent Gaussian entries,
we give a new family of polynomial-time algorithms which can certify upper
bounds on the maximum of $\|M u\|$, where $u$ is a unit vector with at most
$\eta n$ nonzero entries for a given $\eta \in (0,1)$. This basic algorithmic
primitive lies at the heart of a wide range of problems across algorithmic
statistics and theoretical computer science.
Our algorithms certify a bound which is asymptotically smaller than the naive
one, given by the maximum singular value of $M$, for nearly the widest-possible
range of $n,d,$ and $\eta$. Efficiently certifying such a bound for a range of
$n,d$ and $\eta$ which is larger by any polynomial factor than what is achieved
by our algorithm would violate lower bounds in the SQ and low-degree
polynomials models. Our certification algorithm makes essential use of the
Sum-of-Squares hierarchy. To prove the correctness of our algorithm, we develop
a new combinatorial connection between the graph matrix approach to analyze
random matrices with dependent entries, and the Efron-Stein decomposition of
functions of independent random variables.
As applications of our certification algorithm, we obtain new efficient
algorithms for a wide range of well-studied algorithmic tasks. In algorithmic
robust statistics, we obtain new algorithms for robust mean and covariance
estimation with tradeoffs between breakdown point and sample complexity, which
are nearly matched by SQ and low-degree polynomial lower bounds (that we
establish). We also obtain new polynomial-time guarantees for certification of
$\ell_1/\ell_2$ distortion of random subspaces of $\mathbb{R}^n$ (also with
nearly matching lower bounds), sparse principal component analysis, and
certification of the $2\rightarrow p$ norm of a random matrix.
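To make the certified quantity concrete, the brute-force sketch below evaluates the sparse operator norm exactly for tiny instances by enumerating supports (here taking the sparse vector over the column coordinates); the paper's contribution is certifying an upper bound in polynomial time via Sum-of-Squares, which enumeration cannot do at scale.

```python
import itertools
import numpy as np

# Exact (tiny sizes only) evaluation of max ||M u|| over unit vectors u
# with at most k nonzero coordinates: restricting u to a support S
# reduces the problem to the top singular value of M[:, S].
def sparse_sigma_max(M, k):
    d = M.shape[1]
    return max(
        np.linalg.norm(M[:, list(S)], 2)   # ord=2: largest singular value
        for S in itertools.combinations(range(d), k)
    )

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 12))
print("naive bound (full sigma_max):", np.linalg.norm(M, 2))
print("sparse (k=3) value:", sparse_sigma_max(M, 3))
```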
|
2412.21205 | Action-Agnostic Point-Level Supervision for Temporal Action Detection | cs.CV cs.AI cs.LG | We propose action-agnostic point-level (AAPL) supervision for temporal action
detection to achieve accurate action instance detection with a lightly
annotated dataset. In the proposed scheme, a small portion of video frames is
sampled in an unsupervised manner and presented to human annotators, who then
label the frames with action categories. Unlike point-level supervision, which
requires annotators to search for every action instance in an untrimmed video,
frames to annotate are selected without human intervention in AAPL supervision.
We also propose a detection model and learning method to effectively utilize
the AAPL labels. Extensive experiments on the variety of datasets (THUMOS '14,
FineAction, GTEA, BEOID, and ActivityNet 1.3) demonstrate that the proposed
approach is competitive with or outperforms prior methods for video-level and
point-level supervision in terms of the trade-off between the annotation cost
and detection performance.
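One plausible unsupervised frame sampler in the spirit of AAPL supervision (the paper's actual selection rule may differ) is to cluster frame embeddings and send the frame nearest each cluster center to annotators, so no human has to search the video for action instances.

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed illustrative sampler: cluster frame features, annotate the
# frame closest to each cluster center.
def select_frames(frame_features: np.ndarray, budget: int) -> list:
    km = KMeans(n_clusters=budget, n_init=10, random_state=0)
    km.fit(frame_features)
    picks = []
    for c in km.cluster_centers_:
        picks.append(int(np.argmin(np.linalg.norm(frame_features - c, axis=1))))
    return sorted(set(picks))

feats = np.random.default_rng(1).standard_normal((300, 128))  # 300 frames
print(select_frames(feats, budget=10))  # frame indices to annotate
```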
|
2412.21206 | PERSE: Personalized 3D Generative Avatars from A Single Portrait | cs.CV | We present PERSE, a method for building an animatable personalized generative
avatar from a reference portrait. Our avatar model enables facial attribute
editing in a continuous and disentangled latent space to control each facial
attribute, while preserving the individual's identity. To achieve this, our
method begins by synthesizing large-scale synthetic 2D video datasets, where
each video contains consistent changes in the facial expression and viewpoint,
combined with a variation in a specific facial attribute from the original
input. We propose a novel pipeline to produce high-quality, photorealistic 2D
videos with facial attribute editing. Leveraging this synthetic attribute
dataset, we present a personalized avatar creation method based on the 3D
Gaussian Splatting, learning a continuous and disentangled latent space for
intuitive facial attribute manipulation. To enforce smooth transitions in this
latent space, we introduce a latent space regularization technique by using
interpolated 2D faces as supervision. Compared to previous approaches, we
demonstrate that PERSE generates high-quality avatars with interpolated
attributes while preserving the identity of the reference person.
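A minimal sketch of the latent regularization idea, with all details assumed rather than taken from the paper: renders at interpolated latents are supervised by correspondingly interpolated 2D faces, so the loss vanishes exactly when the renderer behaves linearly along the interpolation path (as the dummy linear renderer below demonstrates).

```python
import numpy as np

def interp_reg_loss(render, z1, z2, img1, img2, alphas=(0.25, 0.5, 0.75)):
    # Assumed formulation: penalize deviation of renders at interpolated
    # latents from interpolated 2D face images.
    loss = 0.0
    for a in alphas:
        z = (1 - a) * z1 + a * z2
        target = (1 - a) * img1 + a * img2   # interpolated 2D face
        loss += np.mean((render(z) - target) ** 2)
    return loss / len(alphas)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))             # dummy linear "renderer"
render = lambda z: W @ z
z1, z2 = rng.standard_normal(8), rng.standard_normal(8)
img1, img2 = render(z1), render(z2)
print(interp_reg_loss(render, z1, z2, img1, img2))  # exactly 0 here
```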
|
2501.00001 | Mathematical modelling of flow and adsorption in a gas chromatograph | cs.CE physics.chem-ph | In this paper, a mathematical model is developed to describe the evolution of
the concentration of compounds through a gas chromatography column. The model
couples mass balances and kinetic equations for all components. Both single and
multiple-component cases are considered with constant or variable velocity.
Non-dimensionalisation indicates the small effect of diffusion. The system
where diffusion is neglected is analysed using Laplace transforms. In the
multiple-component case, it is demonstrated that the competition between the
compounds is negligible and the equations may be decoupled. This reduces the
problem to solving a single integral equation to determine the concentration
profile for all components (since they are scaled versions of each other). For
a given analyte, only two parameters then need to be fitted to the data. To
verify this approach, the full governing equations are also solved numerically
using the finite difference method and a global adaptive quadrature method to
integrate the Laplace transformation. Comparison with the Laplace solution
verifies the high degree of accuracy of the simpler Laplace form. The Laplace
solution is then verified against experimental data from BTEX chromatography.
This novel method, which involves solving a single equation and fitting
parameters in pairs for individual components, is highly efficient. It is
significantly faster and simpler than the full numerical solution and avoids
the computationally expensive methods that would normally be used to fit all
curves at the same time.
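The fitting step is simple enough to sketch. The paper's profile comes from its Laplace-transform solution; below, a generic two-parameter peak (retention time t0 and spread s, both assumed stand-ins) illustrates fitting exactly two parameters per analyte.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in two-parameter profile; the paper's actual profile is the
# Laplace-transform solution of the decoupled integral equation.
def peak(t, t0, s):
    return np.exp(-((t - t0) ** 2) / (2 * s**2))

t = np.linspace(0, 10, 200)
rng = np.random.default_rng(0)
signal = peak(t, 4.2, 0.6) + 0.02 * rng.standard_normal(t.size)

(t0, s), _ = curve_fit(peak, t, signal, p0=(5.0, 1.0))
print(f"fitted retention time {t0:.2f}, spread {abs(s):.2f}")
```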
|
2501.00003 | Machine learning models for Si nanoparticle growth in nonthermal plasma | physics.comp-ph cs.LG | Nanoparticles (NPs) formed in nonthermal plasmas (NTPs) can have unique
properties and applications. However, modeling their growth in these
environments presents significant challenges due to the non-equilibrium nature
of NTPs, making them computationally expensive to describe. In this work, we
address the challenges associated with accelerating the estimation of
parameters needed for these models. Specifically, we explore how different
machine learning models can be tailored to improve prediction outcomes. We
apply these methods to reactive classical molecular dynamics data, which
capture the processes associated with colliding silane fragments in NTPs. These
reactions exemplify processes where qualitative trends are clear, but their
quantification is challenging, hard to generalize, and requires time-consuming
simulations. Our results demonstrate that good prediction performance can be
achieved when appropriate loss functions are implemented and correct
invariances are imposed. While the diversity of molecules used in the training
set is critical for accurate prediction, our findings indicate that only a
fraction (15-25\%) of the energy and temperature sampling is required to
achieve high levels of accuracy. This suggests a substantial reduction in
computational effort is possible for similar systems.
|
2501.00004 | NewsHomepages: Homepage Layouts Capture Information Prioritization
Decisions | cs.IR cs.AI cs.CL | Information prioritization plays an important role in how humans perceive and
understand the world. Homepage layouts serve as a tangible proxy for this
prioritization. In this work, we present NewsHomepages, a large dataset of over
3,000 news website homepages (including local, national, and topic-specific
outlets) captured twice daily over a three-year period. We develop models to
perform pairwise comparisons between news items to infer their relative
significance. To illustrate that modeling organizational hierarchies has
broader implications, we applied our models to rank-order a collection of local
city council policies passed over a ten-year period in San Francisco, assessing
their "newsworthiness". Our findings lay the groundwork for leveraging implicit
organizational cues to deepen our understanding of information prioritization.
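A standard way to turn pairwise "more prominent than" outcomes into item scores is a Bradley-Terry model; the sketch below shows only that aggregation step (the paper's learned comparison models produce the pairwise outcomes, and its ranking procedure may differ).

```python
import numpy as np

# Bradley-Terry scores fitted by gradient ascent on the log-likelihood
# sum of log sigmoid(s_i - s_j) over observed wins (i beat j).
def bradley_terry(n_items, wins, iters=2000, lr=0.05):
    s = np.zeros(n_items)                    # latent significance scores
    for _ in range(iters):
        g = np.zeros(n_items)
        for i, j in wins:                    # item i judged above item j
            p = 1.0 / (1.0 + np.exp(s[j] - s[i]))
            g[i] += 1 - p
            g[j] -= 1 - p
        s += lr * g
    return s

wins = [(0, 1), (0, 2), (1, 2), (0, 1)]      # e.g., headline 0 dominates
scores = bradley_terry(3, wins)
print(np.argsort(-scores))                   # ranking: [0 1 2]
```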
|
2501.00009 | Model-Driven Deep Neural Network for Enhanced AoA Estimation Using 5G
gNB | eess.SP cs.AI | High-accuracy positioning has become a fundamental enabler for intelligent
connected devices. Nevertheless, the present wireless networks still rely on
model-driven approaches to achieve positioning functionality, which are
susceptible to performance degradation in practical scenarios, primarily due to
hardware impairments. Integrating artificial intelligence into the positioning
framework presents a promising solution to revolutionize the accuracy and
robustness of location-based services. In this study, we address this challenge
by reformulating the problem of angle-of-arrival (AoA) estimation into image
reconstruction of spatial spectrum. To this end, we design a model-driven deep
neural network (MoD-DNN), which can automatically calibrate the
angular-dependent phase error. The proposed MoD-DNN approach employs an
iterative optimization scheme between a convolutional neural network and a
sparse conjugate gradient algorithm. Simulation and experimental results are
presented to demonstrate the effectiveness of the proposed method in enhancing
spectrum calibration and AoA estimation.
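The paper alternates a CNN with a sparse conjugate gradient solver; as a classical stand-in with the same alternating structure, the sketch below runs ISTA, pairing a data-consistency step with a fixed (not learned) soft-threshold "denoiser" to recover a sparse spectrum from linear measurements.

```python
import numpy as np

# ISTA as a classical proxy for the paper's CNN <-> sparse-solver loop:
# recover a sparse spectrum x from y = A x by alternating a gradient
# step with soft-thresholding.
def ista(A, y, lam=0.1, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x) / L      # data-consistency step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # "denoiser"
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[7, 42]] = [1.0, -0.5]              # two "arrival angles"
x_hat = ista(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # -> [ 7 42]
```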
|
2501.00013 | Relation-Aware Equivariant Graph Networks for Epitope-Unknown Antibody
Design and Specificity Optimization | q-bio.QM cs.AI cs.LG | Antibodies are Y-shaped proteins that protect the host by binding to specific
antigens, and their binding is mainly determined by the Complementary
Determining Regions (CDRs) in the antibody. Despite the great progress made in
CDR design, existing computational methods still encounter several challenges:
1) poor capability of modeling complex CDRs with long sequences due to
insufficient contextual information; 2) conditioned on pre-given antigenic
epitopes and their static interaction with the target antibody; 3) neglect of
specificity during antibody optimization leads to non-specific antibodies. In
this paper, we take into account a variety of node features, edge features, and
edge relations to include more contextual and geometric information. We propose
a novel Relation-Aware Antibody Design (RAAD) framework, which dynamically
models antigen-antibody interactions for co-designing the sequences and
structures of antigen-specific CDRs. Furthermore, we propose a new evaluation
metric to better measure antibody specificity and develop a contrasting
specificity-enhancing constraint to optimize the specificity of antibodies.
Extensive experiments have demonstrated the superior capability of RAAD in
terms of antibody modeling, generation, and optimization across different CDR
types, sequence lengths, pre-training strategies, and input contexts.
|
2501.00015 | Energy-Efficient Sampling Using Stochastic Magnetic Tunnel Junctions | physics.comp-ph cs.LG stat.CO stat.ML | (Pseudo)random sampling, a costly yet widely used method in (probabilistic)
machine learning and Markov Chain Monte Carlo algorithms, remains unfeasible on
a truly large scale due to unmet computational requirements. We introduce an
energy-efficient algorithm for uniform Float16 sampling, utilizing a
room-temperature stochastic magnetic tunnel junction device to generate truly
random floating-point numbers. By avoiding expensive symbolic computation and
mapping physical phenomena directly to the statistical properties of the
floating-point format and uniform distribution, our approach achieves a higher
level of energy efficiency than the state-of-the-art Mersenne-Twister algorithm
by a minimum factor of 9721 and an improvement factor of 5649 compared to the
more energy-efficient PCG algorithm. Building on this sampling technique and
hardware framework, we decompose arbitrary distributions into many
non-overlapping approximative uniform distributions along with convolution and
prior-likelihood operations, which allows us to sample from any 1D distribution
without closed-form solutions. We provide measurements of the potential
accumulated approximation errors, demonstrating the effectiveness of our
method.
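A software analogue of the decomposition (details assumed): approximate the target 1D density by non-overlapping uniform pieces, choose a piece with probability proportional to its mass, then draw uniformly inside it; on the proposed hardware the inner uniform draws would come from the stochastic MTJ device rather than a PRNG.

```python
import numpy as np

# Piecewise-uniform sampling from an arbitrary 1D density given only
# its (unnormalized) pdf; no closed-form inverse CDF is needed.
def piecewise_uniform_sampler(pdf, lo, hi, n_bins=256):
    edges = np.linspace(lo, hi, n_bins + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w = pdf(mids)
    w = w / w.sum()                        # mass of each uniform piece
    def sample(n, rng=np.random.default_rng(0)):
        bins = rng.choice(n_bins, size=n, p=w)
        return rng.uniform(edges[bins], edges[bins + 1])
    return sample

sampler = piecewise_uniform_sampler(lambda x: np.exp(-0.5 * x**2), -5, 5)
x = sampler(100_000)
print(x.mean(), x.std())  # approx 0 and 1 for a standard normal target
```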
|
2501.00016 | Predicting Crack Nucleation and Propagation in Brittle Materials Using
Deep Operator Networks with Diverse Trunk Architectures | physics.comp-ph cs.AI | Phase-field modeling reformulates fracture problems as energy minimization
problems and enables a comprehensive characterization of the fracture process,
including crack nucleation, propagation, merging, and branching, without
relying on ad-hoc assumptions. However, the numerical solution of phase-field
fracture problems is characterized by a high computational cost. To address
this challenge, in this paper, we employ a deep neural operator (DeepONet)
consisting of a branch network and a trunk network to solve brittle fracture
problems. We explore three distinct approaches that vary in their trunk network
configurations. In the first approach, we demonstrate the effectiveness of a
two-step DeepONet, which results in a simplification of the learning task. In
the second approach, we employ a physics-informed DeepONet, whereby the
mathematical expression of the energy is integrated into the trunk network's
loss to enforce physical consistency. The integration of physics also results
in a substantially smaller data size needed for training. In the third
approach, we replace the neural network in the trunk with a Kolmogorov-Arnold
Network and train it without the physics loss. Using these methods, we model
crack nucleation in a one-dimensional homogeneous bar under prescribed end
displacements, as well as crack propagation and branching in single
edge-notched specimens with varying notch lengths subjected to tensile and
shear loading. We show that the networks predict the solution fields
accurately, and the error in the predicted fields is localized near the crack.
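The DeepONet structure itself is compact: a branch net encodes the input function sampled at sensors, a trunk net encodes the query coordinate, and the prediction is their inner product. The sketch below uses random (untrained) weights purely to show the forward pass; the paper trains such networks, including a KAN trunk variant, on fracture data.

```python
import numpy as np

rng = np.random.default_rng(0)
m, p = 64, 32                                   # sensors, latent width

Wb = rng.standard_normal((p, m)) / np.sqrt(m)   # branch weights
Wt = rng.standard_normal((p, 1))                # trunk weights
bt = rng.standard_normal(p)

def deeponet(u_sensors, y):
    b = np.tanh(Wb @ u_sensors)                 # branch: function encoding
    t = np.tanh(Wt @ np.atleast_1d(y) + bt)     # trunk: location encoding
    return float(b @ t)                         # G(u)(y) ~ <branch, trunk>

u = np.sin(np.linspace(0, np.pi, m))            # input function samples
print(deeponet(u, 0.3))
```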
|
2501.00020 | Magnetic Field Data Calibration with Transformer Model Using Physical
Constraints: A Scalable Method for Satellite Missions, Illustrated by
Tianwen-1 | physics.space-ph astro-ph.EP astro-ph.IM cs.LG | This study introduces a novel approach that integrates the magnetic field
data correction from the Tianwen-1 Mars mission with a neural network
architecture constrained by physical principles derived from Maxwell's
equations. By employing a Transformer-based model capable of efficiently
handling sequential data, the method corrects measurement anomalies caused by
satellite dynamics, instrument interference, and environmental noise. As a
result, it significantly improves both the accuracy and the physical
consistency of the calibrated data. Compared to traditional methods that
require long data segments and manual intervention, often taking weeks or even
months to complete, this new approach can finish calibration in just minutes to
hours, and predictions are made within seconds. This innovation not only
accelerates the process of space weather modeling and planetary magnetospheric
studies but also provides a robust framework for future planetary exploration
and solar wind interaction research.
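One way such a Maxwell-derived constraint can enter the training loss (an assumption about the paper's exact formulation): penalize the finite-difference divergence of the predicted field, since a magnetic field must satisfy div B = 0, alongside the usual data-fit term.

```python
import numpy as np

# Assumed physics-constrained loss: data fit plus a penalty on the
# finite-difference divergence of the predicted magnetic field.
def physics_loss(B_pred, B_meas, dx, lam=1.0):
    # B_pred, B_meas: arrays of shape (3, nx, ny, nz)
    div = (
        np.gradient(B_pred[0], dx, axis=0)
        + np.gradient(B_pred[1], dx, axis=1)
        + np.gradient(B_pred[2], dx, axis=2)
    )
    data_term = np.mean((B_pred - B_meas) ** 2)
    return data_term + lam * np.mean(div**2)

B = np.random.default_rng(0).standard_normal((3, 8, 8, 8))
print(physics_loss(B, B + 0.01, dx=1.0))
```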
|
2501.00021 | Did we miss P In CAP? Partial Progress Conjecture under Asynchrony | cs.DC cs.DB | Each application developer desires to provide its users with consistent
results and an always-available system despite failures. Boldly, the CAP
theorem disagrees. It states that it is hard to design a system that is both
consistent and available under network partitions; select at most two out of
these three properties. One possible solution is to design coordination-free
monotonic applications. However, a majority of real-world applications require
coordination. We resolve this dilemma by conjecturing that partial progress is
possible under network partitions. This partial progress ensures the system
appears responsive to a subset of clients and achieves non-zero throughput
during failures. To this end, we present the design of our CASSANDRA
consensus protocol that allows partitioned replicas to order client requests.
|
2501.00029 | A Breadth-First Catalog of Text Processing, Speech Processing and
Multimodal Research in South Asian Languages | cs.CL cs.IR cs.LG | We review the recent literature (January 2022 - October 2024) in South Asian
languages on text-based language processing, multimodal models, and speech
processing, and provide a spotlight analysis focused on 21 low-resource South
Asian languages, namely Saraiki, Assamese, Balochi, Bhojpuri, Bodo, Burmese,
Chhattisgarhi, Dhivehi, Gujarati, Kannada, Kashmiri, Konkani, Khasi, Malayalam,
Meitei, Nepali, Odia, Pashto, Rajasthani, Sindhi, and Telugu. We identify
trends, challenges, and future research directions, using a step-wise approach
that incorporates relevance classification and clustering based on large
language models (LLMs). Our goal is to provide a breadth-first overview of the
recent developments in South Asian language technologies to NLP researchers
interested in working with South Asian languages.
|
2501.00030 | Underutilization of Syntactic Processing by Chinese Learners of English
in Comprehending English Sentences, Evidenced from Adapted Garden-Path
Ambiguity Experiment | cs.CL | Many studies have revealed that sentence comprehension relies more on
semantic processing than on syntactic processing. However, previous studies
have predominantly emphasized the preference for semantic processing, focusing
on the semantic perspective. In contrast, this current study highlights the
under-utilization of syntactic processing, from a syntactic perspective. Based
on the traditional garden-path experiment, which involves locally ambiguous but
globally unambiguous sentences, this study's empirical experiment innovatively
crafted an adapted version featuring semantically ambiguous but syntactically
unambiguous sentences to meet its specific research objective. This experiment,
involving 140 subjects, demonstrates through descriptive and inferential
statistical analyses using SPSS, GraphPad Prism, and Cursor that Chinese
learners of English tend to under-utilize syntactic processing when
comprehending English sentences. The study identifies two types of parsing
under-utilization: partial and complete. Further exploration reveals that trial
and error in syntactic processing contributes to both. Consequently, this study
lays a foundation for the development of a novel parsing method designed to
fully integrate syntactic processing into sentence comprehension, thereby
enhancing the level of English sentence comprehension for Chinese learners of
English.
|
2501.00031 | Distilling Large Language Models for Efficient Clinical Information
Extraction | cs.CL | Large language models (LLMs) excel at clinical information extraction but
their computational demands limit practical deployment. Knowledge
distillation--the process of transferring knowledge from larger to smaller
models--offers a potential solution. We evaluate the performance of distilled
BERT models, which are approximately 1,000 times smaller than modern LLMs, for
clinical named entity recognition (NER) tasks. We leveraged state-of-the-art
LLMs (Gemini and OpenAI models) and medical ontologies (RxNorm and SNOMED) as
teacher labelers for medication, disease, and symptom extraction. We applied
our approach to over 3,300 clinical notes spanning five publicly available
datasets, comparing distilled BERT models against both their teacher labelers
and BERT models fine-tuned on human labels. External validation was conducted
using clinical notes from the MedAlign dataset. For disease extraction, F1
scores were 0.82 (teacher model), 0.89 (BioBERT trained on human labels), and
0.84 (BioBERT-distilled). For medication, F1 scores were 0.84 (teacher model),
0.91 (BioBERT-human), and 0.87 (BioBERT-distilled). For symptom extraction, F1
scores were 0.73 (teacher model) and 0.68 (BioBERT-distilled). Distilled BERT models had
faster inference (12x, 4x, 8x faster than GPT-4o, o1-mini, and Gemini Flash
respectively) and lower costs (85x, 101x, 2x cheaper than GPT-4o, o1-mini, and
Gemini Flash respectively). On the external validation dataset, the distilled
BERT model achieved F1 scores of 0.883 (medication), 0.726 (disease), and 0.699
(symptom). Distilled BERT models were up to 101x cheaper and 12x faster than
state-of-the-art LLMs while achieving similar performance on NER tasks.
Distillation offers a computationally efficient and scalable alternative to
large LLMs for clinical information extraction.
|
2501.00032 | Highly Optimized Kernels and Fine-Grained Codebooks for LLM Inference on
Arm CPUs | cs.LG cs.AI cs.AR cs.CL | Large language models (LLMs) have transformed the way we think about language
understanding and generation, enthralling both researchers and developers.
However, deploying LLMs for inference has been a significant challenge due to
their unprecedented size and resource requirements. While quantizing model
weights to sub-byte precision has emerged as a promising solution to ease
memory pressure, the group quantization formats commonly used for LLM
quantization have significant compute overheads and a resource-intensive
dequantization process. As a result, a higher proportion of compute
instructions do not perform multiplies, i.e., real work, rendering them
unsuitable for meeting the latency requirements for LLMs deployed on
commodity CPUs. In this work, we propose a set of highly optimized kernels to
accelerate LLM inference and unleash the full potential of CPUs, particularly
Arm CPUs. These kernels amortize the cost of loading the operands and the cost
of weight unpacking across multiple output rows. This, along with the
introduction of an optimized interleaved group data layout for weights and
decompression path optimizations to reduce unnecessary operations and
dequantization overhead while maximizing the use of vector and matrix multiply
operations, significantly improves the efficiency of MAC operations.
Furthermore, we present a groupwise non-uniform codebook-based quantization
method for ultra-low-precision quantization of LLMs to better match non-uniform
patterns in their weight distributions, demonstrating better throughput during
token generation while ensuring better quality than the state-of-the-art.
Applying these improvements to 4-bit LLMs results in a 3-3.2x improvement in
prompt processing and a 2x improvement in autoregressive decoding on Arm CPUs,
compared to a LLaMA.cpp-based solution. The optimized kernels are available at
https://github.com/ggerganov/llama.cpp.
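The quantize/dequantize logic behind groupwise non-uniform codebook quantization is easy to sketch; the snippet below builds each group's codebook from quantiles (a stand-in for the paper's fine-grained codebooks), while the actual contribution is the optimized Arm kernels operating on such layouts.

```python
import numpy as np

# Schematic groupwise non-uniform codebook quantization: each weight
# group stores a small codebook plus per-weight indices, and
# dequantization is a table lookup.
def quantize_group(w, bits=4):
    levels = 2**bits
    codebook = np.quantile(w, np.linspace(0, 1, levels))  # non-uniform
    idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, idx.astype(np.uint8)

def dequantize_group(codebook, idx):
    return codebook[idx]

rng = np.random.default_rng(0)
w = rng.standard_normal(64) * np.exp(rng.standard_normal(64))  # heavy tails
cb, idx = quantize_group(w, bits=4)
err = np.mean((w - dequantize_group(cb, idx)) ** 2)
print(f"4-bit codebook MSE: {err:.4f}")
```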
|
2501.00034 | Time Series Feature Redundancy Paradox: An Empirical Study Based on
Mortgage Default Prediction | q-fin.ST cs.AI cs.LG | With the widespread application of machine learning in financial risk
management, conventional wisdom suggests that longer training periods and more
feature variables contribute to improved model performance. This paper,
focusing on mortgage default prediction, empirically discovers a phenomenon
that contradicts traditional knowledge: in time series prediction, increased
training data timespan and additional non-critical features actually lead to
significant deterioration in prediction effectiveness. Using Fannie Mae's
mortgage data, the study compares predictive performance across different time
window lengths (2012-2022) and feature combinations, revealing that shorter
time windows (such as single-year periods) paired with carefully selected key
features yield superior prediction results. The experimental results indicate
that extended time spans may introduce noise from historical data and outdated
market patterns, while excessive non-critical features interfere with the
model's learning of core default factors. This research not only challenges the
traditional "more is better" approach in data modeling but also provides new
insights and practical guidance for feature selection and time window
optimization in financial risk prediction.
|
2501.00036 | Crime Hotspot Analysis and Mapping Using Geospatial Technology in Dessie
City, Ethiopia | physics.soc-ph cs.IR | Over the past few decades, crime and delinquency rates have increased
drastically in many countries; nevertheless, it is important to note that crime
trends can differ significantly by geographic region. This study's primary goal
was to use geographic technology to map and analyze Dessie City's crime
patterns. To investigate the geographic clustering of crime, the researchers
used semivariogram modeling and spatial autocorrelation analysis with Moran's I.
The neighborhoods of Hote, Arada, and Segno in Dessie's central city were found
to be crime-prone "hot spot" locations, as evidenced by statistically
significant high Z-scores ranging from 0.037 to 4.608. On the other hand, low
negative Z-scores ranging from -3.231 to -0.116 indicated "cold spot"
concentrations of crime in the city's north-central sub-cities of Menafesha and
Bounbouwha. With an index of 0.027492 and a Z-score of 3.297616 (p<0.01), the
analysis overall showed a substantial positive spatial autocorrelation,
suggesting a clustered pattern of crime in Dessie. The majority of crimes
showed a north-south directionality, except for murder, which trended from
northeast to southwest. The mean center of all crime types was found in the
central Hote area. To address the complicated problem of rising crime rates in
Dessie and other developing metropolitan areas, the knowledge acquired from
the geospatial analysis can inform more focused and efficient enforcement
techniques and resource deployment.
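For readers unfamiliar with the statistic, global Moran's I can be computed from scratch on a toy incident-count grid; the study's own values came from GIS software, and the data below are illustrative.

```python
import numpy as np

# Global Moran's I: I = (n / sum(W)) * sum_ij w_ij z_i z_j / sum_i z_i^2,
# with z the mean-centered values and W a binary rook-contiguity matrix.
def morans_i(x, W):
    n = x.size
    z = x - x.mean()
    num = n * np.sum(W * np.outer(z, z))
    den = W.sum() * np.sum(z**2)
    return num / den

counts = np.array([8, 7, 6, 1, 7, 6, 2, 1, 5, 2, 1, 0, 2, 1, 0, 0], float)
n = counts.size
W = np.zeros((n, n))
for r in range(4):
    for c in range(4):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < 4 and 0 <= cc < 4:
                W[r * 4 + c, rr * 4 + cc] = 1
print(morans_i(counts, W))  # positive -> spatially clustered pattern
```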
|