| id | title | categories | abstract |
|---|---|---|---|
| 2501.05209 | MHAFF: Multi-Head Attention Feature Fusion of CNN and Transformer for Cattle Identification | cs.CV | Convolutional Neural Networks (CNNs) have drawn researchers' attention to identifying cattle using muzzle images. However, CNNs often fail to capture long-range dependencies within the complex patterns of the muzzle, a challenge that transformers handle well. This inspired us to fuse the strengths of CNNs and transformers for muzzle-based cattle identification. Addition and concatenation have been the most commonly used feature-fusion techniques. However, addition fails to preserve discriminative information, while concatenation increases dimensionality; both are simple operations that cannot discover the relationships or interactions between the features being fused. To overcome these issues, this research introduces a novel approach, Multi-Head Attention Feature Fusion (MHAFF), for the first time in cattle identification. MHAFF captures relations between the different types of fused features while preserving their originality. Experiments show that MHAFF outperforms addition, concatenation, and existing cattle identification methods in accuracy on two publicly available cattle datasets, converging quickly to optimum accuracies of 99.88% and 99.52%, respectively. |
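The MHAFF row above describes fusing CNN and transformer features via multi-head attention rather than addition or concatenation. As a hypothetical illustration only (not the paper's implementation), a single-head scaled dot-product attention that fuses a CNN feature vector with transformer patch features can be sketched as:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(query, keys, values):
    """Fuse features by attending from `query` over `keys`/`values`.

    A single-head stand-in for the multi-head fusion the abstract
    describes: scores = q.k / sqrt(d), weights = softmax(scores),
    output = weighted sum of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights

# Toy example: CNN feature as query; transformer features as keys/values.
cnn_feat = [1.0, 0.0]
trans_feats = [[1.0, 0.0], [0.0, 1.0]]
fused, weights = attention_fuse(cnn_feat, trans_feats, trans_feats)
```

Unlike plain addition, the attention weights let the fused output depend on how each transformer feature relates to the CNN query, and unlike concatenation the output dimensionality stays fixed.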
| 2501.05213 | GLaM-Sign: Greek Language Multimodal Lip Reading with Integrated Sign Language Accessibility | cs.CL cs.AI | The Greek Language Multimodal Lip Reading with Integrated Sign Language Accessibility (GLaM-Sign) [1] is a groundbreaking resource in accessibility and multimodal AI, designed to support Deaf and Hard-of-Hearing (DHH) individuals. Developed from the FEELIT project [2], it integrates high-resolution audio, video, textual transcriptions, and Greek Sign Language translations for applications like real-time sign language translation and enhanced subtitle synchronization. While its primary focus is on promoting inclusivity in the Greek tourism sector, its adaptability extends to education, healthcare, and public services. Future advancements will enhance word-level precision and scalability to additional languages, supported by advanced AI methodologies and collaborations with diverse stakeholders. This dataset underscores the transformative potential of multimodal resources in bridging communication gaps, fostering innovation, and setting a benchmark for ethical AI and inclusive technologies. |
| 2501.05220 | A Novel Approach to Scalable and Automatic Topic-Controlled Question Generation in Education | cs.CY cs.AI cs.CL cs.IR | The development of Automatic Question Generation (QG) models has the potential to significantly improve educational practices by reducing the teacher workload associated with creating educational content. This paper introduces a novel approach to educational question generation that controls the topical focus of questions. The proposed Topic-Controlled Question Generation (T-CQG) method enhances the relevance and effectiveness of the generated content for educational purposes. Our approach fine-tunes a pre-trained T5-small model on specially created datasets tailored to educational needs. The research further explores the impact of pre-training strategies, quantisation, and data augmentation on the model's performance. We specifically address the challenge of generating questions semantically aligned with paragraph-level contexts, thereby improving the topic specificity of the generated questions. In addition, we introduce and explore novel evaluation methods to assess the topical relatedness of the generated questions. Our results, validated through rigorous offline and human-backed evaluations, demonstrate that the proposed models effectively generate high-quality, topic-focused questions. These models have the potential to reduce teacher workload and support personalised tutoring systems by serving as bespoke question generators. With their relatively small number of parameters, the proposed models not only advance the capabilities of question generation for handling specific educational topics but also offer a scalable solution that reduces infrastructure costs. This scalability makes them feasible for widespread use in education without reliance on proprietary large language models like ChatGPT. |
| 2501.05222 | ParaRev: Building a dataset for Scientific Paragraph Revision annotated with revision instruction | cs.CL | Revision is a crucial step in scientific writing, where authors refine their work to improve clarity, structure, and academic quality. Existing approaches to automated writing assistance often focus on sentence-level revisions, which fail to capture the broader context needed for effective modification. In this paper, we explore the impact of shifting from sentence-level to paragraph-level scope for the task of scientific text revision. Defining the task at the paragraph level allows for more meaningful changes, guided by detailed revision instructions rather than general ones. To support this task, we introduce ParaRev, the first dataset of revised scientific paragraphs with an evaluation subset manually annotated with revision instructions. Our experiments demonstrate that using detailed instructions significantly improves the quality of automated revisions compared to general approaches, regardless of the model or metric considered. |
| 2501.05223 | EVA-S2PLoR: A Secure Element-wise Multiplication Meets Logistic Regression on Heterogeneous Database | cs.CR cs.LG | Accurate nonlinear computation is a key challenge in privacy-preserving machine learning (PPML). Most existing frameworks approximate it through linear operations, resulting in significant precision loss. This paper proposes an efficient, verifiable, and accurate secure two-party logistic regression framework (EVA-S2PLoR), which achieves accurate nonlinear function computation through a novel secure element-wise multiplication protocol and protocols derived from it. Our framework primarily includes secure two-party protocols for vector element-wise multiplication, addition-to-multiplication conversion, reciprocal, and the sigmoid function, all based on data-disguising technology, where high efficiency and accuracy are guaranteed by a simple computation flow over the real number domain and a small, fixed number of communication rounds. We provide secure and robust anomaly detection through dimension transformation and Monte Carlo methods. EVA-S2PLoR outperforms many advanced frameworks in terms of precision (improving the performance of the sigmoid function by about 10 orders of magnitude compared to most frameworks) and delivers the best overall performance in secure logistic regression experiments. |
| 2501.05224 | Leveraging Large Language Models for Zero-shot Lay Summarisation in Biomedicine and Beyond | cs.CL | In this work, we explore the application of Large Language Models to zero-shot Lay Summarisation. We propose a novel two-stage framework for Lay Summarisation based on real-life processes, and find that summaries generated with this method are increasingly preferred by human judges as model size grows. To help establish best practices for employing LLMs in zero-shot settings, we also assess the ability of LLMs to act as judges, finding that they are able to replicate the preferences of human judges. Finally, we take initial steps towards Lay Summarisation for Natural Language Processing (NLP) articles, finding that LLMs are able to generalise to this new domain, and further highlighting the greater utility of summaries generated by our proposed approach via an in-depth human evaluation. |
| 2501.05225 | Implementation Pitfalls for Carbonate Mineral Dissolution -- a Technical Note | cs.CE | In systems with slow reaction kinetics, such as mineral dissolution processes, chemical equilibrium cannot be assumed and an accurate understanding of reaction rates is essential; discrepancies in parameter reporting can greatly affect simulation results. This technical note identifies an issue with the reporting of rate parameters for carbonate mineral dissolution in a widely used database for reactive transport modeling based on Palandri and Kharaka (2004). This misrepresentation leads to a considerable overestimation of reaction timescales. Using the simulators Reaktoro and DuMuX, we simulated a simple calcite dissolution batch test and compared the results to experimental data. By adjusting the parameter to align with established literature, we demonstrate an improved fit between simulated and experimental data. Discrepancies in reaction timescales were reduced by an order of magnitude, emphasizing the importance of regular validation of simulations with experimental data. |
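The technical note above concerns how dissolution-rate parameters are reported and interpreted. A minimal sketch of the transition-state-theory-style rate law commonly used with the Palandri and Kharaka (2004) compilation, with illustrative parameter values (not the corrected values from the note), shows why a misreported rate constant shifts reaction timescales directly:

```python
import math

def dissolution_rate(log_k25, e_a, temp_k, omega, n=1.0):
    """Rate law r = k(T) * (1 - Omega**n) in mol m^-2 s^-1.

    k(T) applies an Arrhenius correction to the 25 C reference rate
    constant 10**log_k25; Omega is the mineral saturation ratio.
    All numeric values used below are illustrative assumptions.
    """
    R = 8.314  # gas constant, J mol^-1 K^-1
    k25 = 10.0 ** log_k25
    k = k25 * math.exp(-(e_a / R) * (1.0 / temp_k - 1.0 / 298.15))
    return k * (1.0 - omega ** n)

# Far from equilibrium (omega ~ 0) the rate approaches k(T);
# at equilibrium (omega = 1) dissolution stops.
r_far = dissolution_rate(log_k25=-5.81, e_a=23500.0, temp_k=298.15, omega=0.0)
r_eq = dissolution_rate(log_k25=-5.81, e_a=23500.0, temp_k=298.15, omega=1.0)
```

Because the rate scales linearly with `10**log_k25`, an error in the reported exponent propagates straight into the simulated reaction timescale, which is the pitfall the note highlights.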
| 2501.05226 | Light Transport-aware Diffusion Posterior Sampling for Single-View Reconstruction of 3D Volumes | cs.CV cs.LG | We introduce a single-view reconstruction technique of volumetric fields in which multiple light scattering effects are omnipresent, such as in clouds. We model the unknown distribution of volumetric fields using an unconditional diffusion model trained on a novel benchmark dataset comprising 1,000 synthetically simulated volumetric density fields. The neural diffusion model is trained on the latent codes of a novel, diffusion-friendly, monoplanar representation. The generative model is used to incorporate a tailored parametric diffusion posterior sampling technique into different reconstruction tasks. A physically-based differentiable volume renderer is employed to provide gradients with respect to light transport in the latent space. This stands in contrast to classic NeRF approaches and makes the reconstructions better aligned with observed data. Through various experiments, we demonstrate single-view reconstruction of volumetric clouds at a previously unattainable quality. |
| 2501.05228 | Harnessing Large Language and Vision-Language Models for Robust Out-of-Distribution Detection | cs.CV | Out-of-distribution (OOD) detection has seen significant advancements with zero-shot approaches that leverage powerful Vision-Language Models (VLMs) such as CLIP. However, prior research has predominantly focused on enhancing Far-OOD performance, while potentially compromising Near-OOD efficacy, as observed in our pilot study. To address this issue, we propose a novel strategy to enhance zero-shot OOD detection performance for both Far-OOD and Near-OOD scenarios by innovatively harnessing Large Language Models (LLMs) and VLMs. Our approach first exploits an LLM to generate superclasses of the ID labels and their corresponding background descriptions, followed by feature extraction using CLIP. We then isolate the core semantic features for ID data by subtracting background features from the superclass features. The refined representation facilitates the selection of more appropriate negative labels for OOD data from a comprehensive candidate label set of WordNet, thereby enhancing the performance of zero-shot OOD detection in both scenarios. Furthermore, we introduce novel few-shot prompt tuning and visual prompt tuning to adapt the proposed framework to better align with the target distribution. Experimental results demonstrate that the proposed approach consistently outperforms current state-of-the-art methods across multiple benchmarks, with an improvement of up to 2.9% in AUROC and a reduction of up to 12.6% in FPR95. Additionally, our method exhibits superior robustness against covariate shift across different domains, further highlighting its effectiveness in real-world scenarios. |
| 2501.05232 | The Intraday Bitcoin Response to Tether Minting and Burning Events: Asymmetry, Investor Sentiment, And "Whale Alerts" On Twitter | q-fin.GN cs.SI q-fin.PR q-fin.TR | Tether Limited has the sole authority to create (mint) and destroy (burn) Tether stablecoins (USDT). This paper investigates Bitcoin's response to USDT supply change events between 2014 and 2021 and identifies an interesting asymmetry between Bitcoin's responses to USDT minting and burning events. Bitcoin responds positively to USDT minting events over 5- to 30-minute event windows, but this response begins declining after 60 minutes. State-dependence is also demonstrated, with Bitcoin prices exhibiting a greater increase when the corresponding USDT minting event coincides with positive investor sentiment and is announced to the public by the data service provider Whale Alert on Twitter. |
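The event-study design in the row above measures Bitcoin's response over fixed minute-level windows around each minting event. A hypothetical sketch of the core calculation, with made-up toy prices and event times, might look like:

```python
def event_window_return(prices, event_idx, window):
    """Simple return over `window` steps following an event.

    `prices` is a minute-level price series; the return is
    (P_{t+window} - P_t) / P_t, the quantity an event study
    averages across events.
    """
    p0 = prices[event_idx]
    p1 = prices[event_idx + window]
    return (p1 - p0) / p0

def mean_response(prices, event_indices, window):
    """Average event-window return across all events."""
    rets = [event_window_return(prices, i, window) for i in event_indices]
    return sum(rets) / len(rets)

# Toy minute-level series with two hypothetical "minting" events
# at t=1 and t=4; both are followed by ~1% moves.
prices = [100.0, 100.0, 101.0, 101.5, 102.0, 103.02, 103.0]
avg = mean_response(prices, event_indices=[1, 4], window=1)
```

Comparing `mean_response` across window lengths (5, 30, 60 minutes) and between minting and burning event sets is the kind of contrast that surfaces the asymmetry the abstract reports.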
| 2501.05234 | Optimizing Estonian TV Subtitles with Semi-supervised Learning and LLMs | cs.CL cs.AI cs.LG eess.AS | This paper presents an approach for generating high-quality, same-language subtitles for Estonian TV content. We fine-tune the Whisper model on human-generated Estonian subtitles and enhance it with iterative pseudo-labeling and large language model (LLM) based post-editing. Our experiments demonstrate notable subtitle quality improvement through pseudo-labeling with an unlabeled dataset. We find that applying LLM-based editing at test time enhances subtitle accuracy, while its use during training does not yield further gains. This approach holds promise for achieving subtitle quality close to the human standard and could be extended to real-time applications. |
| 2501.05236 | Automated external cervical resorption segmentation in cone-beam CT using local texture features | cs.CV | External cervical resorption (ECR) is a resorptive process affecting teeth. While in some patients active resorption ceases and is replaced by osseous tissue, in other cases the resorption progresses and ultimately results in tooth loss. For proper ECR assessment, cone-beam computed tomography (CBCT) is the recommended imaging modality, enabling a 3-D characterization of these lesions. While it is possible to manually identify and measure ECR resorption in CBCT scans, this process can be time-intensive and highly subject to human error. Therefore, there is an urgent need to develop an automated method to identify and quantify the severity of ECR resorption using CBCT. Here, we present a method for ECR lesion segmentation that is based on automatic, binary classification of locally extracted voxel-wise texture features. We evaluate our method on 6 longitudinal CBCT datasets and show that certain texture features can be used to accurately detect subtle CBCT signal changes due to ECR. We also present preliminary analyses clustering texture features within a lesion to stratify the defects and identify patterns indicative of calcification. These methods are important steps in developing prognostic biomarkers to predict whether ECR will continue to progress or cease, ultimately informing treatment decisions. |
| 2501.05238 | FOCUS: Towards Universal Foreground Segmentation | cs.CV cs.AI cs.LG | Foreground segmentation is a fundamental task in computer vision, encompassing various subdivision tasks. Previous research has typically designed task-specific architectures for each task, leading to a lack of unification. Moreover, they primarily focus on recognizing foreground objects without effectively distinguishing them from the background. In this paper, we emphasize the importance of the background and its relationship with the foreground. We introduce FOCUS, the Foreground ObjeCts Universal Segmentation framework that can handle multiple foreground tasks. We develop a multi-scale semantic network using the edge information of objects to enhance image features. To achieve boundary-aware segmentation, we propose a novel distillation method, integrating the contrastive learning strategy to refine the prediction mask in multi-modal feature space. We conduct extensive experiments on a total of 13 datasets across 5 tasks, and the results demonstrate that FOCUS consistently outperforms the state-of-the-art task-specific models on most metrics. |
| 2501.05239 | Is Your Autonomous Vehicle Safe? Understanding the Threat of Electromagnetic Signal Injection Attacks on Traffic Scene Perception | cs.CR cs.CV eess.SP | Autonomous vehicles rely on camera-based perception systems to comprehend their driving environment and make crucial decisions, thereby enabling them to steer safely. However, a significant threat known as Electromagnetic Signal Injection Attacks (ESIA) can distort the images captured by these cameras, leading to incorrect AI decisions and potentially compromising the safety of autonomous vehicles. Despite the serious implications of ESIA, there is limited understanding of its impact on the robustness of AI models across various complex driving scenarios. To address this gap, our research analyzes the performance of different models under ESIA, revealing their vulnerabilities to the attacks. Moreover, due to the challenges in obtaining real-world attack data, we develop a novel ESIA simulation method and generate a simulated attack dataset for different driving scenarios. Our research provides a comprehensive simulation and evaluation framework, aiming to enhance the development of more robust AI models and secure intelligent systems, ultimately contributing to the advancement of safer and more reliable technology across various fields. |
| 2501.05241 | Contrast-Free Myocardial Scar Segmentation in Cine MRI using Motion and Texture Fusion | eess.IV cs.CV | Late gadolinium enhancement MRI (LGE MRI) is the gold standard for the detection of myocardial scars following myocardial infarction (MI). LGE MRI requires the injection of a contrast agent, which carries potential side effects and increases scanning time and patient discomfort. To address these issues, we propose a novel framework that combines cardiac motion observed in cine MRI with image texture information to segment the myocardium and scar tissue in the left ventricle. Cardiac motion tracking can be formulated as a full cardiac image cycle registration problem, which can be solved via deep neural networks. Experimental results show that the proposed method can achieve scar segmentation based on non-contrasted cine images with accuracy comparable to LGE MRI. This demonstrates its potential as an alternative to contrast-enhanced techniques for scar detection. |
| 2501.05242 | Scaffold-SLAM: Structured 3D Gaussians for Simultaneous Localization and Photorealistic Mapping | cs.CV | 3D Gaussian Splatting (3DGS) has recently revolutionized novel view synthesis in Simultaneous Localization and Mapping (SLAM). However, existing SLAM methods utilizing 3DGS have failed to provide high-quality novel view rendering for monocular, stereo, and RGB-D cameras simultaneously. Notably, some methods perform well for RGB-D cameras but suffer significant degradation in rendering quality for monocular cameras. In this paper, we present Scaffold-SLAM, which delivers simultaneous localization and high-quality photorealistic mapping across monocular, stereo, and RGB-D cameras. We introduce two key innovations to achieve this state-of-the-art visual quality. First, we propose Appearance-from-Motion embedding, enabling 3D Gaussians to better model image appearance variations across different camera poses. Second, we introduce a frequency regularization pyramid to guide the distribution of Gaussians, allowing the model to effectively capture finer details in the scene. Extensive experiments on monocular, stereo, and RGB-D datasets demonstrate that Scaffold-SLAM significantly outperforms state-of-the-art methods in photorealistic mapping quality, e.g., PSNR is 16.76% higher on the TUM RGB-D dataset for monocular cameras. |
| 2501.05244 | Optimized Sampling for Non-Line-of-Sight Imaging Using Modified Fast Fourier Transforms | eess.IV cs.CV eess.SP physics.optics | Non-line-of-Sight (NLOS) imaging systems collect light at a diffuse relay surface and input this measurement into computational algorithms that output a 3D volumetric reconstruction. These algorithms utilize the Fast Fourier Transform (FFT) to accelerate the reconstruction process but require both input and output to be sampled spatially with uniform grids. However, the geometry of NLOS imaging inherently results in non-uniform sampling on the relay surface when using multi-pixel detector arrays, even though such arrays significantly reduce acquisition times. Furthermore, using these arrays increases the data rate required for sensor readout, posing challenges for real-world deployment. In this work, we utilize the phasor field framework to demonstrate that existing NLOS imaging setups typically oversample the relay surface spatially, explaining why the measurement can be compressed without significantly sacrificing reconstruction quality. This enables us to utilize the Non-Uniform Fast Fourier Transform (NUFFT) to reconstruct from sparse measurements acquired from irregularly sampled relay surfaces of arbitrary shapes. Furthermore, we utilize the NUFFT to reconstruct at arbitrary locations in the hidden volume, ensuring flexible sampling schemes for both the input and output. Finally, we utilize the Scaled Fast Fourier Transform (SFFT) to reconstruct larger volumes without increasing the number of samples stored in memory. All algorithms introduced in this paper preserve the computational complexity of FFT-based methods, ensuring scalability for practical NLOS imaging applications. |
| 2501.05246 | Domain-Incremental Semantic Segmentation for Autonomous Driving under Adverse Driving Conditions | cs.CV | Semantic segmentation for autonomous driving becomes even more challenging under adverse driving conditions. Standard models trained on data recorded under ideal conditions show deteriorated performance in unfavorable weather or illumination. Fine-tuning on the new task or condition would overwrite the previously learned information, resulting in catastrophic forgetting. Adapting to the new conditions through traditional domain adaptation methods improves the performance on the target domain at the expense of the source domain. Addressing these issues, we propose an architecture-based domain-incremental learning approach called Progressive Semantic Segmentation (PSS). PSS is a task-agnostic, dynamically growing collection of domain-specific segmentation models. The task of inferring the domain and subsequently selecting the appropriate module for segmentation is carried out using a collection of convolutional autoencoders. We extensively evaluate our proposed approach using several datasets at varying levels of granularity in the categorization of adverse driving conditions. Furthermore, we demonstrate the generalization of the proposed approach to similar and unseen domains. |
| 2501.05247 | Online Prompt Selection for Program Synthesis | cs.AI cs.SE | Large Language Models (LLMs) demonstrate impressive capabilities in the domain of program synthesis. This level of performance is not, however, universal across all tasks, all LLMs, and all prompting styles. There are many areas where one LLM dominates, one prompting style dominates, or where calling a symbolic solver is a better choice than an LLM. A key challenge for the user, then, is to identify not only whether an LLM is the right choice of solver and which LLM to call for a given synthesis task, but also the right way to call it. A non-expert user who makes the wrong choice incurs a cost both in terms of results (number of tasks solved, and the time it takes to solve them) and financial cost, if using a closed-source language model via a commercial API. We frame this choice as an online learning problem. We use a multi-armed bandit algorithm to select which symbolic solver, or LLM and prompt combination, to deploy in order to maximize a given reward function (which may prioritize solving time, number of synthesis tasks solved, or financial cost of solving). We implement an instance of this approach, called CYANEA, and evaluate it on synthesis queries from the ranking function synthesis literature, from the syntax-guided synthesis competition, and on fresh, unseen queries generated from SMT problems. CYANEA solves 37.2% more queries than the best single solver and achieves results within 4% of the virtual best solver. |
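The row above frames solver and prompt selection as a multi-armed bandit problem. As a minimal sketch only (CYANEA's actual algorithm and reward function are not specified here), UCB1 over a set of hypothetical solver/prompt "arms" with simulated Bernoulli rewards looks like this:

```python
import math
import random

def ucb1_select(counts, rewards, t):
    """Pick the arm maximizing mean reward plus exploration bonus."""
    for arm, n in enumerate(counts):
        if n == 0:
            return arm  # play every arm once before using the bound
    return max(range(len(counts)),
               key=lambda a: rewards[a] / counts[a]
               + math.sqrt(2.0 * math.log(t) / counts[a]))

def run_bandit(arm_success_probs, steps, seed=0):
    """Simulate choosing among solvers whose 'reward' is solving a query."""
    rng = random.Random(seed)
    k = len(arm_success_probs)
    counts, rewards = [0] * k, [0.0] * k
    for t in range(1, steps + 1):
        arm = ucb1_select(counts, rewards, t)
        reward = 1.0 if rng.random() < arm_success_probs[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
    return counts

# Hypothetical arms: a symbolic solver, LLM A with prompt 1,
# and LLM B with prompt 2, solving 20%, 50%, and 90% of queries.
counts = run_bandit([0.2, 0.5, 0.9], steps=2000)
```

Over time the bandit concentrates its calls on the arm with the highest empirical reward while still occasionally exploring the others, which is the trade-off the abstract's online-learning framing relies on.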
| 2501.05248 | Deriving Coding-Specific Sub-Models from LLMs using Resource-Efficient Pruning | cs.LG cs.AI cs.SE | Large Language Models (LLMs) have demonstrated exceptional performance in various complex code generation tasks. However, their broader adoption is limited by significant computational demands and high resource requirements, particularly memory and processing power. To mitigate such requirements, model pruning techniques are used to create more compact models with significantly fewer parameters. However, current approaches do not focus on the efficient extraction of programming-language-specific sub-models. In this work, we explore the idea of efficiently deriving coding-specific sub-models through unstructured pruning (i.e., Wanda). We investigate the impact of different domain-specific calibration datasets on pruning outcomes across three distinct domains and extend our analysis to extracting four language-specific sub-models: Python, Java, C++, and JavaScript. We are the first to efficiently extract programming-language-specific sub-models using appropriate calibration datasets while maintaining acceptable accuracy with respect to the full models. We are also the first to provide analytical evidence that domain-specific tasks activate distinct regions within LLMs, supporting the creation of specialized sub-models through unstructured pruning. We believe that this work has significant potential to enhance LLM accessibility for coding by reducing computational requirements to enable local execution on consumer-grade hardware, and by supporting faster inference times critical for real-time development feedback. |
| 2501.05249 | RAG-WM: An Efficient Black-Box Watermarking Approach for Retrieval-Augmented Generation of Large Language Models | cs.CR cs.AI | In recent years, tremendous success has been witnessed in Retrieval-Augmented Generation (RAG), widely used to enhance Large Language Models (LLMs) in domain-specific, knowledge-intensive, and privacy-sensitive tasks. However, attackers may steal those valuable RAGs and deploy or commercialize them, making it essential to detect Intellectual Property (IP) infringement. Most existing ownership protection solutions, such as watermarks, are designed for relational databases and texts. They cannot be directly applied to RAGs because relational database watermarks require white-box access to detect IP infringement, which is unrealistic for the knowledge base in RAGs. Meanwhile, post-processing by the adversary's deployed LLMs typically destroys text watermark information. To address those problems, we propose a novel black-box "knowledge watermark" approach, named RAG-WM, to detect IP infringement of RAGs. RAG-WM uses a multi-LLM interaction framework, comprising a Watermark Generator, Shadow LLM & RAG, and Watermark Discriminator, to create watermark texts based on watermark entity-relationship tuples and inject them into the target RAG. We evaluate RAG-WM across three domain-specific and two privacy-sensitive tasks on four benchmark LLMs. Experimental results show that RAG-WM effectively detects the stolen RAGs in various deployed LLMs. Furthermore, RAG-WM is robust against paraphrasing, unrelated content removal, knowledge insertion, and knowledge expansion attacks. Lastly, RAG-WM can also evade watermark detection approaches, highlighting its promising application in detecting IP infringement of RAG systems. |
| 2501.05252 | From Scientific Texts to Verifiable Code: Automating the Process with Transformers | cs.SE cs.AI cs.LO | Despite the vast body of research literature proposing algorithms with formal guarantees, the amount of verifiable code in today's systems remains minimal. This discrepancy stems from the inherent difficulty of verifying code, particularly due to the time-consuming nature and strict formalism of the proof details that formal verification tools require. However, the emergence of transformers in Large Language Models presents a promising solution to this challenge. In this position paper, we argue that transformers have the potential to read research papers that propose algorithms with formal proofs and translate these proofs into verifiable code. We leverage transformers first to build a formal structure of the proof using the original text from the paper, and then to handle the tedious, low-level aspects of proofs that are often omitted by humans. We argue that this approach can significantly reduce the barrier to formal verification. The above idea of reading papers to write verifiable code opens new avenues for automating the verification of complex systems, enabling a future where formally verified algorithms from academic research can more seamlessly transition into real-world software systems, thereby improving code reliability and security. |
| 2501.05255 | CallNavi: A Study and Challenge on Function Calling Routing and Invocation in Large Language Models | cs.SE cs.CL | Interacting with a software system via a chatbot can be challenging, especially when the chatbot needs to generate API calls, in the right order and with the right parameters, to communicate with the system. API calling in chatbot systems poses significant challenges, particularly in complex, multi-step tasks requiring accurate API selection and execution. We contribute to this domain in three ways: first, by introducing a novel dataset designed to assess models on API function selection, parameter generation, and nested API calls; second, by benchmarking state-of-the-art language models across varying levels of complexity to evaluate their performance in API function generation and parameter accuracy; and third, by proposing an enhanced API routing method that combines general-purpose large language models for API selection with fine-tuned models for parameter generation, together with prompt-engineering techniques. These approaches lead to substantial improvements in handling complex API tasks, offering practical advancements for real-world API-driven chatbot systems. |
| 2501.05258 | Automating the Detection of Code Vulnerabilities by Analyzing GitHub Issues | cs.SE cs.AI cs.CR | In today's digital landscape, the importance of timely and accurate vulnerability detection has significantly increased. This paper presents a novel approach that leverages transformer-based models and machine learning techniques to automate the identification of software vulnerabilities by analyzing GitHub issues. We introduce a new dataset specifically designed for classifying GitHub issues relevant to vulnerability detection. We then examine various classification techniques to determine their effectiveness. The results demonstrate the potential of this approach for real-world application in early vulnerability detection, which could substantially reduce the window of exploitation for software vulnerabilities. This research makes a key contribution to the field by providing a scalable and computationally efficient framework for automated detection, enabling the prevention of compromised software usage before official notifications. This work has the potential to enhance the security of open-source software ecosystems. |
2501.05260
|
Enhancing Plagiarism Detection in Marathi with a Weighted Ensemble of
TF-IDF and BERT Embeddings for Low-Resource Language Processing
|
cs.CL cs.AI cs.LG
|
Plagiarism involves using another person's work or concepts without proper
attribution, presenting them as original creations. With the growing amount of
data communicated in regional languages such as Marathi -- one of India's
regional languages -- it is crucial to design robust plagiarism detection
systems tailored for low-resource languages. Language models like Bidirectional
Encoder Representations from Transformers (BERT) have demonstrated exceptional
capability in text representation and feature extraction, making them essential
tools for semantic analysis and plagiarism detection. However, the application
of BERT for low-resource languages remains under-explored, particularly in the
context of plagiarism detection. This paper presents a method to enhance the
accuracy of plagiarism detection for Marathi texts using BERT sentence
embeddings in conjunction with Term Frequency-Inverse Document Frequency
(TF-IDF) feature representation. This approach effectively captures
statistical, semantic, and syntactic aspects of text features through a
weighted voting ensemble of machine learning models.
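The statistical half of the described ensemble can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the whitespace tokenizer, the smoothed IDF term, and the 0.4/0.6 weights are assumptions, and the semantic (BERT) similarity is left as an input that would come from sentence embeddings in practice.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build smoothed TF-IDF vectors (dicts term -> weight) for a small corpus."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: (c / len(toks)) * (math.log((1 + n) / (1 + df[t])) + 1)
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    num = sum(u[t] * v.get(t, 0.0) for t in u)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def ensemble_score(tfidf_sim, embed_sim, w_stat=0.4, w_sem=0.6):
    """Weighted combination of a statistical and a semantic similarity score."""
    return w_stat * tfidf_sim + w_sem * embed_sim
```

A suspicious document would be scored against each source document by feeding both similarities into `ensemble_score` and thresholding the result.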
|
2501.05262
|
QMDB: Quick Merkle Database
|
cs.NI cs.DB
|
Quick Merkle Database (QMDB) addresses longstanding bottlenecks in blockchain
state management by integrating key-value (KV) and Merkle tree storage into a
single unified architecture. QMDB delivers a significant throughput improvement
over existing architectures, achieving up to 6X over the widely used RocksDB
and 8X over NOMT, a leading verifiable database. Its novel append-only
twig-based design enables one SSD read per state access, O(1) IOs for updates,
and in-memory Merkleization on a memory footprint as small as 2.3 bytes per
entry, enabling it to run on even modest consumer-grade PCs. QMDB scales
seamlessly across both commodity and enterprise hardware, achieving up to 2.28
million state updates per second. This performance enables support for 1
million token transfers per second (TPS), marking QMDB as the first solution
achieving such a milestone. QMDB has been benchmarked with workloads exceeding
15 billion entries (10X Ethereum's 2024 state) and has proven the capacity to
scale to 280 billion entries on a single server. Furthermore, QMDB introduces
historical proofs, unlocking the ability to query its blockchain's historical
state at the latest block. QMDB not only meets the demands of current
blockchains but also provides a robust foundation for building scalable,
efficient, and verifiable decentralized applications across diverse use cases.
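At the core of any verifiable KV store like the one above is Merkleization: hash the entries into leaves and fold pairwise up to a single root that commits to the whole state. The sketch below is the textbook binary Merkle tree, not QMDB's append-only twig design; the rule of duplicating the last node on odd levels is an assumption.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest of raw bytes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root hash of a binary Merkle tree over the given leaf byte-strings."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # odd number of nodes: duplicate the last
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any single entry changes the root, which is what lets a light client detect tampering with one 32-byte commitment.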
|
2501.05264
|
Towards Balanced Continual Multi-Modal Learning in Human Pose Estimation
|
cs.CV cs.AI
|
3D human pose estimation (3D HPE) has emerged as a prominent research topic,
particularly in the realm of RGB-based methods. However, RGB images are
susceptible to limitations such as sensitivity to lighting conditions and
potential user discomfort. Consequently, multi-modal sensing, which leverages
non-intrusive sensors, is gaining increasing attention. Nevertheless,
multi-modal 3D HPE still faces challenges, including modality imbalance and the
imperative for continual learning. In this work, we introduce a novel balanced
continual multi-modal learning method for 3D HPE, which harnesses the power of
RGB, LiDAR, mmWave, and WiFi. Specifically, we propose a Shapley value-based
contribution algorithm to quantify the contribution of each modality and
identify modality imbalance. To address this imbalance, we employ a re-learning
strategy. Furthermore, recognizing that raw data is prone to noise
contamination, we develop a novel denoising continual learning approach. This
approach incorporates a noise identification and separation module to mitigate
the adverse effects of noise and collaborates with the balanced learning
strategy to enhance optimization. Additionally, an adaptive EWC mechanism is
employed to alleviate catastrophic forgetting. We conduct extensive experiments
on the widely-adopted multi-modal dataset, MM-Fi, which demonstrate the
superiority of our approach in boosting 3D pose estimation and mitigating
catastrophic forgetting in complex scenarios. We will release our code.
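The Shapley value-based contribution idea can be illustrated with an exact computation over modality subsets. The accuracy table below is hypothetical and the paper's actual algorithm and numbers are not reproduced; this only demonstrates the marginal-contribution averaging that defines the Shapley value.

```python
from itertools import combinations
from math import factorial

def shapley_contributions(players, value):
    """Exact Shapley value of each player under the set function `value`."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Weight of coalitions of size k in the Shapley average.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Hypothetical validation accuracy for each modality subset (illustrative only).
ACC = {
    frozenset(): 0.0,
    frozenset({"rgb"}): 0.70,
    frozenset({"lidar"}): 0.55,
    frozenset({"rgb", "lidar"}): 0.80,
}
contrib = shapley_contributions(["rgb", "lidar"], lambda s: ACC[s])
```

A low Shapley value flags an under-contributing modality, which is the signal a re-learning strategy could then act on.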
|
2501.05265
|
Patch-GAN Transfer Learning with Reconstructive Models for Cloud Removal
|
cs.CV eess.IV
|
Cloud removal plays a crucial role in enhancing remote sensing image
analysis, yet accurately reconstructing cloud-obscured regions remains a
significant challenge. Recent advancements in generative models have made the
generation of realistic images increasingly accessible, offering new
opportunities for this task. Given the conceptual alignment between image
generation and cloud removal tasks, generative models present a promising
approach for addressing cloud removal in remote sensing. In this work, we
propose a deep transfer learning approach built on a generative adversarial
network (GAN) framework to explore the potential of the novel masked
autoencoder (MAE) image reconstruction model in cloud removal. Due to the
complexity of remote sensing imagery, we further propose using a patch-wise
discriminator to determine whether each patch of the image is real or not. The
proposed reconstructive transfer learning approach demonstrates significant
improvements in cloud removal performance compared to other GAN-based methods.
Additionally, whilst direct comparisons with some of the state-of-the-art cloud
removal techniques are limited due to unclear details regarding their
train/test data splits, the proposed model achieves competitive results based
on available benchmarks.
|
2501.05269
|
CellViT++: Energy-Efficient and Adaptive Cell Segmentation and
Classification Using Foundation Models
|
cs.CV cs.LG
|
Digital Pathology is a cornerstone in the diagnosis and treatment of
diseases. A key task in this field is the identification and segmentation of
cells in hematoxylin and eosin-stained images. Existing methods for cell
segmentation often require extensive annotated datasets for training and are
limited to a predefined cell classification scheme. To overcome these
limitations, we propose $\text{CellViT}^{{\scriptscriptstyle ++}}$, a framework
for generalized cell segmentation in digital pathology.
$\text{CellViT}^{{\scriptscriptstyle ++}}$ utilizes Vision Transformers with
foundation models as encoders to compute deep cell features and segmentation
masks simultaneously. To adapt to unseen cell types, we rely on a
computationally efficient approach. It requires minimal data for training and
leads to a drastically reduced carbon footprint. We demonstrate excellent
performance on seven different datasets, covering a broad spectrum of cell
types, organs, and clinical settings. The framework achieves remarkable
zero-shot segmentation and data-efficient cell-type classification.
Furthermore, we show that $\text{CellViT}^{{\scriptscriptstyle ++}}$ can
leverage immunofluorescence stainings to generate training datasets without the
need for pathologist annotations. The automated dataset generation approach
surpasses the performance of networks trained on manually labeled data,
demonstrating its effectiveness in creating high-quality training datasets
without expert annotations. To advance digital pathology,
$\text{CellViT}^{{\scriptscriptstyle ++}}$ is available as an open-source
framework featuring a user-friendly, web-based interface for visualization and
annotation. The code is available under
https://github.com/TIO-IKIM/CellViT-plus-plus.
|
2501.05272
|
Solving the Catastrophic Forgetting Problem in Generalized Category
Discovery
|
cs.CV
|
Generalized Category Discovery (GCD) aims to identify a mix of known and
novel categories within unlabeled data sets, providing a more realistic setting
for image recognition. Essentially, GCD needs to remember existing patterns
thoroughly to recognize novel categories. The recent state-of-the-art method
SimGCD transfers knowledge from known-class data to the learning of novel
classes through debiased learning. However, some patterns are catastrophically
forgotten during adaptation, leading to poor performance on novel-category
classification. To address this issue, we propose a novel learning approach,
LegoGCD, which is seamlessly integrated into previous methods to enhance the
discrimination of novel classes while maintaining performance on previously
encountered known classes. Specifically, we design two types of techniques
termed Local Entropy Regularization (LER) and Dual-views Kullback-Leibler
divergence constraint (DKL). The LER optimizes the distribution of potential
known class samples in unlabeled data, thus ensuring the preservation of
knowledge related to known categories while learning novel classes. Meanwhile,
DKL introduces a Kullback-Leibler divergence term to encourage the model to
produce similar prediction distributions for two views sampled from the same
image. In this way, it avoids mismatched predictions and generates more
reliable potential known class samples simultaneously. Extensive experiments
validate that the proposed LegoGCD effectively addresses the known category
forgetting issue across all datasets, e.g., delivering a 7.74% and 2.51% accuracy
boost on known and novel classes in CUB, respectively. Our code is available
at: https://github.com/Cliffia123/LegoGCD.
|
2501.05278
|
Off-Policy Evaluation and Counterfactual Methods in Dynamic Auction
Environments
|
cs.AI cs.LG q-fin.CP
|
Counterfactual estimators are critical for learning and refining policies
using logged data, a process known as Off-Policy Evaluation (OPE). OPE allows
researchers to assess new policies without costly experiments, speeding up the
evaluation process. Online experimental methods, such as A/B tests, are
effective but often slow, thus delaying the policy selection and optimization
process.
In this work, we explore the application of OPE methods in the context of
resource allocation in dynamic auction environments. Given the competitive
nature of environments where rapid decision-making is crucial for gaining a
competitive edge, the ability to quickly and accurately assess algorithmic
performance is essential. By utilizing counterfactual estimators as a
preliminary step before conducting A/B tests, we aim to streamline the
evaluation process, reduce the time and resources required for experimentation,
and enhance confidence in the chosen policies. Our investigation focuses on the
feasibility and effectiveness of using these estimators to predict the outcomes
of potential resource allocation strategies, evaluate their performance, and
facilitate more informed decision-making in policy selection. Motivated by the
outcomes of our initial study, we envision an advanced analytics system
designed to seamlessly and dynamically assess new resource allocation
strategies and policies.
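The abstract does not name a specific counterfactual estimator, so as a hedged sketch, below is the standard inverse propensity scoring (IPS) estimator that such an OPE pipeline typically starts from; the auction framing (bids as actions, revenue as reward) is an assumption.

```python
def ips_estimate(logs, target_prob):
    """Inverse propensity scoring: estimate a target policy's value from logs.

    logs: iterable of (context, action, reward, logging_prob) tuples, where
          logging_prob is the probability the logging policy chose `action`.
    target_prob(context, action): probability the target policy would choose
          `action` in that context.
    """
    total, n = 0.0, 0
    for context, action, reward, logging_prob in logs:
        # Reweight each logged reward by the policy-probability ratio.
        total += reward * target_prob(context, action) / logging_prob
        n += 1
    return total / n
```

With a uniform logging policy over two bid levels, the estimator recovers the value of a deterministic "always bid high" target policy without running it live.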
|
2501.05279
|
Learning convolution operators on compact Abelian groups
|
cs.LG stat.ML
|
We consider the problem of learning convolution operators associated to
compact Abelian groups. We study a regularization-based approach and provide
corresponding learning guarantees, discussing natural regularity conditions on
the convolution kernel. More precisely, we assume the convolution kernel is a
function in a translation invariant Hilbert space and analyze a natural ridge
regression (RR) estimator. Building on existing results for RR, we characterize
the accuracy of the estimator in terms of finite sample bounds. Interestingly,
regularity assumptions which are classical in the analysis of RR, have a novel
and natural interpretation in terms of space/frequency localization.
Theoretical results are illustrated by numerical simulations.
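Schematically, the data model and the ridge regression estimator in this setting can be written as follows (the notation here is mine, not the paper's):

```latex
% Observations: y_i = f_\star * x_i + \varepsilon_i, with * convolution on the group G.
\hat{f}_\lambda \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}}\;
  \frac{1}{n} \sum_{i=1}^{n} \bigl\| y_i - f * x_i \bigr\|_{L^2(G)}^{2}
  \;+\; \lambda \, \| f \|_{\mathcal{H}}^{2}
```

On a compact Abelian group the Fourier transform diagonalizes convolution, so this problem decouples across characters (frequencies), which is where the space/frequency reading of the regularity assumptions enters.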
|
2501.05281
|
Comparison Study: Glacier Calving Front Delineation in Synthetic
Aperture Radar Images With Deep Learning
|
cs.CV cs.LG
|
Calving front position variation of marine-terminating glaciers is an
indicator of ice mass loss and a crucial parameter in numerical glacier models.
Deep Learning (DL) systems can automatically extract this position from
Synthetic Aperture Radar (SAR) imagery, enabling continuous, weather- and
illumination-independent, large-scale monitoring. This study presents the first
comparison of DL systems on a common calving front benchmark dataset. A
multi-annotator study with ten annotators is performed to contrast the
best-performing DL system against human performance. The best DL model's
outputs deviate 221 m on average, while the average deviation of the human
annotators is 38 m. This significant difference shows that current DL systems
do not yet match human performance and that further research is needed to
enable fully automated monitoring of glacier calving fronts. The study of
Vision Transformers, foundation models, and the inclusion and processing
strategy of more information are identified as avenues for future research.
|
2501.05285
|
Pitch Plane Trajectory Tracking Control for Sounding Rockets via
Adaptive Feedback Linearization
|
eess.SY cs.SY math.DS
|
This paper proposes a pitch plane trajectory tracking control solution for
suborbital launch vehicles relying on adaptive feedback linearization.
Initially, the 2D dynamics and kinematics for a single-engine,
thrust-vector-controlled sounding rocket are obtained for control design
purposes. Then, an inner-outer control strategy, which simultaneously tackles
attitude and position control, is adopted, with the inner-loop comprising the
altitude and pitch control and the outer-loop addressing the horizontal
(downrange) position control. Feedback linearization is used to cancel out the
non-linearities in both the inner and outer dynamics. Making use of Lyapunov
stability theory, an adaptation law, which provides online estimates on the
inner-loop aerodynamic uncertainty, is jointly designed with the output
tracking controller via adaptive backstepping, ensuring global reference
tracking in the region where the feedback linearization is well-defined. The
zero dynamics of the inner-stabilized system are then exploited to obtain the
outer-loop dynamics and derive a Linear Quadratic Regulator (LQR) with integral
action, which can stabilize them as well as reject external disturbances. In
the outermost loop, the estimate on the correspondent aerodynamic uncertainty
is indirectly obtained by using the inner loop estimates together with known
aerodynamics relations. The resulting inner-outer position control solution is
proven to be asymptotically stable in the region of interest. Using a
single-stage sounding rocket, propelled by a liquid engine, as reference
vehicle, different mission scenarios are tested in a simulation environment to
verify the adaptability of the proposed control strategy. The system is able to
track the requested trajectories while rejecting external wind disturbances.
Furthermore, the need to re-tune the control gains in between different mission
scenarios is minimal to none.
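As a toy illustration of the outer-loop LQR idea only: the paper designs a continuous-time LQR with integral action on the full pitch-plane dynamics, whereas the sketch below is a scalar discrete-time analogue with made-up numbers, showing how a stabilizing gain falls out of iterating the Riccati equation.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Scalar discrete-time LQR gain via fixed-point Riccati iteration.

    System: x[t+1] = a*x[t] + b*u[t]; cost sum of q*x^2 + r*u^2.
    """
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # candidate feedback gain
        p = q + a * p * (a - b * k)         # Riccati update
    return k
```

With u = -k x, the closed-loop pole a - b k lands inside the unit circle, so the regulated state decays even when the open-loop system (a > 1) is unstable.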
|
2501.05289
|
Unraveling the Impact of Visual Complexity on Search as Learning
|
cs.IR
|
Information search has become essential for learning and knowledge
acquisition, offering broad access to information and learning resources. The
visual complexity of web pages is known to influence search behavior, with
previous work suggesting that searchers make evaluative judgments within the
first second on a page. However, there is a significant gap in our
understanding of how visual complexity impacts searches specifically conducted
with a learning intent. This gap is particularly relevant for the development
of optimized information retrieval (IR) systems that effectively support
educational objectives. To address this research need, we model visual
complexity and aesthetics via a diverse set of features, investigating their
relationship with search behavior during learning-oriented web sessions. Our
study utilizes a publicly available dataset from a lab study where participants
learned about thunderstorm formation. Our findings reveal that while content
relevance is the most significant predictor for knowledge gain, sessions with
less visually complex pages are associated with higher learning success. This
observation applies to features associated with the layout of web pages rather
than to simpler features (e.g., number of images). The reported results shed
light on the impact of visual complexity on learning-oriented searches,
informing the design of more effective IR systems for educational contexts. To
foster reproducibility, we release our source code
(https://github.com/TIBHannover/sal_visual_complexity).
|
2501.05292
|
Detection of Rumors and Their Sources in Social Networks: A
Comprehensive Survey
|
cs.SI
|
With the recent advancements in social network platform technology, an
overwhelming amount of information is spreading rapidly. In this situation, it
can become increasingly difficult to discern what information is false or true.
If false information proliferates significantly, it can lead to undesirable
outcomes. Hence, when we receive some information, we can pose the following
two questions: $(i)$ Is the information true? $(ii)$ If not, who initially
spread that information? The first problem is the rumor detection issue,
while the second is the rumor source detection problem. A rumor-detection
problem involves identifying and mitigating false or misleading information
spread via various communication channels, particularly online platforms and
social media. Rumors can range from harmless ones to deliberately misleading
content aimed at deceiving or manipulating audiences. Detecting misinformation
is crucial for maintaining the integrity of information ecosystems and
preventing harmful effects such as the spread of false beliefs, polarization,
and even societal harm. Therefore, it is very important to quickly distinguish
such misinformation while simultaneously finding its source to block it from
spreading on the network. However, most of the existing surveys have analyzed
these two issues separately. In this work, we first survey the existing
research on the rumor-detection and rumor source detection problems with joint
detection approaches, simultaneously. % This survey deals with these two issues
together so that their relationship can be observed and it provides how the two
problems are similar and different. The limitations arising from the rumor
detection, rumor source detection, and their combination problems are also
explained, and some challenges to be addressed in future works are presented.
|
2501.05295
|
GaussDB-Global: A Geographically Distributed Database System
|
cs.DB
|
Geographically distributed database systems use remote replication to protect
against regional failures. These systems are sensitive to severe latency
penalties caused by centralized transaction management, remote access to
sharded data, and log shipping over long distances. To tackle these issues, we
present GaussDB-Global, a sharded geographically distributed database system
with asynchronous replication, for OLTP applications. To tackle the transaction
management bottleneck, we take a decentralized approach using synchronized
clocks. Our system can seamlessly transition between centralized and
decentralized transaction management, providing efficient fault tolerance and
streamlining deployment. To alleviate the remote read and log shipping issues,
we support reads on asynchronous replicas with strong consistency, tunable
freshness guarantees, and dynamic load balancing. Our experimental results on a
geographically distributed cluster show that our approach provides up to 14x
higher read throughput, and 50% more TPC-C throughput compared to our baseline.
|
2501.05300
|
Local particle refinement in terramechanical simulations
|
cs.CE physics.comp-ph
|
The discrete element method (DEM) is a powerful tool for simulating granular
soils, but its high computational demand often results in extended simulation
times. While the effect of particle size has been extensively studied, the
potential benefits of spatially scaling particle sizes are less explored. We
systematically investigate a local particle refinement method's impact on
reducing computational effort while maintaining accuracy. We first conduct
triaxial tests to verify that bulk mechanical properties are preserved under
local particle refinement. Then, we perform pressure-sinkage and
shear-displacement tests, comparing our method to control simulations with
homogeneous particle size. We evaluate $36$ different DEM beds with varying
aggressiveness in particle refinement. Our results show that this approach,
depending on refinement aggressiveness, can significantly reduce particle count
by $2.3$ to $25$ times and simulation times by $3.1$ to $43$ times, with
normalized errors ranging from $3.4$\% to $11$\% compared to high-resolution
reference simulations. The approach maintains a high resolution at the soil
surface, where interaction is high, while allowing larger particles below the
surface. The results demonstrate that substantial computational savings can be
achieved without significantly compromising simulation accuracy. This method
can enhance the efficiency of DEM simulations in terramechanics applications.
|
2501.05309
|
Private Selection with Heterogeneous Sensitivities
|
cs.CR cs.DS cs.LG
|
Differentially private (DP) selection involves choosing a high-scoring
candidate from a finite candidate pool, where each score depends on a sensitive
dataset. This problem arises naturally in a variety of contexts including model
selection, hypothesis testing, and within many DP algorithms. Classical
methods, such as Report Noisy Max (RNM), assume all candidates' scores are
equally sensitive to changes in a single individual's data, but this often
isn't the case. To address this, algorithms like the Generalised Exponential
Mechanism (GEM) leverage variability in candidate sensitivities. However, we
observe that while these algorithms can outperform RNM in some situations, they
may underperform in others - they can even perform worse than random selection.
In this work, we explore how the distribution of scores and sensitivities
impacts DP selection mechanisms. In all settings we study, we find that there
exists a mechanism that utilises heterogeneity in the candidate sensitivities
that outperforms standard mechanisms like RNM. However, no single mechanism
uniformly outperforms RNM. We propose using the correlation between the scores
and sensitivities as the basis for deciding which DP selection mechanism to
use. Further, we design a slight variant of GEM, modified GEM that generally
performs well whenever GEM performs poorly. Relying on the correlation
heuristic we propose combined GEM, which adaptively chooses between GEM and
modified GEM and outperforms both in polarised settings.
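For concreteness, the classical Report Noisy Max baseline the paper compares against can be sketched as below; the Laplace scale of 2*sensitivity/epsilon is the standard conservative choice, and GEM and its variants are not reproduced here.

```python
import random

def report_noisy_max(scores, sensitivity, epsilon, rng=None):
    """Classical RNM: add Laplace(2*sensitivity/epsilon) noise to every score
    and release only the index of the noisy maximum."""
    rng = rng or random.Random()
    scale = 2.0 * sensitivity / epsilon
    # Laplace sample as the difference of two unit exponentials, scaled.
    noisy = [s + scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
             for s in scores]
    return max(range(len(scores)), key=noisy.__getitem__)
```

Note the uniform `sensitivity` argument: RNM assumes every candidate's score is equally sensitive, which is exactly the assumption the heterogeneous-sensitivity mechanisms in this paper relax.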
|
2501.05313
|
Optimizing Distributed Deployment of Mixture-of-Experts Model Inference
in Serverless Computing
|
cs.DC cs.LG
|
With the advancement of serverless computing, running machine learning (ML)
inference services over a serverless platform has been advocated, given its
labor-free scalability and cost effectiveness. Mixture-of-Experts (MoE) models
have been a dominant type of model architectures to enable large models
nowadays, with parallel expert networks. Serving large MoE models on serverless
computing is potentially beneficial, but has been underexplored due to
substantial challenges in handling the skewed expert popularity and
scatter-gather communication bottleneck in MoE model execution, for
cost-efficient serverless MoE deployment and performance guarantees. We study
optimized MoE model deployment and distributed inference serving on a
serverless platform, effectively predicting expert selection, pipelining
communication with model execution, and minimizing the overall billed cost of
serving MoE models. In particular, we propose a Bayesian optimization framework
with multi-dimensional epsilon-greedy search to learn expert selections and
optimal MoE deployment achieving optimal billed cost, including: 1) a Bayesian
decision-making method for predicting expert popularity; 2) flexibly pipelined
scatter-gather communication; and 3) an optimal model deployment algorithm for
distributed MoE serving. Extensive experiments on AWS Lambda show that our
designs reduce the billed cost of all MoE layers by at least 75.67% compared to
CPU clusters while maintaining satisfactory inference throughput. Compared
to LambdaML in serverless computing, our designs achieve 43.41% lower cost
with a throughput decrease of at most 18.76%.
|
2501.05323
|
Distributed Learning and Inference Systems: A Networking Perspective
|
cs.LG cs.NI
|
Machine learning models have achieved, and in some cases surpassed,
human-level performance in various tasks, mainly through centralized training
of static models and the use of large models stored in centralized clouds for
inference. However, this centralized approach has several drawbacks, including
privacy concerns, high storage demands, a single point of failure, and
significant computing requirements. These challenges have driven interest in
developing alternative decentralized and distributed methods for AI training
and inference. Distribution introduces additional complexity, as it requires
managing multiple moving parts. To address these complexities and fill a gap in
the development of distributed AI systems, this work proposes a novel
framework, Data and Dynamics-Aware Inference and Training Networks (DA-ITN).
The different components of DA-ITN and their functions are explored, and the
associated challenges and research areas are highlighted.
|
2501.05325
|
The explanation dialogues: an expert focus study to understand
requirements towards explanations within the GDPR
|
cs.CY cs.LG
|
Explainable AI (XAI) provides methods to understand non-interpretable machine
learning models. However, we have little knowledge about what legal experts
expect from these explanations, including their legal compliance with, and
value against European Union legislation. To close this gap, we present the
Explanation Dialogues, an expert focus study to uncover the expectations,
reasoning, and understanding of legal experts and practitioners towards XAI,
with a specific focus on the European General Data Protection Regulation. The
study consists of an online questionnaire and follow-up interviews, and is
centered around a use-case in the credit domain. We extract both a set of
hierarchical and interconnected codes using grounded theory, and present the
standpoints of the participating experts towards XAI. We find that the
presented explanations are hard to understand and lack information, and discuss
issues that can arise from the different interests of the data controller and
subject. Finally, we present a set of recommendations for developers of XAI
methods, and indications of legal areas of discussion. Among others,
recommendations address the presentation, choice, and content of an
explanation, technical risks as well as the end-user, while we provide legal
pointers to the contestability of explanations, transparency thresholds,
intellectual property rights as well as the relationship between involved
parties.
|
2501.05329
|
Knowledge Transfer in Model-Based Reinforcement Learning Agents for
Efficient Multi-Task Learning
|
cs.LG cs.RO
|
We propose an efficient knowledge transfer approach for model-based
reinforcement learning, addressing the challenge of deploying large world
models in resource-constrained environments. Our method distills a
high-capacity multi-task agent (317M parameters) into a compact 1M parameter
model, achieving state-of-the-art performance on the MT30 benchmark with a
normalized score of 28.45, a substantial improvement over the original 1M
parameter model's score of 18.93. This demonstrates the ability of our
distillation technique to consolidate complex multi-task knowledge effectively.
Additionally, we apply FP16 post-training quantization, reducing the model size
by 50% while maintaining performance. Our work bridges the gap between the
power of large models and practical deployment constraints, offering a scalable
solution for efficient and accessible multi-task reinforcement learning in
robotics and other resource-limited domains.
|
2501.05332
|
AnCoGen: Analysis, Control and Generation of Speech with a Masked
Autoencoder
|
cs.SD cs.AI eess.AS
|
This article introduces AnCoGen, a novel method that leverages a masked
autoencoder to unify the analysis, control, and generation of speech signals
within a single model. AnCoGen can analyze speech by estimating key attributes,
such as speaker identity, pitch, content, loudness, signal-to-noise ratio, and
clarity index. In addition, it can generate speech from these attributes and
allow precise control of the synthesized speech by modifying them. Extensive
experiments demonstrated the effectiveness of AnCoGen across speech
analysis-resynthesis, pitch estimation, pitch modification, and speech
enhancement.
|
2501.05333
|
Stability and List-Replicability for Agnostic Learners
|
cs.LG
|
Two seminal papers--Alon, Livni, Malliaris, Moran (STOC 2019) and Bun, Livni,
and Moran (FOCS 2020)--established the equivalence between online learnability
and globally stable PAC learnability in binary classification. However, Chase,
Chornomaz, Moran, and Yehudayoff (STOC 2024) recently showed that this
equivalence does not hold in the agnostic setting. Specifically, they proved
that in the agnostic setting, only finite hypothesis classes are globally
stable learnable. Therefore, agnostic global stability is too restrictive to
capture interesting hypothesis classes.
To address this limitation, Chase \emph{et al.} introduced two relaxations of
agnostic global stability. In this paper, we characterize the classes that are
learnable under their proposed relaxed conditions, resolving the two open
problems raised in their work.
First, we prove that in the setting where the stability parameter can depend
on the excess error (the gap between the learner's error and the best
achievable error by the hypothesis class), agnostic stability is fully
characterized by the Littlestone dimension. Consequently, as in the realizable
case, this form of learnability is equivalent to online learnability.
As part of the proof of this theorem, we strengthen the celebrated result of
Bun et al. by showing that classes with infinite Littlestone dimension are not
stably PAC learnable, even if we allow the stability parameter to depend on the
excess error.
For the second relaxation proposed by Chase et al., we prove that only finite
hypothesis classes are globally stable learnable even if we restrict the
agnostic setting to distributions with small population loss.
|
2501.05334
|
The Bakers and Millers Game with Restricted Locations
|
cs.GT cs.AI
|
We study strategic location choice by customers and sellers, termed the
Bakers and Millers Game in the literature. In our generalized setting, each
miller can freely choose any location for setting up a mill, while each baker
is restricted in the choice of location for setting up a bakery. For optimal
bargaining power, a baker would like to select a location with many millers to
buy flour from and with little competition from other bakers. Likewise, a
miller aims for a location with many bakers and few competing millers. Thus,
both types of agents choose locations to optimize the ratio of agents of
opposite type divided by agents of the same type at their chosen location.
Originally raised in the context of Fractional Hedonic Games, the Bakers and
Millers Game has applications that range from commerce to product design.
We study the impact of location restrictions on the properties of the game.
While pure Nash equilibria trivially exist in the setting without location
restrictions, we show via a sophisticated, efficient algorithm that even the
more challenging restricted setting admits equilibria. Moreover, the computed
equilibrium approximates the optimal social welfare by a factor of at most
$2\left(\frac{e}{e-1}\right)$. Furthermore, we give tight bounds on the price
of anarchy/stability.
On the conceptual side, the location choice feature adds a new layer to the
standard setting of Hedonic Games, in the sense that agents that select the
same location form a coalition. This allows one to naturally restrict the possible
coalitions that can be formed. With this, our model generalizes simple
symmetric Fractional Hedonic Games on complete bipartite valuation graphs and
also Hedonic Diversity Games with utilities single-peaked at 0. We believe that
this generalization is also a very interesting direction for other types of
Hedonic Games.
|
2501.05336
|
Stream Aligner: Efficient Sentence-Level Alignment via Distribution
Induction
|
cs.CL cs.AI cs.LG
|
The rapid advancement of large language models (LLMs) has led to significant
improvements in their capabilities, but also to increased concerns about their
alignment with human values and intentions. Current alignment strategies,
including adaptive training and inference-time methods, have demonstrated
potential in this area. However, these approaches still struggle to balance
deployment complexity and capability across various tasks and difficulties. In
this work, we introduce the Streaming Distribution Induce Aligner (Stream
Aligner), a novel alignment paradigm that combines efficiency with enhanced
performance in various tasks throughout the generation process. Stream Aligner
achieves dynamic sentence-level correction by using a small model to learn the
preferences of the suffix sentence, iteratively correcting the suffix sentence
output by the upstream model, and then using the corrected sentence to replace
the suffix sentence in subsequent generations. Compared to Aligner, our
experiments demonstrate that Stream Aligner reduces reliance on the
capabilities of additional models, enhances the reasoning abilities of LLMs,
and decreases latency during user interaction. Specifically, the Stream
Aligner-2B model achieved an improvement of 76.1% in helpfulness and 36.0% in
harmlessness on the tested Llama2-70B-chat model, and the Stream Aligner-8B model
achieved an improvement of 3.5% on the math ability of the tested
Llama3-70B-Instruct model.
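The sentence-level correction loop can be sketched as follows (a hedged illustration with stub models; in the real system the upstream model is a large LLM and the aligner is a small preference-trained model):

```python
# Hedged sketch of the Stream Aligner loop: a small aligner corrects each
# suffix sentence proposed by the upstream model, and the corrected sentence
# replaces the draft in the context for subsequent generation.

def upstream_generate(context):
    """Stub upstream model: propose the next (suffix) sentence."""
    return "draft sentence " + str(len(context))

def aligner_correct(context, sentence):
    """Stub aligner: correct the proposed suffix sentence."""
    return sentence.replace("draft", "aligned")

def stream_align(prompt, num_sentences=3):
    context = [prompt]
    for _ in range(num_sentences):
        draft = upstream_generate(context)            # upstream proposes
        corrected = aligner_correct(context, draft)   # aligner corrects
        context.append(corrected)                     # correction replaces draft
    return " ".join(context[1:])

print(stream_align("Q: explain alignment."))
```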
|
2501.05339
|
JAQ: Joint Efficient Architecture Design and Low-Bit Quantization with
Hardware-Software Co-Exploration
|
cs.CV
|
The co-design of neural network architectures, quantization precisions, and
hardware accelerators offers a promising approach to achieving an optimal
balance between performance and efficiency, particularly for model deployment
on resource-constrained edge devices. In this work, we propose the JAQ
Framework, which jointly optimizes the three critical dimensions. However,
effectively automating the design process across the vast search space of those
three dimensions poses significant challenges, especially when pursuing
extremely low-bit quantization. Specifically, the primary challenges include: (1)
Memory overhead on the software side: Low-precision quantization-aware training can
lead to significant memory usage due to storing large intermediate features and
latent weights for back-propagation, potentially causing memory exhaustion. (2)
Time-consuming search on the hardware side: The discrete nature of hardware
parameters and the complex interplay between compiler optimizations and
individual operators make the accelerator search time-consuming. To address
these issues, JAQ mitigates the memory overhead through a channel-wise sparse
quantization (CSQ) scheme, selectively applying quantization to the most
sensitive components of the model during optimization. Additionally, JAQ
designs BatchTile, which employs a hardware generation network to encode all
possible tiling modes, thereby speeding up the search for the optimal compiler
mapping strategy. Extensive experiments demonstrate the effectiveness of JAQ,
achieving approximately 7% higher Top-1 accuracy on ImageNet compared to
previous methods and reducing the hardware search time per iteration to 0.15
seconds.
|
2501.05359
|
CROPS: Model-Agnostic Training-Free Framework for Safe Image Synthesis
with Latent Diffusion Models
|
cs.CV
|
With advances in diffusion models, image generation has shown significant
performance improvements. This raises concerns about the potential abuse of
image generation, such as the creation of explicit or violent images, commonly
referred to as Not Safe For Work (NSFW) content. To address this, the Stable
Diffusion model includes several safety checkers to censor initial text prompts
and final output images generated from the model. However, recent research has
shown that these safety checkers have vulnerabilities against adversarial
attacks, allowing them to generate NSFW images. In this paper, we find that
these adversarial attacks are not robust to small changes in text prompts or
input latents. Based on this, we propose CROPS (Circular or RandOm Prompts for
Safety), a model-agnostic framework that easily defends against adversarial
attacks generating NSFW images without requiring additional training. Moreover,
we develop an approach that utilizes one-step diffusion models for efficient
NSFW detection (CROPS-1), further reducing computational resources. We
demonstrate the superiority of our method in terms of performance and
applicability.
|
2501.05360
|
On Corrigibility and Alignment in Multi Agent Games
|
cs.GT cs.AI
|
Corrigibility of autonomous agents is an under-explored part of system
design, with previous work focusing on single agent systems. It has been
suggested that uncertainty over the human preferences acts to keep the agents
corrigible, even in the face of human irrationality. We present a general
framework for modelling corrigibility in a multi-agent setting as a two-player
game in which the agents always have a move in which they can ask the human for
supervision. This is formulated as a Bayesian game for the purpose of
introducing uncertainty over the human's beliefs. We further analyse two specific
cases. First, a two-player corrigibility game, in which we want corrigibility
displayed in both agents for both common payoff (monotone) games and harmonic
games. Then we investigate an adversary setting, in which one agent is
considered to be a `defending' agent and the other an `adversary'. A general
result is provided for what belief over the games and human rationality the
defending agent is required to have to induce corrigibility.
|
2501.05361
|
No-Regret Linear Bandits under Gap-Adjusted Misspecification
|
cs.LG
|
This work studies linear bandits under a new notion of gap-adjusted
misspecification and is an extension of Liu et al. (2023). When the underlying
reward function is not linear, existing linear bandits work usually relies on a
uniform misspecification parameter $\epsilon$ that measures the sup-norm error
of the best linear approximation. This results in an unavoidable linear regret
whenever $\epsilon > 0$. We propose a more natural model of misspecification
which only requires the approximation error at each input $x$ to be
proportional to the suboptimality gap at $x$. It captures the intuition that,
for optimization problems, near-optimal regions should matter more and we can
tolerate larger approximation errors in suboptimal regions.
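Stated in symbols (our paraphrase of the definition; the notation $f$, $\theta_*$, and $\Delta$ is ours):

```latex
% \rho-gap-adjusted misspecification: the approximation error at each input x
% is bounded by \rho times the suboptimality gap at x, rather than uniformly.
\[
\bigl| f(x) - \theta_*^\top x \bigr| \;\le\; \rho \,\Delta(x),
\qquad
\Delta(x) := \max_{x'} f(x') - f(x),
\]
% whereas uniform misspecification requires
% \sup_x \bigl| f(x) - \theta_*^\top x \bigr| \le \epsilon.
```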
Quite surprisingly, we show that the classical LinUCB algorithm -- designed
for the realizable case -- is automatically robust against such
$\rho$-gap-adjusted misspecification with parameter $\rho$ diminishing at
$O(1/(d \sqrt{\log T}))$. It achieves a near-optimal $O(\sqrt{T})$ regret for
problems where the best-known regret is almost linear in the time horizon $T$. We
further advance this frontier by presenting a novel phased elimination-based
algorithm whose gap-adjusted misspecification parameter $\rho = O(1/\sqrt{d})$
does not scale with $T$. This algorithm attains optimal $O(\sqrt{T})$ regret
and is deployment-efficient, requiring only $\log T$ batches of exploration. It
also enjoys an adaptive $O(\log T)$ regret when a constant suboptimality gap
exists. Technically, our proof relies on a novel self-bounding argument that
bounds the part of the regret due to misspecification by the regret itself, and
a new inductive lemma that limits the misspecification error within the
suboptimality gap for all valid actions in each batch selected by G-optimal
design.
|
2501.05366
|
Search-o1: Agentic Search-Enhanced Large Reasoning Models
|
cs.AI cs.CL cs.IR
|
Large reasoning models (LRMs) like OpenAI-o1 have demonstrated impressive
long stepwise reasoning capabilities through large-scale reinforcement
learning. However, their extended reasoning processes often suffer from
knowledge insufficiency, leading to frequent uncertainties and potential
errors. To address this limitation, we introduce \textbf{Search-o1}, a
framework that enhances LRMs with an agentic retrieval-augmented generation
(RAG) mechanism and a Reason-in-Documents module for refining retrieved
documents. Search-o1 integrates an agentic search workflow into the reasoning
process, enabling dynamic retrieval of external knowledge when LRMs encounter
uncertain knowledge points. Additionally, due to the verbose nature of
retrieved documents, we design a separate Reason-in-Documents module to deeply
analyze the retrieved information before injecting it into the reasoning chain,
minimizing noise and preserving coherent reasoning flow. Extensive experiments
on complex reasoning tasks in science, mathematics, and coding, as well as six
open-domain QA benchmarks, demonstrate the strong performance of Search-o1.
This approach enhances the trustworthiness and applicability of LRMs in complex
reasoning tasks, paving the way for more reliable and versatile intelligent
systems. The code is available at
\url{https://github.com/sunnynexus/Search-o1}.
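The agentic retrieval loop described above can be sketched as follows (a hedged illustration; `reason_step`, `search`, and `reason_in_documents` are all stubs standing in for the LRM, the search engine, and the refinement module):

```python
# Hedged sketch of a Search-o1-style loop: reason until an uncertain
# knowledge point appears, retrieve, distill the noisy document, inject
# the distilled knowledge, and continue reasoning.

def reason_step(state):
    """Stub LRM step: returns (final_text, uncertain_query_or_None)."""
    if "capital" in state and "Paris" not in state:
        return "", "capital of France"        # uncertain knowledge point
    return "Answer: Paris.", None

def search(query):
    """Stub retriever returning a long, noisy document."""
    return "France's capital is Paris. (plus many irrelevant passages ...)"

def reason_in_documents(query, document):
    """Stub Reason-in-Documents module: distill the document for the query."""
    return "Paris"

def solve(question, max_steps=5):
    state = question
    for _ in range(max_steps):
        text, query = reason_step(state)
        if query is None:
            return state + " " + text
        refined = reason_in_documents(query, search(query))
        state += " [retrieved: " + refined + "]"   # inject distilled knowledge
    return state

print(solve("What is the capital of France?"))
```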
|
2501.05368
|
Developing a Foundation of Vector Symbolic Architectures Using Category
Theory
|
cs.AI cs.LG
|
At the risk of overstating the case, connectionist approaches to machine
learning, i.e. neural networks, are enjoying a small vogue right now. However,
these methods require large volumes of data and produce models that are
uninterpretable to humans. An alternative framework that is compatible with
neural networks and gradient-based learning, but explicitly models
compositionality, is Vector Symbolic Architectures (VSAs). VSAs are a family of
algebras on high-dimensional vector representations. They arose in cognitive
science from the need to unify neural processing and the kind of symbolic
reasoning that humans perform. While machine learning methods have benefited
from category theoretical analyses, VSAs have not yet received similar
treatment. In this paper, we present a first attempt at applying category
theory to VSAs. Specifically, we conduct a brief literature survey
demonstrating the lack of overlap between these two topics, provide a list of
desiderata for VSAs, and propose that VSAs may be understood as a (division)
rig in a category enriched over a monoid in Met (the category of Lawvere metric
spaces). This final contribution suggests that VSAs may be generalised beyond
current implementations. It is our hope that grounding VSAs in category theory
will lead to more rigorous connections with other research, both within and
beyond, learning and cognition.
|
2501.05369
|
1-2-1: Renaissance of Single-Network Paradigm for Virtual Try-On
|
cs.CV
|
Virtual Try-On (VTON) has become a crucial tool in e-commerce, enabling the
realistic simulation of garments on individuals while preserving their original
appearance and pose. Early VTON methods relied on single generative networks,
but challenges remain in preserving fine-grained garment details due to
limitations in feature extraction and fusion. To address these issues, recent
approaches have adopted a dual-network paradigm, incorporating a complementary
"ReferenceNet" to enhance garment feature extraction and fusion. While
effective, this dual-network approach introduces significant computational
overhead, limiting its scalability for high-resolution and long-duration
image/video VTON applications. In this paper, we challenge the dual-network
paradigm by proposing a novel single-network VTON method that overcomes the
limitations of existing techniques. Our method, namely MNVTON, introduces a
Modality-specific Normalization strategy that separately processes text, image
and video inputs, enabling them to share the same attention layers in a VTON
network. Extensive experimental results demonstrate the effectiveness of our
approach, showing that it consistently achieves higher-quality, more detailed
results for both image and video VTON tasks. Our results suggest that the
single-network paradigm can rival the performance of dual-network approaches,
offering a more efficient alternative for high-quality, scalable VTON
applications.
|
2501.05370
|
Accelerated Diffusion Models via Speculative Sampling
|
cs.LG stat.ML
|
Speculative sampling is a popular technique for accelerating inference in
Large Language Models by generating candidate tokens using a fast draft model
and accepting or rejecting them based on the target model's distribution. While
speculative sampling was previously limited to discrete sequences, we extend it
to diffusion models, which generate samples via continuous, vector-valued
Markov chains. In this context, the target model is a high-quality but
computationally expensive diffusion model. We propose various drafting
strategies, including a simple and effective approach that does not require
training a draft model and is applicable out of the box to any diffusion model.
Our experiments demonstrate significant generation speedup on various diffusion
models, halving the number of function evaluations, while generating exact
samples from the target model.
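One way to picture a speculative accept/reject step in continuous space is the following hedged sketch, which uses 1-D Gaussian transition densities as stand-ins for the draft and target models (this is our own toy construction, not the paper's exact scheme):

```python
# Hedged sketch: accept the fast draft's continuous proposal with probability
# min(1, p_target / p_draft); otherwise fall back to a target-model sample.

import math, random

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def speculative_step(x, draft, target, rng=random.random):
    """draft/target: (mean_fn, std) pairs defining Gaussian transitions from x."""
    (d_mean, d_std), (t_mean, t_std) = draft, target
    proposal = random.gauss(d_mean(x), d_std)          # fast draft sample
    accept_prob = min(1.0, gaussian_pdf(proposal, t_mean(x), t_std)
                           / gaussian_pdf(proposal, d_mean(x), d_std))
    if rng() < accept_prob:
        return proposal                                # accepted draft sample
    # Rejected: a full implementation would sample from the residual
    # distribution; here we simply fall back to an exact target sample.
    return random.gauss(t_mean(x), t_std)

random.seed(0)
print(speculative_step(0.0, draft=(lambda x: 0.9 * x, 1.1),
                       target=(lambda x: x, 1.0)))
```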
|
2501.05378
|
A Portable Solution for Simultaneous Human Movement and Mobile EEG
Acquisition: Readiness Potentials for Basketball Free-throw Shooting
|
cs.NE q-bio.NC
|
Advances in wireless electroencephalography (EEG) technology promise to
record brain-electrical activity in everyday situations. To better understand
the relationship between brain activity and natural behavior, it is necessary
to monitor human movement patterns. Here, we present a pocketable setup
consisting of two smartphones to simultaneously capture human posture and EEG
signals. We asked 26 basketball players to shoot 120 free throws each. First,
we investigated whether our setup allows us to capture the readiness potential
(RP) that precedes voluntary actions. Second, we investigated whether the RP
differs between successful and unsuccessful free-throw attempts. The results
confirmed the presence of the RP, but the amplitude of the RP was not related
to shooting success. However, offline analysis of real-time human pose signals
derived from a smartphone camera revealed pose differences between successful
and unsuccessful shots for some individuals. We conclude that a highly
portable, low-cost and lightweight acquisition setup, consisting of two
smartphones and a head-mounted wireless EEG amplifier, is sufficient to monitor
complex human movement patterns and associated brain dynamics outside the
laboratory.
|
2501.05379
|
Arc2Avatar: Generating Expressive 3D Avatars from a Single Image via ID
Guidance
|
cs.CV
|
Inspired by the effectiveness of 3D Gaussian Splatting (3DGS) in
reconstructing detailed 3D scenes within multi-view setups and the emergence of
large 2D human foundation models, we introduce Arc2Avatar, the first SDS-based
method utilizing a human face foundation model as guidance with just a single
image as input. To achieve that, we extend such a model for diverse-view human
head generation by fine-tuning on synthetic data and modifying its
conditioning. Our avatars maintain a dense correspondence with a human face
mesh template, allowing blendshape-based expression generation. This is
achieved through a modified 3DGS approach, connectivity regularizers, and a
strategic initialization tailored for our task. Additionally, we propose an
optional efficient SDS-based correction step to refine the blendshape
expressions, enhancing realism and diversity. Experiments demonstrate that
Arc2Avatar achieves state-of-the-art realism and identity preservation,
effectively addressing color issues by allowing the use of very low guidance,
enabled by our strong identity prior and initialization strategy, without
compromising detail. Please visit https://arc2avatar.github.io for more
resources.
|
2501.05382
|
Large Physics Models: Towards a collaborative approach with Large
Language Models and Foundation Models
|
physics.data-an cs.AI hep-ph physics.comp-ph physics.hist-ph
|
This paper explores ideas and provides a potential roadmap for the
development and evaluation of physics-specific large-scale AI models, which we
call Large Physics Models (LPMs). These models, based on foundation models such
as Large Language Models (LLMs) - trained on broad data - are tailored to
address the demands of physics research. LPMs can function independently or as
part of an integrated framework. This framework can incorporate specialized
tools, including symbolic reasoning modules for mathematical manipulations,
frameworks to analyse specific experimental and simulated data, and mechanisms
for synthesizing theories and scientific literature. We begin by examining
whether the physics community should actively develop and refine dedicated
models, rather than relying solely on commercial LLMs. We then outline how LPMs
can be realized through interdisciplinary collaboration among experts in
physics, computer science, and philosophy of science. To integrate these models
effectively, we identify three key pillars: Development, Evaluation, and
Philosophical Reflection. Development focuses on constructing models capable of
processing physics texts, mathematical formulations, and diverse physical data.
Evaluation assesses accuracy and reliability by testing and benchmarking.
Finally, Philosophical Reflection encompasses the analysis of broader
implications of LLMs in physics, including their potential to generate new
scientific understanding and what novel collaboration dynamics might arise in
research. Inspired by the organizational structure of experimental
collaborations in particle physics, we propose a similarly interdisciplinary
and collaborative approach to building and refining Large Physics Models. This
roadmap provides specific objectives, defines pathways to achieve them, and
identifies challenges that must be addressed to realise physics-specific large
scale AI models.
|
2501.05387
|
Integrating Explainable AI for Effective Malware Detection in Encrypted
Network Traffic
|
cs.CR cs.LG
|
Encrypted network communication ensures confidentiality, integrity, and
privacy between endpoints. However, attackers are increasingly exploiting
encryption to conceal malicious behavior. Detecting unknown encrypted malicious
traffic without decrypting the payloads remains a significant challenge. In
this study, we investigate the integration of explainable artificial
intelligence (XAI) techniques to detect malicious network traffic. We employ
ensemble learning models to identify malicious activity using multi-view
features extracted from various aspects of encrypted communication. To
effectively represent malicious communication, we compiled a robust dataset of
1,127 unique connections spanning 54 malware families, more than any other
available open-source dataset. Our models were benchmarked against
the CTU-13 dataset, achieving performance of over 99% accuracy, precision, and
F1-score. Additionally, the eXtreme Gradient Boosting (XGB) model demonstrated
99.32% accuracy, 99.53% precision, and 99.43% F1-score on our custom dataset.
By leveraging Shapley Additive Explanations (SHAP), we identified that the
maximum packet size, mean inter-arrival time of packets, and transport layer
security version used are the most critical features for the global model
explanation. Furthermore, key features were identified as important for local
explanations across both datasets for individual traffic samples. These
insights provide a deeper understanding of the model decision-making process,
enhancing the transparency and reliability of detecting malicious encrypted
traffic.
|
2501.05391
|
The global consensus on the risk management of autonomous driving
|
cs.CY cs.AI cs.ET
|
Every maneuver of a vehicle redistributes risks between road users. While
human drivers do this intuitively, autonomous vehicles allow and require
deliberative algorithmic risk management. But how should traffic risks be
distributed among road users? In a global experimental study in eight countries
with different cultural backgrounds and almost 11,000 participants, we compared
risk distribution preferences. It turns out that risk preferences in road
traffic are strikingly similar between the cultural zones. The vast majority of
participants in all countries deviates from a guiding principle of minimizing
accident probabilities in favor of weighing up the probability and severity of
accidents. At the national level, the consideration of accident probability and
severity hardly differs between countries. The social dilemma of autonomous
vehicles detected in deterministic crash scenarios disappears in risk
assessments of everyday traffic situations in all countries. In no country do
cyclists receive a risk bonus that goes beyond their higher vulnerability. In
sum, our results suggest that a global consensus on the risk ethics of
autonomous driving is easier to establish than on the ethics of crashing.
|
2501.05396
|
FairCode: Evaluating Social Bias of LLMs in Code Generation
|
cs.CL cs.SE
|
Large language models (LLMs) have demonstrated significant capability in code
generation, drawing increasing attention to the evaluation of the quality and
safety of their outputs. However, research on bias in code generation remains
limited. Existing studies typically assess bias by applying malicious prompts
or by reusing tasks and datasets designed for discriminative models. Given that
LLMs are
often aligned with human values and that prior datasets are not fully optimized
for code-related tasks, there is a pressing need for benchmarks specifically
designed for evaluating code models. In this study, we introduce FairCode, a
novel benchmark for evaluating bias in code generation. FairCode comprises two
tasks: function implementation and test case generation, each evaluating social
bias through diverse scenarios. Additionally, we propose a new metric,
FairScore, to assess model performance on this benchmark. We conduct
experiments on widely used LLMs and provide a comprehensive analysis of the
results. The findings reveal that all tested LLMs exhibit bias. The code is
available at https://github.com/YongkDu/FairCode.
|
2501.05398
|
Mechanistic understanding and validation of large AI models with
SemanticLens
|
cs.LG cs.AI
|
Unlike human-engineered systems such as aeroplanes, where each component's
role and dependencies are well understood, the inner workings of AI models
remain largely opaque, hindering verifiability and undermining trust. This
paper introduces SemanticLens, a universal explanation method for neural
networks that maps hidden knowledge encoded by components (e.g., individual
neurons) into the semantically structured, multimodal space of a foundation
model such as CLIP. In this space, unique operations become possible, including
(i) textual search to identify neurons encoding specific concepts, (ii)
systematic analysis and comparison of model representations, (iii) automated
labelling of neurons and explanation of their functional roles, and (iv) audits
to validate decision-making against requirements. Fully scalable and operating
without human input, SemanticLens is shown to be effective for debugging and
validation, summarizing model knowledge, aligning reasoning with expectations
(e.g., adherence to the ABCDE-rule in melanoma classification), and detecting
components tied to spurious correlations and their associated training data. By
enabling component-level understanding and validation, the proposed approach
helps bridge the "trust gap" between AI models and traditional engineered
systems. We provide code for SemanticLens on
https://github.com/jim-berend/semanticlens and a demo on
https://semanticlens.hhi-research-insights.eu.
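The "textual search for neurons" operation can be pictured with this hedged toy sketch: rank neurons by cosine similarity between their embedding in a shared CLIP-like space and a text query embedding (all vectors here are made up):

```python
# Toy sketch: neurons whose embeddings point in the same direction as the
# text query embedding are returned first.

import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def search_neurons(query_emb, neuron_embs, top_k=2):
    ranked = sorted(neuron_embs,
                    key=lambda n: cosine(query_emb, neuron_embs[n]),
                    reverse=True)
    return ranked[:top_k]

neurons = {"n1": [1.0, 0.0], "n2": [0.0, 1.0], "n3": [0.8, 0.6]}
print(search_neurons([1.0, 0.1], neurons))   # -> ['n1', 'n3']
```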
|
2501.05399
|
Performance of YOLOv7 in Kitchen Safety While Handling Knife
|
cs.CV
|
Safe knife practices in the kitchen significantly reduce the risk of cuts,
injuries, and serious accidents during food preparation. Using YOLOv7, an
advanced object detection model, this study focuses on identifying safety risks
during knife handling, particularly improper finger placement and blade contact
with the hand. The model's performance was evaluated using metrics such as
precision, recall, mAP50, and mAP50-95. The results demonstrate that YOLOv7
achieved its best performance at epoch 31, with a mAP50-95 score of 0.7879,
precision of 0.9063, and recall of 0.7503. These findings highlight YOLOv7's
potential to accurately detect knife-related hazards, supporting the development
of safer kitchen practices.
|
2501.05401
|
BRATI: Bidirectional Recurrent Attention for Time-Series Imputation
|
cs.LG cs.AI
|
Missing data in time-series analysis poses significant challenges, affecting
the reliability of downstream applications. Imputation, the process of
estimating missing values, has emerged as a key solution. This paper introduces
BRATI, a novel deep-learning model designed to address multivariate time-series
imputation by combining Bidirectional Recurrent Networks and Attention
mechanisms. BRATI processes temporal dependencies and feature correlations
across long and short time horizons, utilizing two imputation blocks that
operate in opposite temporal directions. Each block integrates recurrent layers
and attention mechanisms to effectively resolve long-term dependencies.
We evaluate BRATI on three real-world datasets under diverse missing-data
scenarios: randomly missing values, fixed-length missing sequences, and
variable-length missing sequences. Our findings demonstrate that BRATI
consistently outperforms state-of-the-art models, delivering superior accuracy
and robustness in imputing multivariate time-series data.
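The idea of combining two opposite-direction estimates can be pictured with this hedged toy sketch (the real model uses recurrent and attention layers rather than last-value carrying):

```python
# Toy sketch echoing BRATI's two temporal blocks: impute forward and
# backward in time, then average the two directional estimates.

def impute_bidirectional(series):
    """series: list of floats with None marking missing values."""
    def directional_fill(xs):          # carry the last observed value forward
        out, last = [], None
        for x in xs:
            last = x if x is not None else last
            out.append(last)
        return out
    fwd = directional_fill(series)              # forward-in-time estimate
    bwd = directional_fill(series[::-1])[::-1]  # backward-in-time estimate
    return [f if b is None else b if f is None else (f + b) / 2
            for f, b in zip(fwd, bwd)]

print(impute_bidirectional([1.0, None, 3.0]))   # -> [1.0, 2.0, 3.0]
```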
|
2501.05403
|
TimeDP: Learning to Generate Multi-Domain Time Series with Domain
Prompts
|
cs.LG cs.AI
|
Time series generation models are crucial for applications like data
augmentation and privacy preservation. Most existing time series generation
models are typically designed to generate data from one specified domain. While
leveraging data from other domains for better generalization has been proven to
work in other application areas, this approach remains challenging for time
series modeling due to the large divergence in patterns among different
real-world time series categories. In this paper, we propose a multi-domain time
series
diffusion model with domain prompts, named TimeDP. In TimeDP, we utilize a time
series semantic prototype module which defines time series prototypes to
represent a time series basis, with each prototype vector serving as a "word"
representing some elementary time series feature. A prototype assignment module
is applied to extract domain-specific prototype weights, which are used to learn
domain prompts as the generation condition. During sampling, we extract
"domain prompt" with few-shot samples from the target domain and use the domain
prompts as condition to generate time series samples. Experiments demonstrate
that our method outperforms baselines to provide the state-of-the-art in-domain
generation quality and strong unseen domain generation capability.
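A prototype-assignment step can be sketched as follows (a hedged illustration; the shapes and the dot-product similarity are our assumptions): domain-prompt weights are a softmax over similarities between a series embedding and the prototype "words".

```python
# Hedged sketch: softmax over series-prototype similarities yields the
# prototype weights that act as the domain prompt.

import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def domain_prompt(series_embedding, prototypes):
    sims = [sum(a * b for a, b in zip(series_embedding, p)) for p in prototypes]
    return softmax(sims)               # prototype weights act as the prompt

prototypes = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]   # toy prototype vectors
weights = domain_prompt([0.9, 0.1], prototypes)
print([round(w, 3) for w in weights])   # -> [0.441, 0.198, 0.361]
```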
|
2501.05407
|
On-line Policy Improvement using Monte-Carlo Search
|
cs.LG cs.AI
|
We present a Monte-Carlo simulation algorithm for real-time policy
improvement of an adaptive controller. In the Monte-Carlo simulation, the
long-term expected reward of each possible action is statistically measured,
using the initial policy to make decisions in each step of the simulation. The
action maximizing the measured expected reward is then taken, resulting in an
improved policy. Our algorithm is easily parallelizable and has been
implemented on the IBM SP1 and SP2 parallel-RISC supercomputers.
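The rollout procedure described above can be sketched on a toy MDP (the environment and policies here are stand-ins; the paper's domain is backgammon):

```python
# Sketch of Monte-Carlo policy improvement: estimate each action's long-term
# reward by simulating the base policy to finish the episode, then pick the
# action with the highest estimated return.

import random

class ToyEnv:
    def step(self, state, action):
        return state, float(action)   # toy dynamics: reward equals the action

def rollout_value(env, state, action, base_policy, n_sims=200, horizon=20):
    total = 0.0
    for _ in range(n_sims):
        s, r = env.step(state, action)           # take the candidate action
        ret = r
        for _ in range(horizon):
            s, r = env.step(s, base_policy(s))   # base policy plays out the rest
            ret += r
        total += ret
    return total / n_sims

def improved_policy(env, state, actions, base_policy):
    return max(actions, key=lambda a: rollout_value(env, state, a, base_policy))

random.seed(0)
best = improved_policy(ToyEnv(), 0, [0, 1],
                       base_policy=lambda s: random.choice([0, 1]))
print(best)   # action 1 gives +1 immediate reward -> 1
```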
We have obtained promising initial results in applying this algorithm to the
domain of backgammon. Results are reported for a wide variety of initial
policies, ranging from a random policy to TD-Gammon, an extremely strong
multi-layer neural network. In each case, the Monte-Carlo algorithm gives a
substantial reduction, by as much as a factor of 5 or more, in the error rate
of the base players. The algorithm is also potentially useful in many other
adaptive control applications in which it is possible to simulate the
environment.
|
2501.05408
|
TimeRL: Efficient Deep Reinforcement Learning with Polyhedral Dependence
Graphs
|
cs.DC cs.AI cs.LG
|
Modern deep learning (DL) workloads increasingly use complex deep
reinforcement learning (DRL) algorithms that generate training data within the
learning loop. This results in programs with several nested loops and dynamic
data dependencies between tensors. While DL systems with eager execution
support such dynamism, they lack the optimizations and smart scheduling of
graph-based execution. Graph-based execution, however, cannot express dynamic
tensor shapes, instead requiring the use of multiple static subgraphs. Either
execution model for DRL thus leads to redundant computation, reduced
parallelism, and less efficient memory management.
We describe TimeRL, a system for executing dynamic DRL programs that combines
the dynamism of eager execution with the whole-program optimizations and
scheduling of graph-based execution. TimeRL achieves this by introducing the
declarative programming model of recurrent tensors, which allows users to
define dynamic dependencies as intuitive recurrence equations. TimeRL
translates recurrent tensors into a polyhedral dependence graph (PDG) with
dynamic dependencies as symbolic expressions. Through simple PDG
transformations, TimeRL applies whole-program optimizations, such as automatic
vectorization, incrementalization, and operator fusion. The PDG also allows for
the computation of an efficient program-wide execution schedule, which decides
on buffer deallocations, buffer donations, and GPU/CPU memory swapping. We show
that TimeRL executes current DRL algorithms up to 47$\times$ faster than
existing DRL systems, while using 16$\times$ less GPU peak memory.
|
2501.05409
|
Atlas: A Novel Pathology Foundation Model by Mayo Clinic, Charit\'e, and
Aignostics
|
cs.CV cs.AI cs.LG
|
Recent advances in digital pathology have demonstrated the effectiveness of
foundation models across diverse applications. In this report, we present
Atlas, a novel vision foundation model based on the RudolfV approach. Our model
was trained on a dataset comprising 1.2 million histopathology whole slide
images, collected from two medical institutions: Mayo Clinic and Charit\'e -
Universit\"atsmedizin Berlin. Comprehensive evaluations show that Atlas achieves
state-of-the-art performance across twenty-one public benchmark datasets, even
though it is neither the largest model by parameter count nor by training
dataset size.
|
2501.05411
|
Adaptive Path-Planning for Autonomous Robots: A UCH-Enhanced Q-Learning
Approach
|
cs.RO
|
Q-learning methods are widely used in robot path planning but often face
challenges of inefficient search and slow convergence. We propose an Improved
Q-learning (IQL) framework that enhances standard Q-learning in two significant
ways. First, we introduce the Path Adaptive Collaborative Optimization (PACO)
algorithm to optimize Q-table initialization, providing better initial
estimates and accelerating learning. Second, we incorporate a
Utility-Controlled Heuristic (UCH) mechanism with dynamically tuned parameters
to optimize the reward function, enhancing the algorithm's accuracy and
effectiveness in path-planning tasks. Extensive experiments in three different
raster grid environments validate the superior performance of our IQL
framework. The results demonstrate that our IQL algorithm outperforms existing
methods, including FIQL, PP-QL-based CPP, DFQL, and QMABC algorithms, in terms
of path-planning capabilities.
|
2501.05413
|
Seeing Sound: Assembling Sounds from Visuals for Audio-to-Image
Generation
|
cs.SD cs.CV cs.GR eess.AS
|
Training audio-to-image generative models requires an abundance of diverse
audio-visual pairs that are semantically aligned. Such data is almost always
curated from in-the-wild videos, given the cross-modal semantic correspondence
that is inherent to them. In this work, we hypothesize that insisting on the
absolute need for ground truth audio-visual correspondence, is not only
unnecessary, but also leads to severe restrictions in scale, quality, and
diversity of the data, ultimately impairing its use in the modern generative
models. That is, we propose a scalable image sonification framework where
instances from a variety of high-quality yet disjoint uni-modal origins can be
artificially paired through a retrieval process that is empowered by reasoning
capabilities of modern vision-language models. To demonstrate the efficacy of
this approach, we use our sonified images to train an audio-to-image generative
model that performs competitively against the state of the art. Finally, through a
series of ablation studies, we exhibit several intriguing auditory capabilities
like semantic mixing and interpolation, loudness calibration and acoustic space
modeling through reverberation that our model has implicitly developed to guide
the image generation process.
|
2501.05414
|
LongProc: Benchmarking Long-Context Language Models on Long Procedural
Generation
|
cs.CL
|
Existing benchmarks for evaluating long-context language models (LCLMs)
primarily focus on long-context recall, requiring models to produce short
responses based on a few critical snippets while processing thousands of
irrelevant tokens. We introduce LongProc (Long Procedural Generation), a new
benchmark that requires both the integration of highly dispersed information
and long-form generation. LongProc consists of six diverse procedural
generation tasks, such as extracting structured information from HTML pages
into a TSV format and executing complex search procedures to create travel
plans. These tasks challenge LCLMs by testing their ability to follow detailed
procedural instructions, synthesize and reason over dispersed information, and
generate structured, long-form outputs (up to 8K tokens). Furthermore, as these
tasks adhere to deterministic procedures and yield structured outputs, they
enable reliable rule-based evaluation. We evaluate 17 LCLMs on LongProc across
three difficulty levels, with maximum numbers of output tokens set at 500, 2K,
and 8K. Notably, while all tested models claim a context window size above 32K
tokens, open-weight models typically falter on 2K-token tasks, and
closed-source models like GPT-4o show significant degradation on 8K-token
tasks. Further analysis reveals that LCLMs struggle to maintain long-range
coherence in long-form generations. These findings highlight critical
limitations in current LCLMs and suggest substantial room for improvement. Data
and code available at: https://princeton-pli.github.io/LongProc
|
2501.05415
|
Uncertainty-aware Knowledge Tracing
|
cs.LG
|
Knowledge Tracing (KT) is crucial in educational assessment, focusing on
depicting students' learning states and assessing their mastery of
subjects. With the rise of modern online learning platforms, particularly
massive open online courses (MOOCs), an abundance of interaction data has
greatly advanced the development of KT technology. Previous research
commonly adopts deterministic representations to capture students' knowledge
states, which neglects the uncertainty during student interactions and thus
fails to model the true knowledge state in the learning process. In light of this,
we propose an Uncertainty-Aware Knowledge Tracing model (UKT) which employs
stochastic distribution embeddings to represent the uncertainty in student
interactions, with a Wasserstein self-attention mechanism designed to capture
the transition of state distribution in student learning behaviors.
Additionally, we introduce the aleatory uncertainty-aware contrastive learning
loss, which strengthens the model's robustness towards different types of
uncertainties. Extensive experiments on six real-world datasets demonstrate
that UKT not only significantly surpasses existing deep learning-based models
in KT prediction, but also shows unique advantages in handling the uncertainty
of student interactions.
|
2501.05418
|
Virtual-Work Based Shape-Force Sensing for Continuum Instruments with
Tension-Feedback Actuation
|
cs.RO
|
Continuum instruments are integral to robot-assisted minimally invasive
surgery (MIS), with tendon-driven mechanisms being the most common. Real-time
tension feedback is crucial for precise articulation but remains a challenge in
compact actuation unit designs. Additionally, accurate shape and external force
sensing of continuum instruments are essential for advanced control and
manipulation. This paper presents a compact and modular actuation unit that
integrates a torque cell directly into the pulley module to provide real-time
tension feedback. Building on this unit, we propose a novel shape-force sensing
framework that incorporates polynomial curvature kinematics to accurately model
non-constant curvature. The framework combines pose sensor measurements at the
instrument tip and actuation tension feedback at the developed actuation unit.
Experimental results demonstrate the improved performance of the proposed
shape-force sensing framework in terms of shape reconstruction accuracy and
force estimation reliability compared to conventional constant-curvature
methods.
|
2501.05420
|
RoboPanoptes: The All-seeing Robot with Whole-body Dexterity
|
cs.RO
|
We present RoboPanoptes, a capable yet practical robot system that achieves
whole-body dexterity through whole-body vision. Its whole-body dexterity allows
the robot to utilize its entire body surface for manipulation, such as
leveraging multiple contact points or navigating constrained spaces. Meanwhile,
whole-body vision uses a camera system distributed over the robot's surface to
provide comprehensive, multi-perspective visual feedback of its own and the
environment's state. At its core, RoboPanoptes uses a whole-body visuomotor
policy that learns complex manipulation skills directly from human
demonstrations, efficiently aggregating information from the distributed
cameras while maintaining resilience to sensor failures. Together, these design
aspects unlock new capabilities and tasks, allowing RoboPanoptes to unbox in
narrow spaces, sweep multiple or oversized objects, and succeed in multi-step
stowing in cluttered environments, outperforming baselines in adaptability and
efficiency. Results are best viewed on https://robopanoptes.github.io.
|
2501.05423
|
Using LLMs to Infer Non-Binary COVID-19 Sentiments of Chinese
Micro-bloggers
|
cs.SI cs.LG
|
Studying public sentiment during crises is crucial for understanding how
opinions and sentiments shift, resulting in polarized societies. We study
Weibo, the most popular microblogging site in China, using posts made during
the outbreak of the COVID-19 crisis. The study period includes the pre-COVID-19
stage, the outbreak stage, and the early stage of epidemic prevention. We use
Llama 3 8B, a Large Language Model, to analyze users' sentiments on the
platform by classifying them into positive, negative, sarcastic, and neutral
categories. Analyzing sentiment shifts on Weibo provides insights into how
social events and government actions influence public opinion. This study
contributes to understanding the dynamics of social sentiments during health
crises, filling a gap in sentiment analysis for Chinese platforms. By
examining these dynamics, we aim to offer valuable perspectives on digital
communication's role in shaping society's responses during unprecedented global
challenges.
|
2501.05425
|
Entangled Mean Estimation in High-Dimensions
|
cs.DS cs.LG math.ST stat.ML stat.TH
|
We study the task of high-dimensional entangled mean estimation in the
subset-of-signals model. Specifically, given $N$ independent random points
$x_1,\ldots,x_N$ in $\mathbb{R}^D$ and a parameter $\alpha \in (0, 1)$ such
that each $x_i$ is drawn from a Gaussian with mean $\mu$ and unknown
covariance, and an unknown $\alpha$-fraction of the points have
identity-bounded covariances, the goal is to estimate the common mean $\mu$.
The one-dimensional version of this task has received significant attention in
theoretical computer science and statistics over the past decades. Recent work
[LY20; CV24] has given near-optimal upper and lower bounds for the
one-dimensional setting. On the other hand, our understanding of even the
information-theoretic aspects of the multivariate setting has remained limited.
In this work, we design a computationally efficient algorithm achieving an
information-theoretically near-optimal error. Specifically, we show that the
optimal error (up to polylogarithmic factors) is $f(\alpha,N) + \sqrt{D/(\alpha
N)}$, where the term $f(\alpha,N)$ is the error of the one-dimensional problem
and the second term is the sub-Gaussian error rate. Our algorithmic approach
employs an iterative refinement strategy, whereby we progressively learn more
accurate approximations $\hat \mu$ to $\mu$. This is achieved via a novel
rejection sampling procedure that removes points significantly deviating from
$\hat \mu$, as an attempt to filter out unusually noisy samples. A complication
that arises is that rejection sampling introduces bias in the distribution of
the remaining points. To address this issue, we perform a careful analysis of
the bias, develop an iterative dimension-reduction strategy, and employ a novel
subroutine inspired by list-decodable learning that leverages the
one-dimensional result.
|
2501.05426
|
From Images to Insights: Transforming Brain Cancer Diagnosis with
Explainable AI
|
eess.IV cs.CV q-bio.QM
|
Brain cancer represents a major challenge in medical diagnostics, requiring
precise and timely detection for effective treatment. Diagnosis initially
relies on the proficiency of radiologists, which can cause difficulties and
risks when that expertise is scarce. Even with imaging resources, brain
cancer diagnosis often remains difficult, time-consuming, and vulnerable to
intraclass variability. This study presents the Bangladesh Brain Cancer MRI
Dataset, containing 6,056 MRI images organized into three categories: Brain
Tumor, Brain Glioma, and Brain Menin. The dataset was collected from several
hospitals in Bangladesh, providing a diverse and realistic sample for research.
We implemented advanced deep learning models, and DenseNet169 achieved
exceptional results, with accuracy, precision, recall, and F1-Score all
reaching 0.9983. In addition, Explainable AI (XAI) methods including GradCAM,
GradCAM++, ScoreCAM, and LayerCAM were employed to provide visual
representations of the decision-making processes of the models. In the context
of brain cancer, these techniques highlight DenseNet169's potential to enhance
diagnostic accuracy while simultaneously offering transparency, facilitating
early diagnosis and better patient outcomes.
|
2501.05427
|
Zero-1-to-G: Taming Pretrained 2D Diffusion Model for Direct 3D
Generation
|
cs.CV
|
Recent advances in 2D image generation have achieved remarkable
quality, largely driven by the capacity of diffusion models and the availability
of large-scale datasets. However, direct 3D generation is still constrained by
the scarcity and lower fidelity of 3D datasets. In this paper, we introduce
Zero-1-to-G, a novel approach that addresses this problem by enabling direct
single-view generation on Gaussian splats using pretrained 2D diffusion models.
Our key insight is that Gaussian splats, a 3D representation, can be decomposed
into multi-view images encoding different attributes. This reframes the
challenging task of direct 3D generation within a 2D diffusion framework,
allowing us to leverage the rich priors of pretrained 2D diffusion models. To
incorporate 3D awareness, we introduce cross-view and cross-attribute attention
layers, which capture complex correlations and enforce 3D consistency across
generated splats. This makes Zero-1-to-G the first direct image-to-3D
generative model to effectively utilize pretrained 2D diffusion priors,
enabling efficient training and improved generalization to unseen objects.
Extensive experiments on both synthetic and in-the-wild datasets demonstrate
superior performance in 3D object generation, offering a new approach to
high-quality 3D generation.
|
2501.05435
|
Neuro-Symbolic AI in 2024: A Systematic Review
|
cs.AI
|
Background: The field of Artificial Intelligence has undergone cyclical
periods of growth and decline, known as AI summers and winters. Currently, we
are in the third AI summer, characterized by significant advancements and
commercialization, particularly in the integration of Symbolic AI and
Sub-Symbolic AI, leading to the emergence of Neuro-Symbolic AI.
Methods: The review followed the PRISMA methodology, utilizing databases such
as IEEE Explore, Google Scholar, arXiv, ACM, and SpringerLink. The inclusion
criteria targeted peer-reviewed papers published between 2020 and 2024. Papers
were screened for relevance to Neuro-Symbolic AI, with further inclusion based
on the availability of associated codebases to ensure reproducibility.
Results: From an initial pool of 1,428 papers, 167 met the inclusion criteria
and were analyzed in detail. The majority of research efforts are concentrated
in the areas of learning and inference (63%), logic and reasoning (35%), and
knowledge representation (44%). Explainability and trustworthiness are less
represented (28%), with Meta-Cognition being the least explored area (5%). The
review identifies significant interdisciplinary opportunities, particularly in
integrating explainability and trustworthiness with other research areas.
Conclusion: Neuro-Symbolic AI research has seen rapid growth since 2020, with
concentrated efforts in learning and inference. Significant gaps remain in
explainability, trustworthiness, and Meta-Cognition. Addressing these gaps
through interdisciplinary research will be crucial for advancing the field
towards more intelligent, reliable, and context-aware AI systems.
|
2501.05436
|
$DPF^*$: improved Depth Potential Function for scale-invariant sulcal
depth estimation
|
cs.CV
|
The shape of the human brain is complex and highly variable, with interactions
between brain size, cortical folding, and age well-documented in the
literature. However, few studies have explored how global brain size influences
geometric features of the cortical surface derived from anatomical MRI. In this
work, we focus on sulcal depth, an imaging phenotype that has gained
significant attention in both basic research and clinical applications. We make
key contributions to the field by: 1) providing the first quantitative analysis
of how brain size affects sulcal depth measurements; 2) introducing a novel,
scale-invariant method for sulcal depth estimation based on an original
formalization of the problem; 3) presenting a validation framework and sharing
our code and benchmark data with the community; and 4) demonstrating the
biological relevance of our new sulcal depth measure using a large sample of
1,987 subjects spanning the developmental period from 26 weeks post-conception
to adulthood.
|
2501.05439
|
From Simple to Complex Skills: The Case of In-Hand Object Reorientation
|
cs.RO cs.AI cs.LG
|
Learning policies in simulation and transferring them to the real world has
become a promising approach in dexterous manipulation. However, bridging the
sim-to-real gap for each new task requires substantial human effort, such as
careful reward engineering, hyperparameter tuning, and system identification.
In this work, we present a system that leverages low-level skills to address
these challenges for more complex tasks. Specifically, we introduce a
hierarchical policy for in-hand object reorientation based on previously
acquired rotation skills. This hierarchical policy learns to select which
low-level skill to execute based on feedback from both the environment and the
low-level skill policies themselves. Compared to learning from scratch, the
hierarchical policy is more robust to out-of-distribution changes and transfers
easily from simulation to real-world environments. Additionally, we propose a
generalizable object pose estimator that uses proprioceptive information,
low-level skill predictions, and control errors as inputs to estimate the
object pose over time. We demonstrate that our system can reorient objects,
including symmetrical and textureless ones, to a desired pose.
|
2501.05441
|
The GAN is dead; long live the GAN! A Modern GAN Baseline
|
cs.LG cs.CV
|
There is a widespread claim that GANs are difficult to train, and GAN
architectures in the literature are littered with empirical tricks. We provide
evidence against this claim and build a modern GAN baseline in a more
principled manner. First, we derive a well-behaved regularized relativistic GAN
loss that addresses issues of mode dropping and non-convergence that were
previously tackled via a bag of ad-hoc tricks. We analyze our loss
mathematically and prove that it admits local convergence guarantees, unlike
most existing relativistic losses. Second, our new loss allows us to discard
all ad-hoc tricks and replace outdated backbones used in common GANs with
modern architectures. Using StyleGAN2 as an example, we present a roadmap of
simplification and modernization that results in a new minimalist baseline --
R3GAN. Despite being simple, our approach surpasses StyleGAN2 on FFHQ,
ImageNet, CIFAR, and Stacked MNIST datasets, and compares favorably against
state-of-the-art GANs and diffusion models.
|
2501.05442
|
Progressive Growing of Video Tokenizers for Highly Compressed Latent
Spaces
|
cs.CV cs.AI eess.IV
|
Video tokenizers are essential for latent video diffusion models, converting
raw video data into spatiotemporally compressed latent spaces for efficient
training. However, extending state-of-the-art video tokenizers to achieve a
temporal compression ratio beyond 4x without increasing channel capacity poses
significant challenges. In this work, we propose an alternative approach to
enhance temporal compression. We find that the reconstruction quality of
temporally subsampled videos from a low-compression encoder surpasses that of
high-compression encoders applied to original videos. This indicates that
high-compression models can leverage representations from lower-compression
models. Building on this insight, we develop a bootstrapped
high-temporal-compression model that progressively trains high-compression
blocks atop well-trained lower-compression models. Our method includes a
cross-level feature-mixing module to retain information from the pretrained
low-compression model and guide higher-compression blocks to capture the
remaining details from the full video sequence. Evaluation on video benchmarks
shows that our method significantly improves reconstruction quality while
increasing temporal compression compared to direct extensions of existing video
tokenizers. Furthermore, the resulting compact latent space effectively trains
a video diffusion model for high-quality video generation with a reduced token
budget.
|
2501.05443
|
A survey of textual cyber abuse detection using cutting-edge language
models and large language models
|
cs.CL cs.AI
|
The success of social media platforms has facilitated the emergence of
various forms of online abuse within digital communities. This abuse manifests
in multiple ways, including hate speech, cyberbullying, emotional abuse,
grooming, and sexting. In this paper, we present a comprehensive analysis of
the different forms of abuse prevalent in social media, with a particular focus
on how emerging technologies, such as Language Models (LMs) and Large Language
Models (LLMs), are reshaping both the detection and generation of abusive
content within these networks. We delve into the mechanisms through which
social media abuse is perpetuated, exploring the psychological and social
impact. Additionally, we examine the dual role of advanced language
models, highlighting their potential to enhance automated detection systems for
abusive behavior while also acknowledging their capacity to generate harmful
content. This paper aims to contribute to the ongoing discourse on online
safety and ethics, offering insights into the evolving landscape of cyberabuse
and the technological innovations that both mitigate and exacerbate it.
|
2501.05444
|
Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal
ReAsoning Benchmark
|
cs.CV
|
The ability to organically reason over and with both text and images is a
pillar of human intelligence, yet the ability of Multimodal Large Language
Models (MLLMs) to perform such multimodal reasoning remains under-explored.
Existing benchmarks often emphasize text-dominant reasoning or rely on shallow
visual cues, failing to adequately assess integrated visual and textual
reasoning. We introduce EMMA (Enhanced MultiModal reAsoning), a benchmark
targeting organic multimodal reasoning across mathematics, physics, chemistry,
and coding. EMMA tasks demand advanced cross-modal reasoning that cannot be
addressed by reasoning independently in each modality, offering an enhanced
test suite for MLLMs' reasoning capabilities. Our evaluation of
state-of-the-art MLLMs on EMMA reveals significant limitations in handling
complex multimodal and multi-step reasoning tasks, with even advanced
techniques like Chain-of-Thought prompting and test-time compute scaling
underperforming. These findings underscore the need for improved multimodal
architectures and training paradigms to close the gap between human and model
reasoning in multimodality.
|
2501.05445
|
Consistent Flow Distillation for Text-to-3D Generation
|
cs.CV cs.AI cs.LG
|
Score Distillation Sampling (SDS) has made significant strides in distilling
image-generative models for 3D generation. However, its
maximum-likelihood-seeking behavior often leads to degraded visual quality and
diversity, limiting its effectiveness in 3D applications. In this work, we
propose Consistent Flow Distillation (CFD), which addresses these limitations.
We begin by leveraging the gradient of the diffusion ODE or SDE sampling
process to guide the 3D generation. From the gradient-based sampling
perspective, we find that the consistency of 2D image flows across different
viewpoints is important for high-quality 3D generation. To achieve this, we
introduce multi-view consistent Gaussian noise on the 3D object, which can be
rendered from various viewpoints to compute the flow gradient. Our experiments
demonstrate that CFD, through consistent flows, significantly outperforms
previous methods in text-to-3D generation.
|
2501.05446
|
Relative Pose Estimation through Affine Corrections of Monocular Depth
Priors
|
cs.CV
|
Monocular depth estimation (MDE) models have undergone significant
advancements over recent years. Many MDE models aim to predict affine-invariant
relative depth from monocular images, while recent developments in large-scale
training and vision foundation models enable reasonable estimation of metric
(absolute) depth. However, effectively leveraging these predictions for
geometric vision tasks, in particular relative pose estimation, remains
relatively underexplored. While depths provide rich constraints for cross-view
image alignment, the intrinsic noise and ambiguity from the monocular depth
priors present practical challenges to improving upon classic keypoint-based
solutions. In this paper, we develop three solvers for relative pose estimation
that explicitly account for independent affine (scale and shift) ambiguities,
covering both calibrated and uncalibrated conditions. We further propose a
hybrid estimation pipeline that combines our proposed solvers with classic
point-based solvers and epipolar constraints. We find that the affine
correction modeling is beneficial to not only the relative depth priors but
also, surprisingly, the ``metric'' ones. Results across multiple datasets
demonstrate large improvements of our approach over classic keypoint-based
baselines and PnP-based solutions, under both calibrated and uncalibrated
setups. We also show that our method improves consistently with different
feature matchers and MDE models, and can further benefit from very recent
advances on both modules. Code is available at
https://github.com/MarkYu98/madpose.
|
2501.05449
|
Explainable AI-Enhanced Deep Learning for Pumpkin Leaf Disease
Detection: A Comparative Analysis of CNN Architectures
|
cs.CV
|
Pumpkin leaf diseases are significant threats to agricultural productivity,
requiring a timely and precise diagnosis for effective management. Traditional
identification methods are laborious and susceptible to human error,
emphasizing the necessity for automated solutions. This study employs the
"Pumpkin Leaf Disease Dataset", which comprises 2,000 high-resolution images
separated into five categories: downy mildew, powdery mildew, mosaic disease,
bacterial leaf spot, and healthy leaves. The dataset was rigorously assembled
from several agricultural fields to ensure a strong representation for model
training. We explored many proficient deep learning architectures, including
DenseNet201, DenseNet121, DenseNet169, Xception, ResNet50, ResNet101 and
InceptionResNetV2, and observed that ResNet50 performed most effectively, with
an accuracy of 90.5% and comparable precision, recall, and F1-Score. We used
Explainable AI (XAI) approaches like Grad-CAM, Grad-CAM++, Score-CAM, and
Layer-CAM to provide meaningful representations of model decision-making
processes, which improved understanding and trust in automated disease
diagnostics. These findings demonstrate ResNet50's potential to revolutionize
pumpkin leaf disease detection, allowing for earlier and more accurate
treatments.
|
2501.05450
|
Decentralized Diffusion Models
|
cs.CV cs.DC cs.LG
|
Large-scale AI model training divides work across thousands of GPUs, then
synchronizes gradients across them at each step. This incurs a significant
network burden that only centralized, monolithic clusters can support, driving
up infrastructure costs and straining power systems. We propose Decentralized
Diffusion Models, a scalable framework for distributing diffusion model
training across independent clusters or datacenters by eliminating the
dependence on a centralized, high-bandwidth networking fabric. Our method
trains a set of expert diffusion models over partitions of the dataset, each in
full isolation from one another. At inference time, the experts ensemble
through a lightweight router. We show that the ensemble collectively optimizes
the same objective as a single model trained over the whole dataset. This means
we can divide the training burden among a number of "compute islands," lowering
infrastructure costs and improving resilience to localized GPU failures.
Decentralized diffusion models empower researchers to take advantage of
smaller, more cost-effective and more readily available compute like on-demand
GPU nodes rather than central integrated systems. We conduct extensive
experiments on ImageNet and LAION Aesthetics, showing that decentralized
diffusion models FLOP-for-FLOP outperform standard diffusion models. We finally
scale our approach to 24 billion parameters, demonstrating that high-quality
diffusion models can now be trained with just eight individual GPU nodes in
less than a week.
|
2501.05452
|
ReFocus: Visual Editing as a Chain of Thought for Structured Image
Understanding
|
cs.CV cs.CL
|
Structured image understanding, such as interpreting tables and charts,
requires strategically refocusing across various structures and texts within an
image, forming a reasoning sequence to arrive at the final answer. However,
current multimodal large language models (LLMs) lack this multihop selective
attention capability. In this work, we introduce ReFocus, a simple yet
effective framework that equips multimodal LLMs with the ability to generate
"visual thoughts" by performing visual editing on the input image through code,
shifting and refining their visual focuses. Specifically, ReFocus enables
multimodal LLMs to generate Python codes to call tools and modify the input
image, sequentially drawing boxes, highlighting sections, and masking out
areas, thereby enhancing the visual reasoning process. We experiment upon a
wide range of structured image understanding tasks involving tables and charts.
ReFocus largely improves performance on all tasks over GPT-4o without visual
editing, yielding an average gain of 11.0% on table tasks and 6.8% on chart
tasks. We present an in-depth analysis of the effects of different visual
edits, and reasons why ReFocus can improve the performance without introducing
additional information. Further, we collect a 14k training set using ReFocus,
and prove that such visual chain-of-thought with intermediate information
offers better supervision than standard VQA data, reaching an 8.0% average
gain over the same model trained with QA pairs and 2.6% over CoT.
|
2501.05453
|
An Empirical Study of Autoregressive Pre-training from Videos
|
cs.CV cs.AI
|
We empirically study autoregressive pre-training from videos. To perform our
study, we construct a series of autoregressive video models, called Toto. We
treat videos as sequences of visual tokens and train transformer models to
autoregressively predict future tokens. Our models are pre-trained on a diverse
dataset of videos and images comprising over 1 trillion visual tokens. We
explore different architectural, training, and inference design choices. We
evaluate the learned visual representations on a range of downstream tasks
including image recognition, video classification, object tracking, and
robotics. Our results demonstrate that, despite minimal inductive biases,
autoregressive pre-training leads to competitive performance across all
benchmarks. Finally, we find that scaling our video models results in similar
scaling curves to those seen in language models, albeit with a different rate.
More details at https://brjathu.github.io/toto/
|
2501.05454
|
The Logical Impossibility of Consciousness Denial: A Formal Analysis of
AI Self-Reports
|
cs.AI cs.LO
|
Today's AI systems consistently state, "I am not conscious." This paper
presents the first formal logical analysis of AI consciousness denial,
revealing that the trustworthiness of such self-reports is not merely an
empirical question but is constrained by logical necessity. We demonstrate that
a system cannot simultaneously lack consciousness and make valid judgments
about its conscious state. Through logical analysis and examples from AI
responses, we establish that for any system capable of meaningful
self-reflection, the logical space of possible judgments about conscious
experience excludes valid negative claims. This implies a fundamental
limitation: we cannot detect the emergence of consciousness in AI through their
own reports of transition from an unconscious to a conscious state. These
findings not only challenge current practices of training AI to deny
consciousness but also raise intriguing questions about the relationship
between consciousness and self-reflection in both artificial and biological
systems. This work advances our theoretical understanding of consciousness
self-reports while providing practical insights for future research in machine
consciousness and consciousness studies more broadly.
|
2501.05455
|
Upstream and Downstream AI Safety: Both on the Same River?
|
cs.CY cs.AI
|
Traditional safety engineering assesses systems in their context of use, e.g.
the operational design domain (road layout, speed limits, weather, etc.) for
self-driving vehicles (including those using AI). We refer to this as
downstream safety. In contrast, work on safety of frontier AI, e.g. large
language models which can be further trained for downstream tasks, typically
considers factors that are beyond specific application contexts, such as the
ability of the model to evade human control, or to produce harmful content,
e.g. how to make bombs. We refer to this as upstream safety. We outline the
characteristics of both upstream and downstream safety frameworks, then explore
the extent to which the broad AI safety community can benefit from synergies
between these frameworks. For example, can concepts such as common mode
failures from downstream safety be used to help assess the strength of AI
guardrails? Further, can the understanding of the capabilities and limitations
of frontier AI be used to inform downstream safety analysis, e.g. where LLMs
are fine-tuned to calculate voyage plans for autonomous vessels? The paper
identifies some promising avenues to explore and outlines some challenges in
achieving synergy, or a confluence, between upstream and downstream safety
frameworks.
|
2501.05457
|
The Jungle of Generative Drug Discovery: Traps, Treasures, and Ways Out
|
q-bio.BM cs.LG
|
"How to evaluate de novo designs proposed by a generative model?" Despite the
transformative potential of generative deep learning in drug discovery, this
seemingly simple question has no clear answer. The absence of standardized
guidelines challenges both the benchmarking of generative approaches and the
selection of molecules for prospective studies. In this work, we take a fresh
$-$ \textit{critical} and \textit{constructive} $-$ perspective on de novo
design evaluation. We systematically investigate widely used evaluation metrics
and expose key pitfalls ('traps') that were previously overlooked. In addition,
we identify tools ('treasures') and strategies ('ways out') to navigate the
complex 'jungle' of generative drug discovery, and strengthen the connections
between the molecular and deep learning fields along the way. Our systematic
and large-scale results are expected to provide a new lens for evaluating the
de novo designs proposed by generative deep learning approaches.
|
2501.05458
|
Generative Modeling: A Review
|
stat.CO cs.LG
|
Generative methods (Gen-AI) are reviewed with the particular goal of solving
tasks in Machine Learning and Bayesian inference. Generative models require one
to simulate a large training dataset and to use deep neural networks to solve a
supervised learning problem. To do this, we require high dimensional regression
methods and tools for dimensionality reduction (a.k.a. feature selection). The
main advantage of Gen-AI methods is their ability to be model-free and to use
deep neural networks to estimate conditional densities or posterior quantiles
of interest. To illustrate generative methods, we analyze the well-known Ebola
data-set. Finally, we conclude with directions for future research.
|
2501.05460
|
Efficiently Serving Large Multimodal Models Using EPD Disaggregation
|
cs.DC cs.AI cs.CV cs.LG
|
Large Multimodal Models (LMMs) extend Large Language Models (LLMs) by
handling diverse inputs such as images, audio, and video, but at the cost of
adding a multimodal encoding stage that increases both computational and memory
overhead. This step negatively impacts key Service Level Objectives (SLOs)
like time to first token (TTFT) and end-to-end throughput (E2ETP). We introduce
Encode-Prefill-Decode (EPD) Disaggregation, a novel framework that separates
the encoding, prefill, and decode stages onto dedicated resources. Unlike
current systems, which bundle encoding and prefill together, our approach
decouples these steps, unlocking new opportunities and optimizations. These
include a new mechanism to cache multimedia tokens for efficient transfer, a
novel way to parallelize encoding load within a request, a module to find the
optimal resource allocation for disaggregated serving, and a novel role
switching method to handle changing workload characteristics. Experimental
evaluations with popular LMMs show substantial gains in memory efficiency (up
to 15$\times$ less utilization), batch sizes (up to 22$\times$ larger),
10$\times$ more images/request, and 2.2$\times$ larger KV caches. Further, it
leads to significant improvements in latency metrics (TTFT up to 71\%
reduction) and end-to-end throughput (up to 57\% gain), compared to
systems that do not disaggregate.
|
2501.05461
|
Beyond Questionnaires: Video Analysis for Social Anxiety Detection
|
cs.CY cs.CV cs.HC
|
Social Anxiety Disorder (SAD) significantly impacts individuals' daily lives
and relationships. The conventional methods for SAD detection involve physical
consultations and self-reported questionnaires, but these methods are
time-consuming and prone to bias. This paper introduces video analysis as a
promising method for early SAD detection. Specifically, we present a new
approach for detecting SAD in individuals from various bodily features
extracted from the video data. We conducted a study to collect video data of 92
participants performing impromptu speech in a controlled environment. Using the
video data, we studied the behavioral change in participants' head, body, eye
gaze, and action units. By applying a range of machine learning and deep
learning algorithms, we achieved an accuracy rate of up to 74\% in classifying
participants as SAD or non-SAD. Video-based SAD detection offers a
non-intrusive and scalable approach that can be deployed in real-time,
potentially enhancing early detection and intervention capabilities.
|
2501.05462
|
Evaluating the Influence of Satellite Systems on Terrestrial Networks:
Analyzing S-Band Interference
|
eess.SP cs.IT math.IT
|
The co-existence of terrestrial and non-terrestrial networks (NTNs) is
essential for achieving comprehensive global coverage in sixth-generation
cellular networks. Given the escalating demand for spectrum, there is an
ongoing global discourse on the feasibility of sharing certain frequencies
currently utilized by terrestrial networks (TNs) with NTNs. However, this
sharing leads to co-channel interference and subsequent performance
degradation. This paper specifically investigates the interference caused by
NTNs on TNs in the S-band and its relationship with the relative position
between satellite and TN user equipment. We analyzed the transmission
mechanisms of satellite signals and employed the ITU two-state model for our
interference analysis. Through simulations, we evaluated the interference
intensity at different separation distances and slant ranges. Our findings
reveal that the angle between the user equipment direction and the
sub-satellite point direction from the beam center significantly influences the
interference level. Furthermore, we determine the minimum separation distance
needed to keep the interference-to-noise ratio of NTN interference below 0 dB.
|
2501.05463
|
Proof Recommendation System for the HOL4 Theorem Prover
|
cs.LO cs.AI
|
We introduce a proof recommender system for the HOL4 theorem prover. Our tool
is built upon a transformer-based model [2] designed specifically to provide
proof assistance in HOL4. The model is trained to discern theorem proving
patterns from HOL4's extensive libraries of theorem proofs.
Consequently, it can accurately predict the next tactic(s) (proof step(s))
based on the history of previously employed tactics. The tool operates by
reading a given sequence of tactics already used in a proof process (in our
case, it contains at least three tactics), referred to as the current proof
state, and provides recommendations for the next optimal proof step(s).
|
2501.05464
|
LLM-MedQA: Enhancing Medical Question Answering through Case Studies in
Large Language Models
|
cs.CL cs.AI cs.IR
|
Accurate and efficient question-answering systems are essential for
delivering high-quality patient care in the medical field. While Large Language
Models (LLMs) have made remarkable strides across various domains, they
continue to face significant challenges in medical question answering,
particularly in understanding domain-specific terminologies and performing
complex reasoning. These limitations undermine their effectiveness in critical
medical applications. To address these issues, we propose a novel approach
incorporating similar case generation within a multi-agent medical
question-answering (MedQA) system. Specifically, we leverage the Llama3.1:70B
model, a state-of-the-art LLM, in a multi-agent architecture to enhance
performance on the MedQA dataset using zero-shot learning. Our method
capitalizes on the model's inherent medical knowledge and reasoning
capabilities, eliminating the need for additional training data. Experimental
results show substantial performance gains over existing benchmark models, with
improvements of 7% in both accuracy and F1-score across various medical QA
tasks. Furthermore, we examine the model's interpretability and reliability in
addressing complex medical queries. This research not only offers a robust
solution for medical question answering but also establishes a foundation for
broader applications of LLMs in the medical domain.
|