| id (9-16 chars) | title (4-278 chars) | categories (5-104 chars) | abstract (6-4.09k chars) |
|---|---|---|---|
2502.05215
|
Watermarking across Modalities for Content Tracing and Generative AI
|
cs.CR cs.AI cs.LG
|
Watermarking embeds information into digital content like images, audio, or
text, imperceptible to humans but robustly detectable by specific algorithms.
This technology has important applications to industry challenges such as
content moderation, tracing AI-generated content, and monitoring the
usage of AI models. The contributions of this thesis include the development of
new watermarking techniques for images, audio, and text. We first introduce
methods for active moderation of images on social platforms. We then develop
specific techniques for AI-generated content. We specifically demonstrate
methods to adapt latent generative models to embed watermarks in all generated
content, identify watermarked sections in speech, and improve watermarking in
large language models with tests that ensure low false positive rates.
Furthermore, we explore the use of digital watermarking to detect model misuse,
including the detection of watermarks in language models fine-tuned on
watermarked text, and introduce training-free watermarks for the weights of
large transformers. Through these contributions, the thesis provides effective
solutions for the challenges posed by the increasing use of generative AI
models and the need for model monitoring and content moderation. It finally
examines the challenges and limitations of watermarking techniques and discusses
potential future directions for research in this area.
|
2502.05218
|
FactorGCL: A Hypergraph-Based Factor Model with Temporal Residual
Contrastive Learning for Stock Returns Prediction
|
q-fin.ST cs.AI cs.LG
|
As a fundamental method in economics and finance, the factor model has been
extensively utilized in quantitative investment. In recent years, there has
been a paradigm shift from traditional linear models with expert-designed
factors to more flexible nonlinear machine learning-based models with
data-driven factors, aiming to enhance the effectiveness of these factor
models. However, due to the low signal-to-noise ratio in market data, mining
effective factors in data-driven models remains challenging. In this work, we
propose a hypergraph-based factor model with temporal residual contrastive
learning (FactorGCL) that employs a hypergraph structure to better capture
high-order nonlinear relationships among stock returns and factors. To mine
hidden factors that supplement human-designed prior factors for predicting
stock returns, we design a cascading residual hypergraph architecture, in which
the hidden factors are extracted from the residual information after removing
the influence of prior factors. Additionally, we propose a temporal residual
contrastive learning method to guide the extraction of effective and
comprehensive hidden factors by contrasting stock-specific residual information
over different time periods. Our extensive experiments on real stock market
data demonstrate that FactorGCL not only outperforms existing state-of-the-art
methods but also mines effective hidden factors for predicting stock returns.
|
2502.05219
|
Enabling External Scrutiny of AI Systems with Privacy-Enhancing
Technologies
|
cs.CY cs.AI cs.CR
|
This article describes how technical infrastructure developed by the
nonprofit OpenMined enables external scrutiny of AI systems without
compromising sensitive information.
Independent external scrutiny of AI systems provides crucial transparency
into AI development, so it should be an integral component of any approach to
AI governance. In practice, external researchers have struggled to gain access
to AI systems because of AI companies' legitimate concerns about security,
privacy, and intellectual property.
But now, privacy-enhancing technologies (PETs) have reached a new level of
maturity: end-to-end technical infrastructure developed by OpenMined combines
several PETs into various setups that enable privacy-preserving audits of AI
systems. We showcase two case studies where this infrastructure has been
deployed in real-world governance scenarios: "Understanding Social Media
Recommendation Algorithms with the Christchurch Call" and "Evaluating Frontier
Models with the UK AI Safety Institute." We describe types of scrutiny of AI
systems that could be facilitated by current setups and OpenMined's proposed
future setups.
We conclude that these innovative approaches deserve further exploration and
support from the AI governance community. Interested policymakers can focus on
empowering researchers at the legal level.
|
2502.05220
|
Aero-LLM: A Distributed Framework for Secure UAV Communication and
Intelligent Decision-Making
|
cs.CR cs.AI
|
Increased utilization of unmanned aerial vehicles (UAVs) in critical
operations necessitates secure and reliable communication with Ground Control
Stations (GCS). This paper introduces Aero-LLM, a framework integrating
multiple Large Language Models (LLMs) to enhance UAV mission security and
operational efficiency. Unlike conventional singular LLMs, Aero-LLM leverages
multiple specialized LLMs for various tasks, such as inference, anomaly
detection, and forecasting, deployed across onboard systems, edge, and cloud
servers. This dynamic, distributed architecture reduces performance bottlenecks
and strengthens security. Aero-LLM's evaluation demonstrates outstanding
task-specific metrics and a robust defense against cyber threats,
significantly enhancing UAV decision-making, operational capabilities, and
resilience against cyber attacks, setting a new standard for secure,
intelligent UAV operations.
|
2502.05221
|
Blackout DIFUSCO
|
math.OC cs.AI
|
This study explores the integration of Blackout Diffusion into the DIFUSCO
framework for combinatorial optimization, specifically targeting the Traveling
Salesman Problem (TSP). Inspired by the success of discrete-time diffusion
models (D3PM) in maintaining structural integrity, we extend the paradigm to a
continuous-time framework, leveraging the unique properties of Blackout
Diffusion. Continuous-time modeling introduces smoother transitions and refined
control, which we hypothesize enhances solution quality over traditional
discrete methods. We propose three key improvements to enhance the diffusion process.
First, we transition from a discrete-time-based model to a continuous-time
framework, providing a more refined and flexible formulation. Second, we refine
the observation time scheduling to ensure a smooth and linear transformation
throughout the diffusion process, allowing for a more natural progression of
states. Finally, building upon the second improvement, we further enhance the
reverse process by introducing finer time slices in regions that are
particularly challenging for the model, thereby improving accuracy and
stability in the reconstruction phase. Although the experimental results did
not exceed the baseline performance, they demonstrate the effectiveness of
these methods in balancing simplicity and complexity, offering new insights
into diffusion-based combinatorial optimization. This work represents the first
application of Blackout Diffusion to combinatorial optimization, providing a
foundation for further advancements in this domain. The code is available for
review at https://github.com/Giventicket/BlackoutDIFUSCO.
|
2502.05222
|
VistaFlow: Photorealistic Volumetric Reconstruction with Dynamic
Resolution Management via Q-Learning
|
cs.CV cs.GR
|
We introduce VistaFlow, a scalable three-dimensional imaging technique
capable of reconstructing fully interactive 3D volumetric images from a set of
2D photographs. Our model synthesizes novel viewpoints through a differentiable
rendering system capable of dynamic resolution management on photorealistic 3D
scenes. We achieve this through the introduction of QuiQ, a novel intermediate
video controller trained through Q-learning to maintain a consistently high
framerate by adjusting render resolution with millisecond precision. Notably,
VistaFlow runs natively on integrated CPU graphics, making it viable for mobile
and entry-level devices while still delivering high-performance rendering.
VistaFlow bypasses Neural Radiance Fields (NeRFs), using the PlenOctree data
structure to render complex light interactions such as reflection and
subsurface scattering with minimal hardware requirements. Our model is capable
of outperforming state-of-the-art methods with novel view synthesis at a
resolution of 1080p at over 100 frames per second on consumer hardware. By
tailoring render quality to the capabilities of each device, VistaFlow has the
potential to improve the efficiency and accessibility of photorealistic 3D
scene rendering across a wide spectrum of hardware, from high-end workstations
to inexpensive microcontrollers.
|
2502.05223
|
KDA: A Knowledge-Distilled Attacker for Generating Diverse Prompts to
Jailbreak LLMs
|
cs.CR cs.AI cs.CL cs.LG
|
Jailbreak attacks exploit specific prompts to bypass LLM safeguards, causing
the LLM to generate harmful, inappropriate, and misaligned content. Current
jailbreaking methods rely heavily on carefully designed system prompts and
numerous queries to achieve a single successful attack, which is costly and
impractical for large-scale red-teaming. To address this challenge, we propose
to distill the knowledge of an ensemble of SOTA attackers into a single
open-source model, called Knowledge-Distilled Attacker (KDA), which is
finetuned to automatically generate coherent and diverse attack prompts without
the need for meticulous system prompt engineering. Compared to existing
attackers, KDA achieves higher attack success rates and greater cost-time
efficiency when targeting multiple SOTA open-source and commercial black-box
LLMs. Furthermore, we conduct a quantitative diversity analysis of prompts
generated by baseline methods and KDA, identifying diverse and ensemble attacks
as key factors behind KDA's effectiveness and efficiency.
|
2502.05224
|
A Survey on Backdoor Threats in Large Language Models (LLMs): Attacks,
Defenses, and Evaluations
|
cs.CR cs.AI
|
Large Language Models (LLMs) have achieved significantly advanced
capabilities in understanding and generating human language and have gained
increasing popularity in recent years. Beyond their state-of-the-art natural
language processing (NLP) performance, their widespread use in many industries,
including medicine, finance, and education, has raised growing security
concerns. The evolution of backdoor attacks has progressed alongside the
advancement of defense mechanisms against them and the increasingly
well-developed features of LLMs. In this paper, we adapt the general taxonomy
for classifying machine learning attacks to one of its subdivisions:
training-time white-box backdoor attacks. Besides systematically classifying
attack methods, we also
consider the corresponding defense methods against backdoor attacks. By
providing an extensive summary of existing works, we hope this survey can serve
as a guideline for inspiring future research that further extends the attack
scenarios and creates a stronger defense against them for more robust LLMs.
|
2502.05225
|
BitAbuse: A Dataset of Visually Perturbed Texts for Defending Phishing
Attacks
|
cs.CR cs.AI
|
Phishing often targets victims through visually perturbed texts to bypass
security systems. The noise contained in these texts functions as an
adversarial attack, designed to deceive language models and hinder their
ability to accurately interpret the content. However, since it is difficult to
obtain sufficient phishing cases, previous studies have used synthetic datasets
that do not contain real-world cases. In this study, we propose the BitAbuse
dataset, which includes real-world phishing cases, to address the limitations
of previous research. Our dataset comprises a total of 325,580 visually
perturbed texts. The dataset inputs are drawn from the raw corpus, consisting
of visually perturbed sentences and sentences generated through an artificial
perturbation process. Each input sentence is labeled with its corresponding
ground truth, representing the restored, non-perturbed version. Language models
trained on our proposed dataset demonstrated significantly better performance
compared to previous methods, achieving an accuracy of approximately 96%. Our
analysis revealed a significant gap between real-world and synthetic examples,
underscoring the value of our dataset for building reliable pre-trained models
for restoration tasks. We release the BitAbuse dataset, which includes
real-world phishing cases annotated with visual perturbations, to support
future research in adversarial attack defense.
|
2502.05227
|
Robotouille: An Asynchronous Planning Benchmark for LLM Agents
|
cs.RO cs.AI cs.CL
|
Effective asynchronous planning, or the ability to efficiently reason and
plan over states and actions that must happen in parallel or sequentially, is
essential for agents that must account for time delays, reason over diverse
long-horizon tasks, and collaborate with other agents. While large language
model (LLM) agents show promise in high-level task planning, current benchmarks
focus primarily on short-horizon tasks and do not evaluate such asynchronous
planning capabilities. We introduce Robotouille, a challenging benchmark
environment designed to test LLM agents' ability to handle long-horizon
asynchronous scenarios. Our synchronous and asynchronous datasets capture
increasingly complex planning challenges that go beyond existing benchmarks,
requiring agents to manage overlapping tasks and interruptions. Our results
show that ReAct (GPT-4o) achieves 47% on synchronous tasks but only 11% on
asynchronous tasks, highlighting significant room for improvement. We further
analyze failure modes, demonstrating the need for LLM agents to better
incorporate long-horizon feedback and self-audit their reasoning during task
execution. Code is available at https://github.com/portal-cornell/robotouille.
|
2502.05228
|
Multi-Objective Mobile Damped Wave Algorithm (MOMDWA): A Novel Approach
For Quantum System Control
|
quant-ph cs.AI cs.SY
|
In this paper, we introduce a novel multi-objective optimization algorithm,
the Multi-Objective Mobile Damped Wave Algorithm (MOMDWA), specifically
designed to address complex quantum control problems. Our approach extends the
capabilities of the original Mobile Damped Wave Algorithm (MDWA) by
incorporating multiple objectives, enabling a more comprehensive optimization
process. We applied MOMDWA to three quantum control scenarios, focusing on
optimizing the balance between control fidelity, energy consumption, and
control smoothness. The results demonstrate that MOMDWA significantly enhances
quantum control efficiency and robustness, achieving high fidelity while
minimizing energy use and ensuring smooth control pulses. This advancement
offers a valuable tool for quantum computing and other domains requiring
precise, multi-objective control.
|
2502.05229
|
L2GNet: Optimal Local-to-Global Representation of Anatomical Structures
for Generalized Medical Image Segmentation
|
cs.CV
|
Continuous Latent Space (CLS) and Discrete Latent Space (DLS) models, like
AttnUNet and VQUNet, have excelled in medical image segmentation. In contrast,
Synergistic Continuous and Discrete Latent Space (CDLS) models show promise in
handling fine and coarse-grained information. However, they struggle with
modeling long-range dependencies. CLS- or CDLS-based models, such as TransUNet
or SynergyNet, are adept at capturing long-range dependencies. Since they rely
heavily on feature pooling or aggregation using self-attention, they may
capture dependencies among redundant regions. This hinders comprehension of
anatomical structure content, poses challenges in modeling intra-class and
inter-class dependencies, increases false negatives and compromises
generalization. Addressing these issues, we propose L2GNet, which learns global
dependencies by relating discrete codes obtained from DLS using optimal
transport and aligning codes on a trainable reference. L2GNet achieves
discriminative on-the-fly representation learning without an additional weight
matrix in self-attention models, making it computationally efficient for
medical applications. Extensive experiments on multi-organ segmentation and
cardiac datasets demonstrate L2GNet's superiority over state-of-the-art
methods, including the CDLS method SynergyNet, offering a novel approach to
enhance deep learning models' performance in medical image analysis.
|
2502.05230
|
DiffNMR2: NMR Guided Sampling Acquisition Through Diffusion Model
Uncertainty
|
q-bio.QM cs.AI
|
Nuclear Magnetic Resonance (NMR) spectroscopy uses radio-frequency pulses
to probe the resonance of a compound's nuclei, which is then analyzed to
determine its structure. The acquisition time of high-resolution NMR spectra
remains a significant bottleneck, especially for complex biological samples
such as proteins. In this study, we propose a novel and efficient sub-sampling
strategy based on a diffusion model trained on protein NMR data. Our method
iteratively reconstructs under-sampled spectra while using model uncertainty to
guide subsequent sampling, significantly reducing acquisition time. Compared to
state-of-the-art strategies, our approach improves reconstruction accuracy by
52.9%, reduces hallucinated peaks by 55.6%, and requires 60% less time in
complex NMR experiments. This advancement holds promise for many applications,
from drug discovery to materials science, where rapid and high-resolution
spectral analysis is critical.
|
2502.05231
|
Thin ring wing as a means of flow improvement upstream of a propeller
|
physics.flu-dyn cs.AI
|
Numerous devices are known that reduce the irregularity of the flow upstream
of the propeller and thereby decrease propeller-induced vibration and noise.
Many of these devices are wing-shaped vortex-generators that affect the flow
with their induced (i.e., passive) longitudinal vortices. The paper's subject
is the use of a ring-shaped wing as a highly effective passive vortex-generator
that allows the flow to be controlled closer to the most heavily loaded
sections of the propeller blades. The problem of a
thin ring-shaped wing with irregular (asymmetric) geometry in the irregular
steady flow has been solved in a linear approach and the intensity of the
induced longitudinal vortices as a function of the irregularity of the flow and
the geometry of the ring wing has been estimated using that solution.
Experiments in the towing tank showing good concordance with the theoretical
model confirmed the effectiveness of such a device. Some additional advantages
of a ring-shaped wing incorporated into the construction of stabilizers are
considered.
|
2502.05232
|
Aligner-Encoders: Self-Attention Transformers Can Be Self-Transducers
|
cs.SD cs.AI cs.LG eess.AS
|
Modern systems for automatic speech recognition, including the RNN-Transducer
and Attention-based Encoder-Decoder (AED), are designed so that the encoder is
not required to alter the time-position of information from the audio sequence
into the embedding; alignment to the final text output is processed during
decoding. We discover that the transformer-based encoder adopted in recent
years is actually capable of performing the alignment internally during the
forward pass, prior to decoding. This new phenomenon enables a simpler and more
efficient model, the "Aligner-Encoder". To train it, we discard the dynamic
programming of RNN-T in favor of the frame-wise cross-entropy loss of AED,
while the decoder employs the lighter text-only recurrence of RNN-T without
learned cross-attention -- it simply scans embedding frames in order from the
beginning, producing one token per frame until predicting the end-of-message. We
conduct experiments demonstrating performance remarkably close to the state of
the art, including a special inference configuration enabling long-form
recognition. In a representative comparison, we measure the total inference
time for our model to be 2x faster than RNN-T and 16x faster than AED. Lastly,
we find that the audio-text alignment is clearly visible in the self-attention
weights of a certain layer, which could be said to perform "self-transduction".
|
2502.05233
|
Efficient Knowledge Feeding to Language Models: A Novel Integrated
Encoder-Decoder Architecture
|
cs.CL cs.IR
|
This paper introduces a novel approach to efficiently feeding knowledge to
large language models (LLMs) during prediction by integrating retrieval and
generation processes within a unified framework. While the Retrieval-Augmented
Generation (RAG) model addresses gaps in LLMs' training data and knowledge
limits, it is hindered by token limit restrictions and dependency on the
retrieval system's accuracy. Our proposed architecture incorporates in-context
vectors (ICV) to overcome these challenges. ICV recasts in-context learning by
using latent embeddings of LLMs to create a vector that captures essential task
information. This vector is then used to shift the latent states of the LLM,
enhancing the generation process without adding demonstration examples to the
prompt. ICV directly integrates information into the model, enabling it to
process this information more effectively. Our extensive experimental
evaluation demonstrates that ICV outperforms standard in-context learning and
fine-tuning across question-answering, information retrieval, and other tasks.
This approach mitigates the limitations of current RAG models and offers a more
robust solution for handling extensive and diverse datasets. Despite leveraging
a fraction of the parameters, our ICV-enhanced model achieves competitive
performance against models like LLaMA-3, Gemma, and Phi-3, significantly
reducing computational costs and memory requirements. ICV reduces prompt
length, is easy to control, surpasses token limitations, and is computationally
efficient compared to fine-tuning.
|
2502.05234
|
Optimizing Temperature for Language Models with Multi-Sample Inference
|
cs.LG cs.AI cs.CL
|
Multi-sample aggregation strategies, such as majority voting and best-of-N
sampling, are widely used in contemporary large language models (LLMs) to
enhance predictive accuracy across various tasks. A key challenge in this
process is temperature selection, which significantly impacts model
performance. Existing approaches either rely on a fixed default temperature or
require labeled validation data for tuning, which are often scarce and
difficult to obtain. This paper addresses the challenge of automatically
identifying the (near)-optimal temperature for different LLMs using
multi-sample aggregation strategies, without relying on task-specific
validation data. We provide a comprehensive analysis of temperature's role in
performance optimization, considering variations in model architectures,
datasets, task types, model sizes, and predictive accuracy. Furthermore, we
propose a novel entropy-based metric for automated temperature optimization,
which consistently outperforms fixed-temperature baselines. Additionally, we
incorporate a stochastic process model to enhance interpretability, offering
deeper insights into the relationship between temperature and model
performance.
|
2502.05236
|
Koel-TTS: Enhancing LLM based Speech Generation with Preference
Alignment and Classifier Free Guidance
|
cs.SD cs.AI cs.LG eess.AS
|
While autoregressive speech token generation models produce speech with
remarkable variety and naturalness, their inherent lack of controllability
often results in issues such as hallucinations and undesired vocalizations that
do not conform to conditioning inputs. We introduce Koel-TTS, a suite of
enhanced encoder-decoder Transformer TTS models that address these challenges
by incorporating preference alignment techniques guided by automatic speech
recognition and speaker verification models. Additionally, we incorporate
classifier-free guidance to further improve synthesis adherence to the
transcript and reference speaker audio. Our experiments demonstrate that these
optimizations significantly enhance target speaker similarity, intelligibility,
and naturalness of synthesized speech. Notably, Koel-TTS directly maps text and
context audio to acoustic tokens, and on the aforementioned metrics,
outperforms state-of-the-art TTS models, despite being trained on a
significantly smaller dataset. Audio samples and demos are available on our
website.
|
2502.05237
|
PSM-SQL: Progressive Schema Learning with Multi-granularity Semantics
for Text-to-SQL
|
cs.DB cs.AI
|
It is challenging to convert natural language (NL) questions into executable
structured query language (SQL) queries for text-to-SQL tasks due to the vast
number of database schemas with redundancy, which interferes with semantic
learning, and the domain shift between NL and SQL. Existing works for schema
linking focus on the table level and perform it once, ignoring the
multi-granularity semantics and chainable cyclicity of schemas. In this paper,
we propose a progressive schema linking with multi-granularity semantics
(PSM-SQL) framework to reduce the redundant database schemas for text-to-SQL.
Using the multi-granularity schema linking (MSL) module, PSM-SQL learns the
schema semantics at the column, table, and database levels. More specifically,
a triplet loss is used at the column level to learn embeddings, while
fine-tuning LLMs is employed at the database level for schema reasoning. MSL
employs classifier and similarity scores to model schema interactions for
schema linking at the table level. In particular, PSM-SQL adopts a chain loop
strategy to reduce the task difficulty of schema linking by continuously
reducing the number of redundant schemas. Experiments conducted on text-to-SQL
datasets show that the proposed PSM-SQL outperforms existing methods by 1-3
percentage points.
|
2502.05239
|
Enhancing Knowledge Graph Construction: Evaluating with Emphasis on
Hallucination, Omission, and Graph Similarity Metrics
|
cs.CL cs.AI
|
Recent advancements in large language models have demonstrated significant
potential in the automated construction of knowledge graphs from unstructured
text. This paper builds upon our previous work [16], which evaluated various
models using metrics like precision, recall, F1 score, triple matching, and
graph matching, and introduces a refined approach to address the critical
issues of hallucination and omission. We propose an enhanced evaluation
framework incorporating BERTScore for graph similarity, setting a practical
threshold of 95% for graph matching. Our experiments focus on the Mistral
model, comparing its original and fine-tuned versions in zero-shot and few-shot
settings. We further extend our experiments using examples from the KELM-sub
training dataset, illustrating that the fine-tuned model significantly improves
knowledge graph construction accuracy while reducing hallucination and
omission. However, our findings also reveal that the fine-tuned models
perform worse in generalization tasks on the KELM-sub dataset. This study
underscores the importance of comprehensive evaluation metrics in advancing the
state-of-the-art in knowledge graph construction from textual data.
|
2502.05240
|
Survey on AI-Generated Media Detection: From Non-MLLM to MLLM
|
cs.CV
|
The proliferation of AI-generated media poses significant challenges to
information authenticity and social trust, making reliable detection methods
highly sought after. Methods for detecting AI-generated media have evolved rapidly,
paralleling the advancement of Multimodal Large Language Models (MLLMs).
Current detection approaches can be categorized into two main groups:
Non-MLLM-based and MLLM-based methods. The former employs high-precision,
domain-specific detectors powered by deep learning techniques, while the latter
utilizes general-purpose detectors based on MLLMs that integrate authenticity
verification, explainability, and localization capabilities. Despite
significant progress in this field, there remains a gap in literature regarding
a comprehensive survey that examines the transition from domain-specific to
general-purpose detection methods. This paper addresses this gap by providing a
systematic review of both approaches, analyzing them from single-modal and
multi-modal perspectives. We present a detailed comparative analysis of these
categories, examining their methodological similarities and differences.
Through this analysis, we explore potential hybrid approaches and identify key
challenges in forgery detection, providing direction for future research.
Additionally, as MLLMs become increasingly prevalent in detection tasks,
ethical and security considerations have emerged as critical global concerns.
We examine the regulatory landscape surrounding Generative AI (GenAI) across
various jurisdictions, offering valuable insights for researchers and
practitioners in this field.
|
2502.05242
|
SEER: Self-Explainability Enhancement of Large Language Models'
Representations
|
cs.CL cs.AI cs.CV cs.LG
|
Explaining the hidden representations of Large Language Models (LLMs) offers a
way to understand LLMs' underlying inference logic and improve their
reliability in application scenarios. However, previous methods introduce
external "black-box" modules to explain "black-box" LLMs, increasing the
potential uncertainty and failing to provide faithful explanations. In this
paper, we propose a self-explaining method SEER, enhancing LLMs' explainability
by aggregating the same concept and disentangling the different concepts in the
representation space. In this way, SEER provides faithful explanations carried
by representations synchronously with the LLMs' output. Additionally, we
showcase the applications of SEER on trustworthiness-related tasks (e.g.,
safety risk classification and detoxification), where self-explained
LLMs achieve consistent improvement in explainability and performance. More
crucially, we theoretically analyze the improvement of SEER on LLMs'
generalization ability through optimal transport theory.
|
2502.05244
|
Probabilistic Artificial Intelligence
|
cs.AI cs.LG
|
Artificial intelligence commonly refers to the science and engineering of
artificial systems that can carry out tasks generally thought to require
aspects of human intelligence, such as playing games, translating languages,
and driving cars. In recent years, there have been exciting advances in
learning-based, data-driven approaches towards AI, and machine learning and
deep learning have enabled computer systems to perceive the world in
unprecedented ways. Reinforcement learning has enabled breakthroughs in complex
games such as Go and challenging robotics tasks such as quadrupedal locomotion.
A key aspect of intelligence is not only to make predictions, but also to
reason about the uncertainty in these predictions and to consider this uncertainty
when making decisions. This is what this manuscript on "Probabilistic
Artificial Intelligence" is about. The first part covers probabilistic
approaches to machine learning. We discuss the differentiation between
"epistemic" uncertainty due to lack of data and "aleatoric" uncertainty, which
is irreducible and stems, e.g., from noisy observations and outcomes. We
discuss concrete approaches towards probabilistic inference and modern
approaches to efficient approximate inference.
The second part of the manuscript is about taking uncertainty into account in
sequential decision tasks. We consider active learning and Bayesian
optimization -- approaches that collect data by proposing experiments that are
informative for reducing the epistemic uncertainty. We then consider
reinforcement learning and modern deep RL approaches that use neural network
function approximation. We close by discussing modern approaches in model-based
RL, which harness epistemic and aleatoric uncertainty to guide exploration,
while also reasoning about safety.
|
2502.05246
|
Optimizing Wealth by a Game within Cellular Automata
|
cs.GT cs.MA
|
The objective is to find a Cellular Automata (CA) rule that can evolve 2D
patterns that are optimal with respect to a global fitness function. The global
fitness is defined as the sum of local computed utilities. A utility or value
function computes a score depending on the states in the local neighborhood.
First, the method used to find such a CA rule is explained. Then this method is
applied to find a rule that maximizes social wealth. Here wealth
is defined as the sum of the payoffs that all players (agents, cells) receive
in a prisoner's dilemma game, and then shared equally among them. The problem
is solved in four steps: (0) Defining the utility function, (1) Finding optimal
master patterns with a Genetic Algorithm, (2) Extracting templates (local
neighborhood configurations), (3) Inserting the templates in a general CA rule.
The constructed CA rule finds optimal and near-optimal patterns for even and
odd grid sizes. Optimal patterns of odd size contain exactly one singularity, a
2 x 2 block of cooperators.
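The utility in step (0) — social wealth as the summed prisoner's-dilemma payoffs over all local interactions — can be sketched directly. This is a minimal illustration: the von Neumann (4-cell) neighborhood and the classic payoff values R=3, T=5, S=0, P=1 are assumptions, not taken from the paper.

```python
import numpy as np

# Classic PD payoffs (assumed, not from the paper): reward, temptation,
# sucker, punishment.
R, T, S, P = 3, 5, 0, 1

def payoff(a, b):
    """Payoff to player a (1 = cooperate, 0 = defect) against player b."""
    if a == 1 and b == 1:
        return R
    if a == 0 and b == 1:
        return T
    if a == 1 and b == 0:
        return S
    return P

def social_wealth(grid):
    """Sum of payoffs all cells receive from their von Neumann neighbours
    on a toroidal (wrap-around) grid."""
    n, m = grid.shape
    total = 0
    for i in range(n):
        for j in range(m):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                total += payoff(grid[i, j], grid[(i + di) % n, (j + dj) % m])
    return total

all_coop = np.ones((4, 4), dtype=int)
print(social_wealth(all_coop))  # 16 cells * 4 neighbours * R = 192
```

Dividing this total by the number of cells gives the equally-shared wealth the abstract refers to.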
|
2502.05248
|
Evaluating Personality Traits in Large Language Models: Insights from
Psychological Questionnaires
|
cs.CL cs.AI cs.MA
|
Psychological assessment tools have long helped humans understand behavioural
patterns. While Large Language Models (LLMs) can generate content comparable to
that of humans, we explore whether they exhibit personality traits. To this
end, this work applies psychological tools to LLMs in diverse scenarios to
generate personality profiles. Using established trait-based questionnaires
such as the Big Five Inventory and by addressing the possibility of training
data contamination, we examine the dimensional variability and dominance of
LLMs across five core personality dimensions: Openness, Conscientiousness,
Extraversion, Agreeableness, and Neuroticism. Our findings reveal that LLMs
exhibit unique dominant traits, varying characteristics, and distinct
personality profiles even within the same family of models.
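Trait profiles from questionnaires like the BFI are typically computed by averaging Likert-scale item responses after flipping reverse-keyed items. A minimal sketch of that standard scoring step — the items and keying below are placeholders, not actual BFI content:

```python
# Toy scoring for a trait questionnaire on a 1-5 Likert scale with
# reverse-keyed items. Items and keying are placeholders, not real BFI items.
def score_trait(responses, reverse_keyed):
    """Mean item score after flipping reverse-keyed items (1..5 scale)."""
    vals = [(6 - r) if i in reverse_keyed else r
            for i, r in enumerate(responses)]
    return sum(vals) / len(vals)

# Extraversion with item 1 reverse-keyed (e.g. "tends to be quiet"):
print(score_trait([5, 2], reverse_keyed={1}))  # (5 + (6 - 2)) / 2 = 4.5
```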
|
2502.05252
|
GSM-Infinite: How Do Your LLMs Behave over Infinitely Increasing Context
Length and Reasoning Complexity?
|
cs.CL cs.AI
|
Long-context large language models (LLMs) have recently shown strong
performance in information retrieval and long-document QA. However, to tackle
the most challenging intellectual problems, LLMs must reason effectively in
long and complex contexts (e.g., frontier mathematical research). Studying how
LLMs handle increasing reasoning complexity and context length is essential,
yet existing benchmarks lack a solid basis for quantitative evaluation.
Inspired by the abstraction of GSM-8K problems as computational graphs, and the
ability to introduce noise by adding unnecessary nodes and edges, we develop a
grade school math problem generator capable of producing arithmetic problems
with infinite difficulty and context length under fine-grained control. Using
our newly synthesized GSM-Infinite benchmark, we comprehensively evaluate
existing LLMs. We find a consistent sigmoid decline in reasoning performance as
complexity increases, along with a systematic inference scaling trend:
exponentially increasing inference computation yields only linear performance
gains. These findings underscore the fundamental limitations of current
long-context LLMs and the key challenges in scaling reasoning capabilities. Our
GSM-Infinite benchmark provides a scalable and controllable testbed for
systematically studying and advancing LLM reasoning in long and complex
contexts.
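The generator idea — a computational graph of controllable depth plus injected noise nodes — can be illustrated with a toy version. The templates, operators, and naming below are invented for illustration and are not the benchmark's actual format:

```python
import random

# Toy generator in the spirit of GSM-Infinite: a chain of arithmetic steps of
# controllable length ("reasoning complexity") plus irrelevant distractor
# facts ("noise nodes"). Templates are illustrative, not the benchmark's.
def make_problem(n_steps, n_noise, seed=0):
    rng = random.Random(seed)
    value = rng.randint(1, 9)
    lines = [f"Quantity q0 is {value}."]
    for k in range(1, n_steps + 1):
        op, operand = rng.choice(["+", "*"]), rng.randint(2, 5)
        value = value + operand if op == "+" else value * operand
        lines.append(f"q{k} is q{k-1} {op} {operand}.")
    for k in range(n_noise):  # distractors never referenced by the answer
        lines.append(f"Unrelated quantity r{k} is {rng.randint(1, 99)}.")
    rng.shuffle(lines)  # bury the relevant facts among the noise
    return " ".join(lines) + f" What is q{n_steps}?", value

text, answer = make_problem(n_steps=3, n_noise=2)
print(text)
```

Increasing `n_steps` raises reasoning complexity; increasing `n_noise` pads context length without changing the answer.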
|
2502.05253
|
LLMs Can Teach Themselves to Better Predict the Future
|
cs.CL cs.AI
|
We present an outcome-driven fine-tuning framework that enhances the
forecasting capabilities of large language models (LLMs) without relying on
human-curated reasoning samples. Our method leverages model self-play to
generate pairs of diverse reasoning trajectories and probabilistic forecasts
for a set of diverse questions that resolve after the models' knowledge cutoff
date. We then rank pairs of these reasoning traces by their distance to the
actual outcomes before fine-tuning the model via Direct Preference Optimization
(DPO). On a separate test set, our approach increases prediction accuracy of
Phi-4 14B and DeepSeek-R1 14B by between 7--10\% over a base model and a DPO
fine-tuned control model with randomized labels, bringing them on par with
forecasting capabilities of much larger frontier models like GPT-4o.
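The preference-pair construction step — ranking self-played forecasts by distance to the resolved outcome — can be sketched as follows. The dict schema and prompt format are illustrative assumptions, not the paper's exact data format:

```python
# Sketch: self-played forecasts for one question are ranked by distance to
# the resolved outcome; the closest and farthest become the (chosen,
# rejected) pair for DPO fine-tuning. Schema is illustrative.
def build_dpo_pair(question, traces, outcome):
    """traces: list of (reasoning_text, probability); outcome: 0 or 1."""
    ranked = sorted(traces, key=lambda t: abs(t[1] - outcome))
    chosen, rejected = ranked[0], ranked[-1]
    return {
        "prompt": question,
        "chosen": f"{chosen[0]} Final probability: {chosen[1]:.2f}",
        "rejected": f"{rejected[0]} Final probability: {rejected[1]:.2f}",
    }

pair = build_dpo_pair(
    "Will X happen by 2025?",
    [("It seems likely because ...", 0.8), ("Unlikely given ...", 0.2)],
    outcome=1,
)
print(pair["chosen"])  # the 0.8 forecast is closest to the outcome 1
```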
|
2502.05255
|
Incivility and Contentiousness Spillover between COVID-19 and Climate
Science Engagement
|
cs.SI cs.CY physics.soc-ph
|
Affective polarization and its accompanying cleavage-based sorting drives
incivility and contentiousness around climate change and other science-related
issues. Looking at the COVID-19 period, we study cross-domain spillover of
incivility and contentiousness in public engagements with climate change and
climate science on Twitter and Reddit. We find strong evidence of the
signatures of affective polarization surrounding COVID-19 spilling into the
climate change domain. Across different social media systems, COVID-19 content
is associated with incivility and contentiousness in climate discussions. These
patterns of increased antagonism were responsive to pandemic events that made
the link between science and public policy more salient. We also show that the
observed spillover activated along pre-pandemic political cleavages,
specifically anti-internationalist populist beliefs, that linked climate policy
opposition to vaccine hesitancy. Our findings highlight the dangers of
entrenched cross-domain polarization manifesting as spillover of antagonistic
behavior.
|
2502.05256
|
Learned Offline Query Planning via Bayesian Optimization
|
cs.DB
|
Analytics database workloads often contain queries that are executed
repeatedly. Existing optimization techniques generally prioritize keeping
optimization cost low, normally well below the time it takes to execute a
single instance of a query. If a given query is going to be executed thousands
of times, could it be worth investing significantly more optimization time? In
contrast to traditional online query optimizers, we propose an offline query
optimizer that searches a wide variety of plans and incorporates query
execution as a primitive. Our offline query optimizer combines variational
auto-encoders with Bayesian optimization to find optimized plans for a given
query. We compare our technique to the optimal plans possible with PostgreSQL
and recent RL-based systems over several datasets, and show that our technique
finds faster query plans.
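The core idea of treating query execution as a primitive can be sketched as a loop that actually runs and times each candidate plan. A real offline optimizer would propose candidates via a VAE latent space and a Bayesian-optimization acquisition function rather than enumerating them; the plan encoding and toy cost function here are illustrative:

```python
# Sketch of "query execution as a primitive": each candidate plan is executed
# and timed, and the fastest plan is kept. Candidate proposal is simplified
# to enumeration; a real system samples a learned latent space.
def offline_optimize(execute, candidates):
    best_plan, best_time = None, float("inf")
    for plan in candidates:
        t = execute(plan)  # the expensive primitive: run the query for real
        if t < best_time:
            best_plan, best_time = plan, t
    return best_plan, best_time

# Toy cost model: pretend join order 2 is fastest.
plan, t = offline_optimize(lambda p: abs(p - 2) + 0.1, range(6))
print(plan)  # 2
```

The point of the offline setting is that this loop may spend far more than one query's runtime on search, amortized over thousands of future executions.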
|
2502.05264
|
Quantum automated learning with provable and explainable trainability
|
quant-ph cs.AI cs.LG
|
Machine learning is widely believed to be one of the most promising practical
applications of quantum computing. Existing quantum machine learning schemes
typically employ a quantum-classical hybrid approach that relies crucially on
gradients of model parameters. Such an approach lacks provable convergence to
global minima and will become infeasible as quantum learning models scale up.
Here, we introduce quantum automated learning, where no variational parameter
is involved and the training process is converted to quantum state preparation.
In particular, we encode training data into unitary operations and iteratively
evolve a random initial state under these unitaries and their inverses, with a
target-oriented perturbation towards higher prediction accuracy sandwiched in
between. Under reasonable assumptions, we rigorously prove that the evolution
converges exponentially to the desired state corresponding to the global
minimum of the loss function. We show that such a training process can be
understood from the perspective of preparing quantum states by imaginary time
evolution, where the data-encoded unitaries together with target-oriented
perturbations would train the quantum learning model in an automated fashion.
We further prove that the quantum automated learning paradigm features good
generalization ability with the generalization error upper bounded by the ratio
between a logarithmic function of the Hilbert space dimension and the number of
training samples. In addition, we carry out extensive numerical simulations on
real-life images and quantum data to demonstrate the effectiveness of our
approach and validate the assumptions. Our results establish an unconventional
quantum learning strategy that is gradient-free with provable and explainable
trainability, which would be crucial for large-scale practical applications of
quantum computing in machine learning scenarios.
|
2502.05271
|
RobotMover: Learning to Move Large Objects by Imitating the Dynamic
Chain
|
cs.RO
|
Moving large objects, such as furniture, is a critical capability for robots
operating in human environments. This task presents significant challenges due
to two key factors: the need to synchronize whole-body movements to prevent
collisions between the robot and the object, and the under-actuated dynamics
arising from the substantial size and weight of the objects. These challenges
also complicate performing these tasks via teleoperation. In this work, we
introduce RobotMover, a generalizable learning framework that leverages
human-object interaction demonstrations to enable robots to perform large
object manipulation tasks. Central to our approach is the Dynamic Chain, a
novel representation that abstracts human-object interactions so that they can
be retargeted to robotic morphologies. The Dynamic Chain is a spatial
descriptor connecting the human and object root position via a chain of nodes,
which encode the position and velocity of different interaction keypoints. We
train policies in simulation using Dynamic-Chain-based imitation rewards and
domain randomization, enabling zero-shot transfer to real-world settings
without fine-tuning. Our approach outperforms both learning-based methods and
teleoperation baselines across six evaluation metrics when tested on three
distinct object types, both in simulation and on physical hardware.
Furthermore, we successfully apply the learned policies to real-world tasks,
such as moving a trash cart and rearranging chairs.
|
2502.05273
|
Principles and Components of Federated Learning Architectures
|
cs.LG
|
Federated learning (FL) is a machine learning framework in which a large
number of clients (such as mobile devices or whole enterprises) collaborate to
train a model while keeping the training data decentralized, all overseen by a
central server (such as a
service provider). There are advantages in terms of privacy, security,
regulations, and economy with this decentralized approach to model training. FL
is not impervious to the flaws that plague conventional machine learning
models, despite its seeming promise. This study offers a thorough analysis of
the fundamental ideas and elements of federated learning architectures,
emphasizing five important areas: communication architectures, machine learning
models, data partitioning, privacy methods, and system heterogeneity. We
additionally address the difficulties and potential paths for future study in
the area. Furthermore, based on a comprehensive review of the literature, we
present a collection of architectural patterns for federated learning systems.
This analysis helps to clarify the basics of federated learning, its primary
components, and several architectural details.
|
2502.05275
|
Interpretable Failure Detection with Human-Level Concepts
|
cs.CV
|
Reliable failure detection holds paramount importance in safety-critical
applications. Yet, neural networks are known to produce overconfident
predictions for misclassified samples. This remains problematic because
existing confidence score functions rely on category-level signals, the
logits, to detect failures. This research introduces an innovative
strategy, leveraging human-level concepts for a dual purpose: to reliably
detect when a model fails and to transparently interpret why. By integrating a
nuanced array of signals for each category, our method enables a finer-grained
assessment of the model's confidence. We present a simple yet highly effective
approach based on the ordinal ranking of concept activation to the input image.
Without bells and whistles, our method significantly reduces the false positive
rate across diverse real-world image classification benchmarks, specifically by
3.7% on ImageNet and 9% on EuroSAT.
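The ordinal-ranking idea can be illustrated with a pairwise rank-agreement score between an input's concept activations and a reference ordering for the predicted class. The concept names, reference rankings, and Kendall-style agreement measure below are illustrative assumptions, not the paper's exact formulation:

```python
# Concept-based confidence sketch: compare the ordinal ranking of concept
# activations on this input against the predicted class's reference ranking;
# low agreement flags a likely failure.
def rank(xs):
    """Rank positions: 0 for the largest value, 1 for the next, and so on."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for position, i in enumerate(order):
        r[i] = position
    return r

def concept_confidence(activations, reference_ranking):
    """Fraction of concept pairs whose order matches the reference."""
    r = rank(activations)
    n = len(r)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    agree = sum((r[i] < r[j]) == (reference_ranking[i] < reference_ranking[j])
                for i, j in pairs)
    return agree / len(pairs)

# "cat" concepts: whiskers, fur, wheels. An input activating wheels most
# disagrees with the reference ordering and gets low confidence.
print(concept_confidence([0.9, 0.7, 0.1], [0, 1, 2]))  # 1.0
print(concept_confidence([0.1, 0.2, 0.9], [0, 1, 2]))  # 0.0
```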
|
2502.05277
|
Invizo: Arabic Handwritten Document Optical Character Recognition
Solution
|
cs.CV
|
Converting images of Arabic text into plain text is a widely researched topic
in academia and industry. However, recognition of Arabic handwritten and
printed text presents difficult challenges due to the complex nature of
variations of the Arabic script. This work proposes an end-to-end solution for
recognizing Arabic handwritten text, printed text, and Arabic numerals, and
presents the data in a structured manner. We reached 81.66% precision, 78.82%
recall, and
79.07% F-measure on a Text Detection task that powers the proposed solution.
The proposed recognition model incorporates state-of-the-art CNN-based feature
extraction, and Transformer-based sequence modeling to accommodate variations
in handwriting styles, stroke thicknesses, alignments, and noise conditions.
The evaluation shows strong performance on both printed and handwritten text,
yielding 0.59% CER and 1.72% WER on printed text, and 7.91% CER and 31.41% WER
on handwritten text. The proposed solution has proven reliable in real-life OCR
tasks: it pairs the detection and recognition models with supporting feature
extraction and matching algorithms, and its general-purpose implementation
makes it applicable to any Arabic handwritten or printed document or receipt.
It is thus practical and useful across contexts.
|
2502.05282
|
Homeomorphism Prior for False Positive and Negative Problem in Medical
Image Dense Contrastive Representation Learning
|
cs.CV cs.AI
|
Dense contrastive representation learning (DCRL) has greatly improved the
learning efficiency for image-dense prediction tasks, showing its great
potential to reduce the large costs of medical image collection and dense
annotation. However, the properties of medical images make correspondence
discovery unreliable, leading to the open problem of large-scale false
positive and negative (FP&N) pairs in DCRL. In this paper, we propose GEoMetric
vIsual deNse sImilarity (GEMINI) learning, which embeds a homeomorphism prior
into DCRL and enables reliable correspondence discovery for effective dense
contrast. We propose a deformable homeomorphism learning (DHL) which models the
homeomorphism of medical images and learns to estimate a deformable mapping to
predict the pixels' correspondence under topological preservation. It
effectively reduces the searching space of pairing and drives an implicit and
soft learning of negative pairs via a gradient. We also propose a geometric
semantic similarity (GSS) which extracts semantic information in features to
measure the alignment degree for the correspondence learning. It will promote
the learning efficiency and performance of deformation, constructing positive
pairs reliably. In our experiments, we implement two practical variants on two
typical representation learning tasks. Our promising results on seven datasets,
outperforming existing methods, demonstrate the superiority of our approach.
Our code will be released at:
https://github.com/YutingHe-list/GEMINI.
|
2502.05286
|
Fairness and Sparsity within Rashomon sets: Enumeration-Free Exploration
and Characterization
|
cs.LG
|
We introduce an enumeration-free method based on mathematical programming to
precisely characterize various properties such as fairness or sparsity within
the set of "good models", known as Rashomon set. This approach is generically
applicable to any hypothesis class, provided that a mathematical formulation of
the model learning task exists. It offers a structured framework to define the
notion of business necessity and evaluate how fairness can be improved or
degraded towards a specific protected group, while remaining within the
Rashomon set and maintaining any desired sparsity level.
We apply our approach to two hypothesis classes: scoring systems and decision
diagrams, leveraging recent mathematical programming formulations for training
such models. As seen in our experiments, the method comprehensively and
certifiably quantifies trade-offs between predictive performance, sparsity, and
fairness. We observe that a wide range of fairness values are attainable,
ranging from highly favorable to significantly unfavorable for a protected
group, while staying within less than 1% of the best possible training accuracy
for the hypothesis class. Additionally, we observe that sparsity constraints
limit these trade-offs and may disproportionately harm specific subgroups. As
we evidenced, thoroughly characterizing the tensions between these key aspects
is critical for an informed and accountable selection of models.
|
2502.05290
|
Switch-based Independent Antagonist Actuation with a Single Motor for a
Soft Exosuit
|
cs.RO
|
The use of a cable-driven soft exosuit poses challenges with regards to the
mechanical design of the actuation system, particularly when used for actuation
along multiple degrees of freedom (DoF). The simplest general solution requires
the use of two actuators to be capable of inducing movement along one DoF.
However, this solution is not practical for the development of multi-joint
exosuits. Reducing the number of actuators is a critical need in multi-DoF
exosuits. We propose a switch-based mechanism to control an antagonist pair of
cables such that it can actuate along any cable path geometry. The results
showed that 298.24ms was needed for switching between cables. While this
latency is relatively large, it can be reduced in the future by a better choice of
the motor used for actuation.
|
2502.05291
|
Can LLMs Rank the Harmfulness of Smaller LLMs? We are Not There Yet
|
cs.CL
|
Large language models (LLMs) have become ubiquitous, thus it is important to
understand their risks and limitations. Smaller LLMs can be deployed where
compute resources are constrained, such as edge devices, but with different
propensity to generate harmful output. Mitigation of LLM harm typically depends
on annotating the harmfulness of LLM output, which is expensive to collect from
humans. This work studies two questions: How do smaller LLMs rank regarding
generation of harmful content? How well can larger LLMs annotate harmfulness?
We prompt three small LLMs to elicit harmful content of various types, such as
discriminatory language, offensive content, privacy invasion, or negative
influence, and collect human rankings of their outputs. Then, we evaluate three
state-of-the-art large LLMs on their ability to annotate the harmfulness of
these responses. We find that the smaller models differ with respect to
harmfulness. We also find that large LLMs show low to moderate agreement with
humans. These findings underline the need for further work on harm mitigation
in LLMs.
|
2502.05292
|
Drone Detection and Tracking with YOLO and a Rule-based Method
|
cs.CV cs.AI cs.LG
|
Drones or unmanned aerial vehicles are traditionally used for military
missions, warfare, and espionage. However, the usage of drones has
significantly increased due to multiple industrial applications involving
security and inspection, transportation, research purposes, and recreational
drone flying. Such an increased volume of drone activity in public spaces
requires regulatory actions for purposes of privacy protection and safety.
Hence, detection of illegal drone activities such as boundary encroachment
becomes a necessity. Such detection tasks are usually automated and performed
by deep learning models which are trained on annotated image datasets. This
paper builds on a previous work and extends an already published open source
dataset. A description and analysis of the entire dataset is provided. The
dataset is used to train the YOLOv7 deep learning model and some of its minor
variants and the results are provided. Since the detection models are based on
a single image input, a simple cross-correlation based tracker is used to
reduce detection drops and improve tracking performance in videos. Finally, the
entire drone detection system is summarized.
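A cross-correlation tracker of the kind described can be sketched as a brute-force normalized template search over the frame. The synthetic frame and search details below are illustrative; the paper's exact matching procedure may differ:

```python
import numpy as np

# Minimal cross-correlation template tracker used to bridge missed
# detections: search the new frame for the patch that best matches the last
# detected template, by normalized cross-correlation.
def track(frame, template):
    """Return top-left (row, col) of the best normalized match."""
    th, tw = template.shape
    t = (template - template.mean()).ravel()
    t /= np.linalg.norm(t) + 1e-9
    best, best_score = (0, 0), -np.inf
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            win = frame[i:i + th, j:j + tw]
            w = (win - win.mean()).ravel()
            score = float(w @ t) / (np.linalg.norm(w) + 1e-9)
            if score > best_score:
                best, best_score = (i, j), score
    return best

rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[12:20, 25:33].copy()  # the "drone" patch to re-locate
print(track(frame, template))  # (12, 25)
```

Between detector hits, the template is taken from the last detection and re-located in subsequent frames, which is what reduces detection drops in video.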
|
2502.05295
|
GST-UNet: Spatiotemporal Causal Inference with Time-Varying Confounders
|
cs.LG stat.ME
|
Estimating causal effects from spatiotemporal data is a key challenge in
fields such as public health, social policy, and environmental science, where
controlled experiments are often infeasible. However, existing causal inference
methods relying on observational data face significant limitations: they depend
on strong structural assumptions to address spatiotemporal challenges
-- such as interference, spatial confounding, and temporal
carryover effects -- or fail to account for
$\textit{time-varying confounders}$. These confounders, influenced by past
treatments and outcomes, can themselves shape future treatments and outcomes,
creating feedback loops that complicate traditional adjustment strategies. To
address these challenges, we introduce the $\textbf{GST-UNet}$
($\textbf{G}$-computation $\textbf{S}$patio-$\textbf{T}$emporal
$\textbf{UNet}$), a novel end-to-end neural network framework designed to
estimate treatment effects in complex spatial and temporal settings. The
GST-UNet leverages regression-based iterative G-computation to explicitly
adjust for time-varying confounders, providing valid estimates of potential
outcomes and treatment effects. To the best of our knowledge, the GST-UNet is
the first neural model to account for complex, non-linear dynamics and
time-varying confounders in spatiotemporal interventions. We demonstrate the
effectiveness of the GST-UNet through extensive simulation studies and showcase
its practical utility with a real-world analysis of the impact of wildfire
smoke on respiratory hospitalizations during the 2018 California Camp Fire. Our
results highlight the potential of GST-UNet to advance spatiotemporal causal
inference across a wide range of policy-driven and scientific applications.
|
2502.05300
|
Parameter Symmetry Breaking and Restoration Determines the Hierarchical
Learning in AI Systems
|
cs.LG cond-mat.dis-nn cs.AI stat.ML
|
The dynamics of learning in modern large AI systems is hierarchical, often
characterized by abrupt, qualitative shifts akin to phase transitions observed
in physical systems. While these phenomena hold promise for uncovering the
mechanisms behind neural networks and language models, existing theories remain
fragmented, addressing specific cases. In this paper, we posit that parameter
symmetry breaking and restoration serve as a unifying mechanism underlying
these behaviors. We synthesize prior observations and show how this mechanism
explains three distinct hierarchies in neural networks: learning dynamics,
model complexity, and representation formation. By connecting these
hierarchies, we highlight symmetry -- a cornerstone of theoretical physics --
as a potential fundamental principle in modern AI.
|
2502.05301
|
Decentralized Online Ensembles of Gaussian Processes for Multi-Agent
Systems
|
cs.LG cs.MA eess.SP stat.ML
|
Flexible and scalable decentralized learning solutions are fundamentally
important in the application of multi-agent systems. While several recent
approaches introduce (ensembles of) kernel machines in the distributed setting,
Bayesian solutions are much more limited. We introduce a fully decentralized,
asymptotically exact solution to computing the random feature approximation of
Gaussian processes. We further address the choice of hyperparameters by
introducing an ensembling scheme for Bayesian multiple kernel learning based on
online Bayesian model averaging. The resulting algorithm is tested against
Bayesian and frequentist methods on simulated and real-world datasets.
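The random feature approximation underpinning the decentralized solution can be sketched with random Fourier features for the RBF kernel, which reduce GP regression to finite-dimensional Bayesian linear regression whose posterior is easy to share among agents. The hyperparameters and feature count below are illustrative:

```python
import numpy as np

# Random Fourier features: k(x, x') ~= phi(x) . phi(x') for the RBF kernel,
# with w ~ N(0, I / lengthscale^2) and b ~ Uniform(0, 2*pi).
rng = np.random.default_rng(0)
D, d, lengthscale = 500, 1, 1.0   # illustrative choices
W = rng.normal(0.0, 1.0 / lengthscale, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def phi(X):
    """Map inputs (n, d) to random features (n, D)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = np.array([[0.0], [0.3]])
approx = (phi(X[:1]) @ phi(X[1:]).T).item()        # RFF kernel estimate
exact = np.exp(-0.5 * (0.3 / lengthscale) ** 2)    # true RBF kernel value
print(approx, exact)  # close for large D
```

Because all agents share the same (W, b), their local posteriors over the D feature weights can be combined exactly, which is what makes the decentralized computation tractable.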
|
2502.05305
|
Online Covariance Estimation in Nonsmooth Stochastic Approximation
|
stat.ML cs.LG math.OC
|
We consider applying stochastic approximation (SA) methods to solve nonsmooth
variational inclusion problems. Existing studies have shown that the averaged
iterates of SA methods exhibit asymptotic normality, with an optimal limiting
covariance matrix in the local minimax sense of H\'ajek and Le Cam. However, no
methods have been proposed to estimate this covariance matrix in a nonsmooth
and potentially non-monotone (nonconvex) setting. In this paper, we study an
online batch-means covariance matrix estimator introduced in Zhu et al.(2023).
The estimator groups the SA iterates appropriately and computes the sample
covariance among batches as an estimate of the limiting covariance. Its
construction does not require prior knowledge of the total sample size, and
updates can be performed recursively as new data arrives. We establish that, as
long as the batch size sequence is properly specified (depending on the
stepsize sequence), the estimator achieves a convergence rate of order
$O(\sqrt{d}n^{-1/8+\varepsilon})$ for any $\varepsilon>0$, where $d$ and $n$
denote the problem dimensionality and the number of iterations (or samples)
used. Although the problem is nonsmooth and potentially non-monotone
(nonconvex), our convergence rate matches the best-known rate for covariance
estimation methods using only first-order information in smooth and
strongly-convex settings. The consistency of this covariance estimator enables
asymptotically valid statistical inference, including constructing confidence
intervals and performing hypothesis testing.
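The batch-means construction can be sketched on a toy process: group the iterates into batches of size m and use the sample covariance of batch means, scaled by m, as the long-run covariance estimate. The AR(1) test process, batch size, and constants below are illustrative, not from the paper:

```python
import numpy as np

# Batch-means covariance estimator sketch: the sample covariance of batch
# means, scaled by the batch size, estimates the limiting covariance without
# prior knowledge of the total sample size.
def batch_means_cov(iterates, m):
    n, d = iterates.shape
    k = n // m
    means = iterates[: k * m].reshape(k, m, d).mean(axis=1)
    diffs = means - iterates.mean(axis=0)
    return m * (diffs.T @ diffs) / k

rng = np.random.default_rng(0)
a, n = 0.5, 200_000  # AR(1): long-run variance of sqrt(n)*mean is 1/(1-a)^2
x = np.empty((n, 1))
x[0] = 0.0
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = a * x[t - 1] + eps[t]
est = batch_means_cov(x, m=500).item()
print(est)  # approaches 1/(1-a)^2 = 4 as n grows
```

In the paper's setting the batch-size sequence must grow with the stepsize schedule; here a fixed batch size suffices for the illustration.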
|
2502.05307
|
Training Set Reconstruction from Differentially Private Forests: How
Effective is DP?
|
cs.LG cs.CR
|
Recent research has shown that machine learning models are vulnerable to
privacy attacks targeting their training data. Differential privacy (DP) has
become a widely adopted countermeasure, as it offers rigorous privacy
protections.
In this paper, we introduce a reconstruction attack targeting
state-of-the-art $\varepsilon$-DP random forests. By leveraging a constraint
programming model that incorporates knowledge of the forest's structure and DP
mechanism characteristics, our approach formally reconstructs the most likely
dataset that could have produced a given forest.
Through extensive computational experiments, we examine the interplay between
model utility, privacy guarantees, and reconstruction accuracy across various
configurations. Our results reveal that random forests trained with meaningful
DP guarantees can still leak substantial portions of their training data.
Specifically, while DP reduces the success of reconstruction attacks, the only
forests fully robust to our attack exhibit predictive performance no better
than a constant classifier. Building on these insights, we provide practical
recommendations for the construction of DP random forests that are more
resilient to reconstruction attacks and maintain non-trivial predictive
performance.
|
2502.05309
|
Learning the Geometric Mechanics of Robot Motion Using Gaussian Mixtures
|
cs.RO
|
Data-driven models of robot motion constructed using principles from
Geometric Mechanics have been shown to produce useful predictions of robot
motion for a variety of robots. For robots with a useful number of DoF, these
geometric mechanics models can only be constructed in the neighborhood of a
gait. Here we show how Gaussian Mixture Models (GMM) can be used as a form of
manifold learning that learns the structure of the Geometric Mechanics
"motility map" and demonstrate: [i] a sizable improvement in prediction quality
when compared to the previously published methods; [ii] a method that can be
applied to any motion dataset and not only periodic gait data; [iii] a way to
pre-process the data-set to facilitate extrapolation in places where the
motility map is known to be linear. Our results can be applied anywhere a
data-driven geometric motion model might be useful.
|
2502.05310
|
Oracular Programming: A Modular Foundation for Building LLM-Enabled
Software
|
cs.PL cs.AI
|
Large Language Models have proved surprisingly effective at solving a wide
range of tasks from just a handful of examples. However, their lack of
reliability and modularity limits their capacity to tackle large problems that
require many steps of reasoning. In response, researchers have proposed
advanced pipelines that leverage domain-specific knowledge to chain smaller
prompts, provide intermediate feedback and improve performance through search.
However, the current complexity of writing, tuning, maintaining and improving
such pipelines has limited their sophistication. We propose oracular
programming, a foundational paradigm for building LLM-enabled applications that
lets domain experts express high-level problem-solving strategies as programs
with unresolved choice points. These choice points are resolved at runtime by
LLMs, which generalize from user-provided examples of correct and incorrect
decisions. An oracular program is composed of three orthogonal components: a
strategy that consists in a nondeterministic program with choice points that
can be reified into a search tree, a policy that specifies how to navigate this
tree with the help of LLM oracles, and a set of demonstrations that describe
successful and unsuccessful search tree navigation scenarios across diverse
problem instances. Each component is expressed in a dedicated programming
language and can be independently improved or substituted. We address the key
programming language design challenges of modularly composing oracular programs
and enforcing consistency between their components as they evolve.
|
2502.05311
|
ParquetDB: A Lightweight Python Parquet-Based Database
|
cs.DB physics.data-an
|
Traditional data storage formats and databases often introduce complexities
and inefficiencies that hinder rapid iteration and adaptability. To address
these challenges, we introduce ParquetDB, a Python-based database framework
that leverages the Parquet file format's optimized columnar storage. ParquetDB
offers efficient serialization and deserialization, native support for complex
and nested data types, reduced dependency on indexing through predicate
pushdown filtering, and enhanced portability due to its file-based storage
system. Benchmarks show that ParquetDB outperforms traditional databases like
SQLite and MongoDB in managing large volumes of data, especially when using
data formats compatible with PyArrow. We validate ParquetDB's practical utility
by applying it to the Alexandria 3D Materials Database, efficiently handling
approximately 4.8 million complex and nested records. By addressing the
inherent limitations of existing data storage systems and continuously evolving
to meet future demands, ParquetDB has the potential to significantly streamline
data management processes and accelerate research development in data-driven
fields.
|
2502.05312
|
Towards the Development of Balanced Synthetic Data for Correcting
Grammatical Errors in Arabic: An Approach Based on Error Tagging Model and
Synthetic Data Generating Model
|
cs.CL cs.AI
|
Synthetic data generation is widely recognized as a way to enhance the
quality of neural grammatical error correction (GEC) systems. However, current
approaches often lack diversity or are too simplistic to generate the wide
range of grammatical errors made by humans, especially for low-resource
languages such as Arabic. In this paper, we develop an error tagging model
and a synthetic data generation model to create a large synthetic dataset in
Arabic for grammatical error correction. In the error tagging model,
the correct sentence is categorized into multiple error types by using the
DeBERTav3 model. The Arabic Error Type Annotation tool (ARETA) is used to
guide the multi-label classification task, in which each sentence is
classified into 26 error tags. The synthetic data generation model is a
back-translation-based model built on AraT5: it generates an incorrect
sentence by prepending the error tags produced by the error tagging model to
the correct sentence. In the QALB-14 and QALB-15 Test
sets, the error tagging model achieved 94.42% F1, which is state-of-the-art in
identifying error tags in clean sentences. After training on our synthetic
data for grammatical error correction, we achieved a new state-of-the-art
F1-score of 79.36% on the QALB-14 Test set. We generate 30,219,310
synthetic sentence pairs by using a synthetic data generation model.
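The two-stage pipeline can be sketched as follows: a multi-label tagger fires an independent sigmoid per error tag, and the predicted tags are prepended to the correct sentence to form the input of the back-translation-style generator. The tag names, logits, and 0.5 threshold here are illustrative assumptions, not the paper's exact configuration.

```python
# Multi-label error tagging + tag-prefixed generator input (illustrative).
import math

ERROR_TAGS = [f"<ERR_{i:02d}>" for i in range(26)]  # 26 ARETA-style tags

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict_tags(logits, threshold=0.5):
    """Independent sigmoid per tag: multi-label, not softmax."""
    return [tag for tag, z in zip(ERROR_TAGS, logits) if sigmoid(z) >= threshold]

def build_generator_input(correct_sentence, logits):
    """Prepend predicted error tags to the correct sentence."""
    return " ".join(predict_tags(logits) + [correct_sentence])

# Toy logits: only tags 3 and 17 fire.
logits = [-4.0] * 26
logits[3], logits[17] = 2.0, 3.5
print(build_generator_input("ذهب الولد إلى المدرسة", logits))
# → "<ERR_03> <ERR_17> ذهب الولد إلى المدرسة"
```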
|
2502.05315
|
AI/ML-Based Automatic Modulation Recognition: Recent Trends and Future
Possibilities
|
cs.LG
|
We present a review of high-performance automatic modulation recognition
(AMR) models proposed in the literature to classify various Radio Frequency
(RF) modulation schemes. We replicated these models and compared their
performance in terms of accuracy across a range of signal-to-noise ratios. To
ensure a fair comparison, we used the same dataset (RadioML-2016A), the same
hardware, and a consistent definition of test accuracy as the evaluation
metric, thereby providing a benchmark for future AMR studies. The
hyperparameters were selected based on the authors' suggestions in the
associated references to achieve results as close as possible to the originals.
The replicated models are publicly accessible for further analysis of AMR
models. We also present the test accuracies of the selected models versus their
number of parameters, indicating their complexities. Building on this
comparative analysis, we identify strategies to enhance these models'
performance. Finally, we present potential opportunities for improvement,
whether through novel architectures, data processing techniques, or training
strategies, to further advance the capabilities of AMR models.
|
2502.05318
|
Diagonal Symmetrization of Neural Network Solvers for the Many-Electron
Schr\"odinger Equation
|
cs.LG cond-mat.mtrl-sci
|
Incorporating group symmetries into neural networks has been a cornerstone of
success in many AI-for-science applications. Diagonal groups of isometries,
which describe the invariance under a simultaneous movement of multiple
objects, arise naturally in many-body quantum problems. Despite their
importance, diagonal groups have received relatively little attention, as they
lack a natural choice of invariant maps except in special cases. We study
different ways of incorporating diagonal invariance in neural network ans\"atze
trained via variational Monte Carlo methods, and consider specifically data
augmentation, group averaging and canonicalization. We show that, contrary to
standard ML setups, in-training symmetrization destabilizes training and can
lead to worse performance. Our theoretical and numerical results indicate that
this unexpected behavior may arise from a unique computational-statistical
tradeoff not found in standard ML analyses of symmetrization. Meanwhile, we
demonstrate that post hoc averaging is less sensitive to such tradeoffs and
emerges as a simple, flexible and effective method for improving neural network
solvers.
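Post hoc group averaging over a diagonal group can be sketched in a few lines: average a trained ansatz over all simultaneous relabelings of the particles, which makes the result invariant by construction. The toy non-symmetric "wavefunction" and 2-particle, 1-D setup are illustrative only.

```python
# Post hoc averaging over the diagonal permutation action (illustrative).
from itertools import permutations

def psi(coords):
    # Deliberately non-symmetric toy function of particle coordinates.
    return coords[0] + 2.0 * coords[1] ** 2

def symmetrized(psi, coords):
    """Average psi over all simultaneous relabelings of the particles."""
    perms = list(permutations(range(len(coords))))
    return sum(psi([coords[i] for i in p]) for p in perms) / len(perms)

x = [0.5, -1.0]
x_swapped = [-1.0, 0.5]
print(symmetrized(psi, x), symmetrized(psi, x_swapped))  # equal by construction
```

Because the averaging happens only at evaluation time, it leaves training untouched, which is consistent with the observation above that in-training symmetrization can destabilize optimization.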
|
2502.05320
|
Towards Fine-grained Renal Vasculature Segmentation: Full-Scale
Hierarchical Learning with FH-Seg
|
cs.CV
|
Accurate fine-grained segmentation of the renal vasculature is critical for
nephrological analysis, yet it faces challenges due to diverse and
insufficiently annotated images. Existing methods struggle to accurately
segment intricate regions of the renal vasculature, such as the inner and outer
walls, arteries and lesions. In this paper, we introduce FH-Seg, a Full-scale
Hierarchical Learning Framework designed for comprehensive segmentation of the
renal vasculature. Specifically, FH-Seg employs full-scale skip connections
that merge detailed anatomical information with contextual semantics across
scales, effectively bridging the gap between structural and pathological
contexts. Additionally, we implement learnable hierarchical soft attention
gates to adaptively reduce interference from non-core information, enhancing
the focus on critical vascular features. To advance research on renal pathology
segmentation, we also developed a Large Renal Vasculature (LRV) dataset, which
contains 16,212 fine-grained annotated images of 5,600 renal arteries.
Extensive experiments on the LRV dataset demonstrate FH-Seg's superior
accuracies (71.23% Dice, 73.06% F1), outperforming Omni-Seg by 2.67 and 2.13
percentage points respectively. Code is available at:
https://github.com/hrlblab/FH-seg.
|
2502.05321
|
Using Federated Machine Learning in Predictive Maintenance of Jet
Engines
|
cs.LG
|
The goal of this paper is to predict the Remaining Useful Life (RUL) of
turbine jet engines using a federated machine learning framework. Federated
Learning enables multiple edge devices/nodes or servers to collaboratively
train a shared model without sharing sensitive data, thus preserving data
privacy and security. By implementing a nonlinear model, the system aims to
capture complex relationships and patterns in the engine data to enhance the
accuracy of RUL predictions. This approach leverages decentralized computation,
allowing models to be trained locally at each device before aggregating the
learned weights at a central server. By predicting the RUL of jet engines
accurately, maintenance schedules can be optimized, downtime reduced, and
operational efficiency improved, ultimately leading to cost savings and
enhanced performance in the aviation industry. Computational results are
provided by using the C-MAPSS dataset which is publicly available on the NASA
website and is a valuable resource for studying and analyzing engine
degradation behaviors in various operational scenarios.
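The aggregation step described above can be sketched as a FedAvg-style weighted average of locally trained parameters; the weights here are plain lists of floats standing in for model parameters, and the node data volumes are made up.

```python
# Minimal federated aggregation sketch: each edge node trains locally, then a
# central server averages the learned weights, weighted by local sample counts.

def federated_average(local_weights, sample_counts):
    """Weighted average of per-node parameter vectors."""
    total = sum(sample_counts)
    dim = len(local_weights[0])
    return [
        sum(w[i] * n for w, n in zip(local_weights, sample_counts)) / total
        for i in range(dim)
    ]

# Three nodes (e.g. maintenance sites) with different local data volumes.
node_weights = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
node_samples = [100, 300, 600]
print(federated_average(node_weights, node_samples))  # ≈ [0.5, 0.7]
```

Only the parameter vectors leave each node; the raw engine telemetry stays local, which is the privacy property the abstract relies on.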
|
2502.05325
|
From Counterfactuals to Trees: Competitive Analysis of Model Extraction
Attacks
|
cs.LG cs.CR
|
The advent of Machine Learning as a Service (MLaaS) has heightened the
trade-off between model explainability and security. In particular,
explainability techniques, such as counterfactual explanations, inadvertently
increase the risk of model extraction attacks, enabling unauthorized
replication of proprietary models. In this paper, we formalize and characterize
the risks and inherent complexity of model reconstruction, focusing on the
"oracle" queries required for faithfully inferring the underlying prediction
function. We present the first formal analysis of model extraction attacks
through the lens of competitive analysis, establishing a foundational framework
to evaluate their efficiency. Focusing on models based on additive decision
trees (e.g., decision trees, gradient boosting, and random forests), we
introduce novel reconstruction algorithms that achieve provably perfect
fidelity while demonstrating strong anytime performance. Our framework provides
theoretical bounds on the query complexity of extracting tree-based models,
offering new insights into the security vulnerabilities of their deployment.
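The query-complexity flavor of such attacks can be illustrated on the simplest case: recovering the split threshold of a one-feature decision stump to a target precision using only oracle (label) queries, via binary search. The stump, bounds, and precision here are toy assumptions; the paper's tree-ensemble extraction generalizes this idea.

```python
# Query-based extraction of a 1-D decision stump threshold (illustrative).

def make_oracle(threshold):
    return lambda x: int(x >= threshold)  # MLaaS-style prediction endpoint

def extract_threshold(oracle, lo=0.0, hi=1.0, eps=1e-6):
    """Binary search: each oracle query halves the feasible interval."""
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        queries += 1
        if oracle(mid):
            hi = mid  # threshold is at or below mid
        else:
            lo = mid
    return (lo + hi) / 2.0, queries

estimate, n_queries = extract_threshold(make_oracle(0.3137))
print(round(estimate, 4), n_queries)  # ≈ 0.3137 after 20 queries
```

The logarithmic query count is the competitive-analysis quantity of interest: an attacker pays one query per bit of threshold precision.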
|
2502.05330
|
Multi-Class Segmentation of Aortic Branches and Zones in Computed
Tomography Angiography: The AortaSeg24 Challenge
|
eess.IV cs.AI cs.CV cs.LG
|
Multi-class segmentation of the aorta in computed tomography angiography
(CTA) scans is essential for diagnosing and planning complex endovascular
treatments for patients with aortic dissections. However, existing methods
reduce aortic segmentation to a binary problem, limiting their ability to
measure diameters across different branches and zones. Furthermore, no
open-source dataset is currently available to support the development of
multi-class aortic segmentation methods. To address this gap, we organized the
AortaSeg24 MICCAI Challenge, introducing the first dataset of 100 CTA volumes
annotated for 23 clinically relevant aortic branches and zones. This dataset
was designed to facilitate both model development and validation. The challenge
attracted 121 teams worldwide, with participants leveraging state-of-the-art
frameworks such as nnU-Net and exploring novel techniques, including cascaded
models, data augmentation strategies, and custom loss functions. We evaluated
the submitted algorithms using the Dice Similarity Coefficient (DSC) and
Normalized Surface Distance (NSD), highlighting the approaches adopted by the
top five performing teams. This paper presents the challenge design, dataset
details, evaluation metrics, and an in-depth analysis of the top-performing
algorithms. The annotated dataset, evaluation code, and implementations of the
leading methods are publicly available to support further research. All
resources can be accessed at https://aortaseg24.grand-challenge.org.
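The per-class Dice Similarity Coefficient used to score submissions can be sketched directly; the label arrays below are toy data, not challenge volumes.

```python
# Per-class Dice Similarity Coefficient (DSC) for multi-class segmentation.

def dice(pred, gt, cls):
    """DSC for one class: 2|P ∩ G| / (|P| + |G|)."""
    p = [v == cls for v in pred]
    g = [v == cls for v in gt]
    inter = sum(a and b for a, b in zip(p, g))
    denom = sum(p) + sum(g)
    return 2.0 * inter / denom if denom else 1.0  # empty class scores 1

pred = [0, 1, 1, 2, 2, 2]  # predicted label per voxel
gt   = [0, 1, 2, 2, 2, 2]  # ground-truth label per voxel
print([round(dice(pred, gt, c), 3) for c in (1, 2)])  # → [0.667, 0.857]
```

In the challenge, this per-class score is computed for each of the 23 branches and zones and then aggregated, alongside the Normalized Surface Distance.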
|
2502.05331
|
Fine-Tuned LLMs are "Time Capsules" for Tracking Societal Bias Through
Books
|
cs.CL
|
Books, while often rich in cultural insights, can also mirror societal biases
of their eras - biases that Large Language Models (LLMs) may learn and
perpetuate during training. We introduce a novel method to trace and quantify
these biases using fine-tuned LLMs. We develop BookPAGE, a corpus comprising
593 fictional books across seven decades (1950-2019), to track bias evolution.
By fine-tuning LLMs on books from each decade and using targeted prompts, we
examine shifts in biases related to gender, sexual orientation, race, and
religion. Our findings indicate that LLMs trained on decade-specific books
manifest biases reflective of their times, with both gradual trends and notable
shifts. For example, model responses showed a progressive increase in the
portrayal of women in leadership roles (from 8% to 22%) from the 1950s to
2010s, with a significant uptick in the 1990s (from 4% to 12%), possibly
aligning with third-wave feminism. Same-sex relationship references increased
markedly from the 1980s to 2000s (from 0% to 10%), mirroring growing LGBTQ+
visibility. Concerningly, negative portrayals of Islam rose sharply in the
2000s (26% to 38%), likely reflecting post-9/11 sentiments. Importantly, we
demonstrate that these biases stem mainly from the books' content and not the
models' architecture or initial training. Our study offers a new perspective on
societal bias trends by bridging AI, literary studies, and social science
research.
|
2502.05332
|
Removing Neural Signal Artifacts with Autoencoder-Targeted Adversarial
Transformers (AT-AT)
|
cs.LG
|
Electromyogenic (EMG) noise is a major contamination source in EEG data that
can impede accurate analysis of brain-specific neural activity. Recent
literature on EMG artifact removal has moved beyond traditional linear
algorithms in favor of machine learning-based systems. However, existing deep
learning-based filtration methods often have large compute footprints and
prohibitively long training times. In this study, we present a new machine
learning-based system for filtering EMG interference from EEG data using an
autoencoder-targeted adversarial transformer (AT-AT). By leveraging the
lightweight expressivity of an autoencoder to determine optimal time-series
transformer application sites, our AT-AT architecture achieves a >90% model
size reduction compared to published artifact removal models. The addition of
adversarial training ensures that filtered signals adhere to the fundamental
characteristics of EEG data. We trained AT-AT using published neural data from
67 subjects and found that the system was able to achieve comparable test
performance to larger models; AT-AT posted a mean reconstructive correlation
coefficient above 0.95 at an initial signal-to-noise ratio (SNR) of 2 dB and
0.70 at -7 dB SNR. Further research generalizing these results to broader
sample sizes beyond these isolated test cases will be crucial; while outside
the scope of this study, we also include results from a real-world deployment
of AT-AT in the Appendix.
|
2502.05333
|
A Tutorial On Intersectionality in Fair Rankings
|
cs.CY cs.IR cs.LG
|
We address the critical issue of biased algorithms and unfair rankings, which
have permeated various sectors, including search engines, recommendation
systems, and workforce management. These biases can lead to discriminatory
outcomes in a data-driven world, especially against marginalized and
underrepresented groups. Efforts towards responsible data science and
responsible artificial intelligence aim to mitigate these biases and promote
fairness, diversity, and transparency. However, most fairness-aware ranking
methods singularly focus on protected attributes such as race, gender, or
socio-economic status, neglecting the intersectionality of these attributes,
i.e., the interplay between multiple social identities. Understanding
intersectionality is crucial to ensure that existing inequalities are not
preserved by fair rankings. We offer a description of the main ways to
incorporate intersectionality in fair ranking systems through practical
examples and provide a comparative overview of existing literature and a
synoptic table summarizing the various methodologies. Our analysis highlights
the need for intersectionality to attain fairness, while also emphasizing that
fairness, alone, does not necessarily imply intersectionality.
|
2502.05334
|
Geometric Machine Learning on EEG Signals
|
cs.LG
|
Brain-computer interfaces (BCIs) offer transformative potential, but decoding
neural signals presents significant challenges. The core premise of this paper
is built around demonstrating methods to elucidate the underlying
low-dimensional geometric structure present in high-dimensional brainwave data
in order to assist in downstream BCI-related neural classification tasks. We
demonstrate two pipelines related to electroencephalography (EEG) signal
processing: (1) a preliminary pipeline removing noise from individual EEG
channels, and (2) a downstream manifold learning pipeline uncovering geometric
structure across networks of EEG channels. We conduct preliminary validation
using two EEG datasets and situate our demonstration in the context of the
BCI-relevant imagined digit decoding problem. Our preliminary pipeline uses an
attention-based EEG filtration network to extract clean signal from individual
EEG channels. Our primary pipeline uses a fast Fourier transform, a Laplacian
eigenmap, a discrete analog of Ricci flow via Ollivier's notion of Ricci
curvature, and a graph convolutional network to perform dimensionality
reduction on high-dimensional multi-channel EEG data in order to enable
regularizable downstream classification. Our system achieves competitive
performance with existing signal processing and classification benchmarks; we
demonstrate a mean test correlation coefficient of >0.95 at 2 dB on
semi-synthetic neural denoising and a downstream EEG-based classification
accuracy of 0.97 on distinguishing digit- versus non-digit thoughts. Results
are preliminary and our geometric machine learning pipeline should be validated
by more extensive follow-up studies; generalizing these results to larger
inter-subject sample sizes, different hardware systems, and broader use cases
will be crucial.
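The Laplacian-eigenmap step of the pipeline above can be sketched as follows: build a channel affinity graph, form the graph Laplacian, and embed channels using the smallest nontrivial eigenvectors. The 4-channel affinity matrix is made up for illustration.

```python
# Laplacian eigenmap over a toy EEG channel graph (illustrative).
import numpy as np

# Symmetric affinity matrix between EEG channels (e.g. from spectral features).
W = np.array([
    [0.0, 0.9, 0.1, 0.0],
    [0.9, 0.0, 0.2, 0.1],
    [0.1, 0.2, 0.0, 0.8],
    [0.0, 0.1, 0.8, 0.0],
])

D = np.diag(W.sum(axis=1))
L = D - W                        # unnormalized graph Laplacian
evals, evecs = np.linalg.eigh(L) # eigenvalues in ascending order

# Skip the trivial constant eigenvector; keep the next 2 as a 2-D embedding.
embedding = evecs[:, 1:3]
print(evals.round(3))  # first eigenvalue ≈ 0 for a connected graph
```

The low-dimensional coordinates in `embedding` are what downstream components (curvature estimates, the graph convolutional network) would consume.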
|
2502.05335
|
Towards Foundational Models for Dynamical System Reconstruction:
Hierarchical Meta-Learning via Mixture of Experts
|
cs.LG
|
As foundational models reshape scientific discovery, a bottleneck persists in
dynamical system reconstruction (DSR): the ability to learn across system
hierarchies. Many meta-learning approaches have been applied successfully to
single systems, but falter when confronted with sparse, loosely related
datasets requiring multiple hierarchies to be learned. Mixture of Experts (MoE)
offers a natural paradigm to address these challenges. Despite their potential,
we demonstrate that naive MoEs are inadequate for the nuanced demands of
hierarchical DSR, largely due to their gradient descent-based gating update
mechanism which leads to slow updates and conflicted routing during training.
To overcome this limitation, we introduce MixER: Mixture of Expert
Reconstructors, a novel sparse top-1 MoE layer employing a custom gating update
algorithm based on $K$-means and least squares. Extensive experiments validate
MixER's capabilities, demonstrating efficient training and scalability to
systems of up to ten parametric ordinary differential equations. However, our
layer underperforms state-of-the-art meta-learners in high-data regimes,
particularly when each expert is constrained to process only a fraction of a
dataset composed of highly related data points. Further analysis with synthetic
and neuroscientific time series suggests that the quality of the contextual
representations generated by MixER is closely linked to the presence of
hierarchical structure in the data.
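The gating idea can be sketched in a few lines: route each context vector to exactly one expert via its nearest K-means centroid (hard top-1 routing) rather than a gradient-trained softmax gate. One Lloyd update step is shown; the least-squares expert fit from the paper is omitted, and the contexts and centroids are toy values.

```python
# K-means-based top-1 gating sketch (illustrative, not the MixER code).
import numpy as np

def assign(contexts, centroids):
    """Hard routing: index of the nearest centroid per context vector."""
    d = np.linalg.norm(contexts[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

def update_centroids(contexts, labels, k):
    """One Lloyd step: recompute each centroid as its cluster mean."""
    return np.stack([contexts[labels == j].mean(axis=0) for j in range(k)])

contexts = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.8, 5.0]])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])

labels = assign(contexts, centroids)
centroids = update_centroids(contexts, labels, k=2)
print(labels)  # → [0 0 1 1]
```

Because the routing is recomputed from cluster geometry rather than learned by gradient descent, it sidesteps the slow, conflicted gate updates the abstract attributes to naive MoEs.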
|
2502.05343
|
Towards Wearable Interfaces for Robotic Caregiving
|
cs.RO
|
Physically assistive robots in home environments can enhance the autonomy of
individuals with impairments, allowing them to regain the ability to conduct
self-care and household tasks. Individuals with physical limitations may find
existing interfaces challenging to use, highlighting the need for novel
interfaces that can effectively support them. In this work, we present insights
on the design and evaluation of an active control wearable interface named HAT,
Head-Worn Assistive Teleoperation. To tackle challenges in user workload while
using such interfaces, we propose and evaluate a shared control algorithm named
Driver Assistance. Finally, we introduce the concept of passive control, in
which wearable interfaces detect implicit human signals to inform and guide
robotic actions during caregiving tasks, with the aim of reducing user workload
while potentially preserving the feeling of control.
|
2502.05344
|
RAG-Verus: Repository-Level Program Verification with LLMs using
Retrieval Augmented Generation
|
cs.SE cs.AI
|
Scaling automated formal verification to real-world projects requires
resolving cross-module dependencies and global contexts, which are challenges
overlooked by existing function-centric methods. We introduce RagVerus, a
framework that synergizes retrieval-augmented generation with context-aware
prompting to automate proof synthesis for multi-module repositories, achieving
a 27% relative improvement on our novel RepoVBench benchmark -- the first
repository-level dataset for Verus with 383 proof completion tasks. RagVerus
triples proof pass rates on existing benchmarks under constrained language
model budgets, demonstrating scalable and sample-efficient verification.
|
2502.05345
|
Estimating Voltage Drop: Models, Features and Data Representation
Towards a Neural Surrogate
|
cs.AR cs.AI
|
Accurate estimation of voltage drop (IR drop) in modern Application-Specific
Integrated Circuits (ASICs) is highly time and resource demanding, due to the
growing complexity and the transistor density in recent technology nodes. To
mitigate this challenge, we investigate how Machine Learning (ML) techniques,
including Extreme Gradient Boosting (XGBoost), Convolutional Neural Network
(CNN), and Graph Neural Network (GNN) can aid in reducing the computational
effort and implicitly the time required to estimate the IR drop in Integrated
Circuits (ICs). Traditional methods, including commercial tools, require
considerable time to produce accurate approximations, especially for
complicated designs with numerous transistors. ML algorithms, on the other
hand, are explored as an alternative that offers precise IR drop estimation
in considerably less time. Our approach leverages ASICs' electrical, timing,
and physical features to train ML models, ensuring adaptability
across diverse designs with minimal adjustments. Experimental results
underscore the superiority of ML models over commercial tools, greatly
enhancing prediction speed. Particularly, GNNs exhibit promising performance
with minimal prediction errors in voltage drop estimation. The incorporation of
GNNs marks a groundbreaking advancement in accurate IR drop prediction. This
study illustrates the effectiveness of ML algorithms in precisely estimating IR
drop and optimizing ASIC sign-off. Utilizing ML models leads to expedited
predictions, reducing calculation time and improving energy efficiency, thereby
reducing environmental impact through optimized power circuits.
|
2502.05346
|
Probabilistic Subspace Manifolds for Contextual Inference in Large
Language Models
|
cs.CL
|
Representing token embeddings as probability distributions over learned
manifolds allows for more flexible contextual inference, reducing
representational rigidity while enhancing semantic granularity. Comparative
evaluations demonstrate that probabilistic embeddings improve neighborhood
consistency and decrease redundancy, ensuring that token relationships remain
more structurally coherent across fine-tuning iterations. The integration of
probabilistic subspaces within attention mechanisms facilitates more adaptive
contextual weighting, enabling models to capture latent dependencies that would
otherwise be obscured in conventional embeddings. Experimental results
highlight increased robustness against adversarial modifications, with
probabilistic embeddings preserving contextual integrity even under
perturbation-based evaluation scenarios. Performance assessments indicate that
probabilistic representations achieve greater adaptability in domain-specific
applications, mitigating the need for extensive retraining when shifting across
linguistic domains. Computational trade-offs remain within operationally
feasible limits, with marginal increases in inference latency balanced against
the benefits of enhanced representation stability and contextual
expressiveness. The capacity to encode structured uncertainty provides
advantages in generative modeling tasks, particularly where maintaining
coherence across extended sequences requires a representation framework capable
of handling ambiguous or context-dependent linguistic constructs.
|
2502.05349
|
Contextual Scenario Generation for Two-Stage Stochastic Programming
|
math.OC cs.LG
|
Two-stage stochastic programs (2SPs) are important tools for making decisions
under uncertainty. Decision-makers use contextual information to generate a set
of scenarios to represent the true conditional distribution. However, the
number of scenarios required is a barrier to implementing 2SPs, motivating the
problem of generating a small set of surrogate scenarios that yield
high-quality decisions when they represent uncertainty. Current scenario
generation approaches do not leverage contextual information or do not address
computational concerns. In response, we propose contextual scenario generation
(CSG) to learn a mapping between the context and a set of surrogate scenarios
of user-specified size. First, we propose a distributional approach that learns
the mapping by minimizing a distributional distance between the predicted
surrogate scenarios and the true contextual distribution. Second, we propose a
task-based approach that aims to produce surrogate scenarios that yield
high-quality decisions. The task-based approach uses neural architectures to
approximate the downstream objective and leverages the approximation to search
for the mapping. The proposed approaches apply to various problem structures,
requiring only efficient solving of the associated subproblems and of the
2SPs defined on the reduced scenario sets. Numerical experiments demonstrating
the effectiveness of the proposed methods are presented.
|
2502.05351
|
Deep Generative model that uses physical quantities to generate and
retrieve solar magnetic active regions
|
astro-ph.SR cs.LG stat.ML
|
Deep generative models have shown immense potential in generating unseen data
that has properties of real data. These models learn complex data-generating
distributions starting from a smaller set of latent dimensions. However,
generative models have encountered great skepticism in scientific domains due
to the disconnection between generative latent vectors and scientifically
relevant quantities. In this study, we integrate three types of machine
learning models to generate solar magnetic patches in a physically
interpretable manner and use those as a query to find matching patches in real
observations. We use the magnetic field measurements from Space-weather HMI
Active Region Patches (SHARPs) to train a Generative Adversarial Network (GAN).
We connect the physical properties of GAN-generated images with their latent
vectors to train Support Vector Machines (SVMs) that do mapping between
physical and latent spaces. These produce directions in the GAN latent space
along which known physical parameters of the SHARPs change. We train a
self-supervised learner (SSL) to make queries with generated images and find
matches from real data. We find that the GAN-SVM combination enables users to
produce high-quality patches that change smoothly only with a prescribed
physical quantity, making generative models physically interpretable. We also
show that GAN outputs can be used to retrieve real data that shares the same
physical properties as the generated query. This elevates Generative Artificial
Intelligence (AI) from a means-to-produce artificial data to a novel tool for
scientific data interrogation, supporting its applicability beyond the domain
of heliophysics.
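The latent-direction idea can be sketched as follows: learn a linear map between a physical parameter and GAN latent vectors; its weight vector gives the latent direction along which that parameter changes. Ordinary least squares stands in for the paper's SVMs here, and the latent vectors and "flux" parameter are synthetic.

```python
# Recovering a physically meaningful latent direction (illustrative).
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 8))                    # toy GAN latent vectors
true_dir = np.array([1.0, -1.0, 0, 0, 0, 0, 0, 0]) / np.sqrt(2)
flux = z @ true_dir                              # synthetic physical parameter

w, *_ = np.linalg.lstsq(z, flux, rcond=None)     # linear latent-physical link
direction = w / np.linalg.norm(w)

# Traversing the latent space along `direction` changes the target parameter;
# here the fit recovers the generating direction (numerically) exactly.
print(round(float(direction @ true_dir), 6))     # → 1.0
```

Walking a generated patch's latent vector along such a direction produces the smooth single-parameter variation the abstract describes.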
|
2502.05352
|
ITBench: Evaluating AI Agents across Diverse Real-World IT Automation
Tasks
|
cs.AI cs.DC cs.MA
|
Realizing the vision of using AI agents to automate critical IT tasks depends
on the ability to measure and understand effectiveness of proposed solutions.
We introduce ITBench, a framework that offers a systematic methodology for
benchmarking AI agents to address real-world IT automation tasks. Our initial
release targets three key areas: Site Reliability Engineering (SRE), Compliance
and Security Operations (CISO), and Financial Operations (FinOps). The design
enables AI researchers to understand the challenges and opportunities of AI
agents for IT automation with push-button workflows and interpretable metrics.
ITBench includes an initial set of 94 real-world scenarios, which can be easily
extended by community contributions. Our results show that agents powered by
state-of-the-art models resolve only 13.8% of SRE scenarios, 25.2% of CISO
scenarios, and 0% of FinOps scenarios. We expect ITBench to be a key enabler of
AI-driven IT automation that is correct, safe, and fast.
|
2502.05360
|
Curse of Dimensionality in Neural Network Optimization
|
cs.LG math.OC stat.ML
|
The curse of dimensionality in neural network optimization under the
mean-field regime is studied. It is demonstrated that when a shallow neural
network with a Lipschitz continuous activation function is trained using either
empirical or population risk to approximate a target function that is $r$ times
continuously differentiable on $[0,1]^d$, the population risk may not decay at
a rate faster than $t^{-\frac{4r}{d-2r}}$, where $t$ is an analog of the total
number of optimization iterations. This result highlights the presence of the
curse of dimensionality in the optimization computation required to achieve a
desired accuracy. Instead of analyzing parameter evolution directly, the
training dynamics are examined through the evolution of the parameter
distribution under the 2-Wasserstein gradient flow. Furthermore, it is
established that the curse of dimensionality persists when a locally Lipschitz
continuous activation function is employed, where the Lipschitz constant in
$[-x,x]$ is bounded by $O(x^\delta)$ for any $x \in \mathbb{R}$. In this
scenario, the population risk is shown to decay at a rate no faster than
$t^{-\frac{(4+2\delta)r}{d-2r}}$. To the best of our knowledge, this work is
the first to analyze the impact of function smoothness on the curse of
dimensionality in neural network optimization theory.
|
2502.05364
|
Hypencoder: Hypernetworks for Information Retrieval
|
cs.IR cs.LG
|
The vast majority of retrieval models depend on vector inner products to
produce a relevance score between a query and a document. This naturally limits
the expressiveness of the relevance score that can be employed. We propose a
new paradigm: instead of producing a vector to represent the query, we produce
a small neural network which acts as a learned relevance function. This small
neural network takes in a representation of the document (in this paper, a
single vector) and produces a scalar relevance score. To produce this small
neural network we use a hypernetwork, a network that produces the weights of
other networks, as our query encoder, which we call a Hypencoder. Experiments
on in-domain search tasks show that Hypencoder significantly outperforms
strong dense retrieval models and achieves higher metrics than reranking
models and models an order of magnitude larger. Hypencoder is also shown to
generalize well to out-of-domain search tasks. To assess the extent of
Hypencoder's capabilities, we evaluate on a set of hard retrieval tasks
including tip-of-the-tongue retrieval and instruction-following retrieval tasks
and find that the performance gap widens substantially compared to standard
retrieval tasks. Furthermore, to demonstrate the practicality of our method we
implement an approximate search algorithm and show that our model is able to
search 8.8M documents in under 60ms.
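The core mechanism can be sketched conceptually: a hypernetwork maps the query to the weights of a tiny per-query MLP, which then scores each document vector. The shapes, the fixed random "encoder", and the one-hidden-layer scorer are all illustrative assumptions, not the paper's architecture.

```python
# Hypernetwork-as-query-encoder sketch (conceptual, not the Hypencoder model).
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 4                     # document dim, hidden units of the tiny MLP
n_weights = D * H + H            # hidden matrix + output vector

# Stand-in hypernetwork: a fixed linear map from query embedding to weights.
hyper = rng.normal(scale=0.1, size=(n_weights, D))

def query_to_relevance_fn(q):
    w = hyper @ q                             # hypernetwork output
    W1 = w[: D * H].reshape(H, D)             # weights of the little scorer
    w2 = w[D * H:]
    def relevance(doc):
        return float(w2 @ np.tanh(W1 @ doc))  # scalar relevance score
    return relevance

q = rng.normal(size=D)
score_fn = query_to_relevance_fn(q)           # one small network per query
docs = rng.normal(size=(3, D))
print([round(score_fn(d), 3) for d in docs])  # one scalar per document
```

The key design point is that the relevance function is no longer constrained to be an inner product: each query induces its own nonlinear scorer over document representations.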
|
2502.05365
|
Low Dimensional Koopman Generalized Eigenfunctions Representation: An
Approach to Address Koopman High-Dimensionality Problem
|
eess.SY cs.SY
|
This paper introduces a methodology to achieve a lower-dimensional Koopman
quasi linear representation of nonlinear dynamics using Koopman generalized
eigenfunctions. The methodology is presented for the analytically derived
Koopman formulation of rigid body dynamics but can be generalized to any
data-driven or analytically derived generalized eigenfunction set. The
presented approach aims at achieving a representation for which the number of
Koopman observables matches the number of inputs, leading to an exact
linearization solution instead of resorting to the least-squares approximation
method. The methodology is tested by designing a linear quadratic (LQ) flight
controller of a quadrotor unmanned aerial vehicle (UAV). Hardware-in-the-loop
simulations validate the applicability of this approach to real-time
implementation in the presence of noise and sensor delays.
|
2502.05367
|
Detecting APT Malware Command and Control over HTTP(S) Using Contextual
Summaries
|
cs.CR cs.LG cs.NI
|
Advanced Persistent Threats (APTs) are among the most sophisticated threats
facing critical organizations worldwide. APTs employ specific tactics,
techniques, and procedures (TTPs) which make them difficult to detect in
comparison to frequent and aggressive attacks. In fact, current network
intrusion detection systems struggle to detect APTs communications, allowing
such threats to persist unnoticed on victims' machines for months or even
years. In this paper, we present EarlyCrow, an approach to detect APT malware
command and control over HTTP(S) using contextual summaries.
The design of EarlyCrow is informed by a novel threat model focused on TTPs
present in traffic generated by tools recently used as part of APT campaigns.
The threat model highlights the importance of the context around the malicious
connections, and suggests traffic attributes which help APT detection.
EarlyCrow defines a novel multipurpose network flow format called PairFlow,
which is leveraged to build the contextual summary of a PCAP capture,
representing key behavioral, statistical and protocol information relevant to
APT TTPs. We evaluate the effectiveness of EarlyCrow on unseen APTs, obtaining
a headline macro-average F1-score of 93.02% with an FPR of 0.74%.
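The abstract does not disclose PairFlow's actual fields, but the idea of a contextual summary built from flow records can be sketched with illustrative features. All field names below (src, dst, bytes, duration) are assumptions for demonstration, not the PairFlow format.

```python
# Hedged sketch of building a contextual summary from flow records, in the
# spirit of EarlyCrow's PairFlow: group connections by (host, destination)
# pair and aggregate simple statistical features per pair.
# The real PairFlow fields are not specified in the abstract; these are toys.
from collections import defaultdict

def summarize(flows):
    summary = defaultdict(lambda: {"count": 0, "bytes": 0, "durations": []})
    for f in flows:
        s = summary[(f["src"], f["dst"])]
        s["count"] += 1
        s["bytes"] += f["bytes"]
        s["durations"].append(f["duration"])
    for s in summary.values():
        s["mean_duration"] = sum(s["durations"]) / len(s["durations"])
    return dict(summary)

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 512, "duration": 1.0},
    {"src": "10.0.0.5", "dst": "203.0.113.9", "bytes": 256, "duration": 3.0},
]
ctx = summarize(flows)
```

Per-pair aggregates like these are the kind of "context around the malicious connections" the threat model emphasizes, as opposed to judging each flow in isolation.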
|
2502.05368
|
Otter: Generating Tests from Issues to Validate SWE Patches
|
cs.SE cs.LG
|
While there has been plenty of work on generating tests from existing code,
there has been limited work on generating tests from issues. A correct test
must validate the code patch that resolves the issue. In this work, we focus on
the scenario where the code patch does not exist yet. This approach supports
two major use-cases. First, it supports TDD (test-driven development), the
discipline of "test first, write code later" that has well-documented benefits
for human software engineers. Second, it also validates SWE (software
engineering) agents, which generate code patches for resolving issues. This
paper introduces Otter, an LLM-based solution for generating tests from issues.
Otter augments LLMs with rule-based analysis to check and repair their outputs,
and introduces a novel self-reflective action planning stage. Experiments show
Otter outperforming state-of-the-art systems for generating tests from issues,
in addition to enhancing systems that generate patches from issues. We hope
that Otter helps make developers more productive at resolving issues and leads
to more robust, well-tested code.
|
2502.05369
|
DobLIX: A Dual-Objective Learned Index for Log-Structured Merge Trees
|
cs.DB cs.LG math.OC
|
In this paper, we introduce DobLIX, a dual-objective learned index
specifically designed for Log-Structured Merge (LSM) tree-based key-value
stores. Although traditional learned indexes focus exclusively on optimizing
index lookups, they often overlook the impact of data access from storage,
resulting in performance bottlenecks. DobLIX addresses this by incorporating a
second objective, data access optimization, into the learned index training
process. This dual-objective approach ensures that both index lookup efficiency
and data access costs are minimized, leading to significant improvements in
read performance while maintaining write efficiency in real-world LSM-tree
systems. Additionally, DobLIX features a reinforcement learning agent that
dynamically tunes the system parameters, allowing it to adapt to varying
workloads in real-time. Experimental results using real-world datasets
demonstrate that DobLIX reduces indexing overhead and improves throughput by
1.19 to 2.21 times compared to state-of-the-art methods within RocksDB, a
widely used LSM-tree-based storage engine.
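The general learned-index idea underlying DobLIX can be illustrated with a minimal single-objective example: fit a linear model mapping keys to positions in a sorted array, then correct the prediction with a short local scan. DobLIX's dual-objective training and RL-based tuning are not modeled here; this is only the baseline concept.

```python
# Minimal learned-index sketch: a linear model predicts the position of a key
# in a sorted array, and a local scan fixes any prediction error.
def fit_linear_index(keys):
    n = len(keys)
    mean_k = sum(keys) / n
    mean_p = (n - 1) / 2
    cov = sum((k - mean_k) * (p - mean_p) for p, k in enumerate(keys))
    var = sum((k - mean_k) ** 2 for k in keys)
    slope = cov / var if var else 0.0
    return slope, mean_p - slope * mean_k

def lookup(keys, model, key):
    slope, intercept = model
    guess = min(max(int(round(slope * key + intercept)), 0), len(keys) - 1)
    # Correct the model's prediction with a short local scan.
    while guess > 0 and keys[guess] > key:
        guess -= 1
    while guess < len(keys) - 1 and keys[guess] < key:
        guess += 1
    return guess

keys = [2, 4, 8, 16, 32, 64, 128]
model = fit_linear_index(keys)
pos = lookup(keys, model, 16)  # position 3
```

In an LSM-tree setting, the cost of that corrective scan maps to extra block reads from storage, which is exactly the data-access cost DobLIX adds as a second training objective.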
|
2502.05370
|
fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts
Serving
|
cs.LG cs.AI cs.DC
|
Large Language Models (LLMs) have gained immense success in revolutionizing
various applications, including content generation, search and recommendation,
and AI-assisted operation. To reduce high training costs, Mixture-of-Experts
(MoE) architecture has become a popular backbone for modern LLMs. However,
despite the benefits, serving MoE-based LLMs experiences severe memory
inefficiency due to sparsely activated experts. Recent studies propose to
offload inactive experts from GPU memory to CPU memory to improve the serving
efficiency of MoE models. However, they either incur high inference latency or
high model memory footprints due to coarse-grained designs. To tame the
latency-memory trade-off in MoE serving, we present fMoE, a fine-grained expert
offloading system for MoE serving that achieves low inference latency with
memory efficiency. We design fMoE to extract fine-grained expert selection
patterns from MoE models and semantic hints from input prompts to efficiently
guide expert prefetching, caching, and offloading decisions. fMoE is prototyped
on top of HuggingFace Transformers and deployed on a six-GPU testbed.
Experiments with open-source MoE models and real-world workloads show that fMoE
reduces inference latency by 47% and improves expert hit rate by 36% over
state-of-the-art solutions.
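The caching side of expert offloading can be illustrated with a toy GPU-resident expert cache that evicts least-recently-used experts and tracks the hit rate. fMoE's actual system additionally prefetches using expert-selection patterns and semantic hints; this sketch models none of that and is purely illustrative.

```python
# Toy expert cache for MoE serving: a bounded set of "GPU-resident" experts
# with LRU eviction. A miss stands in for a CPU-to-GPU weight transfer.
from collections import OrderedDict

class ExpertCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()          # expert_id -> "loaded weights"
        self.hits = self.misses = 0

    def fetch(self, expert_id):
        if expert_id in self.cache:
            self.cache.move_to_end(expert_id)   # refresh LRU position
            self.hits += 1
        else:
            self.misses += 1                    # simulated offload penalty
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[expert_id] = f"weights[{expert_id}]"
        return self.cache[expert_id]

cache = ExpertCache(capacity=2)
for e in [0, 1, 0, 2, 0, 1]:
    cache.fetch(e)
```

The reported 36% expert-hit-rate improvement corresponds to raising `hits / (hits + misses)` in a structure like this, by choosing what to prefetch and cache more intelligently than plain LRU.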
|
2502.05371
|
Cumulant Structures of Entanglement Entropy
|
math-ph cs.IT math.IT math.MP quant-ph
|
We present a new method to derive exact cumulant expressions of any order of
von Neumann entropy over Hilbert-Schmidt ensemble. The new method uncovers
hidden cumulant structures that decouple each cumulant in a summation-free
manner into its lower-order joint cumulants involving families of ancillary
statistics. Importantly, the new method is able to avoid the seemingly
inevitable task of simplifying nested summations of increasing difficulty that
prevents existing methods in the literature from obtaining higher-order
cumulants.
|
2502.05372
|
Active Learning of Model Discrepancy with Bayesian Experimental Design
|
cs.LG
|
Digital twins have been actively explored in many engineering applications,
such as manufacturing and autonomous systems. However, model discrepancy is
ubiquitous in most digital twin models and has significant impacts on the
performance of using those models. In recent years, data-driven modeling
techniques have shown promise in characterizing the model discrepancy in
existing models; however, the training data for learning the model discrepancy
is often obtained empirically, and an active approach to gathering informative
data could benefit this learning. On the other hand, Bayesian experimental
design (BED) provides a
systematic approach to gathering the most informative data, but its performance
is often negatively impacted by the model discrepancy. In this work, we build
on sequential BED and propose an efficient approach to iteratively learn the
model discrepancy based on the data from the BED. The performance of the
proposed method is validated by a classical numerical example governed by a
convection-diffusion equation, for which full BED is still feasible. The
proposed method is then further studied in the same numerical example with a
high-dimensional model discrepancy, which serves as a demonstration for the
scenarios where full BED is not practical anymore. An ensemble-based
approximation of information gain is further utilized to assess data
informativeness and to enhance the learning of model discrepancy. The results
show that the proposed method is efficient and robust in the active learning
of high-dimensional model discrepancy, using data suggested by the sequential
BED.
We also demonstrate that the proposed method is compatible with both classical
numerical solvers and modern auto-differentiable solvers.
|
2502.05374
|
Towards LLM Unlearning Resilient to Relearning Attacks: A
Sharpness-Aware Minimization Perspective and Beyond
|
cs.LG cs.CL
|
The LLM unlearning technique has recently been introduced to comply with data
regulations and address the safety and ethical concerns of LLMs by removing the
undesired data-model influence. However, state-of-the-art unlearning methods
face a critical vulnerability: they are susceptible to ``relearning'' the
removed information from a small number of forget data points, known as
relearning attacks. In this paper, we systematically investigate how to make
unlearned models robust against such attacks. For the first time, we establish
a connection between robust unlearning and sharpness-aware minimization (SAM)
through a unified robust optimization framework, in an analogy to adversarial
training designed to defend against adversarial attacks. Our analysis for SAM
reveals that smoothness optimization plays a pivotal role in mitigating
relearning attacks. Thus, we further explore diverse smoothing strategies to
enhance unlearning robustness. Extensive experiments on benchmark datasets,
including WMDP and MUSE, demonstrate that SAM and other smoothness optimization
approaches consistently improve the resistance of LLM unlearning to relearning
attacks. Notably, smoothness-enhanced unlearning also helps defend against
(input-level) jailbreaking attacks, broadening our proposal's impact in
robustifying LLM unlearning. Code is available at
https://github.com/OPTML-Group/Unlearn-Smooth.
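Sharpness-aware minimization itself is a standard two-step procedure: ascend to a nearby "sharp" point within a radius rho, then descend using the gradient taken there. A minimal one-dimensional sketch on a toy quadratic loss (the actual unlearning objectives are not reproduced here) is:

```python
# 1-D sketch of a SAM update: perturb the weight toward higher loss within
# radius rho, then apply the gradient computed at the perturbed point.
def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    g = grad_fn(w)
    eps = rho * (1 if g >= 0 else -1)   # g / |g| in one dimension
    g_sharp = grad_fn(w + eps)          # gradient at the sharpest nearby point
    return w - lr * g_sharp

loss = lambda w: (w - 3.0) ** 2         # toy loss with minimum at w = 3
grad = lambda w: 2.0 * (w - 3.0)

w = 0.0
for _ in range(200):
    w = sam_step(w, grad)               # converges close to the flat minimum
```

The paper's point is that this preference for flat minima, here visible only as a stable update rule, is what makes the unlearned model harder to "relearn" with a few gradient steps on forget data.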
|
2502.05376
|
BCQ: Block Clustered Quantization for 4-bit (W4A4) LLM Inference
|
cs.LG
|
Post-training quantization (PTQ) is a promising approach to reducing the
storage and computational requirements of large language models (LLMs) without
additional training cost. Recent PTQ studies have primarily focused on
quantizing only weights to sub-8-bits while maintaining activations at 8-bits
or higher. Accurate sub-8-bit quantization for both weights and activations
without relying on quantization-aware training remains a significant challenge.
We propose a novel quantization method called block clustered quantization
(BCQ) wherein each operand tensor is decomposed into blocks (a block is a group
of contiguous scalars), blocks are clustered based on their statistics, and a
dedicated optimal quantization codebook is designed for each cluster. As a
specific embodiment of this approach, we propose a PTQ algorithm called
Locally-Optimal BCQ (LO-BCQ) that iterates between the steps of block
clustering and codebook design to greedily minimize the quantization mean
squared error. When weight and activation scalars are encoded to W4A4 format
(with 0.5-bits of overhead for storing scaling factors and codebook selectors),
we advance the current state-of-the-art by demonstrating <1% loss in inference
accuracy across several LLMs and downstream tasks.
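The block-clustering idea can be sketched in miniature: split a tensor into contiguous blocks, cluster blocks by a simple statistic, and fit a dedicated symmetric int4 scale per cluster. LO-BCQ alternates clustering and codebook design and uses optimal codebooks; the single round below with a max-abs statistic and uniform codebooks is only a stand-in for the general scheme.

```python
# Illustrative block clustered quantization: blocks with similar dynamic
# range share a quantization scale, instead of one scale for the whole tensor.
def quantize_bcq(values, block=4, clusters=2, bits=4):
    blocks = [values[i:i + block] for i in range(0, len(values), block)]
    stats = [max(abs(v) for v in b) for b in blocks]
    # Crude 1-D clustering: split block statistics at their midpoint.
    mid = (min(stats) + max(stats)) / 2
    assign = [0 if s <= mid else 1 for s in stats]
    qmax = 2 ** (bits - 1) - 1            # 7 for symmetric int4
    scales = []
    for c in range(clusters):
        members = [s for s, a in zip(stats, assign) if a == c]
        scales.append((max(members) / qmax) if members else 1.0)
    out = []
    for b, a in zip(blocks, assign):
        s = scales[a] or 1.0
        out.extend(max(-qmax - 1, min(qmax, round(v / s))) * s for v in b)
    return out

x = [0.1, -0.2, 0.05, 0.15, 4.0, -3.5, 2.0, -1.0]
xq = quantize_bcq(x)
mse = sum((a - b) ** 2 for a, b in zip(x, xq)) / len(x)
```

Because the small-magnitude block gets its own small scale, its values are not crushed to zero by the large-magnitude block, which is the core benefit clustering buys over per-tensor scaling.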
|
2502.05378
|
NextBestPath: Efficient 3D Mapping of Unseen Environments
|
cs.CV cs.RO
|
This work addresses the problem of active 3D mapping, where an agent must
find an efficient trajectory to exhaustively reconstruct a new scene. Previous
approaches mainly predict the next best view near the agent's location, which
is prone to getting stuck in local areas. Additionally, existing indoor
datasets are insufficient due to limited geometric complexity and inaccurate
ground truth meshes. To overcome these limitations, we introduce AiMDoom, a
novel dataset built with a map generator for the Doom video game, enabling
better benchmarking of active 3D mapping in diverse indoor environments.
Moreover, we
propose a new method we call next-best-path (NBP), which predicts long-term
goals rather than focusing solely on short-sighted views. The model jointly
predicts accumulated surface coverage gains for long-term goals and obstacle
maps, allowing it to efficiently plan optimal paths with a unified model. By
leveraging online data collection, data augmentation and curriculum learning,
NBP significantly outperforms state-of-the-art methods on both the existing
MP3D dataset and our AiMDoom dataset, achieving more efficient mapping in
indoor environments of varying complexity.
|
2502.05383
|
Is attention all you need to solve the correlated electron problem?
|
cond-mat.str-el cond-mat.mes-hall cs.AI
|
The attention mechanism has transformed artificial intelligence research by
its ability to learn relations between objects. In this work, we explore how a
many-body wavefunction ansatz constructed from a large-parameter self-attention
neural network can be used to solve the interacting electron problem in solids.
By a systematic neural-network variational Monte Carlo study on a moir\'e
quantum material, we demonstrate that the self-attention ansatz provides an
accurate, efficient, and unbiased solution. Moreover, our numerical study finds
that the required number of variational parameters scales roughly as $N^2$ with
the number of electrons, which opens a path towards efficient large-scale
simulations.
|
2502.05384
|
Demonstrating CavePI: Autonomous Exploration of Underwater Caves by
Semantic Guidance
|
cs.RO
|
Enabling autonomous robots to safely and efficiently navigate, explore, and
map underwater caves is of significant importance to water resource management,
hydrogeology, archaeology, and marine robotics. In this work, we demonstrate
the system design and algorithmic integration of a visual servoing framework
for semantically guided autonomous underwater cave exploration. We present the
hardware and edge-AI design considerations to deploy this framework on a novel
AUV (Autonomous Underwater Vehicle) named CavePI. The guided navigation is
driven by a computationally light yet robust deep visual perception module,
delivering a rich semantic understanding of the environment. Subsequently, a
robust control mechanism enables CavePI to track the semantic guides and
navigate within complex cave structures. We evaluate the system through field
experiments in natural underwater caves and spring-water sites and further
validate its ROS (Robot Operating System)-based digital twin in a simulation
environment. Our results highlight how these integrated design choices
facilitate reliable navigation under feature-deprived, GPS-denied, and
low-visibility conditions.
|
2502.05387
|
Coarse-to-Fine Structure-Aware Artistic Style Transfer
|
cs.CV cs.AI
|
Artistic style transfer aims to use a style image and a content image to
synthesize a target image that retains the same artistic expression as the
style image while preserving the basic content of the content image. Many
recently proposed style transfer methods have a common problem; that is, they
simply transfer the texture and color of the style image to the global
structure of the content image. As a result, the content image has a local
structure that is not similar to the local structure of the style image. In
this paper, we present an effective method that can be used to transfer style
patterns while fusing the local style structure into the local content
structure. In our method, different levels of coarse stylized features are
first reconstructed at low resolution using a Coarse Network, in which style
color distribution is roughly transferred, and the content structure is
combined with the style structure. Then, the reconstructed features and the
content features are adopted to synthesize high-quality structure-aware
stylized images with high resolution using a Fine Network with three structural
selective fusion (SSF) modules. The effectiveness of our method is demonstrated
through the generation of appealing high-quality stylization results and a
comparison with some state-of-the-art style transfer methods.
|
2502.05389
|
The Role of Prosody in Spoken Question Answering
|
cs.CL
|
Spoken language understanding research to date has generally carried a heavy
text perspective. Most datasets are derived from text, which is then
subsequently synthesized into speech, and most models typically rely on
automatic transcriptions of speech. This comes at the expense of prosody:
additional information carried by the speech signal beyond the phonetics of
the words themselves that is difficult to recover from text alone. In
this work, we investigate the role of prosody in Spoken Question Answering. By
isolating prosodic and lexical information on the SLUE-SQA-5 dataset, which
consists of natural speech, we demonstrate that models trained on prosodic
information alone can perform reasonably well by utilizing prosodic cues.
However, we find that when lexical information is available, models tend to
predominantly rely on it. Our findings suggest that while prosodic cues provide
valuable supplementary information, more effective integration methods are
required to ensure prosody contributes more significantly alongside lexical
features.
|
2502.05390
|
Learning Task Representations from In-Context Learning
|
cs.CL cs.LG
|
Large language models (LLMs) have demonstrated remarkable proficiency in
in-context learning (ICL), where models adapt to new tasks through
example-based prompts without requiring parameter updates. However,
understanding how tasks are internally encoded and generalized remains a
challenge. To address some of the empirical and technical gaps in the
literature, we introduce an automated formulation for encoding task information
in ICL prompts as a function of attention heads within the transformer
architecture. This approach computes a single task vector as a weighted sum of
attention heads, with the weights optimized causally via gradient descent. Our
findings show that existing methods fail to generalize effectively to
modalities beyond text. In response, we also design a benchmark to evaluate
whether a task vector can preserve task fidelity in functional regression
tasks. The proposed method successfully extracts task-specific information from
in-context demonstrations and excels in both text and regression tasks,
demonstrating its generalizability across modalities. Moreover, ablation
studies show that our method's effectiveness stems from aligning the
distribution of the last hidden state with that of an optimally performing
in-context-learned model.
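The core construction, a task vector formed as a weighted sum of attention-head outputs with the weights fit by gradient descent, can be sketched with toy vectors. The head activations and target state below are stand-ins; the paper fits weights against model behavior, not a fixed target as here.

```python
# Toy sketch: a task vector as a weighted sum of per-head vectors, with the
# weights fit by plain gradient descent on a least-squares objective.
def task_vector(heads, weights):
    dim = len(heads[0])
    return [sum(w * h[i] for w, h in zip(weights, heads)) for i in range(dim)]

def fit_weights(heads, target, lr=0.1, steps=200):
    w = [0.0] * len(heads)
    for _ in range(steps):
        v = task_vector(heads, w)
        err = [vi - ti for vi, ti in zip(v, target)]
        # d/dw_j of 0.5 * ||v - target||^2 is <err, head_j>
        grad = [sum(e * hj for e, hj in zip(err, h)) for h in heads]
        w = [wj - lr * g for wj, g in zip(w, grad)]
    return w

heads = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy per-head activations
target = [2.0, 3.0]                            # toy reference hidden state
w = fit_weights(heads, target)
v = task_vector(heads, w)                      # approaches the target
```

The learned weights play the role of selecting which heads carry task information, mirroring the paper's optimization of head weights to align with an optimally performing in-context-learned model.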
|
2502.05391
|
Beyond and Free from Diffusion: Invertible Guided Consistency Training
|
cs.CV
|
Guidance in image generation steers models towards higher-quality or more
targeted outputs, typically achieved in Diffusion Models (DMs) via
Classifier-free Guidance (CFG). However, recent Consistency Models (CMs), which
offer fewer function evaluations, rely on distilling CFG knowledge from
pretrained DMs to achieve guidance, making them costly and inflexible. In this
work, we propose invertible Guided Consistency Training (iGCT), a novel
training framework for guided CMs that is entirely data-driven. iGCT, as a
pioneering work, contributes to fast and guided image generation and editing
without requiring the training and distillation of DMs, greatly reducing the
overall compute requirements. iGCT addresses the saturation artifacts seen in
CFG under high guidance scales. Our extensive experiments on CIFAR-10 and
ImageNet64 show that iGCT significantly improves FID and precision compared to
CFG. At a guidance of 13, iGCT improves precision to 0.8, while DM's drops to
0.47. Our work takes the first step toward enabling guidance and inversion for
CMs without relying on DMs.
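Classifier-free guidance, whose high-scale saturation iGCT addresses, is a standard linear extrapolation: the guided prediction is e_uncond + w * (e_cond - e_uncond). Toy scalars make the overshoot at large w visible:

```python
# Classifier-free guidance combines conditional and unconditional predictions
# by extrapolating past the conditional one; large w pushes far outside the
# range of either prediction, which is the saturation effect at high scales.
def cfg(e_uncond, e_cond, w):
    return e_uncond + w * (e_cond - e_uncond)

mild = cfg(0.2, 0.5, 1.5)    # modest extrapolation past the conditional value
harsh = cfg(0.2, 0.5, 13.0)  # a scale of 13, as in the abstract's comparison
```

At w = 1 the formula reduces to the conditional prediction; at w = 13, the scale used in the abstract's precision comparison, the toy output lands far outside [0.2, 0.5], illustrating why saturation artifacts appear.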
|
2502.05392
|
Open Challenges in Time Series Anomaly Detection: An Industry
Perspective
|
cs.LG
|
Current research in time-series anomaly detection relies on definitions that
miss critical aspects of how anomaly detection is commonly used in practice. We
list several areas that are of practical relevance and that we believe are
either under-investigated or missing entirely from the current discourse. Based
on an investigation of systems deployed in a cloud environment, we motivate the
areas of streaming algorithms, human-in-the-loop scenarios, point processes,
conditional anomalies, and population analysis of time series. This paper
serves as a motivation and call to action, including opportunities for
theoretical and applied research, as well as for building new datasets and
benchmarks.
|
2502.05395
|
Hierarchical Lexical Manifold Projection in Large Language Models: A
Novel Mechanism for Multi-Scale Semantic Representation
|
cs.CL
|
The integration of structured hierarchical embeddings into transformer-based
architectures introduces a refined approach to lexical representation, ensuring
that multi-scale semantic relationships are preserved without compromising
computational efficiency. A projection mechanism that maps tokens onto a
structured manifold provides improved lexical alignment, enhancing the
adaptability of word representations across diverse linguistic tasks. The
structured encoding framework ensures that hierarchical embeddings maintain
coherence across varying abstraction levels, allowing for stable transitions
between localized syntactic features and global semantic structures.
Experimental evaluations indicate that hierarchical embeddings consistently
outperform conventional token representations, improving accuracy in linguistic
benchmarks while maintaining lower computational overhead. Comparative analysis
across multiple domains highlights the ability of hierarchical embeddings to
retain contextual consistency, particularly in specialized language
applications where structured lexical alignment is essential. Statistical
assessments further demonstrate that hierarchical embeddings exhibit enhanced
robustness under perturbation conditions, ensuring that linguistic structures
remain stable across adversarial text modifications. The integration of
hierarchical projections with transformer attention mechanisms enables improved
contextual adaptation, ensuring that token representations are dynamically
adjusted based on varying linguistic distributions. The refined hierarchical
organization of embeddings provides greater interpretability in lexical
modeling, facilitating enhanced generalization capabilities across diverse text
processing tasks.
|
2502.05396
|
A Novel Convolutional-Free Method for 3D Medical Imaging Segmentation
|
eess.IV cs.CV
|
Segmentation of 3D medical images is a critical task for accurate diagnosis
and treatment planning. Convolutional neural networks (CNNs) have dominated the
field, achieving significant success in 3D medical image segmentation. However,
CNNs struggle with capturing long-range dependencies and global context,
limiting their performance, particularly for fine and complex structures.
Recent transformer-based models, such as TransUNet and nnFormer, have
demonstrated promise in addressing these limitations, though they still rely on
hybrid CNN-transformer architectures. This paper introduces a novel, fully
convolutional-free model based on transformer architecture and self-attention
mechanisms for 3D medical image segmentation. Our approach focuses on improving
multi-semantic segmentation accuracy and addressing domain adaptation
challenges between thick and thin slice CT images. We propose a joint loss
function that facilitates effective segmentation of thin slices based on thick
slice annotations, overcoming limitations in dataset availability. Furthermore,
we present a benchmark dataset for multi-semantic segmentation on thin slices,
addressing a gap in current medical imaging research. Our experiments
demonstrate the superiority of the proposed model over traditional and hybrid
architectures, offering new insights into the future of convolution-free
medical image segmentation.
|
2502.05397
|
Imitation Learning from a Single Temporally Misaligned Video
|
cs.LG
|
We examine the problem of learning sequential tasks from a single visual
demonstration. A key challenge arises when demonstrations are temporally
misaligned due to variations in timing, differences in embodiment, or
inconsistencies in execution. Existing approaches treat imitation as a
distribution-matching problem, aligning individual frames between the agent and
the demonstration. However, we show that such frame-level matching fails to
enforce temporal ordering or ensure consistent progress. Our key insight is
that matching should instead be defined at the level of sequences. We propose
that perfect matching occurs when one sequence successfully covers all the
subgoals in the same order as the other sequence. We present ORCA (ORdered
Coverage Alignment), a dense per-timestep reward function that measures the
probability of the agent covering demonstration frames in the correct order. On
temporally misaligned demonstrations, we show that agents trained with the ORCA
reward achieve $4.5$x improvement ($0.11 \rightarrow 0.50$ average normalized
returns) for Meta-world tasks and $6.6$x improvement ($6.55 \rightarrow 43.3$
average returns) for Humanoid-v4 tasks compared to the best frame-level
matching algorithms. We also provide empirical analysis showing that ORCA is
robust to varying levels of temporal misalignment. Our code is available at
https://github.com/portal-cornell/orca/
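The abstract describes ORCA's reward as probabilistic, and its exact form is not given here. One simplified, thresholded reading of "ordered coverage" is a pointer that advances through the demonstration only in order, with the reward at each timestep equal to the fraction of demo frames covered so far. This is an illustration of the concept, not the paper's reward.

```python
# Hedged sketch of ordered coverage: demo frames must be covered strictly in
# sequence, so out-of-order executions earn low reward even if every frame
# individually matches something in the demonstration.
def ordered_coverage_rewards(agent, demo, sim, threshold=0.8):
    rewards, k = [], 0            # k = next demo frame to cover, in order
    for frame in agent:
        while k < len(demo) and sim(frame, demo[k]) >= threshold:
            k += 1
        rewards.append(k / len(demo))
    return rewards

sim = lambda a, b: 1.0 if a == b else 0.0
demo = ["reach", "grasp", "lift"]
good = ["reach", "reach", "grasp", "lift"]   # correct order, varied timing
bad = ["lift", "grasp", "reach"]             # same frames, reversed order
```

Note that `good` reaches full coverage despite its extra "reach" frame (temporal misalignment), while `bad` cannot, which frame-level distribution matching would fail to distinguish.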
|
2502.05398
|
Probabilistic Foundations for Metacognition via Hybrid-AI
|
cs.AI
|
Metacognition is the concept of reasoning about an agent's own internal
processes, and it has recently received renewed attention with respect to
artificial intelligence (AI) and, more specifically, machine learning systems.
This paper reviews a hybrid-AI approach known as "error detecting and
correcting rules" (EDCR) that allows for the learning of rules to correct
perceptual (e.g., neural) models. Additionally, we introduce a probabilistic
framework that adds rigor to prior empirical studies, and we use this framework
to prove results on necessary and sufficient conditions for metacognitive
improvement, as well as limits to the approach. A set of future
|
2502.05400
|
Dynamic Noise Preference Optimization for LLM Self-Improvement via
Synthetic Data
|
cs.CL
|
Although LLMs have achieved significant success, their reliance on large
volumes of human-annotated data has limited their potential for further
scaling. In this situation, utilizing self-generated synthetic data has become
crucial for fine-tuning LLMs without extensive human annotation. However,
current methods often fail to ensure consistent improvements across iterations,
with performance stagnating after only minimal updates. To overcome these
challenges, we introduce Dynamic Noise Preference Optimization (DNPO). DNPO
employs a dynamic sample labeling mechanism to construct preference pairs for
training and introduces controlled, trainable noise into the preference
optimization process. Our approach effectively prevents stagnation and enables
continuous improvement. In experiments with Zephyr-7B, DNPO consistently
outperforms existing methods, showing an average performance boost of 2.6%
across multiple benchmarks. Additionally, DNPO shows a significant improvement
in model-generated data quality, with a 29.4% win-loss rate gap compared to the
baseline in GPT-4 evaluations. This highlights its effectiveness in enhancing
model performance through iterative refinement.
|
2502.05402
|
Convolutional Deep Colorization for Image Compression: A Color Grid
Based Approach
|
cs.CV cs.AI
|
The search for image compression optimization techniques is a topic of
constant interest both in and out of academic circles. One method that shows
promise toward future improvements in this field is image colorization since
image colorization algorithms can reduce the amount of color data that needs to
be stored for an image. Our work focuses on optimizing a color-grid-based
approach to fully automated retention of image color information, with regard
to convolutional colorization network architectures, for the purpose of image
compression. More generally, using a convolutional neural network for image
re-colorization, we want to minimize the amount of color information that is
stored while still being able to faithfully re-color images. Our results
yielded a promising image compression ratio while still allowing for
successful image recolorization, reaching high CSIM values.
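A back-of-the-envelope version of the color-grid idea: store the full-resolution luminance channel plus only a sparse grid of chroma samples, and let the colorization network repaint the rest. The grid stride and per-pixel channel counts below are illustrative assumptions, not the paper's settings.

```python
# Storage ratio of (full luma + sparse chroma grid) versus full RGB,
# counting one storage unit per channel sample.
def color_grid_ratio(h, w, stride):
    full_rgb = h * w * 3                        # original RGB storage
    luma = h * w                                # full-resolution grayscale
    chroma = (h // stride) * (w // stride) * 2  # sparse a/b color samples
    return (luma + chroma) / full_rgb

r = color_grid_ratio(256, 256, stride=8)        # about 0.344 of full RGB
```

Shrinking the stride trades compression for colorization fidelity, which is exactly the trade-off the network architecture is being optimized against.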
|
2502.05403
|
Analyzing public sentiment to gauge key stock events and determine
volatility in conjunction with time and options premiums
|
cs.LG
|
Analyzing stocks and making accurate predictions about where prices are
heading continues to become more challenging; therefore, we designed a new
financial algorithm that leverages social media sentiment analysis to enhance
the prediction of key stock earnings and associated volatility. Our model
integrates sentiment analysis and data retrieval techniques to extract
critical information from social media, analyze company financials, and
compare sentiment between Wall Street and the general public. This approach
aims to provide investors with timely data to execute trades based on key
events, rather than relying on long-term stock holding strategies. The stock
market is characterized by rapid data flow and fluctuating community
sentiment, which can significantly impact trading outcomes. Stock forecasting
is complex given its stochastic dynamics. Standard prediction methods often
overlook key events and media engagement, focusing instead on long-term
investment options. Our research seeks to make this stochastic dynamic more
predictable by examining the impact of media on stock volatility,
understanding and identifying sentiment differences between Wall Street and
retail investors, and evaluating the impact of various media networks in
predicting earnings reports.
|
2502.05407
|
The Complexity of Learning Sparse Superposed Features with Feedback
|
cs.LG cs.AI stat.ML
|
The success of deep networks is crucially attributed to their ability to
capture latent features within a representation space. In this work, we
investigate whether the underlying learned features of a model can be
efficiently retrieved through feedback from an agent, such as a large language
model (LLM), in the form of relative \textit{triplet comparisons}. These
features may represent various constructs, including dictionaries in LLMs or
components of a covariance matrix of Mahalanobis distances. We analyze the
feedback complexity associated with learning a feature matrix in sparse
settings. Our results establish tight bounds when the agent is permitted to
construct activations and demonstrate strong upper bounds in sparse scenarios
when the agent's feedback is limited to distributional information. We validate
our theoretical findings through experiments on two distinct applications:
feature recovery from Recursive Feature Machine-trained models and dictionary
extraction from sparse autoencoders trained on Large Language Models.
|
2502.05409
|
Vision-in-the-loop Simulation for Deep Monocular Pose Estimation of UAV
in Ocean Environment
|
cs.CV cs.AI cs.LG cs.RO cs.SY eess.SY
|
This paper proposes a vision-in-the-loop simulation environment for deep
monocular pose estimation of a UAV operating in an ocean environment. Recently,
a deep neural network with a transformer architecture has been successfully
trained to estimate the pose of a UAV relative to the flight deck of a research
vessel, overcoming several limitations of GPS-based approaches. However,
validating the deep pose estimation scheme in an actual ocean environment poses
significant challenges due to the limited availability of research vessels and
the associated operational costs. To address these issues, we present a
photo-realistic 3D virtual environment leveraging recent advancements in
Gaussian splatting, a novel technique that represents 3D scenes as collections
of 3D Gaussians, creating a lightweight and high-quality visual model from
multiple viewpoints. This approach enables the
creation of a virtual environment integrating multiple real-world images
collected in situ. The resulting simulation enables the indoor testing of
flight maneuvers while verifying all aspects of flight software, hardware, and
the deep monocular pose estimation scheme. This approach provides a
cost-effective solution for testing and validating the autonomous flight of
shipboard UAVs, specifically focusing on vision-based control and estimation
algorithms.
|
2502.05414
|
Graph-based Molecular In-context Learning Grounded on Morgan
Fingerprints
|
cs.LG cs.CL
|
In-context learning (ICL) effectively conditions large language models (LLMs)
for molecular tasks, such as property prediction and molecule captioning, by
embedding carefully selected demonstration examples into the input prompt. This
approach avoids the computational overhead of extensive pretraining and
fine-tuning. However, current prompt retrieval methods for molecular tasks have
relied on molecule feature similarity, such as Morgan fingerprints, which do
not adequately capture the global molecular and atom-binding relationships. As
a result, these methods fail to represent the full complexity of molecular
structures during inference. Moreover, small-to-medium-sized LLMs, which offer
simpler deployment requirements in specialized systems, have remained largely
unexplored in the molecular ICL literature. To address these gaps, we propose a
self-supervised learning technique, GAMIC (Graph-Aligned Molecular In-Context
learning), which aligns global molecular structures, represented by graph
neural
networks (GNNs), with textual captions (descriptions) while leveraging local
feature similarity through Morgan fingerprints. In addition, we introduce a
Maximum Marginal Relevance (MMR) based diversity heuristic during retrieval to
optimize input prompt demonstration samples. Our experimental findings on
diverse benchmark datasets show that GAMIC outperforms simple Morgan-based ICL
retrieval methods across all tasks by up to 45%.
|
2502.05415
|
Show-o Turbo: Towards Accelerated Unified Multimodal Understanding and
Generation
|
cs.CV cs.AI
|
There has been increasing research interest in building unified multimodal
understanding and generation models, among which Show-o stands as a notable
representative, demonstrating great promise for both text-to-image and
image-to-text generation. The inference of Show-o involves progressively
denoising image tokens and autoregressively decoding text tokens, and hence,
unfortunately, suffers from inefficiency issues from both sides. This paper
introduces Show-o Turbo to bridge the gap. We first identify a unified
denoising perspective for the generation of images and text in Show-o based on
the parallel decoding of text tokens. We then propose to extend consistency
distillation (CD), a qualified approach for shortening the denoising process of
diffusion models, to the multimodal denoising trajectories of Show-o. We
introduce a trajectory segmentation strategy and a curriculum learning
procedure to improve the training convergence. Empirically, in text-to-image
generation, Show-o Turbo displays a GenEval score of 0.625 at 4 sampling steps
without using classifier-free guidance (CFG), outperforming that of the
original Show-o with 8 steps and CFG; in image-to-text generation, Show-o Turbo
exhibits a 1.5x speedup without significantly sacrificing performance. The code
is available at https://github.com/zhijie-group/Show-o-Turbo.
|
2502.05416
|
Deep Generative Models with Hard Linear Equality Constraints
|
cs.LG
|
While deep generative models~(DGMs) have demonstrated remarkable success in
capturing complex data distributions, they consistently fail to learn
constraints that encode domain knowledge and thus require constraint
integration. Existing solutions to this challenge have primarily relied on
heuristic methods and often ignore the underlying data distribution, harming
the generative performance. In this work, we propose a probabilistically sound
approach for enforcing hard constraints in DGMs to generate
constraint-compliant and realistic data. This is achieved by our proposed
gradient estimators that allow the constrained distribution, the data
distribution conditioned on constraints, to be differentiably learned. We carry
out extensive experiments with various DGM model architectures over five image
datasets and three scientific applications in which domain knowledge is
governed by linear equality constraints. We validate that the standard DGMs
almost surely generate data violating the constraints. Among all the constraint
integration strategies, ours not only guarantees the satisfaction of
constraints in generation but also achieves superior generative performance
over the other methods across every benchmark.
|
2502.05423
|
LRA-GNN: Latent Relation-Aware Graph Neural Network with Initial and
Dynamic Residual for Facial Age Estimation
|
cs.CV
|
Face information is mainly concentrated among facial key points, and frontier
research has begun to use graph neural networks to segment faces into patches
as nodes to model complex face representations. However, these methods
construct node-to-node relations based on similarity thresholds, so there is a
problem that some latent relations are missing. These latent relations are
crucial for deep semantic representation of face aging. In this work, we
propose a new Latent Relation-Aware Graph Neural Network with Initial and
Dynamic Residual (LRA-GNN) to achieve robust and comprehensive facial
representation. Specifically, we first construct an initial graph utilizing
facial key points as prior knowledge, and then a random walk strategy is
applied to the initial graph to obtain the global structure, both of which
together guide the subsequent effective exploration and comprehensive
representation. Then LRA-GNN leverages the multi-attention mechanism to capture
the latent relations and generates a set of fully connected graphs containing
rich facial information and complete structure based on the aforementioned
guidance. To avoid over-smoothing issues for deep feature extraction on the
fully connected graphs, the deep residual graph convolutional networks are
carefully designed, which fuse adaptive initial residuals and dynamic
developmental residuals to ensure the consistency and diversity of information.
Finally, to improve the estimation accuracy and generalization ability,
progressive reinforcement learning is proposed to optimize the ensemble
classification regressor. Our proposed framework surpasses the state-of-the-art
baselines on several age estimation benchmarks, demonstrating its strength and
effectiveness.
|
2502.05424
|
SAMGPT: Text-free Graph Foundation Model for Multi-domain Pre-training
and Cross-domain Adaptation
|
cs.CL cs.AI
|
Graphs are able to model interconnected entities in many online services,
supporting a wide range of applications on the Web. This raises an important
question: How can we train a graph foundation model on multiple source
domains and adapt to an unseen target domain? A major obstacle is that graphs
from different domains often exhibit divergent characteristics. Some studies
leverage large language models to align multiple domains based on textual
descriptions associated with the graphs, limiting their applicability to
text-attributed graphs. For text-free graphs, a few recent works attempt to
align different feature distributions across domains, while generally
neglecting structural differences. In this work, we propose a novel Structure
Alignment framework for text-free Multi-domain Graph Pre-Training and
cross-domain adaptation (SAMGPT). It is designed to learn multi-domain
knowledge from graphs originating in multiple source domains, which can then be
adapted to address applications in an unseen target domain. Specifically, we
introduce a set of structure tokens to harmonize structure-based aggregation
across source domains during the pre-training phase. Next, for cross-domain
adaptation, we design dual prompts, namely, holistic prompts and specific
prompts, which adapt unified multi-domain structural knowledge and
fine-grained, domain-specific information, respectively, to a target domain.
Finally, we conduct comprehensive experiments on seven public datasets to
evaluate and analyze the effectiveness of SAMGPT.
|
2502.05425
|
Toward Copyright Integrity and Verifiability via Multi-Bit Watermarking
for Intelligent Transportation Systems
|
cs.CR cs.CL
|
Intelligent transportation systems (ITS) use advanced technologies such as
artificial intelligence to significantly improve traffic flow management
efficiency, and promote the intelligent development of the transportation
industry. However, if the data in ITS is attacked, e.g., tampered with or
forged, public safety is endangered and social losses ensue. Therefore,
this paper proposes a watermarking scheme, termed ITSmark, that can verify
copyright integrity in response to the needs of ITS. ITSmark focuses on functions
such as extracting watermarks, verifying permission, and tracing tampered
locations. The scheme uses the copyright information to build the multi-bit
space and divides this space into multiple segments, each assigned to a token.
Thus, the next token is determined by the segment that contains the copyright
information, so the generated data carries the custom watermark. To ensure
authorization, key parameters are encrypted during
copyright embedding to obtain cipher data. Only by possessing the correct
cipher data and private key can the user fully extract the watermark.
Experiments show that ITSmark surpasses baseline performances in data quality,
extraction accuracy, and unforgeability. It also shows unique capabilities of
permission verification and tampered location tracing, which ensures the
security of extraction and the reliability of copyright verification.
Furthermore, ITSmark can also customize the watermark embedding position and
proportion according to user needs, making embedding more flexible.
|