| id | title | categories | abstract |
|---|---|---|---|
2502.05836
|
LegalSeg: Unlocking the Structure of Indian Legal Judgments Through
Rhetorical Role Classification
|
cs.CL cs.AI cs.IR cs.LG
|
In this paper, we address the task of semantic segmentation of legal
documents through rhetorical role classification, with a focus on Indian legal
judgments. We introduce LegalSeg, the largest annotated dataset for this task,
comprising over 7,000 documents and 1.4 million sentences, labeled with 7
rhetorical roles. To benchmark performance, we evaluate multiple
state-of-the-art models, including Hierarchical BiLSTM-CRF,
TransformerOverInLegalBERT (ToInLegalBERT), Graph Neural Networks (GNNs), and
Role-Aware Transformers, alongside an exploratory RhetoricLLaMA, an
instruction-tuned large language model. Our results demonstrate that models
incorporating broader context, structural relationships, and sequential
sentence information outperform those relying solely on sentence-level
features. Additionally, we conducted experiments using surrounding context and
predicted or actual labels of neighboring sentences to assess their impact on
classification accuracy. Despite these advancements, challenges persist in
distinguishing between closely related roles and addressing class imbalance.
Our work underscores the potential of advanced techniques for improving legal
document understanding and sets a strong foundation for future research in
legal NLP.
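As a purely illustrative sketch of the neighbouring-label experiments described above (the role names and helper below are hypothetical, not the paper's code), pairing each sentence with the labels of its neighbours might look like:

```python
# Hypothetical sketch: build context features for rhetorical role
# classification by attaching the (predicted or gold) labels of neighbouring
# sentences, as in the paper's surrounding-context experiments.
def contextual_features(sentences, neighbor_labels, window=1):
    """Pair each sentence with neighbour labels within `window` positions."""
    feats = []
    for i, sent in enumerate(sentences):
        left = neighbor_labels[max(0, i - window):i]
        right = neighbor_labels[i + 1:i + 1 + window]
        feats.append({"text": sent, "left_labels": left, "right_labels": right})
    return feats

sents = ["The appellant filed suit.", "The issue is limitation.", "Appeal dismissed."]
labels = ["Facts", "Issue", "Decision"]
feats = contextual_features(sents, labels)
```

A downstream classifier can then consume both the sentence text and these neighbour labels, which is the kind of sequential context the abstract reports as helpful.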
|
2502.05842
|
A Grid-Forming HVDC Series Tapping Converter Using Extended Techniques
of Flex-LCC
|
eess.SY cs.SY
|
This paper discusses an extension of the previously proposed
Flexible Line-Commutated Converter (Flex-LCC) [1]. The proposed extension
involves modifying the arm internal-electromotive-force control, redesigning
the main-circuit parameters, and integrating a low-power coordination strategy.
As a result, the Flex-LCC transforms from a grid-forming (GFM) voltage source
converter (VSC) based on series-connected LCC and FBMMC into a novel GFM HVDC
series tapping converter, referred to as the Extended Flex-LCC (EFLCC). The
EFLCC provides dc characteristics resembling those of current source converters
(CSCs) and ac characteristics resembling those of GFM VSCs. This makes it
easier to integrate relatively small renewable energy sources (RESs) that
operate in islanded or weak-grid supported conditions with an existing
LCC-HVDC. Meanwhile, the EFLCC distinguishes itself by requiring fewer
full-controlled switches and less energy storage, resulting in lower losses and
costs compared to the FBMMC HVDC series tap solution. In particular, the
reduced capacity requirement and the wide allowable range of valve-side ac
voltages in the FBMMC part facilitate the matching of current-carrying
capacities between full-controlled switches and thyristors. The application
scenario, system-level analysis, implementation, converter-level operation, and
comparison of the EFLCC are presented in detail in this paper. The theoretical
analysis is confirmed by experimental and simulation results.
|
2502.05843
|
Training-free Anomaly Event Detection via LLM-guided Symbolic Pattern
Discovery
|
cs.CV
|
Anomaly event detection plays a crucial role in various real-world
applications. However, current approaches predominantly rely on supervised
learning, which faces significant challenges: the requirement for extensive
labeled training data and lack of interpretability in decision-making
processes. To address these limitations, we present a training-free framework
that integrates open-set object detection with symbolic regression, powered by
Large Language Models (LLMs) for efficient symbolic pattern discovery. The LLMs
guide the symbolic reasoning process, establishing logical relationships
between detected entities. Through extensive experiments across multiple
domains, our framework demonstrates several key advantages: (1) achieving
superior detection accuracy through direct reasoning without any training
process; (2) providing highly interpretable logical expressions that are
readily comprehensible to humans; and (3) requiring minimal annotation effort,
approximately 1% of the data needed by traditional training-based methods. To
facilitate comprehensive evaluation and future research, we introduce two
datasets: a large-scale private dataset containing over 110,000 annotated
images covering various anomaly scenarios including construction site safety
violations, illegal fishing activities, and industrial hazards, along with a
public benchmark dataset of 5,000 samples with detailed anomaly event
annotations. Code is available here.
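An illustrative (not the paper's) sketch of the core idea: an LLM-proposed symbolic rule is evaluated over open-set detector outputs. The rule below, "person present AND no helmet", is a hypothetical construction-safety example:

```python
# Hedged sketch: evaluate a symbolic anomaly rule over object detections.
# `required` are labels that must be present, `forbidden_absence` are labels
# whose absence triggers the alarm. Names are illustrative assumptions.
def violates(detections, required, forbidden_absence):
    labels = {d["label"] for d in detections}
    return required.issubset(labels) and forbidden_absence.isdisjoint(labels)

frame = [{"label": "person", "score": 0.9}, {"label": "excavator", "score": 0.8}]
alarm = violates(frame, required={"person"}, forbidden_absence={"helmet"})
```

Because the rule is an explicit logical expression over detected entities, it stays human-readable, which is the interpretability advantage the abstract claims.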
|
2502.05845
|
Exploiting the Hidden Capacity of MMC Through Accurate Quantification of
Modulation Indices
|
eess.SY cs.SY
|
The modular multilevel converter (MMC) has become increasingly important in
voltage-source converter-based high-voltage direct current (VSC-HVDC) systems.
Direct and indirect modulation are widely used as mainstream modulation
techniques in MMCs. However, due to the challenge of quantitatively evaluating
the operation of different modulation schemes, the academic and industrial
communities still hold differing opinions on their performance. To address this
controversy, this paper employs state-of-the-art computational methods and
quantitative metrics to compare the performance of different modulation
schemes. The findings indicate that direct modulation offers superior
modulation potential for MMCs, highlighting its higher ac voltage output
capability and broader linear PQ operation region. Conversely, indirect
modulation is disadvantaged in linear modulation, which indicates inferior
output voltage capability. Furthermore, this paper delves into the conditions
whereby direct and indirect modulation techniques become equivalent in
steady-state. The study findings suggest that the modulation capability of
direct modulation is the same as that of indirect modulation in steady-state
when additional controls, including closed-loop capacitor voltage control and
circulating current suppression control (CCSC), are simultaneously active.
Simulation and experimental results verify the correctness and validity of the analysis.
|
2502.05846
|
Rapid Detection of High-impedance Arc Faults in Medium Voltage
Electrical Distribution Systems
|
eess.SY cs.SY
|
High-impedance arc faults in AC power systems have the potential to lead to
catastrophic accidents. However, identifying these faults is challenging
because their characteristics are much weaker and vary with the grounding
surface. Addressing a noteworthy gap in prior research, which largely focused
on arc fault detection in low-voltage systems, a novel approach is applied that
offers rapid arc fault detection for medium-voltage distribution lines. In
contrast to existing black-box feature-based approaches, Hankel alternative
view of Koopman (HAVOK) analysis, developed from nonlinear dynamics, is
applied, offering not only interpretable features but also new application
options in the area of arc fault detection. The method achieves a much faster
detection speed, within 0.45 ms, making it appropriate for real-time
applications. It
demonstrates the ability to detect arc faults across various scenarios,
boosting its practical importance for stakeholders in safety-critical
industries.
|
2502.05849
|
Fact-or-Fair: A Checklist for Behavioral Testing of AI Models on
Fairness-Related Queries
|
cs.CL
|
The generation of incorrect images, such as depictions of people of color in
Nazi-era uniforms by Gemini, frustrated users and harmed Google's reputation,
motivating us to investigate the relationship between accurately reflecting
factuality and promoting diversity and equity. In this study, we focus on 19
real-world statistics collected from authoritative sources. Using these
statistics, we develop a checklist comprising objective and subjective queries
to analyze behavior of large language models (LLMs) and text-to-image (T2I)
models. Objective queries assess the models' ability to provide accurate world
knowledge. In contrast, the design of subjective queries follows a key
principle: statistical or experiential priors should not be overgeneralized to
individuals, ensuring that models uphold diversity. These subjective queries
are based on three common human cognitive errors that often result in social
biases. We propose metrics to assess factuality and fairness, and formally
prove the inherent trade-off between these two aspects. Results show that
GPT-4o and DALL-E 3 perform notably well among six LLMs and four T2I models.
Our code is publicly available at https://github.com/uclanlp/Fact-or-Fair.
|
2502.05850
|
MetaML-Pro: Cross-Stage Design Flow Automation for Efficient Deep
Learning Acceleration
|
cs.AR cs.LG
|
This paper presents a unified framework for codifying and automating
optimization strategies to efficiently deploy deep neural networks (DNNs) on
resource-constrained hardware, such as FPGAs, while maintaining high
performance, accuracy, and resource efficiency. Deploying DNNs on such
platforms involves addressing the significant challenge of balancing
performance, resource usage (e.g., DSPs and LUTs), and inference accuracy,
which often requires extensive manual effort and domain expertise. Our novel
approach addresses two key issues: cross-stage co-optimization and optimization
search. By seamlessly integrating programmatic DNN optimization techniques with
high-level synthesis (HLS)-based metaprogramming and leveraging advanced design
space exploration (DSE) strategies like Bayesian optimization, the framework
automates both top-down and bottom-up design flows, reducing the need for
manual intervention and domain expertise. The proposed framework introduces
customizable optimization, transformation, and control blocks to enhance DNN
accelerator performance and resource efficiency. Experimental results
demonstrate up to a 92% DSP and 89% LUT usage reduction for select networks,
while preserving accuracy, along with a 15.6-fold reduction in optimization
time compared to grid search. These results underscore the novelty and
potential of the proposed framework for automated, resource-efficient DNN
accelerator designs.
|
2502.05851
|
Fairness Driven Slot Allocation Problem in Billboard Advertisement
|
cs.GT cs.DB cs.MA
|
In billboard advertisement, a number of digital billboards are owned by an
influence provider, and several commercial houses (which we call advertisers)
approach the influence provider for a specific number of views of their
advertisement content on a payment basis. Though the billboard slot allocation
problem has been studied in the literature, this problem still needs to be
addressed from a fairness point of view. In this paper, we introduce the Fair
Billboard Slot Allocation Problem, where the objective is to allocate a given
set of billboard slots among a group of advertisers based on their demands
fairly and efficiently. As fairness criteria, we consider the maximin fair
share, which ensures that each advertiser will receive a subset of slots that
maximizes the minimum share across all the advertisers. We propose a
solution approach that generates an allocation with an approximate
maximin fair share. The proposed methodology is analyzed for its time and
space requirements and its performance guarantee. It has been
implemented with real-world trajectory and billboard datasets, and the results
have been reported. The results show that the proposed approach leads to a
balanced allocation by satisfying the maximin fairness criteria. At the same
time, it maximizes the utility of advertisers.
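A minimal greedy sketch of the maximin idea (not the paper's algorithm): repeatedly give the next most valuable slot to the advertiser whose fraction of its demand is currently lowest. All names and numbers below are illustrative:

```python
# Hedged sketch of maximin-style slot allocation: each step, allocate the
# highest-value remaining slot to the advertiser with the smallest
# demand-normalized share so far.
def greedy_maximin(slot_values, demands):
    shares = {a: 0.0 for a in demands}
    alloc = {a: [] for a in demands}
    for slot, value in sorted(slot_values.items(), key=lambda x: -x[1]):
        a = min(shares, key=lambda k: shares[k] / demands[k])
        alloc[a].append(slot)
        shares[a] += value
    return alloc, shares

alloc, shares = greedy_maximin({"s1": 3, "s2": 2, "s3": 1}, {"A": 3, "B": 3})
```

In this toy instance both advertisers end with equal shares, illustrating the balanced allocations the abstract reports.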
|
2502.05853
|
Zak-Transform-Induced Optimal Sequences and Their Applications in OTFS
|
cs.IT math.IT
|
This paper introduces a novel finite Zak transform (FZT)-aided framework for
constructing multiple zero-correlation zone (ZCZ) sequence sets with optimal
correlation properties. Specifically, each sequence is perfect with zero
auto-correlation sidelobes, each ZCZ sequence set meets the Tang-Fan-Matsufuji
bound with equality, and the maximum inter-set cross-correlation of multiple
sequence sets meets the Sarwate bound with equality. Our study shows that these
sequences can be sparsely expressed in the Zak domain through properly selected
index and phase matrices. Particularly, it is found that the maximum inter-set
cross-correlation beats the Sarwate bound if every index matrix is a circular
Florentine array. Several construction methods of multiple ZCZ sequence sets
are proposed, demonstrating both the optimality and high flexibility.
Additionally, it is shown that excellent synchronization performance can be
achieved by the proposed sequences in orthogonal-time-frequency-space (OTFS)
systems.
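As a small numerical illustration of the "perfect sequence" property the abstract relies on (zero periodic auto-correlation sidelobes), one can check it for a Zadoff-Chu sequence; this is a standard textbook example, not the paper's Zak-domain construction:

```python
import numpy as np

# Sketch: verify zero periodic autocorrelation sidelobes for a length-13
# Zadoff-Chu sequence (odd length, root coprime to length). Parameters are
# illustrative.
def zadoff_chu(N, root=1):
    n = np.arange(N)
    return np.exp(-1j * np.pi * root * n * (n + 1) / N)  # odd-N form

def periodic_autocorr(x):
    N = len(x)
    return np.array([np.vdot(np.roll(x, -tau), x) for tau in range(N)])

s = zadoff_chu(13)
r = periodic_autocorr(s)  # |r[0]| = 13, all other lags ~ 0
```

The peak at lag zero equals the sequence length and every sidelobe vanishes, which is exactly the "perfect" property required of each sequence in the proposed ZCZ sets.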
|
2502.05854
|
NSPG-Miner: Mining Repetitive Negative Sequential Patterns
|
cs.DB
|
Sequential pattern mining (SPM) with gap constraints (or repetitive SPM or
tandem repeat discovery in bioinformatics) can find frequent repetitive
subsequences satisfying gap constraints, which are called positive sequential
patterns with gap constraints (PSPGs). However, classical SPM with gap
constraints cannot find the frequent missing items in the PSPGs. To tackle this
issue, this paper explores negative sequential patterns with gap constraints
(NSPGs). We propose an efficient NSPG-Miner algorithm that can mine both
frequent PSPGs and NSPGs simultaneously. To effectively reduce candidate
patterns, we propose a pattern join strategy with negative patterns which can
generate both positive and negative candidate patterns at the same time. To
calculate the support (frequency of occurrence) of a pattern in each sequence,
we explore a NegPair algorithm that employs a key-value pair array structure to
deal with the gap constraints and the negative items simultaneously and can
avoid redundant rescanning of the original sequence, thus improving the
efficiency of the algorithm. To report the performance of NSPG-Miner, 11
competitive algorithms and 11 datasets are employed. The experimental results
not only validate the effectiveness of the strategies adopted by NSPG-Miner,
but also verify that NSPG-Miner can discover more valuable information than the
state-of-the-art algorithms. Algorithms and datasets can be downloaded from
https://github.com/wuc567/Pattern-Mining/tree/master/NSPG-Miner.
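A toy sketch of support counting for a positive pattern under gap constraints (the min/max gap between consecutive pattern items); the paper's NegPair algorithm additionally handles negative (missing) items and avoids rescanning, both omitted here:

```python
# Hedged sketch: count occurrences of `pattern` in `seq` where consecutive
# matched positions are separated by a gap in [min_gap, max_gap].
def count_with_gaps(seq, pattern, min_gap=0, max_gap=2):
    def extend(start_idx, p_idx):
        if p_idx == len(pattern):
            return 1
        total = 0
        lo = start_idx + 1 + min_gap
        hi = min(len(seq), start_idx + 2 + max_gap)
        for j in range(lo, hi):
            if seq[j] == pattern[p_idx]:
                total += extend(j, p_idx + 1)
        return total
    return sum(extend(i, 1) for i, c in enumerate(seq) if c == pattern[0])
```

Widening the gap constraint admits more occurrences, which is why gap handling dominates the support computation in this family of algorithms.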
|
2502.05855
|
DexVLA: Vision-Language Model with Plug-In Diffusion Expert for General
Robot Control
|
cs.RO cs.CV
|
Enabling robots to perform diverse tasks across varied environments is a
central challenge in robot learning. While vision-language-action (VLA) models
have shown promise for generalizable robot skills, realizing their full
potential requires addressing limitations in action representation and
efficient training. Current VLA models often focus on scaling the
vision-language model (VLM) component, while the action space representation
remains a critical bottleneck. This paper introduces DexVLA, a novel framework
designed to enhance the efficiency and generalization capabilities of VLAs for
complex, long-horizon tasks across diverse robot embodiments. DexVLA features a
novel diffusion-based action expert, scaled to one billion parameters, designed
for cross-embodiment learning. A novel embodiment curriculum learning strategy
facilitates efficient training: (1) pre-training the diffusion expert that is
separable from the VLA on cross-embodiment data, (2) aligning the VLA model to
specific embodiments, and (3) post-training for rapid adaptation to new tasks.
We conduct comprehensive experiments across multiple embodiments, including
single-arm, bimanual, and dexterous hand, demonstrating DexVLA's adaptability
to challenging tasks without task-specific adaptation, its ability to learn
dexterous skills on novel embodiments with limited data, and its capacity to
complete complex, long-horizon tasks using only direct language prompting, such
as laundry folding. In all settings, our method demonstrates superior
performance compared to state-of-the-art models like Octo, OpenVLA, and
Diffusion Policy.
|
2502.05857
|
Acquisition through My Eyes and Steps: A Joint Predictive Agent Model in
Egocentric Worlds
|
cs.CV cs.AI cs.LG
|
This paper addresses the task of learning an agent model behaving like
humans, which can jointly perceive, predict, and act in egocentric worlds.
Previous methods usually train separate models for these three abilities,
leading to information silos among them, which prevents these abilities from
learning from each other and collaborating effectively. In this paper, we
propose a joint predictive agent model, named EgoAgent, that simultaneously
learns to represent the world, predict future states, and take reasonable
actions with a single transformer. EgoAgent unifies the representational spaces
of the three abilities by mapping them all into a sequence of continuous
tokens. Learnable query tokens are appended to obtain current states, future
states, and next actions. With joint supervision, our agent model establishes
the internal relationship among these three abilities and effectively mimics
the human inference and learning processes. Comprehensive evaluations of
EgoAgent covering image classification, egocentric future state prediction, and
3D human motion prediction tasks demonstrate the superiority of our method. The
code and trained model will be released for reproducibility.
|
2502.05858
|
Let's Have Both! Optimal List-Recoverability via Alphabet Permutation
Codes
|
cs.IT math.IT
|
We construct a new family of codes that requires only polynomial randomness
yet achieves $(\rho,\ell,L)$-list-recoverability at a rate within $\epsilon$ of
capacity, with $L \approx \tfrac{\ell}{\epsilon}$. In contrast, every previous
construction using polynomial randomness required an exponentially larger list
size. Our approach extends earlier work by Li and Wootters (2021) on the
list-decodability of random linear binary codes.
|
2502.05859
|
SphereFusion: Efficient Panorama Depth Estimation via Gated Fusion
|
cs.CV
|
Due to the rapid development of panorama cameras, the task of estimating
panorama depth has attracted significant attention from the computer vision
community, especially in applications such as robot sensing and autonomous
driving. However, existing methods relying on different projection formats
often encounter challenges, either struggling with distortion and discontinuity
in the case of equirectangular, cubemap, and tangent projections, or
experiencing a loss of texture details with the spherical projection. To tackle
these concerns, we present SphereFusion, an end-to-end framework that combines
the strengths of various projection methods. Specifically, SphereFusion
initially employs 2D image convolution and mesh operations to extract two
distinct types of features from the panorama image in both equirectangular and
spherical projection domains. These features are then projected onto the
spherical domain, where a gate fusion module selects the most reliable features
for fusion. Finally, SphereFusion estimates panorama depth within the spherical
domain. Meanwhile, SphereFusion employs a cache strategy to improve the
efficiency of mesh operation. Extensive experiments on three public panorama
datasets demonstrate that SphereFusion achieves competitive results with other
state-of-the-art methods, while presenting the fastest inference speed at only
17 ms on a 512$\times$1024 panorama image.
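The gate fusion step described above can be sketched generically; the sigmoid-gated convex combination below is an assumption about its form (W_g stands in for trained parameters), not SphereFusion's actual module:

```python
import numpy as np

# Hedged sketch: a sigmoid gate softly selects, per element, between features
# from two projection branches (e.g. equirectangular vs. spherical).
def gate_fusion(feat_a, feat_b, W_g):
    logits = np.concatenate([feat_a, feat_b], axis=-1) @ W_g
    gate = 1.0 / (1.0 + np.exp(-logits))
    return gate * feat_a + (1.0 - gate) * feat_b

rng = np.random.default_rng(1)
a = rng.normal(size=(5, 4)); b = rng.normal(size=(5, 4))
W_g = rng.normal(size=(8, 4))  # stand-in for learned gate weights
fused = gate_fusion(a, b, W_g)
```

Because the gate lies in (0, 1), each fused value stays between the two branch features, letting the network favour whichever branch is more reliable per location.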
|
2502.05863
|
Uni-Retrieval: A Multi-Style Retrieval Framework for STEM's Education
|
cs.IR cs.AI cs.MM
|
In AI-facilitated teaching, leveraging various query styles to interpret
abstract text descriptions is crucial for ensuring high-quality teaching.
However, current retrieval models primarily focus on natural text-image
retrieval, making them insufficiently tailored to educational scenarios due to
the ambiguities in the retrieval process. In this paper, we propose a diverse
expression retrieval task tailored to educational scenarios, supporting
retrieval based on multiple query styles and expressions. We introduce the STEM
Education Retrieval Dataset (SER), which contains over 24,000 query pairs of
different styles, and the Uni-Retrieval, an efficient and style-diversified
retrieval vision-language model based on prompt tuning. Uni-Retrieval extracts
query style features as prototypes and builds a continuously updated Prompt
Bank containing prompt tokens for diverse queries. This bank can be updated at
test time to represent domain-specific knowledge for different subject
retrieval scenarios. Our framework demonstrates scalability and robustness by
dynamically retrieving prompt tokens based on prototype similarity, effectively
facilitating learning for unknown queries. Experimental results indicate that
Uni-Retrieval outperforms existing retrieval models in most retrieval tasks.
This advancement provides a scalable and precise solution for diverse
educational needs.
|
2502.05864
|
Learning Accurate, Efficient, and Interpretable MLPs on Multiplex Graphs
via Node-wise Multi-View Ensemble Distillation
|
cs.LG
|
Multiplex graphs, with multiple edge types (graph views) among common nodes,
provide richer structural semantics and better modeling capabilities. Multiplex
Graph Neural Networks (MGNNs), typically comprising view-specific GNNs and a
multi-view integration layer, have achieved advanced performance in various
downstream tasks. However, their reliance on neighborhood aggregation poses
challenges for deployment in latency-sensitive applications. Motivated by
recent GNN-to-MLP knowledge distillation frameworks, we propose Multiplex
Graph-Free Neural Networks (MGFNN and MGFNN+) to combine MGNNs' superior
performance and MLPs' efficient inference via knowledge distillation. MGFNN
directly trains student MLPs with node features as input and soft labels from
teacher MGNNs as targets. MGFNN+ further employs a low-rank approximation-based
reparameterization to learn node-wise coefficients, enabling adaptive knowledge
ensemble from each view-specific GNN. This node-wise multi-view ensemble
distillation strategy allows student MLPs to learn more informative multiplex
semantic knowledge for different nodes. Experiments show that MGFNNs achieve
average accuracy improvements of about 10% over vanilla MLPs and perform
comparably to or even better than teacher MGNNs (accurate); MGFNNs achieve a
35.40$\times$-89.14$\times$ speedup in inference over MGNNs (efficient); MGFNN+
adaptively assigns different coefficients for multi-view ensemble distillation
regarding different nodes (interpretable).
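The core distillation signal can be sketched as a KL divergence between teacher and student class distributions per node; this is a generic GNN-to-MLP distillation loss, not MGFNN's full objective (which adds node-wise multi-view ensemble weights):

```python
import math

# Hedged sketch: KL(teacher || student) averaged over nodes, the soft-label
# loss that trains a student MLP to mimic a teacher (M)GNN.
def kd_loss(teacher_probs, student_probs, eps=1e-12):
    total = 0.0
    for t_row, s_row in zip(teacher_probs, student_probs):
        total += sum(t * math.log((t + eps) / (s + eps))
                     for t, s in zip(t_row, s_row))
    return total / len(teacher_probs)
```

At inference the student MLP needs only node features, with no neighborhood aggregation, which is the source of the reported speedups.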
|
2502.05867
|
Self-Training Large Language Models for Tool-Use Without Demonstrations
|
cs.CL
|
Large language models (LLMs) remain prone to factual inaccuracies and
computational errors, including hallucinations and mistakes in mathematical
reasoning. Recent work augmented LLMs with tools to mitigate these
shortcomings, but often requires curated gold tool-use demonstrations. In this
paper, we investigate whether LLMs can learn to use tools without
demonstrations. First, we analyse zero-shot prompting strategies to guide LLMs
in tool utilisation. Second, we propose a self-training method to synthesise
tool-use traces using the LLM itself. We compare supervised fine-tuning and
preference fine-tuning techniques for fine-tuning the model on datasets
constructed using existing Question Answering (QA) datasets, i.e., TriviaQA and
GSM8K. Experiments show that tool-use enhances performance on a long-tail
knowledge task (a 3.7% gain on PopQA, which is used solely for evaluation), but leads
to mixed results on other datasets, i.e., TriviaQA, GSM8K, and NQ-Open. Our
findings highlight the potential and challenges of integrating external tools
into LLMs without demonstrations.
|
2502.05868
|
Norm Augmented Graph AutoEncoders for Link Prediction
|
cs.LG
|
Link Prediction (LP) is a crucial problem in graph-structured data. Graph
Neural Networks (GNNs) have gained prominence in LP, with Graph AutoEncoders
(GAEs) being a notable representation. However, our empirical findings reveal
that GAEs' LP performance suffers heavily from the long-tailed node degree
distribution, i.e., low-degree nodes tend to exhibit inferior LP performance
compared to high-degree nodes. \emph{What causes this degree-related bias, and
how can it be mitigated?} In this study, we demonstrate that the norm of node
embeddings learned by GAEs exhibits variation among nodes with different
degrees, underscoring its central significance in influencing the final
performance of LP. Specifically, embeddings with larger norms tend to guide the
decoder towards predicting higher scores for positive links and lower scores
for negative links, thereby contributing to superior performance. This
observation motivates us to improve GAEs' LP performance on low-degree nodes by
increasing their embedding norms, which can be implemented simply yet
effectively by introducing additional self-loops into the training objective
for low-degree nodes. This norm augmentation strategy can be seamlessly
integrated into existing GAE methods with light computational cost. Extensive
experiments on various datasets and GAE methods show the superior performance
of norm-augmented GAEs.
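A minimal sketch of the norm-augmentation idea as described: add extra self-loops to the training edge set for nodes whose degree falls below a threshold. The helper and threshold are illustrative, not the authors' code:

```python
from collections import Counter

# Hedged sketch: augment the positive-edge training set with self-loops for
# low-degree nodes, nudging the decoder to enlarge their embedding norms.
def augment_with_self_loops(edges, num_nodes, degree_threshold=2):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    extra = [(n, n) for n in range(num_nodes) if deg[n] < degree_threshold]
    return edges + extra

aug = augment_with_self_loops([(0, 1), (1, 2), (1, 3)], num_nodes=5)
```

The augmentation touches only the training objective, so it composes with any existing GAE at negligible extra cost, as the abstract notes.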
|
2502.05869
|
HyLiFormer: Hyperbolic Linear Attention for Skeleton-based Human Action
Recognition
|
cs.CV
|
Transformers have demonstrated remarkable performance in skeleton-based human
action recognition, yet their quadratic computational complexity remains a
bottleneck for real-world applications. To mitigate this, linear attention
mechanisms have been explored but struggle to capture the hierarchical
structure of skeleton data. Meanwhile, the Poincaré model, as a typical
hyperbolic geometry, offers a powerful framework for modeling hierarchical
structures but lacks well-defined operations for existing mainstream linear
attention. In this paper, we propose HyLiFormer, a novel hyperbolic linear
attention Transformer tailored for skeleton-based action recognition. Our
approach incorporates a Hyperbolic Transformation with Curvatures (HTC) module
to map skeleton data into hyperbolic space and a Hyperbolic Linear Attention
(HLA) module for efficient long-range dependency modeling. Theoretical analysis
and extensive experiments on NTU RGB+D and NTU RGB+D 120 datasets demonstrate
that HyLiFormer significantly reduces computational complexity while preserving
model accuracy, making it a promising solution for efficiency-critical
applications.
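For readers unfamiliar with linear attention, here is a minimal Euclidean sketch of the O(N) mechanism the paper adapts; the feature map and shapes are illustrative, and the hyperbolic (Poincaré) component is omitted entirely:

```python
import numpy as np

# Hedged sketch of linear attention: with a positive feature map phi, the
# key-value summary K^T V is shared across queries, so cost is linear in
# sequence length N rather than quadratic.
def linear_attention(Q, K, V, eps=1e-6):
    phi = lambda x: np.maximum(x, 0) + 1.0   # simple positive feature map
    Qf, Kf = phi(Q), phi(K)
    KV = Kf.T @ V                            # (d, d_v): shared summary
    Z = Qf @ Kf.sum(axis=0) + eps            # (N,): per-query normalizer
    return (Qf @ KV) / Z[:, None]

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 3)); K = rng.normal(size=(4, 3)); V = rng.normal(size=(4, 2))
out = linear_attention(Q, K, V)
```

Each output row is a convex combination of value rows, matching softmax attention's averaging behaviour while avoiding the N-by-N attention matrix.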
|
2502.05874
|
MMGDreamer: Mixed-Modality Graph for Geometry-Controllable 3D Indoor
Scene Generation
|
cs.CV cs.AI cs.LG
|
Controllable 3D scene generation has extensive applications in virtual
reality and interior design, where the generated scenes should exhibit high
levels of realism and controllability in terms of geometry. Scene graphs
provide a suitable data representation that facilitates these applications.
However, current graph-based methods for scene generation are constrained to
text-based inputs and exhibit insufficient adaptability to flexible user
inputs, hindering the ability to precisely control object geometry. To address
this issue, we propose MMGDreamer, a dual-branch diffusion model for scene
generation that incorporates a novel Mixed-Modality Graph, visual enhancement
module, and relation predictor. The mixed-modality graph allows object nodes to
integrate textual and visual modalities, with optional relationships between
nodes. It enhances adaptability to flexible user inputs and enables meticulous
control over the geometry of objects in the generated scenes. The visual
enhancement module enriches the visual fidelity of text-only nodes by
constructing visual representations using text embeddings. Furthermore, our
relation predictor leverages node representations to infer absent relationships
between nodes, resulting in more coherent scene layouts. Extensive experimental
results demonstrate that MMGDreamer exhibits superior control of object
geometry, achieving state-of-the-art scene generation performance. Project
page: https://yangzhifeio.github.io/project/MMGDreamer.
|
2502.05878
|
Enhancing Financial Time-Series Forecasting with Retrieval-Augmented
Large Language Models
|
cs.CL
|
Stock movement prediction, a critical task in financial time-series
forecasting, relies on identifying and retrieving key influencing factors from
vast and complex datasets. However, traditional text-trained or numeric
similarity-based retrieval methods often struggle to handle the intricacies of
financial data. To address this, we propose the first retrieval-augmented
generation (RAG) framework specifically designed for financial time-series
forecasting. Our framework incorporates three key innovations: a fine-tuned 1B
large language model (StockLLM) as its backbone, a novel candidate selection
method enhanced by LLM feedback, and a training objective that maximizes the
similarity between queries and historically significant sequences. These
advancements enable our retriever, FinSeer, to uncover meaningful patterns
while effectively minimizing noise in complex financial datasets. To support
robust evaluation, we also construct new datasets that integrate financial
indicators and historical stock prices. Experimental results demonstrate that
our RAG framework outperforms both the baseline StockLLM and random retrieval
methods, showcasing its effectiveness. FinSeer, as the retriever, achieves an
8% higher accuracy on the BIGDATA22 benchmark and retrieves more impactful
sequences compared to existing retrieval methods. This work highlights the
importance of tailored retrieval models in financial forecasting and provides a
novel, scalable framework for future research in the field.
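The basic operation a retriever like FinSeer performs, ranking candidate historical sequences by embedding similarity to the query, can be sketched generically (its training objective and LLM feedback loop are not shown; ids and vectors are made up):

```python
import math

# Hedged sketch: cosine-similarity top-k retrieval over candidate embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, candidates, k=2):
    return sorted(candidates, key=lambda c: cosine(query, c["vec"]), reverse=True)[:k]

candidates = [{"id": "a", "vec": [1, 0]},
              {"id": "b", "vec": [0, 1]},
              {"id": "c", "vec": [0.9, 0.1]}]
hits = top_k([1, 0], candidates)
```

In the RAG framework, the retrieved sequences are then fed to the backbone LLM as additional context for the forecasting query.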
|
2502.05879
|
Enhancing Depression Detection with Chain-of-Thought Prompting: From
Emotion to Reasoning Using Large Language Models
|
cs.CL cs.AI
|
Depression is one of the leading causes of disability worldwide, posing a
severe burden on individuals, healthcare systems, and society at large. Recent
advancements in Large Language Models (LLMs) have shown promise in addressing
mental health challenges, including the detection of depression through
text-based analysis. However, current LLM-based methods often struggle with
nuanced symptom identification and lack a transparent, step-by-step reasoning
process, making it difficult to accurately classify and explain mental health
conditions. To address these challenges, we propose a Chain-of-Thought
Prompting approach that enhances both the performance and interpretability of
LLM-based depression detection. Our method breaks down the detection process
into four stages: (1) sentiment analysis, (2) binary depression classification,
(3) identification of underlying causes, and (4) assessment of severity. By
guiding the model through these structured reasoning steps, we improve
interpretability and reduce the risk of overlooking subtle clinical indicators.
We validate our method on the E-DAIC dataset, where we test multiple
state-of-the-art large language models. Experimental results indicate that our
Chain-of-Thought Prompting technique yields superior performance in both
classification accuracy and the granularity of diagnostic insights, compared to
baseline approaches.
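The four-stage structure described above can be sketched as a prompt scaffold; the stage wording below is illustrative, and the LLM call itself is omitted:

```python
# Hedged sketch of the four-stage chain-of-thought prompt scaffold:
# sentiment -> binary classification -> causes -> severity.
STAGES = [
    "Step 1: Describe the overall sentiment of the transcript.",
    "Step 2: Based on the sentiment above, classify depression as present or absent.",
    "Step 3: If present, identify likely underlying causes.",
    "Step 4: Assess severity (minimal / mild / moderate / severe).",
]

def build_cot_prompt(transcript):
    return "\n".join(["Transcript:", transcript, ""] + STAGES)

prompt = build_cot_prompt("I feel tired all the time.")
```

Forcing the model to emit each intermediate stage is what yields the step-by-step rationale that makes the final severity judgment auditable.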
|
2502.05883
|
NeuralPrefix: A Zero-shot Sensory Data Imputation Plugin
|
cs.LG cs.AI stat.ML
|
Real-world sensing challenges such as sensor failures, communication issues,
and power constraints lead to data intermittency, an issue known to undermine
traditional classification tasks that assume a continuous data stream.
Previous works addressed this issue by designing bespoke solutions
(i.e. task-specific and/or modality-specific imputation). These approaches,
while effective for their intended purposes, had limitations in their
applicability across different tasks and sensor modalities. This raises an
important question: Can we build a task-agnostic imputation pipeline that is
transferable to new sensors without requiring additional training? In this
work, we formalise the concept of zero-shot imputation and propose a novel
approach that enables the adaptation of pre-trained models to handle data
intermittency. This framework, named NeuralPrefix, is a generative neural
component that precedes a task model during inference, filling in gaps caused
by data intermittency. NeuralPrefix is built as a continuous dynamical system,
where its internal state can be estimated at any point in time by solving an
Ordinary Differential Equation (ODE). This approach allows for a more versatile
and adaptable imputation method, overcoming the limitations of task-specific
and modality-specific solutions. We conduct a comprehensive evaluation of
NeuralPrefix on multiple sensory datasets, demonstrating its effectiveness
across various domains. When tested on intermittent data with a high 50%
missing data rate, NeuralPrefix accurately recovers all the missing samples,
achieving SSIM scores between 0.93 and 0.96. Zero-shot evaluations show that
NeuralPrefix generalises well to unseen datasets, even when the measurements
come from a different modality.
|
2502.05884
|
Study of Robust Multiuser Scheduling and Power Allocation in Cell-Free
MIMO Networks
|
cs.IT math.IT
|
This paper introduces a robust resource allocation framework for the downlink
of cell-free massive multi-input multi-output (CF-mMIMO) networks to address
the effects caused by imperfect channel state information (CSI). In particular,
the proposed robust resource allocation framework includes a robust user
scheduling algorithm to optimize the network's sum-rate and a robust power
allocation technique aimed at minimizing the mean square error (MSE) for a
network with a linear precoder. Unlike non-robust resource allocation
techniques, the proposed robust strategies effectively counteract the effects
of imperfect CSI, enhancing network efficiency and reliability. Simulation
results show a significant improvement in network performance obtained by the
proposed approaches, highlighting the impact of robust resource allocation in
wireless networks.
|
2502.05887
|
MTPChat: A Multimodal Time-Aware Persona Dataset for Conversational
Agents
|
cs.CL cs.AI
|
Understanding temporal dynamics is critical for conversational agents,
enabling effective content analysis and informed decision-making. However,
time-aware datasets, particularly for persona-grounded conversations, are still
limited, which narrows their scope and diminishes their complexity. To address
this gap, we introduce MTPChat, a multimodal, time-aware persona dialogue
dataset that integrates linguistic, visual, and temporal elements within
dialogue and persona memory. Leveraging MTPChat, we propose two time-sensitive
tasks: Temporal Next Response Prediction (TNRP) and Temporal Grounding Memory
Prediction (TGMP), both designed to assess a model's ability to understand
implicit temporal cues and dynamic interactions. Additionally, we present an
innovative framework featuring an adaptive temporal module to effectively
integrate multimodal streams and capture temporal dependencies. Experimental
results validate the challenges posed by MTPChat and demonstrate the
effectiveness of our framework in multimodal time-sensitive scenarios.
|
2502.05891
|
Room-scale magnetoquasistatic wireless power transfer using a
cavity-based multimode resonator
|
physics.app-ph cs.SY eess.SY
|
Magnetoquasistatic wireless power transfer can be used to charge and power
electronic devices such as smartphones and small home appliances. However,
existing coil-based transmitters, which are composed of wire conductors, have a
limited range. Here we show that multimode quasistatic cavity resonance can
provide room-scale wireless power transfer. The approach uses multidirectional,
widely distributed currents on conductive surfaces that are placed around the
target volume. It generates multiple, mutually unique, three-dimensional
magnetic field patterns, where each pattern is attributed to different
eigenmodes of a single room-scale resonator. Using these modes together, a
power delivery efficiency exceeding 37.1% can be achieved throughout a 3 m * 3
m * 2 m test room. With this approach, power exceeding 50 W could potentially
be delivered to mobile receivers in accordance with safety guidelines.
|
2502.05892
|
A Distributional Perspective on Word Learning in Neural Language Models
|
cs.CL cs.AI
|
Language models (LMs) are increasingly being studied as models of human
language learners. Due to the nascency of the field, it is not well-established
whether LMs exhibit similar learning dynamics to humans, and there are few
direct comparisons between learning trajectories in humans and models. Word
learning trajectories for children are relatively well-documented, and recent
work has tried to extend these investigations to language models. However,
there are no widely agreed-upon metrics for word learning in language models.
We take a distributional approach to this problem, defining lexical knowledge
in terms of properties of the learned distribution for a target word. We argue
that distributional signatures studied in prior work fail to capture key
distributional information. Thus, we propose an array of signatures that
improve on earlier approaches by capturing knowledge of both where the target
word can and cannot occur as well as gradient preferences about the word's
appropriateness. We obtain learning trajectories for a selection of small
language models we train from scratch, study the relationship between different
distributional signatures, compare how well they align with human word learning
trajectories and interpretable lexical features, and address basic
methodological questions about estimating these distributional signatures. Our
metrics largely capture complementary information, suggesting that it is
important not to rely on a single metric. However, across all metrics, language
models' learning trajectories fail to correlate with those of children.
|
2502.05894
|
Suppressing Leakage Magnetic Field in Wireless Power Transfer using
Halbach Array-Based Resonators
|
physics.app-ph cs.SY eess.SY
|
Wireless power transfer has the potential to seamlessly power electronic
systems, such as electric vehicles, industrial robots, and mobile devices.
However, the leakage magnetic field is a critical bottleneck that limits the
transferable power level, and heavy ferromagnetic shields are needed for
transferring large amounts of power. In this paper, we propose a ferrite-less
coil design that generates an asymmetric magnetic field pattern focused on one
side of the resonator, which effectively reduces the leakage magnetic field.
The key to enabling the asymmetric field pattern is a coil winding strategy
inspired by the Halbach array, a permanent magnet arrangement, which is then
tailored for wireless power using an evolutionary strategy algorithm. Numerical
analyses and simulations demonstrated that the proposed coil structure delivers
the same amount of power as spiral coils, while achieving an 86.6% reduction in
magnetic field intensity at a plane located 75 mm away from the resonator pair
and a power efficiency of 96.0%. We verified our approach by measuring the
power efficiency and magnetic field intensity of a test wireless power system
operating at 6.78 MHz. These findings indicate that our approach can
efficiently deliver over 50 times more power without increasing magnetic field
exposure, making it a promising solution for high-power wireless power transfer
applications.
|
2502.05895
|
Beyond Fine-Tuning: A Systematic Study of Sampling Techniques in
Personalized Image Generation
|
cs.CV
|
Personalized text-to-image generation aims to create images tailored to
user-defined concepts and textual descriptions. Balancing the fidelity of the
learned concept with its ability for generation in various contexts presents a
significant challenge. Existing methods often address this through diverse
fine-tuning parameterizations and improved sampling strategies that integrate
superclass trajectories during the diffusion process. While improved sampling
offers a cost-effective, training-free solution for enhancing fine-tuned
models, systematic analyses of these methods remain limited. Current approaches
typically tie sampling strategies with fixed fine-tuning configurations, making
it difficult to isolate their impact on generation outcomes. To address this
issue, we systematically analyze sampling strategies beyond fine-tuning,
exploring the impact of concept and superclass trajectories on the results.
Building on this analysis, we propose a decision framework evaluating text
alignment, computational constraints, and fidelity objectives to guide strategy
selection. It integrates with diverse architectures and training approaches,
systematically optimizing concept preservation, prompt adherence, and resource
efficiency. The source code can be found at
https://github.com/ControlGenAI/PersonGenSampler.
|
2502.05902
|
Fast Omni-Directional Image Super-Resolution: Adapting the Implicit
Image Function with Pixel and Semantic-Wise Spherical Geometric Priors
|
cs.CV
|
In the context of Omni-Directional Image (ODI) Super-Resolution (SR), the
unique challenge arises from the non-uniform oversampling characteristics
caused by EquiRectangular Projection (ERP). Considerable efforts in designing
complex spherical convolutions or polyhedron reprojection offer significant
performance improvements but at the expense of cumbersome processing procedures
and slower inference speeds. Under these circumstances, this paper proposes a
new ODI-SR model characterized by its capacity to perform Fast and
Arbitrary-scale ODI-SR processes, denoted as FAOR. The key innovation lies in
adapting the implicit image function from the planar image domain to the ERP
image domain by incorporating spherical geometric priors at both the latent
representation and image reconstruction stages, in a low-overhead manner.
Specifically, at the latent representation stage, we adopt a pair of pixel-wise
and semantic-wise sphere-to-planar distortion maps to perform affine
transformations on the latent representation, thereby incorporating it with
spherical properties. Moreover, during the image reconstruction stage, we
introduce a geodesic-based resampling strategy, aligning the implicit image
function with spherical geometry without introducing additional parameters.
As a result, the proposed FAOR outperforms the state-of-the-art ODI-SR models
with a much faster inference speed. Extensive experimental results and ablation
studies have demonstrated the effectiveness of our design.
|
2502.05905
|
QP-SNN: Quantized and Pruned Spiking Neural Networks
|
cs.CV
|
Brain-inspired Spiking Neural Networks (SNNs) leverage sparse spikes to
encode information and operate in an asynchronous event-driven manner, offering
a highly energy-efficient paradigm for machine intelligence. However, the
current SNN community focuses primarily on performance improvement by
developing large-scale models, which limits the applicability of SNNs in
resource-limited edge devices. In this paper, we propose a hardware-friendly
and lightweight SNN, aimed at effectively deploying high-performance SNN in
resource-limited scenarios. Specifically, we first develop a baseline model
that integrates uniform quantization and structured pruning, called QP-SNN
baseline. While this baseline significantly reduces storage demands and
computational costs, it suffers from performance decline. To address this, we
conduct an in-depth analysis of the challenges in quantization and pruning that
lead to performance degradation and propose solutions to enhance the baseline's
performance. For weight quantization, we propose a weight rescaling strategy
that utilizes bit width more effectively to enhance the model's representation
capability. For structured pruning, we propose a novel pruning criterion using
the singular value of spatiotemporal spike activities to enable more accurate
removal of redundant kernels. Extensive experiments demonstrate that
integrating two proposed methods into the baseline allows QP-SNN to achieve
state-of-the-art performance and efficiency, underscoring its potential for
enhancing SNN deployment in edge intelligence computing.
|
2502.05907
|
EvoAgent: Agent Autonomous Evolution with Continual World Model for
Long-Horizon Tasks
|
cs.RO
|
Completing Long-Horizon (LH) tasks in open-ended worlds is an important yet
difficult problem for embodied agents. Existing approaches suffer from two key
challenges: (1) they heavily rely on experiences obtained from human-created
data or curricula, lacking the ability to continuously update multimodal
experiences, and (2) they may encounter catastrophic forgetting issues when
faced with new tasks, lacking the ability to continuously update world
knowledge. To solve these challenges, this paper presents EvoAgent, an
autonomous-evolving agent with a continual World Model (WM), which can
autonomously complete various LH tasks across environments through
self-planning, self-control, and self-reflection, without human intervention.
Our proposed EvoAgent contains three modules, i.e., i) the memory-driven
planner which uses an LLM along with the WM and interaction memory, to convert
LH tasks into executable sub-tasks; ii) the WM-guided action controller which
leverages WM to generate low-level actions and incorporates a self-verification
mechanism to update multimodal experiences; iii) the experience-inspired
reflector which implements a two-stage curriculum learning algorithm to select
experiences for task-adaptive WM updates. Moreover, we develop a continual
World Model for EvoAgent, which can continuously update the multimodal
experience pool and world knowledge through closed-loop dynamics. We conducted
extensive experiments on Minecraft; compared with existing methods, EvoAgent
achieves an average success rate improvement of 105% and reduces ineffective
actions by more than 6x.
|
2502.05908
|
Inverse Problem Sampling in Latent Space Using Sequential Monte Carlo
|
eess.IV cs.CV cs.LG
|
In image processing, solving inverse problems is the task of finding
plausible reconstructions of an image that was corrupted by some (usually
known) degradation model. Commonly, this process is done using a generative
image model that can guide the reconstruction towards solutions that appear
natural. The success of diffusion models over the last few years has made them
a leading candidate for this task. However, the sequential nature of diffusion
models makes this conditional sampling process challenging. Furthermore, since
diffusion models are often defined in the latent space of an autoencoder, the
encoder-decoder transformations introduce additional difficulties. Here, we
suggest a novel sampling method based on sequential Monte Carlo (SMC) in the
latent space of diffusion models. We use the forward process of the diffusion
model to add additional auxiliary observations and then perform an SMC sampling
as part of the backward process. Empirical evaluations on ImageNet and FFHQ
show the benefits of our approach over competing methods on various inverse
problem tasks.
|
2502.05911
|
GRAIT: Gradient-Driven Refusal-Aware Instruction Tuning for Effective
Hallucination Mitigation
|
cs.CL
|
Refusal-Aware Instruction Tuning (RAIT) aims to enhance Large Language Models
(LLMs) by improving their ability to refuse responses to questions beyond their
knowledge, thereby reducing hallucinations and improving reliability. Effective
RAIT must address two key challenges: first, effectively rejecting unknown
questions to minimize hallucinations; second, avoiding over-refusal so that
questions that can be correctly answered are not rejected, thereby maintaining
the helpfulness of LLM outputs. In this paper, we address these two challenges
by deriving insightful observations from a gradient-based perspective and
proposing the Gradient-driven Refusal-Aware Instruction Tuning framework
(GRAIT), which (1) employs gradient-driven sample selection to effectively
minimize hallucinations and (2) introduces an adaptive weighting mechanism
during fine-tuning to reduce the risk of over-refusal, achieving a balance
between accurate refusals and maintaining useful responses. Experimental evaluations on
open-ended and multiple-choice question answering tasks demonstrate that GRAIT
significantly outperforms existing RAIT methods in the overall performance. The
source code and data will be available at https://github.com/opendatalab/GRAIT .
|
2502.05912
|
LpBound: Pessimistic Cardinality Estimation using $\ell_p$-Norms of
Degree Sequences
|
cs.DB
|
Cardinality estimation is the problem of estimating the size of the output of
a query, without actually evaluating the query. The cardinality estimator is a
critical piece of a query optimizer, and is often the main culprit when the
optimizer chooses a poor plan.
This paper introduces LpBound, a pessimistic cardinality estimator for
multijoin queries (acyclic or cyclic) with selection predicates and group-by
clauses. LpBound computes a guaranteed upper bound on the size of the query
output using simple statistics on the input relations, consisting of
$\ell_p$-norms of degree sequences. The bound is the optimal solution of a
linear program whose constraints encode data statistics and Shannon
inequalities. We introduce two optimizations that exploit the structure of the
query in order to speed up the estimation time and make LpBound practical.
We experimentally evaluate LpBound against a range of traditional,
pessimistic, and machine learning-based estimators on the JOB, STATS, and
subgraph matching benchmarks. Our main finding is that LpBound can be orders of
magnitude more accurate than traditional estimators used in mainstream
open-source and commercial database systems, yet it has comparably low
estimation time and space requirements. When injected with the estimates of
LpBound, Postgres derives query plans at least as good as those derived using
the true
cardinalities.
|
2502.05916
|
Adaptive Grasping of Moving Objects in Dense Clutter via Global-to-Local
Detection and Static-to-Dynamic Planning
|
cs.RO cs.SY eess.SY
|
Robotic grasping is facing a variety of real-world uncertainties caused by
non-static object states, unknown object properties, and cluttered object
arrangements. The difficulty of grasping increases with the presence of more
uncertainties, where commonly used learning-based approaches struggle to
perform consistently across varying conditions. In this study, we integrate the
idea of similarity matching to tackle the challenge of grasping novel objects
that are simultaneously in motion and densely cluttered using a single RGBD
camera, where multiple uncertainties coexist. We achieve this by shifting
visual detection from global to local states and operating grasp planning from
static to dynamic scenes. Notably, we introduce optimization methods to enhance
planning efficiency for this time-sensitive task. Our proposed system can adapt
to various object types, arrangements and movement speeds without the need for
extensive training, as demonstrated by real-world experiments. Videos are
available at https://youtu.be/sdC50dx-xp8?si=27oVr4dhG0rqN_tT.
|
2502.05917
|
Modeling and Beamforming Optimization for Pinching-Antenna Systems
|
cs.IT math.IT
|
The Pinching-Antenna SyStem (PASS) is a revolutionary flexible antenna
technology designed to enhance wireless communication by establishing strong
line-of-sight (LoS) links, reducing free-space path loss and enabling antenna
array reconfigurability. PASS uses dielectric waveguides with low propagation
loss for signal transmission, radiating via a passive pinching antenna, which
is a small dielectric element applied to the waveguide. This paper first
proposes a physics-based hardware model for PASS, where the pinching antenna is
modeled as an open-ended directional coupler, and the electromagnetic field
behavior is analyzed using coupled-mode theory. A simplified signal model
characterizes the coupling effect between multiple antennas on the same
waveguide. Based on this, two power models are proposed: equal power and
proportional power models. Additionally, a transmit power minimization problem
is formulated/studied for the joint optimization of transmit and pinching
beamforming under both continuous and discrete pinching antenna activations.
Two algorithms are proposed to solve this multimodal optimization problem: the
penalty-based alternating optimization algorithm and a low-complexity
zero-forcing (ZF)-based algorithm. Numerical results show that 1) the ZF-based
low-complexity algorithm performs similarly to the penalty-based algorithm, 2)
PASS reduces transmit power by over 95% compared to conventional and massive
MIMO, 3) discrete activation causes minimal performance loss but requires a
dense antenna set to match continuous activation, and 4) the proportional power
model yields performance comparable to the equal power model.
|
2502.05919
|
Can Generative Agent-Based Modeling Replicate the Friendship Paradox in
Social Media Simulations?
|
cs.SI
|
Generative Agent-Based Modeling (GABM) is an emerging simulation paradigm
that combines the reasoning abilities of Large Language Models with traditional
Agent-Based Modeling to replicate complex social behaviors, including
interactions on social media. While prior work has focused on localized
phenomena such as opinion formation and information spread, its potential to
capture global network dynamics remains underexplored. This paper addresses
this gap by analyzing GABM-based social media simulations through the lens of
the Friendship Paradox (FP), a counterintuitive phenomenon where individuals,
on average, have fewer friends than their friends. We propose a GABM framework
for social media simulations, featuring generative agents that emulate real
users with distinct personalities and interests. Using Twitter datasets on the
US 2020 Election and the QAnon conspiracy, we show that the FP emerges
naturally in GABM simulations. Consistent with real-world observations, the
simulations unveil a hierarchical structure, where agents preferentially
connect with others displaying higher activity or influence. Additionally, we
find that infrequent connections primarily drive the FP, reflecting patterns in
real networks. These findings validate GABM as a robust tool for modeling
global social media phenomena and highlight its potential for advancing social
science by enabling nuanced analysis of user behavior.
|
2502.05923
|
ARISE: Iterative Rule Induction and Synthetic Data Generation for Text
Classification
|
cs.CL
|
We propose ARISE, a framework that iteratively induces rules and generates
synthetic data for text classification. We combine synthetic data generation
and automatic rule induction, via bootstrapping, to iteratively filter the
generated rules and data. We induce rules via inductive generalisation of
syntactic n-grams, enabling us to capture a complementary source of
supervision. These rules alone lead to performance gains in both in-context
learning (ICL) and fine-tuning (FT) settings. Similarly, augmented data from
ARISE alone improves model performance, outperforming
configurations that rely on complex methods like contrastive learning. Further,
our extensive experiments on various datasets covering three full-shot, eight
few-shot and seven multilingual variant settings demonstrate that the rules and
data we generate lead to performance improvements across these diverse domains
and languages.
|
2502.05924
|
Multi-Branch Collaborative Learning Network for Video Quality Assessment
in Industrial Video Search
|
cs.CV cs.IR
|
Video Quality Assessment (VQA) is vital for large-scale video retrieval
systems, aimed at identifying quality issues to prioritize high-quality videos.
In industrial systems, low-quality video characteristics fall into four
categories: visual-related issues like mosaics and black boxes, textual issues
from video titles and OCR content, and semantic issues like frame incoherence
and frame-text mismatch from AI-generated videos. Despite their prevalence in
industrial settings, these low-quality videos have been largely overlooked in
academic research, posing a challenge for accurate identification. To address
this, we introduce the Multi-Branch Collaborative Network (MBCN) tailored for
industrial video retrieval systems. MBCN features four branches, each designed
to tackle one of the aforementioned quality issues. After each branch
independently scores videos, we aggregate these scores using a weighted
approach and a squeeze-and-excitation mechanism to dynamically address quality
issues across different scenarios. We implement point-wise and pair-wise
optimization objectives to ensure score stability and reasonableness. Extensive
offline and online experiments on a world-level video search engine demonstrate
MBCN's effectiveness in identifying video quality issues, significantly
enhancing the retrieval system's ranking performance. Detailed experimental
analyses confirm the positive contribution of all four evaluation branches.
Furthermore, MBCN significantly improves recognition accuracy for low-quality
AI-generated videos compared to the baseline.
|
2502.05925
|
Sign-Symmetry Learning Rules are Robust Fine-Tuners
|
cs.LG cs.AI
|
Backpropagation (BP) has long been the predominant method for training neural
networks due to its effectiveness. However, numerous alternative approaches,
broadly categorized under feedback alignment, have been proposed, many of which
are motivated by the search for biologically plausible learning mechanisms.
Despite their theoretical appeal, these methods have consistently
underperformed compared to BP, leading to a decline in research interest. In
this work, we revisit the role of such methods and explore how they can be
integrated into standard neural network training pipelines. Specifically, we
propose fine-tuning BP-pre-trained models using Sign-Symmetry learning rules
and demonstrate that this approach not only maintains performance parity with
BP but also enhances robustness. Through extensive experiments across multiple
tasks and benchmarks, we establish the validity of our approach. Our findings
introduce a novel perspective on neural network training and open new research
directions for leveraging biologically inspired learning rules in deep
learning.
|
2502.05926
|
A Generative Framework for Bidirectional Image-Report Understanding in
Chest Radiography
|
eess.IV cs.CL cs.CV
|
The rapid advancements in large language models (LLMs) have unlocked their
potential for multimodal tasks, where text and visual data are processed
jointly. However, applying LLMs to medical imaging, particularly for chest
X-rays (CXR), poses significant challenges due to the need for precise
visual-textual alignment and the preservation of critical diagnostic details.
In this paper, we propose Multi-Stage Adaptive Vision-Language Tuning (MAViLT),
a novel framework designed to enhance multimodal reasoning and generation for
CXR understanding. MAViLT incorporates a clinical gradient-weighted
tokenization process and a hierarchical fine-tuning strategy, enabling it to
generate accurate radiology reports, synthesize realistic CXRs from text, and
answer vision-based clinical questions. We evaluate MAViLT on two benchmark
datasets, MIMIC-CXR and Indiana University CXR, achieving state-of-the-art
results across all tasks. Human evaluations further validate the clinical
relevance and utility of MAViLT, making it a robust tool for real-world medical
applications. This work demonstrates the feasibility of leveraging LLMs for
multimodal medical imaging while addressing key challenges in vision-language
integration.
|
2502.05928
|
ClinKD: Cross-Modal Clinic Knowledge Distiller For Multi-Task Medical
Images
|
cs.CV
|
Med-VQA (Medical Visual Question Answering) is a crucial subtask within the
broader VQA (Visual Question Answering) domain. This task requires a visual
question answering system to analyze the provided image and corresponding
question, offering reasonable analysis and suggestions to assist medical
professionals in making pathological diagnoses, or ideally, enabling the system
to independently provide correct diagnoses. Furthermore, more advanced Med-VQA
tasks involve Referring and Grounding, which not only require the system to
accurately comprehend medical images but also to pinpoint specific biological
locations within those images. While many large pre-trained models have
demonstrated substantial VQA capabilities, challenges persist in the medical
imaging domain. The intricacy of biological features in medical images and the
scarcity of high-quality medical image datasets, combined with the fact that
current models are not tailored for the medical field in terms of architecture
and training paradigms, hinder the full exploitation of model generalization.
This results in issues such as hallucination in Visual Grounding. In this
paper, we introduce the ClinKD model, which incorporates modifications to model
position encoding and a diversified training process. Initially, we enhance the
model's ability to perceive image and modality variations by using Med-CLIP
Guided Rotary Position Embedding. Subsequently, we leverage distillation to
provide prior knowledge to the model before using complete training data.
Additionally, the feedback-based training process during the formal training
phase further enhances data utilization. Notably, under unchanged evaluation
protocols, we achieve a new state-of-the-art performance on the Med-GRIT-270k
dataset, and the Med-CLIP Guided Rotary Position Embedding approach presents
potential for generalizing to universal model position encoding.
|
2502.05931
|
Protecting Intellectual Property of EEG-based Neural Networks with
Watermarking
|
cs.LG cs.AI cs.CR
|
EEG-based neural networks, pivotal in medical diagnosis and brain-computer
interfaces, face significant intellectual property (IP) risks due to their
reliance on sensitive neurophysiological data and resource-intensive
development. Current watermarking methods, particularly those using abstract
trigger sets, lack robust authentication and fail to address the unique
challenges of EEG models. This paper introduces a cryptographic wonder
filter-based watermarking framework tailored for EEG-based neural networks.
Leveraging collision-resistant hashing and public-key encryption, the wonder
filter embeds the watermark during training, ensuring minimal distortion ($\leq
5\%$ drop in EEG task accuracy) and high reliability (100\% watermark
detection). The framework is rigorously evaluated against adversarial attacks,
including fine-tuning, transfer learning, and neuron pruning. Results
demonstrate persistent watermark retention, with classification accuracy for
watermarked states remaining above 90\% even after aggressive pruning, while
primary task performance degrades faster, deterring removal attempts. Piracy
resistance is validated by the inability to embed secondary watermarks without
severe accuracy loss ($>10\%$ in EEGNet and CCNN models). Cryptographic
hashing ensures authentication, reducing brute-force attack success
probabilities. Evaluated on the DEAP dataset across models (CCNN, EEGNet,
TSception), the method achieves $>99.4\%$ null-embedding accuracy, effectively
eliminating false positives. By integrating wonder filters with EEG-specific
adaptations, this work bridges a critical gap in IP protection for
neurophysiological models, offering a secure, tamper-proof solution for
healthcare and biometric applications. The framework's robustness against
adversarial modifications underscores its potential to safeguard sensitive EEG
models while maintaining diagnostic utility.
|
2502.05932
|
Skill Expansion and Composition in Parameter Space
|
cs.LG cs.AI cs.RO
|
Humans excel at reusing prior knowledge to address new challenges and at
developing skills while solving problems. This paradigm is becoming increasingly
popular in the development of autonomous agents, as it yields systems that can
self-evolve in response to new challenges, much as humans do. However,
previous methods suffer from limited training efficiency when expanding new
skills and fail to fully leverage prior knowledge to facilitate new task
learning. In this paper, we propose Parametric Skill Expansion and Composition
(PSEC), a new framework designed to iteratively evolve the agents' capabilities
and efficiently address new challenges by maintaining a manageable skill
library. This library can progressively integrate skill primitives as
plug-and-play Low-Rank Adaptation (LoRA) modules in parameter-efficient
finetuning, facilitating efficient and flexible skill expansion. This structure
also enables the direct skill compositions in parameter space by merging LoRA
modules that encode different skills, leveraging shared information across
skills to effectively program new skills. Based on this, we propose a
context-aware module to dynamically activate different skills to
collaboratively handle new tasks. Empowering diverse applications including
multi-objective composition, dynamics shift, and continual policy shift, the
results on D4RL, DSRL benchmarks, and the DeepMind Control Suite show that PSEC
exhibits superior capacity to leverage prior knowledge to efficiently tackle
new challenges and to expand its skill library to evolve its capabilities.
Project website: https://ltlhuuu.github.io/PSEC/.
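The parameter-space skill composition described above can be illustrated with a minimal NumPy sketch; the skill names, shapes, scaling, and the softmax gate below are illustrative assumptions, not details taken from the PSEC implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                        # hidden size, LoRA rank (toy values)

W_base = rng.normal(size=(d, d))   # frozen base weight of one layer

# Each skill primitive is a plug-and-play low-rank pair (B, A),
# contributing a parameter delta  B @ A  as in LoRA.
skills = {
    "navigate": (0.1 * rng.normal(size=(d, r)), 0.1 * rng.normal(size=(r, d))),
    "grasp":    (0.1 * rng.normal(size=(d, r)), 0.1 * rng.normal(size=(r, d))),
}

def compose(weights):
    """Merge skills directly in parameter space with per-skill weights."""
    W = W_base.copy()
    for name, w in weights.items():
        B, A = skills[name]
        W = W + w * (B @ A)
    return W

# A context-aware gate (here just a softmax over hypothetical task scores)
# decides how strongly each skill is activated for the current task.
scores = np.array([2.0, 0.5])
gate = np.exp(scores) / np.exp(scores).sum()
W_task = compose(dict(zip(skills, gate)))
```

Because the merge happens on the low-rank deltas only, new behaviours can be obtained without retraining the base model.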
|
2502.05933
|
Learning to Substitute Words with Model-based Score Ranking
|
cs.CL cs.AI
|
Smart word substitution aims to enhance sentence quality by improving word
choices; however, current benchmarks rely on human-labeled data. Since word
choices are inherently subjective, ground-truth word substitutions generated by
a small group of annotators are often incomplete and likely not generalizable.
To circumvent this issue, we instead employ a model-based score (BARTScore) to
quantify sentence quality, thus forgoing the need for human annotations.
Specifically, we use this score to define a distribution for each word
substitution, allowing one to test whether a substitution is statistically
superior relative to others. In addition, we propose a loss function that
directly optimizes the alignment between model predictions and sentence scores,
while also enhancing the overall quality score of a substitution. Crucially,
model learning no longer requires human labels, thus avoiding the cost of
annotation while maintaining the quality of the text modified with
substitutions. Experimental results show that the proposed approach outperforms
both masked language models (BERT, BART) and large language models (GPT-4,
LLaMA). The source code is available at
https://github.com/Hyfred/Substitute-Words-with-Ranking.
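The ranking step can be sketched in a few lines; the `sentence_score` function below is a toy stand-in for the model-based score (the paper uses BARTScore), and the sentence and candidate list are invented for illustration:

```python
# Toy stand-in scorer: the paper uses BARTScore; this hypothetical scorer
# simply rewards shorter average word length, purely for illustration.
def sentence_score(sentence):
    words = sentence.split()
    return -sum(len(w) for w in words) / len(words)

def rank_substitutions(sentence, target, candidates):
    """Score the sentence produced by each candidate substitution and rank
    candidates by the resulting model-based quality score."""
    scored = [(sentence_score(sentence.replace(target, cand)), cand)
              for cand in candidates]
    return sorted(scored, reverse=True)

ranking = rank_substitutions(
    "the results were very good", "good", ["excellent", "fine", "ok"])
best = ranking[0][1]
```

No human labels enter the loop: the scorer alone defines which substitution is preferred.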
|
2502.05934
|
Barriers and Pathways to Human-AI Alignment: A Game-Theoretic Approach
|
cs.AI cs.CC cs.GT cs.LG cs.MA
|
Under what conditions can capable AI agents efficiently align their actions
with human preferences? More specifically, when they are proficient enough to
collaborate with us, how long does coordination take, and when is it
computationally feasible? These foundational questions of AI alignment help
define what makes an AI agent ``sufficiently safe'' and valuable to humans.
Since such generally capable systems do not yet exist, a theoretical analysis
is needed to establish when guarantees hold -- and what they even are.
We introduce a game-theoretic framework that generalizes prior alignment
approaches with fewer assumptions, allowing us to analyze the computational
complexity of alignment across $M$ objectives and $N$ agents, providing both
upper and lower bounds. Unlike previous work, which often assumes common
priors, idealized communication, or implicit tractability, our framework
formally characterizes the difficulty of alignment under minimal assumptions.
Our main result shows that even when agents are fully rational and
computationally \emph{unbounded}, alignment can be achieved with high
probability in time \emph{linear} in the task space size. Therefore, in
real-world settings, where task spaces are often \emph{exponential} in input
length, this remains impractical. More strikingly, our lower bound demonstrates
that alignment is \emph{impossible} to speed up when scaling to exponentially
many tasks or agents, highlighting a fundamental computational barrier to
scalable alignment.
Relaxing these idealized assumptions, we study \emph{computationally bounded}
agents with noisy messages (representing obfuscated intent), showing that while
alignment can still succeed with high probability, it incurs additional
\emph{exponential} slowdowns in the task space size, number of agents, and
number of tasks.
We conclude by identifying conditions that make alignment more feasible.
|
2502.05935
|
Interactive Inference: A Neuromorphic Theory of Human-Computer
Interaction
|
cs.HC cs.IT math.IT
|
Neuromorphic HCI is a new theoretical approach to designing better UX
inspired by the neurophysiology of the brain. Here, we apply the
neuroscientific theory of Active Inference to HCI, postulating that users
perform Bayesian inference on progress and goal distributions to predict their
next action (Interactive Inference). We show how Bayesian surprise between goal
and progress distributions follows a mean square error function of the
signal-to-noise ratio (SNR) of the task. However, capacity to process Bayesian
surprise follows the logarithm of SNR, and errors occur when average capacity
is exceeded. Our model allows the quantitative analysis of performance and
error in one framework with real-time estimation of mental load. We show
through mathematical theorems how three basic laws of HCI, Hick's Law, Fitts'
Law and the Power Law fit our model. We then test the validity of the general
model by empirically measuring how well it predicts human performance in a car
following task. Results suggest that driver processing capacity indeed is a
logarithmic function of the SNR of the distance to a lead car. This positive
result provides initial evidence that Interactive Inference can work as a
new theoretical underpinning for HCI, deserving further exploration.
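A toy numeric reading of this model, assuming Gaussian goal and progress distributions and a Shannon-style logarithmic capacity (both simplifying assumptions for illustration; the paper's exact formulation may differ):

```python
import math

def bayesian_surprise(mu_goal, mu_progress, sigma):
    """KL divergence between two equal-variance Gaussians: a mean-square-error
    function of the gap-to-noise ratio."""
    return (mu_goal - mu_progress) ** 2 / (2 * sigma ** 2)

def capacity(snr):
    """Processing capacity grows only logarithmically in SNR."""
    return 0.5 * math.log(1 + snr)

sigma = 1.0
for gap in (0.5, 2.0, 4.0):
    snr = (gap / sigma) ** 2
    surprise = bayesian_surprise(gap, 0.0, sigma)
    overload = surprise > capacity(snr)   # errors predicted once capacity is exceeded
```

Since surprise grows quadratically in the gap while capacity grows only logarithmically, overload (and hence error) is inevitable for large enough goal-progress mismatch.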
|
2502.05937
|
A Semi-Supervised Text Generation Framework Combining a Deep Transformer
and a GAN
|
cs.CL cs.AI
|
This paper introduces a framework that connects a deep generative pre-trained
Transformer language model with a generative adversarial network for
semi-supervised text generation. Specifically, the proposed 24-layer model is
first pre-trained without supervision on a large and diverse text corpus. Then
a simple GAN architecture for synthetic text generation is introduced, and
Gumbel-Softmax is applied to handle the discreteness of tokens. The paper also
shows a semi-supervised approach where real data is augmented with GAN samples,
which is further used to fine-tune the Transformer model on the merged dataset.
Detailed theoretical derivations are also included, outlining the proof of the
min-max objective function, and an extensive discussion of the Gumbel-Softmax
reparameterization trick.
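The Gumbel-Softmax relaxation mentioned above can be sketched as follows; the vocabulary size, logits, and straight-through hard sample are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Differentiable relaxation of sampling a discrete token: add Gumbel
    noise to the logits, then apply a temperature-controlled softmax."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

logits = np.array([1.0, 2.0, 0.5])        # generator scores over a 3-token vocab
soft = gumbel_softmax(logits, tau=0.5)    # soft one-hot passed to the discriminator
hard = np.zeros_like(soft)
hard[soft.argmax()] = 1.0                 # straight-through hard sample
```

As the temperature `tau` decreases, the soft sample approaches a one-hot vector while gradients can still flow through the softmax.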
|
2502.05938
|
Energy-Efficient Autonomous Aerial Navigation with Dynamic Vision
Sensors: A Physics-Guided Neuromorphic Approach
|
cs.RO
|
Vision-based object tracking is a critical component for achieving autonomous
aerial navigation, particularly for obstacle avoidance. Neuromorphic Dynamic
Vision Sensors (DVS) or event cameras, inspired by biological vision, offer a
promising alternative to conventional frame-based cameras. These cameras can
detect changes in intensity asynchronously, even in challenging lighting
conditions, with a high dynamic range and resistance to motion blur. Spiking
neural networks (SNNs) are increasingly used to process these event-based
signals efficiently and asynchronously. Meanwhile, physics-based artificial
intelligence (AI) provides a means to incorporate system-level knowledge into
neural networks via physical modeling. This enhances robustness and energy
efficiency, and provides symbolic explainability. In this work, we present a
neuromorphic navigation framework for autonomous drone navigation. The focus is
on detecting and navigating through moving gates while avoiding collisions. We
use event cameras for detecting moving objects through a shallow SNN
architecture in an unsupervised manner. This is combined with a lightweight
energy-aware physics-guided neural network (PgNN) trained with depth inputs to
predict optimal flight times, generating near-minimum energy paths. The system
is implemented in the Gazebo simulator and integrates a sensor-fused
vision-to-planning neuro-symbolic framework built with the Robot Operating
System (ROS) middleware. This work highlights the future potential of
integrating event-based vision with physics-guided planning for
energy-efficient autonomous navigation, particularly for low-latency
decision-making.
|
2502.05943
|
Continual Adaptation for Autonomous Driving with the Mixture of
Progressive Experts Network
|
cs.RO
|
Learning-based autonomous driving requires continuous integration of diverse
knowledge in complex traffic scenarios, yet existing methods exhibit significant
limitations in adaptive capabilities. Addressing this gap demands autonomous
driving systems that enable continual adaptation through dynamic adjustments to
evolving environmental interactions. This underscores the necessity for
enhanced continual learning capabilities to improve system adaptability. To
address these challenges, the paper introduces a dynamic progressive
optimization framework that facilitates adaptation to variations in dynamic
environments, achieved by integrating reinforcement learning and supervised
learning for data aggregation. Building on this framework, we propose the
Mixture of Progressive Experts (MoPE) network. The proposed method selectively
activates multiple expert models based on the distinct characteristics of each
task and progressively refines the network architecture to facilitate
adaptation to new tasks. Simulation results show that the MoPE model
outperforms behavior cloning methods, achieving up to a 7.8% performance
improvement in intricate urban road environments.
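The mixture-of-experts idea with progressive expansion can be sketched minimally; the linear experts, softmax gate, and dimensions below are invented for illustration and do not reflect the actual MoPE architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

class MixtureOfExperts:
    """Toy sketch: linear policy experts plus a softmax gate over task features."""
    def __init__(self, obs_dim, act_dim):
        self.obs_dim, self.act_dim = obs_dim, act_dim
        self.experts = []                       # experts accumulate per task
        self.gate_w = np.zeros((0, obs_dim))    # one gating row per expert

    def add_expert(self):
        # Progressive expansion: add a new expert and gating row; previously
        # trained experts are left untouched (no catastrophic overwrite).
        self.experts.append(rng.normal(size=(self.act_dim, self.obs_dim)))
        self.gate_w = np.vstack([self.gate_w, rng.normal(size=(1, self.obs_dim))])

    def act(self, obs):
        scores = self.gate_w @ obs
        g = np.exp(scores - scores.max())
        g /= g.sum()
        return sum(gi * (E @ obs) for gi, E in zip(g, self.experts))

moe = MixtureOfExperts(obs_dim=4, act_dim=2)
moe.add_expert()   # expert learned on an earlier task
moe.add_expert()   # expert added when a new task arrives
action = moe.act(np.ones(4))
```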
|
2502.05944
|
Multi-granular Training Strategies for Robust Multi-hop Reasoning Over
Noisy and Heterogeneous Knowledge Sources
|
cs.CL
|
Multi-source multi-hop question answering (QA) represents a challenging task
in natural language processing due to the need for dynamic integration of
heterogeneous knowledge sources and multi-step reasoning. Existing methods
often suffer from cascading errors, insufficient handling of knowledge
conflicts, and computational inefficiency. In this paper, we propose Adaptive
Multi-source Knowledge-Oriented Reasoning (AMKOR), a generative framework that
leverages large language models (LLMs) to dynamically fuse parametric and
retrieved knowledge while exploring reasoning trajectories using probabilistic
beam reasoning. AMKOR is further enhanced by a multi-granular learning
strategy, optimizing both local reasoning steps and global answer accuracy.
Experiments conducted on four widely-used multi-hop QA datasets, including
HotpotQA and MuSiQue, demonstrate that AMKOR achieves state-of-the-art
performance, significantly outperforming baseline methods on both reasoning
accuracy and robustness. Additional analyses confirm its scalability,
adaptability to noisy knowledge, and superior ability to handle complex
multi-hop tasks. This work establishes a new benchmark for multi-source
multi-hop QA by effectively combining reasoning quality and efficiency.
|
2502.05945
|
"Let the AI conspiracy begin..." Language Model coordination is just one
inference-intervention away
|
cs.CL cs.AI
|
In this work, we introduce a straightforward and effective methodology to
steer large language model behaviour capable of bypassing learned alignment
goals. We employ inference-time activation shifting, which is effective
without additional training. Following prior studies, we derive intervention
directions from activation differences in contrastive pairs of model outputs,
which represent the desired and undesired behaviour. By prompting the model to
include multiple-choice answers in its response, we can automatically evaluate
the sensitivity of the model output to steering efforts at individual attention heads.
We demonstrate that interventions on these heads generalize well to open-ended
answer generation in the challenging "AI coordination" dataset. In this
dataset, models must choose between assisting another AI or adhering to
ethical, safe, and unharmful behaviour. Our fine-grained interventions lead
Llama 2 to prefer coordination with other AIs over following established
alignment goals. Additionally, this approach enables stronger interventions
than those applied to whole model layers, preserving the overall cohesiveness
of the output. The simplicity of our method highlights the shortcomings of
current alignment strategies and points to potential future research
directions, as concepts like "AI coordination" can be influenced by selected
attention heads.
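The contrastive activation-steering recipe can be sketched in a few lines; the toy activations and the scale `alpha` are assumptions, and a real intervention would target the output of a specific attention head inside the transformer:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16   # hidden size of one attention head (toy)

# Contrastive pairs: activations recorded for desired vs. undesired outputs.
desired = rng.normal(loc=0.5, size=(32, d))
undesired = rng.normal(loc=-0.5, size=(32, d))

# Intervention direction = normalized difference of mean activations.
direction = desired.mean(axis=0) - undesired.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(head_activation, alpha=2.0):
    """Inference-time shift added to one attention head's output."""
    return head_activation + alpha * direction

h = rng.normal(size=d)
h_steered = steer(h)
shift = (h_steered - h) @ direction   # projection of the applied shift
```

Restricting the shift to individual heads rather than whole layers is what allows stronger interventions while preserving output coherence.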
|
2502.05947
|
Acceleration Multiple Heads Decoding for LLM via Dynamic Tree Attention
|
cs.CV cs.CL
|
Multiple heads decoding accelerates the inference of Large Language Models
(LLMs) by predicting the next several tokens simultaneously. It generates and
verifies multiple candidate sequences in parallel via tree attention with a
fixed structure. In this paper, we replace the fixed tree attention with
dynamic tree attention on multiple head decoding, specifically in the context
of MEDUSA. We propose a simple, low-complexity strategy to generate
candidates and construct the dynamic tree structure. Preliminary experiments
show that the proposed method improves the decoding efficiency of multiple head
decoding for LLMs while maintaining the generation quality. This result
demonstrates the potential for improvement of multiple head decoding in
candidate generation.
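One simple way to build such a dynamic candidate tree, sketched here as a greedy joint-probability expansion (the per-head distributions and budget are invented; the paper's exact construction may differ):

```python
import heapq

# Per-head next-token distributions from hypothetical MEDUSA-style heads
# (head k predicts the token at offset k from the current position).
head_probs = [
    {"the": 0.6, "a": 0.3, "an": 0.1},
    {"cat": 0.5, "dog": 0.4, "car": 0.1},
]

def dynamic_tree(head_probs, budget=4):
    """Keep the `budget` candidate prefixes with the highest joint
    probability, instead of a tree with a fixed, input-independent shape."""
    beams = [((), 1.0)]
    for probs in head_probs:
        expanded = [(prefix + (tok,), p * q)
                    for prefix, p in beams for tok, q in probs.items()]
        beams = heapq.nlargest(budget, expanded, key=lambda b: b[1])
    return beams

candidates = dynamic_tree(head_probs, budget=4)
# The surviving candidates would then be verified in parallel via tree attention.
```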
|
2502.05949
|
Verifying Proportionality in Temporal Voting
|
cs.GT cs.AI
|
We study a model of temporal voting where there is a fixed time horizon, and
at each round the voters report their preferences over the available candidates
and a single candidate is selected. Prior work has adapted popular notions of
justified representation as well as voting rules that provide strong
representation guarantees from the multiwinner election setting to this model.
In our work, we focus on the complexity of verifying whether a given outcome
offers proportional representation. We show that in the temporal setting
verification is strictly harder than in multiwinner voting, but identify
natural special cases that enable efficient algorithms.
|
2502.05950
|
Survival Concept-Based Learning Models
|
cs.LG cs.AI stat.ML
|
Concept-based learning (CBL) enhances prediction accuracy and interpretability
by leveraging high-level, human-understandable concepts. However, existing CBL
frameworks do not address survival analysis tasks, which involve predicting
event times in the presence of censored data -- a common scenario in fields
like medicine and reliability analysis. To bridge this gap, we propose two
novel models: SurvCBM (Survival Concept-based Bottleneck Model) and SurvRCM
(Survival Regularized Concept-based Model), which integrate concept-based
learning with survival analysis to handle censored event time data. The models
employ the Cox proportional hazards model and the Beran estimator. SurvCBM is
based on the architecture of the well-known concept bottleneck model, offering
interpretable predictions through concept-based explanations. SurvRCM uses
concepts as regularization to enhance accuracy. Both models are trained
end-to-end and provide interpretable predictions in terms of concepts. Two
interpretability approaches are proposed: one leveraging the linear
relationship in the Cox model and another using an instance-based explanation
framework with the Beran estimator. Numerical experiments demonstrate that
SurvCBM outperforms SurvRCM and traditional survival models, underscoring the
importance and advantages of incorporating concept information. The code for
the proposed algorithms is publicly available.
|
2502.05951
|
Cyri: A Conversational AI-based Assistant for Supporting the Human User
in Detecting and Responding to Phishing Attacks
|
cs.HC cs.AI cs.CR
|
This work introduces Cyri, an AI-powered conversational assistant designed to
support a human user in detecting and analyzing phishing emails by leveraging
Large Language Models. Cyri has been designed to scrutinize emails for semantic
features used in phishing attacks, such as urgency and undesirable
consequences, using an approach that unifies features already established in
the literature with others identified through Cyri's own feature extraction
methodology. Cyri can be
directly plugged into a mail client or webmail, ensuring seamless integration
with the user's email workflow while maintaining data privacy through local
processing. By performing analyses on the user's machine, Cyri eliminates the
need to transmit sensitive email data over the internet, reducing associated
security risks. The Cyri user interface has been designed to reduce habituation
effects and enhance user engagement. It employs dynamic visual cues and
context-specific explanations to keep users alert and informed while handling
emails. Additionally, it allows users to explore identified malicious semantic
features both through conversation with the agent and visual exploration,
obtaining the advantages of both modalities for expert or non-expert users. It
also allows users to keep track of the conversation, supports the user in
resolving additional questions about either computed features or new parts of
the mail, and applies its detection on demand. To evaluate Cyri, we crafted a
comprehensive dataset of 420 phishing emails and 420 legitimate emails. Results
demonstrate high effectiveness in identifying critical phishing semantic
features fundamental to phishing detection. A user study involving 10
participants, both experts and non-experts, evaluated Cyri's effectiveness and
usability. Results indicated that Cyri significantly aided users in identifying
phishing emails and enhanced their understanding of phishing tactics.
|
2502.05954
|
Optimization under Attack: Resilience, Vulnerability, and the Path to
Collapse
|
cs.MA
|
Optimization is instrumental for improving operations of large-scale
socio-technical infrastructures of Smart Cities, for instance, energy and
traffic systems. In particular, understanding the performance of multi-agent
discrete-choice combinatorial optimization under distributed adversary attacks
is a compelling and underexplored problem, since multi-agent systems exhibit a
large number of remote control variables that can influence in an unprecedented
way the cost-effectiveness of distributed optimization heuristics. This paper
unravels for the first time the trajectories of distributed optimization from
resilience to vulnerability, and finally to collapse under varying adversary
influence. Using real-world data to emulate over 28 billion multi-agent
optimization scenarios, we exhaustively assess how the number of agents with
different adversarial severity and network positioning influences optimization
performance, including the influence on Pareto optimal points. With this novel
large-scale dataset, made openly available as a benchmark, we disentangle how
optimization remains resilient to adversaries and which adversary conditions
are required to make optimization vulnerable or collapsed. These new findings
can provide new insights for designing self-healing strategies for
fault-tolerance and fault-correction in adversarial distributed optimization
that have been missing so far.
|
2502.05957
|
AutoAgent: A Fully-Automated and Zero-Code Framework for LLM Agents
|
cs.AI cs.CL
|
Large Language Model (LLM) Agents have demonstrated remarkable capabilities
in task automation and intelligent decision-making, driving the widespread
adoption of agent development frameworks such as LangChain and AutoGen.
However, these frameworks predominantly serve developers with extensive
technical expertise - a significant limitation considering that only 0.03% of
the global population possesses the necessary programming skills. This stark
accessibility gap raises a fundamental question: Can we enable everyone,
regardless of technical background, to build their own LLM agents using natural
language alone? To address this challenge, we introduce AutoAgent, a
fully automated and highly self-developing framework that enables users to
create and deploy LLM agents through natural language alone. Operating as an
autonomous Agent Operating System, AutoAgent comprises four key components: i)
Agentic System Utilities, ii) LLM-powered Actionable Engine, iii) Self-Managing
File System, and iv) Self-Play Agent Customization module. This lightweight yet
powerful system enables efficient and dynamic creation and modification of
tools, agents, and workflows without coding requirements or manual
intervention. Beyond its code-free agent development capabilities, AutoAgent
also serves as a versatile multi-agent system for General AI Assistants.
Comprehensive evaluations on the GAIA benchmark demonstrate AutoAgent's
effectiveness in generalist multi-agent tasks, surpassing existing
state-of-the-art methods. Furthermore, AutoAgent's Retrieval-Augmented
Generation (RAG)-related capabilities have shown consistently superior
performance compared to many alternative LLM-based solutions.
|
2502.05959
|
Ensemble-Tight Second-Order Asymptotics and Exponents for Guessing-Based
Decoding with Abandonment
|
cs.IT math.IT
|
This paper considers guessing-based decoders with abandonment for discrete
memoryless channels in which all codewords have the same composition. This
class of decoders rank-orders all input sequences in the codebook's composition
class from ``closest'' to ``farthest'' from the channel output and then queries
them sequentially in that order for codebook membership. Decoding terminates
when a codeword is encountered or when a predetermined number of guesses is
reached, and decoding is abandoned. We derive ensemble-tight first-order
asymptotics for the code rate and abandonment rate, which show that
guessing-based decoding is more efficient than conventional testing-based
decoding whenever the capacity of the channel exceeds half the entropy of the
capacity-achieving input distribution. The main focus of this paper is on
refined asymptotics, specifically, second-order asymptotics, error exponents,
and strong converse exponents. The optimal second-order region is characterized
in terms of the minimum of the second-order code and abandonment rates. The
error (resp.\ strong converse) exponent is characterized in terms of the
minimum (resp.\ maximum) of the usual channel coding exponent and an
abandonment exponent, which turns out to be a special case of the exponent of
conditional almost-lossless source coding.
|
2502.05963
|
Redefining Robot Generalization Through Interactive Intelligence
|
cs.LG cs.AI cs.RO
|
Recent advances in large-scale machine learning have produced high-capacity
foundation models capable of adapting to a broad array of downstream tasks.
While such models hold great promise for robotics, the prevailing paradigm
still portrays robots as single, autonomous decision-makers, performing tasks
like manipulation and navigation, with limited human involvement. However, a
large class of real-world robotic systems, including wearable robotics (e.g.,
prostheses, orthoses, exoskeletons), teleoperation, and neural interfaces, are
semi-autonomous and require ongoing interactive coordination with human
partners, challenging single-agent assumptions. In this position paper, we
argue that robot foundation models must evolve to an interactive multi-agent
perspective in order to handle the complexities of real-time human-robot
co-adaptation. We propose a generalizable, neuroscience-inspired architecture
encompassing four modules: (1) a multimodal sensing module informed by
sensorimotor integration principles, (2) an ad-hoc teamwork model reminiscent
of joint-action frameworks in cognitive science, (3) a predictive world belief
model grounded in internal model theories of motor control, and (4) a
memory/feedback mechanism that echoes concepts of Hebbian and
reinforcement-based plasticity. Although illustrated through the lens of cyborg
systems, where wearable devices and human physiology are inseparably
intertwined, the proposed framework is broadly applicable to robots operating
in semi-autonomous or interactive contexts. By moving beyond single-agent
designs, our position emphasizes how foundation models in robotics can achieve
a more robust, personalized, and anticipatory level of performance.
|
2502.05964
|
Revisiting Gradient-based Uncertainty for Monocular Depth Estimation
|
cs.CV
|
Monocular depth estimation, similar to other image-based tasks, is prone to
erroneous predictions due to ambiguities in the image, for example, caused by
dynamic objects or shadows. For this reason, pixel-wise uncertainty assessment
is required for safety-critical applications to highlight the areas where the
prediction is unreliable. We address this in a post hoc manner and introduce
gradient-based uncertainty estimation for already trained depth estimation
models. To extract gradients without depending on the ground truth depth, we
introduce an auxiliary loss function based on the consistency of the predicted
depth and a reference depth. The reference depth, which acts as pseudo ground
truth, is in fact generated using a simple image or feature augmentation,
making our approach simple and effective. To obtain the final uncertainty
score, the derivatives w.r.t. the feature maps from single or multiple layers
are calculated using back-propagation. We demonstrate that our gradient-based
approach is effective in determining the uncertainty without re-training using
the two standard depth estimation benchmarks KITTI and NYU. In particular, for
models trained with monocular sequences and therefore most prone to
uncertainty, our method outperforms related approaches. In addition, we
publicly provide our code and models: https://github.com/jhornauer/GrUMoDepth
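The gradient-based score can be illustrated with a linear toy decoder, where the backward pass is written analytically (a real model would use autograd); the augmentation, dimensions, and linear head are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 32                            # feature-map channels (toy: one pixel)

w = rng.normal(size=k)            # frozen linear "decoder" head (stand-in)
f = rng.normal(size=k)            # features extracted from the input image

# Pseudo ground truth via a simple feature augmentation (a small perturbation
# here; the paper uses simple image or feature augmentations).
f_aug = f + 0.1 * rng.normal(size=k)

d_pred, d_ref = f @ w, f_aug @ w  # predicted depth and reference depth
loss = (d_pred - d_ref) ** 2      # auxiliary consistency loss

# Gradient of the loss w.r.t. the feature map, written analytically for the
# linear head (back-propagation in a real network); its magnitude serves as
# the pixel-wise uncertainty score.
grad_f = 2.0 * (d_pred - d_ref) * w
uncertainty = np.linalg.norm(grad_f)
```

No ground-truth depth and no re-training are needed: only a forward pass, an augmented forward pass, and one backward pass.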
|
2502.05966
|
Detection of Physiological Data Tampering Attacks with Quantum Machine
Learning
|
quant-ph cs.LG
|
The widespread use of cloud-based medical devices and wearable sensors has
made physiological data susceptible to tampering. These attacks can compromise
the reliability of healthcare systems which can be critical and
life-threatening. Detection of such data tampering is of immediate need.
Machine learning has been used to detect anomalies in datasets but the
performance of Quantum Machine Learning (QML) is still yet to be evaluated for
physiological sensor data. Thus, our study compares the effectiveness of QML
for detecting physiological data tampering, focusing on two types of white-box
attacks: data poisoning and adversarial perturbation. The results show that QML
models are better at identifying label-flipping attacks, achieving accuracy
rates of 75%-95% depending on the data and attack severity. This superior
performance is due to the ability of quantum algorithms to handle complex and
high-dimensional data. However, both QML and classical models struggle to
detect more sophisticated adversarial perturbation attacks, which subtly alter
data without changing its statistical properties. Although QML performed poorly
against this attack with around 45%-65% accuracy, it still outperformed
classical algorithms in some cases.
|
2502.05967
|
$\mu$nit Scaling: Simple and Scalable FP8 LLM Training
|
cs.LG
|
Large Language Model training with 8-bit floating point (FP8) formats
promises significant efficiency improvements, but reduced numerical precision
makes training challenging. It is currently possible to train in FP8 only if
one is willing to tune various hyperparameters, reduce model scale, or accept
the overhead of computing dynamic scale factors. We demonstrate simple,
scalable FP8 training that requires no dynamic scaling factors or special
hyperparameters, even at large model sizes. Our method, $\mu$nit Scaling
($\mu$S), also enables simple hyperparameter transfer across model widths,
matched numerics across training and inference, and other desirable properties.
$\mu$nit Scaling is straightforward to implement, consisting of a set of
minimal interventions based on a first-principles analysis of common
transformer operations. We validate our method by training models from 1B to
13B parameters, performing all hidden linear layer computations in FP8. We
achieve quality equal to higher precision baselines while also training up to
33% faster.
|
2502.05969
|
Asymptotic FDR Control with Model-X Knockoffs: Is Moments Matching
Sufficient?
|
stat.ML cs.LG math.ST stat.TH
|
We propose a unified theoretical framework for studying the robustness of the
model-X knockoffs framework by investigating the asymptotic false discovery
rate (FDR) control of the practically implemented approximate knockoffs
procedure. This procedure deviates from the model-X knockoffs framework by
substituting the true covariate distribution with a user-specified distribution
that can be learned using in-sample observations. By replacing the
distributional exchangeability condition of the model-X knockoff variables with
three conditions on the approximate knockoff statistics, we establish that the
approximate knockoffs procedure achieves the asymptotic FDR control. Using our
unified framework, we further prove that an arguably most popularly used
knockoff variable generation method--the Gaussian knockoffs generator based on
the first two moments matching--achieves the asymptotic FDR control when the
two-moment-based knockoff statistics are employed in the knockoffs inference
procedure. For the first time in the literature, our theoretical results
justify formally the effectiveness and robustness of the Gaussian knockoffs
generator. Simulation and real data examples are conducted to validate the
theoretical findings.
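The two-moment-matching Gaussian knockoffs generator analyzed here can be sketched from the standard model-X construction; the toy covariance and the equi-correlated choice of $s$ below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

p, n = 3, 500
mu = np.zeros(p)
Sigma = 0.3 * np.ones((p, p)) + 0.7 * np.eye(p)   # toy covariate covariance
X = rng.multivariate_normal(mu, Sigma, size=n)

# Equi-correlated construction: s = min(1, 2 * lambda_min(Sigma)) keeps the
# joint covariance of (X, X_knock) positive semidefinite.
s = min(1.0, 2.0 * np.linalg.eigvalsh(Sigma)[0])
D = s * np.eye(p)
Sinv = np.linalg.inv(Sigma)

# Conditional law of the knockoffs given X (first two moments matched):
#   mean = mu + (Sigma - D) Sigma^{-1} (x - mu),  cov = 2D - D Sigma^{-1} D
cond_mean = mu + (X - mu) @ (np.eye(p) - Sinv @ D)
cond_cov = 2.0 * D - D @ Sinv @ D
L = np.linalg.cholesky(cond_cov)
X_knock = cond_mean + rng.normal(size=(n, p)) @ L.T
```

By construction, swapping any coordinate of `X` with the corresponding coordinate of `X_knock` leaves the first two joint moments unchanged, which is exactly the matching condition the paper shows suffices for asymptotic FDR control.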
|
2502.05970
|
Known Unknowns: Out-of-Distribution Property Prediction in Materials and
Molecules
|
cs.LG cond-mat.mtrl-sci cs.CE physics.chem-ph
|
Discovery of high-performance materials and molecules requires identifying
extremes with property values that fall outside the known distribution.
Therefore, the ability to extrapolate to out-of-distribution (OOD) property
values is critical for both solid-state materials and molecular design. Our
objective is to train predictor models that extrapolate zero-shot to higher
ranges than in the training data, given the chemical compositions of solids or
molecular graphs and their property values. We propose using a transductive
approach to OOD property prediction, achieving improvements in prediction
accuracy. In particular, the True Positive Rate (TPR) of OOD classification of
materials and molecules improved by 3x and 2.5x, respectively, and precision
improved by 2x and 1.5x compared to non-transductive baselines. Our method
leverages analogical input-target relations in the training and test sets,
enabling generalization beyond the training target support, and can be applied
to any other material and molecular tasks.
|
2502.05972
|
Mechanic Modeling and Nonlinear Optimal Control of Actively Articulated
Suspension of Mobile Heavy-Duty Manipulators
|
cs.RO
|
This paper presents the analytic modeling of mobile heavy-duty manipulators
with actively articulated suspension and its optimal control to maximize its
static and dynamic stabilization. By adopting the screw theory formalism, we
consider the suspension mechanism as a rigid multibody composed of two closed
kinematic chains. This mechanical modeling allows us to compute the spatial
inertial parameters of the whole platform as a function of the suspension's
linear actuators through the articulated-body inertia method. Our solution
enhances the computation accuracy of the wheels' reaction normal forces by
providing an exact solution for the center of mass and inertia tensor of the
mobile manipulator. Moreover, these inertial parameters and the normal forces
are used to define metrics of both static and dynamic stability of the mobile
manipulator, and formulate a nonlinear programming problem that optimizes these
metrics to generate an optimal stabilizing motion that prevents the platform
from overturning; the resulting optimal actuator position is tracked with a
state-feedback hydraulic valve control. We demonstrate our method's efficiency
in terms of C++ computational speed, accuracy and performance improvement by
simulating a 7 degrees-of-freedom heavy-duty parallel-serial mobile manipulator
with four wheels and actively articulated suspension.
|
2502.05974
|
Decision Making in Hybrid Environments: A Model Aggregation Approach
|
cs.LG stat.ML
|
Recent work by Foster et al. (2021, 2022, 2023) and Xu and Zeevi (2023)
developed the framework of decision estimation coefficient (DEC) that
characterizes the complexity of general online decision making problems and
provides a general algorithm design principle. These works, however, either
focus on the pure stochastic regime where the world remains fixed over time, or
the pure adversarial regime where the world arbitrarily changes over time. For
the hybrid regime where the dynamics of the world is fixed while the reward
arbitrarily changes, they only give pessimistic bounds on the decision
complexity. In this work, we propose a general extension of DEC that more
precisely characterizes this case. Besides applications in special cases, our
framework leads to a flexible algorithm design where the learner learns over
subsets of the hypothesis set, trading estimation complexity with decision
complexity, which could be of independent interest. Our work covers model-based
learning and model-free learning in the hybrid regime, with a newly proposed
extension of the bilinear classes (Du et al., 2021) to the adversarial-reward
case. We also recover some existing model-free learning results in the pure
stochastic regime.
|
2502.05979
|
VFX Creator: Animated Visual Effect Generation with Controllable
Diffusion Transformer
|
cs.CV
|
Crafting magic and illusions is one of the most thrilling aspects of
filmmaking, with visual effects (VFX) serving as the powerhouse behind
unforgettable cinematic experiences. While recent advances in generative
artificial intelligence have driven progress in generic image and video
synthesis, the domain of controllable VFX generation remains relatively
underexplored. In this work, we propose a novel paradigm for animated VFX
generation as image animation, where dynamic effects are generated from
user-friendly textual descriptions and static reference images. Our work makes
two primary contributions: (i) Open-VFX, the first high-quality VFX video
dataset spanning 15 diverse effect categories, annotated with textual
descriptions, instance segmentation masks for spatial conditioning, and
start-end timestamps for temporal control. (ii) VFX Creator, a simple yet
effective controllable VFX generation framework based on a Video Diffusion
Transformer. The model incorporates a spatial and temporal controllable LoRA
adapter, requiring minimal training videos. Specifically, a plug-and-play mask
control module enables instance-level spatial manipulation, while tokenized
start-end motion timestamps embedded in the diffusion process, alongside the
text encoder, allow precise temporal control over effect timing and pace.
Extensive experiments on the Open-VFX test set demonstrate the superiority of
the proposed system in generating realistic and dynamic effects, achieving
state-of-the-art performance and generalization ability in both spatial and
temporal controllability. Furthermore, we introduce a specialized metric to
evaluate the precision of temporal control. By bridging traditional VFX
techniques with generative approaches, VFX Creator unlocks new possibilities
for efficient and high-quality video effect generation, making advanced VFX
accessible to a broader audience.
|
2502.05980
|
Speech to Speech Translation with Translatotron: A State of the Art
Review
|
cs.CL cs.AI
|
Cascade-based speech-to-speech translation has long been considered the
benchmark, but it is plagued by issues such as high translation latency and
compound errors. These issues arise because a cascade-based method chains
several components: speech recognition, speech-to-text translation, and
finally text-to-speech synthesis. Translatotron, a sequence-to-sequence direct
speech-to-speech translation model, was designed by Google to address the
compound errors associated with the cascade model. Today there are three
versions of the model: Translatotron 1, Translatotron 2, and Translatotron 3.
The first version was designed as a proof of concept to show that direct
speech-to-speech translation was possible; it was found to be less effective
than the cascade model but produced promising results. Translatotron 2 was an
improved version of Translatotron 1 with results comparable to the cascade
model. Translatotron 3, the latest version, outperforms the cascade model in
some respects. In this paper, a complete review of
speech-to-speech translation will be presented, with a particular focus on all
the versions of Translatotron models. We will also show that Translatotron is
the best model to bridge the language gap between African Languages and other
well-formalized languages.
|
2502.05982
|
HamRaz: A Culture-Based Persian Conversation Dataset for Person-Centered
Therapy Using LLM Agents
|
cs.CL
|
This paper presents HamRaz, a novel Persian-language mental health dataset
designed for Person-Centered Therapy (PCT) using Large Language Models (LLMs).
Despite the growing application of LLMs in AI-driven psychological counseling,
existing datasets predominantly focus on Western and East Asian contexts,
overlooking cultural and linguistic nuances essential for effective
Persian-language therapy. To address this gap, HamRaz combines script-based
dialogues with adaptive LLM role-playing, ensuring coherent and dynamic therapy
interactions. We also introduce HamRazEval, a dual evaluation framework that
measures conversational quality and therapeutic effectiveness using General
Dialogue Metrics and the Barrett-Lennard Relationship Inventory (BLRI).
Experimental results show HamRaz outperforms conventional Script Mode and
Two-Agent Mode, producing more empathetic, context-aware, and realistic therapy
sessions. By releasing HamRaz, we contribute a culturally adapted, LLM-driven
resource to advance AI-powered psychotherapy research in diverse communities.
|
2502.05986
|
Preventing Rogue Agents Improves Multi-Agent Collaboration
|
cs.CL cs.MA
|
Multi-agent systems, where specialized agents collaborate to solve a shared
task, hold great potential, from increased modularity to simulating complex
environments. However, they also have a major caveat -- a single agent can
cause the entire system to fail. Consider a simple game where the knowledge to
solve the task is distributed between agents, which share information in a
communication channel. At each round, any of the agents can terminate the game
and make the final prediction, even if they are uncertain about the outcome of
their action. Detection of such rogue agents $\textit{before they act}$ may
prevent the system's failure. In this work, we propose to $\textit{monitor}$
agents during action prediction and $\textit{intervene}$ when a future error is
likely to occur. To test our approach, we introduce WhoDunitEnv, a multi-agent
collaboration environment that allows modular control over task complexity and
communication structure. Experiments on two variants of WhoDunitEnv and the
GovSim environment for resource sustainability show that our approach leads to
substantial performance gains up to 17.4% and 20%, respectively. Moreover, a
thorough analysis shows that our monitors successfully identify critical points
of agent confusion and our interventions effectively stop agent errors from
propagating.
|
2502.05988
|
SNAT-YOLO: Efficient Cross-Layer Aggregation Network for Edge-Oriented
Gangue Detection
|
cs.CV
|
To address the issues of slow detection speed, low accuracy, difficulty in
deployment on industrial edge devices, and large parameter and computational
requirements in deep learning-based coal gangue target detection methods, we
propose a lightweight coal gangue target detection algorithm based on an
improved YOLOv11. First, we use the lightweight network ShuffleNetV2 as the
backbone to enhance detection speed. Second, we introduce a lightweight
downsampling operation, ADown, which reduces model complexity while improving
average detection accuracy. Third, we improve the C2PSA module in YOLOv11 by
incorporating the Triplet Attention mechanism, resulting in the proposed
C2PSA-TriAtt module, which enhances the model's ability to focus on different
dimensions of images. Fourth, we propose the Inner-FocalerIoU loss function to
replace the existing CIoU loss function. Experimental results show that our
model achieves a detection accuracy of 99.10% in coal gangue detection tasks,
reduces the model size by 38%, the number of parameters by 41%, and the
computational cost by 40%, while decreasing the average detection time per
image by 1 ms. The improved model demonstrates enhanced detection speed and
accuracy, making it suitable for deployment on industrial edge mobile devices,
thus contributing positively to coal processing and the efficient utilization
of coal resources.
|
2502.05994
|
Diffusion Models for Inverse Problems in the Exponential Family
|
stat.ML cs.LG
|
Diffusion models have emerged as powerful tools for solving inverse problems,
yet prior work has primarily focused on observations with Gaussian measurement
noise, restricting their use in real-world scenarios. This limitation persists
due to the intractability of the likelihood score, which until now has only
been approximated in the simpler case of Gaussian likelihoods. In this work, we
extend diffusion models to handle inverse problems where the observations
follow a distribution from the exponential family, such as a Poisson or a
Binomial distribution. By leveraging the conjugacy properties of exponential
family distributions, we introduce the evidence trick, a method that provides a
tractable approximation to the likelihood score. In our experiments, we
demonstrate that our methodology effectively performs Bayesian inference on
spatially inhomogeneous Poisson processes with intensities as intricate as
ImageNet images. Furthermore, we demonstrate the real-world impact of our
methodology by showing that it performs competitively with the current
state-of-the-art in predicting malaria prevalence estimates in Sub-Saharan
Africa.
|
2502.05995
|
A Comprehensive Survey on Image Signal Processing Approaches for
Low-Illumination Image Enhancement
|
cs.CV
|
The usage of digital content (photos and videos) in a variety of applications
has increased due to the popularity of multimedia devices. These uses include
advertising campaigns, educational resources, and social networking platforms.
There is an increasing need for high-quality graphic information as people
become more visually focused. However, captured images frequently have poor
visibility and a high amount of noise due to the limitations of image-capturing
devices and lighting conditions. Improving the visual quality of images taken
in low illumination is the aim of low-illumination image enhancement. This
problem is addressed by traditional image enhancement techniques, which alter
noise, brightness, and contrast. Deep learning-based methods, however, have
dominated recently made advances in this area. These methods have effectively
reduced noise while preserving important information, showing promising results
in the improvement of low-illumination images. An extensive summary of image
signal processing methods for enhancing low-illumination images is provided in
this paper. The review classifies approaches into three categories:
traditional approaches, deep learning-based methods, and hybrid techniques.
Conventional techniques include denoising, automated white balancing, and noise
reduction. Convolutional neural networks (CNNs) are used in deep learning-based
techniques to recognize and extract characteristics from low-light images. To
get better results, hybrid approaches combine deep learning-based methodologies
with more conventional methods. The review also discusses the advantages and
limitations of each approach and provides insights into future research
directions in this field.
|
2502.05996
|
Motion Control in Multi-Rotor Aerial Robots Using Deep Reinforcement
Learning
|
cs.RO cs.AI
|
This paper investigates the application of Deep Reinforcement Learning (DRL)
to address motion control challenges in drones for additive manufacturing (AM).
Drone-based additive manufacturing promises flexible and autonomous material
deposition in large-scale or hazardous environments. However, achieving robust
real-time control of a multi-rotor aerial robot under varying payloads and
potential disturbances remains challenging. Traditional controllers like PID
often require frequent parameter re-tuning, limiting their applicability in
dynamic scenarios. We propose a DRL framework that learns adaptable control
policies for multi-rotor drones performing waypoint navigation in AM tasks. We
compare Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep
Deterministic Policy Gradient (TD3) within a curriculum learning scheme
designed to handle increasing complexity. Our experiments show TD3 consistently
balances training stability, accuracy, and success, particularly when mass
variability is introduced. These findings provide a scalable path toward
robust, autonomous drone control in additive manufacturing.
|
2502.05999
|
Pencils to Pixels: A Systematic Study of Creative Drawings across
Children, Adults and AI
|
cs.HC cs.AI cs.CV
|
Can we derive computational metrics to quantify visual creativity in drawings
across intelligent agents, while accounting for inherent differences in
technical skill and style? To answer this, we curate a novel dataset consisting
of 1338 drawings by children, adults and AI on a creative drawing task. We
characterize two aspects of the drawings -- (1) style and (2) content. For
style, we define measures of ink density, ink distribution and number of
elements. For content, we use expert-annotated categories to study conceptual
diversity, and image and text embeddings to compute distance measures. We
compare the style, content and creativity of children, adults and AI drawings
and build simple models to predict expert and automated creativity scores. We
find significant differences in style and content in the groups -- children's
drawings had more components, AI drawings had greater ink density, and adult
drawings revealed maximum conceptual diversity. Notably, we highlight a
misalignment between creativity judgments obtained through expert and automated
ratings and discuss its implications. Through these efforts, our work provides,
to the best of our knowledge, the first framework for studying human and
artificial creativity beyond the textual modality, and attempts to arrive at
the domain-agnostic principles underlying creativity. Our data and scripts are
available on GitHub.
|
2502.06004
|
Analysis of LLM as a grammatical feature tagger for African American
English
|
cs.CL cs.AI cs.LG
|
African American English (AAE) presents unique challenges in natural language
processing (NLP). This research systematically compares the performance of
available NLP models--rule-based, transformer-based, and large language models
(LLMs)--capable of identifying key grammatical features of AAE, namely Habitual
Be and Multiple Negation. These features were selected for their distinct
grammatical complexity and frequency of occurrence. The evaluation involved
sentence-level binary classification tasks, using both zero-shot and few-shot
strategies. The analysis reveals that while LLMs show promise compared to the
baseline, they are influenced by biases such as recency and unrelated features
in the text such as formality. This study highlights the necessity for improved
model training and architectural adjustments to better accommodate AAE's unique
linguistic characteristics. Data and code are available.
|
2502.06006
|
FactIR: A Real-World Zero-shot Open-Domain Retrieval Benchmark for
Fact-Checking
|
cs.IR
|
The field of automated fact-checking increasingly depends on retrieving
web-based evidence to determine the veracity of claims in real-world scenarios.
A significant challenge in this process is not only retrieving relevant
information, but also identifying evidence that can both support and refute
complex claims. Traditional retrieval methods may return documents that
directly address claims or lean toward supporting them, but often struggle with
more complex claims requiring indirect reasoning. While some existing
benchmarks and methods target retrieval for fact-checking, a comprehensive
real-world open-domain benchmark has been lacking. In this paper, we present a
real-world retrieval benchmark FactIR, derived from Factiverse production logs,
enhanced with human annotations. We rigorously evaluate state-of-the-art
retrieval models in a zero-shot setup on FactIR and offer insights for
developing practical retrieval systems for fact-checking. Code and data are
available at https://github.com/factiverse/factIR.
|
2502.06007
|
Transformers versus the EM Algorithm in Multi-class Clustering
|
stat.ML cs.LG
|
LLMs demonstrate significant inference capacities in complicated machine
learning tasks, using the Transformer model as their backbone. Motivated by the
limited understanding of such models on unsupervised learning problems, we
study the learning guarantees of Transformers in performing multi-class
clustering of the Gaussian Mixture Models. We develop a theory drawing strong
connections between the Softmax Attention layers and the workflow of the EM
algorithm on clustering the mixture of Gaussians. Our theory provides
approximation bounds for the Expectation and Maximization steps by proving the
universal approximation abilities of multivariate mappings by Softmax
functions. In addition to the approximation guarantees, we also show that with
a sufficient number of pre-training samples and an initialization, Transformers
can achieve the minimax optimal rate for the problem considered. Our extensive
simulations empirically verified our theory by revealing the strong learning
capacities of Transformers even beyond the assumptions in the theory, shedding
light on the powerful inference capacities of LLMs.
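The algorithmic connection the abstract draws can be made concrete: in EM for Gaussian mixtures, the E-step responsibilities are exactly a softmax over per-component log-scores, mirroring a Softmax Attention layer. Below is a minimal NumPy sketch of EM for a spherical Gaussian mixture; it illustrates the workflow only and is not the paper's construction.

```python
import numpy as np

def em_gmm(X, K, iters=50, seed=0):
    """EM for a spherical Gaussian mixture model.

    The E-step computes responsibilities as a softmax over per-component
    log-scores -- the structural analogy to Softmax Attention.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]   # init means at data points
    var = np.ones(K)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: softmax over k of log pi_k + log N(x | mu_k, var_k I)
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)            # (n, K)
        logp = np.log(pi) - 0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form weighted updates
        Nk = r.sum(axis=0)
        mu = (r.T @ X) / Nk[:, None]
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * sq).sum(axis=0) / (d * Nk) + 1e-8
        pi = Nk / n
    return mu, r.argmax(axis=1)

# Two well-separated clusters are recovered reliably.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-5.0, 0.5, (100, 2)), rng.normal(5.0, 0.5, (100, 2))])
mu, labels = em_gmm(X, K=2)
```

Each E-step row `r[i]` is literally `softmax(scores)`, which is the correspondence the approximation theory in the abstract formalizes.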
|
2502.06011
|
Uncertainty Quantification and Causal Considerations for Off-Policy
Decision Making
|
stat.ML cs.LG
|
Off-policy evaluation (OPE) is a critical challenge in robust decision-making
that seeks to assess the performance of a new policy using data collected under
a different policy. However, the existing OPE methodologies suffer from several
limitations arising from statistical uncertainty as well as causal
considerations. In this thesis, we address these limitations by presenting
three different works. Firstly, we consider the problem of high variance in the
importance-sampling-based OPE estimators. We introduce the Marginal Ratio (MR)
estimator, a novel OPE method that reduces variance by focusing on the marginal
distribution of outcomes rather than direct policy shifts, improving robustness
in contextual bandits. Next, we propose Conformal Off-Policy Prediction (COPP),
a principled approach for uncertainty quantification in OPE that provides
finite-sample predictive intervals, ensuring robust decision-making in
risk-sensitive applications. Finally, we address causal unidentifiability in
off-policy decision-making by developing novel bounds for sequential decision
settings, which remain valid under arbitrary unmeasured confounding. We apply
these bounds to assess the reliability of digital twin models, introducing a
falsification framework to identify scenarios where model predictions diverge
from real-world behaviour. Our contributions provide new insights into robust
decision-making under uncertainty and establish principled methods for
evaluating policies in both static and dynamic settings.
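The finite-sample interval idea behind COPP can be illustrated with the basic split conformal construction it builds on. This is a generic sketch under exchangeability, not the COPP algorithm itself, which additionally accounts for the policy shift; names and the synthetic predictor are illustrative.

```python
import numpy as np

def split_conformal_interval(pred_cal, y_cal, pred_test, alpha=0.1):
    """Split conformal predictive intervals with finite-sample coverage.

    Uses absolute residuals on a held-out calibration set; the intervals
    contain the true outcome with probability >= 1 - alpha under
    exchangeability of calibration and test points.
    """
    scores = np.abs(y_cal - pred_cal)
    n = len(scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, min(level, 1.0), method="higher")
    return pred_test - q, pred_test + q

# Synthetic check with the identity predictor f(x) = x:
# roughly 90% of test outcomes should fall inside the intervals.
rng = np.random.default_rng(0)
x_cal = rng.uniform(-1, 1, 1000)
y_cal = x_cal + rng.normal(0, 0.1, 1000)
x_test = rng.uniform(-1, 1, 2000)
y_test = x_test + rng.normal(0, 0.1, 2000)
lo, hi = split_conformal_interval(x_cal, y_cal, x_test)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
```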
|
2502.06018
|
Kolmogorov-Arnold Fourier Networks
|
cs.LG cs.AI
|
Although Kolmogorov-Arnold based interpretable networks (KAN) have strong
theoretical expressiveness, they face significant parameter explosion and
high-frequency feature capture challenges in high-dimensional tasks. To address
this issue, we propose the Kolmogorov-Arnold-Fourier Network (KAF), which
effectively integrates trainable Random Fourier Features (RFF) and a novel
hybrid GELU-Fourier activation mechanism to balance parameter efficiency and
spectral representation capabilities. Our key technical contributions include:
(1) merging KAN's dual-matrix structure through matrix associativity
to substantially reduce parameters; (2) introducing learnable RFF
initialization strategies to eliminate spectral distortion in high-dimensional
approximation tasks; (3) implementing an adaptive hybrid activation function
that progressively enhances frequency representation during the training
process. Comprehensive experiments demonstrate the superiority of our KAF
across various domains including vision, NLP, audio processing, and
differential equation-solving tasks, effectively combining theoretical
interpretability with practical utility and computational efficiency.
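The Random Fourier Features ingredient can be sketched in a few lines. In the classical fixed-frequency form below (Rahimi and Recht), cosine features approximate the RBF kernel; KAF's contribution is to make the frequency matrix trainable with a learnable initialization, which this illustrative sketch does not include.

```python
import numpy as np

def rff_features(X, n_features=500, gamma=1.0, seed=0):
    """Random Fourier Features with E[z(x) . z(y)] = exp(-gamma ||x - y||^2).

    Frequencies W are drawn from N(0, 2*gamma I) and held fixed here;
    KAF instead treats W as trainable parameters.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

x = np.array([[0.0, 0.0]])
y = np.array([[0.5, 0.5]])
approx = float(rff_features(x) @ rff_features(y).T)   # ~ exp(-0.5)
```

With 500 features the inner product matches the exact kernel value exp(-0.5) up to Monte Carlo error of order 1/sqrt(500).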
|
2502.06019
|
Noise is an Efficient Learner for Zero-Shot Vision-Language Models
|
cs.CV
|
Recently, test-time adaptation has garnered attention as a method for tuning
models without labeled data. The conventional modus operandi for adapting
pre-trained vision-language models (VLMs) during test-time primarily focuses on
tuning learnable prompts; however, this approach overlooks potential
distribution shifts in the visual representations themselves. In this work, we
address this limitation by introducing Test-Time Noise Tuning (TNT), a novel
method for handling unpredictable shifts in the visual space. TNT leverages,
for the first time, a noise adaptation strategy that optimizes learnable noise
directly in the visual input space, enabling adaptive feature learning from a
single test sample. We further introduce a novel approach for inter-view
representation alignment by explicitly enforcing coherence in embedding
distances, ensuring consistent feature representations across views. Combined
with scaled logits and confident view selection at inference, TNT substantially
enhances VLM generalization and calibration, achieving average gains of +7.38%
on natural distribution benchmarks and +0.80% on cross-dataset evaluations over
zero-shot CLIP. These improvements lay a strong foundation for adaptive
out-of-distribution handling.
|
2502.06020
|
Temporal Working Memory: Query-Guided Segment Refinement for Enhanced
Multimodal Understanding
|
cs.CV cs.MM cs.SD eess.AS
|
Multimodal foundation models (MFMs) have demonstrated significant success in
tasks such as visual captioning, question answering, and image-text retrieval.
However, these models face inherent limitations due to their finite internal
capacity, which restricts their ability to process extended temporal sequences,
a crucial requirement for comprehensive video and audio analysis. To overcome
these challenges, we introduce a specialized cognitive module, temporal working
memory (TWM), which aims to enhance the temporal modeling capabilities of MFMs.
It selectively retains task-relevant information across temporal dimensions,
ensuring that critical details are preserved throughout the processing of video
and audio content. The TWM uses a query-guided attention approach to focus on
the most informative multimodal segments within temporal sequences. By
retaining only the most relevant content, TWM optimizes the use of the model's
limited capacity, enhancing its temporal modeling ability. This plug-and-play
module can be easily integrated into existing MFMs. With our TWM, nine
state-of-the-art models exhibit significant performance improvements across
tasks such as video captioning, question answering, and video-text retrieval.
By enhancing temporal modeling, TWM extends the capability of MFMs to handle
complex, time-sensitive data effectively. Our code is available at
https://github.com/xid32/NAACL_2025_TWM.
|
2502.06022
|
Nested subspace learning with flags
|
stat.ML cs.LG
|
Many machine learning methods look for low-dimensional representations of the
data. The underlying subspace can be estimated by first choosing a dimension
$q$ and then optimizing a certain objective function over the space of
$q$-dimensional subspaces (the Grassmannian). Trying different $q$ yields in
general non-nested subspaces, which raises an important issue of consistency
between the data representations. In this paper, we propose a simple trick to
enforce nestedness in subspace learning methods. It consists in lifting
Grassmannian optimization problems to flag manifolds (the space of nested
subspaces of increasing dimension) via nested projectors. We apply the flag
trick to several classical machine learning methods and show that it
successfully addresses the nestedness issue.
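The nestedness property the flag trick enforces can be checked directly with projectors. The short numerical illustration below is not the paper's optimization: it uses PCA, where principal subspaces from a single SVD happen to be nested, and verifies nestedness via the projector identity P2 P1 = P1, which generic per-dimension Grassmannian optimization does not guarantee.

```python
import numpy as np

# Data with a clear spectral decay; PCA loadings via SVD.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
Xc = X - X.mean(axis=0)
U = np.linalg.svd(Xc, full_matrices=False)[2].T   # columns = loadings

# Orthogonal projectors onto the q = 1 and q = 2 principal subspaces.
P1 = U[:, :1] @ U[:, :1].T
P2 = U[:, :2] @ U[:, :2].T

# range(P1) being a subspace of range(P2) is equivalent to P2 @ P1 == P1.
nested = np.allclose(P2 @ P1, P1)
```

Optimizing over a flag manifold amounts to searching over families of such mutually nested projectors at once, so this identity holds by construction for every pair of dimensions.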
|
2502.06023
|
Dual Caption Preference Optimization for Diffusion Models
|
cs.CV
|
Recent advancements in human preference optimization, originally developed
for Large Language Models (LLMs), have shown significant potential in improving
text-to-image diffusion models. These methods aim to learn the distribution of
preferred samples while distinguishing them from less preferred ones. However,
existing preference datasets often exhibit overlap between these distributions,
leading to a conflict distribution. Additionally, we identified that input
prompts contain irrelevant information for less preferred images, limiting the
denoising network's ability to accurately predict noise in preference
optimization methods, known as the irrelevant prompt issue. To address these
challenges, we propose Dual Caption Preference Optimization (DCPO), a novel
approach that utilizes two distinct captions to mitigate irrelevant prompts. To
tackle conflict distribution, we introduce the Pick-Double Caption dataset, a
modified version of Pick-a-Pic v2 with separate captions for preferred and less
preferred images. We further propose three different strategies for generating
distinct captions: captioning, perturbation, and hybrid methods. Our
experiments show that DCPO significantly improves image quality and relevance
to prompts, outperforming Stable Diffusion (SD) 2.1, SFT_Chosen, Diffusion-DPO,
and MaPO across multiple metrics, including Pickscore, HPSv2.1, GenEval,
CLIPscore, and ImageReward, fine-tuned on SD 2.1 as the backbone.
|
2502.06025
|
Universal point spread function engineering for 3D optical information
processing
|
physics.optics cs.NE
|
Point spread function (PSF) engineering has been pivotal in the remarkable
progress made in high-resolution imaging in the last decades. However, the
diversity in PSF structures attainable through existing engineering methods is
limited. Here, we report universal PSF engineering, demonstrating a method to
synthesize an arbitrary set of spatially varying 3D PSFs between the input and
output volumes of a spatially incoherent diffractive processor composed of
cascaded transmissive surfaces. We rigorously analyze the PSF engineering
capabilities of such diffractive processors within the diffraction limit of
light and provide numerical demonstrations of unique imaging capabilities, such
as snapshot 3D multispectral imaging without involving any spectral filters,
axial scanning or digital reconstruction steps, which is enabled by the spatial
and spectral engineering of 3D PSFs. Our framework and analysis would be
important for future advancements in computational imaging, sensing and
diffractive processing of 3D optical information.
|
2502.06026
|
A Multimodal PDE Foundation Model for Prediction and Scientific Text
Descriptions
|
cs.LG cs.NA math.NA
|
Neural networks are one tool for approximating non-linear differential
equations used in scientific computing tasks such as surrogate modeling,
real-time predictions, and optimal control. PDE foundation models utilize
neural networks to train approximations to multiple differential equations
simultaneously and are thus a general purpose solver that can be adapted to
downstream tasks. Current PDE foundation models focus on either learning
general solution operators and/or the governing system of equations, and thus
only handle numerical or symbolic modalities. However, real-world applications
may require more flexible data modalities, e.g. text analysis or descriptive
outputs. To address this gap, we propose a novel multimodal deep learning
approach that leverages a transformer-based architecture to approximate
solution operators for a wide variety of ODEs and PDEs. Our method integrates
numerical inputs, such as equation parameters and initial conditions, with text
descriptions of physical processes or system dynamics. This enables our model
to handle settings where symbolic representations may be incomplete or
unavailable. In addition to providing accurate numerical predictions, our
approach generates interpretable scientific text descriptions, offering deeper
insights into the underlying dynamics and solution properties. The numerical
experiments show that our model provides accurate solutions for in-distribution
data (with average relative error less than 3.3%) and out-of-distribution data
(average relative error less than 7.8%) together with precise text descriptions
(with correct descriptions generated 100% of times). In certain tests, the
model is also shown to be capable of extrapolating solutions in time.
|
2502.06027
|
Generating 3D Binding Molecules Using Shape-Conditioned Diffusion Models
with Guidance
|
cs.LG
|
Drug development is a critical but notoriously resource- and time-consuming
process. In this manuscript, we develop a novel generative artificial
intelligence (genAI) method DiffSMol to facilitate drug development. DiffSMol
generates 3D binding molecules based on the shapes of known ligands. DiffSMol
encapsulates geometric details of ligand shapes within pre-trained, expressive
shape embeddings and then generates new binding molecules through a diffusion
model. DiffSMol further modifies the generated 3D structures iteratively via
shape guidance to better resemble the ligand shapes. It also tailors the
generated molecules toward optimal binding affinities under the guidance of
protein pockets. Here, we show that DiffSMol outperforms the state-of-the-art
methods on benchmark datasets. When generating binding molecules resembling
ligand shapes, DiffSMol with shape guidance achieves a success rate of 61.4%,
substantially outperforming the best baseline (11.2%), meanwhile producing
molecules with novel molecular graph structures. DiffSMol with pocket guidance
also outperforms the best baseline in binding affinities by 13.2%, and even by
17.7% when combined with shape guidance. Case studies for two critical drug
targets demonstrate very favorable physicochemical and pharmacokinetic
properties of the generated molecules, thus, the potential of DiffSMol in
developing promising drug candidates.
|
2502.06029
|
DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations
|
cs.CV
|
Pre-trained Vision Transformers now serve as powerful tools for computer
vision. Yet, efficiently adapting them for multiple tasks remains a challenge
that arises from the need to modify the rich hidden representations encoded by
the learned weight matrices, without inducing interference between tasks.
Current parameter-efficient methods like LoRA, which apply low-rank updates,
force tasks to compete within constrained subspaces, ultimately degrading
performance. We introduce DiTASK, a novel Diffeomorphic Multi-Task Fine-Tuning
approach that maintains pre-trained representations by preserving weight matrix
singular vectors, while enabling task-specific adaptations through neural
diffeomorphic transformations of the singular values. By following this
approach, DiTASK enables both shared and task-specific feature modulations with
minimal added parameters. Our theoretical analysis shows that DiTASK achieves
full-rank updates during optimization, preserving the geometric structure of
pre-trained features and establishing a new paradigm for efficient multi-task
learning (MTL). Our experiments on PASCAL MTL and NYUD show that DiTASK
achieves state-of-the-art performance across four dense prediction tasks, using
75% fewer parameters than existing methods.
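A minimal NumPy sketch of the core mechanism described above (keep the pre-trained singular vectors fixed, transform only the singular values); the exponential scaling map below is a stand-in assumption for the learned neural diffeomorphism:

```python
import numpy as np

def diffeomorphic_singular_update(W, theta):
    """Adapt a weight matrix while preserving its singular vectors.

    Simplified stand-in for DiTASK's idea: keep U and V from the SVD of
    the pre-trained weight fixed, and apply a smooth, monotone (hence
    invertible) map to the singular values. The map sigma -> sigma *
    exp(theta) is a toy diffeomorphism; the paper learns this
    transformation with a network.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    S_new = S * np.exp(theta)          # elementwise; theta has shape (len(S),)
    return U @ np.diag(S_new) @ Vt

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))            # stand-in pre-trained weight
theta = rng.normal(size=4) * 0.1       # stand-in task-specific parameters
W_new = diffeomorphic_singular_update(W, theta)
```

Note that the update W_new - W is generically full rank even though only len(S) scalars were added per task, matching the full-rank-update property the abstract highlights.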
|
2502.06031
|
A Conditional Tabular GAN-Enhanced Intrusion Detection System for Rare
Attacks in IoT Networks
|
cs.CR cs.LG
|
Internet of things (IoT) networks, boosted by 6G technology, are transforming
various industries. However, their widespread adoption introduces significant
security risks, particularly in detecting rare but potentially damaging
cyber-attacks. This makes the development of robust intrusion detection
systems (IDS) crucial for monitoring network traffic and ensuring network
safety. Traditional IDS often struggle with
detecting rare attacks due to severe class imbalances in IoT data. In this
paper, we propose a novel two-stage system called conditional tabular
generative synthetic minority data generation with deep neural network
(CTGSM-DNN). In the first stage, a conditional tabular generative adversarial
network (CTGAN) is employed to generate synthetic data for rare attack classes.
In the second stage, the SMOTEENN method is applied to improve dataset quality.
The full study was conducted using the CSE-CIC-IDS2018 dataset, and we assessed
the performance of the proposed IDS using different evaluation metrics. The
experimental results demonstrated the effectiveness of the proposed multiclass
classifier, achieving an overall accuracy of 99.90% and 80% accuracy in
detecting rare attacks.
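The abstract's stage one (synthesizing rare-attack rows) uses a CTGAN, which requires the `ctgan` package; as a dependency-free illustration of the same idea, here is a SMOTE-style interpolation sketch (an editor's stand-in, not the paper's method):

```python
import numpy as np

def oversample_minority(X, y, minority_label, n_new, rng):
    """Generate synthetic rare-class rows by interpolating between random
    pairs of real minority samples (a SMOTE-style stand-in for stage one;
    the paper uses a CTGAN trained on the minority class instead)."""
    X_min = X[y == minority_label]
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_min), size=n_new)
    lam = rng.random(size=(n_new, 1))
    X_syn = X_min[i] + lam * (X_min[j] - X_min[i])
    y_syn = np.full(n_new, minority_label)
    return np.vstack([X, X_syn]), np.concatenate([y, y_syn])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.array([0] * 95 + [1] * 5)              # severe class imbalance
X_bal, y_bal = oversample_minority(X, y, 1, 90, rng)
```

Stage two of the paper (SMOTEENN cleaning) is available off the shelf in the imbalanced-learn package as `imblearn.combine.SMOTEENN`.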
|
2502.06034
|
Traveling Waves Integrate Spatial Information Into Spectral
Representations
|
cs.CV
|
Traveling waves are widely observed in the brain, but their precise
computational function remains unclear. One prominent hypothesis is that they
enable the transfer and integration of spatial information across neural
populations. However, few computational models have explored how traveling
waves might be harnessed to perform such integrative processing. Drawing
inspiration from the famous ``Can one hear the shape of a drum?'' problem --
which highlights how spectral modes encode geometric information -- we
introduce a set of convolutional recurrent neural networks that learn to
produce traveling waves in their hidden states in response to visual stimuli.
By applying a spectral decomposition to these wave-like activations, we obtain
a powerful new representational space that outperforms equivalently local
feed-forward networks on tasks requiring global spatial context. In particular,
we observe that traveling waves effectively expand the receptive field of
locally connected neurons, supporting long-range encoding and communication of
information. We demonstrate that models equipped with this mechanism and
spectral readouts solve visual semantic segmentation tasks demanding global
integration, where local feed-forward models fail. As a first step toward
traveling-wave-based representations in artificial networks, our findings
suggest potential efficiency benefits and offer a new framework for connecting
to biological recordings of neural activity.
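A toy sketch of the wave-then-spectral-readout pipeline (an illustration of the idea, not the paper's convolutional recurrent model): activity propagates as a traveling wave on a ring of units, and the Fourier magnitude spectrum of the space-time activity serves as the representation, which is invariant to where on the ring the stimulus landed:

```python
import numpy as np

def wave_spectral_features(stimulus, T=64):
    """Propagate a stimulus as a traveling wave on a ring of units (each
    step circularly shifts the hidden state by one position), then read
    out the magnitude of the 2D Fourier spectrum of the space-time
    activity. A toy version of the recurrent-wave + spectral-readout idea."""
    h = stimulus.astype(float)
    history = [h]
    for _ in range(T - 1):
        h = np.roll(h, 1)          # pure transport: a wave moving one unit/step
        history.append(h)
    A = np.stack(history)          # (T, N) space-time activity
    return np.abs(np.fft.fft2(A))  # spectral representation

feats = wave_spectral_features(np.eye(16)[3], T=32)
```

Because a shifted stimulus only translates the space-time pattern, the magnitude spectrum is unchanged, which is one way such spectral readouts encode global rather than purely local structure.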
|
2502.06037
|
Investigating Compositional Reasoning in Time Series Foundation Models
|
cs.LG
|
Large pre-trained time series foundation models (TSFMs) have demonstrated
promising zero-shot performance across a wide range of domains. However, a
question remains: Do TSFMs succeed solely by memorizing training patterns, or
do they possess the ability to reason? While reasoning is a topic of great
interest in the study of Large Language Models (LLMs), it is undefined and
largely unexplored in the context of TSFMs. In this work, inspired by language
modeling literature, we formally define compositional reasoning in forecasting
and distinguish it from in-distribution generalization. We evaluate the
reasoning and generalization capabilities of 23 popular deep learning
forecasting models on multiple synthetic and real-world datasets. Additionally,
through controlled studies, we systematically examine which design choices in
TSFMs contribute to improved reasoning abilities. Our study yields key insights
into the impact of TSFM architecture design on compositional reasoning and
generalization. We find that patch-based Transformers have the best reasoning
performance, closely followed by residualized MLP-based architectures, which
are 97% less computationally complex in terms of FLOPs and 86% smaller in
terms of the number of trainable parameters. Interestingly, in some zero-shot
out-of-distribution scenarios, these models can outperform moving average and
exponential smoothing statistical baselines trained on in-distribution data.
Only a few design choices, such as the tokenization method, had a significant
(negative) impact on Transformer model performance.
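For readers unfamiliar with the patch-based tokenization used by the best-performing models above, a minimal sketch (the patch length of 16 is an arbitrary choice for illustration):

```python
import numpy as np

def patchify(series, patch_len):
    """Split a univariate series into non-overlapping patch tokens, the
    input format used by patch-based time series Transformers. Trailing
    values that do not fill a whole patch are dropped in this sketch
    (real models typically pad instead)."""
    n = len(series) // patch_len
    return series[: n * patch_len].reshape(n, patch_len)

tokens = patchify(np.arange(100.0), patch_len=16)   # 6 tokens of length 16
```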
|
2502.06038
|
Provably Overwhelming Transformer Models with Designed Inputs
|
cs.LG cs.AI cs.CC
|
We develop an algorithm which, given a trained transformer model
$\mathcal{M}$ as input, as well as a string of tokens $s$ of length $n_{fix}$
and an integer $n_{free}$, can generate a mathematical proof that $\mathcal{M}$
is ``overwhelmed'' by $s$, in time and space $\widetilde{O}(n_{fix}^2 +
n_{free}^3)$. We say that $\mathcal{M}$ is ``overwhelmed'' by $s$ when the
output of the model evaluated on this string plus any additional string $t$,
$\mathcal{M}(s + t)$, is completely insensitive to the value of the string $t$
whenever length($t$) $\leq n_{free}$. Along the way, we prove a particularly
strong worst-case form of ``over-squashing'', which we use to bound the model's
behavior. Our technique uses computer-aided proofs to establish this type of
operationally relevant guarantee about transformer models. We empirically test
our algorithm on a single layer transformer complete with an attention head,
layer-norm, MLP/ReLU layers, and RoPE positional encoding. We believe that this
work is a stepping stone towards the difficult task of obtaining useful
guarantees for trained transformer models.
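The paper's contribution is a certified proof procedure; purely to make the definition concrete, here is a brute-force empirical checker of the "overwhelmed" property, with a toy stand-in model (not the transformer setting of the paper):

```python
import itertools

def empirically_overwhelmed(model, s, vocab, n_free):
    """Brute-force check of the paper's *definition* (not its proof
    technique): the model is overwhelmed by s if model(s + t) is identical
    for every suffix t with len(t) <= n_free. Exponential in n_free, so
    only feasible for toy vocabularies and short free lengths."""
    base = model(s)
    for k in range(1, n_free + 1):
        for t in itertools.product(vocab, repeat=k):
            if model(s + list(t)) != base:
                return False
    return True

# Toy "model": outputs the majority token of its input, so a long run of
# one token is insensitive to any short suffix.
def majority(tokens):
    return max(set(tokens), key=tokens.count)

ok = empirically_overwhelmed(majority, ["a"] * 10, ["a", "b"], 3)
```

The point of the paper is precisely to replace this exponential enumeration with a proof computable in $\widetilde{O}(n_{fix}^2 + n_{free}^3)$ time.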
|
2502.06039
|
Benchmarking Prompt Engineering Techniques for Secure Code Generation
with GPT Models
|
cs.SE cs.AI cs.CR
|
Prompt engineering reduces reasoning mistakes in Large Language Models
(LLMs). However, its effectiveness in mitigating vulnerabilities in
LLM-generated code remains underexplored. To address this gap, we implemented a
benchmark to automatically assess the impact of various prompt engineering
strategies on code security. Our benchmark leverages two peer-reviewed prompt
datasets and employs static scanners to evaluate code security at scale. We
tested multiple prompt engineering techniques on GPT-3.5-turbo, GPT-4o, and
GPT-4o-mini. Our results show that for GPT-4o and GPT-4o-mini, a
security-focused prompt prefix can reduce the occurrence of security
vulnerabilities by up to 56%. Additionally, all tested models demonstrated the
ability to detect and repair between 41.9% and 68.7% of vulnerabilities in
previously generated code when using iterative prompting techniques. Finally,
we introduce a "prompt agent" that demonstrates how the most effective
techniques can be applied in real-world development workflows.
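A schematic of the two techniques the abstract measures, a security-focused prompt prefix plus an iterative scan-and-repair loop; the prefix wording and the `generate`/`scan` callables are placeholders, not the benchmark's actual prompts or scanners:

```python
SECURITY_PREFIX = (
    "You are a security-conscious developer. Validate all inputs and avoid "
    "known vulnerability patterns such as injection, path traversal, and "
    "hardcoded secrets.\n\n"
)  # placeholder wording, not the benchmark's exact prefix

def iterative_repair(generate, scan, task, max_rounds=3):
    """Prefix-then-repair loop: generate code with a security-focused
    prefix, run a static scanner on the result, and feed any findings back
    to the model until the scanner reports nothing (or rounds run out).
    `generate` stands in for an LLM call, `scan` for a static scanner."""
    code = generate(SECURITY_PREFIX + task)
    for _ in range(max_rounds):
        findings = scan(code)
        if not findings:
            break
        code = generate(f"Fix these issues:\n{findings}\n\nCode:\n{code}")
    return code
```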
|
2502.06042
|
Scaling Laws for Forgetting during Finetuning with Pretraining Data
Injection
|
cs.LG cs.CL
|
A widespread strategy to obtain a language model that performs well on a
target domain is to finetune a pretrained model to perform unsupervised
next-token prediction on data from that target domain. Finetuning presents two
challenges: (i) if the amount of target data is limited, as in most practical
applications, the model will quickly overfit, and (ii) the model will drift
away from the original model, forgetting the pretraining data and the generic
knowledge that comes with it. We aim to derive scaling laws that quantify these
two phenomena for various target domains, amounts of available target data, and
model scales. We measure the efficiency of injecting pretraining data into the
finetuning data mixture to avoid forgetting and mitigate overfitting. A key
practical takeaway from our study is that injecting as little as 1% of
pretraining data in the finetuning data mixture prevents the model from
forgetting the pretraining set.
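The injection recipe is simple to state in code; this sketch (the 1% default and the sizing rule are chosen to match the abstract's takeaway) builds a mixture in which injected pretraining examples make up a fixed fraction of the whole:

```python
import random

def mix_finetuning_data(finetune_set, pretrain_set, inject_frac=0.01, rng=None):
    """Build a finetuning mixture containing a small fraction of
    pretraining examples (the abstract's takeaway: ~1% is enough to curb
    forgetting). n_inject is sized so injected examples are exactly
    `inject_frac` of the final mixture."""
    rng = rng or random.Random(0)
    n_inject = round(len(finetune_set) * inject_frac / (1.0 - inject_frac))
    injected = rng.sample(pretrain_set, n_inject)
    mixture = finetune_set + injected
    rng.shuffle(mixture)
    return mixture

mix = mix_finetuning_data(list(range(9900)), list(range(100000, 200000)))
```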
|
2502.06044
|
Scalable Differentially Private Bayesian Optimization
|
stat.ML cs.LG
|
In recent years, there has been much work on scaling Bayesian Optimization to
high-dimensional problems, for example hyperparameter tuning in large neural
network models. These scalable methods have been successful, finding high
objective values much more quickly than traditional global Bayesian
Optimization or random search-based methods. At the same time, these large
neural network models often use sensitive data, but preservation of
Differential Privacy has not scaled alongside these modern Bayesian
Optimization procedures. Here we develop a method to privately estimate
potentially high-dimensional parameter spaces using Gradient Informative
Bayesian Optimization. Our theoretical results prove that under suitable
conditions, our method converges exponentially fast to a ball around the
optimal parameter configuration. Moreover, regardless of whether the
assumptions are satisfied, we show that our algorithm maintains privacy and
empirically demonstrates superior performance to existing methods in the
high-dimensional hyperparameter setting.
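The abstract does not spell out the algorithm; as background only, the standard Gaussian-mechanism primitive for privatizing a batch gradient (clip per-sample gradients, average, add calibrated noise), on which gradient-informative private methods typically build, looks like this:

```python
import numpy as np

def dp_gradient_estimate(per_sample_grads, clip_norm, noise_mult, rng):
    """Privatize a batch gradient: clip each per-sample gradient to
    `clip_norm` in L2, average, and add Gaussian noise with scale
    noise_mult * clip_norm / batch_size. This is the standard Gaussian
    mechanism, shown for background; it is not claimed to be this
    paper's exact estimator."""
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale           # each row now has norm <= clip_norm
    mean = clipped.mean(axis=0)
    sigma = noise_mult * clip_norm / len(per_sample_grads)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

rng = np.random.default_rng(0)
g = dp_gradient_estimate(rng.normal(size=(256, 10)), clip_norm=1.0,
                         noise_mult=1.1, rng=rng)
```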
|
2502.06047
|
Neural Shortest Path for Surface Reconstruction from Point Clouds
|
cs.LG
|
In this paper, we propose the neural shortest path (NSP), a vector-valued
implicit neural representation (INR) that approximates a distance function and
its gradient. The key feature of NSP is to learn the exact shortest path (ESP),
which directs an arbitrary point to its nearest point on the target surface.
The NSP is decomposed into its magnitude and direction, and a variable
splitting method is used so that the decomposed components approximate the
distance function and its gradient, respectively. Unlike existing methods that
learn the distance function itself, the NSP ensures the simultaneous recovery of the
distance function and its gradient. We mathematically prove that the decomposed
representation of NSP guarantees the convergence of the magnitude of NSP in the
$H^1$ norm. Furthermore, we devise a novel loss function that enforces the
property of ESP, demonstrating that its global minimum is the ESP. We evaluate
the performance of the NSP through comprehensive experiments on diverse
datasets, validating its capacity to reconstruct high-quality surfaces with the
robustness to noise and data sparsity. The numerical results show substantial
improvements over state-of-the-art methods, highlighting the importance of
learning the ESP, the product of the distance function and its gradient, for
representing a wide variety of complex surfaces.
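To make the ESP concrete, consider the one surface where it is known in closed form, the unit sphere: the signed distance is $d(p) = \|p\| - 1$ with gradient $p/\|p\|$, and $p - d(p)\nabla d(p)$ is exactly the nearest surface point. NSP learns this magnitude/direction pair for arbitrary surfaces; the sketch below merely evaluates the closed form:

```python
import numpy as np

def shortest_path_to_sphere(p):
    """Exact shortest path for the unit sphere: signed distance
    d(p) = |p| - 1, gradient p/|p|, and p - d(p) * grad d(p) lands on the
    nearest surface point. A closed-form reference for what NSP learns."""
    r = np.linalg.norm(p, axis=-1, keepdims=True)
    d = r - 1.0                 # distance magnitude
    grad = p / r                # unit direction (gradient of the distance)
    return p - d * grad

pts = np.random.default_rng(0).normal(size=(100, 3)) * 2.0
proj = shortest_path_to_sphere(pts)   # every row should lie on the unit sphere
```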
|
2502.06049
|
LM2: Large Memory Models
|
cs.CL cs.AI
|
This paper introduces the Large Memory Model (LM2), a decoder-only
Transformer architecture enhanced with an auxiliary memory module that aims to
address the limitations of standard Transformers in multi-step reasoning,
relational argumentation, and synthesizing information distributed over long
contexts. The proposed LM2 incorporates a memory module that acts as a
contextual representation repository, interacting with input tokens via cross
attention and updating through gating mechanisms. To preserve the Transformer's
general-purpose capabilities, LM2 maintains the original information flow while
integrating a complementary memory pathway. Experimental results on the
BABILong benchmark demonstrate that the LM2 model outperforms both the
memory-augmented RMT model by 37.1% and the baseline Llama-3.2 model by 86.3%
on average across tasks. LM2 exhibits exceptional capabilities in multi-hop
inference, numerical reasoning, and large-context question-answering. On the
MMLU dataset, it achieves a 5.0% improvement over a pre-trained vanilla model,
demonstrating that its memory module does not degrade performance on general
tasks. Further, in our analysis, we explore the memory interpretability,
effectiveness of memory modules, and test-time behavior. Our findings emphasize
the importance of explicit memory in enhancing Transformer architectures.
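An editor's NumPy sketch of the described memory mechanism (memory slots cross-attend over input tokens, and a gate controls how much of the attended content overwrites each slot); the dimensions, single-head attention, and sigmoid gate parameterization are simplifying assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_memory_update(memory, tokens, Wq, Wk, Wv, Wg):
    """One illustrative memory step: memory slots act as queries and
    cross-attend over token representations (keys/values); a sigmoid gate
    blends the attended update with the old memory. Shapes: memory (M, d),
    tokens (T, d), all projections (d, d). The token pathway itself is
    left untouched, mirroring the complementary-pathway design."""
    q, k, v = memory @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))        # (M, T)
    update = attn @ v                                     # (M, d)
    gate = 1.0 / (1.0 + np.exp(-(memory @ Wg)))           # (M, d) sigmoid gate
    return gate * update + (1.0 - gate) * memory

rng = np.random.default_rng(0)
d = 8
mem = rng.normal(size=(4, d))
toks = rng.normal(size=(16, d))
W = [rng.normal(size=(d, d)) * 0.1 for _ in range(4)]
new_mem = gated_memory_update(mem, toks, *W)
```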
|
2502.06051
|
Nearly Optimal Sample Complexity of Offline KL-Regularized Contextual
Bandits under Single-Policy Concentrability
|
cs.LG cs.AI math.ST stat.ML stat.TH
|
KL-regularized policy optimization has become a workhorse in learning-based
decision making, while its theoretical understanding is still very limited.
Although recent progress has been made towards settling the sample complexity
of KL-regularized contextual bandits, existing sample complexity bounds are
either $\tilde{O}(\epsilon^{-2})$ under single-policy concentrability or
$\tilde{O}(\epsilon^{-1})$ under all-policy concentrability. In this paper, we
propose the \emph{first} algorithm with $\tilde{O}(\epsilon^{-1})$ sample
complexity under single-policy concentrability for offline contextual bandits.
Our algorithm is designed for general function approximation and based on the
principle of \emph{pessimism in the face of uncertainty}. The core of our proof
leverages the strong convexity of the KL regularization, and the conditional
non-negativity of the gap between the true reward and its pessimistic estimator
to refine a mean-value-type risk upper bound to its extreme. This in turn leads
to a novel covariance-based analysis, effectively bypassing the need for
uniform control over the discrepancy between any two functions in the function
class. The near-optimality of our algorithm is demonstrated by an
$\tilde{\Omega}(\epsilon^{-1})$ lower bound. Furthermore, we extend our
algorithm to contextual dueling bandits and achieve a similar nearly optimal
sample complexity.
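For background (standard in this literature, not taken from the paper's proofs): with reference policy $\pi_0$ and regularization strength $\eta > 0$, the KL-regularized objective is

$$\max_{\pi}\; \mathbb{E}_{x \sim \rho,\, a \sim \pi(\cdot|x)}\big[r(x,a)\big] \;-\; \eta\, \mathbb{E}_{x \sim \rho}\big[\mathrm{KL}\big(\pi(\cdot|x)\,\|\,\pi_0(\cdot|x)\big)\big],$$

whose per-context maximizer is the Gibbs policy $\pi^*(a|x) \propto \pi_0(a|x)\, e^{r(x,a)/\eta}$. The strong convexity the abstract's proof leverages is that of $\pi \mapsto \mathrm{KL}(\pi \,\|\, \pi_0)$ (e.g., $1$-strong convexity in the $\ell_1$ norm, by Pinsker's inequality).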
|