| id | title | categories | abstract |
|---|---|---|---|
2501.16539
|
Generalized Mission Planning for Heterogeneous Multi-Robot Teams via
LLM-constructed Hierarchical Trees
|
cs.RO cs.AI cs.MA
|
We present a novel mission-planning strategy for heterogeneous multi-robot
teams, taking into account the specific constraints and capabilities of each
robot. Our approach employs hierarchical trees to systematically break down
complex missions into manageable sub-tasks. We develop specialized APIs and
tools, which are utilized by Large Language Models (LLMs) to efficiently
construct these hierarchical trees. Once the hierarchical tree is generated, it
is further decomposed to create optimized schedules for each robot, ensuring
adherence to their individual constraints and capabilities. We demonstrate the
effectiveness of our framework through detailed examples covering a wide range
of missions, showcasing its flexibility and scalability.
|
2501.16542
|
UniPET-SPK: A Unified Framework for Parameter-Efficient Tuning of
Pre-trained Speech Models for Robust Speaker Verification
|
eess.AS cs.LG cs.SD
|
With excellent generalization ability, SSL speech models have shown
impressive performance on various downstream tasks in the pre-training and
fine-tuning paradigm. However, as the size of pre-trained models grows,
fine-tuning becomes practically unfeasible due to expanding computation and
storage requirements and the risk of overfitting. This study explores
parameter-efficient tuning (PET) methods for adapting large-scale pre-trained
SSL speech models to the speaker verification task. Correspondingly, we propose
three PET methods: (i) an adapter-tuning method, (ii) a prompt-tuning method, and
(iii) a unified framework that combines adapter-tuning and
prompt-tuning through a dynamically learnable gating mechanism. First, we propose
the Inner+Inter Adapter framework, which inserts two types of adapters into
pre-trained models, allowing for adaptation of latent features within the
intermediate Transformer layers and output embeddings from all Transformer
layers, through a parallel adapter design. Second, we propose the Deep Speaker
Prompting method that concatenates trainable prompt tokens into the input space
of pre-trained models to guide adaptation. Lastly, we propose UniPET-SPK, a
unified framework that effectively incorporates these two alternate PET methods
into a single framework with a dynamic trainable gating mechanism. The proposed
UniPET-SPK learns to find the optimal mixture of PET methods to match different
datasets and scenarios. We conduct a comprehensive set of experiments on
several datasets to validate the effectiveness of the proposed PET methods.
Experimental results on VoxCeleb, CN-Celeb, and 1st 48-UTD forensic datasets
demonstrate that the proposed UniPET-SPK consistently outperforms the two PET
methods, fine-tuning, and other parameter-efficient tuning methods, achieving
superior performance while updating only 5.4% of the parameters.
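The dynamically learnable gate described above can be pictured as a tiny mixing module. The sketch below is a hypothetical simplification (the real UniPET-SPK gate operates on Transformer features, not toy vectors): a single trainable logit blends the adapter and prompt pathway outputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_mix(adapter_out, prompt_out, gate_logit):
    """Blend two PET pathway outputs with a learnable scalar gate.

    gate_logit is a trainable parameter; the sigmoid keeps the mixing
    weight in (0, 1) so the model can interpolate between pathways.
    """
    g = sigmoid(gate_logit)
    return g * adapter_out + (1.0 - g) * prompt_out

# Toy example: two 4-dim pathway outputs; a positive logit favors the adapter.
adapter_out = np.array([1.0, 1.0, 1.0, 1.0])
prompt_out = np.array([0.0, 0.0, 0.0, 0.0])
mixed = gated_mix(adapter_out, prompt_out, gate_logit=2.0)
```

During training the gate logit is updated by backpropagation along with the adapter and prompt parameters, so the framework can learn a different mixture per dataset or scenario.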
|
2501.16544
|
PLANSIEVE: Real-time Suboptimal Query Plan Detection Through Incremental
Refinements
|
cs.DB
|
Cardinality estimation remains a fundamental challenge in query optimization,
often resulting in sub-optimal execution plans and degraded performance. While
errors in cardinality estimation are inevitable, existing methods for
identifying sub-optimal plans -- such as metrics like Q-error, P-error, or
L1-error -- are limited to post-execution analysis, requiring complete
knowledge of true cardinalities and failing to prevent the execution of
sub-optimal plans in real-time. This paper introduces PLANSIEVE, a novel
framework that identifies sub-optimal plans during query optimization.
PLANSIEVE operates by analyzing the relative order of sub-plans generated by
the optimizer based on estimated and true cardinalities. It begins with
surrogate cardinalities from any third-party estimator and incrementally
refines these surrogates as the system processes more queries. Experimental
results on the augmented JOB-LIGHT-SCALE and STATS-CEB-SCALE workloads
demonstrate that PLANSIEVE achieves an accuracy of up to 88.7% in predicting
sub-optimal plans.
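The Q-error metric that the abstract contrasts with is simple to state: the multiplicative factor by which an estimate misses the true cardinality. A minimal implementation (the epsilon guard against zero cardinalities is a common convention, not something specified here):

```python
def q_error(estimated, true, eps=1.0):
    """Q-error: multiplicative deviation between estimated and true
    cardinalities; 1.0 means a perfect estimate. eps guards against
    zero cardinalities."""
    est = max(estimated, eps)
    tru = max(true, eps)
    return max(est / tru, tru / est)

# Over- and under-estimation by the same factor get the same penalty.
assert q_error(400, 100) == q_error(25, 100) == 4.0
```

Note that computing Q-error requires the true cardinality, which is exactly why such metrics are limited to post-execution analysis.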
|
2501.16546
|
Sample-Efficient Behavior Cloning Using General Domain Knowledge
|
cs.AI
|
Behavior cloning has shown success in many sequential decision-making tasks
by learning from expert demonstrations, yet it can be very sample-inefficient
and fail to generalize to unseen scenarios. One approach to these problems is
to introduce general domain knowledge, such that the policy can focus on the
essential features and may generalize to unseen states by applying that
knowledge. Although this knowledge is easy to acquire from experts, it is
hard to combine with learning from individual examples due to the lack of
semantic structure in neural networks and the time-consuming nature of feature
engineering. To enable learning from both general knowledge and specific
demonstration trajectories, we use a large language model's coding capability
to instantiate a policy structure based on expert domain knowledge expressed in
natural language and tune the parameters in the policy with demonstrations. We
name this approach the Knowledge Informed Model (KIM) as the structure reflects
the semantics of expert knowledge. In our experiments with lunar lander and car
racing tasks, our approach learns to solve the tasks with as few as 5
demonstrations and is robust to action noise, outperforming the baseline model
without domain knowledge. This indicates that with the help of large language
models, we can incorporate domain knowledge into the structure of the policy,
increasing sample efficiency for behavior cloning.
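As an illustration of the idea (all rule names and thresholds below are hypothetical; in the paper an LLM writes the policy structure from natural-language advice), a knowledge-informed policy might fix the rule structure and leave only a couple of thresholds to be fit from a handful of demonstrations:

```python
def make_policy(v_thresh, vx_thresh):
    """Policy skeleton encoding the advice 'fire the main engine when
    descending too fast; use side engines to cancel drift'. Only the
    thresholds are tuned; the structure carries the domain knowledge."""
    def policy(obs):
        x, y, vx, vy = obs  # simplified lander state
        if vy < -v_thresh:
            return "main"    # slow the descent
        if vx > vx_thresh:
            return "left"    # cancel rightward drift
        if vx < -vx_thresh:
            return "right"
        return "noop"
    return policy

# Fit the two thresholds by grid search against a few
# (state, expert_action) demonstration pairs.
demos = [((0.0, 1.0, 0.0, -0.9), "main"),
         ((0.0, 1.0, 0.5, -0.1), "left"),
         ((0.0, 1.0, -0.5, -0.1), "right"),
         ((0.0, 1.0, 0.0, -0.1), "noop")]

best = max(
    ((v, h) for v in (0.2, 0.5, 0.8) for h in (0.1, 0.3, 0.6)),
    key=lambda p: sum(make_policy(*p)(s) == a for s, a in demos),
)
```

Because the search space is a few scalars rather than millions of network weights, a handful of demonstrations suffices, which is the sample-efficiency argument the abstract makes.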
|
2501.16549
|
Reconciling Predictive Multiplicity in Practice
|
cs.CY cs.LG
|
Many machine learning applications predict individual probabilities, such as
the likelihood that a person develops a particular illness. Since these
probabilities are unknown, a key question is how to address situations in which
different models trained on the same dataset produce varying predictions for
certain individuals. This issue is exemplified by the model multiplicity (MM)
phenomenon, where a set of comparable models yield inconsistent predictions.
Roth, Tolbert, and Weinstein recently introduced a reconciliation procedure,
the Reconcile algorithm, to address this problem. Given two disagreeing models,
the algorithm leverages their disagreement to falsify and improve at least one
of the models. In this paper, we empirically analyze the Reconcile algorithm
using five widely-used fairness datasets: COMPAS, Communities and Crime, Adult,
Statlog (German Credit Data), and the ACS Dataset. We examine how Reconcile
fits within the model multiplicity literature and compare it to existing MM
solutions, demonstrating its effectiveness. We also discuss potential
improvements to the Reconcile algorithm theoretically and practically. Finally,
we extend the Reconcile algorithm to the setting of causal inference, given
that different competing estimators can again disagree on specific causal
average treatment effect (CATE) values. We present the first extension of the
Reconcile algorithm in causal inference, analyze its theoretical properties,
and conduct empirical tests. Our results confirm the practical effectiveness of
Reconcile and its applicability across various domains.
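A minimal sketch of one Reconcile-style round, assuming a squared-error patching variant (the actual algorithm of Roth, Tolbert, and Weinstein involves calibration bookkeeping and iterates to convergence; this only shows the core move of using disagreement to falsify one model):

```python
import numpy as np

def reconcile_round(f1, f2, y, eps=0.2):
    """Locate the region where two models disagree substantially and
    patch whichever has the larger squared error there toward the
    empirical mean outcome on that region."""
    region = np.abs(f1 - f2) > eps
    if not region.any():
        return f1, f2
    err1 = np.mean((f1[region] - y[region]) ** 2)
    err2 = np.mean((f2[region] - y[region]) ** 2)
    target = y[region].mean()
    f1, f2 = f1.copy(), f2.copy()
    if err1 >= err2:
        f1[region] = target
    else:
        f2[region] = target
    return f1, f2

y  = np.array([1.0, 1.0, 0.0, 0.0])
f1 = np.array([0.9, 0.8, 0.1, 0.2])   # well calibrated
f2 = np.array([0.2, 0.1, 0.1, 0.2])   # wrong on the first two points
g1, g2 = reconcile_round(f1, f2, y)   # f2 is patched on the disagreement region
```

The same move transfers to the CATE setting discussed above: two competing causal estimators that disagree on a subgroup expose at least one of them as improvable there.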
|
2501.16550
|
PhysAnimator: Physics-Guided Generative Cartoon Animation
|
cs.GR cs.CV
|
Creating hand-drawn animation sequences is labor-intensive and demands
professional expertise. We introduce PhysAnimator, a novel approach for
generating physically plausible yet anime-stylized animation from static
anime illustrations. Our method seamlessly integrates physics-based simulations
with data-driven generative models to produce dynamic and visually compelling
animations. To capture the fluidity and exaggeration characteristic of anime,
we perform image-space deformable body simulations on extracted mesh
geometries. We enhance artistic control by introducing customizable energy
strokes and incorporating rigging point support, enabling the creation of
tailored animation effects such as wind interactions. Finally, we extract and
warp sketches from the simulation sequence, generating a texture-agnostic
representation, and employ a sketch-guided video diffusion model to synthesize
high-quality animation frames. The resulting animations exhibit temporal
consistency and visual plausibility, demonstrating the effectiveness of our
method in creating dynamic anime-style animations.
|
2501.16551
|
PackDiT: Joint Human Motion and Text Generation via Mutual Prompting
|
cs.CV cs.AI cs.LG
|
Human motion generation has advanced markedly with the advent of diffusion
models. Most recent studies have concentrated on generating motion sequences
based on text prompts, commonly referred to as text-to-motion generation.
However, the bidirectional generation of motion and text, enabling tasks such
as motion-to-text alongside text-to-motion, has been largely unexplored. This
capability is essential for aligning diverse modalities and supports
unconditional generation. In this paper, we introduce PackDiT, the first
diffusion-based generative model capable of performing various tasks
simultaneously, including motion generation, motion prediction, text
generation, text-to-motion, motion-to-text, and joint motion-text generation.
Our core innovation leverages mutual blocks to integrate multiple diffusion
transformers (DiTs) across different modalities seamlessly. We train PackDiT on
the HumanML3D dataset, achieving state-of-the-art text-to-motion performance
with an FID score of 0.106, along with superior results in motion prediction
and in-between tasks. Our experiments further demonstrate that diffusion models
are effective for motion-to-text generation, achieving performance comparable
to that of autoregressive models.
|
2501.16558
|
Distributional Information Embedding: A Framework for Multi-bit
Watermarking
|
cs.CR cs.IT cs.LG math.IT
|
This paper introduces a novel problem, distributional information embedding,
motivated by the practical demands of multi-bit watermarking for large language
models (LLMs). Unlike traditional information embedding, which embeds
information into a pre-existing host signal, LLM watermarking actively controls
the text generation process--adjusting the token distribution--to embed a
detectable signal. We develop an information-theoretic framework to analyze
this distributional information embedding problem, characterizing the
fundamental trade-offs among three critical performance metrics: text quality,
detectability, and information rate. In the asymptotic regime, we demonstrate
that the maximum achievable rate with vanishing error corresponds to the
entropy of the LLM's output distribution and increases with higher allowable
distortion. We also characterize the optimal watermarking scheme to achieve
this rate. Extending the analysis to the finite-token case, we identify schemes
that maximize detection probability while adhering to constraints on false
alarm and distortion.
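One concrete (and deliberately crude) instance of embedding a signal by adjusting the token distribution is a keyed "green list" scheme: the sketch below maximally biases sampling toward a keyed half of the vocabulary and detects via a z-score. This is illustrative only, not the optimal scheme characterized in the paper, and the maximal bias corresponds to maximal distortion.

```python
import hashlib, math, random

def green_set(vocab, key):
    """Keyed pseudorandom half of the vocabulary."""
    return [t for t in vocab
            if hashlib.sha256(f"{key}:{t}".encode()).digest()[0] % 2 == 0]

def z_score(tokens, vocab, key):
    """Detection statistic: how far the observed green-token count sits
    above the 50% expected under unwatermarked text."""
    g = set(green_set(vocab, key))
    hits = sum(t in g for t in tokens)
    n = len(tokens)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

random.seed(0)
vocab = list(range(1000))
watermarked = random.choices(green_set(vocab, "k"), k=100)  # fully biased sampler
plain = random.choices(vocab, k=100)                        # no bias
```

Softening the bias lowers distortion but also lowers detectability, which is the quality/detectability/rate trade-off the framework formalizes.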
|
2501.16559
|
LoRA-X: Bridging Foundation Models with Training-Free Cross-Model
Adaptation
|
cs.CV
|
The rising popularity of large foundation models has led to a heightened
demand for parameter-efficient fine-tuning methods, such as Low-Rank Adaptation
(LoRA), which offer performance comparable to full model fine-tuning while
requiring only a few additional parameters tailored to the specific base model.
When such base models are deprecated and replaced, all associated LoRA modules
must be retrained, requiring access to either the original training data or a
substantial amount of synthetic data that mirrors the original distribution.
However, the original data is often inaccessible due to privacy or licensing
issues, and generating synthetic data may be impractical and insufficiently
representative. These factors complicate the fine-tuning process considerably.
To address this challenge, we introduce a new adapter, Cross-Model Low-Rank
Adaptation (LoRA-X), which enables the training-free transfer of LoRA
parameters across source and target models, eliminating the need for original
or synthetic training data. Our approach constrains the adapter to operate within
the subspace of the source base model. This constraint is necessary because our
prior knowledge of the target model is limited to its weights, and the criteria
for ensuring the adapter's transferability are restricted to the target base
model's weights and subspace. To facilitate the transfer of LoRA parameters of
the source model to a target model, we employ the adapter only in the layers of
the target model that exhibit an acceptable level of subspace similarity. Our
extensive experiments demonstrate the effectiveness of LoRA-X for text-to-image
generation, including Stable Diffusion v1.5 and Stable Diffusion XL.
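The subspace constraint can be illustrated with plain linear algebra. The sketch below is my own simplification, not the paper's exact transfer procedure: it projects a source-model update onto the top singular subspaces of a target base weight, so the transferred adapter lives where the target model is expressive.

```python
import numpy as np

def project_to_subspace(delta_w, target_w, rank):
    """Project a LoRA-style update onto the top-`rank` left and right
    singular subspaces of the target base weight."""
    U, _, Vt = np.linalg.svd(target_w, full_matrices=False)
    U_r, V_r = U[:, :rank], Vt[:rank, :]
    return U_r @ (U_r.T @ delta_w @ V_r.T) @ V_r

tw = np.diag([3.0, 2.0, 0.0])                # toy target base weight
d_in = np.zeros((3, 3)); d_in[0, 1] = 1.0    # update inside the top-2 subspace
d_out = np.zeros((3, 3)); d_out[2, 2] = 1.0  # update outside it
kept = project_to_subspace(d_in, tw, rank=2)     # survives the projection
dropped = project_to_subspace(d_out, tw, rank=2) # is zeroed out
```

This also suggests why the paper applies the adapter only in layers with sufficient subspace similarity: if the source update lies mostly outside the target subspace, the projection discards most of it.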
|
2501.16562
|
C-HDNet: A Fast Hyperdimensional Computing Based Method for Causal
Effect Estimation from Networked Observational Data
|
cs.LG stat.ME
|
We consider the problem of estimating causal effects from observational data
in the presence of network confounding. In this context, an individual's
treatment assignment and outcomes may be affected by their neighbors within the
network. We propose a novel matching technique which leverages hyperdimensional
computing to model network information and improve predictive performance. We
present results of extensive experiments which show that the proposed method
outperforms or is competitive with the state-of-the-art methods for causal
effect estimation from network data, including advanced computationally
demanding deep learning methods. Further, our technique benefits from
simplicity and speed, with roughly an order of magnitude lower runtime compared
to state-of-the-art methods, while offering similar causal effect estimation
error rates.
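Hyperdimensional computing represents information with high-dimensional bipolar vectors. A minimal sketch of the two core operations used to model a node's network context, random encoding and bundling, follows; the paper's exact encoding of network confounders will differ.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000

def random_hv():
    """Random bipolar hypervector; near-orthogonal to other random HVs."""
    return rng.choice([-1, 1], size=DIM)

def bundle(hvs):
    """Majority-vote superposition: the bundle stays similar to each
    of its constituents while staying near-orthogonal to everything else."""
    s = np.sign(np.sum(hvs, axis=0))
    s[s == 0] = 1  # break ties
    return s

def sim(a, b):
    """Normalized dot-product similarity in [-1, 1]."""
    return float(a @ b) / DIM

node_hvs = [random_hv() for _ in range(5)]
context = bundle(node_hvs[:3])   # encode a node's 3-neighbor context
```

Because all operations are element-wise on integer vectors, encoding and matching are extremely cheap, which is consistent with the roughly order-of-magnitude runtime advantage reported above.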
|
2501.16571
|
Efficient Object Detection of Marine Debris using Pruned YOLO Model
|
cs.CV cs.AI
|
Marine debris poses significant harm to marine life due to substances like
microplastics, polychlorinated biphenyls, and pesticides, which damage habitats
and poison organisms. Human-based solutions, such as diving, are increasingly
ineffective in addressing this issue. Autonomous underwater vehicles (AUVs) are
being developed for efficient sea garbage collection, with the choice of object
detection architecture being critical. This research employs the YOLOv4 model
for real-time detection of marine debris using the Trash-ICRA 19 dataset,
consisting of 7683 images at 480x320 pixels. Various modifications (pretrained
models, training from scratch, mosaic augmentation, layer freezing,
YOLOv4-tiny, and channel pruning) are compared to enhance architecture
efficiency. Channel pruning significantly improves detection speed, increasing
the base YOLOv4 frame rate from 15.19 FPS to 19.4 FPS, with only a 1.2% drop in
mean Average Precision, from 97.6% to 96.4%.
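Channel pruning, the most effective modification reported, can be sketched with a common L1-norm criterion (the study's exact pruning criterion may differ): rank a convolution's output channels by the L1 norm of their filters and keep the strongest fraction.

```python
import numpy as np

def prune_channels(conv_w, keep_ratio=0.5):
    """Prune output channels of a conv weight tensor (out_ch, in_ch, kh, kw)
    by filter L1 norm, keeping the strongest `keep_ratio` fraction."""
    norms = np.abs(conv_w).sum(axis=(1, 2, 3))
    k = max(1, int(len(norms) * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:k])  # indices of kept channels
    return conv_w[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))
w[3] *= 10.0  # make channel 3 clearly important
pruned, kept = prune_channels(w, keep_ratio=0.25)
```

Fewer channels mean fewer multiply-accumulates per layer, which is where the FPS gain comes from; the small mAP drop reflects the removed low-magnitude filters.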
|
2501.16573
|
Optimization Landscapes Learned: Proxy Networks Boost Convergence in
Physics-based Inverse Problems
|
cs.LG math.OC
|
Solving inverse problems in physics is central to understanding complex
systems and advancing technologies in various fields. Iterative optimization
algorithms, commonly used to solve these problems, often encounter local
minima, chaos, or regions with zero gradients. This is due to their
overreliance on local information and highly chaotic inverse loss landscapes
governed by underlying partial differential equations (PDEs). In this work, we
show that deep neural networks successfully replicate such complex loss
landscapes through spatio-temporal trajectory inputs. They also offer the
potential to control the underlying complexity of these chaotic loss landscapes
during training through various regularization methods. We show that optimizing
on network-smoothened loss landscapes leads to improved convergence in
predicting optimum inverse parameters over conventional gradient-based
optimizers such as BFGS on multiple challenging problems.
|
2501.16574
|
Using Database Dependencies to Constrain Approval-Based Committee Voting
in the Presence of Context
|
cs.DB
|
In Approval-Based Committee (ABC) voting, each voter lists the candidates
they approve and then a voting rule aggregates the individual approvals into a
committee that represents the collective choice of the voters. An extensively
studied class of such rules is the class of ABC scoring rules, where each voter
contributes to each possible committee a score based on the voter's approvals.
We initiate a study of ABC voting in the presence of constraints about the
general context surrounding the candidates. Specifically, we consider a
framework in which there is a relational database with information about the
candidates together with integrity constraints on the relational database
extended with a virtual relation representing the committee. For an ABC scoring
rule, the goal is to find a committee of maximum score such that all integrity
constraints hold in the extended database.
We focus on two well-known types of integrity constraints in relational
databases: tuple-generating dependencies (TGDs) and denial constraints (DCs).
The former can express, for example, desired representations of groups, while
the latter can express conflicts among candidates. ABC voting is known to be
computationally hard without integrity constraints, except for the case of
approval voting where it is tractable. We show that integrity constraints make
the problem NP-hard for approval voting, but we also identify certain tractable
cases when key constraints are used. We then present an implementation of the
framework via a reduction to Mixed Integer Programming (MIP) that supports
arbitrary ABC scoring rules, TGDs and DCs. We devise heuristics for optimizing
the resulting MIP, and describe an empirical study that illustrates the
effectiveness of the optimized MIP over databases in three different domains.
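On a toy instance the constrained problem is easy to state by brute force; the paper's MIP reduction solves the same objective at scale and for arbitrary ABC scoring rules, TGDs, and DCs. A sketch with approval scoring and one denial constraint (ballots are hypothetical):

```python
from itertools import combinations

def best_committee(approvals, k, conflicts):
    """Brute-force approval voting under a denial constraint: maximize
    the approval score subject to no conflicting pair serving together."""
    candidates = sorted({c for ballot in approvals for c in ballot})
    best, best_score = None, -1
    for committee in combinations(candidates, k):
        if any({a, b} <= set(committee) for a, b in conflicts):
            continue  # denial constraint violated
        score = sum(len(set(committee) & ballot) for ballot in approvals)
        if score > best_score:
            best, best_score = committee, score
    return best, best_score

approvals = [{"a", "b"}, {"a", "b"}, {"a", "c"}, {"d"}]
conflicts = [("a", "b")]  # a and b cannot serve together
committee, score = best_committee(approvals, k=2, conflicts=conflicts)
```

Without the conflict, {a, b} would score 5; the constraint forces a different committee, illustrating how context can override the unconstrained optimum.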
|
2501.16577
|
Generative AI Uses and Risks for Knowledge Workers in a Science
Organization
|
cs.HC cs.AI
|
Generative AI could enhance scientific discovery by supporting knowledge
workers in science organizations. However, the real-world applications and
perceived concerns of generative AI use in these organizations are uncertain.
In this paper, we report on a collaborative study with a US national laboratory
with employees spanning Science and Operations about their use of generative AI
tools. We surveyed 66 employees, interviewed a subset (N=22), and measured
early adoption of an internal generative AI interface called Argo lab-wide. We
have four findings: (1) Argo usage data shows small but increasing use by
Science and Operations employees; common current and envisioned use cases for
generative AI in this context conceptually fall into either a (2) copilot or
(3) workflow agent modality; and (4) concerns include sensitive data security,
academic publishing, and job impacts. Based on our findings, we make
recommendations for generative AI use in science and other organizations.
|
2501.16581
|
DialUp! Modeling the Language Continuum by Adapting Models to Dialects
and Dialects to Models
|
cs.CL
|
Most of the world's languages and dialects are low-resource and lack support
in mainstream machine translation (MT) models. However, many of them have a
closely-related high-resource language (HRL) neighbor, and differ in
linguistically regular ways from it. This underscores the importance of model
robustness to dialectical variation and cross-lingual generalization to the HRL
dialect continuum. We present DialUp, consisting of a training-time technique
for adapting a pretrained model to dialectical data (M->D), and an
inference-time intervention adapting dialectical data to the model expertise
(D->M). M->D induces model robustness to potentially unseen and unknown
dialects by exposure to synthetic data exemplifying linguistic mechanisms of
dialectical variation, whereas D->M treats dialectical divergence for known
target dialects. These methods show considerable performance gains for several
dialects from four language families, and modest gains for two other language
families. We also conduct feature and error analyses, which show that language
varieties with low baseline MT performance are more likely to benefit from
these approaches.
|
2501.16583
|
Directing Mamba to Complex Textures: An Efficient Texture-Aware State
Space Model for Image Restoration
|
cs.CV
|
Image restoration aims to recover details and enhance contrast in degraded
images. With the growing demand for high-quality imaging (e.g., 4K and
8K), achieving a balance between restoration quality and computational
efficiency has become increasingly critical. Existing methods, primarily based
on CNNs, Transformers, or their hybrid approaches, apply uniform deep
representation extraction across the image. However, these methods often
struggle to effectively model long-range dependencies and largely overlook the
spatial characteristics of image degradation (regions with richer textures tend
to suffer more severe damage), making it hard to achieve the best trade-off
between restoration quality and efficiency. To address these issues, we propose
a novel texture-aware image restoration method, TAMambaIR, which simultaneously
perceives image textures and achieves a trade-off between performance and
efficiency. Specifically, we introduce a novel Texture-Aware State Space Model,
which enhances texture awareness and improves efficiency by modulating the
transition matrix of the state-space equation and focusing on regions with
complex textures. Additionally, we design a Multi-Directional Perception
Block to improve multi-directional receptive fields while maintaining low
computational overhead. Extensive experiments on benchmarks for image
super-resolution, deraining, and low-light image enhancement demonstrate that
TAMambaIR achieves state-of-the-art performance with significantly improved
efficiency, establishing it as a robust and efficient framework for image
restoration.
|
2501.16587
|
HopCast: Calibration of Autoregressive Dynamics Models
|
cs.LG
|
Deep learning models are often trained to approximate dynamical systems that
can be modeled using differential equations. These models are optimized to
predict one step ahead and produce calibrated predictions if the predictive
model can quantify uncertainty, such as deep ensembles. At inference time,
multi-step predictions are generated via autoregression, which needs a sound
uncertainty propagation method (e.g., Trajectory Sampling) to produce
calibrated multi-step predictions. This paper introduces an approach named
HopCast that uses the Modern Hopfield Network (MHN) to learn the residuals of a
deterministic model that approximates the dynamical system. The MHN predicts
the density of residuals based on a context vector at any timestep during
autoregression. This approach produces calibrated multi-step predictions
without uncertainty propagation and turns a deterministic model into a
calibrated probabilistic model. This work is also the first to benchmark
existing uncertainty propagation methods based on calibration errors with deep
ensembles for multi-step predictions.
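Modern Hopfield retrieval is closely related to softmax attention, which makes the residual-prediction step easy to sketch. The toy contexts and residuals below are my own; the real model learns key/query embeddings:

```python
import numpy as np

def mhn_retrieve(query, keys, values, beta=4.0):
    """Modern-Hopfield-style retrieval: softmax-weighted average of
    stored residuals, sharpened by beta. With large beta this snaps
    to the residual of the closest stored context."""
    scores = beta * keys @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ values

keys = np.array([[1.0, 0.0], [0.0, 1.0]])   # stored context vectors
values = np.array([0.5, -0.5])              # residuals observed in each context
r = mhn_retrieve(np.array([1.0, 0.1]), keys, values, beta=8.0)
```

At each autoregressive step the retrieved residual distribution corrects the deterministic model's prediction, which is how calibration is obtained without an explicit uncertainty-propagation scheme.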
|
2501.16588
|
Fine-Tuned Language Models as Space Systems Controllers
|
cs.LG cs.SY eess.SY
|
Large language models (LLMs), or foundation models (FMs), are pretrained
transformers that coherently complete sentences auto-regressively. In this
paper, we show that LLMs can control simplified space systems after some
additional training, called fine-tuning. We look at relatively small language
models, ranging between 7 and 13 billion parameters. We focus on four problems:
a three-dimensional spring toy problem, low-thrust orbit transfer, low-thrust
cislunar control, and powered descent guidance. The fine-tuned LLMs are capable
of controlling systems by generating sufficiently accurate outputs that are
multi-dimensional vectors with up to 10 significant digits. We show that for
several problems the amount of data required to perform fine-tuning is smaller
than what is generally required of traditional deep neural networks (DNNs), and
that fine-tuned LLMs are good at generalizing outside of the training dataset.
Further, the same LLM can be fine-tuned with data from different problems, with
only minor performance degradation with respect to LLMs trained for a single
application. This work is intended as a first step towards the development of a
general space systems controller.
|
2501.16590
|
Benchmarking Model Predictive Control and Reinforcement Learning Based
Control for Legged Robot Locomotion in MuJoCo Simulation
|
cs.RO cs.SY eess.SY
|
Model Predictive Control (MPC) and Reinforcement Learning (RL) are two
prominent strategies for controlling legged robots, each with unique strengths.
RL learns control policies through system interaction, adapting to various
scenarios, whereas MPC relies on a predefined mathematical model to solve
optimization problems in real-time. Despite their widespread use, there is a
lack of direct comparative analysis under standardized conditions. This work
addresses this gap by benchmarking MPC and RL controllers on a Unitree Go1
quadruped robot within the MuJoCo simulation environment, focusing on a
standardized task: straight walking at a constant velocity. Performance is
evaluated based on disturbance rejection, energy efficiency, and terrain
adaptability. The results show that RL excels in handling disturbances and
maintaining energy efficiency but struggles with generalization to new terrains
due to its dependence on learned policies tailored to specific environments. In
contrast, MPC shows enhanced recovery capabilities from larger perturbations by
leveraging its optimization-based approach, allowing for a balanced
distribution of control efforts across the robot's joints. The results provide
a clear understanding of the advantages and limitations of both RL and MPC,
offering insights into selecting an appropriate control strategy for legged
robotic applications.
|
2501.16591
|
Applying Ensemble Models based on Graph Neural Network and Reinforcement
Learning for Wind Power Forecasting
|
cs.LG cs.AI
|
Accurately predicting the wind power output of a wind farm across various
time scales utilizing Wind Power Forecasting (WPF) is a critical issue in wind
power trading and utilization. The WPF problem remains unresolved due to
numerous influencing variables, such as wind speed, temperature, latitude, and
longitude. Furthermore, achieving high prediction accuracy is crucial for
maintaining electric grid stability and ensuring supply security. In this
paper, we model all wind turbines within a wind farm as graph nodes in a graph
built by their geographical locations. Accordingly, we propose an ensemble
model based on graph neural networks and reinforcement learning (EMGRL) for
WPF. Our approach includes: (1) applying graph neural networks to capture the
time-series data from neighboring wind farms relevant to the target wind farm;
(2) establishing a general state embedding that integrates the target wind
farm's data with the historical performance of base models on the target wind
farm; (3) ensembling and leveraging the advantages of all base models through
an actor-critic reinforcement learning framework for WPF.
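Step (1) presupposes a turbine graph built from geographical locations. A standard construction (an assumption on my part; the abstract does not specify one) is a symmetrized k-nearest-neighbor adjacency:

```python
import numpy as np

def knn_adjacency(coords, k=2):
    """Connect each turbine to its k nearest neighbors by Euclidean
    distance on the coordinates, then symmetrize."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # no self-loops
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, np.argsort(d[i])[:k]] = True
    return adj | adj.T  # undirected graph

coords = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [10.0, 0.0]])
adj = knn_adjacency(coords, k=1)  # chain 0-1-2-3; the outlier attaches to 2
```

The resulting adjacency is what a graph neural network would message-pass over to let neighboring turbines' time series inform the target turbine's forecast.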
|
2501.16599
|
Toward Safe Integration of UAM in Terminal Airspace: UAM Route
Feasibility Assessment using Probabilistic Aircraft Trajectory Prediction
|
cs.LG
|
Integrating Urban Air Mobility (UAM) into airspace managed by Air Traffic
Control (ATC) poses significant challenges, particularly in congested terminal
environments. This study proposes a framework to assess the feasibility of UAM
route integration using probabilistic aircraft trajectory prediction. By
leveraging conditional Normalizing Flows, the framework predicts short-term
trajectory distributions of conventional aircraft, enabling UAM vehicles to
dynamically adjust speeds and maintain safe separations. The methodology was
applied to the airspace over the Seoul metropolitan area, encompassing interactions
between UAM and conventional traffic at multiple altitudes and lanes. The
results reveal that different physical locations of lanes and routes experience
varying interaction patterns and encounter dynamics. For instance, Lane 1 at
lower altitudes (1,500 ft and 2,000 ft) exhibited minimal interactions with
conventional aircraft, resulting in the largest separations and the most stable
delay proportions. In contrast, Lane 4 near the airport experienced more
frequent and complex interactions due to its proximity to departing traffic.
The limited trajectory data for departing aircraft in this region occasionally
led to tighter separations and increased operational challenges. This study
underscores the potential of predictive modeling in facilitating UAM
integration while highlighting critical trade-offs between safety and
efficiency. The findings contribute to refining airspace management strategies
and offer insights for scaling UAM operations in complex urban environments.
|
2501.16600
|
The Power of Perturbation under Sampling in Solving Extensive-Form Games
|
cs.GT cs.LG cs.MA
|
This paper investigates how perturbation does and does not improve the
Follow-the-Regularized-Leader (FTRL) algorithm in imperfect-information
extensive-form games. Perturbing the expected payoffs guarantees that the FTRL
dynamics reach an approximate equilibrium, and proper adjustments of the
magnitude of the perturbation lead to a Nash equilibrium (last-iterate
convergence). This approach is robust even when payoffs are estimated using
sampling -- as is the case for large games -- while the optimistic approach
often becomes unstable. Building upon those insights, we first develop a
general framework for perturbed FTRL algorithms under sampling. We
then empirically show that in the last-iterate sense, the perturbed FTRL
consistently outperforms the non-perturbed FTRL. We further identify a
divergence function that reduces the variance of the estimates for perturbed
payoffs, with which it significantly outperforms the prior algorithms on Leduc
poker (whose structure is more asymmetric in a sense than that of the other
benchmark games) and consistently exhibits smooth convergence behavior on all
the benchmark games.
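The core mechanism, perturbing payoffs toward an anchor strategy so that FTRL's last iterate converges instead of cycling, can be sketched on matching pennies. The squared-distance perturbation, uniform anchor, and step sizes below are my illustrative choices, not the paper's exact divergence function:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def perturbed_ftrl(A, steps=10_000, eta=0.02, mu=0.2):
    """FTRL (multiplicative weights) on a zero-sum matrix game with
    payoffs perturbed toward the uniform anchor. The perturbation
    contracts the dynamics to an approximate equilibrium in the last
    iterate, where unperturbed FTRL would cycle."""
    n, m = A.shape
    anchor_x, anchor_y = np.full(n, 1 / n), np.full(m, 1 / m)
    gx, gy = np.zeros(n), np.zeros(m)
    gx[0] = 2.0  # asymmetric start so convergence is non-trivial
    for _ in range(steps):
        x, y = softmax(eta * gx), softmax(eta * gy)
        gx += A @ y - mu * (x - anchor_x)       # maximizer's perturbed payoff
        gy += -A.T @ x - mu * (y - anchor_y)    # minimizer's perturbed payoff
    return softmax(eta * gx), softmax(eta * gy)

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # matching pennies; Nash is uniform
x, y = perturbed_ftrl(A)
```

By symmetry the perturbed equilibrium here coincides with the Nash equilibrium; in general the perturbation magnitude must be adjusted over time to recover an exact Nash point, as the abstract notes.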
|
2501.16605
|
Impact and influence of modern AI in metadata management
|
cs.DB cs.AI
|
Metadata management plays a critical role in data governance, resource
discovery, and decision-making in the data-driven era. While traditional
metadata approaches have primarily focused on organization, classification, and
resource reuse, the integration of modern artificial intelligence (AI)
technologies has significantly transformed these processes. This paper
investigates both traditional and AI-driven metadata approaches by examining
open-source solutions, commercial tools, and research initiatives. A
comparative analysis of traditional and AI-driven metadata management methods
is provided, highlighting existing challenges and their impact on
next-generation datasets. The paper also presents an innovative AI-assisted
metadata management framework designed to address these challenges. This
framework leverages more advanced modern AI technologies to automate metadata
generation, enhance governance, and improve the accessibility and usability of
modern datasets. Finally, the paper outlines future directions for research and
development, proposing opportunities to further advance metadata management in
the context of AI-driven innovation and complex datasets.
|
2501.16606
|
Governing the Agent-to-Agent Economy of Trust via Progressive
Decentralization
|
cs.MA cs.AI
|
Current approaches to AI governance often fall short in anticipating a future
where AI agents manage critical tasks, such as financial operations,
administrative functions, and beyond. As AI agents may eventually delegate
tasks among themselves to optimize efficiency, understanding the foundational
principles of human value exchange could offer insights into how AI-driven
economies might operate. Just as trust and value exchange are central to human
interactions in open marketplaces, they may also be critical for enabling
secure and efficient interactions among AI agents. While cryptocurrencies could
serve as the foundation for monetizing value exchange in a collaboration and
delegation dynamic among AI agents, a critical question remains: how can these
agents reliably determine whom to trust, and how can humans ensure meaningful
oversight and control as an economy of AI agents scales and evolves? This paper
is a call for a collective exploration of cryptoeconomic incentives, which can
help design decentralized governance systems that allow AI agents to
autonomously interact and exchange value while ensuring human oversight via
progressive decentralization. Toward this end, I propose a research agenda to
address the question of agent-to-agent trust using AgentBound Tokens, which are
non-transferable, non-fungible tokens uniquely tied to individual AI agents,
akin to Soulbound tokens for humans in Web3. By staking ABTs as collateral for
autonomous actions within an agent-to-agent network via a proof-of-stake
mechanism, agents may be incentivized towards ethical behavior, and penalties
for misconduct can be automatically enforced.
|
2501.16607
|
MCTS-SQL: An Effective Framework for Text-to-SQL with Monte Carlo Tree
Search
|
cs.DB cs.AI cs.CL cs.PL
|
Text-to-SQL is a fundamental and longstanding problem in the NLP area, aiming
at converting natural language queries into SQL, enabling non-expert users to
operate databases. Recent advances in LLM have greatly improved text-to-SQL
performance. However, challenges persist, especially when dealing with complex
user queries. Current approaches (e.g., COT prompting and multi-agent
frameworks) rely on the ability of models to plan and generate SQL
autonomously, but controlling performance remains difficult. In addition, LLMs
are still prone to hallucinations. To alleviate these challenges, we designed
MCTS-SQL, a novel framework that guides SQL generation iteratively. The
approach generates SQL queries through Monte Carlo Tree Search (MCTS) and uses
a heuristic self-refinement mechanism to enhance accuracy and reliability. Key
components include
a schema selector for extracting relevant information and an MCTS-based
generator for iterative query refinement. Experimental results from the SPIDER
and BIRD benchmarks show that MCTS-SQL achieves state-of-the-art performance.
Specifically, on the BIRD development dataset, MCTS-SQL achieves an Execution
(EX) accuracy of 69.40% using GPT-4o as the base model and a significant
improvement when dealing with challenging tasks, with an EX of 51.48%, which is
3.41% higher than the existing method.
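The MCTS loop the abstract relies on (selection, expansion, simulation, backpropagation) can be sketched on a toy search problem; the bit-string domain, reward, and all names below are illustrative stand-ins, not the MCTS-SQL implementation:

```python
import math
import random

# Toy stand-in for iterative query construction: build a 6-bit string,
# with reward = fraction of 1 bits (a proxy for "execution accuracy").
DEPTH = 6

class Node:
    def __init__(self, state, parent=None):
        self.state = state        # tuple of choices made so far
        self.parent = parent
        self.children = {}        # action -> child Node
        self.visits = 0
        self.value = 0.0          # running mean reward

def rollout(state, rng):
    bits = list(state)
    while len(bits) < DEPTH:
        bits.append(rng.choice((0, 1)))
    return sum(bits) / DEPTH

def uct_child(node, c=1.4):
    # UCT: exploit high-value children, explore rarely visited ones
    return max(node.children.values(),
               key=lambda ch: ch.value
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(n_iters=2000, seed=0):
    rng = random.Random(seed)
    root = Node(())
    for _ in range(n_iters):
        node = root
        # 1. Selection: descend through fully expanded internal nodes
        while len(node.state) < DEPTH and len(node.children) == 2:
            node = uct_child(node)
        # 2. Expansion: add one untried action
        if len(node.state) < DEPTH:
            action = rng.choice([a for a in (0, 1)
                                 if a not in node.children])
            node.children[action] = Node(node.state + (action,), parent=node)
            node = node.children[action]
        # 3. Simulation: random completion of the partial solution
        reward = rollout(node.state, rng)
        # 4. Backpropagation: update mean value along the path
        while node is not None:
            node.visits += 1
            node.value += (reward - node.value) / node.visits
            node = node.parent
    # Extract the most-visited complete path
    node = root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
    return node.state
```

In MCTS-SQL, the rollout would presumably correspond to completing and executing a candidate SQL query, with the reward derived from execution feedback and the self-refinement step.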
|
2501.16608
|
Unsupervised Domain Adaptation with Dynamic Clustering and Contrastive
Refinement for Gait Recognition
|
cs.CV
|
Gait recognition is an emerging identification technology that distinguishes
individuals at long distances by analyzing individual walking patterns.
Traditional techniques rely heavily on large-scale labeled datasets, which
incur high costs and significant labeling challenges. Recently, researchers
have explored unsupervised gait recognition with clustering-based unsupervised
domain adaptation methods and achieved notable success. However, these methods
directly use pseudo-labels generated by clustering and neglect the pseudo-label
noise caused by domain differences, which degrades the model training process.
To mitigate these issues, we propose a novel model called GaitDCCR, which aims
to reduce the influence of noisy pseudo-labels on clustering and model
training. Our approach can be divided into two main stages: a clustering stage
and a training stage. In the clustering stage, we propose Dynamic Cluster
and training stage. In the clustering stage, we propose Dynamic Cluster
Parameters (DCP) and Dynamic Weight Centroids (DWC) to improve the efficiency
of clustering and obtain reliable cluster centroids. In the training stage, we
employ the classical teacher-student structure and propose Confidence-based
Pseudo-label Refinement (CPR) and Contrastive Teacher Module (CTM) to encourage
noisy samples to converge towards clusters containing their true identities.
Extensive experiments on public gait datasets have demonstrated that our simple
and effective method significantly enhances the performance of unsupervised
gait recognition, laying the foundation for its application in the real
world. The code is available at https://github.com/YanSun-github/GaitDCCR
|
2501.16609
|
CowPilot: A Framework for Autonomous and Human-Agent Collaborative Web
Navigation
|
cs.AI cs.CL cs.HC
|
While much work on web agents emphasizes the promise of autonomously
performing tasks on behalf of users, in reality, agents often fall short on
complex tasks in real-world contexts and modeling user preference. This
presents an opportunity for humans to collaborate with the agent and leverage
the agent's capabilities effectively. We propose CowPilot, a framework
supporting autonomous as well as human-agent collaborative web navigation, and
evaluation across task success and task efficiency. CowPilot reduces the number
of steps humans need to perform by allowing agents to propose next steps, while
users are able to pause, reject, or take alternative actions. During execution,
users can interleave their actions with the agent by overriding suggestions or
resuming agent control when needed. We conducted case studies on five common
websites and found that the human-agent collaborative mode achieves the highest
success rate of 95% while requiring humans to perform only 15.2% of the total
steps. Even with human interventions during task execution, the agent
successfully drives up to half of task success on its own. CowPilot can serve
as a useful tool for data collection and agent evaluation across websites,
which we believe will enable research in how users and agents can work
together. Video demonstrations are available at
https://oaishi.github.io/cowpilot.html
|
2501.16610
|
Embracing Reconfigurable Antennas in the Tri-hybrid MIMO Architecture
for 6G
|
cs.IT cs.ET cs.NI math.IT
|
Multiple-input multiple-output (MIMO) communication has led to immense
enhancements in data rates and efficient spectrum management. The evolution of
MIMO has been accompanied by increased hardware complexity and array sizes,
causing system power consumption to rise as a result. Despite past advances in
power-efficient hybrid architectures, new solutions are needed to enable
extremely large-scale MIMO deployments for 6G and beyond. In this paper, we
introduce a novel architecture that integrates low-power reconfigurable
antennas with both digital and analog precoding. This \emph{tri-hybrid}
approach addresses key limitations in traditional and hybrid MIMO systems by
reducing power consumption and adding a new layer of signal processing. We
provide a comprehensive analysis of the proposed architecture and compare its
performance with existing solutions, including fully-digital and hybrid MIMO
systems. The results demonstrate significant improvements in energy efficiency,
highlighting the potential of the tri-hybrid system to meet the growing demands
of future wireless networks. We also discuss several design and implementation
challenges, including the need for technological advancements in reconfigurable
array hardware and tunable antenna parameters.
|
2501.16612
|
CascadeV: An Implementation of Wurstchen Architecture for Video
Generation
|
cs.CV
|
Recently, with the tremendous success of diffusion models in the field of
text-to-image (T2I) generation, increasing attention has been directed toward
their potential in text-to-video (T2V) applications. However, the computational
demands of diffusion models pose significant challenges, particularly in
generating high-resolution videos with high frame rates. In this paper, we
propose CascadeV, a cascaded latent diffusion model (LDM) that is capable of
producing state-of-the-art 2K resolution videos. Experiments demonstrate that
our cascaded model achieves a higher compression ratio, substantially reducing
the computational challenges associated with high-quality video generation. We
also implement a spatiotemporal alternating grid 3D attention mechanism, which
effectively integrates spatial and temporal information, ensuring superior
consistency across the generated video frames. Furthermore, our model can be
cascaded with existing T2V models, theoretically enabling a 4$\times$ increase
in resolution or frames per second without any fine-tuning. Our code is
available at https://github.com/bytedance/CascadeV.
|
2501.16613
|
Safe Reinforcement Learning for Real-World Engine Control
|
cs.LG cs.AI
|
This work introduces a toolchain for applying Reinforcement Learning (RL),
specifically the Deep Deterministic Policy Gradient (DDPG) algorithm, in
safety-critical real-world environments. As an exemplary application, transient
load control is demonstrated on a single-cylinder internal combustion engine
testbench in Homogeneous Charge Compression Ignition (HCCI) mode, which offers
high thermal efficiency and low emissions. However, HCCI poses challenges for
traditional control methods due to its nonlinear, autoregressive, and
stochastic nature. RL provides a viable solution; however, safety concerns,
such as excessive pressure rise rates, must be addressed when applying RL to
HCCI. A single unsuitable control input can severely damage the engine or cause
misfiring and shutdown. Additionally, operating limits are not known a priori
and must be determined experimentally. To mitigate these risks, real-time
safety monitoring based on the k-nearest neighbor algorithm is implemented,
enabling safe interaction with the testbench. The feasibility of this approach
is demonstrated as the RL agent learns a control policy through interaction
with the testbench. A root mean square error of 0.1374 bar is achieved for the
indicated mean effective pressure, comparable to neural network-based
controllers from the literature. The toolchain's flexibility is further
demonstrated by adapting the agent's policy to increase ethanol energy shares,
promoting renewable fuel use while maintaining safety. This RL approach
addresses the longstanding challenge of applying RL to safety-critical
real-world environments. The developed toolchain, with its adaptability and
safety mechanisms, paves the way for future applicability of RL in engine
testbenches and other safety-critical settings.
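A minimal sketch of the kind of k-nearest-neighbor safety check described above: a proposed operating point is accepted only if it lies close enough to previously observed safe points. The feature space, k, and radius here are hypothetical placeholders, not the testbench's actual monitor configuration:

```python
import math

class KNNSafetyMonitor:
    """Reject a proposed operating point unless its k-th nearest known-safe
    point lies within a radius. Illustrative sketch of k-NN-based real-time
    safety monitoring; thresholds and features are hypothetical."""

    def __init__(self, k=3, radius=0.5):
        self.k = k
        self.radius = radius
        self.safe_points = []

    def record_safe(self, point):
        # Called after an action was executed without violating limits
        self.safe_points.append(tuple(point))

    def is_safe(self, point):
        # Too little experience: be conservative and reject
        if len(self.safe_points) < self.k:
            return False
        dists = sorted(math.dist(point, p) for p in self.safe_points)
        return dists[self.k - 1] <= self.radius
```

In such a setup, the RL agent's proposed action would only reach the engine if `is_safe` returns True; otherwise a fallback controller could take over.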
|
2501.16614
|
FUNU: Boosting Machine Unlearning Efficiency by Filtering Unnecessary
Unlearning
|
cs.LG
|
Machine unlearning is an emerging field that selectively removes specific
data samples from a trained model. This capability is crucial for addressing
privacy concerns, complying with data protection regulations, and correcting
errors or biases introduced by certain data. Unlike traditional machine
learning, where models are typically static once trained, machine unlearning
facilitates dynamic updates that enable the model to ``forget'' information
without requiring complete retraining from scratch. There are various machine
unlearning methods, some of which are more time-efficient when data removal
requests are few.
To decrease the execution time of such machine unlearning methods, we aim to
reduce the size of data removal requests based on the fundamental assumption
that the removal of certain data would not result in a distinguishable
retrained model. We first propose the concept of unnecessary unlearning, which
indicates that the model would not change noticeably after removing some data
points. Subsequently, we review existing solutions that can be used to solve
our problem. We highlight their limitations in adaptability to different
unlearning scenarios and their reliance on manually selected parameters. We
consequently put forward FUNU, a method to identify data points that lead to
unnecessary unlearning. FUNU circumvents the limitations of existing solutions.
The idea is to discover data points within the removal requests that have
similar neighbors in the remaining dataset. We utilize a reference model to set
parameters for finding neighbors, inspired by work on model memorization.
We provide a theoretical analysis of the privacy guarantee offered by FUNU and
conduct extensive experiments to validate its efficacy.
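The neighbor-discovery idea behind FUNU can be sketched as a simple filter over removal requests; the distance threshold and neighbor count below are illustrative placeholders for the parameters FUNU derives from a reference model:

```python
import math

def filter_unnecessary(removal_requests, remaining, eps=0.3, min_neighbors=2):
    """Drop removal requests that have at least `min_neighbors` close
    neighbors (within `eps`) in the remaining data, on the assumption that
    removing such points leaves the retrained model indistinguishable.
    The fixed threshold is an illustrative stand-in for FUNU's
    reference-model-calibrated parameters."""
    still_needed = []
    for x in removal_requests:
        n_near = sum(1 for y in remaining if math.dist(x, y) <= eps)
        if n_near < min_neighbors:
            still_needed.append(x)  # no close substitutes: really unlearn
    return still_needed
```

Requests that survive the filter are then passed to the underlying unlearning method, shrinking its workload.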
|
2501.16615
|
Sparse Autoencoders Trained on the Same Data Learn Different Features
|
cs.LG
|
Sparse autoencoders (SAEs) are a useful tool for uncovering
human-interpretable features in the activations of large language models
(LLMs). While some expect SAEs to find the true underlying features used by a
model, our research shows that SAEs trained on the same model and data,
differing only in the random seed used to initialize their weights, identify
different sets of features. For example, in an SAE with 131K latents trained on
a feedforward network in Llama 3 8B, only 30% of the features were shared
across different seeds. We observed this phenomenon across multiple layers of
three different LLMs, two datasets, and several SAE architectures. While ReLU
SAEs trained with the L1 sparsity loss showed greater stability across seeds,
SAEs using the state-of-the-art TopK activation function were more
seed-dependent, even when controlling for the level of sparsity. Our results
suggest that the set of features uncovered by an SAE should be viewed as a
pragmatically useful decomposition of activation space, rather than an
exhaustive and universal list of features "truly used" by the model.
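For reference, the TopK activation mentioned above can be sketched in a few lines: keep the k largest pre-activations per input and zero the rest, so exactly k latents fire. This is a generic sketch, not any specific SAE codebase:

```python
def topk_activation(pre_acts, k):
    """TopK SAE activation: keep the k largest pre-activations and zero the
    rest (ties at the threshold may keep more; real implementations break
    ties by index). Assumes 0 < k <= len(pre_acts)."""
    if k <= 0:
        return [0.0] * len(pre_acts)
    threshold = sorted(pre_acts, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in pre_acts]
```

Note the seed sensitivity the abstract reports concerns which latent directions a trained SAE converges to, not this nonlinearity itself; TopK merely enforces the sparsity level directly instead of via an L1 penalty.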
|
2501.16616
|
Few-Shot Optimized Framework for Hallucination Detection in
Resource-Limited NLP Systems
|
cs.CL
|
Hallucination detection in text generation remains an ongoing struggle for
natural language processing (NLP) systems, frequently resulting in unreliable
outputs in applications such as machine translation and definition modeling.
Existing methods struggle with data scarcity and the limitations of unlabeled
datasets, as highlighted by the SHROOM shared task at SemEval-2024. In this
work, we propose a novel framework to address these challenges, introducing
DeepSeek Few-shot optimization to enhance weak label generation through
iterative prompt engineering. By restructuring the data to align with
instruction-following generative models, we achieved high-quality annotations
that considerably enhanced the performance of downstream models. We further
fine-tuned the
Mistral-7B-Instruct-v0.3 model on these optimized annotations, enabling it to
accurately detect hallucinations in resource-limited settings. Combining this
fine-tuned model with ensemble learning strategies, our approach achieved 85.5%
accuracy on the test set, setting a new benchmark for the SHROOM task. This
study demonstrates the effectiveness of data restructuring, few-shot
optimization, and fine-tuning in building scalable and robust hallucination
detection frameworks for resource-constrained NLP systems.
|
2501.16617
|
Predicting 3D representations for Dynamic Scenes
|
cs.CV
|
We present a novel framework for dynamic radiance field prediction given
monocular video streams. Unlike previous methods that primarily focus on
predicting future frames, our method goes a step further by generating explicit
3D representations of the dynamic scene. The framework builds on two core
designs. First, we adopt an ego-centric unbounded triplane to explicitly
represent the dynamic physical world. Second, we develop a 4D-aware transformer
to aggregate features from monocular videos to update the triplane. Coupling
these two designs enables us to train the proposed model with large-scale
monocular videos in a self-supervised manner. Our model achieves top results in
dynamic radiance field prediction on NVIDIA dynamic scenes, demonstrating its
strong performance on 4D physical world modeling. Besides, our model shows a
superior generalizability to unseen scenarios. Notably, we find that our
approach exhibits emergent capabilities for geometry and semantic learning.
|
2501.16619
|
SHIELD: Secure Host-Independent Extensible Logging for SATA/Network
Storage Towards Ransomware Detection
|
cs.CR cs.SY eess.SY
|
As malware such as ransomware becomes sophisticated, the ability to find and
neutralize it requires more robust and tamper-resistant solutions. Current
methods rely on data from compromised hosts, lack hardware isolation, and
cannot detect emerging threats. To address these limitations, we introduce
SHIELD - a detection architecture leveraging FPGA-based open-source SATA and
Network Block Device (NBD) technology to provide off-host, tamper-proof
measurements for continuous observation of disk activity for software executing
on a target device. SHIELD provides three distinct contributions: It (1)
develops a framework to obtain and analyze multi-level hardware metrics at NBD,
FPGA, and SATA storage levels, and shows their ability to differentiate between
harmless and malicious software; (2) broadens the functionality of an
open-source FPGA-driven SATA Host Bus Adapter (HBA) to offer complete data
storage capabilities through NBD without relying on the host system; and (3)
provides a foundation for using the methodology and metrics in automated
machine learning-assisted detection and ASIC integration for advanced
mitigation capabilities in data storage devices. SHIELD analyzes 10 benign
programs and 10 modern ransomware families to illustrate its capacity for
real-time monitoring and use in distinguishing between ransomware and benign
software. Experimental evidence shows SHIELD's robust host-independent and
hardware-assisted metrics are a basis for detection, allowing it to observe
program execution and detect malicious activities at the storage level.
|
2501.16621
|
Chinese Stock Prediction Based on a Multi-Modal Transformer Framework:
Macro-Micro Information Fusion
|
cs.LG cs.AI
|
This paper proposes an innovative Multi-Modal Transformer framework
(MMF-Trans) designed to significantly improve the prediction accuracy of the
Chinese stock market by integrating multi-source heterogeneous information
including macroeconomy, micro-market, financial text, and event knowledge. The
framework consists of four core modules: (1) A four-channel parallel encoder
that processes technical indicators, financial text, macro data, and event
knowledge graph respectively for independent feature extraction of multi-modal
data; (2) A dynamic gated cross-modal fusion mechanism that adaptively learns
the importance of different modalities through differentiable weight allocation
for effective information integration; (3) A time-aligned mixed-frequency
processing layer that uses an innovative position encoding method to
effectively fuse data of different time frequencies and solve the time
alignment problem of heterogeneous data; (4) A graph attention-based event
impact quantification module that captures the dynamic impact of events on the
market through the event knowledge graph and quantifies the event impact
coefficient. We introduce a hybrid-frequency Transformer and Event2Vec
algorithm to effectively fuse data of different frequencies and quantify the
event impact. Experimental results show that in the prediction task of CSI 300
constituent stocks, the root mean square error (RMSE) of the MMF-Trans
framework is reduced by 23.7% compared to the baseline model, the event
response prediction accuracy is improved by 41.2%, and the Sharpe ratio is
improved by 32.6%.
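A dynamic gated fusion of the kind described in module (2) can be sketched as a softmax over per-modality gate logits followed by a weighted sum; in the real model the logits would be produced by learnable layers, and all names here are illustrative, not the MMF-Trans architecture:

```python
import math

def gated_fusion(modality_feats, gate_logits):
    """Softmax-gated fusion of per-modality feature vectors: gate logits
    (one per modality) yield adaptive weights, and the fused representation
    is the weighted sum of the modality features."""
    m = max(gate_logits)
    exps = [math.exp(g - m) for g in gate_logits]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(modality_feats[0])
    fused = [sum(w * feats[i] for w, feats in zip(weights, modality_feats))
             for i in range(dim)]
    return fused, weights
```

Because the softmax is differentiable, the gate logits can be trained end-to-end with the rest of the network, which is what "differentiable weight allocation" refers to.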
|
2501.16624
|
More Efficient Sybil Detection Mechanisms Leveraging Resistance of Users
to Attack Requests
|
cs.SI
|
We investigate the problem of sybil (fake account) detection in social
networks from a graph algorithms perspective, where graph structural
information is used to classify users as sybil and benign. We introduce the
novel notion of user resistance to attack requests (friendship requests from
sybil accounts). Building on this notion, we propose a synthetic graph data
generation framework that supports various attack strategies. We then study the
optimization problem where we are allowed to reveal the resistance of a subset
of users with the aim of maximizing the number of users discovered to be
benign and the number of potential attack edges (connections from a sybil to
a benign user). Furthermore, we devise efficient algorithms for this problem
and investigate their theoretical guarantees. Finally, through a large set of
experiments, we demonstrate that our proposed algorithms improve detection
performance notably when applied as a preprocessing step for different sybil
detection algorithms. The code and data used in this work are publicly
available on GitHub https://github.com/aSafarpoor/AAMAS2025-Paper/tree/main
|
2501.16625
|
A General Bayesian Framework for Informative Input Design in System
Identification
|
eess.SY cs.LG cs.SY
|
We tackle the problem of informative input design for system identification,
where we select inputs, observe the corresponding outputs from the true system,
and optimize the parameters of our model to best fit the data. We propose a
methodology that is compatible with any system and parametric family of models.
Our approach only requires input-output data from the system and first-order
information from the model with respect to the parameters. Our algorithm
consists of two modules. First, we formulate the problem of system
identification from a Bayesian perspective and propose an approximate iterative
method to optimize the model's parameters. Based on this Bayesian formulation,
we are able to define a Gaussian-based uncertainty measure for the model
parameters, which we can then minimize with respect to the next selected input.
Our method outperforms model-free baselines with various linear and nonlinear
dynamics.
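For a linear-in-parameters model, the input-selection step can be sketched as choosing the candidate input that most shrinks a Gaussian uncertainty measure, e.g. the trace of the posterior covariance. This is a toy stand-in for the paper's criterion, not its algorithm:

```python
import numpy as np

def next_informative_input(X_past, candidates, prior_prec=1.0):
    """Pick the candidate input x that minimizes trace((A + x x^T)^-1),
    where A is the Gaussian posterior precision from past inputs under a
    linear model y = theta . x. Names and the prior are illustrative."""
    A = prior_prec * np.eye(X_past.shape[1]) + X_past.T @ X_past
    def posterior_trace(x):
        return np.trace(np.linalg.inv(A + np.outer(x, x)))
    return min(candidates, key=posterior_trace)
```

With past inputs concentrated along one direction, this criterion picks the unexplored direction, which matches the intuition of informative input design.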
|
2501.16626
|
Subject Representation Learning from EEG using Graph Convolutional
Variational Autoencoders
|
eess.SP cs.LG
|
We propose GC-VASE, a graph convolutional-based variational autoencoder that
leverages contrastive learning for subject representation learning from EEG
data. Our method successfully learns robust subject-specific latent
representations using the split-latent space architecture tailored for subject
identification. To enhance the model's adaptability to unseen subjects without
extensive retraining, we introduce an attention-based adapter network for
fine-tuning, which reduces the computational cost of adapting the model to new
subjects. Our method significantly outperforms other deep learning approaches,
achieving state-of-the-art results with a subject-balanced accuracy of 89.81%
on the ERP-Core dataset and 70.85% on the SleepEDFx-20 dataset. After
subject-adaptive fine-tuning using adapters and attention layers, GC-VASE
further improves the subject-balanced accuracy to 90.31% on ERP-Core.
Additionally, we
perform a detailed ablation study to highlight the impact of the key components
of our method.
|
2501.16627
|
Engaging with AI: How Interface Design Shapes Human-AI Collaboration in
High-Stakes Decision-Making
|
cs.HC cs.AI
|
As reliance on AI systems for decision-making grows, it becomes critical to
ensure that human users can appropriately balance trust in AI suggestions with
their own judgment, especially in high-stakes domains like healthcare. However,
human + AI teams have been shown to perform worse than AI alone, with evidence
indicating automation bias as the reason for poorer performance, particularly
because humans tend to follow AI's recommendations even when they are
incorrect. In many existing human + AI systems, decision-making support is
typically provided in the form of text explanations (XAI) to help users
understand the AI's reasoning. Since human decision-making often relies on
System 1 thinking, users may ignore or insufficiently engage with the
explanations, leading to poor decision-making. Previous research suggests that
there is a need for new approaches that encourage users to engage with the
explanations and one proposed method is the use of cognitive forcing functions
(CFFs). In this work, we examine how various decision-support mechanisms impact
user engagement, trust, and human-AI collaborative task performance in a
diabetes management decision-making scenario. In a controlled experiment with
108 participants, we evaluated the effects of six decision-support mechanisms
split into two categories of explanations (text, visual) and four CFFs. Our
findings reveal that mechanisms like AI confidence levels, text explanations,
and performance visualizations enhanced human-AI collaborative task
performance, and improved trust when AI reasoning clues were provided.
Mechanisms like human feedback and AI-driven questions encouraged deeper
reflection but often reduced task performance by increasing cognitive effort,
which in turn affected trust. Simple mechanisms like visual explanations had
little effect on trust, highlighting the importance of striking a balance in
CFF and XAI design.
|
2501.16629
|
CHiP: Cross-modal Hierarchical Direct Preference Optimization for
Multimodal LLMs
|
cs.CL cs.CV
|
Multimodal Large Language Models (MLLMs) still struggle with hallucinations
despite their impressive capabilities. Recent studies have attempted to
mitigate this by applying Direct Preference Optimization (DPO) to multimodal
scenarios using preference pairs from text-based responses. However, our
analysis of representation distributions reveals that multimodal DPO struggles
to align image and text representations and to distinguish between hallucinated
and non-hallucinated descriptions. To address these challenges, we propose
Cross-modal Hierarchical Direct Preference Optimization (CHiP). We introduce a
visual preference optimization module
within the DPO framework, enabling MLLMs to learn from both textual and visual
preferences simultaneously. Furthermore, we propose a hierarchical textual
preference optimization module that allows the model to capture preferences at
multiple granular levels, including response, segment, and token levels. We
evaluate CHiP through both quantitative and qualitative analyses, with results
across multiple benchmarks demonstrating its effectiveness in reducing
hallucinations. On the Object HalBench dataset, CHiP outperforms DPO in
hallucination reduction, achieving relative improvements of 52.7% and 55.5%
over the base Muffin and LLaVA models, respectively. We make
all our datasets and code publicly available: https://github.com/LVUGAI/CHiP.
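For context, the standard DPO objective that CHiP builds on can be written for a single preference pair as follows; CHiP's visual and multi-granularity textual preference terms are extensions of this and are not sketched here:

```python
import math

def dpo_loss(lp_chosen, lp_rejected, ref_lp_chosen, ref_lp_rejected, beta=0.1):
    """Standard DPO loss on one preference pair, given log-probabilities of
    the chosen/rejected responses under the policy and a frozen reference:
    -log sigmoid(beta * (policy log-ratio - reference log-ratio))."""
    margin = beta * ((lp_chosen - ref_lp_chosen)
                     - (lp_rejected - ref_lp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
```

When policy and reference agree, the margin is zero and the loss is log 2; preferring the chosen response relative to the reference drives the loss down.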
|
2501.16634
|
Towards Resource-Efficient Compound AI Systems
|
cs.DC cs.AI
|
Compound AI Systems, integrating multiple interacting components like models,
retrievers, and external tools, have emerged as essential for addressing
complex AI tasks. However, current implementations suffer from inefficient
resource utilization due to tight coupling between application logic and
execution details, a disconnect between orchestration and resource management
layers, and the perceived exclusiveness between efficiency and quality.
We propose a vision for resource-efficient Compound AI Systems through a
declarative workflow programming model and an adaptive runtime system for
dynamic scheduling and resource-aware decision-making. Decoupling application
logic from low-level details exposes levers for the runtime to flexibly
configure the execution environment and resources, without compromising on
quality. Collaboration between the workflow orchestration layer and the
cluster manager enables higher efficiency through better scheduling and
resource management.
We are building a prototype system, called Murakkab, to realize this vision.
Our preliminary evaluation demonstrates speedups up to $\sim 3.4\times$ in
workflow completion times while delivering $\sim 4.5\times$ higher energy
efficiency, showing promise in optimizing resources and advancing AI system
design.
|
2501.16635
|
Why Do We Laugh? Annotation and Taxonomy Generation for Laughable
Contexts in Spontaneous Text Conversation
|
cs.CL cs.AI
|
Laughter serves as a multifaceted communicative signal in human interaction,
yet its identification within dialogue presents a significant challenge for
conversational AI systems. This study addresses this challenge by annotating
laughable contexts in Japanese spontaneous text conversation data and
developing a taxonomy to classify the underlying reasons for such contexts.
Initially, multiple annotators manually labeled laughable contexts using a
binary decision (laughable or non-laughable). Subsequently, an LLM was used to
generate explanations for the binary annotations of laughable contexts, which
were then categorized into a taxonomy comprising ten categories, including
"Empathy and Affinity" and "Humor and Surprise," highlighting the diverse range
of laughter-inducing scenarios. The study also evaluated GPT-4's performance in
recognizing the majority labels of laughable contexts, achieving an F1 score of
43.14%. These findings contribute to the advancement of conversational AI by
establishing a foundation for more nuanced recognition and generation of
laughter, ultimately fostering more natural and engaging human-AI interactions.
|
2501.16638
|
Analysis of Zero Day Attack Detection Using MLP and XAI
|
cs.LG cs.CR
|
Any exploit taking advantage of a zero-day vulnerability is called a zero-day attack.
Previous research and social media trends show a massive demand for research in
zero-day attack detection. This paper analyzes Machine Learning (ML) and Deep
Learning (DL) based approaches to create Intrusion Detection Systems (IDS) and
scrutinizing them using Explainable AI (XAI) by training an explainer based on
randomly sampled data from the testing set. The focus is on using the KDD99
dataset, which has the most research done among all the datasets for detecting
zero-day attacks. The paper aims to synthesize the dataset to have fewer
classes for multi-class classification, test ML and DL approaches on pattern
recognition, establish the robustness and dependability of the model, and
establish the interpretability and scalability of the model. We evaluated the
performance of four multilayer perceptron (MLP) models trained on the KDD99 dataset,
including baseline ML models, weighted ML models, truncated ML models, and
weighted truncated ML models. Our results demonstrate that the truncated ML
model achieves the highest accuracy (99.62%), precision, and recall, while the
weighted truncated ML model shows lower accuracy (97.26%) but better class
representation (less bias) across all classes, with an improved unweighted
recall score. We also used SHapley Additive exPlanations (SHAP) to train an
explainer for our truncated models and compare feature importance between the
weighted and unweighted models.
|
2501.16639
|
Finite Sample Analysis of Subspace Identification Methods
|
eess.SY cs.SY
|
As one of the mainstream approaches in system identification, subspace
identification methods (SIMs) are known for their simple parameterization for
MIMO systems and robust numerical properties. However, a comprehensive
statistical analysis of SIMs remains an open problem. Amid renewed focus on
identifying state-space models in the non-asymptotic regime, this work presents
a finite sample analysis for a large class of open-loop SIMs. It establishes
high-probability upper bounds for system matrices obtained via SIMs, and
reveals that convergence rates for estimating Markov parameters and system
matrices are $\mathcal{O}(1/\sqrt{N})$ up to logarithmic terms, in line with
classical asymptotic results. Following the key steps of SIMs, we arrive at the
above results by a three-step procedure. In Step 1, we begin with a
parsimonious SIM (PARSIM) that uses least-squares regression to estimate
multiple high-order ARX models in parallel. Leveraging a recent analysis of an
individual ARX model, we obtain a union error bound for a bank of ARX models.
Step 2 involves model reduction via weighted singular value decomposition
(SVD), where we consider different data-dependent weighting matrices and use
robustness results for SVD to obtain error bounds on extended controllability
and observability matrices, respectively. The final Step 3 focuses on deriving
error bounds for system matrices, where two different realization algorithms,
the MOESP type and the Larimore type, are considered. Although our study
initially focuses on PARSIM, the methodologies apply broadly across many
variants of SIMs.
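The first step, estimating a high-order ARX model by least-squares regression, can be sketched as follows (an illustrative single-input single-output example with hypothetical coefficients, not the paper's full PARSIM pipeline or its bank of parallel models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stable SISO ARX system: y[t] = a1*y[t-1] + a2*y[t-2] + b1*u[t-1] + noise
a1, a2, b1 = 0.5, -0.3, 1.0
N = 5000
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + b1 * u[t - 1] + 0.01 * rng.standard_normal()

# Stack lagged outputs and inputs as regressors, then solve the least-squares problem
Phi = np.column_stack([y[1:N - 1], y[0:N - 2], u[1:N - 1]])
Y = y[2:N]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(theta)  # close to [0.5, -0.3, 1.0] for large N
```

The 1/sqrt(N) convergence rate cited above describes how fast such estimates concentrate around the true parameters as the sample size grows.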
|
2501.16642
|
FlowDAS: A Flow-Based Framework for Data Assimilation
|
eess.SP cs.LG eess.IV
|
Data assimilation (DA) is crucial for improving the accuracy of state
estimation in complex dynamical systems by integrating observational data with
physical models. Traditional solutions rely on either pure model-driven
approaches, such as Bayesian filters that struggle with nonlinearity, or
data-driven methods using deep learning priors, which often lack
generalizability and physical interpretability. Recently, score-based DA
methods have been introduced, focusing on learning prior distributions but
neglecting explicit state transition dynamics, leading to limited accuracy
improvements. To tackle the challenge, we introduce FlowDAS, a novel generative
model-based framework using the stochastic interpolants to unify the learning
of state transition dynamics and generative priors. FlowDAS achieves stable and
observation-consistent inference by initializing from proximal previous states,
mitigating the instability seen in score-based methods. Our extensive
experiments demonstrate FlowDAS's superior performance on various benchmarks,
from the Lorenz system to high-dimensional fluid super-resolution tasks.
FlowDAS also demonstrates improved tracking accuracy on practical Particle
Image Velocimetry (PIV) task, showcasing its effectiveness in complex flow
field reconstruction.
|
2501.16643
|
An LLM Benchmark for Addressee Recognition in Multi-modal Multi-party
Dialogue
|
cs.CL cs.AI cs.SD eess.AS
|
Handling multi-party dialogues represents a significant step for advancing
spoken dialogue systems, necessitating the development of tasks specific to
multi-party interactions. To address this challenge, we are constructing a
multi-modal multi-party dialogue corpus of triadic (three-participant)
discussions. This paper focuses on the task of addressee recognition,
identifying who is being addressed to take the next turn, a critical component
unique to multi-party dialogue systems. A subset of the corpus was annotated
with addressee information, revealing that explicit addressees are indicated in
approximately 20% of conversational turns. To evaluate the task's complexity,
we benchmarked the performance of a large language model (GPT-4o) on addressee
recognition. The results showed that GPT-4o achieved an accuracy only
marginally above chance, underscoring the challenges of addressee recognition
in multi-party dialogue. These findings highlight the need for further research
to enhance the capabilities of large language models in understanding and
navigating the intricacies of multi-party conversational dynamics.
|
2501.16650
|
DOCS: Quantifying Weight Similarity for Deeper Insights into Large
Language Models
|
cs.CL cs.AI
|
We introduce a novel index, the Distribution of Cosine Similarity (DOCS), for
quantitatively assessing the similarity between weight matrices in Large
Language Models (LLMs), aiming to facilitate the analysis of their complex
architectures. Leveraging DOCS, our analysis uncovers intriguing patterns in
the latest open-source LLMs: adjacent layers frequently exhibit high weight
similarity and tend to form clusters, suggesting depth-wise functional
specialization. Additionally, we prove that DOCS is theoretically effective in
quantifying similarity for orthogonal matrices, a crucial aspect given the
prevalence of orthogonal initializations in LLMs. This research contributes to
a deeper understanding of LLM architecture and behavior, offering tools with
potential implications for developing more efficient and interpretable models.
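The core quantity, a distribution of cosine similarities between two weight matrices, can be sketched in a few lines (an illustrative row-wise pairing; the paper's exact DOCS index may pair and aggregate these similarities differently):

```python
import numpy as np

def cos_sim_distribution(W1, W2):
    """Cosine similarity between corresponding rows of two weight matrices."""
    n1 = W1 / np.linalg.norm(W1, axis=1, keepdims=True)
    n2 = W2 / np.linalg.norm(W2, axis=1, keepdims=True)
    return np.sum(n1 * n2, axis=1)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 64))
sims_same = cos_sim_distribution(A, A)                             # identical layers -> all 1.0
sims_rand = cos_sim_distribution(A, rng.standard_normal((8, 64)))  # unrelated layers -> near 0
print(sims_same.mean(), sims_rand.mean())
```

Comparing such distributions across layer pairs is what surfaces the adjacent-layer clustering pattern described above.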
|
2501.16652
|
Molecular-driven Foundation Model for Oncologic Pathology
|
cs.CV cs.AI
|
Foundation models are reshaping computational pathology by enabling transfer
learning, where models pre-trained on vast datasets can be adapted for
downstream diagnostic, prognostic, and therapeutic response tasks. Despite
these advances, foundation models are still limited in their ability to encode
the entire gigapixel whole-slide images without additional training and often
lack complementary multimodal data. Here, we introduce Threads, a slide-level
foundation model capable of generating universal representations of whole-slide
images of any size. Threads was pre-trained using a multimodal learning
approach on a diverse cohort of 47,171 hematoxylin and eosin (H&E)-stained
tissue sections, paired with corresponding genomic and transcriptomic profiles
- the largest such paired dataset to be used for foundation model development
to date. This unique training paradigm enables Threads to capture the tissue's
underlying molecular composition, yielding powerful representations applicable
to a wide array of downstream tasks. In extensive benchmarking across 54
oncology tasks, including clinical subtyping, grading, mutation prediction,
immunohistochemistry status determination, treatment response prediction, and
survival prediction, Threads outperformed all baselines while demonstrating
remarkable generalizability and label efficiency. It is particularly well
suited for predicting rare events, further emphasizing its clinical utility. We
intend to make the model publicly available for the broader community.
|
2501.16655
|
Large Language Model Critics for Execution-Free Evaluation of Code
Changes
|
cs.CL cs.AI cs.SE
|
Large language models (LLMs) offer a promising way forward for automating
software engineering tasks, such as bug fixes, feature additions, etc., via
multi-step LLM-based agentic workflows. However, existing metrics for
evaluating such workflows, mainly build status and occasionally log analysis,
are too sparse and limited in providing the information needed to assess the
quality of changes made. In this work, we designed LLM-based critics to derive
well-structured and rigorous intermediate/step-level, execution-free evaluation
proxies for repo-level code changes. Importantly, we assume access to the gold
test patch for the problem (i.e., reference-aware) to assess both semantics and
executability of generated patches. With the gold test patch as a reference, we
predict executability of all editing locations with an F1 score of 91.6%,
aggregating which, we can predict the build status in 84.8% of the instances in
SWE-bench. In particular, such an execution-focused LLM critic outperforms
other reference-free and reference-aware LLM critics by 38.9% to 72.5%.
Moreover, we demonstrate the usefulness of such a reference-aware framework in
comparing patches generated by different agentic workflows. Finally, we
open-source the library developed for this project, which allows further usage
for either other agentic workflows or other benchmarks. The source code is
available at https://github.com/amazon-science/code-agent-eval.
|
2501.16656
|
Data Mining in Transportation Networks with Graph Neural Networks: A
Review and Outlook
|
cs.LG
|
Data mining in transportation networks (DMTNs) refers to using diverse types
of spatio-temporal data for various transportation tasks, including pattern
analysis, traffic prediction, and traffic controls. Graph neural networks
(GNNs) are essential in many DMTN problems due to their capability to represent
spatial correlations between entities. Between 2016 and 2024, the notable
applications of GNNs in DMTNs have extended to multiple fields such as traffic
prediction and operation. However, existing reviews have primarily focused on
traffic prediction tasks. To fill this gap, this study provides a timely and
insightful summary of GNNs in DMTNs, highlighting new progress in prediction
and operation from academic and industry perspectives since 2023. First, we
present and analyze various DMTN problems, followed by classical and recent GNN
models. Second, we delve into key works in three areas: (1) traffic prediction,
(2) traffic operation, and (3) industry involvement, such as Google Maps, Amap,
and Baidu Maps. Along these directions, we discuss new research opportunities
based on the significance of transportation problems and data availability.
Finally, we compile resources such as data, code, and other learning materials
to foster interdisciplinary communication. This review, driven by recent
trends in GNN-based DMTN studies since 2023, aims to make abundant datasets
and efficient GNN methods accessible for a range of transportation problems,
including prediction and operation.
|
2501.16658
|
Contextual Reinforcement in Multimodal Token Compression for Large
Language Models
|
cs.CL cs.AI
|
Effective token compression remains a critical challenge for scaling models
to handle increasingly complex and diverse datasets. A novel mechanism based on
contextual reinforcement is introduced, dynamically adjusting token importance
through interdependencies and semantic relevance. This approach enables
substantial reductions in token usage while preserving the quality and
coherence of information representation. Incorporating graph-based algorithms
and adaptive weighting, the method captures subtle contextual relationships
across textual and multimodal data, ensuring robust alignment and performance
in downstream tasks. Evaluations across varied domains reveal significant
improvements in accuracy and semantic retention, particularly for tasks
requiring detailed cross-modal interactions. Memory usage analyses demonstrate
improved computational efficiency, with minimal overhead despite the additional
reinforcement processes. Performance gains are further validated through error
distribution analyses, showing reduced semantic loss and syntactic
inconsistencies compared to baseline models. The modular architecture ensures
compatibility with a wide range of open-source frameworks, facilitating
scalable implementation for real-world applications. These findings highlight
the potential of contextual reinforcement in redefining token management
strategies and advancing large-scale model design.
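One way to read the graph-based importance idea is a centrality score over a token similarity graph, used to keep the top-k tokens (a loose, hedged sketch under our own assumptions: the PageRank-style scoring, damping value, and hard top-k cut are illustrative choices, not the paper's mechanism):

```python
import numpy as np

def prune_tokens(embeddings, k, iters=20, damping=0.85):
    """Score tokens by random-walk centrality on a similarity graph; return top-k indices."""
    n = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = np.clip(n @ n.T, 0.0, None)            # nonnegative similarity graph
    np.fill_diagonal(sim, 0.0)                   # no self-loops
    P = sim / np.maximum(sim.sum(axis=1, keepdims=True), 1e-12)  # row-stochastic transitions
    r = np.full(len(n), 1.0 / len(n))            # uniform initial importance
    for _ in range(iters):
        r = (1 - damping) / len(n) + damping * (P.T @ r)
    return np.sort(np.argsort(r)[-k:])           # indices of the k most central tokens

rng = np.random.default_rng(0)
tokens = rng.standard_normal((10, 16))
kept = prune_tokens(tokens, k=4)
print(kept)
```

Tokens that many other tokens resemble accumulate importance, so the pruning favors contextually central content, which is the intuition behind similarity-graph token selection.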
|
2501.16662
|
Vision-based autonomous structural damage detection using data-driven
methods
|
cs.CV cs.AI eess.IV
|
This study addresses the urgent need for efficient and accurate damage
detection in wind turbine structures, a crucial component of renewable energy
infrastructure. Traditional inspection methods, such as manual assessments and
non-destructive testing (NDT), are often costly, time-consuming, and prone to
human error. To tackle these challenges, this research investigates advanced
deep learning algorithms for vision-based structural health monitoring (SHM). A
dataset of wind turbine surface images, featuring various damage types and
pollution, was prepared and augmented for enhanced model training. Three
algorithms -- YOLOv7, its lightweight variant, and Faster R-CNN -- were employed to
detect and classify surface damage. The models were trained and evaluated on a
dataset split into training, testing, and evaluation subsets (80%-10%-10%).
Results indicate that YOLOv7 outperformed the others, achieving 82.4% mAP@50
and high processing speed, making it suitable for real-time inspections. By
optimizing hyperparameters like learning rate and batch size, the models'
accuracy and efficiency improved further. YOLOv7 demonstrated significant
advancements in detection precision and execution speed, especially for
real-time applications. However, challenges such as dataset limitations and
environmental variability were noted, suggesting future work on segmentation
methods and larger datasets. This research underscores the potential of
vision-based deep learning techniques to transform SHM practices by reducing
costs, enhancing safety, and improving reliability, thus contributing to the
sustainable maintenance of critical infrastructure and supporting the longevity
of wind energy systems.
|
2501.16663
|
Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine
Unlearning
|
cs.CR cs.AI
|
Duplication is a prevalent issue within datasets. Existing research has
demonstrated that the presence of duplicated data in training datasets can
significantly influence both model performance and data privacy. However, the
impact of data duplication on the unlearning process remains largely
unexplored. This paper addresses this gap by pioneering a comprehensive
investigation into the role of data duplication, not only in standard machine
unlearning but also in federated and reinforcement unlearning paradigms.
Specifically, we propose an adversary who duplicates a subset of the target
model's training set and incorporates it into the training set. After training,
the adversary requests the model owner to unlearn this duplicated subset, and
analyzes the impact on the unlearned model. For example, the adversary can
challenge the model owner by revealing that, despite efforts to unlearn it, the
influence of the duplicated subset remains in the model. Moreover, to
circumvent detection by de-duplication techniques, we propose three novel
near-duplication methods for the adversary, each tailored to a specific
unlearning paradigm. We then examine their impacts on the unlearning process
when de-duplication techniques are applied. Our findings reveal several crucial
insights: 1) the gold standard unlearning method, retraining from scratch,
fails to effectively conduct unlearning under certain conditions; 2) unlearning
duplicated data can lead to significant model degradation in specific
scenarios; and 3) meticulously crafted duplicates can evade detection by
de-duplication methods.
|
2501.16664
|
Improving Vision-Language-Action Model with Online Reinforcement
Learning
|
cs.RO cs.CV cs.LG
|
Recent studies have successfully integrated large vision-language models
(VLMs) into low-level robotic control by supervised fine-tuning (SFT) with
expert robotic datasets, resulting in what we term vision-language-action (VLA)
models. Although the VLA models are powerful, how to improve these large models
during interaction with environments remains an open question. In this paper,
we explore how to further improve these VLA models via Reinforcement Learning
(RL), a commonly used fine-tuning technique for large models. However, we find
that directly applying online RL to large VLA models presents significant
challenges, including training instability that severely impacts the
performance of large models, and computing burdens that exceed the capabilities
of most local machines. To address these challenges, we propose the iRe-VLA
framework, which iterates between Reinforcement Learning and Supervised
Learning to effectively improve VLA models, leveraging the exploratory benefits
of RL while maintaining the stability of supervised learning. Experiments in
two simulated benchmarks and a real-world manipulation suite validate the
effectiveness of our method.
|
2501.16665
|
CSPCL: Category Semantic Prior Contrastive Learning for Deformable
DETR-Based Prohibited Item Detectors
|
cs.CV
|
Prohibited item detection based on X-ray images is one of the most effective
security inspection methods. However, the foreground-background feature
coupling caused by the overlapping phenomenon specific to X-ray images makes
general detectors designed for natural images perform poorly. To address this
issue, we propose a Category Semantic Prior Contrastive Learning (CSPCL)
mechanism, which aligns the class prototypes perceived by the classifier with
the content queries to correct and supplement the missing semantic information
responsible for classification, thereby enhancing the model's sensitivity to
foreground features. To achieve this alignment, we design a specific contrastive
loss, CSP loss, which includes Intra-Class Truncated Attraction (ITA) loss and
Inter-Class Adaptive Repulsion (IAR) loss, and outperforms classic N-pair loss
and InfoNCE loss. Specifically, ITA loss leverages class prototypes to attract
intra-class category-specific content queries while preserving necessary
distinctiveness. IAR loss utilizes class prototypes to adaptively repel
inter-class category-specific content queries based on the similarity between
class prototypes, helping disentangle features of similar categories. CSPCL is
general and can be easily integrated into Deformable DETR-based models.
Extensive experiments on the PIXray and OPIXray datasets demonstrate that CSPCL
significantly enhances the performance of various state-of-the-art models
without increasing complexity. The code will be open-sourced once the paper is
accepted.
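The two loss terms can be sketched roughly as follows (a hedged numpy illustration of the description above; the margin value, the truncation form, and the similarity weighting are our own assumptions, not the paper's exact formulation):

```python
import numpy as np

def csp_like_loss(queries, labels, prototypes, margin=0.1):
    """Sketch: truncated intra-class attraction + similarity-weighted inter-class repulsion."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = q @ p.T          # query-to-prototype cosine similarities
    proto_sim = p @ p.T    # prototype-to-prototype similarities

    ita, iar = 0.0, 0.0
    for i, c in enumerate(labels):
        # ITA: attract each query to its own class prototype, truncated by a margin
        # so perfect alignment is not forced (preserving some distinctiveness)
        ita += max(0.0, (1.0 - sim[i, c]) - margin)
        # IAR: repel other prototypes, more strongly for classes similar to class c
        for k in range(len(p)):
            if k != c:
                iar += max(0.0, proto_sim[c, k]) * max(0.0, sim[i, k])
    return (ita + iar) / len(labels)

rng = np.random.default_rng(0)
protos = rng.standard_normal((3, 8))
queries = protos + 0.01 * rng.standard_normal((3, 8))   # each query near its own prototype
print(csp_like_loss(queries, [0, 1, 2], protos))        # small: queries match their prototypes
print(csp_like_loss(queries[::-1], [0, 1, 2], protos))  # larger: queries match wrong prototypes
```

The similarity weighting in the repulsion term is what targets confusable categories, mirroring the adaptive repulsion the abstract describes.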
|
2501.16666
|
Federated Learning for Efficient Condition Monitoring and Anomaly
Detection in Industrial Cyber-Physical Systems
|
cs.LG cs.AI
|
Detecting and localizing anomalies in cyber-physical systems (CPS) has become
increasingly challenging as systems grow in complexity, particularly due to
varying sensor reliability and node failures in distributed environments. While
federated learning (FL) provides a foundation for distributed model training,
existing approaches often lack mechanisms to address these CPS-specific
challenges. This paper introduces an enhanced FL framework with three key
innovations: adaptive model aggregation based on sensor reliability, dynamic
node selection for resource optimization, and Weibull-based checkpointing for
fault tolerance. The proposed framework ensures reliable condition monitoring
while tackling the computational and reliability challenges of industrial CPS
deployments. Experiments on the NASA Bearing and Hydraulic System datasets
demonstrate superior performance compared to state-of-the-art FL methods,
achieving 99.5% AUC-ROC in anomaly detection and maintaining accuracy even
under node failures. Statistical validation using the Mann-Whitney U test
confirms significant improvements, with a p-value less than 0.05, in both
detection accuracy and computational efficiency across various operational
scenarios.
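The reliability-weighted aggregation idea can be sketched in a few lines (a hypothetical static weighting; the paper's adaptive rule, dynamic node selection, and Weibull-based checkpointing are more involved):

```python
import numpy as np

def reliability_weighted_avg(client_updates, reliabilities):
    """FedAvg-style aggregation, weighting each client's update by a reliability score."""
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalize reliability scores into aggregation weights
    return sum(wi * np.asarray(u) for wi, u in zip(w, client_updates))

updates = [np.array([1.0, 0.0]),
           np.array([0.0, 1.0]),
           np.array([10.0, 10.0])]  # third client's sensors are faulty
print(reliability_weighted_avg(updates, [1.0, 1.0, 1.0]))   # plain average, pulled by the outlier
print(reliability_weighted_avg(updates, [1.0, 1.0, 0.01]))  # low reliability downweights the faulty node
```

Downweighting unreliable nodes keeps a single faulty sensor stream from dominating the global model, which is the failure mode uniform averaging suffers from.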
|
2501.16668
|
Examining Online Social Support for Countering QAnon Conspiracies
|
cs.SI cs.HC physics.soc-ph
|
As radical messaging has proliferated on social networking sites, platforms
like Reddit have been used to host support groups, including support
communities for the families and friends of radicalized individuals. This study
examines the subreddit r/QAnonCasualties, an online forum for users whose loved
ones have been radicalized by QAnon. We collected 1,665 posts and 78,171
comments posted between 7/2021 and 7/2022 and content-coded top posts for
prominent themes. Sentiment analysis was also conducted on all posts. We find
that venting, advice- and validation-seeking, and pressure to refuse the
COVID-19
vaccine were prominent themes. 40% (n=167) of coded posts identified the Q
relation(s) of users as their parent(s) and 16.3% (n=68) as their partner.
Posts with higher proportions of words related to swearing, social referents,
and physical needs were positively correlated with engagement. These findings
show ways that communities around QAnon adherents leverage anonymous online
spaces to seek and provide social support.
|
2501.16671
|
Data-Free Model-Related Attacks: Unleashing the Potential of Generative
AI
|
cs.CR cs.AI
|
Generative AI technology has become increasingly integrated into our daily
lives, offering powerful capabilities to enhance productivity. However, these
same capabilities can be exploited by adversaries for malicious purposes. While
existing research on adversarial applications of generative AI predominantly
focuses on cyberattacks, less attention has been given to attacks targeting
deep learning models. In this paper, we introduce the use of generative AI for
facilitating model-related attacks, including model extraction, membership
inference, and model inversion. Our study reveals that adversaries can launch a
variety of model-related attacks against both image and text models in a
data-free and black-box manner, achieving comparable performance to baseline
methods that have access to the target models' training data and parameters in
a white-box manner. This research serves as an important early warning to the
community about the potential risks associated with generative AI-powered
attacks on deep learning models.
|
2501.16672
|
VeriFact: Verifying Facts in LLM-Generated Clinical Text with Electronic
Health Records
|
cs.AI cs.CL cs.IR cs.LO
|
Methods to ensure factual accuracy of text generated by large language models
(LLM) in clinical medicine are lacking. VeriFact is an artificial intelligence
system that combines retrieval-augmented generation and LLM-as-a-Judge to
verify whether LLM-generated text is factually supported by a patient's medical
history based on their electronic health record (EHR). To evaluate this system,
we introduce VeriFact-BHC, a new dataset that decomposes Brief Hospital Course
narratives from discharge summaries into a set of simple statements with
clinician annotations for whether each statement is supported by the patient's
EHR clinical notes. Whereas the highest agreement between clinicians was
88.5%, VeriFact achieves up to 92.7% agreement when compared to a denoised and
adjudicated average human clinician ground truth, suggesting that VeriFact
exceeds the average clinician's ability to fact-check text against a patient's
medical record. VeriFact may accelerate the development of LLM-based EHR
applications by removing current evaluation bottlenecks.
|
2501.16673
|
LLM-AutoDiff: Auto-Differentiate Any LLM Workflow
|
cs.CL
|
Large Language Models (LLMs) have reshaped natural language processing,
powering applications from multi-hop retrieval and question answering to
autonomous agent workflows. Yet, prompt engineering -- the task of crafting
textual inputs to effectively direct LLMs -- remains difficult and
labor-intensive, particularly for complex pipelines that combine multiple LLM
calls with functional operations like retrieval and data formatting. We
introduce LLM-AutoDiff: a novel framework for Automatic Prompt Engineering
(APE) that extends textual gradient-based methods (such as Text-Grad) to
multi-component, potentially cyclic LLM architectures. Implemented within the
AdalFlow library, LLM-AutoDiff treats each textual input as a trainable
parameter and uses a frozen backward engine LLM to generate feedback -- akin
to textual gradients -- that guides iterative prompt updates. Unlike prior
single-node approaches, LLM-AutoDiff inherently accommodates functional nodes,
preserves time-sequential behavior in repeated calls (e.g., multi-hop loops),
and combats the "lost-in-the-middle" problem by isolating distinct sub-prompts
(instructions, formats, or few-shot examples). It further boosts training
efficiency by focusing on error-prone samples through selective gradient
computation. Across diverse tasks, including single-step classification,
multi-hop retrieval-based QA, and agent-driven pipelines, LLM-AutoDiff
consistently outperforms existing textual gradient baselines in both accuracy
and training cost. By unifying prompt optimization through a graph-centric
lens, LLM-AutoDiff offers a powerful new paradigm for scaling and automating
LLM workflows - mirroring the transformative role that automatic
differentiation libraries have long played in neural network research.
|
2501.16675
|
Variational Schr\"odinger Momentum Diffusion
|
stat.ML cs.LG
|
The momentum Schr\"odinger Bridge (mSB) has emerged as a leading method for
accelerating generative diffusion processes and reducing transport costs.
However, the lack of simulation-free properties inevitably results in high
training costs and affects scalability. To obtain a trade-off between transport
properties and scalability, we introduce variational Schr\"odinger momentum
diffusion (VSMD), which employs linearized forward score functions (variational
scores) to eliminate the dependence on simulated forward trajectories. Our
approach leverages a multivariate diffusion process with adaptively
transport-optimized variational scores. Additionally, we apply a
critical-damping transform to stabilize training by removing the need for score
estimations for both velocity and samples. Theoretically, we prove the
convergence of samples generated with optimal variational scores and momentum
diffusion. Empirical results demonstrate that VSMD efficiently generates
anisotropic shapes while maintaining transport efficacy, outperforming
overdamped alternatives, and avoiding complex denoising processes. Our approach
also scales effectively to real-world data, achieving competitive results in
time series and image generation.
|
2501.16677
|
Improving Interpretability and Accuracy in Neuro-Symbolic Rule
Extraction Using Class-Specific Sparse Filters
|
cs.CV cs.AI
|
There has been significant focus on creating neuro-symbolic models for
interpretable image classification using Convolutional Neural Networks (CNNs).
These methods aim to replace the CNN with a neuro-symbolic model consisting of
the CNN, which is used as a feature extractor, and an interpretable rule-set
extracted from the CNN itself. While these approaches provide interpretability
through the extracted rule-set, they often compromise accuracy compared to the
original CNN model. In this paper, we identify the root cause of this accuracy
loss as the post-training binarization of filter activations to extract the
rule-set. To address this, we propose a novel sparsity loss function that
enables class-specific filter binarization during CNN training, thus minimizing
information loss when extracting the rule-set. We evaluate several training
strategies with our novel sparsity loss, analyzing their effectiveness and
providing guidance on their appropriate use. Notably, we set a new benchmark,
achieving a 9% improvement in accuracy and a 53% reduction in rule-set size on
average, compared to the previous SOTA, while coming within 3% of the original
CNN's accuracy. This highlights the significant potential of interpretable
neuro-symbolic models as viable alternatives to black-box CNNs.
|
2501.16679
|
Polyp-Gen: Realistic and Diverse Polyp Image Generation for Endoscopic
Dataset Expansion
|
cs.CV
|
Automated diagnostic systems (ADS) have shown significant potential in the
early detection of polyps during endoscopic examinations, thereby reducing the
incidence of colorectal cancer. However, due to high annotation costs and
strict privacy concerns, acquiring high-quality endoscopic images poses a
considerable challenge in the development of ADS. Despite recent advancements
in generating synthetic images for dataset expansion, existing endoscopic image
generation algorithms failed to accurately generate the details of polyp
boundary regions and typically required medical priors to specify plausible
locations and shapes of polyps, which limited the realism and diversity of the
generated images. To address these limitations, we present Polyp-Gen, the first
full-automatic diffusion-based endoscopic image generation framework.
Specifically, we devise a spatial-aware diffusion training scheme with a
lesion-guided loss to enhance the structural context of polyp boundary regions.
Moreover, to capture medical priors for the localization of potential polyp
areas, we introduce a hierarchical retrieval-based sampling strategy to match
similar fine-grained spatial features. In this way, our Polyp-Gen can generate
realistic and diverse endoscopic images for building reliable ADS. Extensive
experiments demonstrate the state-of-the-art generation quality, and the
synthetic images can improve the downstream polyp detection task. Additionally,
our Polyp-Gen has shown remarkable zero-shot generalizability on other
datasets. The source code is available at
https://github.com/CUHK-AIM-Group/Polyp-Gen.
|
2501.16683
|
On Non-intrusive Data-driven Implementations of IRKA and Balanced
Truncation
|
eess.SY cs.SY
|
This paper introduces quadrature-based approaches that enable offline
sampling of the transfer function and its derivative from available frequency
response or impulse response data. Unlike quadrature-based data-driven balanced
truncation, the proposed methods do not require samples of the transfer
function's derivative or the impulse response's derivative. Additionally, a
non-intrusive approach to track the error in IRKA as it refines the
interpolation data is presented. Furthermore, a non-intrusive data-driven
implementation of balanced truncation, equivalent to ADI-based low-rank
balanced truncation, is proposed. This approach is not quadrature-based and
only requires samples of the transfer function at the mirror images of the ADI
shifts to construct the reduced-order model. The quality of the low-rank
approximation of the Gramians in this method can also be monitored in a
non-intrusive manner. Both the non-intrusive implementations of IRKA and
balanced truncation are also extended to discrete-time systems in this paper.
For discrete-time IRKA, the impulse response-based implementation uses a
truncated summation approach, as the time-domain formulation of discrete-time
IRKA involves summations rather than integrals. The paper includes two
illustrative examples: one for the non-intrusive data-driven implementations of
IRKA and balanced truncation for continuous-time systems, and another for
discrete-time systems.
|
2501.16684
|
SliceOcc: Indoor 3D Semantic Occupancy Prediction with Vertical Slice
Representation
|
cs.CV
|
3D semantic occupancy prediction is a crucial task in visual perception, as
it requires the simultaneous comprehension of both scene geometry and
semantics. It plays a crucial role in understanding 3D scenes and has great
potential for various applications, such as robotic vision perception and
autonomous driving. Many existing works utilize planar-based representations
such as Bird's Eye View (BEV) and Tri-Perspective View (TPV). These
representations aim to simplify the complexity of 3D scenes while preserving
essential object information, thereby facilitating efficient scene
representation. However, in dense indoor environments with prevalent
occlusions, directly applying these planar-based methods often leads to
difficulties in capturing global semantic occupancy, ultimately degrading model
performance. In this paper, we present a new vertical slice representation that
divides the scene along the vertical axis and projects spatial point features
onto the nearest pair of parallel planes. To utilize these slice features, we
propose SliceOcc, an RGB camera-based model specifically tailored for indoor 3D
semantic occupancy prediction. SliceOcc utilizes pairs of slice queries and
cross-attention mechanisms to extract planar features from input images. These
local planar features are then fused to form a global scene representation,
which is employed for indoor occupancy prediction. Experimental results on the
EmbodiedScan dataset demonstrate that SliceOcc achieves a mIoU of 15.45% across
81 indoor categories, setting a new state-of-the-art performance among RGB
camera-based models for indoor 3D semantic occupancy prediction. Code is
available at https://github.com/NorthSummer/SliceOcc.
|
2501.16688
|
MME-Industry: A Cross-Industry Multimodal Evaluation Benchmark
|
cs.CL
|
With the rapid advancement of Multimodal Large Language Models (MLLMs),
numerous evaluation benchmarks have emerged. However, comprehensive assessments
of their performance across diverse industrial applications remain limited. In
this paper, we introduce MME-Industry, a novel benchmark designed specifically
for evaluating MLLMs in industrial settings. The benchmark encompasses 21
distinct domains, comprising 1,050 question-answer pairs with 50 questions per
domain. To ensure data integrity and prevent potential leakage from public
datasets, all question-answer pairs were manually crafted and validated by
domain experts. In addition, the benchmark's complexity is effectively enhanced by
incorporating non-OCR questions that can be answered directly, along with tasks
requiring specialized domain knowledge. Moreover, we provide both Chinese and
English versions of the benchmark, enabling comparative analysis of MLLMs'
capabilities across these languages. Our findings contribute valuable insights
into MLLMs' practical industrial applications and illuminate promising
directions for future model optimization research.
|
2501.16689
|
MACI: Multi-Agent Collaborative Intelligence for Adaptive Reasoning and
Temporal Planning
|
cs.AI
|
Artificial intelligence requires deliberate reasoning, temporal awareness,
and effective constraint management, capabilities traditional LLMs often lack
due to their reliance on pattern matching, limited self-verification, and
inconsistent constraint handling. We introduce Multi-Agent Collaborative
Intelligence (MACI), a framework comprising three key components: 1) a
meta-planner (MP) that identifies, formulates, and refines all roles and
constraints of a task (e.g., wedding planning) while generating a dependency
graph, with common-sense augmentation to ensure realistic and practical
constraints; 2) a collection of agents to facilitate planning and address
task-specific requirements; and 3) a run-time monitor that manages plan
adjustments as needed. By decoupling planning from validation, maintaining
minimal agent context, and integrating common-sense reasoning, MACI overcomes
the aforementioned limitations and demonstrates robust performance in two
scheduling problems.
|
2501.16690
|
Quantum advantage in decentralized control of POMDPs: A
control-theoretic view of the Mermin-Peres square
|
math.OC cs.IT cs.SY eess.SY math.IT quant-ph
|
Consider a decentralized partially-observed Markov decision problem (POMDP)
with multiple cooperative agents aiming to maximize a long-term-average reward
criterion. We observe that the availability, at a fixed rate, of entangled
states of a product quantum system between the agents, where each agent has
access to one of the component systems, can result in strictly improved
performance even compared to the scenario where common randomness is provided
to the agents, i.e. there is a quantum advantage in decentralized control. This
observation comes from a simple reinterpretation of the conclusions of the
well-known Mermin-Peres square, which underpins the Mermin-Peres game. While
quantum advantage has been demonstrated earlier in one-shot team problems of
this kind, it is notable that there are examples where there is a quantum
advantage for the one-shot criterion but it disappears in the dynamical
scenario. The presence of a quantum advantage in dynamical scenarios is thus
seen to be a novel finding relative to the current state of knowledge about the
achievable performance in decentralized control problems.
This paper is dedicated to the memory of Pravin P. Varaiya.
|
2501.16692
|
Optimizing Code Runtime Performance through Context-Aware
Retrieval-Augmented Generation
|
cs.SE cs.AI
|
Optimizing software performance through automated code refinement offers a
promising avenue for enhancing execution speed and efficiency. Despite recent
advancements in LLMs, a significant gap remains in their ability to perform
in-depth program analysis. This study introduces AUTOPATCH, an in-context
learning approach designed to bridge this gap by enabling LLMs to automatically
generate optimized code. Inspired by how programmers learn and apply knowledge
to optimize software, AUTOPATCH incorporates three key components: (1) an
analogy-driven framework to align LLM optimization with human cognitive
processes, (2) a unified approach that integrates historical code examples and
CFG analysis for context-aware learning, and (3) an automated pipeline for
generating optimized code through in-context prompting. Experimental results
demonstrate that AUTOPATCH achieves a 7.3% improvement in execution efficiency
over GPT-4o across common generated executable code, highlighting its potential
to advance automated program runtime optimization.
|
2501.16698
|
3D-MoE: A Mixture-of-Experts Multi-modal LLM for 3D Vision and Pose
Diffusion via Rectified Flow
|
cs.CL cs.CV cs.RO
|
3D vision and spatial reasoning have long been recognized as preferable for
accurately perceiving our three-dimensional world, especially when compared
with traditional visual reasoning based on 2D images. Due to the difficulties
in collecting high-quality 3D data, research in this area has only recently
gained momentum. With the advent of powerful large language models (LLMs),
multi-modal LLMs for 3D vision have been developed over the past few years.
However, most of these models focus primarily on the vision encoder for 3D
data. In this paper, we propose converting existing densely activated LLMs into
mixture-of-experts (MoE) models, which have proven effective for multi-modal
data processing. In addition to leveraging these models' instruction-following
capabilities, we further enable embodied task planning by attaching a diffusion
head, Pose-DiT, that employs a novel rectified flow diffusion scheduler.
Experimental results on 3D question answering and task-planning tasks
demonstrate that our 3D-MoE framework achieves improved performance with fewer
activated parameters.
|
2501.16700
|
Determining Mosaic Resilience in Sugarcane Plants using Hyperspectral
Images
|
cs.CV cs.AI
|
Sugarcane mosaic disease poses a serious threat to the Australian sugarcane
industry, leading to yield losses of up to 30% in susceptible varieties.
Existing manual inspection methods for detecting mosaic resilience are
inefficient and impractical for large-scale application. This study introduces
a novel approach using hyperspectral imaging and machine learning to detect
mosaic resilience by leveraging global feature representation from local
spectral patches. Hyperspectral data were collected from eight sugarcane
varieties under controlled and field conditions. Local spectral patches were
analyzed to capture spatial and spectral variations, which were then aggregated
into global feature representations using a ResNet18 deep learning
architecture. While classical methods like Support Vector Machines struggled to
utilize spatial-spectral relationships effectively, the deep learning model
achieved high classification accuracy, demonstrating its capacity to identify
mosaic resilience from fine-grained hyperspectral data. This approach enhances
early detection capabilities, enabling more efficient management of susceptible
strains and contributing to sustainable sugarcane production.
|
2501.16704
|
DFCon: Attention-Driven Supervised Contrastive Learning for Robust
Deepfake Detection
|
cs.CV cs.CR cs.LG eess.IV eess.SP
|
This report presents our approach for the IEEE SP Cup 2025: Deepfake Face
Detection in the Wild (DFWild-Cup), focusing on detecting deepfakes across
diverse datasets. Our methodology employs advanced backbone models, including
MaxViT, CoAtNet, and EVA-02, fine-tuned using supervised contrastive loss to
enhance feature separation. These models were specifically chosen for their
complementary strengths. Integration of convolution layers and strided
attention in MaxViT is well-suited for detecting local features. In contrast,
hybrid use of convolution and attention mechanisms in CoAtNet effectively
captures multi-scale features. Robust pretraining with masked image modeling of
EVA-02 excels at capturing global features. After training, we freeze the
parameters of these models and train the classification heads. Finally, a
majority voting ensemble is employed to combine the predictions from these
models, improving robustness and generalization to unseen scenarios. The
proposed system addresses the challenges of detecting deepfakes in real-world
conditions and achieves a commendable accuracy of 95.83% on the validation
dataset.
|
2501.16714
|
Separate Motion from Appearance: Customizing Motion via Customizing
Text-to-Video Diffusion Models
|
cs.CV cs.AI
|
Motion customization aims to adapt the diffusion model (DM) to generate
videos with the motion specified by a set of video clips with the same motion
concept. To realize this goal, the adaptation of the DM should model the
specified motion concept without compromising its ability to generate
diverse appearances. Thus, the key to solving this problem lies in how to
separate the motion concept from the appearance in the adaptation process of
DM. Typical previous works explore different ways to represent and insert a
motion concept into large-scale pretrained text-to-video diffusion models,
e.g., learning a motion LoRA, using latent noise residuals, etc. While those
methods can encode the motion concept, they also inevitably encode the
appearance in the reference videos, resulting in weakened appearance generation
capability. In this paper, we follow the typical way to learn a motion LoRA to
encode the motion concept, but propose two novel strategies to enhance
motion-appearance separation, including temporal attention purification (TAP)
and appearance highway (AH). Specifically, we assume that in the temporal
attention module, the pretrained Value embeddings are sufficient to serve as
basic components needed to produce a new motion. Thus, in TAP, we choose only
to reshape the temporal attention with motion LoRAs so that Value embeddings
can be reorganized to produce a new motion. Further, in AH, we alter the
starting point of each skip connection in U-Net from the output of each
temporal attention module to the output of each spatial attention module.
Extensive experiments demonstrate that compared to previous works, our method
can generate videos with appearance more aligned with the text descriptions and
motion more consistent with the reference videos.
|
2501.16716
|
Point Cloud Upsampling as Statistical Shape Model for Pelvic
|
cs.CV
|
We propose a novel framework that integrates medical image segmentation and
point cloud upsampling for accurate shape reconstruction of pelvic models.
Using the SAM-Med3D model for segmentation and a point cloud upsampling network
trained on the MedShapeNet dataset, our method transforms sparse medical
imaging data into high-resolution 3D bone models. This framework leverages
prior knowledge of anatomical shapes, achieving smoother and more complete
reconstructions. Quantitative evaluations using metrics such as Chamfer
Distance demonstrate the effectiveness of point cloud upsampling for pelvic
models. Our approach offers potential applications in reconstructing
other skeletal structures, providing a robust solution for medical image
analysis and statistical shape modeling.
|
2501.16717
|
Strawberry Robotic Operation Interface: An Open-Source Device for
Collecting Dexterous Manipulation Data in Robotic Strawberry Farming
|
cs.RO
|
Strawberry farming is labor-intensive, particularly in tasks requiring
dexterous manipulation such as picking occluded strawberries. To address this
challenge, we present the Strawberry Robotic Operation Interface (SROI), an
open-source device designed for collecting dexterous manipulation data in
robotic strawberry farming. The SROI features a handheld unit with a modular
end effector and a stereo robotic camera, enabling the easy collection of
demonstration data in field environments. A data post-processing pipeline is
introduced to extract spatial trajectories and gripper states from the
collected data. Additionally, we release an open-source dataset of strawberry
picking demonstrations to facilitate research in dexterous robotic
manipulation. The SROI represents a step toward automating complex strawberry
farming tasks, reducing reliance on manual labor.
|
2501.16718
|
Outlier Synthesis via Hamiltonian Monte Carlo for Out-of-Distribution
Detection
|
cs.LG
|
Out-of-distribution (OOD) detection is crucial for developing trustworthy and
reliable machine learning systems. Recent advances in training with auxiliary
OOD data demonstrate efficacy in enhancing detection capabilities. Nonetheless,
these methods heavily rely on acquiring a large pool of high-quality natural
outliers. Some prior methods try to alleviate this problem by synthesizing
virtual outliers but suffer from either poor quality or high cost due to the
monotonous sampling strategy and the heavy-parameterized generative models. In
this paper, we overcome all these problems by proposing the Hamiltonian Monte
Carlo Outlier Synthesis (HamOS) framework, which views the synthesis process as
sampling from Markov chains. Based solely on the in-distribution data, the
Markov chains can extensively traverse the feature space and generate diverse
and representative outliers, hence exposing the model to miscellaneous
potential OOD scenarios. Hamiltonian Monte Carlo's sampling acceptance rate
of nearly 1 also makes our framework highly efficient. By
empirically competing with SOTA baselines on both standard and large-scale
benchmarks, we verify the efficacy and efficiency of our proposed HamOS.
|
2501.16719
|
Safety-Critical Control for Aerial Physical Interaction in Uncertain
Environment
|
cs.RO
|
Aerial manipulators capable of safe physical interaction with their
environments are gaining significant momentum in robotics research. In this
paper, we present a
disturbance-observer-based safety-critical control for a fully actuated aerial
manipulator interacting with both static and dynamic structures. Our approach
centers on a safety filter that dynamically adjusts the desired trajectory of
the vehicle's pose, accounting for the aerial manipulator's dynamics, the
disturbance observer's structure, and motor thrust limits. We provide rigorous
proof that the proposed safety filter ensures the forward invariance of the
safety set - representing motor thrust limits - even in the presence of
disturbance estimation errors. To demonstrate the superiority of our method
over existing control strategies for aerial physical interaction, we perform
comparative experiments involving complex tasks, such as pushing against a
static structure and pulling a plug firmly attached to an electric socket.
Furthermore, to highlight its repeatability in scenarios with sudden dynamic
changes, we perform repeated tests of pushing a movable cart and extracting a
plug from a socket. These experiments confirm that our method not only
outperforms existing methods but also excels in handling tasks with rapid
dynamic variations.
|
2501.16720
|
One Head Eight Arms: Block Matrix based Low Rank Adaptation for
CLIP-based Few-Shot Learning
|
cs.CV cs.AI
|
Recent advancements in fine-tuning Vision-Language Foundation Models (VLMs)
have garnered significant attention for their effectiveness in downstream
few-shot learning tasks. While these recent approaches exhibit some performance
improvements, they often suffer from excessive training parameters and high
computational costs. To address these challenges, we propose a novel Block
matrix-based low-rank adaptation framework, called Block-LoRA, for fine-tuning
VLMs on downstream few-shot tasks. Inspired by recent work on Low-Rank
Adaptation (LoRA), Block-LoRA partitions the original low-rank decomposition
matrix of LoRA into a series of sub-matrices while sharing all down-projection
sub-matrices. This structure not only reduces the number of training
parameters, but also transforms certain complex matrix multiplication
operations into simpler matrix addition, significantly lowering the
computational cost of fine-tuning. Notably, Block-LoRA enables fine-tuning CLIP
on the ImageNet few-shot benchmark using a single 24GB GPU. We also show that
Block-LoRA has a tighter generalization error bound than vanilla
LoRA. Without bells and whistles, extensive experiments demonstrate that
Block-LoRA achieves competitive performance compared to state-of-the-art
CLIP-based few-shot methods, while maintaining a low training parameter count
and reduced computational overhead.
|
2501.16722
|
Hypergraph Diffusion for High-Order Recommender Systems
|
cs.IR cs.AI cs.DB cs.LG cs.SI
|
Recommender systems rely on Collaborative Filtering (CF) to predict user
preferences by leveraging patterns in historical user-item interactions. While
traditional CF methods primarily focus on learning compact vector embeddings
for users and items, graph neural network (GNN)-based approaches have emerged
as a powerful alternative, utilizing the structure of user-item interaction
graphs to enhance recommendation accuracy. However, existing GNN-based models,
such as LightGCN and UltraGCN, often struggle with two major limitations: an
inability to fully account for heterophilic interactions, where users engage
with diverse item categories, and the over-smoothing problem in multi-layer
GNNs, which hinders their ability to model complex, high-order relationships.
To address these gaps, we introduce WaveHDNN, an innovative wavelet-enhanced
hypergraph diffusion framework. WaveHDNN integrates a Heterophily-aware
Collaborative Encoder, designed to capture user-item interactions across
diverse categories, with a Multi-scale Group-wise Structure Encoder, which
leverages wavelet transforms to effectively model localized graph structures.
Additionally, cross-view contrastive learning is employed to maintain robust
and consistent representations. Experiments on benchmark datasets validate the
efficacy of WaveHDNN, demonstrating its superior ability to capture both
heterophilic and localized structural information, leading to improved
recommendation performance.
|
2501.16724
|
B-RIGHT: Benchmark Re-evaluation for Integrity in Generalized
Human-Object Interaction Testing
|
cs.CV
|
Human-object interaction (HOI) is an essential problem in artificial
intelligence (AI), which aims to understand the visual world that involves
complex relationships between humans and objects. However, current benchmarks
such as HICO-DET face the following limitations: (1) severe class imbalance and
(2) varying numbers of training and test instances for certain classes. These issues can
potentially lead to either inflation or deflation of model performance during
evaluation, ultimately undermining the reliability of evaluation scores. In
this paper, we propose a systematic approach to develop a new class-balanced
dataset, Benchmark Re-evaluation for Integrity in Generalized Human-object
Interaction Testing (B-RIGHT), that addresses these imbalanced problems.
B-RIGHT achieves class balance by leveraging a balancing algorithm and automated
generation-and-filtering processes, ensuring an equal number of instances for
each HOI class. Furthermore, we design a balanced zero-shot test set to
systematically evaluate models on unseen scenarios. Re-evaluating existing
models using B-RIGHT reveals a substantial reduction in score variance and
changes in performance rankings compared to conventional HICO-DET. Our
experiments demonstrate that evaluation under balanced conditions ensures more
reliable and fair model comparisons.
|
2501.16726
|
Bridging Neural Networks and Wireless Systems with MIMO-OFDM Semantic
Communications
|
cs.IT cs.AI cs.NI math.IT
|
Semantic communications aim to enhance transmission efficiency by jointly
optimizing source coding, channel coding, and modulation. While prior research
has demonstrated promising performance in simulations, real-world
implementations often face significant challenges, including noise variability
and nonlinear distortions, leading to performance gaps. This article
investigates these challenges in a multiple-input multiple-output (MIMO) and
orthogonal frequency division multiplexing (OFDM)-based semantic communication
system, focusing on the practical impacts of power amplifier (PA) nonlinearity
and peak-to-average power ratio (PAPR) variations. Our analysis identifies
frequency selectivity of the actual channel as a critical factor in performance
degradation and demonstrates that targeted mitigation strategies can enable
semantic systems to approach theoretical performance. By addressing key
limitations in existing designs, we provide actionable insights for advancing
semantic communications in practical wireless environments. This work
establishes a foundation for bridging the gap between theoretical models and
real-world deployment, highlighting essential considerations for system design
and optimization.
|
2501.16727
|
xJailbreak: Representation Space Guided Reinforcement Learning for
Interpretable LLM Jailbreaking
|
cs.CL
|
Safety alignment mechanisms are essential for preventing large language models
(LLMs) from generating harmful information or unethical content. However,
cleverly crafted prompts can bypass these safety measures without accessing the
model's internal parameters, a phenomenon known as black-box jailbreak.
Existing heuristic black-box attack methods, such as genetic algorithms, suffer
from limited effectiveness due to their inherent randomness, while recent
reinforcement learning (RL) based methods often lack robust and informative
reward signals. To address these challenges, we propose a novel black-box
jailbreak method leveraging RL, which optimizes prompt generation by analyzing
the embedding proximity between benign and malicious prompts. This approach
ensures that the rewritten prompts closely align with the intent of the
original prompts while enhancing the attack's effectiveness. Furthermore, we
introduce a comprehensive jailbreak evaluation framework incorporating
keywords, intent matching, and answer validation to provide a more rigorous and
holistic assessment of jailbreak success. Experimental results show the
superiority of our approach, achieving state-of-the-art (SOTA) performance on
several prominent open and closed-source LLMs, including Qwen2.5-7B-Instruct,
Llama3.1-8B-Instruct, and GPT-4o-0806. Our method sets a new benchmark in
jailbreak attack effectiveness, highlighting potential vulnerabilities in LLMs.
The codebase for this work is available at
https://github.com/Aegis1863/xJailbreak.
|
2501.16728
|
Optimizing Efficiency of Mixed Traffic through Reinforcement Learning: A
Topology-Independent Approach and Benchmark
|
cs.RO
|
This paper presents a mixed traffic control policy designed to optimize
traffic efficiency across diverse road topologies, addressing issues of
congestion prevalent in urban environments. A model-free reinforcement learning
(RL) approach is developed to manage large-scale traffic flow, using data
collected by autonomous vehicles to influence human-driven vehicles. A
real-world mixed traffic control benchmark is also released, which includes 444
scenarios from 20 countries, representing a wide geographic distribution and
covering a variety of scenarios and road topologies. This benchmark serves as a
foundation for future research, providing a realistic simulation environment
for the development of effective policies. Comprehensive experiments
demonstrate the effectiveness and adaptability of the proposed method,
achieving better performance than existing traffic control methods in both
intersection and roundabout scenarios. To the best of our knowledge, this is
the first project to introduce a mixed traffic control benchmark of complex
real-world scenarios. Videos and code of our work are available at
https://sites.google.com/berkeley.edu/mixedtrafficplus/home
|
2501.16729
|
On the Interplay Between Sparsity and Training in Deep Reinforcement
Learning
|
cs.LG cs.AI
|
We study the benefits of different sparse architectures for deep
reinforcement learning. In particular, we focus on image-based domains where
spatially-biased and fully-connected architectures are common. Using these and
several other architectures of equal capacity, we show that sparse structure
has a significant effect on learning performance. We also observe that choosing
the best sparse architecture for a given domain depends on whether the hidden
layer weights are fixed or learned.
|
2501.16730
|
Growing the Efficient Frontier on Panel Trees
|
cs.LG q-fin.PR stat.ML
|
We introduce a new class of tree-based models, P-Trees, for analyzing
(unbalanced) panels of individual asset returns, generalizing high-dimensional
sorting with economic guidance and interpretability. Under the mean-variance
efficient framework, P-Trees construct test assets that significantly advance
the efficient frontier compared to commonly used test assets, with alphas
unexplained by benchmark pricing models. P-Tree tangency portfolios also
constitute traded factors, recovering the pricing kernel and outperforming
popular observable and latent factor models for investments and cross-sectional
pricing. Finally, P-Trees capture the complexity of asset returns with
sparsity, achieving out-of-sample Sharpe ratios close to those attained only by
over-parameterized large models.
|
2501.16733
|
Dream to Drive with Predictive Individual World Model
|
cs.RO cs.CV cs.LG
|
Generating reactive driving behaviors in complex urban environments remains
challenging, as road users' intentions are unknown. Model-based
reinforcement learning (MBRL) offers great potential to learn a reactive policy
by constructing a world model that can provide informative states and
imagination training. However, a critical limitation in relevant research lies
in the scene-level reconstruction representation learning, which may overlook
key interactive vehicles and hardly model the interactive features among
vehicles and their long-term intentions. Therefore, this paper presents a novel
MBRL method with a predictive individual world model (PIWM) for autonomous
driving. PIWM describes the driving environment from an individual-level
perspective and captures vehicles' interactive relations and their intentions
via trajectory prediction task. Meanwhile, a behavior policy is learned jointly
with PIWM. It is trained in PIWM's imagination and effectively navigates in the
urban driving scenes leveraging intention-aware latent states. The proposed
method is trained and evaluated on simulation environments built upon
real-world challenging interactive scenarios. Compared with popular model-free
and state-of-the-art model-based reinforcement learning methods, experimental
results show that the proposed method achieves the best performance in terms of
safety and efficiency.
|
2501.16734
|
Distilling Large Language Models for Network Active Queue Management
|
cs.NI cs.AI
|
The growing complexity of network traffic and demand for ultra-low latency
communication require smarter packet traffic management. Existing Deep
Learning-based queuing approaches struggle with dynamic network scenarios and
demand high engineering effort. We propose AQM-LLM, distilling Large Language
Models (LLMs) with few-shot learning, contextual understanding, and pattern
recognition to improve Active Queue Management (AQM) [RFC 9330] with minimal
manual effort. We consider a specific case where AQM is Low Latency, Low Loss,
and Scalable Throughput (L4S) and our design of AQM-LLM builds on speculative
decoding and reinforcement-based distilling of LLM by tackling congestion
prevention in the L4S architecture using Explicit Congestion Notification (ECN)
[RFC 9331] and periodic packet dropping. We develop a new open-source
experimental platform by executing L4S-AQM on FreeBSD-14, providing
interoperable modules to support LLM integration and facilitate IETF
recognition through wider testing. Our extensive evaluations show L4S-LLM
enhances queue management, prevents congestion, reduces latency, and boosts
network performance, showcasing LLMs' adaptability and efficiency in uplifting
AQM systems.
|
2501.16735
|
Stochastic Population Update Provably Needs An Archive in Evolutionary
Multi-objective Optimization
|
cs.NE
|
Evolutionary algorithms (EAs) have been widely applied to multi-objective
optimization, due to their nature of population-based search. Population
update, a key component in multi-objective EAs (MOEAs), is usually performed in
a greedy, deterministic manner. However, recent studies have questioned this
practice and shown that stochastic population update (SPU), which allows
inferior solutions a chance to be preserved, can help MOEAs jump out of
local optima more easily. While introducing randomness in the population update
process boosts the exploration of MOEAs, there is a drawback that the
population may not always preserve the very best solutions found, thus
entailing a large population. Intuitively, a possible solution to this issue is
to introduce an archive that stores the best solutions ever found. In this
paper, we theoretically show that using an archive can allow a small population
and accelerate the search of SPU-based MOEAs substantially. Specifically, we
analyze the expected running time of two well-established MOEAs, SMS-EMOA and
NSGA-II, with SPU for solving a commonly studied bi-objective problem
OneJumpZeroJump, and prove that using an archive can bring (even exponential)
speedups. The comparison between SMS-EMOA and NSGA-II also suggests that the
$(\mu+\mu)$ update mode may be more suitable for SPU than the $(\mu+1)$ update
mode. Furthermore, our derived running time bounds for using SPU alone are
significantly tighter than previously known ones. Our theoretical findings are
also empirically validated on a well-known practical problem, the
multi-objective traveling salesperson problem. We hope this work may provide
theoretical support to explore different ideas of designing algorithms in
evolutionary multi-objective optimization.
|
2501.16737
|
Consistency Diffusion Models for Single-Image 3D Reconstruction with
Priors
|
cs.CV
|
This paper delves into the study of 3D point cloud reconstruction from a
single image. Our objective is to develop the Consistency Diffusion Model,
exploring synergistic 2D and 3D priors in the Bayesian framework to ensure
superior consistency in the reconstruction process, a challenging yet critical
requirement in this field. Specifically, we introduce a pioneering training
framework under diffusion models that brings two key innovations. First, we
incorporate 3D structural priors derived from the initial 3D point cloud as a bound
term to increase evidence in the variational Bayesian framework, leveraging
these robust intrinsic priors to tightly govern the diffusion training process
and bolster consistency in reconstruction. Second, we extract and incorporate
2D priors from the single input image, projecting them onto the 3D point cloud
to enrich the guidance for diffusion training. Our framework not only sidesteps
potential model learning shifts that may arise from directly imposing
additional constraints during training but also precisely transposes the 2D
priors into the 3D domain. Extensive experimental evaluations reveal that our
approach sets new benchmarks in both synthetic and real-world datasets. The
code is included with the submission.
|
2501.16740
|
Efficient Knowledge Distillation of SAM for Medical Image Segmentation
|
eess.IV cs.AI cs.CV
|
The Segment Anything Model (SAM) has set a new standard in interactive image
segmentation, offering robust performance across various tasks. However, its
significant computational requirements limit its deployment in real-time or
resource-constrained environments. To address these challenges, we propose a
novel knowledge distillation approach, KD SAM, which incorporates both encoder
and decoder optimization through a combination of Mean Squared Error (MSE) and
Perceptual Loss. This dual-loss framework captures structural and semantic
features, enabling the student model to maintain high segmentation accuracy
while reducing computational complexity. Based on model evaluations on
datasets including Kvasir-SEG, ISIC 2017, Fetal Head Ultrasound, and Breast
Ultrasound, we demonstrate that KD SAM achieves comparable or superior
performance to the baseline models, with significantly fewer parameters. KD SAM
effectively balances segmentation accuracy and computational efficiency, making
it well-suited for real-time medical image segmentation applications in
resource-constrained environments.
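The dual-loss objective described above can be sketched as a weighted sum of a pixel-level MSE term and a feature-space (perceptual-style) term between student and teacher. The split into raw outputs and deep features, and the weight `alpha`, are illustrative assumptions here, not the paper's exact formulation:

```python
import numpy as np

def dual_loss(student, teacher, alpha=0.5):
    """Combine pixel-level MSE with a feature-space (perceptual-style) term.

    `student`/`teacher` are (raw_output, deep_features) pairs; the split
    and the weighting `alpha` are illustrative assumptions.
    """
    s_out, s_deep = student
    t_out, t_deep = teacher
    mse = np.mean((s_out - t_out) ** 2)            # structural term
    perceptual = np.mean((s_deep - t_deep) ** 2)   # semantic term on features
    return alpha * mse + (1.0 - alpha) * perceptual

rng = np.random.default_rng(0)
s = (rng.normal(size=(8, 8)), rng.normal(size=(16,)))
t = (rng.normal(size=(8, 8)), rng.normal(size=(16,)))
print(dual_loss(s, t))
```

In practice the perceptual term would compare activations from a pretrained feature extractor rather than raw arrays; the weighted-sum structure is the point being illustrated.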
|
2501.16743
|
Hierarchical Trajectory (Re)Planning for a Large Scale Swarm
|
cs.RO
|
We consider the trajectory replanning problem for a large-scale swarm in a
cluttered environment. Our path planner replans for robots by utilizing a
hierarchical approach, dividing the workspace, and computing collision-free
paths for robots within each cell in parallel. Distributed trajectory
optimization generates a deadlock-free trajectory for efficient execution and
maintains control feasibility even when the optimization fails. Our
hierarchical approach combines the benefits of both centralized and
decentralized methods, achieving a high task success rate while providing
real-time replanning capability. Compared to decentralized approaches, our
approach effectively avoids deadlocks and collisions, significantly increasing
the task success rate. We demonstrate the real-time performance of our
algorithm with up to 142 robots in simulation and in a representative
experiment with 24 physical Crazyflie nano-quadrotors.
|
2501.16744
|
LLM Assisted Anomaly Detection Service for Site Reliability Engineers:
Enhancing Cloud Infrastructure Resilience
|
cs.LG cs.AI
|
This paper introduces a scalable Anomaly Detection Service with a
generalizable API tailored for industrial time-series data, designed to assist
Site Reliability Engineers (SREs) in managing cloud infrastructure. The service
enables efficient anomaly detection in complex data streams, supporting
proactive identification and resolution of issues. Furthermore, it presents an
innovative approach to anomaly modeling in cloud infrastructure by utilizing
Large Language Models (LLMs) to understand key components, their failure modes,
and behaviors. A suite of algorithms for detecting anomalies is offered in
univariate and multivariate time series data, including regression-based,
mixture-model-based, and semi-supervised approaches. We provide insights into
the usage patterns of the service, with over 500 users and 200,000 API calls in
a year. The service has been successfully applied in various industrial
settings, including IoT-based AI applications. We have also evaluated our
system on public anomaly benchmarks to show its effectiveness. By leveraging
it, SREs can proactively identify potential issues before they escalate,
reducing downtime and improving response times to incidents, ultimately
enhancing the overall customer experience. We plan to extend the system to
include time series foundation models, enabling zero-shot anomaly detection
capabilities.
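One of the detector families mentioned, regression-based detection on univariate series, can be sketched as fitting a trend and flagging points whose residual deviates by more than a few robust standard deviations. The linear model and MAD-based threshold below are illustrative choices, not the service's actual algorithm:

```python
import numpy as np

def regression_anomalies(series, k=3.0):
    """Flag points whose residual from a fitted linear trend deviates by
    more than k robust standard deviations. A toy stand-in for the
    regression-based detectors the service offers."""
    t = np.arange(len(series), dtype=float)
    slope, intercept = np.polyfit(t, series, 1)   # fit a linear trend
    resid = series - (slope * t + intercept)
    dev = np.abs(resid - np.median(resid))        # centered deviations
    sigma = 1.4826 * (np.median(dev) or 1e-9)     # MAD -> sigma estimate
    return np.flatnonzero(dev > k * sigma)

series = np.array([0, 1, 2, 3, 4, 50, 6, 7, 8, 9], dtype=float)
print(regression_anomalies(series))  # flags the spike at index 5
```

The median/MAD statistics keep the threshold itself robust to the anomalies being hunted, which a plain mean/standard-deviation rule would not be.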
|
2501.16745
|
Toward Relative Positional Encoding in Spiking Transformers
|
cs.NE
|
Spiking neural networks (SNNs) are bio-inspired networks that model how
neurons in the brain communicate through discrete spikes, which have great
potential in various tasks due to their energy efficiency and temporal
processing capabilities. SNNs with self-attention mechanisms (Spiking
Transformers) have recently shown great advancements in various tasks such as
sequential modeling and image classifications. However, integrating positional
information, which is essential for capturing sequential relationships in data,
remains a challenge in Spiking Transformers. In this paper, we introduce an
approximate method for relative positional encoding (RPE) in Spiking
Transformers, leveraging Gray Code as the foundation for our approach. We
provide comprehensive proof of the method's effectiveness in partially
capturing relative positional information for sequential tasks. Additionally,
we extend our RPE approach by adapting it to a two-dimensional form suitable
for image patch processing. We evaluate the proposed RPE methods on several
tasks, including time series forecasting, text classification, and patch-based
image classification. Our experimental results demonstrate that the
incorporation of RPE significantly enhances performance by effectively
capturing relative positional information.
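The Gray-code foundation the paper builds on has the property that adjacent integers differ in exactly one bit. A toy illustration of turning that property into a relative-position signal is a bias matrix of Hamming distances between position codes; this construction is a hypothetical stand-in, not the paper's actual RPE:

```python
import numpy as np

def gray(n: int) -> int:
    """Reflected binary Gray code of n: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def gray_rpe_bias(length: int) -> np.ndarray:
    """Toy relative-position bias from Hamming distances of Gray codes.

    Illustrative only: the paper's spike-compatible RPE is approximated
    differently; this just shows how Gray codes encode adjacency."""
    codes = [gray(i) for i in range(length)]
    bias = np.zeros((length, length))
    for i in range(length):
        for j in range(length):
            bias[i, j] = bin(codes[i] ^ codes[j]).count("1")  # Hamming distance
    return bias

b = gray_rpe_bias(6)
print(b.astype(int))  # zero diagonal, 1 between neighboring positions
```

Because neighboring positions always differ in a single bit, nearby tokens get a small bias and distant ones a larger (though not strictly monotone) one, which is what makes the code binary-friendly for spiking hardware.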
|
2501.16748
|
Through the Prism of Culture: Evaluating LLMs' Understanding of Indian
Subcultures and Traditions
|
cs.CL
|
Large Language Models (LLMs) have shown remarkable advancements but also
raise concerns about cultural bias, often reflecting dominant narratives at the
expense of under-represented subcultures. In this study, we evaluate the
capacity of LLMs to recognize and accurately respond to the Little Traditions
within Indian society, encompassing localized cultural practices and
subcultures such as caste, kinship, marriage, and religion. Through a series of
case studies, we assess whether LLMs can balance the interplay between dominant
Great Traditions and localized Little Traditions. We explore various prompting
strategies and further investigate whether using prompts in regional languages
enhances the models' cultural sensitivity and response quality. Our findings
reveal that while LLMs demonstrate an ability to articulate cultural nuances,
they often struggle to apply this understanding in practical, context-specific
scenarios. To the best of our knowledge, this is the first study to analyze
LLMs' engagement with Indian subcultures, offering critical insights into the
challenges of embedding cultural diversity in AI systems.
|
2501.16750
|
HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content
and Hate Campaigns
|
cs.CR cs.LG
|
Large Language Models (LLMs) have raised increasing concerns about their
misuse in generating hate speech. Among all the efforts to address this issue,
hate speech detectors play a crucial role. However, the effectiveness of
different detectors against LLM-generated hate speech remains largely unknown.
In this paper, we propose HateBench, a framework for benchmarking hate speech
detectors on LLM-generated hate speech. We first construct a hate speech
dataset of 7,838 samples generated by six widely-used LLMs covering 34 identity
groups, with meticulous annotations by three labelers. We then assess the
effectiveness of eight representative hate speech detectors on the
LLM-generated dataset. Our results show that while detectors are generally
effective in identifying LLM-generated hate speech, their performance degrades
with newer versions of LLMs. We also reveal the potential of LLM-driven hate
campaigns, a new threat that LLMs bring to the field of hate speech detection.
By leveraging advanced techniques like adversarial attacks and model stealing
attacks, the adversary can intentionally evade the detector and automate hate
campaigns online. The most potent adversarial attack achieves an attack success
rate of 0.966, and its attack efficiency can be further improved by
$13-21\times$ through model stealing attacks with acceptable attack
performance. We hope our study can serve as a call to action for the research
community and platform moderators to fortify defenses against these emerging
threats.
|
2501.16751
|
DebugAgent: Efficient and Interpretable Error Slice Discovery for
Comprehensive Model Debugging
|
cs.CV cs.AI
|
Despite the significant success of deep learning models in computer vision,
they often exhibit systematic failures on specific data subsets, known as error
slices. Identifying and mitigating these error slices is crucial to enhancing
model robustness and reliability in real-world scenarios. In this paper, we
introduce DebugAgent, an automated framework for error slice discovery and
model repair. DebugAgent first generates task-specific visual attributes to
highlight instances prone to errors through an interpretable and structured
process. It then employs an efficient slice enumeration algorithm to
systematically identify error slices, overcoming the combinatorial challenges
that arise during slice exploration. Additionally, DebugAgent extends its
capabilities by predicting error slices beyond the validation set, addressing a
key limitation of prior approaches. Extensive experiments across multiple
domains - including image classification, pose estimation, and object detection
- show that DebugAgent not only improves the coherence and precision of
identified error slices but also significantly enhances the model repair
capabilities.
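Slice enumeration over visual attributes can be sketched as scanning attribute-value combinations and keeping those with a supported, elevated error rate. The brute-force pairwise scan and the `min_support`/`lift` thresholds below are simplifications of the paper's efficient enumeration algorithm:

```python
from itertools import combinations

def error_slices(records, attrs, min_support=2, lift=1.5):
    """Enumerate attribute-combination slices with elevated error rate.

    `records` are dicts of attribute values plus a boolean "error" flag.
    Brute-force up to pairs of attributes; a simplification of the
    paper's slice-enumeration algorithm."""
    base = sum(r["error"] for r in records) / len(records)
    found = []
    for k in (1, 2):
        for combo in combinations(attrs, k):
            for v in {tuple(r[a] for a in combo) for r in records}:
                hit = [r for r in records if tuple(r[a] for a in combo) == v]
                if len(hit) < min_support:
                    continue  # too few samples to call it a slice
                rate = sum(r["error"] for r in hit) / len(hit)
                if rate >= lift * base:
                    found.append((dict(zip(combo, v)), rate))
    return found

records = [
    {"lighting": "dark", "pose": "side", "error": True},
    {"lighting": "dark", "pose": "side", "error": True},
    {"lighting": "bright", "pose": "front", "error": False},
    {"lighting": "bright", "pose": "side", "error": False},
    {"lighting": "dark", "pose": "front", "error": False},
    {"lighting": "bright", "pose": "front", "error": False},
]
for slc, rate in error_slices(records, ["lighting", "pose"]):
    print(slc, rate)
```

On this toy data the combined slice {lighting: dark, pose: side} surfaces with a 100% error rate, illustrating why combinations matter: neither attribute alone fully isolates the failure mode.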
|
2501.16753
|
Overcoming Semantic Dilution in Transformer-Based Next Frame Prediction
|
cs.CV cs.AI
|
Next-frame prediction in videos is crucial for applications such as
autonomous driving, object tracking, and motion prediction. The primary
challenge in next-frame prediction lies in effectively capturing and processing
both spatial and temporal information from previous video sequences. The
transformer architecture, known for its prowess in handling sequence data, has
made remarkable progress in this domain. However, transformer-based next-frame
prediction models face notable issues: (a) The multi-head self-attention (MHSA)
mechanism requires the input embedding to be split into $N$ chunks, where $N$
is the number of heads. Each segment captures only a fraction of the original
embedding's information, which distorts the representation of the embedding in
the latent space, resulting in a semantic dilution problem; (b) These models
predict the embeddings of the next frames rather than the frames themselves,
but the loss function is based on the errors of the reconstructed frames, not the
predicted embeddings -- this creates a discrepancy between the training
objective and the model output. We propose a Semantic Concentration Multi-Head
Self-Attention (SCMHSA) architecture, which effectively mitigates semantic
dilution in transformer-based next-frame prediction. Additionally, we introduce
a loss function that optimizes SCMHSA in the latent space, aligning the
training objective more closely with the model output. Our method demonstrates
superior performance compared to the original transformer-based predictors.
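The training/output mismatch described in point (b) can be made concrete by contrasting a loss computed directly on predicted embeddings with one computed on frames decoded from them. The plain MSE forms and the stand-in decoder are illustrative simplifications, not the paper's exact loss:

```python
import numpy as np

def latent_loss(pred_embed, target_embed):
    """Loss directly on next-frame embeddings (latent space), matching
    what the predictor actually outputs. Illustrative MSE form."""
    return np.mean((pred_embed - target_embed) ** 2)

def pixel_loss(pred_embed, target_frame, decode):
    """The mismatched alternative: decode first, then compare frames."""
    return np.mean((decode(pred_embed) - target_frame) ** 2)

decode = lambda e: e * 2.0  # hypothetical stand-in decoder
print(latent_loss(np.ones(4), np.ones(4)))         # identical embeddings: 0.0
print(pixel_loss(np.ones(4), np.ones(4), decode))  # decoder error leaks in: 1.0
```

Even with a perfect embedding prediction, the pixel-space loss is nonzero whenever the decoder is imperfect, which is the discrepancy the latent-space objective removes.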
|
2501.16754
|
SSF-PAN: Semantic Scene Flow-Based Perception for Autonomous Navigation
in Traffic Scenarios
|
cs.RO cs.CV
|
Vehicle detection and localization in complex traffic scenarios pose
significant challenges due to the interference of moving objects. Traditional
methods often rely on outlier exclusions or semantic segmentations, which
suffer from low computational efficiency and accuracy. The proposed SSF-PAN can
achieve the functionalities of LiDAR point cloud based object
detection/localization and SLAM (Simultaneous Localization and Mapping) with
high computational efficiency and accuracy, enabling map-free navigation
frameworks. The novelty of this work is threefold: 1) a neural network that
segments static and dynamic objects within scene flows according to their
motion features, that is, semantic scene flow (SSF); 2) an iterative framework
that further optimizes the quality of the input scene flows and the output
segmentation results; 3) a scene flow-based navigation platform for testing
the SSF
perception system in the simulation environment. The proposed SSF-PAN method is
validated using the SUScape-CARLA and the KITTI datasets, as well as on the
CARLA simulator. Experimental results demonstrate that the proposed approach
outperforms traditional methods in terms of scene flow computation accuracy,
moving object detection accuracy, computational efficiency, and autonomous
navigation effectiveness.
|
2501.16756
|
Random Forest Calibration
|
cs.LG
|
The Random Forest (RF) classifier is often claimed to be relatively well
calibrated when compared with other machine learning methods. Moreover, the
existing literature suggests that traditional calibration methods, such as
isotonic regression, do not substantially enhance the calibration of RF
probability estimates unless supplied with extensive calibration data sets,
which can represent a significant obstacle in cases of limited data
availability. Nevertheless, there seems to be no comprehensive study validating
such claims and systematically comparing state-of-the-art calibration methods
specifically for RF. To close this gap, we investigate a broad spectrum of
calibration methods tailored to or at least applicable to RF, ranging from
scaling techniques to more advanced algorithms. Our results based on synthetic
as well as real-world data unravel the intricacies of RF probability estimates,
scrutinize the impact of hyper-parameters, and compare calibration methods in a
systematic way. We show that a well-optimized RF performs as well as or better
than leading calibration approaches.
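Isotonic regression, the classic calibrator the abstract references, fits a non-decreasing map from classifier scores to probabilities via the Pool Adjacent Violators algorithm. A minimal sketch, assuming binary 0/1 labels and unit sample weights:

```python
import numpy as np

def isotonic_fit(scores, labels):
    """Pool Adjacent Violators: fit a non-decreasing map from classifier
    scores to calibrated probabilities. A minimal sketch of the isotonic
    calibrator; production versions also handle ties and new scores."""
    order = np.argsort(scores)
    y = labels[order].astype(float)
    means, weights = list(y), [1.0] * len(y)
    # Merge adjacent blocks until block means are non-decreasing.
    i = 0
    while i < len(means) - 1:
        if means[i] > means[i + 1]:
            tot = weights[i] + weights[i + 1]
            means[i] = (means[i] * weights[i]
                        + means[i + 1] * weights[i + 1]) / tot
            weights[i] = tot
            del means[i + 1], weights[i + 1]
            if i > 0:
                i -= 1  # a merge can create a new violation to the left
        else:
            i += 1
    # Expand block means back to per-sample calibrated probabilities.
    calibrated = np.repeat(means, [int(w) for w in weights])
    out = np.empty(len(y))
    out[order] = calibrated
    return out

print(isotonic_fit(np.array([0.1, 0.2, 0.3]), np.array([0, 1, 0])))
```

The middle violation (label 1 followed by label 0 at a higher score) gets pooled into a shared 0.5, preserving monotonicity, which is exactly why the method needs a sizeable calibration set to produce fine-grained probabilities.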
|