| id | title | categories | abstract |
|---|---|---|---|
2502.07237
|
DrugImproverGPT: A Large Language Model for Drug Optimization with
Fine-Tuning via Structured Policy Optimization
|
cs.LG cs.CL q-bio.BM stat.ML
|
Finetuning a Large Language Model (LLM) is crucial for generating results
toward specific objectives. This research delves into the realm of drug
optimization and introduces a novel reinforcement learning algorithm to finetune
a drug optimization LLM-based generative model, enhancing the original drug
across target objectives while retaining the beneficial chemical properties of
the original drug. This work comprises two primary components: (1)
DrugImprover: a framework tailored for improving robustness and efficiency in
drug optimization. It includes an LLM designed for drug optimization and a novel
Structured Policy Optimization (SPO) algorithm, which is theoretically
grounded. This algorithm offers a unique perspective for fine-tuning the
LLM-based generative model by aligning the improvement of the generated
molecule with the input molecule under desired objectives. (2) A dataset of 1
million compounds, each with OEDOCK docking scores on 5 human proteins
associated with cancer cells and 24 binding sites from SARS-CoV-2 virus. We
conduct a comprehensive evaluation of SPO and demonstrate its effectiveness in
improving the original drug across target properties. Our code and dataset will
be publicly available at: https://github.com/xuefeng-cs/DrugImproverGPT.
|
2502.07238
|
Diffusion Suction Grasping with Large-Scale Parcel Dataset
|
cs.CV cs.AI
|
While recent advances in object suction grasping have shown remarkable
progress, significant challenges persist, particularly in cluttered and
complex parcel-handling scenarios. Two fundamental limitations hinder current
approaches: (1) the lack of a comprehensive suction grasp dataset tailored for
parcel manipulation tasks, and (2) insufficient adaptability to diverse object
characteristics including size variations, geometric complexity, and textural
diversity. To address these challenges, we present Parcel-Suction-Dataset, a
large-scale synthetic dataset containing 25 thousand cluttered scenes with 410
million precision-annotated suction grasp poses. This dataset is generated
through our novel geometric sampling algorithm that enables efficient
generation of optimal suction grasps incorporating both physical constraints
and material properties. We further propose Diffusion-Suction, an innovative
framework that reformulates suction grasp prediction as a conditional
generation task through denoising diffusion probabilistic models. Our method
iteratively refines random noise into suction grasp score maps through
visual-conditioned guidance from point cloud observations, effectively learning
spatial point-wise affordances from our synthetic dataset. Extensive
experiments demonstrate that the simple yet efficient Diffusion-Suction
achieves new state-of-the-art performance compared to previous models on both
Parcel-Suction-Dataset and the public SuctionNet-1Billion benchmark.
|
2502.07239
|
Contextual Gesture: Co-Speech Gesture Video Generation through
Context-aware Gesture Representation
|
cs.CV cs.AI
|
Co-speech gesture generation is crucial for creating lifelike avatars and
enhancing human-computer interactions by synchronizing gestures with speech.
Despite recent advancements, existing methods struggle with accurately
identifying the rhythmic or semantic triggers from audio for generating
contextualized gesture patterns and achieving pixel-level realism. To address
these challenges, we introduce Contextual Gesture, a framework that improves
co-speech gesture video generation through three innovative components: (1) a
chronological speech-gesture alignment that temporally connects two modalities,
(2) a contextualized gesture tokenization that incorporates speech context into
motion pattern representation through distillation, and (3) a structure-aware
refinement module that employs edge connection to link gesture keypoints to
improve video generation. Our extensive experiments demonstrate that Contextual
Gesture not only produces realistic and speech-aligned gesture videos but also
supports long-sequence generation and video gesture editing applications, as
shown in Fig. 1. Project page: https://andypinxinliu.github.io/Contextual-Gesture/.
|
2502.07242
|
Nonlinear Reed-Solomon codes and nonlinear skew quasi-cyclic codes
|
cs.IT math.IT math.RA
|
This article begins with an exploration of nonlinear codes
($\mathbb{F}_q$-linear subspaces of $\mathbb{F}_{q^m}^n$) which are
generalizations of the familiar Reed-Solomon codes. This then leads to a wider
exploration of nonlinear analogues of the skew quasi-cyclic codes of index
$\ell$ first explored in 2010 by Abualrub et al., i.e.,
$\mathbb{F}_{q^m}[x;\sigma]$-submodules of
$\left(\mathbb{F}_{q^m}[x;\sigma]/(x^n - 1)\right)^\ell$. After introducing
nonlinear skew quasi-cyclic codes, we then determine the module structure of
these codes using a two-fold iteration of the Smith normal form of matrices
over skew polynomial rings. Finally, we show that in certain cases, a single use
of the Smith normal form will suffice to determine the elementary divisors of
the code.
|
2502.07243
|
Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised
Disentanglement
|
cs.SD cs.AI
|
The imitation of voice, targeting specific speech attributes such as timbre
and speaking style, is crucial in speech generation. However, existing methods
rely heavily on annotated data and struggle to effectively disentangle
timbre and style, leading to challenges in achieving controllable generation,
especially in zero-shot scenarios. To address these issues, we propose Vevo, a
versatile zero-shot voice imitation framework with controllable timbre and
style. Vevo operates in two core stages: (1) Content-Style Modeling: given the
content tokens of either text or speech as input, we utilize an autoregressive
transformer, prompted by a style reference, to generate the content-style
tokens; (2) Acoustic Modeling: given the content-style tokens as input, we
employ a flow-matching transformer, prompted by a timbre reference, to produce
acoustic representations. To obtain the content and content-style
tokens of speech, we design a fully self-supervised approach that progressively
decouples the timbre, style, and linguistic content of speech. Specifically, we
adopt VQ-VAE as the tokenizer for the continuous hidden features of HuBERT. We
treat the vocabulary size of the VQ-VAE codebook as the information bottleneck,
and adjust it carefully to obtain the disentangled speech representations.
Solely self-supervised trained on 60K hours of audiobook speech data, without
any fine-tuning on style-specific corpora, Vevo matches or surpasses existing
methods in accent and emotion conversion tasks. Additionally, Vevo's
effectiveness in zero-shot voice conversion and text-to-speech tasks further
demonstrates its strong generalization and versatility. Audio samples are
available at https://versavoice.github.io.
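The codebook-size bottleneck described above can be illustrated with a toy vector-quantization sketch (not the authors' code; the feature shapes and codebooks below are made up): a VQ codebook assigns each continuous feature to its nearest entry, and enlarging the codebook loosens the bottleneck, preserving more of the input.

```python
import numpy as np

def vq(features, codebook):
    """Assign each feature vector to its nearest codebook entry."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))      # stand-in for continuous hidden features
small_cb = rng.standard_normal((4, 16))   # tight bottleneck: 4 codes
# Looser bottleneck: a superset codebook with 64 codes, so its
# reconstruction error can only be smaller or equal.
large_cb = np.vstack([small_cb, rng.standard_normal((60, 16))])

idx_s, q_s = vq(feats, small_cb)
idx_l, q_l = vq(feats, large_cb)
err_s = ((feats - q_s) ** 2).mean()
err_l = ((feats - q_l) ** 2).mean()
assert err_l <= err_s  # larger vocabulary -> more information passes through
```

Shrinking the codebook forces the tokens to discard information, which is the lever the abstract describes for separating content from timbre and style.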
|
2502.07244
|
Linear Transformers as VAR Models: Aligning Autoregressive Attention
Mechanisms with Autoregressive Forecasting
|
cs.LG cs.AI stat.ML
|
Autoregressive attention-based time series forecasting (TSF) has drawn
increasing interest, with mechanisms like linear attention sometimes
outperforming vanilla attention. However, deeper Transformer architectures
frequently misalign with autoregressive objectives, obscuring the underlying
VAR structure embedded within linear attention and hindering their ability to
capture the data generative processes in TSF. In this work, we first show that
a single linear attention layer can be interpreted as a dynamic vector
autoregressive (VAR) structure. We then explain that existing multi-layer
Transformers have structural mismatches with the autoregressive forecasting
objective, which impair interpretability and generalization ability. To address
this, we show that by rearranging the MLP, attention, and input-output flow,
multi-layer linear attention can also be aligned as a VAR model. Then, we
propose Structural Aligned Mixture of VAR (SAMoVAR), a linear Transformer
variant that integrates interpretable dynamic VAR weights for multivariate TSF.
By aligning the Transformer architecture with autoregressive objectives,
SAMoVAR delivers improved performance, interpretability, and computational
efficiency compared to SOTA TSF models.
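The single-layer claim can be checked numerically with a minimal sketch (illustrative only, not the SAMoVAR implementation; an identity feature map is assumed): causal linear attention over a multivariate series equals a VAR whose coefficient matrices are data-dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4                      # sequence length, channel dimension
x = rng.standard_normal((T, d))  # multivariate time series
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

q, k, v = x @ W_q, x @ W_k, x @ W_v

# Causal linear attention (no softmax): y_t = sum_{j<=t} (q_t . k_j) v_j
y_attn = np.zeros((T, d))
for t in range(T):
    for j in range(t + 1):
        y_attn[t] += (q[t] @ k[j]) * v[j]

# Same output rewritten as a dynamic VAR:
# y_t = sum_{j<=t} A_{t,j} x_j  with  A_{t,j} = (q_t . k_j) W_v^T
y_var = np.zeros((T, d))
for t in range(T):
    for j in range(t + 1):
        A_tj = (q[t] @ k[j]) * W_v.T  # data-dependent VAR coefficient matrix
        y_var[t] += A_tj @ x[j]

assert np.allclose(y_attn, y_var)
```

The identity holds because `v_j = x_j W_v`, so each attention term is a linear map of the raw input `x_j` scaled by the query-key interaction, which is exactly a VAR with dynamic weights.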
|
2502.07246
|
Robust Indoor Localization in Dynamic Environments: A Multi-source
Unsupervised Domain Adaptation Framework
|
cs.CV physics.pop-ph
|
Fingerprint localization has gained significant attention due to its
cost-effective deployment, low complexity, and high efficacy. However,
traditional methods, while effective for static data, often struggle in dynamic
environments where data distributions and feature spaces evolve, a common
occurrence in real-world scenarios. To address the challenges of robustness and
adaptability in fingerprint localization for dynamic indoor environments, this
paper proposes DF-Loc, an end-to-end dynamic fingerprint localization system
based on multi-source unsupervised domain adaptation (MUDA). DF-Loc leverages
historical data from multiple time scales to facilitate knowledge transfer in
specific feature spaces, thereby enhancing generalization capabilities in the
target domain and reducing reliance on labeled data. Specifically, the system
incorporates a Quality Control (QC) module for CSI data preprocessing and
employs image processing techniques for CSI fingerprint feature reconstruction.
Additionally, a multi-scale attention-based feature fusion backbone network is
designed to extract multi-level transferable fingerprint features. Finally, a
dual-stage alignment model aligns the distributions of multiple source-target
domain pairs, improving regression characteristics in the target domain.
Extensive experiments conducted in office and classroom environments
demonstrate that DF-Loc outperforms comparative methods in terms of both
localization accuracy and robustness. With 60% of reference points used for
training, DF-Loc achieves average localization errors of 0.79m and 3.72m in
"same-test" scenarios, and 0.94m and 4.39m in "different-test" scenarios,
respectively. This work pioneers an end-to-end multi-source transfer learning
approach for fingerprint localization, providing valuable insights for future
research in dynamic environments.
|
2502.07250
|
NARCE: A Mamba-Based Neural Algorithmic Reasoner Framework for Online
Complex Event Detection
|
cs.LG cs.AI
|
Current machine learning models excel in short-span perception tasks but
struggle to derive high-level insights from long-term observation, a capability
central to understanding complex events (CEs). CEs, defined as sequences of
short-term atomic events (AEs) governed by spatiotemporal rules, are
challenging to detect online due to the need to extract meaningful patterns
from long and noisy sensor data while ignoring irrelevant events. We
hypothesize that state-based methods are well-suited for CE detection, as they
capture event progression through state transitions without requiring long-term
memory. Baseline experiments validate this, demonstrating that the state-space
model Mamba outperforms existing architectures. However, Mamba's reliance on
extensive labeled data, which are difficult to obtain, motivates our second
hypothesis: decoupling CE rule learning from noisy sensor data can reduce data
requirements. To address this, we propose NARCE, a framework that draws on
Neural Algorithmic Reasoning (NAR) to split the task into two components: (i)
learning CE rules independently of sensor data using synthetic concept traces
generated by LLMs and (ii) mapping sensor inputs to these rules via an adapter.
Our results show that NARCE outperforms baselines in accuracy, generalization
to unseen and longer sensor data, and data efficiency, significantly reducing
annotation costs while advancing robust CE detection.
|
2502.07254
|
Fairness in Multi-Agent AI: A Unified Framework for Ethical and
Equitable Autonomous Systems
|
cs.MA cs.AI cs.CY
|
Ensuring fairness in decentralized multi-agent systems presents significant
challenges due to emergent biases, systemic inefficiencies, and conflicting
agent incentives. This paper provides a comprehensive survey of fairness in
multi-agent AI, introducing a novel framework where fairness is treated as a
dynamic, emergent property of agent interactions. The framework integrates
fairness constraints, bias mitigation strategies, and incentive mechanisms to
align autonomous agent behaviors with societal values while balancing
efficiency and robustness. Through empirical validation, we demonstrate that
incorporating fairness constraints results in more equitable decision-making.
This work bridges the gap between AI ethics and system design, offering a
foundation for accountable, transparent, and socially responsible multi-agent
AI systems.
|
2502.07255
|
Beyond Confidence: Adaptive Abstention in Dual-Threshold Conformal
Prediction for Autonomous System Perception
|
cs.RO cs.LG
|
Safety-critical perception systems require both reliable uncertainty
quantification and principled abstention mechanisms to maintain safety under
diverse operational conditions. We present a novel dual-threshold
conformalization framework that provides statistically-guaranteed uncertainty
estimates while enabling selective prediction in high-risk scenarios. Our
approach uniquely combines a conformal threshold ensuring valid prediction sets
with an abstention threshold optimized through ROC analysis, providing
distribution-free coverage guarantees ($\geq 1 - \alpha$) while identifying
unreliable predictions. Through comprehensive evaluation on CIFAR-100,
ImageNet1K, and ModelNet40 datasets, we demonstrate superior robustness across
camera and LiDAR modalities under varying environmental perturbations. The
framework achieves exceptional detection performance (AUC: 0.993 to 0.995)
under severe conditions while maintaining high coverage (>90.0%) and enabling
adaptive abstention (13.5% to 63.4% $\pm$ 0.5) as environmental severity
increases. For LiDAR-based perception, our approach demonstrates particularly
strong performance, maintaining robust coverage (>84.5%) while appropriately
abstaining from unreliable predictions. Notably, the framework shows remarkable
stability under heavy perturbations, with detection performance (AUC: $0.995 \pm 0.001$)
significantly outperforming existing methods across all modalities. Our
unified approach bridges the gap between theoretical guarantees and practical
deployment needs, offering a robust solution for safety-critical autonomous
systems operating in challenging real-world conditions.
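A minimal sketch of the dual-threshold idea (illustrative, not the paper's code; the paper optimizes the abstention threshold via ROC analysis, which is replaced here by a fixed placeholder): a split-conformal quantile yields prediction sets with the $\geq 1 - \alpha$ coverage guarantee, and a second threshold on top-1 confidence triggers abstention.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_classes, alpha = 500, 10, 0.1

# Fake calibration softmax outputs and labels (stand-ins for a real model).
probs_cal = rng.dirichlet(np.ones(n_classes), size=n_cal)
y_cal = rng.integers(0, n_classes, size=n_cal)

# Conformal threshold: quantile of nonconformity s_i = 1 - p(y_i | x_i),
# with the standard finite-sample correction.
s = 1.0 - probs_cal[np.arange(n_cal), y_cal]
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
tau_conf = np.quantile(s, q_level)

def predict_set(probs, tau):
    """Prediction set: all classes c with 1 - p(c|x) <= tau."""
    return np.where(1.0 - probs <= tau)[0]

tau_abstain = 0.5  # placeholder; chosen by ROC analysis in the paper

probs_test = rng.dirichlet(np.ones(n_classes))
pred_set = predict_set(probs_test, tau_conf)
abstain = bool(probs_test.max() < tau_abstain)
```

The two thresholds act independently: `tau_conf` controls set-valued coverage, while `tau_abstain` decides whether the system should defer rather than predict at all.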
|
2502.07259
|
Flat U-Net: An Efficient Ultralightweight Model for Solar Filament
Segmentation in Full-disk H$\alpha$ Images
|
astro-ph.IM astro-ph.SR cs.CV cs.LG
|
Solar filaments are one of the most prominent features observed on the Sun,
and their evolution is closely related to various solar activities, such as
flares and coronal mass ejections. Real-time automated identification of solar
filaments is the most effective approach to managing large volumes of data.
Existing models of filament identification are characterized by large parameter
sizes and high computational costs, which limit their future applications in
highly integrated and intelligent ground-based and space-borne observation
devices. Consequently, the design of more lightweight models will facilitate
the advancement of intelligent observation equipment. In this study, we
introduce Flat U-Net, a novel and highly efficient ultralightweight model that
incorporates simplified channel attention (SCA) and channel self-attention
(CSA) convolutional blocks for the segmentation of solar filaments in full-disk
H$\alpha$ images. Feature information from each network layer is fully
extracted to reconstruct interchannel feature representations. Each block
effectively optimizes the channel features from the previous layer,
significantly reducing parameters. The network architecture presents an elegant
flattening, improving its efficiency, and simplifying the overall design.
Experimental validation demonstrates that a model composed of pure SCAs
achieves a precision of approximately 0.93, with dice similarity coefficient
(DSC) and recall rates of 0.76 and 0.64, respectively, significantly
outperforming the classical U-Net. Introducing a certain number of CSA blocks
improves the DSC and recall rates to 0.82 and 0.74, respectively, which
demonstrates a pronounced advantage, particularly concerning model weight size
and detection effectiveness. The data set, models, and code are available as
open-source resources.
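The SCA/CSA blocks above build on the generic channel-attention pattern, which a small sketch can illustrate (the exact SCA/CSA designs are the paper's; this shows only the common mechanism): global average pooling produces per-channel statistics that gate the feature map.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w):
    """x: (C, H, W) feature map; w: (C, C) channel-mixing weights.
    Squeeze spatial dims, mix channels, then gate each channel."""
    squeeze = x.mean(axis=(1, 2))      # (C,) global average pool
    gate = sigmoid(w @ squeeze)        # (C,) per-channel gate in (0, 1)
    return x * gate[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((8, 8))
y = channel_attention(x, w)
assert y.shape == x.shape
```

Because the gate is computed from pooled statistics rather than dense spatial maps, blocks of this kind add very few parameters, which is the lever the abstract cites for its ultralightweight design.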
|
2502.07263
|
Hidden Division of Labor in Scientific Teams Revealed Through 1.6
Million LaTeX Files
|
cs.SI cs.CL cs.DL
|
Recognition of individual contributions is fundamental to the scientific
reward system, yet coauthored papers obscure who did what. Traditional
proxies (author order and career stage) reinforce biases, while contribution
statements remain self-reported and limited to select journals. We construct
the first large-scale dataset on writing contributions by analyzing
author-specific macros in LaTeX files from 1.6 million papers (1991-2023) by 2
million scientists. Validation against self-reported statements (precision =
0.87), author order patterns, field-specific norms, and Overleaf records
(Spearman's $\rho = 0.6$, $p < 0.05$) confirms the reliability of the created data.
Using explicit section information, we reveal a hidden division of labor within
scientific teams: some authors primarily contribute to conceptual sections
(e.g., Introduction and Discussion), while others focus on technical sections
(e.g., Methods and Experiments). These findings provide the first large-scale
evidence of implicit labor division in scientific teams, challenging
conventional authorship practices and informing institutional policies on
credit allocation.
|
2502.07265
|
Riemannian Proximal Sampler for High-accuracy Sampling on Manifolds
|
stat.ML cs.LG math.ST stat.TH
|
We introduce the Riemannian Proximal Sampler, a method for sampling from
densities defined on Riemannian manifolds. The performance of this sampler
critically depends on two key oracles: the Manifold Brownian Increments (MBI)
oracle and the Riemannian Heat-kernel (RHK) oracle. We establish high-accuracy
sampling guarantees for the Riemannian Proximal Sampler, showing that
generating samples with $\varepsilon$-accuracy requires
$O(\log(1/\varepsilon))$ iterations in Kullback-Leibler divergence assuming
access to exact oracles and $O(\log^2(1/\varepsilon))$ iterations in the total
variation metric assuming access to sufficiently accurate inexact oracles.
Furthermore, we present practical implementations of these oracles by
leveraging heat-kernel truncation and Varadhan's asymptotics. In the latter
case, we interpret the Riemannian Proximal Sampler as a discretization of the
entropy-regularized Riemannian Proximal Point Method on the associated
Wasserstein space. We provide preliminary numerical results that illustrate the
effectiveness of the proposed methodology.
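For intuition, the proximal sampler that this method generalizes alternates two exact sampling steps; on a manifold the Gaussian transition becomes the heat kernel $p_\eta$, which is what the two oracles provide. A schematic sketch in notation of my choosing (not the paper's):

```latex
\begin{align*}
  y_k     &\sim p_\eta(\,\cdot \mid x_k)
          && \text{Brownian increment of duration } \eta \text{ (MBI oracle)} \\
  x_{k+1} &\sim \pi(x)\, p_\eta(y_k \mid x) \,/\, Z(y_k)
          && \text{heat-kernel-weighted target (requires the RHK oracle)}
\end{align*}
```

Exact oracles make each step a valid Gibbs update for the joint density $\pi(x)\,p_\eta(y \mid x)$, which is the source of the $O(\log(1/\varepsilon))$ iteration guarantee quoted above.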
|
2502.07266
|
When More is Less: Understanding Chain-of-Thought Length in LLMs
|
cs.AI cs.CL cs.LG
|
Chain-of-thought (CoT) reasoning enhances the multi-step reasoning
capabilities of large language models (LLMs) by breaking complex tasks into
smaller, manageable sub-tasks. Researchers have been exploring ways to guide
models to generate more complex CoT processes to improve the reasoning ability
of LLMs, such as long CoT and the test-time scaling law. However, for most
models and tasks, does an increase in CoT length consistently lead to improved
reasoning accuracy? In this paper, we observe a nuanced relationship: as the
number of reasoning steps increases, performance initially improves but
eventually decreases. To understand this phenomenon, we provide a piece of
evidence that longer reasoning processes are increasingly susceptible to noise.
We theoretically prove the existence of an optimal CoT length and derive a
scaling law for this optimal length based on model capability and task
difficulty. Inspired by our theory, we conduct experiments on both synthetic
and real world datasets and propose Length-filtered Vote to alleviate the
effects of excessively long or short CoTs. Our findings highlight the critical
need to calibrate CoT length to align with model capabilities and task demands,
offering a principled framework for optimizing multi-step reasoning in LLMs.
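The voting idea can be sketched in a few lines (an illustrative reading of "Length-filtered Vote", not the paper's procedure; the quantile band below is my assumption): discard chains whose length is extreme, then majority-vote over the survivors.

```python
from collections import Counter

def length_filtered_vote(samples, lo_q=0.25, hi_q=0.75):
    """Majority vote over sampled (chain_length, answer) pairs,
    keeping only chains whose length lies in a middle quantile band."""
    lengths = sorted(s[0] for s in samples)
    lo = lengths[int(lo_q * (len(lengths) - 1))]
    hi = lengths[int(hi_q * (len(lengths) - 1))]
    kept = [ans for ln, ans in samples if lo <= ln <= hi]
    if not kept:                       # fall back to a plain majority vote
        kept = [ans for _, ans in samples]
    return Counter(kept).most_common(1)[0][0]

# Chains that are too short or too long are noisier; here the
# mid-length chains agree on the correct answer.
samples = [(2, "B"), (35, "C"), (9, "A"), (11, "A"), (10, "A"), (40, "C")]
print(length_filtered_vote(samples))  # -> A
```

This mirrors the abstract's finding: filtering by length removes both under-reasoned and noise-prone over-long chains before aggregation.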
|
2502.07269
|
Exploring Active Data Selection Strategies for Continuous Training in
Deepfake Detection
|
cs.CV
|
In deepfake detection, it is essential to maintain high performance by
adjusting the parameters of the detector as new deepfake methods emerge. In
this paper, we propose a method to automatically and actively select the small
amount of additional data required for the continuous training of deepfake
detection models in situations where deepfake detection models are regularly
updated. The proposed method automatically selects new training data from a
\textit{redundant} pool set containing a large number of images generated by
new deepfake methods and real images, using the confidence score of the
deepfake detection model as a metric. Experimental results show that the
deepfake detection model, continuously trained with a small amount of
additional data automatically selected and added to the original training set,
significantly and efficiently improved the detection performance, achieving an
EER of 2.5% with only 15% of the amount of data in the pool set.
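The selection rule can be sketched as follows (illustrative; the paper uses the detector's confidence score as the metric, and "closest to the decision boundary" is my reading of the criterion): from the redundant pool, pick the samples the current detector is least sure about.

```python
import numpy as np

def select_uncertain(scores, budget):
    """Return indices of the `budget` pool items whose confidence is
    closest to the 0.5 decision boundary (most uncertain)."""
    uncertainty = np.abs(np.asarray(scores) - 0.5)
    return np.argsort(uncertainty)[:budget]

pool_scores = [0.99, 0.51, 0.02, 0.48, 0.95, 0.55]  # detector confidences
chosen = select_uncertain(pool_scores, budget=3)
print(sorted(chosen.tolist()))  # -> [1, 3, 5]
```

Confidently classified pool items (near 0 or 1) add little signal, so spending the annotation budget near the boundary is what lets a small selected subset drive most of the improvement.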
|
2502.07272
|
GENERator: A Long-Context Generative Genomic Foundation Model
|
cs.CL q-bio.GN
|
Advancements in DNA sequencing technologies have significantly improved our
ability to decode genomic sequences. However, the prediction and interpretation
of these sequences remain challenging due to the intricate nature of genetic
material. Large language models (LLMs) have introduced new opportunities for
biological sequence analysis. Recent developments in genomic language models
have underscored the potential of LLMs in deciphering DNA sequences.
Nonetheless, existing models often face limitations in robustness and
application scope, primarily due to constraints in model structure and training
data scale. To address these limitations, we present GENERator, a generative
genomic foundation model featuring a context length of 98k base pairs (bp) and
1.2B parameters. Trained on an expansive dataset comprising 386B bp of
eukaryotic DNA, the GENERator demonstrates state-of-the-art performance across
both established and newly proposed benchmarks. The model adheres to the
central dogma of molecular biology, accurately generating protein-coding
sequences that translate into proteins structurally analogous to known
families. It also shows significant promise in sequence optimization,
particularly through the prompt-responsive generation of promoter sequences
with specific activity profiles. These capabilities position the GENERator as a
pivotal tool for genomic research and biotechnological advancement, enhancing
our ability to interpret and predict complex biological systems and enabling
precise genomic interventions.
|
2502.07273
|
Variational Learning Induces Adaptive Label Smoothing
|
cs.LG cs.AI
|
We show that variational learning naturally induces an adaptive label
smoothing where label noise is specialized for each example. Such
label-smoothing is useful to handle examples with labeling errors and
distribution shifts, but designing a good adaptivity strategy is not always
easy. We propose to skip this step and simply use the natural adaptivity
induced during the optimization of a variational objective. We show empirical
results where a variational algorithm called IVON outperforms traditional label
smoothing and yields adaptivity strategies similar to those of an existing
approach. By connecting Bayesian methods to label smoothing, our work provides
a new way to handle overconfident predictions.
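The contrast between uniform and per-example smoothing can be made concrete with a small sketch (the per-example `eps` values below are illustrative stand-ins for the adaptivity that, per the abstract, variational learning induces automatically):

```python
import numpy as np

def smoothed_targets(labels, n_classes, eps):
    """One-hot targets smoothed by eps (a scalar, or one value per example)."""
    eps = np.broadcast_to(np.asarray(eps, dtype=float), (len(labels),))
    t = np.zeros((len(labels), n_classes))
    t[np.arange(len(labels)), labels] = 1.0
    return (1 - eps)[:, None] * t + eps[:, None] / n_classes

labels = np.array([0, 2, 1])
uniform = smoothed_targets(labels, 3, 0.1)               # same eps everywhere
adaptive = smoothed_targets(labels, 3, [0.0, 0.3, 0.1])  # eps per example

# Both remain valid distributions; the adaptive version smooths the
# (presumably mislabeled or shifted) second example more heavily.
assert np.allclose(uniform.sum(axis=1), 1.0)
assert np.allclose(adaptive.sum(axis=1), 1.0)
```

Traditional label smoothing fixes one `eps` for the whole dataset; the adaptive variant lets noisy examples receive more smoothing, which is the behavior the abstract attributes to the variational objective.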
|
2502.07274
|
Cost-Efficient Continual Learning with Sufficient Exemplar Memory
|
cs.LG cs.AI
|
Continual learning (CL) research typically assumes highly constrained
exemplar memory resources. However, in many real-world scenarios-especially in
the era of large foundation models-memory is abundant, while GPU computational
costs are the primary bottleneck. In this work, we investigate CL in a novel
setting where exemplar memory is ample (i.e., sufficient exemplar memory).
Unlike prior methods designed for strict exemplar memory constraints, we
propose a simple yet effective approach that directly operates in the model's
weight space through a combination of weight resetting and averaging
techniques. Our method achieves state-of-the-art performance while reducing the
computational cost to a quarter or a third of that of existing methods. These findings
challenge conventional CL assumptions and provide a practical baseline for
computationally efficient CL applications.
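The two weight-space operations named above can be sketched on toy vectors (illustrative assumptions only; when to reset and how to weight the average are design choices the paper makes, not shown here):

```python
import numpy as np

def merge_weights(w_old, w_new, w_init, reset_frac=0.2, avg_coef=0.5):
    """Partially reset the newly trained weights toward the
    initialization, then average with the previous task's weights
    to retain earlier knowledge."""
    w_reset = (1 - reset_frac) * w_new + reset_frac * w_init
    return avg_coef * w_old + (1 - avg_coef) * w_reset

w_init = np.zeros(4)
w_task1 = np.array([1.0, 0.0, 1.0, 0.0])  # weights after task 1
w_task2 = np.array([0.0, 2.0, 0.0, 2.0])  # weights after task 2
w_merged = merge_weights(w_task1, w_task2, w_init)
assert np.allclose(w_merged, [0.5, 0.8, 0.5, 0.8])
```

Because both operations are single passes over the weight tensors, their cost is negligible next to gradient-based replay, which is consistent with the reported compute savings.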
|
2502.07276
|
Dataset Ownership Verification in Contrastive Pre-trained Models
|
cs.LG cs.AI cs.CV
|
High-quality open-source datasets, which necessitate substantial curation
effort, have become the primary catalyst for the swift progress of deep
learning. Concurrently, protecting these datasets is paramount for the
well-being of the data owner. Dataset ownership verification emerges as a
crucial method in this domain, but existing approaches are often limited to
supervised models and cannot be directly extended to increasingly popular
unsupervised pre-trained models. In this work, we propose the first dataset
ownership verification method tailored specifically for self-supervised
models pre-trained via contrastive learning. Its primary objective is to
ascertain whether a suspicious black-box backbone has been pre-trained on a
specific unlabeled dataset, aiding dataset owners in upholding their rights.
The proposed approach is motivated by our empirical insights that when models
are trained with the target dataset, the unary and binary instance
relationships within the embedding space exhibit significant variations
compared to models trained without the target dataset. We validate the efficacy
of this approach across multiple contrastive pre-trained models including
SimCLR, BYOL, SimSiam, MOCO v3, and DINO. The results demonstrate that our
method rejects the null hypothesis with a $p$-value markedly below $0.05$,
surpassing all previous methodologies. Our code is available at
https://github.com/xieyc99/DOV4CL.
|
2502.07277
|
Enhancing Video Understanding: Deep Neural Networks for Spatiotemporal
Analysis
|
cs.CV cs.AI
|
Video has become the primary way we share information online, driving a surge
in demand for algorithms that can analyze and understand video content, a trend
that will continue as video dominates the digital landscape. These algorithms
will extract and
classify related features from the video and will use them to describe the
events and objects in the video. Deep neural networks have displayed
encouraging outcomes in the realm of feature extraction and video description.
This paper will explore the spatiotemporal features found in videos and recent
advancements in deep neural networks in video understanding. We will review
some of the main trends in video understanding models and their structural
design, the main problems, and some offered solutions in this topic. We will
also review and compare significant video understanding and action recognition
datasets.
|
2502.07278
|
Articulate That Object Part (ATOP): 3D Part Articulation from Text and
Motion Personalization
|
cs.CV
|
We present ATOP (Articulate That Object Part), a novel method based on motion
personalization to articulate a 3D object with respect to a part and its motion
as prescribed in a text prompt. Specifically, the text input allows us to tap
into the power of modern-day video diffusion to generate plausible motion
samples for the right object category and part. In turn, the input 3D object
provides image prompting to personalize the generated video to that very object
we wish to articulate. Our method starts with a few-shot finetuning for
category-specific motion generation, a key first step to compensate for the
lack of articulation awareness by current video diffusion models. For this, we
finetune a pre-trained multi-view image generation model for controllable
multi-view video generation, using a small collection of video samples obtained
for the target object category. This is followed by motion video
personalization that is realized by multi-view rendered images of the target 3D
object. At last, we transfer the personalized video motion to the target 3D
object via differentiable rendering to optimize part motion parameters by a
score distillation sampling loss. We show that our method is capable of
generating realistic motion videos and predicting 3D motion parameters more
accurately and generalizably than prior works.
|
2502.07279
|
Exploratory Diffusion Policy for Unsupervised Reinforcement Learning
|
cs.LG cs.AI
|
Unsupervised reinforcement learning (RL) aims to pre-train agents by
exploring states or skills in reward-free environments, facilitating the
adaptation to downstream tasks. However, existing methods often overlook the
fitting ability of pre-trained policies and struggle to handle the
heterogeneous pre-training data, which are crucial for achieving efficient
exploration and fast fine-tuning. To address this gap, we propose Exploratory
Diffusion Policy (EDP), which leverages the strong expressive ability of
diffusion models to fit the explored data, both boosting exploration and
obtaining an efficient initialization for downstream tasks. Specifically, we
estimate the distribution of collected data in the replay buffer with the
diffusion policy and propose a score intrinsic reward, encouraging the agent to
explore unseen states. For fine-tuning the pre-trained diffusion policy on
downstream tasks, we provide both theoretical analyses and practical
algorithms, including an alternating method of Q function optimization and
diffusion policy distillation. Extensive experiments demonstrate the
effectiveness of EDP in efficient exploration during pre-training and fast
adaptation during fine-tuning.
|
2502.07280
|
MIGT: Memory Instance Gated Transformer Framework for Financial
Portfolio Management
|
cs.LG cs.AI
|
Deep reinforcement learning (DRL) has been applied in financial portfolio
management to improve returns in changing market conditions. However, unlike
most fields where DRL is widely used, the stock market is more volatile and
dynamic as it is affected by several factors such as global events and investor
sentiment. Therefore, it remains a challenge to construct a DRL-based portfolio
management framework with strong return capability, stable training, and
generalization ability. This study introduces a new framework utilizing the
Memory Instance Gated Transformer (MIGT) for effective portfolio management. By
incorporating a novel Gated Instance Attention module, which combines a
transformer variant, instance normalization, and a Lite Gate Unit, our approach
aims to maximize investment returns while ensuring the learning process's
stability and reducing outlier impacts. Tested on the Dow Jones Industrial
Average 30, our framework's performance is evaluated against fifteen other
strategies using key financial metrics like the cumulative return and
risk-return ratios (Sharpe, Sortino, and Omega ratios). The results highlight
MIGT's advantage, showcasing at least a 9.75% improvement in cumulative returns
and a minimum 2.36% increase in risk-return ratios over competing strategies,
marking a significant advancement in DRL for portfolio management.
|
2502.07281
|
Supervised Contrastive Block Disentanglement
|
cs.LG
|
Real-world datasets often combine data collected under different experimental
conditions. This yields larger datasets, but also introduces spurious
correlations that make it difficult to model the phenomena of interest. We
address this by learning two embeddings to independently represent the
phenomena of interest and the spurious correlations. The embedding representing
the phenomena of interest is correlated with the target variable $y$, and is
invariant to the environment variable $e$. In contrast, the embedding
representing the spurious correlations is correlated with $e$. The invariance
to $e$ is difficult to achieve on real-world datasets. Our primary contribution
is an algorithm called Supervised Contrastive Block Disentanglement (SCBD) that
effectively enforces this invariance. It is based purely on Supervised
Contrastive Learning, and applies to real-world data better than existing
approaches. We empirically validate SCBD on two challenging problems. The first
problem is domain generalization, where we achieve strong performance on a
synthetic dataset, as well as on Camelyon17-WILDS. We introduce a single
hyperparameter $\alpha$ to control the degree of invariance to $e$. When we
increase $\alpha$ to strengthen the degree of invariance, out-of-distribution
performance improves at the expense of in-distribution performance. The second
problem is batch correction, in which we apply SCBD to preserve biological
signal and remove inter-well batch effects when modeling single-cell
perturbations from 26 million Optical Pooled Screening images.
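The abstract does not spell out SCBD's objective, but the Supervised Contrastive Learning building block it rests on is standard. A minimal NumPy sketch of the SupCon loss (embeddings and labels here are made-up toy data): clustered-by-$y$ embeddings score a low loss against $y$ and a high loss against an environment label $e$ that cuts across clusters.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss: pull same-label embeddings together,
    push different-label embeddings apart (minimal NumPy sketch)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & not_self[i]       # same-label positives
        log_denom = np.log(np.exp(sim[i][not_self[i]]).sum())
        loss += -(sim[i][pos] - log_denom).mean()
    return loss / n

z = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.9, 0.1]])
y = np.array([0, 0, 1, 1])     # embeddings clustered by the target label
e = np.array([0, 1, 0, 1])     # environment label cuts across the clusters
assert supcon_loss(z, y) < supcon_loss(z, e)
```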
|
2502.07282
|
Leader-follower formation enabled by pressure sensing in free-swimming
undulatory robotic fish
|
cs.RO
|
Fish use their lateral lines to sense flows and pressure gradients, enabling
them to detect nearby objects and organisms. Towards replicating this
capability, we demonstrated successful leader-follower formation swimming using
flow pressure sensing in our undulatory robotic fish ($\mu$Bot/MUBot). The
follower $\mu$Bot is equipped at its head with bilateral pressure sensors to
detect signals excited by both its own and the leader's movements. First, using
experiments with static formations between an undulating leader and a
stationary follower, we determined the formation that resulted in strong
pressure variations measured by the follower. This formation was then selected
as the desired formation in free swimming for obtaining an expert policy. Next,
a long short-term memory neural network was used as the control policy that
maps the pressure signals along with the robot motor commands and the Euler
angles (measured by the onboard IMU) to the steering command. The policy was
trained to imitate the expert policy using behavior cloning and Dataset
Aggregation (DAgger). The results show that with merely two bilateral pressure
sensors and less than one hour of training data, the follower effectively
tracked the leader within distances of up to 200 mm (= 1 body length) while
swimming at speeds of 155 mm/s (= 0.8 body lengths/s). This work highlights the
potential of fish-inspired robots to effectively navigate fluid environments
and achieve formation swimming through the use of flow pressure feedback.
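The DAgger pipeline described above (roll out the learner, label visited states with the expert policy, aggregate, retrain) can be sketched on a toy 1-D tracking task; the dynamics, expert gain, and linear policy are illustrative assumptions, not the robot's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
expert = lambda err: 2.0 * err        # stand-in expert steering law
theta = 0.0                           # learner policy: action = theta * err

xs, ys = [], []
for _ in range(5):                    # DAgger iterations
    err = 1.0
    for _ in range(20):               # roll out the *learner's* policy
        a = theta * err
        xs.append(err)                # aggregate the states actually visited,
        ys.append(expert(err))        # labeled with the expert's action
        err += -0.3 * a + 0.01 * rng.standard_normal()
    X, Y = np.array(xs), np.array(ys)
    theta = (X @ Y) / (X @ X)         # refit the policy on the aggregated data

print(round(theta, 3))                # recovers the expert gain 2.0
```

Rolling out the learner (rather than the expert) is the key point: the dataset covers the states the learner actually reaches, which is what distinguishes DAgger from plain behavior cloning.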
|
2502.07285
|
Negative Dependence as a toolbox for machine learning : review and new
developments
|
stat.ML cs.LG math.PR
|
Negative dependence is becoming a key driver in advancing learning
capabilities beyond the limits of traditional independence. Recent developments
have provided growing evidence for negatively dependent systems as a learning
paradigm in a broad range of fundamental machine learning challenges, including
optimization, sampling, dimensionality reduction and sparse signal recovery,
often surpassing the performance of current methods based on statistical
independence. The most popular negatively dependent model has been that of
determinantal point processes (DPPs), which have their origins in quantum
theory. However, other models, such as perturbed lattice models, strongly
Rayleigh measures, and zeros of random functions, have gained salience in various
learning applications. In this article, we review this burgeoning field of
research, as it has developed over the past two decades or so. We also present
new results on applications of DPPs to the parsimonious representation of
neural networks. In the limited scope of the article, we mostly focus on
aspects of this area to which the authors contributed over the recent years,
including applications to Monte Carlo methods, coresets and stochastic gradient
descent, stochastic networks, signal processing and connections to quantum
computation. However, starting from basics of negative dependence for the
uninitiated reader, extensive references are provided to a broad swath of
related developments which could not be covered within our limited scope. While
existing works and reviews generally focus on specific negatively dependent
models (e.g. DPPs), a notable feature of this article is that it addresses
negative dependence as a machine learning methodology as a whole. In this vein,
it covers within its span an array of negatively dependent models and their
applications well beyond DPPs, thereby putting forward a very general and
rather unique perspective.
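As a concrete instance of negative dependence, a DPP with marginal kernel $K$ satisfies $P(A \subseteq S) = \det(K_A)$, which immediately yields negative pairwise correlation. A small NumPy check (the L-ensemble entries are arbitrary):

```python
import numpy as np

# L-ensemble DPP: P(S) ∝ det(L_S); marginal kernel K = L (L + I)^{-1}
L = np.array([[2.0, 0.9],
              [0.9, 1.5]])
K = L @ np.linalg.inv(L + np.eye(2))

p_i = K[0, 0]               # P(item 0 ∈ S)
p_j = K[1, 1]               # P(item 1 ∈ S)
p_ij = np.linalg.det(K)     # P({0,1} ⊆ S) = p_i p_j - K_01²

assert p_ij <= p_i * p_j    # items repel: joint inclusion below independence
```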
|
2502.07286
|
Small Language Model Makes an Effective Long Text Extractor
|
cs.CL cs.AI
|
Named Entity Recognition (NER) is a fundamental problem in natural language
processing (NLP). However, the task of extracting longer entity spans (e.g.,
awards) from extended texts (e.g., homepages) is barely explored. Current NER
methods predominantly fall into two categories: span-based methods and
generation-based methods. Span-based methods require the enumeration of all
possible token-pair spans, followed by classification on each span, resulting
in substantial redundant computations and excessive GPU memory usage. In
contrast, generation-based methods involve prompting or fine-tuning large
language models (LLMs) to adapt to downstream NER tasks. However, these methods
struggle with the accurate generation of longer spans and often incur
significant time costs for effective fine-tuning. To address these challenges,
this paper introduces a lightweight span-based NER method called SeNER, which
incorporates a bidirectional arrow attention mechanism coupled with
LogN-Scaling on the [CLS] token to embed long texts effectively, and employs
a novel bidirectional sliding-window plus-shaped attention (BiSPA) mechanism to
reduce redundant candidate token-pair spans significantly and model
interactions between token-pair spans simultaneously. Extensive experiments
demonstrate that our method achieves state-of-the-art extraction accuracy on
three long NER datasets and is capable of extracting entities from long texts
in a GPU-memory-friendly manner. Code:
https://github.com/THUDM/scholar-profiling/tree/main/sener
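One plausible reading of the "arrow" attention pattern (our assumption, not necessarily the paper's exact design) is a local sliding window plus global attention through the [CLS] position:

```python
import numpy as np

def arrow_attention_mask(n, window):
    """Boolean mask: token i attends within `window` positions of i, every
    token attends to [CLS] (position 0), and [CLS] attends to all tokens."""
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= window   # local band
    mask[:, 0] = True   # all tokens see [CLS]
    mask[0, :] = True   # [CLS] sees all tokens -> "arrow" shape
    return mask

m = arrow_attention_mask(6, 1)
print(int(m.sum()))   # 24 attended pairs vs. 36 for full attention
```

Such a mask keeps attention cost linear in sequence length while the [CLS] row/column preserves a global summary path, which is the memory-saving idea the abstract points at.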
|
2502.07288
|
KPIs 2024 Challenge: Advancing Glomerular Segmentation from Patch- to
Slide-Level
|
cs.CV cs.AI
|
Chronic kidney disease (CKD) is a major global health issue, affecting over
10% of the population and causing significant mortality. While kidney biopsy
remains the gold standard for CKD diagnosis and treatment, the lack of
comprehensive benchmarks for kidney pathology segmentation hinders progress in
the field. To address this, we organized the Kidney Pathology Image
Segmentation (KPIs) Challenge, introducing a dataset that incorporates
preclinical rodent models of CKD with over 10,000 annotated glomeruli from 60+
Periodic Acid Schiff (PAS)-stained whole slide images. The challenge includes
two tasks: patch-level segmentation, and whole slide image segmentation and
detection, evaluated using the Dice Similarity Coefficient (DSC) and F1-score.
By encouraging innovative segmentation methods that adapt to diverse CKD models
and tissue conditions, the KPIs Challenge aims to advance kidney pathology
analysis, establish new benchmarks, and enable precise, large-scale
quantification for disease research and diagnosis.
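The DSC used for evaluation is straightforward to compute on binary masks; a minimal sketch with a toy example:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice Similarity Coefficient = 2|P ∩ G| / (|P| + |G|) on binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

pred = np.zeros((4, 4)); pred[:2] = 1   # 8 predicted pixels
gt   = np.zeros((4, 4)); gt[:3] = 1     # 12 ground-truth pixels, 8 overlap
print(round(dice(pred, gt), 2))         # 2*8 / (8+12) = 0.8
```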
|
2502.07289
|
Learning Inverse Laplacian Pyramid for Progressive Depth Completion
|
cs.CV
|
Depth completion endeavors to reconstruct a dense depth map from sparse depth
measurements, leveraging the information provided by a corresponding color
image. Existing approaches mostly hinge on single-scale propagation strategies
that iteratively ameliorate initial coarse depth estimates through pixel-level
message passing. Despite their commendable outcomes, these techniques are
frequently hampered by computational inefficiencies and a limited grasp of
scene context. To circumvent these challenges, we introduce LP-Net, an
innovative framework that implements a multi-scale, progressive prediction
paradigm based on Laplacian Pyramid decomposition. Diverging from
propagation-based approaches, LP-Net initiates with a rudimentary,
low-resolution depth prediction to encapsulate the global scene context,
subsequently refining this through successive upsampling and the reinstatement
of high-frequency details at incremental scales. We have developed two novel
modules to bolster this strategy: 1) the Multi-path Feature Pyramid module,
which segregates feature maps into discrete pathways, employing multi-scale
transformations to amalgamate comprehensive spatial information, and 2) the
Selective Depth Filtering module, which dynamically learns to apply both
smoothness and sharpness filters to judiciously mitigate noise while
accentuating intricate details. By integrating these advancements, LP-Net not
only secures state-of-the-art (SOTA) performance across both outdoor and indoor
benchmarks such as KITTI, NYUv2, and TOFDC, but also demonstrates superior
computational efficiency. At the time of submission, LP-Net ranks 1st among all
peer-reviewed methods on the official KITTI leaderboard.
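The Laplacian Pyramid decomposition LP-Net builds on can be sketched with average-pool downsampling and nearest-neighbor upsampling (simplifications of the usual Gaussian filtering); by construction, the coarse-to-fine reconstruction that starts from the low-resolution base and reinstates residuals is exact:

```python
import numpy as np

def down(x):   # 2x2 average pooling (stand-in for blur + subsample)
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):     # nearest-neighbor upsampling
    return x.repeat(2, axis=0).repeat(2, axis=1)

def build_pyramid(x, levels):
    pyr = []
    for _ in range(levels):
        coarse = down(x)
        pyr.append(x - up(coarse))   # high-frequency residual at this scale
        x = coarse
    pyr.append(x)                    # coarsest level: global scene context
    return pyr

def reconstruct(pyr):
    x = pyr[-1]                          # start from the low-res prediction
    for residual in reversed(pyr[:-1]):  # upsample, reinstate the details
        x = up(x) + residual
    return x

img = np.random.default_rng(0).random((8, 8))
assert np.allclose(reconstruct(build_pyramid(img, 2)), img)
```

LP-Net predicts the coarse base and the residuals instead of storing them, but the progressive upsample-and-add structure is the same.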
|
2502.07293
|
Global Universal Scaling and Ultra-Small Parameterization in Machine
Learning Interatomic Potentials with Super-Linearity
|
cond-mat.mtrl-sci cs.LG
|
Using machine learning (ML) to construct interatomic interactions and thus
potential energy surface (PES) has become a common strategy for materials
design and simulations. However, current machine learning interatomic
potential (MLIP) models provide no relevant physical constraints, and thus
may suffer from an intrinsic out-of-domain difficulty which underlies the
challenges of model generalizability and physical scalability. Here, by
incorporating a physics-informed Universal-Scaling law and a
nonlinearity-embedded interaction function, we develop a Super-linear MLIP with
both Ultra-Small parameterization and greatly expanded expressive capability,
named SUS2-MLIP. Due to the global scaling rooted in the universal equation of
state (UEOS), SUS2-MLIP not only has significantly reduced parameters by
decoupling the element space from the coordinate space, but also naturally
overcomes the out-of-domain difficulty and endows the potentials with inherent
generalizability and scalability even with a relatively small training dataset.
The nonlinearity-embedding transformation of the interaction function expands
the expressive capability and makes the potentials
super-linear. SUS2-MLIP outperforms state-of-the-art MLIP models with
its exceptional computational efficiency especially for multiple-element
materials and physical scalability in property prediction. This work not only
presents a highly-efficient universal MLIP model but also sheds light on
incorporating physical constraints into artificial-intelligence-aided materials
simulation.
|
2502.07295
|
Treatment Effect Estimation for Exponential Family Outcomes using Neural
Networks with Targeted Regularization
|
cs.LG
|
Neural Networks (NNs) have become a natural choice for treatment effect
estimation due to their strong approximation capabilities. Nevertheless, how to
design NN-based estimators with desirable properties, such as low bias and
double robustness, still remains a significant challenge. A common approach to
address this is targeted regularization, which modifies the objective function
of NNs. However, existing works on targeted regularization are limited to
Gaussian-distributed outcomes, significantly restricting their applicability in
real-world scenarios. In this work, we aim to fill this gap by extending
the framework to the broader class of exponential family outcomes. Specifically,
we first derive the von Mises expansion of the Average Dose function of
Canonical Functions (ADCF), which shows how to construct a doubly robust
estimator with good properties. Based on this, we develop an NN-based estimator for ADCF
by generalizing functional targeted regularization to exponential families, and
provide the corresponding theoretical convergence rate. Extensive experimental
results demonstrate the effectiveness of our proposed model.
|
2502.07297
|
Generation of Drug-Induced Cardiac Reactions towards Virtual Clinical
Trials
|
cs.LG q-bio.QM
|
Clinical trials are pivotal in cardiac drug development, yet they often fail
due to inadequate efficacy and unexpected safety issues, leading to significant
financial losses. Using in-silico trials to replace a part of physical clinical
trials, e.g., leveraging advanced generative models to generate drug-influenced
electrocardiograms (ECGs), seems an effective method to reduce financial risk
and potential harm to trial participants. While existing generative models have
demonstrated progress in ECG generation, they fall short in modeling drug
reactions due to limited fidelity and inability to capture individualized drug
response patterns. In this paper, we propose a Drug-Aware Diffusion Model
(DADM), which could simulate individualized drug reactions while ensuring
fidelity. To ensure fidelity, we construct a set of ordinary differential
equations to provide external physical knowledge (EPK) of the realistic ECG
morphology. The EPK is used to adaptively constrain the morphology of the
generated ECGs through a dynamic cross-attention (DCA) mechanism. Furthermore,
we propose an extension of ControlNet to incorporate demographic and drug data,
simulating individual drug reactions. We compare DADM with the other eight
state-of-the-art ECG generative models on two real-world databases covering 8
types of drug regimens. The results demonstrate that DADM can more accurately
simulate drug-induced changes in ECGs, improving the accuracy by at least 5.79%
and recall by 8%.
|
2502.07299
|
Life-Code: Central Dogma Modeling with Multi-Omics Sequence Unification
|
cs.LG cs.AI cs.CL q-bio.GN
|
The interactions between DNA, RNA, and proteins are fundamental to biological
processes, as illustrated by the central dogma of molecular biology. While
modern biological pre-trained models have achieved great success in analyzing
these macromolecules individually, their interconnected nature remains
under-explored. In this paper, we follow the guidance of the central dogma to
redesign both the data and model pipeline and offer a comprehensive framework,
Life-Code, that spans different biological functions. As for data flow, we
propose a unified pipeline to integrate multi-omics data by
reverse-transcribing RNA and reverse-translating amino acids into
nucleotide-based sequences. As for the model, we design a codon tokenizer and a
hybrid long-sequence architecture to encode the interactions of both coding and
non-coding regions with masked modeling pre-training. To model the translation
and folding process with coding sequences, Life-Code learns protein structures
of the corresponding amino acids by knowledge distillation from off-the-shelf
protein language models. Such designs enable Life-Code to capture complex
interactions within genetic sequences, providing a more comprehensive
understanding of multi-omics with the central dogma. Extensive experiments show
that Life-Code achieves state-of-the-art performance on various tasks across
three omics, highlighting its potential for advancing multi-omics analysis and
interpretation.
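A codon tokenizer in its simplest form maps each nucleotide triplet to one of 64 tokens; a hedged sketch (Life-Code's actual vocabulary, special tokens, and handling of non-coding regions may differ):

```python
from itertools import product

# Hypothetical codon tokenizer: one token per nucleotide triplet
CODONS = ["".join(c) for c in product("ACGT", repeat=3)]   # 64 codons
VOCAB = {codon: i for i, codon in enumerate(CODONS)}

def tokenize(seq):
    seq = seq[: len(seq) - len(seq) % 3]          # drop trailing partial codon
    return [VOCAB[seq[i:i+3]] for i in range(0, len(seq), 3)]

print(tokenize("ATGGCC"))   # two tokens: ATG, GCC
```

Tokenizing at the codon level aligns the unit of modeling with the unit of translation, which is what lets one vocabulary cover DNA, reverse-transcribed RNA, and reverse-translated protein sequences.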
|
2502.07302
|
CASC-AI: Consensus-aware Self-corrective AI Agents for Noise Cell
Segmentation
|
cs.CV
|
Multi-class cell segmentation in high-resolution gigapixel whole slide images
(WSI) is crucial for various clinical applications. However, training such
models typically requires labor-intensive, pixel-wise annotations by domain
experts. Recent efforts have democratized this process by involving lay
annotators without medical expertise. However, conventional non-agent-based
approaches struggle to handle annotation noise adaptively, as they lack
mechanisms to mitigate false positives (FP) and false negatives (FN) at both
the image-feature and pixel levels. In this paper, we propose a consensus-aware
self-corrective AI agent that leverages the Consensus Matrix to guide its
learning process. The Consensus Matrix defines regions where both the AI and
annotators agree on cell and non-cell annotations, which are prioritized with
stronger supervision. Conversely, areas of disagreement are adaptively weighted
based on their feature similarity to high-confidence agreement regions, with
more similar regions receiving greater attention. Additionally, contrastive
learning is employed to separate features of noisy regions from those of
reliable agreement regions by maximizing their dissimilarity. This paradigm
enables the AI to iteratively refine noisy labels, enhancing its robustness.
Validated on one real-world lay-annotated cell dataset and two simulated noisy
datasets, our method demonstrates improved segmentation performance,
effectively correcting FP and FN errors and showcasing its potential for
training robust models on noisy datasets. The official implementation and cell
annotations are publicly available at https://github.com/ddrrnn123/CASC-AI.
|
2502.07303
|
Flow Matching for Collaborative Filtering
|
cs.IR
|
Generative models have shown great promise in collaborative filtering by
capturing the underlying distribution of user interests and preferences.
However, existing approaches struggle with inaccurate posterior approximations
and misalignment with the discrete nature of recommendation data, limiting
their expressiveness and real-world performance. To address these limitations,
we propose FlowCF, a novel flow-based recommendation system leveraging flow
matching for collaborative filtering. We tailor flow matching to the unique
challenges in recommendation through two key innovations: (1) a behavior-guided
prior that aligns with user behavior patterns to handle the sparse and
heterogeneous user-item interactions, and (2) a discrete flow framework to
preserve the binary nature of implicit feedback while maintaining the benefits
of flow matching, such as stable training and efficient inference. Extensive
experiments demonstrate that FlowCF achieves state-of-the-art recommendation
accuracy across various datasets with the fastest inference speed, making it a
compelling approach for real-world recommender systems.
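The continuous flow-matching recipe that FlowCF adapts reduces to regressing a velocity field toward straight-line paths between a prior sample and the data; a minimal NumPy sketch of the training target (FlowCF's behavior-guided prior and discrete flow are more elaborate than this):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = (rng.random((4, 6)) < 0.3).astype(float)  # binary implicit-feedback rows
x0 = rng.random((4, 6))                        # prior sample (behavior-guided in FlowCF)

t = rng.random((4, 1))            # random time in [0, 1) per example
x_t = (1 - t) * x0 + t * x1       # point on the linear interpolation path
target = x1 - x0                  # conditional flow-matching velocity target

# A network v(x_t, t) is trained with MSE toward `target`; at inference the
# ODE dx/dt = v is integrated from the prior. Following the exact velocity
# for unit time lands on the data:
assert np.allclose(x0 + 1.0 * target, x1)
```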
|
2502.07306
|
TRAVEL: Training-Free Retrieval and Alignment for Vision-and-Language
Navigation
|
cs.CV cs.AI cs.CL cs.LG cs.RO
|
In this work, we propose a modular approach for the Vision-Language
Navigation (VLN) task by decomposing the problem into four sub-modules that use
state-of-the-art Large Language Models (LLMs) and Vision-Language Models (VLMs)
in a zero-shot setting. Given a navigation instruction in natural language, we
first prompt an LLM to extract the landmarks and the order in which they are
visited. Assuming a known model of the environment, we retrieve the top-k
locations of the last landmark and generate $k$ path hypotheses from the
starting location to the last landmark using the shortest path algorithm on the
topological map of the environment. Each path hypothesis is represented by a
sequence of panoramas. We then use dynamic programming to compute the alignment
score between the sequence of panoramas and the sequence of landmark names,
using match scores obtained from a VLM. Finally, we compute the nDTW metric
on the hypothesis that yields the highest alignment score to evaluate its
path fidelity. We demonstrate superior performance compared to other approaches
that use joint semantic maps like VLMaps \cite{vlmaps} on the complex
R2R-Habitat \cite{r2r} instruction dataset and quantify in detail the effect of
visual grounding on navigation performance.
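The dynamic-programming alignment can be sketched as a monotone matching of ordered landmarks to panoramas; the recurrence below is one standard formulation (the paper's exact recurrence may differ, and the scores are made up):

```python
def alignment_score(S):
    """Max total score matching each landmark (column) to one panorama (row),
    preserving order. S[p][l] = VLM match score of panorama p vs. landmark l."""
    P, L = len(S), len(S[0])
    NEG = float("-inf")
    best = [[NEG] * (L + 1) for _ in range(P + 1)]
    for p in range(P + 1):
        best[p][0] = 0.0                         # no landmarks left: score 0
    for p in range(1, P + 1):
        for l in range(1, L + 1):
            skip = best[p - 1][l]                        # panorama p unused
            take = best[p - 1][l - 1] + S[p - 1][l - 1]  # match p to landmark l
            best[p][l] = max(skip, take)
    return best[P][L]

# toy: 4 panoramas, 2 ordered landmarks; best is S[0][0] + S[2][1]
S = [[0.9, 0.1],
     [0.2, 0.3],
     [0.1, 0.8],
     [0.4, 0.2]]
print(round(alignment_score(S), 2))   # 1.7
```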
|
2502.07307
|
CreAgent: Towards Long-Term Evaluation of Recommender System under
Platform-Creator Information Asymmetry
|
cs.IR
|
Ensuring the long-term sustainability of recommender systems (RS) emerges as
a crucial issue. Traditional offline evaluation methods for RS typically focus
on immediate user feedback, such as clicks, but they often neglect the
long-term impact of content creators. On real-world content platforms, creators
can strategically produce and upload new items based on user feedback and
preference trends. While previous studies have attempted to model creator
behavior, they often overlook the role of information asymmetry. This asymmetry
arises because creators primarily have access to feedback on the items they
produce, while platforms possess data on the entire spectrum of user feedback.
Current RS simulators, however, fail to account for this asymmetry, leading to
inaccurate long-term evaluations. To address this gap, we propose CreAgent, a
Large Language Model (LLM)-empowered creator simulation agent. By incorporating
game theory's belief mechanism and the fast-and-slow thinking framework,
CreAgent effectively simulates creator behavior under conditions of information
asymmetry. Additionally, we enhance CreAgent's simulation ability by
fine-tuning it using Proximal Policy Optimization (PPO). Our credibility
validation experiments show that CreAgent aligns well with the behavior of
real-world creators interacting with platforms, thus improving the reliability of
long-term RS evaluations. Moreover, through the simulation of RS involving
CreAgents, we can explore how fairness- and diversity-aware RS algorithms
contribute to better long-term performance for various stakeholders. CreAgent
and the simulation platform are publicly available at
https://github.com/shawnye2000/CreAgent.
|
2502.07308
|
Explicit Codes approaching Generalized Singleton Bound using Expanders
|
cs.IT cs.CC math.IT
|
We construct a new family of explicit codes that are list decodable to
capacity and achieve an optimal list size of $O(\frac{1}{\epsilon})$. In
contrast to existing explicit constructions of codes achieving list decoding
capacity, our arguments do not rely on algebraic structure but utilize simple
combinatorial properties of expander graphs.
Our construction is based on a celebrated distance amplification procedure
due to Alon, Edmonds, and Luby [FOCS'95], which transforms any high-rate code
into one with near-optimal rate-distance tradeoff. We generalize it to show
that the same procedure can be used to transform any high-rate code into one
that achieves list decoding capacity. Our proof can be interpreted as a
"local-to-global" phenomenon for (a slight strengthening of) the generalized
Singleton bound. Using this construction, for every $R, \epsilon \in (0,1)$ and
$k \in \mathbb{N}^+$, we obtain an \emph{explicit} family of codes $\mathcal{C}
\subseteq \Sigma^n$, with rate $R$ such that,
- They achieve the $\epsilon$-relaxed generalized Singleton bound: for any $g
\in \Sigma^n$ and any list $\mathcal{H}$ of at most $k$ codewords, we have, \[
\underset{h \in \mathcal{H}}{\mathbb{E}} [\Delta(g,h)] ~\geq~
\frac{|\mathcal{H}|-1}{|\mathcal{H}|} \cdot (1 - R - \epsilon). \]
- The alphabet size is a constant depending only on $\epsilon$ and $k$.
- They can be list decoded up to radius $\frac{k-1}{k}(1-R-\epsilon)$, in
time $n^{O_{k,\epsilon}(1)}$.
As a corollary of our result, we also obtain the first explicit construction
of LDPC codes achieving list decoding capacity, and in fact arbitrarily close
to the generalized Singleton bound.
|
2502.07309
|
Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous
Driving
|
cs.CV
|
Understanding world dynamics is crucial for planning in autonomous driving.
Recent methods attempt to achieve this by learning a 3D occupancy world model
that forecasts future surrounding scenes based on current observation. However,
3D occupancy labels are still required to produce promising results.
Considering the high annotation cost for 3D outdoor scenes, we propose a
semi-supervised vision-centric 3D occupancy world model, PreWorld, to leverage
the potential of 2D labels through a novel two-stage training paradigm: the
self-supervised pre-training stage and the fully-supervised fine-tuning stage.
Specifically, during the pre-training stage, we utilize an attribute projection
head to generate different attribute fields of a scene (e.g., RGB, density,
semantic), thus enabling temporal supervision from 2D labels via volume
rendering techniques. Furthermore, we introduce a simple yet effective
state-conditioned forecasting module to recursively forecast future occupancy
and ego trajectory in a direct manner. Extensive experiments on the nuScenes
dataset validate the effectiveness and scalability of our method, and
demonstrate that PreWorld achieves competitive performance across 3D occupancy
prediction, 4D occupancy forecasting and motion planning tasks.
|
2502.07312
|
OpenGrok: Enhancing SNS Data Processing with Distilled Knowledge and
Mask-like Mechanisms
|
cs.LG cs.AI
|
This report details Lumen Labs' novel approach to processing Social
Networking Service (SNS) data. We leverage knowledge distillation, specifically
a simple distillation method inspired by DeepSeek-R1's CoT acquisition,
combined with prompt hacking, to extract valuable training data from the Grok
model. This data is then used to fine-tune a Phi-3-mini model, augmented with a
mask-like mechanism specifically designed for handling the nuances of SNS data.
Our method demonstrates state-of-the-art (SOTA) performance on several SNS data
processing tasks, outperforming existing models like Grok, Phi-3, and GPT-4. We
provide a comprehensive analysis of our approach, including mathematical
formulations, engineering details, ablation studies, and comparative
evaluations.
|
2502.07315
|
Prompt-Based Document Modifications In Ranking Competitions
|
cs.IR cs.GT
|
We study prompting-based approaches with Large Language Models (LLMs) for
modifying documents so as to promote their ranking in a competitive search
setting. Our methods are inspired by prior work on leveraging LLMs as rankers.
We evaluate our approach by deploying it as a bot in previous ranking
competitions and in competitions we organized. Our findings demonstrate that
our approach effectively improves document ranking while preserving high levels
of faithfulness to the original content and maintaining overall document
quality.
|
2502.07316
|
CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction
|
cs.CL cs.AI
|
Reasoning is a fundamental capability of Large Language Models. While prior
research predominantly focuses on enhancing narrow skills like math or code
generation, improving performance on many other reasoning tasks remains
challenging due to sparse and fragmented training data. To address this issue,
we propose CodeI/O, a novel approach that systematically condenses diverse
reasoning patterns inherently embedded in contextually-grounded codes, through
transforming the original code into a code input-output prediction format. By
training models to predict inputs/outputs given code and test cases entirely in
natural language as Chain-of-Thought (CoT) rationales, we expose them to
universal reasoning primitives -- like logic flow planning, state-space
searching, decision tree traversal, and modular decomposition -- while
decoupling structured reasoning from code-specific syntax and preserving
procedural rigor. Experimental results demonstrate CodeI/O leads to consistent
improvements across symbolic, scientific, logic, math & numerical, and
commonsense reasoning tasks. By matching the existing ground-truth outputs or
re-executing the code with predicted inputs, we can verify each prediction and
further enhance the CoTs through multi-turn revision, resulting in CodeI/O++
and achieving higher performance. Our data and models are available at
https://github.com/hkust-nlp/CodeIO.
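The input-output prediction format can be illustrated on a toy program: from one function and one test case we derive both a predict-the-output example and a verifiable predict-an-input example (the prompt wording is ours, not the paper's template):

```python
def solve(nums):
    """Toy program carrying a reasoning pattern: second-largest distinct value."""
    return sorted(set(nums))[-2]

case_in = [3, 1, 4, 4, 2]
case_out = solve(case_in)        # ground truth obtained by executing the code

predict_output = (f"Given solve() and input {case_in}, "
                  f"reason step by step and predict the output.")
predict_input = (f"Given solve(), reason step by step and propose an input "
                 f"whose output is {case_out}.")

# Predicted inputs are verifiable by simply re-executing the code, which is
# what enables the multi-turn revision loop behind CodeI/O++:
def verify_input(guess):
    return solve(guess) == case_out

print(case_out, verify_input([3, 9]))   # 3 True
```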
|
2502.07318
|
Beamfocusing Capabilities of a Uniform Linear Array in the Holographic
Regime
|
cs.IT eess.SP math.IT
|
The use of multiantenna technologies in the near field offers the possibility
of focusing the energy in spatial regions rather than just in angle. The
objective of this paper is to provide a formal framework that allows to
establish the region in space where this effect can take place and how
efficient this focusing can be, assuming that the transmit architecture is a
uniform linear array (ULA). A dyadic Green's channel model is adopted, and the
amplitude differences between the receiver and each transmit antenna are
effectively incorporated in the model. By considering a second-order expansion
of the SNR around the intended receiver, a formal criterion is derived in order
to establish whether beamfocusing is feasible or not. An analytic description
is provided that determines the shape and position of the asymptotic ellipsoid
where a minimum SNR is achieved. Further insights are provided by considering
the holographic regime, whereby the number of elements of the ULA increases
without bound while the distance between adjacent elements converges to zero.
This asymptotic framework allows us to simplify the analytical form of the
beamfocusing feasibility region, which in turn provides some further insights
into the shape of the coverage regions depending on the position of the
intended receiver. In particular, it is shown that beamfocusing is only
possible if the size of the ULA is at least $4.4\lambda$ where $\lambda$ is the
transmission wavelength. Furthermore, a closed form analytical expression is
provided that asymptotically determines the maximum distance where beamfocusing
is feasible as a function of the elevation angle. In particular, beamfocusing
is only feasible when the receiver is located between a minimum and a maximum
distance from the array, where these upper and lower distance limits
effectively depend on the angle of elevation.
|
2502.07319
|
Learnable Residual-based Latent Denoising in Semantic Communication
|
cs.LG cs.IT math.IT
|
A latent denoising semantic communication (SemCom) framework is proposed for
robust image transmission over noisy channels. By incorporating a learnable
latent denoiser into the receiver, the received signals are preprocessed to
effectively remove the channel noise and recover the semantic information,
thereby enhancing the quality of the decoded images. Specifically, a latent
denoising mapping is established by an iterative residual learning approach to
improve the denoising efficiency while ensuring stable performance. Moreover,
channel signal-to-noise ratio (SNR) is utilized to estimate and predict the
latent similarity score (SS) for conditional denoising, where the number of
denoising steps is adapted based on the predicted SS sequence, further reducing
the communication latency. Finally, simulations demonstrate that the proposed
framework can effectively and efficiently remove the channel noise at various
levels and reconstruct visually appealing images.
|
2502.07322
|
MEMIT-Merge: Addressing MEMIT's Key-Value Conflicts in Same-Subject
Batch Editing for LLMs
|
cs.CL cs.LG
|
As large language models continue to scale up, knowledge editing techniques
that modify models' internal knowledge without full retraining have gained
significant attention. MEMIT, a prominent batch editing algorithm, stands out
for its capability to perform mass knowledge modifications. However, we uncover
a critical limitation: MEMIT's editing efficacy significantly deteriorates
when processing batches containing multiple edits that share the same subject.
Our analysis reveals that the root cause lies in MEMIT's key-value modeling
framework: when multiple facts with the same subject in a batch are modeled
through MEMIT's key-value mechanism, identical keys (derived from the shared
subject) are forced to represent different values (corresponding to different
knowledge), resulting in update conflicts during editing. Addressing this
issue, we propose MEMIT-Merge, an enhanced approach that merges value
computation processes for facts sharing the same subject, effectively resolving
the performance degradation in same-subject batch editing scenarios.
Experimental results demonstrate that when MEMIT's edit success rate drops to
around 50% at larger batch sizes, MEMIT-Merge maintains a success rate
exceeding 90%, showcasing remarkable robustness to subject entity collisions.
|
2502.07323
|
Semantic to Structure: Learning Structural Representations for
Infringement Detection
|
cs.CV
|
Structural information in images is crucial for aesthetic assessment, and it
is widely recognized in the artistic field that imitating the structure of
other works significantly infringes on creators' rights. The advancement of
diffusion models has led to AI-generated content imitating artists' structural
creations, yet effective detection methods are still lacking. In this paper, we
define this phenomenon as "structural infringement" and propose a corresponding
detection method. Additionally, we develop quantitative metrics and create
manually annotated datasets for evaluation: the SIA dataset of synthesized
data, and the SIR dataset of real data. Due to the current lack of datasets for
structural infringement detection, we propose a new data synthesis strategy
based on diffusion models and LLMs, successfully training a structural
infringement detection model. Experimental results show that our method can
successfully detect structural infringements and achieve notable improvements
on annotated test sets.
|
2502.07325
|
Long-term simulation of physical and mechanical behaviors using
curriculum-transfer-learning based physics-informed neural networks
|
cs.LG cs.NA math.NA
|
This paper proposes a Curriculum-Transfer-Learning based physics-informed
neural network (CTL-PINN) for long-term simulation of physical and mechanical
behaviors. The main innovation of CTL-PINN lies in decomposing long-term
problems into a sequence of short-term subproblems. Initially, the standard
PINN is employed to solve the first subproblem. As the simulation progresses,
subsequent time-domain problems are addressed using a curriculum learning
approach that integrates information from previous steps. Furthermore, transfer
learning techniques are incorporated, allowing the model to effectively utilize
prior training data and solve sequential time domain transfer problems.
CTL-PINN combines the strengths of curriculum learning and transfer learning,
overcoming the limitations of standard PINNs, such as local optimization
issues, and addressing the inaccuracies over extended time domains encountered
in CL-PINN and the low computational efficiency of TL-PINN. The efficacy and
robustness of CTL-PINN are demonstrated through applications to nonlinear wave
propagation, Kirchhoff plate dynamic response, and the hydrodynamic model of
the Three Gorges Reservoir Area, showcasing its superior capability in
addressing long-term computational challenges.
|
2502.07326
|
PICTS: A Novel Deep Reinforcement Learning Approach for Dynamic P-I
Control in Scanning Probe Microscopy
|
cond-mat.mtrl-sci cs.LG physics.app-ph
|
We have developed a Parallel Integrated Control and Training System (PICTS),
leveraging deep reinforcement learning to dynamically adjust control
strategies in real time for scanning probe microscopy techniques.
|
2502.07327
|
Generative Ghost: Investigating Ranking Bias Hidden in AI-Generated
Videos
|
cs.IR cs.CV
|
With the rapid development of AI-generated content (AIGC), the creation of
high-quality AI-generated videos has become faster and easier, resulting in the
Internet being flooded with all kinds of video content. However, the impact of
these videos on the content ecosystem remains largely unexplored. Video
information retrieval remains a fundamental approach for accessing video
content. Building on the observation that retrieval models often favor
AI-generated content in ad-hoc and image retrieval tasks, we investigate
whether similar biases emerge in the context of challenging video retrieval,
where temporal and visual factors may further influence model behavior. To
explore this, we first construct a comprehensive benchmark dataset containing
both real and AI-generated videos, along with a set of fair and rigorous
metrics to assess bias. This benchmark consists of 13,000 videos generated by
two state-of-the-art open-source video generation models. We meticulously
design a suite of rigorous metrics to accurately measure this preference,
accounting for potential biases arising from the limited frame rate and
suboptimal quality of AIGC videos. We then apply three off-the-shelf video
retrieval models to perform retrieval tasks on this hybrid dataset. Our
findings reveal a clear preference for AI-generated videos in retrieval.
Further investigation shows that incorporating AI-generated videos into the
training set of retrieval models exacerbates this bias. Unlike the preference
observed in image modalities, we find that video retrieval bias arises from
both unseen visual and temporal information, making the root causes of video
bias a complex interplay of these two factors. To mitigate this bias, we
fine-tune the retrieval models using a contrastive learning approach. The
results of this study highlight the potential implications of AI-generated
videos on retrieval systems.
|
2502.07328
|
Music for All: Exploring Multicultural Representations in Music
Generation Models
|
cs.SD cs.AI cs.CL cs.LG cs.MM
|
The advent of Music-Language Models has greatly enhanced the automatic music
generation capability of AI systems, but they are also limited in their
coverage of the musical genres and cultures of the world. We present a study of
the datasets and research papers for music generation and quantify the bias and
under-representation of genres. We find that only 5.7% of the total hours of
existing music datasets come from non-Western genres, which naturally leads to
disparate performance of the models across genres. We then investigate the
efficacy of Parameter-Efficient Fine-Tuning (PEFT) techniques in mitigating
this bias. Our experiments with two popular models -- MusicGen and Mustango,
for two underrepresented non-Western music traditions -- Hindustani Classical
and Turkish Makam music, highlight the promises as well as the non-triviality
of cross-genre adaptation of music through small datasets, implying the need
for more equitable baseline music-language models that are designed for
cross-cultural transfer learning.
|
2502.07331
|
ERANet: Edge Replacement Augmentation for Semi-Supervised Meniscus
Segmentation with Prototype Consistency Alignment and Conditional
Self-Training
|
cs.CV
|
Manual segmentation is labor-intensive, and automatic segmentation remains
challenging due to the inherent variability in meniscal morphology, partial
volume effects, and low contrast between the meniscus and surrounding tissues.
To address these challenges, we propose ERANet, an innovative semi-supervised
framework for meniscus segmentation that effectively leverages both labeled and
unlabeled images through advanced augmentation and learning strategies. ERANet
integrates three key components: edge replacement augmentation (ERA), prototype
consistency alignment (PCA), and a conditional self-training (CST) strategy
within a mean teacher architecture. ERA introduces anatomically relevant
perturbations by simulating meniscal variations, ensuring that augmentations
align with the structural context. PCA enhances segmentation performance by
aligning intra-class features and promoting compact, discriminative feature
representations, particularly in scenarios with limited labeled data. CST
improves segmentation robustness by iteratively refining pseudo-labels and
mitigating the impact of label noise during training. Together, these
innovations establish ERANet as a robust and scalable solution for meniscus
segmentation, effectively addressing key barriers to practical implementation.
We validated ERANet comprehensively on 3D Double Echo Steady State (DESS) and
3D Fast/Turbo Spin Echo (FSE/TSE) MRI sequences. The results demonstrate the
superior performance of ERANet compared to state-of-the-art methods. The
proposed framework achieves reliable and accurate segmentation of meniscus
structures, even when trained on minimal labeled data. Extensive ablation
studies further highlight the synergistic contributions of ERA, PCA, and CST,
solidifying ERANet as a transformative solution for semi-supervised meniscus
segmentation in medical imaging.
|
2502.07332
|
The Combined Problem of Online Task Assignment and Lifelong Path Finding
in Logistics Warehouses: A Case Study
|
cs.MA cs.RO
|
We study the combined problem of online task assignment and lifelong path
finding, which is crucial for the logistics industries. However, most
literature either (1) focuses on lifelong path finding assuming a given task
assigner, or (2) studies the offline version of this problem where tasks are
known in advance. We argue that, to maximize the system throughput, the online
version that integrates these two components should be tackled directly. To
this end, we introduce a formal framework of the combined problem and its
solution concept. Then, we design a rule-based lifelong planner under a
practical robot model that works well even in environments with severe local
congestion. Upon that, we automate the search for the task assigner with
respect to the underlying path planner. Simulation experiments conducted in
warehouse scenarios at \textit{Meituan}, one of the largest shopping platforms
in China, demonstrate that (a)~\textit{in terms of time efficiency}, our system
requires only 83.77\% of the execution time needed for the currently deployed
system at Meituan, outperforming other SOTA algorithms by 8.09\%;
(b)~\textit{in terms of economic efficiency}, ours can achieve the same
throughput with only 60\% of the agents currently in use.
|
2502.07336
|
Frequency-selective Dynamic Scattering Arrays for Over-the-air EM
Processing
|
eess.SP cs.IT math.IT
|
In this paper, we investigate frequency-selective dynamic scattering array
(DSA), a versatile antenna structure capable of performing joint wave-based
computing and radiation by transitioning signal processing tasks from the
digital domain to the electromagnetic (EM) domain. The numerical results
demonstrate the potential of DSAs to produce space-frequency superdirective
responses with minimal usage of radiofrequency (RF) chains, making it
particularly attractive for future holographic multiple-input multiple-output
(MIMO) systems.
|
2502.07337
|
Neural Flow Samplers with Shortcut Models
|
cs.LG
|
Sampling from unnormalized densities is a fundamental task across various
domains. Flow-based samplers generate samples by learning a velocity field that
satisfies the continuity equation, but this requires estimating the intractable
time derivative of the partition function. While importance sampling provides
an approximation, it suffers from high variance. To mitigate this, we introduce
a velocity-driven Sequential Monte Carlo method combined with control variates
to reduce variance. Additionally, we incorporate a shortcut model to improve
efficiency by minimizing the number of sampling steps. Empirical results on
both synthetic datasets and $n$-body system targets validate the effectiveness
of our approach.
|
2502.07340
|
Aligning Large Language Models to Follow Instructions and Hallucinate
Less via Effective Data Filtering
|
cs.CL cs.AI
|
Training LLMs on data containing unfamiliar knowledge during the instruction
tuning stage can encourage hallucinations. To address this challenge, we
introduce NOVA, a novel framework designed to identify high-quality data that
aligns well with the LLM's learned knowledge to reduce hallucinations. NOVA
includes Internal Consistency Probing (ICP) and Semantic Equivalence
Identification (SEI) to measure how familiar the LLM is with instruction data.
Specifically, ICP evaluates the LLM's understanding of the given instruction by
calculating the tailored consistency among multiple self-generated responses.
SEI further assesses the familiarity of the LLM with the target response by
comparing it to the generated responses, using the proposed semantic clustering
and well-designed voting strategy. Finally, to ensure the quality of selected
samples, we introduce an expert-aligned reward model, considering
characteristics beyond just familiarity. By considering data quality and
avoiding unfamiliar data, we can utilize the selected data to effectively align
LLMs to follow instructions and hallucinate less.
|
2502.07343
|
DEG: Efficient Hybrid Vector Search Using the Dynamic Edge Navigation
Graph
|
cs.DB
|
Bimodal data, such as image-text pairs, has become increasingly prevalent in
the digital era. The Hybrid Vector Query (HVQ) is an effective approach for
querying such data and has recently garnered considerable attention from
researchers. It calculates similarity scores for objects represented by two
vectors using a weighted sum of each individual vector's similarity, with a
query-specific parameter $\alpha$ to determine the weight. Existing methods for
HVQ typically construct Approximate Nearest Neighbors Search (ANNS) indexes
with a fixed $\alpha$ value. This leads to significant performance degradation
when the query's $\alpha$ dynamically changes based on the different scenarios
and needs.
In this study, we introduce the Dynamic Edge Navigation Graph (DEG), a
graph-based ANNS index that maintains efficiency and accuracy with changing
$\alpha$ values. It includes three novel components: (1) a greedy Pareto
frontier search algorithm to compute a candidate neighbor set for each node,
which comprises the node's approximate nearest neighbors for all possible
$\alpha$ values; (2) a dynamic edge pruning strategy to determine the final
edges from the candidate set and assign each edge an active range. This active
range enables the dynamic use of the Relative Neighborhood Graph's pruning
strategy based on the query's $\alpha$ values, skipping redundant edges at
query time and achieving a better accuracy-efficiency trade-off; and (3) an
edge seed method that accelerates the querying process. Extensive experiments
on real-world datasets show that DEG demonstrates superior performance compared
to existing methods under varying $\alpha$ values.
|
2502.07344
|
Integrating Physics and Data-Driven Approaches: An Explainable and
Uncertainty-Aware Hybrid Model for Wind Turbine Power Prediction
|
cs.LG cs.AI cs.CE
|
The rapid growth of the wind energy sector underscores the urgent need to
optimize turbine operations and ensure effective maintenance through early
fault detection systems. While traditional empirical and physics-based models
offer approximate predictions of power generation based on wind speed, they
often fail to capture the complex, non-linear relationships between other input
variables and the resulting power output. Data-driven machine learning methods
present a promising avenue for improving wind turbine modeling by leveraging
large datasets, enhancing prediction accuracy but often at the cost of
interpretability. In this study, we propose a hybrid semi-parametric model that
combines the strengths of both approaches, applied to a dataset from a wind
farm with four turbines. The model integrates a physics-inspired submodel,
providing a reasonable approximation of power generation, with a non-parametric
submodel that predicts the residuals. This non-parametric submodel is trained
on a broader range of variables to account for phenomena not captured by the
physics-based component. The hybrid model achieves a 37% improvement in
prediction accuracy over the physics-based model. To enhance interpretability,
SHAP values are used to analyze the influence of input features on the residual
submodel's output. Additionally, prediction uncertainties are quantified using
a conformalized quantile regression method. The combination of these
techniques, alongside the physics grounding of the parametric submodel,
provides a flexible, accurate, and reliable framework. Ultimately, this study
opens the door for evaluating the impact of unmodeled variables on wind turbine
power generation, offering a basis for potential optimization.
|
2502.07346
|
BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large
Language Models
|
cs.CL
|
Previous multilingual benchmarks focus primarily on simple understanding
tasks, but for large language models (LLMs), we emphasize proficiency in
instruction following, reasoning, long-context understanding, code generation,
and so on. However, measuring these advanced capabilities across languages is
underexplored. To address the disparity, we introduce BenchMAX, a multi-way
multilingual evaluation benchmark that allows for fair comparisons of these
important abilities across languages. To maintain high quality, three distinct
native-speaking annotators independently annotate each sample in all tasks
after the data is machine-translated from English into 16 other languages.
Additionally, we present a novel translation challenge stemming from dataset
construction. Extensive experiments on BenchMAX reveal varying effectiveness of
core capabilities across languages, highlighting performance gaps that cannot
be bridged by simply scaling up model size. BenchMAX serves as a comprehensive
multilingual evaluation platform, providing a promising test bed to promote the
development of multilingual language models. The dataset and code are publicly
accessible.
|
2502.07347
|
Coarse Set Theory: A Mathematical Foundation for Coarse Ethics
|
cs.AI cs.IT math.IT math.LO math.PR
|
In ethical decision-making, individuals are often evaluated based on
generalized assessments rather than precise individual performance. This
concept, known as Coarse Ethics (CE), has primarily been discussed in natural
language without a formal mathematical foundation. This paper introduces Coarse
Set Theory (CST) to establish a mathematical framework for CE. We define coarse
sets using totally ordered sets and propose axioms that characterize the
hierarchical relationships between elements and their groupings. Additionally,
we introduce coarse-grained sets, which partition an underlying set into
equivalence classes based on predefined criteria. We extend this framework by
defining coarse mappings, which transform detailed individual data into coarser
representations while maintaining essential structural properties. To measure
the information loss, we employ Kullback-Leibler (KL) divergence, demonstrating
how different coarse partitions affect the preservation of information. We
illustrate how CST can be applied to real-world grading systems through
theoretical formulations and empirical analysis. This study provides a rigorous
foundation for CE, enabling a more systematic exploration of fairness,
interpretability, and decision-making trade-offs.
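As a toy illustration of the KL-divergence measurement of information loss mentioned above: a fine-grained score distribution is coarsened into grade bins, and the divergence between the original distribution and its coarse reconstruction quantifies what the partition discards. The binning scheme and distribution below are invented for this sketch.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def coarsen(p, partition):
    """Spread each bin's total mass uniformly over its members, i.e. the
    distribution implied by knowing only the coarse grade."""
    q = [0.0] * len(p)
    for bin_idx in partition:  # each bin is a list of indices into p
        mass = sum(p[i] for i in bin_idx)
        for i in bin_idx:
            q[i] = mass / len(bin_idx)
    return q

# Four raw scores with unequal probabilities, coarsened into two grades.
p = [0.4, 0.3, 0.2, 0.1]
coarse = coarsen(p, [[0, 1], [2, 3]])
loss = kl(p, coarse)  # positive unless p is uniform within each bin
```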
|
2502.07350
|
KABB: Knowledge-Aware Bayesian Bandits for Dynamic Expert Coordination
in Multi-Agent Systems
|
cs.AI
|
As scaling large language models faces prohibitive costs, multi-agent systems
emerge as a promising alternative, though challenged by static knowledge
assumptions and coordination inefficiencies. We introduce Knowledge-Aware
Bayesian Bandits (KABB), a novel framework that enhances multi-agent system
coordination through semantic understanding and dynamic adaptation. The
framework features three key innovations: a three-dimensional knowledge
distance model for deep semantic understanding, a dual-adaptation mechanism for
continuous expert optimization, and a knowledge-aware Thompson Sampling
strategy for efficient expert selection. Extensive evaluation demonstrates KABB
achieves an optimal cost-performance balance, maintaining high performance
while keeping computational demands relatively low in multi-agent coordination.
|
2502.07351
|
Multi-Task-oriented Nighttime Haze Imaging Enhancer for Vision-driven
Measurement Systems
|
cs.CV cs.AI
|
Salient object detection (SOD) plays a critical role in vision-driven
measurement systems (VMS), facilitating the detection and segmentation of key
visual elements in an image. However, adverse imaging conditions such as haze
during the day, low light, and haze at night severely degrade image quality,
complicating the SOD process. To address these challenges, we propose a
multi-task-oriented nighttime haze imaging enhancer (MToIE), which integrates
three tasks: daytime dehazing, low-light enhancement, and nighttime dehazing.
The MToIE incorporates two key innovative components: First, the network
employs a task-oriented node learning mechanism to handle three specific
degradation types: day-time haze, low light, and night-time haze conditions,
with an embedded self-attention module enhancing its performance in nighttime
imaging. Second, a multi-receptive-field enhancement module efficiently
extracts multi-scale features through three parallel depthwise separable
convolution branches with different dilation rates, capturing comprehensive
spatial information with minimal computational overhead. To ensure optimal
image reconstruction quality and visual characteristics, we suggest a hybrid
loss function. Extensive experiments on different types of weather/imaging
conditions illustrate that MToIE surpasses existing methods, significantly
enhancing the accuracy and reliability of vision systems across diverse imaging
scenarios. The code is available at https://github.com/Ai-Chen-Lab/MToIE.
|
2502.07352
|
Bridging the Evaluation Gap: Leveraging Large Language Models for Topic
Model Evaluation
|
cs.CL cs.AI cs.DL
|
This study presents a framework for automated evaluation of dynamically
evolving topic taxonomies in scientific literature using Large Language Models
(LLMs). In digital library systems, topic modeling plays a crucial role in
efficiently organizing and retrieving scholarly content, guiding researchers
through complex knowledge landscapes. As research domains proliferate and
shift, traditional human-centric and static evaluation methods struggle to
maintain relevance. The proposed approach harnesses LLMs to measure key quality
dimensions, such as coherence, repetitiveness, diversity, and topic-document
alignment, without heavy reliance on expert annotators or narrow statistical
metrics. Tailored prompts guide LLM assessments, ensuring consistent and
interpretable evaluations across various datasets and modeling techniques.
Experiments on benchmark corpora demonstrate the method's robustness,
scalability, and adaptability, underscoring its value as a more holistic and
dynamic alternative to conventional evaluation strategies.
|
2502.07355
|
Performance Bounds and Degree-Distribution Optimization of Finite-Length
BATS Codes
|
cs.IT math.IT
|
Batched sparse (BATS) codes were proposed as a reliable communication
solution for networks with packet loss. In the finite-length regime, the error
probability of BATS codes under belief propagation (BP) decoding has been
studied in the literature and can be analyzed by recursive formulae. However,
all existing analyses have not considered precoding or have treated the BATS
code and the precode as two separate entities. In this paper, we analyze the
word-wise error probability of finite-length BATS codes with a precode under
joint decoding, including BP decoding and maximum-likelihood (ML) decoding. The
joint BP decoder performs peeling decoding on a joint Tanner graph constructed
from both the BATS and the precode Tanner graphs, and the joint ML decoder
solves a single linear system with all linear constraints implied by the BATS
code and the precode. We derive closed-form upper bounds on the error
probability for both decoders. Specifically, low-density parity-check (LDPC)
precodes are used for BP decoding, and any generic precode can be used for ML
decoding. Even for BATS codes without a precode, the derived upper bound for BP
decoding is more accurate than the approximate recursive formula, and easier to
compute than the exact recursive formula. The accuracy of the two upper bounds
has been verified by many simulation results. Based on the two upper bounds, we
formulate an optimization problem to optimize the degree distribution of
LDPC-precoded BATS codes, which improves BP performance, ML performance, or
both. In our experiments, to transmit 128 packets over a line network with
packet loss, the optimized LDPC-precoded BATS codes reduce the transmission
overhead to less than 50% of that of standard BATS codes under comparable
decoding complexity constraints.
|
2502.07358
|
SymbioSim: Human-in-the-loop Simulation Platform for Bidirectional
Continuing Learning in Human-Robot Interaction
|
cs.RO
|
The development of intelligent robots seeks to seamlessly integrate them into
the human world, providing assistance and companionship in daily life and work,
with the ultimate goal of achieving human-robot symbiosis. To realize this
vision, robots must continuously learn and evolve through consistent
interaction and collaboration with humans, while humans need to gradually
develop an understanding of and trust in robots through shared experiences.
However, training and testing algorithms directly on physical robots involve
substantial costs and safety risks. Moreover, current robotic simulators fail
to support real human participation, limiting their ability to provide
authentic interaction experiences and gather valuable human feedback. In this
paper, we introduce SymbioSim, a novel human-in-the-loop robotic simulation
platform designed to enable the safe and efficient development, evaluation, and
optimization of human-robot interactions. By leveraging a carefully designed
system architecture and modules, SymbioSim delivers a natural and realistic
interaction experience, facilitating bidirectional continuous learning and
adaptation for both humans and robots. Extensive experiments and user studies
demonstrate the platform's promising performance and highlight its potential to
significantly advance research on human-robot symbiosis.
|
2502.07360
|
Supervised contrastive learning for cell stage classification of animal
embryos
|
q-bio.QM cs.CV
|
Video microscopy, when combined with machine learning, offers a promising
approach for studying the early development of in vitro produced (IVP) embryos.
However, manually annotating developmental events, and more specifically cell
divisions, is time-consuming for a biologist and cannot scale up for practical
applications. We aim to automatically classify the cell stages of embryos from
2D time-lapse microscopy videos with a deep learning approach. We focus on the
analysis of bovine embryonic development using video microscopy, as we are
primarily interested in the application of cattle breeding, and we have created
a Bovine Embryos Cell Stages (ECS) dataset. The challenges are three-fold: (1)
low-quality images and bovine dark cells that make the identification of cell
stages difficult, (2) class ambiguity at the boundaries of developmental
stages, and (3) imbalanced data distribution. To address these challenges, we
introduce CLEmbryo, a novel method that leverages supervised contrastive
learning combined with focal loss for training, and the lightweight 3D neural
network CSN-50 as an encoder. We also show that our method generalizes well.
CLEmbryo outperforms state-of-the-art methods on both our Bovine ECS dataset
and the publicly available NYU Mouse Embryos dataset.
|
2502.07364
|
Effects of Random Edge-Dropping on Over-Squashing in Graph Neural
Networks
|
cs.LG
|
Message Passing Neural Networks (MPNNs) are a class of Graph Neural Networks
(GNNs) that leverage the graph topology to propagate messages across
increasingly larger neighborhoods. The message-passing scheme leads to two
distinct challenges: over-smoothing and over-squashing. While several
algorithms, e.g. DropEdge and its variants -- DropNode, DropAgg and DropGNN --
have successfully addressed the over-smoothing problem, their impact on
over-squashing remains largely unexplored. This represents a critical gap in
the literature as failure to mitigate over-squashing would make these methods
unsuitable for long-range tasks. In this work, we take the first step towards
closing this gap by studying the aforementioned algorithms in the context of
over-squashing. We present novel theoretical results that characterize the
negative effects of DropEdge on sensitivity between distant nodes, suggesting
its unsuitability for long-range tasks. Our findings are easily extended to its
variants, allowing us to build a comprehensive understanding of how they affect
over-squashing. We evaluate these methods using real-world datasets,
demonstrating their detrimental effects. Specifically, we show that while
DropEdge-variants improve test-time performance in short range tasks, they
deteriorate performance in long-range ones. Our theory explains these results
as follows: random edge-dropping lowers the effective receptive field of GNNs,
which although beneficial for short-range tasks, misaligns the models on
long-range ones. This forces the models to overfit to short-range artefacts in
the training set, resulting in poor generalization. Our conclusions highlight
the need to re-evaluate various methods designed for training deep GNNs, with a
renewed focus on modelling long-range interactions.
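For context, the DropEdge mechanism studied above is simple: before each epoch's message passing, every edge is independently removed with some probability. A minimal sketch (names illustrative; real implementations operate on framework adjacency tensors, not Python lists):

```python
import random

def drop_edge(edges, p, rng=random):
    """Return a random subgraph's edge list, keeping each edge of `edges`
    independently with probability 1 - p."""
    return [e for e in edges if rng.random() >= p]

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
sparser = drop_edge(edges, p=0.5)  # used for this epoch's propagation only
```

The theoretical results discussed above concern exactly this resampling: it shrinks the effective receptive field, which helps short-range tasks but hurts sensitivity between distant nodes.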
|
2502.07365
|
LongReD: Mitigating Short-Text Degradation of Long-Context Large
Language Models via Restoration Distillation
|
cs.CL cs.LG
|
Large language models (LLMs) have gained extended context windows through
scaling positional encodings and lightweight continual pre-training. However,
this often leads to degraded performance on short-text tasks, while the reasons
for this degradation remain insufficiently explored. In this work, we identify
two primary factors contributing to this issue: distribution drift in hidden
states and attention scores, and catastrophic forgetting during continual
pre-training. To address these challenges, we propose Long Context Pre-training
with Restoration Distillation (LongReD), a novel approach designed to mitigate
short-text performance degradation through minimizing the distribution
discrepancy between the extended and original models. Besides training on long
texts, LongReD distills the hidden state of selected layers from the original
model on short texts. Additionally, LongReD introduces a short-to-long
distillation, aligning the output distribution on short texts with that on long
texts by leveraging skipped positional indices. Experiments on common text
benchmarks demonstrate that LongReD effectively preserves the model's
short-text performance while maintaining comparable or even better capacity to
handle long texts than baselines. Our code is available at
https://github.com/RUCAIBox/LongReD.
|
2502.07368
|
Bidirectional Piggybacking Design for Systematic Nodes with
Sub-Packetization $l=2$
|
cs.IT math.IT
|
In 2013, Rashmi et al. proposed the piggybacking design framework to reduce
the repair bandwidth of $(n,k;l)$ MDS array codes with small sub-packetization
$l$ and it has been studied extensively in recent years. In this work, we
propose an explicit bidirectional piggybacking design (BPD) with
sub-packetization $l=2$ and the field size $q=O(n^{\lfloor r/2 \rfloor
\!+\!1})$ for systematic nodes, where $r=n-k$ equals the redundancy of an
$(n,k)$ linear code. BPD achieves a lower average repair bandwidth than
previous piggybacking designs for $l=2$ when $r\geq 3$. Surprisingly, we can prove that
the field size $q\leq 256$ is sufficient when $n\leq 15$ and $n-k\leq 4$. For
example, we provide the BPD for the $(14,10)$ Reed-Solomon (RS) code over
$\mathbb{F}_{2^8}$ and obtain approximately $41\%$ savings in the average
repair bandwidth for systematic nodes compared with the trivial repair
approach. This is the lowest repair bandwidth achieved so far for
$(14,10)_{256}$ RS codes with sub-packetization $l=2$.
|
2502.07369
|
Uniform Kernel Prober
|
stat.ML cs.LG math.ST stat.TH
|
The ability to identify useful features or representations of the input data
based on training data that achieves low prediction error on test data across
multiple prediction tasks is considered the key to multitask learning success.
In practice, however, one faces the issue of the choice of prediction tasks and
the availability of test data from the chosen tasks while comparing the
relative performance of different features. In this work, we develop a class of
pseudometrics called Uniform Kernel Prober (UKP) for comparing features or
representations learned by different statistical models such as neural networks
when the downstream prediction tasks involve kernel ridge regression. The
proposed pseudometric, UKP, between any two representations, provides a uniform
measure of prediction error on test data corresponding to a general class of
kernel ridge regression tasks for a given choice of a kernel without access to
test data. Additionally, desired invariances in representations can be
successfully captured by UKP only through the choice of the kernel function and
the pseudometric can be efficiently estimated from $n$ input data samples with
$O(\frac{1}{\sqrt{n}})$ estimation error. We also experimentally demonstrate
the ability of UKP to discriminate between different types of features or
representations based on their generalization performance on downstream kernel
ridge regression tasks.
|
2502.07371
|
Mixed Integer Linear Programming for Active Contact Selection in Deep
Brain Stimulation
|
eess.SY cs.SY
|
Deep brain stimulation (DBS) programming remains a complex and time-consuming
process, requiring manual selection of stimulation parameters to achieve
therapeutic effects while minimizing adverse side-effects. This study explores
mathematical optimization for DBS programming, using functional subdivisions of
the subthalamic nucleus (STN) to define the desired activation profile. A Mixed
Integer Linear Programming (MILP) framework is presented allowing for
dissimilar current distribution across active contacts. MILP is compared to a
Linear Programming (LP) approach in terms of computational efficiency and
activation accuracy. Results from ten Parkinson's disease patients treated with
DBS show that while MILP better matches the predefined stimulation target
activation profile, LP solutions more closely resemble clinically applied
settings, suggesting the profile may not fully capture clinically relevant
patterns. Additionally, MILP's limitations are discussed, including its
reliance on precisely defined target regions and its computational burden for
larger target sets.
|
2502.07372
|
USRNet: Unified Scene Recovery Network for Enhancing Traffic Imaging
under Multiple Adverse Weather Conditions
|
cs.CV
|
Advancements in computer vision technology have facilitated the extensive
deployment of intelligent transportation systems and visual surveillance
systems across various applications, including autonomous driving, public
safety, and environmental monitoring. However, adverse weather conditions such
as haze, rain, snow, and more complex mixed degradation can significantly
degrade image quality. The degradation compromises the accuracy and reliability
of these systems across various scenarios. To tackle the challenge of
developing adaptable models for scene restoration, we introduce the unified
scene recovery network (USRNet), capable of handling multiple types of image
degradation. The USRNet features a sophisticated architecture consisting of a
scene encoder, an attention-driven node independent learning mechanism (NILM),
an edge decoder, and a scene restoration module. The scene encoder, powered by
advanced residual blocks, extracts deep features from degraded images in a
progressive manner, ensuring thorough encoding of degradation information. To
enhance the USRNet's adaptability in diverse weather conditions, we introduce
NILM, which enables the network to learn and respond to different scenarios
with precision, thereby increasing its robustness. The edge decoder is designed
to extract edge features with precision, which is essential for maintaining
image sharpness. Experimental results demonstrate that USRNet surpasses
existing methods in handling complex imaging degradations, thereby improving
the accuracy and reliability of visual systems across diverse scenarios. The
code resources for this work can be accessed in
https://github.com/LouisYxLu/USRNet.
|
2502.07373
|
EvoFlow: Evolving Diverse Agentic Workflows On The Fly
|
cs.LG cs.CL cs.MA cs.NE
|
The past two years have witnessed the evolution of large language model
(LLM)-based multi-agent systems from labor-intensive manual design to partial
automation (\textit{e.g.}, prompt engineering, communication topology) and
eventually to fully automated design. However, existing agentic automation
pipelines often lack LLM heterogeneity and focus on single-objective
performance optimization, limiting their potential to combine weaker models for
more customized and cost-effective solutions. To address this challenge, we
propose EvoFlow, a niching evolutionary algorithm-based framework to
automatically search a population of heterogeneous and complexity-adaptive
agentic workflows, rather than a single homogeneous, complex workflow.
Technically, EvoFlow performs \textit{(1) tag-based retrieval} to extract
parent workflows from an agentic population, evolves new workflows through
\textit{(2) crossover} and \textit{(3) mutation}, and employs \textit{(4)
niching-based selection} to maintain population diversity and quality.
Extensive evaluations across seven benchmarks demonstrate that EvoFlow is:
\textbf{(I) diverse}, evolving a population of workflows ranging from simple
I/O tasks to complex multi-turn interactions; \textbf{(II) high-performing},
outperforming previous handcrafted and automated workflows by
$1.23\%\sim29.86\%$; \textbf{(III) economical}, surpassing powerful
o1-preview at $12.4\%$ of its inference cost using weaker open-source
models.
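The four-step loop the abstract lists can be sketched in a few lines of Python. Everything concrete below (the dictionary encoding of a workflow, the operator vocabulary, the mutation rate) is an invented stand-in for illustration, not EvoFlow's actual representation:

```python
import random

# Hypothetical operator vocabulary for toy "workflows"
OPERATORS = ["plan", "cot", "reflect", "vote"]

def evolve_step(population, fitness, rng):
    # (1) tag-based retrieval: sample two parents sharing a niche tag
    tag = rng.choice([w["tag"] for w in population])
    pool = [w for w in population if w["tag"] == tag]
    p1, p2 = rng.choice(pool), rng.choice(pool)
    # (2) crossover: splice the parents' operator sequences
    cut = len(p1["ops"]) // 2
    child_ops = p1["ops"][:cut] + p2["ops"][cut:]
    # (3) mutation: occasionally swap in a random operator
    if child_ops and rng.random() < 0.3:
        child_ops[rng.randrange(len(child_ops))] = rng.choice(OPERATORS)
    child = {"tag": tag, "ops": child_ops}
    # (4) niching-based selection: keep only the fittest workflow per tag,
    # so every niche survives and population diversity is preserved
    best = {}
    for w in population + [child]:
        if w["tag"] not in best or fitness(w) > fitness(best[w["tag"]]):
            best[w["tag"]] = w
    return list(best.values())
```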
|
2502.07374
|
LLMs Can Easily Learn to Reason from Demonstrations Structure, not
content, is what matters!
|
cs.AI
|
Large reasoning models (LRMs) tackle complex reasoning problems by following
long chain-of-thoughts (Long CoT) that incorporate reflection, backtracking,
and self-validation. However, the training techniques and data requirements to
elicit Long CoT remain poorly understood. In this work, we find that a Large
Language model (LLM) can effectively learn Long CoT reasoning through
data-efficient supervised fine-tuning (SFT) and parameter-efficient low-rank
adaptation (LoRA). With just 17k long CoT training samples, the
Qwen2.5-32B-Instruct model achieves significant improvements on a wide range of
math and coding benchmarks, including 56.7% (+40.0%) on AIME 2024 and 57.0%
(+8.1%) on LiveCodeBench, competitive to the proprietary o1-preview model's
score of 44.6% and 59.1%. More importantly, we find that the structure of Long
CoT is critical to the learning process, whereas the content of individual
reasoning steps has minimal impact. Perturbations affecting content, such as
training on incorrect samples or removing reasoning keywords, have little
impact on performance. In contrast, structural modifications that disrupt
logical consistency in the Long CoT, such as shuffling or deleting reasoning
steps, significantly degrade accuracy. For example, a model trained on Long CoT
samples with incorrect answers still achieves only 3.2% lower accuracy compared
to training with fully correct samples. These insights deepen our understanding
of how to elicit reasoning capabilities in LLMs and highlight key
considerations for efficiently training the next generation of reasoning
models. This is the academic paper of our previous released Sky-T1-32B-Preview
model. Codes are available at https://github.com/NovaSky-AI/SkyThought.
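The contrast the abstract draws can be made concrete with a toy sketch; the step encoding and the keyword list below are invented for illustration, not the paper's actual pipeline:

```python
import random

# Content perturbation: strip reasoning keywords from each step but keep
# the ordering of steps intact (reported to barely hurt performance).
def perturb_content(steps, keywords=("therefore", "because")):
    cleaned = []
    for step in steps:
        for kw in keywords:
            step = step.replace(kw, "")
        cleaned.append(step.strip())
    return cleaned

# Structural perturbation: shuffle the steps, destroying logical
# consistency (reported to significantly degrade accuracy).
def perturb_structure(steps, seed=0):
    shuffled = steps[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled
```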
|
2502.07377
|
Reddit's Appetite: Predicting User Engagement with Nutritional Content
|
cs.SI cs.CY
|
The increased popularity of food communities on social media shapes the way
people engage with food-related content. Due to the extensive consequences of
such content on users' eating behavior, researchers have started studying the
factors that drive user engagement with food in online platforms. However,
while most studies focus on visual aspects of food content in social media,
only initial studies have explored the impact of nutritional content on
user engagement. In this paper, we set out to close this gap and analyze
food-related posts on Reddit, focusing on the association between the
nutritional density of a meal and engagement levels, particularly the number of
comments. Hence, we collect and empirically analyze almost 600,000 food-related
posts and uncover differences in nutritional content between engaging and
non-engaging posts. Moreover, we train a series of XGBoost models, and evaluate
the importance of nutritional density while predicting whether users will
comment on a post or whether a post will substantially resonate with the
community. We find that nutritional features improve the baseline model's
accuracy by 4%, with a positive contribution of calorie density towards
prediction of engagement, suggesting that higher nutritional content is
associated with higher user engagement in food-related posts. Our results
provide valuable insights for the design of more engaging online initiatives
aimed at, for example, encouraging healthy eating habits.
|
2502.07380
|
Demonstrating Wheeled Lab: Modern Sim2Real for Low-cost, Open-source
Wheeled Robotics
|
cs.RO
|
Simulation has been pivotal in recent robotics milestones and is poised to
play a prominent role in the field's future. However, recent robotic advances
often rely on expensive and high-maintenance platforms, limiting access to
broader robotics audiences. This work introduces Wheeled Lab, a framework for
the low-cost, open-source wheeled platforms that are already widely established
in education and research. Through integration with Isaac Lab, Wheeled Lab
introduces modern techniques in Sim2Real, such as domain randomization, sensor
simulation, and end-to-end learning, to new user communities. To kickstart
education and demonstrate the framework's capabilities, we develop three
state-of-the-art policies for small-scale RC cars: controlled drifting,
elevation traversal, and visual navigation, each trained in simulation and
deployed in the real world. By bridging the gap between advanced Sim2Real
methods and affordable, available robotics, Wheeled Lab aims to democratize
access to cutting-edge tools, fostering innovation and education in a broader
robotics context. The full stack, from hardware to software, is low cost and
open-source.
|
2502.07381
|
Spatial Degradation-Aware and Temporal Consistent Diffusion Model for
Compressed Video Super-Resolution
|
cs.CV
|
Due to limitations of storage and bandwidth, videos stored and transmitted on
the Internet are usually low-quality with low-resolution and compression noise.
Although video super-resolution (VSR) is an efficient technique to enhance
video resolution, relatively few VSR methods focus on compressed videos. Directly
applying general VSR approaches often fails to improve practical
videos, especially when frames are highly compressed at a low bit rate.
Recently, diffusion models have achieved superior performance in low-level
visual tasks, and their high-realism generation capability enables them to be
applied in VSR. To synthesize more compression-lost details and refine temporal
consistency, we propose a novel Spatial Degradation-Aware and Temporal
Consistent (SDATC) diffusion model for compressed VSR. Specifically, we
introduce a Distortion Control Module (DCM) to modulate diffusion model inputs
and guide the generation. Next, the diffusion model executes the denoising
process for texture generation with fine-tuned spatial prompt-based
compression-aware module (PCAM) and spatio-temporal attention module (STAM).
PCAM extracts features to encode specific compression information dynamically.
STAM extends the spatial attention mechanism to a spatio-temporal dimension for
capturing temporal correlation. Extensive experimental results on benchmark
datasets demonstrate the effectiveness of the proposed modules in enhancing
compressed videos.
|
2502.07384
|
SAGEPhos: Sage Bio-Coupled and Augmented Fusion for Phosphorylation Site
Detection
|
cs.CE
|
Phosphorylation site prediction based on kinase-substrate interaction plays a
vital role in understanding cellular signaling pathways and disease mechanisms.
Computational methods for this task can be categorized into
kinase-family-focused and individual kinase-targeted approaches. Individual
kinase-targeted methods have gained prominence for their ability to explore a
broader protein space and provide more precise target information for kinase
inhibitors. However, most existing individual kinase-based approaches focus
solely on sequence inputs, neglecting crucial structural information. To
address this limitation, we introduce SAGEPhos (Structure-aware
kinAse-substrate bio-coupled and bio-auGmented nEtwork for Phosphorylation site
prediction), a novel framework that modifies the semantic space of main protein
inputs using auxiliary inputs at two distinct modality levels. At the
inter-modality level, SAGEPhos introduces a Bio-Coupled Modal Fusion method,
distilling essential kinase sequence information to refine task-oriented local
substrate feature space, creating a shared semantic space that captures crucial
kinase-substrate interaction patterns. Within the substrate's intra-modality
domain, it focuses on Bio-Augmented Fusion, emphasizing 2D local sequence
information while selectively incorporating 3D spatial information from
predicted structures to complement the sequence space. Moreover, to address the
lack of structural information in current datasets, we contribute a new,
refined phosphorylation site prediction dataset, which incorporates crucial
structural elements and will serve as a new benchmark for the field.
Experimental results demonstrate that SAGEPhos significantly outperforms
baseline methods. We release the SAGEPhos models and code at
https://github.com/ZhangJJ26/SAGEPhos.
|
2502.07386
|
Parametric type design in the era of variable and color fonts
|
cs.CL cs.GR
|
Parametric fonts are programmatically defined fonts with variable parameters,
pioneered by Donald Knuth with his MetaFont technology in the 1980s. While
Donald Knuth's ideas in MetaFont and subsequently in MetaPost are often seen as
legacy techniques from the pre-graphical user interface (GUI) era of type
design, recent trends like variable fonts suggest a resurgence of certain
principles. This paper explores a modern type design process built on
parametric design principles, specifically using MetaPost. The author created
two variable fonts with this method and released them under a free, open-source
license. The paper details the methodology, workflow, and insights gained from
this process.
|
2502.07388
|
UAV-assisted Joint Mobile Edge Computing and Data Collection via
Matching-enabled Deep Reinforcement Learning
|
cs.NE
|
Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) and data
collection (DC) have been popular research issues. Different from existing
works that consider MEC and DC scenarios separately, this paper investigates a
multi-UAV-assisted joint MEC-DC system. Specifically, we formulate a joint
optimization problem to minimize the MEC latency and maximize the collected
data volume. This problem can be classified as a non-convex mixed integer
programming problem involving long-term optimization under dynamic conditions. Thus, we
propose a deep reinforcement learning-based approach that jointly optimizes the
UAV movement, user transmit power, and user association in real time to solve
the problem efficiently. Specifically, we reformulate the optimization problem
into an action space-reduced Markov decision process (MDP) and optimize the
user association by using a two-phase matching-based association (TMA)
strategy. Subsequently, we propose a soft actor-critic (SAC)-based approach
that integrates the proposed TMA strategy (SAC-TMA) to solve the formulated
joint optimization problem collaboratively. Simulation results demonstrate that
the proposed SAC-TMA is able to coordinate the two subsystems and can
effectively reduce the system latency and improve the data collection volume
compared with other benchmark algorithms.
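As a rough illustration of matching-based user association (the proposal-and-rejection scheme below is a generic deferred-acceptance stand-in, not the paper's TMA strategy), users propose to UAVs in preference order and an over-capacity UAV rejects its lowest-scoring proposer:

```python
# Assumed inputs (all invented for illustration):
#   preferences: user -> ordered list of UAV ids, most preferred first
#   scores:      (user, uav) -> association value, e.g. channel quality
#   capacity:    maximum number of users each UAV can serve
def two_phase_match(preferences, scores, capacity):
    unmatched = list(preferences)
    next_choice = {u: 0 for u in preferences}
    assigned = {}  # uav -> list of associated users
    while unmatched:
        user = unmatched.pop(0)
        # phase 1: the user proposes to its next-preferred UAV
        uav = preferences[user][next_choice[user]]
        next_choice[user] += 1
        assigned.setdefault(uav, []).append(user)
        # phase 2: an over-capacity UAV rejects its worst proposer,
        # who re-enters the pool and falls back to the next choice
        if len(assigned[uav]) > capacity:
            worst = min(assigned[uav], key=lambda u: scores[(u, uav)])
            assigned[uav].remove(worst)
            unmatched.append(worst)
    return assigned
```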
|
2502.07389
|
FADE: Forecasting for Anomaly Detection on ECG
|
cs.CV
|
Cardiovascular diseases, a leading cause of noncommunicable disease-related
deaths, require early and accurate detection to improve patient outcomes.
Taking advantage of advances in machine learning and deep learning, multiple
approaches have been proposed in the literature to address the challenge of
detecting ECG anomalies. Typically, these methods are based on the manual
interpretation of ECG signals, which is time consuming and depends on the
expertise of healthcare professionals. The objective of this work is to propose
a deep learning system, FADE, designed for normal ECG forecasting and anomaly
detection, which reduces the need for extensive labeled datasets and manual
interpretation. FADE has been trained in a self-supervised manner with a novel
morphologically inspired loss function. Unlike conventional models that learn
from labeled anomalous ECG waveforms, our approach predicts the future of
normal ECG signals, thus avoiding the need for extensive labeled datasets.
Using a novel distance function to compare forecasted ECG signals with actual
sensor data, our method effectively identifies cardiac anomalies. Additionally,
this approach can be adapted to new contexts through domain adaptation
techniques. To evaluate our proposal, we performed a set of experiments using
two publicly available datasets: MIT-BIH NSR and MIT-BIH Arrhythmia. The results
demonstrate that our system achieves an average accuracy of 83.84% in anomaly
detection, while correctly classifying normal ECG signals with an accuracy of
85.46%. Our proposed approach exhibited superior performance in the early
detection of cardiac anomalies in ECG signals, surpassing previous methods that
predominantly identify a limited range of anomalies. FADE effectively detects
both abnormal heartbeats and arrhythmias, offering significant advantages in
healthcare through cost reduction and the capacity to process large-scale ECG data.
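The forecast-versus-observation idea can be sketched minimally; the toy waveforms, the plain Euclidean distance, and the threshold below are illustrative stand-ins (FADE's actual distance function is novel and morphology-aware):

```python
import math

def l2_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_anomalous(forecast, observed, threshold=1.0):
    # flag the window when the observed ECG strays too far from the
    # forecast of a *normal* signal
    return l2_distance(forecast, observed) > threshold

# toy waveforms standing in for one forecast window and two observations
normal_forecast = [0.0, 0.2, 1.0, 0.2, 0.0]
close_to_normal = [v + 0.01 for v in normal_forecast]
aberrant_beat = [0.0, 1.0, 0.1, 0.9, 0.0]
```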
|
2502.07391
|
Target-Augmented Shared Fusion-based Multimodal Sarcasm Explanation
Generation
|
cs.CL
|
Sarcasm is a linguistic phenomenon that intends to ridicule a target (e.g.,
entity, event, or person) in an inherent way. Multimodal Sarcasm Explanation
(MuSE) aims at revealing the intended irony in a sarcastic post using a natural
language explanation. Though important, existing systems have overlooked the
significance of the target of sarcasm in generating explanations. In this
paper, we propose a Target-aUgmented shaRed fusion-Based sarcasm explanatiOn
model, aka. TURBO. We design a novel shared-fusion mechanism to leverage the
inter-modality relationships between an image and its caption. TURBO assumes
the target of the sarcasm and guides the multimodal shared fusion mechanism in
learning intricacies of the intended irony for explanations. We evaluate our
proposed TURBO model on the MORE+ dataset. Comparison against multiple
baselines and state-of-the-art models signifies the performance improvement of
TURBO by an average margin of $+3.3\%$. Moreover, we explore LLMs in zero and
one-shot settings for our task and observe that LLM-generated explanation,
though remarkable, often fails to capture the critical nuances of the sarcasm.
Furthermore, we supplement our study with extensive human evaluation on TURBO's
generated explanations and find them to be comparatively better than other
systems.
|
2502.07394
|
Interpretable Rules for Online Failure Prediction: A Case Study on the
Metro do Porto dataset
|
cs.LG
|
Due to their high predictive performance, predictive maintenance applications
have increasingly been approached with Deep Learning techniques in recent
years. However, as in other real-world application scenarios, the need for
explainability is often stated but not sufficiently addressed. This study will
focus on predicting failures on Metro trains in Porto, Portugal. While recent
works have found high-performing deep neural network architectures that feature
a parallel explainability pipeline, the generated explanations are fairly
complicated and do little to explain why the failures are happening. This work
proposes a simple online rule-based explainability approach with interpretable
features that leads to straightforward, interpretable rules. We showcase our
approach on MetroPT2 and find that three specific sensors on the Metro do Porto
trains suffice to predict the failures present in the dataset with simple
rules.
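A sketch of what such an online rule-based predictor can look like; the sensor names and thresholds below are invented for illustration, not the rules actually learned on MetroPT2:

```python
# Each rule: (sensor name, firing condition, human-readable label).
# All three entries are hypothetical examples, not the paper's rules.
RULES = [
    ("oil_temperature", lambda v: v > 80.0, "overheating"),
    ("motor_current", lambda v: v > 9.0, "motor overload"),
    ("air_pressure", lambda v: v < 7.5, "pressure drop"),
]

def predict_failure(reading: dict) -> list[str]:
    """Return the labels of all fired rules; an empty list means
    'no failure predicted' for this reading."""
    return [label for sensor, cond, label in RULES if cond(reading[sensor])]
```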
|
2502.07396
|
Optimality in importance sampling: a gentle survey
|
stat.CO cs.CE stat.ML
|
The performance of Monte Carlo sampling methods relies on the crucial
choice of a proposal density. The notion of optimality is fundamental to design
suitable adaptive procedures of the proposal density within Monte Carlo
schemes. This work is an exhaustive review of the concept of optimality in
importance sampling. Several frameworks are described and analyzed, such as the
marginal likelihood approximation for model selection, the use of multiple
proposal densities, a sequence of tempered posteriors, and noisy scenarios
including the applications to approximate Bayesian computation (ABC) and
reinforcement learning, to name a few. Some theoretical and empirical
comparisons are also provided.
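The best-known optimality result in this literature is that the proposal $q^*(x) \propto |f(x)| p(x)$ minimizes the variance of the importance sampling estimator of $E_p[f(X)]$, driving it to zero when $f$ is positive. A minimal numerical check on a toy discrete target (the distribution and integrand below are arbitrary choices, not taken from the survey):

```python
import random

# Toy target p and positive integrand f, chosen only for illustration
p = {0: 0.5, 1: 0.3, 2: 0.2}
f = lambda x: x + 1.0

true_value = sum(f(x) * px for x, px in p.items())  # E_p[f(X)] = 1.7

# Optimal proposal q*(x) = f(x) p(x) / Z (valid since f > 0 here)
Z = sum(f(x) * px for x, px in p.items())
q_opt = {x: f(x) * px / Z for x, px in p.items()}

def is_estimate(q, n=1000, seed=0):
    """Importance sampling estimate of E_p[f(X)] with proposal q."""
    rng = random.Random(seed)
    xs = rng.choices(list(q), weights=list(q.values()), k=n)
    vals = [f(x) * p[x] / q[x] for x in xs]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return mean, var

mean_opt, var_opt = is_estimate(q_opt)
# every weighted sample equals Z = E_p[f(X)], so the variance collapses
```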
|
2502.07397
|
Bandit Optimal Transport
|
stat.ML cs.LG
|
Despite the impressive progress in statistical Optimal Transport (OT) in
recent years, there has been little interest in the study of the
\emph{sequential learning} of OT. Surprisingly so, as this problem is both
practically motivated and a challenging extension of existing settings such as
linear bandits. This article considers (for the first time) the stochastic
bandit problem of learning to solve generic Kantorovich and entropic OT
problems from repeated interactions when the marginals are known but the cost
is unknown. We provide $\tilde{\mathcal O}(\sqrt{T})$ regret algorithms for
both problems by extending linear bandits on Hilbert spaces. These results
provide a reduction to infinite-dimensional linear bandits. To deal with the
dimension, we provide a method to exploit the intrinsic regularity of the cost
to learn, yielding corresponding regret bounds which interpolate between
$\tilde{\mathcal O}(\sqrt{T})$ and $\tilde{\mathcal O}(T)$.
|
2502.07399
|
On Iterative Evaluation and Enhancement of Code Quality Using GPT-4o
|
cs.SE cs.AI
|
This paper introduces CodeQUEST, a novel framework leveraging Large Language
Models (LLMs) to iteratively evaluate and enhance code quality across multiple
dimensions, including readability, maintainability, efficiency, and security.
The framework is divided into two main components: an Evaluator that assesses
code quality across ten dimensions, providing both quantitative scores and
qualitative summaries, and an Optimizer that iteratively improves the code
based on the Evaluator's feedback. Our study demonstrates that CodeQUEST can
effectively and robustly evaluate code quality, with its assessments aligning
closely with established code quality metrics. Through a series of experiments
using a curated dataset of Python and JavaScript examples, CodeQUEST
demonstrated significant improvements in code quality, achieving a mean
relative percentage improvement of 52.6%. The framework's evaluations were
validated against a set of proxy metrics comprising Pylint Score, Radon
Maintainability Index, and Bandit output logs, showing a meaningful
correlation. This highlights the potential of LLMs in automating code quality
evaluation and improvement processes, presenting a significant advancement
toward enhancing software development practices. The code implementation of the
framework is available at: https://github.com/jpmorganchase/CodeQuest.
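The Evaluator-Optimizer interplay reduces to a generic improvement loop; the sketch below is schematic only (the function names and the stopping rule are invented, and the real Evaluator scores ten dimensions with an LLM):

```python
# Schematic loop: 'evaluate' stands in for the Evaluator, 'optimize' for
# the Optimizer that rewrites code given the current score.
def iterative_quality_loop(code, evaluate, optimize, max_iters=5):
    best_code, best_score = code, evaluate(code)
    for _ in range(max_iters):
        candidate = optimize(best_code, best_score)
        score = evaluate(candidate)
        if score <= best_score:
            break  # no improvement: stop iterating
        best_code, best_score = candidate, score
    return best_code, best_score
```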
|
2502.07400
|
Explainable Multimodal Machine Learning for Revealing Structure-Property
Relationships in Carbon Nanotube Fibers
|
cond-mat.mtrl-sci cond-mat.soft cs.AI cs.LG physics.data-an
|
In this study, we propose Explainable Multimodal Machine Learning (EMML),
which integrates the analysis of diverse data types (multimodal data) using
factor analysis for feature extraction with Explainable AI (XAI), for carbon
nanotube (CNT) fibers prepared from aqueous dispersions. This method is a
powerful approach to elucidate the mechanisms governing material properties,
where multi-stage fabrication conditions and multiscale structures have complex
influences. Thus, in our case, this approach helps us understand how different
processing steps and structures at various scales impact the final properties
of CNT fibers. The analysis targeted structures ranging from the nanoscale to
the macroscale, including aggregation size distributions of CNT dispersions and
the effective length of CNTs. Furthermore, because some types of data were
difficult to interpret using standard methods, challenging-to-interpret
distribution data were analyzed using Non-negative Matrix Factorization (NMF) for
extracting key features that determine the outcome. Contribution analysis with
SHapley Additive exPlanations (SHAP) demonstrated that small, uniformly
distributed aggregates are crucial for improving fracture strength, while CNTs
with long effective lengths are significant factors for enhancing electrical
conductivity. The analysis also identified thresholds and trends for these key
factors to assist in defining the conditions needed to optimize CNT fiber
properties. EMML is not limited to CNT fibers but can be applied to the design
of other materials derived from nanomaterials, making it a useful tool for
developing a wide range of advanced materials. This approach provides a
foundation for advancing data-driven materials research.
|
2502.07401
|
Enhancing Higher Education with Generative AI: A Multimodal Approach for
Personalised Learning
|
cs.HC cs.AI
|
This research explores the opportunities of Generative AI (GenAI) in the
realm of higher education through the design and development of a multimodal
chatbot for an undergraduate course. Leveraging the ChatGPT API for nuanced
text-based interactions and Google Bard for advanced image analysis and
diagram-to-code conversions, we showcase the potential of GenAI in addressing a
broad spectrum of educational queries. Additionally, the chatbot presents a
file-based analyser designed for educators, offering deep insights into student
feedback via sentiment and emotion analysis, and summarising course evaluations
with key metrics. These combinations highlight the crucial role of multimodal
conversational AI in enhancing teaching and learning processes, promising
significant advancements in educational adaptability, engagement, and feedback
analysis. By demonstrating a practical web application, this research
underlines the imperative for integrating GenAI technologies to foster more
dynamic and responsive educational environments, ultimately contributing to
improved educational outcomes and pedagogical strategies.
|
2502.07403
|
Extended monocular 3D imaging
|
cs.CV physics.optics
|
3D vision is of paramount importance for numerous applications ranging from
machine intelligence to precision metrology. Despite much recent progress, the
majority of 3D imaging hardware remains bulky and complicated and provides much
lower image resolution than its 2D counterparts. Moreover, there are
many well-known scenarios in which existing 3D imaging solutions frequently fail.
Here, we introduce an extended monocular 3D imaging (EM3D) framework that fully
exploits the vectorial wave nature of light. Via the multi-stage fusion of
diffraction- and polarization-based depth cues, using a compact monocular
camera equipped with a diffractive-refractive hybrid lens, we experimentally
demonstrate the snapshot acquisition of a million-pixel and accurate 3D point
cloud for extended scenes that are traditionally challenging, including those
with low texture, high reflectivity, or near transparency, without a
data prior. Furthermore, we discover that the combination of depth and
polarization information can unlock unique new opportunities in material
identification, which may further expand machine intelligence for applications
like target recognition and face anti-spoofing. The straightforward yet
powerful architecture thus opens up a new path for a higher-dimensional machine
vision in a minimal form factor, facilitating the deployment of monocular
cameras for applications in much more diverse scenarios.
|
2502.07404
|
Human-in-the-Loop Annotation for Image-Based Engagement Estimation:
Assessing the Impact of Model Reliability on Annotation Accuracy
|
cs.HC cs.AI cs.CV
|
Human-in-the-loop (HITL) frameworks are increasingly recognized for their
potential to improve annotation accuracy in emotion estimation systems by
combining machine predictions with human expertise. This study focuses on
integrating a high-performing image-based emotion model into a HITL annotation
framework to evaluate the collaborative potential of human-machine interaction
and identify the psychological and practical factors critical to successful
collaboration. Specifically, we investigate how varying model reliability and
cognitive framing influence human trust, cognitive load, and annotation
behavior in HITL systems. We demonstrate that model reliability and
psychological framing significantly impact annotators' trust, engagement, and
consistency, offering insights into optimizing HITL frameworks. Through three
experimental scenarios with 29 participants--baseline model reliability (S1),
fabricated errors (S2), and cognitive bias introduced by negative framing
(S3)--we analyzed behavioral and qualitative data. Reliable predictions in S1
yielded high trust and annotation consistency, while unreliable outputs in S2
led to increased critical evaluations but also heightened frustration and
response variability. Negative framing in S3 revealed how cognitive bias
influenced participants to perceive the model as more relatable and accurate,
despite misinformation regarding its reliability. These findings highlight the
importance of both reliable machine outputs and psychological factors in
shaping effective human-machine collaboration. By leveraging the strengths of
both human oversight and automated systems, this study establishes a scalable
HITL framework for emotion annotation and lays the foundation for broader
applications in adaptive learning and human-computer interaction.
|
2502.07405
|
Coupling Agent-Based Simulations and VR universes: the case of GAMA and
Unity
|
cs.MA
|
Agent-based models (ABMs) and video games, including those taking advantage
of virtual reality (VR), have undergone a remarkable parallel evolution,
achieving impressive levels of complexity and sophistication. This paper argues
that while ABMs prioritize scientific analysis and understanding and VR aims
for immersive entertainment, they both simulate artificial worlds and can
benefit from closer integration. Coupling both approaches indeed opens
interesting possibilities for research and development in various fields, and
in particular education, which is at the heart of SIMPLE, an EU-funded project
developing digital tools for raising awareness of environmental issues.
However, existing tools often present limitations,
including technical complexity, limited functionalities, and lack of
interoperability. To address these challenges, we introduce a novel framework
for linking GAMA, a popular ABM platform, with Unity, a widely used game
engine. This framework enables seamless data exchange, real-time visualization,
and user interaction within VR environments, allowing researchers to leverage
the strengths of both ABMs and VR for more impactful and engaging simulations.
We demonstrate the capabilities of our framework through two prototypes built
to highlight its potential in representing and interacting with complex
socio-environmental system models. We conclude by emphasizing the importance of
continued collaboration between the ABM and VR communities to develop robust,
user-friendly tools, paving the way for a new era of collaborative research and
immersive experiences in simulations.
|
2502.07408
|
No Data, No Optimization: A Lightweight Method To Disrupt Neural
Networks With Sign-Flips
|
cs.LG cs.AI cs.CV
|
Deep Neural Networks (DNNs) can be catastrophically disrupted by flipping
only a handful of sign bits in their parameters. We introduce Deep Neural
Lesion (DNL), a data-free, lightweight method that locates these critical
parameters and triggers massive accuracy drops. We validate its efficacy on a
wide variety of computer vision models and datasets. The method requires no
training data or optimization and can be carried out via common software-,
firmware-, or hardware-based attack vectors. An enhanced variant that
uses a single forward and backward pass further amplifies the damage beyond
DNL's zero-pass approach. Flipping just two sign bits in ResNet50 on ImageNet
reduces accuracy by 99.8%. We also show that selectively protecting a small
fraction of vulnerable sign bits provides a practical defense against such
attacks.
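The single-parameter mechanism behind such attacks is easy to illustrate. The helper below is a minimal sketch (not the authors' implementation) of toggling the sign bit of a value stored as a 32-bit IEEE-754 float:

```python
import struct

def flip_sign_bit(x: float) -> float:
    """Toggle the IEEE-754 sign bit of a value stored as a 32-bit float."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bits ^= 0x80000000  # the most significant bit encodes the sign
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

print(flip_sign_bit(0.75))  # -0.75: the magnitude is untouched
```

A single such flip on a well-chosen parameter negates every downstream contribution of that weight, which is why so few flips can suffice.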
|
2502.07409
|
MGPATH: Vision-Language Model with Multi-Granular Prompt Learning for
Few-Shot WSI Classification
|
cs.CV cs.LG
|
Whole slide pathology image classification presents challenges due to
gigapixel image sizes and limited annotation labels, hindering model
generalization. This paper introduces a prompt learning method to adapt large
vision-language models for few-shot pathology classification. We first extend
the Prov-GigaPath vision foundation model, pre-trained on 1.3 billion pathology
image tiles, into a vision-language model by adding adaptors and aligning it
with medical text encoders via contrastive learning on 923K image-text pairs.
The model is then used to extract visual features and text embeddings from
few-shot annotations and is fine-tuned with learnable prompt embeddings. Unlike
prior methods that combine prompts with frozen features using prefix embeddings
or self-attention, we propose multi-granular attention that models
interactions between learnable prompts and both individual image patches and
groups of patches. This approach improves the model's ability to capture both
fine-grained details and broader context, enhancing its recognition of complex
patterns across sub-regions. To further improve accuracy, we leverage
(unbalanced) optimal transport-based visual-text distance to secure model
robustness by mitigating perturbations that might occur during the data
augmentation process. Empirical experiments on lung, kidney, and breast
pathology modalities validate the effectiveness of our approach; thereby, we
surpass several of the latest competitors and consistently improve performance
across diverse architectures, including CLIP, PLIP, and Prov-GigaPath
integrated PLIP. We release our implementations and pre-trained models in the
MGPATH repository.
|
2502.07411
|
EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering
|
cs.CV cs.MM
|
We introduce EgoTextVQA, a novel and rigorously constructed benchmark for
egocentric QA assistance involving scene text. EgoTextVQA contains 1.5K
ego-view videos and 7K scene-text aware questions that reflect real-user needs
in outdoor driving and indoor house-keeping activities. The questions are
designed to elicit identification and reasoning on scene text in an egocentric
and dynamic environment. With EgoTextVQA, we comprehensively evaluate 10
prominent multimodal large language models. Currently, all models struggle, and
the best results (Gemini 1.5 Pro) are around 33% accuracy, highlighting the
severe deficiency of these techniques in egocentric QA assistance. Our further
investigations suggest that precise temporal grounding and multi-frame
reasoning, along with high resolution and auxiliary scene-text inputs, are key
for better performance. With thorough analyses and heuristic suggestions, we
hope EgoTextVQA can serve as a solid testbed for research in egocentric
scene-text QA assistance.
|
2502.07412
|
Mapping the Intellectual Structure of Social Network Research: A
Comparative Bibliometric Analysis
|
cs.SI
|
Network science is an interdisciplinary field that transcends traditional
academic boundaries, offering profound insights into complex systems across
disciplines. This study conducts a bibliometric analysis of three leading
journals, Social Networks, Network Science, and the Journal of Complex
Networks, each representing a distinct yet interconnected perspective within
the field. Social Networks focuses on empirical and theoretical advancements in
social structures, emphasizing sociological and behavioral approaches. Network
Science bridges physics, computer science, and applied mathematics to explore
network dynamics in diverse domains. The Journal of Complex Networks, by
contrast, is dedicated to the mathematical and algorithmic foundations of
network theory. By employing co-authorship and citation network analysis, we
map the intellectual landscape of these journals, identifying key contributors,
influential works, and structural trends in collaboration. Through centrality
measures such as degree, betweenness, and eigenvector centrality, we uncover
the most impactful publications and their roles in shaping the discourse within
and beyond their respective domains. Our analysis not only delineates the
disciplinary contours of network science but also highlights its convergence
points, revealing the evolving trajectory of this dynamic and rapidly expanding
field.
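The centrality measures named above are standard graph statistics. As a rough illustration (pure Python on a toy star graph, not the journals' actual citation data), degree and eigenvector centrality can be computed as:

```python
def degree_centrality(adj):
    """Fraction of the other nodes each node is connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def eigenvector_centrality(adj, iters=100):
    """Power iteration on A + I (the shift avoids oscillation on
    bipartite graphs such as the star below)."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        x_new = {v: x[v] + sum(x[u] for u in nbrs) for v, nbrs in adj.items()}
        norm = max(x_new.values())
        x = {v: s / norm for v, s in x_new.items()}
    return x

# star graph: a central "hub" node linked to three leaves
adj = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(degree_centrality(adj)["hub"])  # 1.0: the hub touches every other node
```

Both measures rank the hub highest, matching the intuition that it brokers all paths in the star.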
|
2502.07414
|
Sample Weight Averaging for Stable Prediction
|
cs.LG
|
The challenge of Out-of-Distribution (OOD) generalization poses a
foundational concern for the application of machine learning algorithms to
risk-sensitive areas. Inspired by traditional importance weighting and
propensity weighting methods, prior approaches employ an independence-based
sample reweighting procedure. They aim at decorrelating covariates to
counteract the bias introduced by spurious correlations between unstable
variables and the outcome, thus enhancing generalization and fulfilling stable
prediction under covariate shift. Nonetheless, these methods are prone to
experiencing an inflation of variance, primarily attributable to the reduced
efficacy in utilizing training samples during the reweighting process. Existing
remedies necessitate either environmental labels or substantially higher time
costs along with additional assumptions and supervised information. To mitigate
this issue, we propose SAmple Weight Averaging (SAWA), a simple yet efficacious
strategy that can be universally integrated into various sample reweighting
algorithms to decrease the variance and coefficient estimation error, thus
boosting the covariate-shift generalization and achieving stable prediction
across different environments. We prove its rationality and benefits
theoretically. Experiments across synthetic datasets and real-world datasets
consistently underscore its superiority against covariate shift.
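The core intuition, that averaging independently obtained sample weights shrinks their variance without changing their mean, can be checked with a small NumPy simulation. The Dirichlet draws below are a stand-in for the output of any reweighting algorithm, not SAWA's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_runs = 200, 10

# stand-in for weights produced by independent reweighting runs;
# each vector is scaled so the weights average to 1
weight_runs = [rng.dirichlet(np.ones(n_samples)) * n_samples
               for _ in range(n_runs)]

avg_weights = np.mean(weight_runs, axis=0)

single_run_var = np.mean([w.var() for w in weight_runs])
print(avg_weights.var() < single_run_var)  # True: averaging shrinks the spread
```

For independent runs the variance of the averaged weights drops roughly by a factor of the number of runs, which is the inflation SAWA targets.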
|
2502.07415
|
Quantification of model error for inverse problems in the Weak Neural
Variational Inference framework
|
stat.ML cs.LG
|
We present a novel extension of the Weak Neural Variational Inference (WNVI)
framework for probabilistic material property estimation that explicitly
quantifies model errors in PDE-based inverse problems. Traditional approaches
assume the correctness of all governing equations, including potentially
unreliable constitutive laws, which can lead to biased estimates and
misinterpretations. Our proposed framework addresses this limitation by
distinguishing between reliable governing equations, such as conservation laws,
and uncertain constitutive relationships. By treating all state variables as
latent random variables, we enforce these equations through separate sets of
residuals, leveraging a virtual likelihood approach with weighted residuals.
This formulation not only identifies regions where constitutive laws break down
but also improves robustness against model uncertainties without relying on a
fully trustworthy forward model. We demonstrate the effectiveness of our
approach in the context of elastography, showing that it provides a structured,
interpretable, and computationally efficient alternative to traditional model
error correction techniques. Our findings suggest that the proposed framework
enhances the accuracy and reliability of material property estimation by
offering a principled way to incorporate uncertainty in constitutive modeling.
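The virtual-likelihood idea of enforcing trusted and untrusted equations through separately weighted residual sets can be caricatured in a few lines; the 1-D residuals and precision values below are invented for illustration and are unrelated to the paper's actual elastography setup:

```python
import numpy as np

def log_virtual_likelihood(r_conservation, r_constitutive,
                           lam_cons=1e4, lam_const=1e1):
    """Gaussian-style pseudo-likelihood over two residual sets.

    A large precision (lam_cons) enforces the trusted conservation law
    almost exactly; a small precision (lam_const) lets the uncertain
    constitutive relation be violated where the data demand it.
    """
    return (-0.5 * lam_cons * np.sum(np.asarray(r_conservation) ** 2)
            - 0.5 * lam_const * np.sum(np.asarray(r_constitutive) ** 2))

# a state violating the constitutive law is penalised far less than one
# violating conservation by the same amount
r = np.array([0.1, -0.2])
print(log_virtual_likelihood(np.zeros(2), r) >
      log_virtual_likelihood(r, np.zeros(2)))  # True
```

The asymmetry in the two precisions is what lets inference flag regions where the constitutive law breaks down instead of forcing it to hold everywhere.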
|
2502.07417
|
Fast-COS: A Fast One-Stage Object Detector Based on Reparameterized
Attention Vision Transformer for Autonomous Driving
|
cs.CV
|
The perception system plays a critical role in an autonomous driving system
by ensuring safety. The driving scene perception system fundamentally
represents an object detection task that requires achieving a balance between
accuracy and processing speed. Many contemporary methods focus on improving
detection accuracy but often overlook the importance of real-time detection
capabilities when computational resources are limited. Thus, it is vital to
investigate efficient object detection strategies for driving scenes. This
paper introduces Fast-COS, a novel single-stage object detection framework
crafted specifically for driving scene applications. The research initiates
with an analysis of the backbone, considering both macro and micro
architectural designs, yielding the Reparameterized Attention Vision
Transformer (RAViT). RAViT utilizes Reparameterized Multi-Scale Depth-Wise
Convolution (RepMSDW) and Reparameterized Self-Attention (RepSA) to enhance
computational efficiency and feature extraction. In extensive tests across GPU,
edge, and mobile platforms, RAViT achieves 81.4% Top-1 accuracy on the
ImageNet-1K dataset, demonstrating significant throughput improvements over
comparable backbone models such as ResNet, FastViT, RepViT, and
EfficientFormer. Additionally, integrating RepMSDW into a feature pyramid
network forms RepFPN, enabling fast and multi-scale feature fusion. Fast-COS
enhances object detection in driving scenes, attaining an AP50 score of 57.2%
on the BDD100K dataset and 80.0% on the TJU-DHD Traffic dataset. It surpasses
leading models in efficiency, delivering up to 75.9% faster GPU inference and
1.38x higher throughput on edge devices compared to FCOS, YOLOF, and RetinaNet.
These findings establish Fast-COS as a highly scalable and reliable solution
suitable for real-time applications, especially in resource-limited
environments like autonomous driving systems.
|
2502.07418
|
Entity Linking using LLMs for Automated Product Carbon Footprint
Estimation
|
cs.CL
|
Growing concerns about climate change and sustainability are driving
manufacturers to take significant steps toward reducing their carbon
footprints. For these manufacturers, a first step towards this goal is to
identify the environmental impact of the individual components of their
products. We propose a system leveraging large language models (LLMs) to
automatically map components from manufacturer Bills of Materials (BOMs) to
Life Cycle Assessment (LCA) database entries by using LLMs to expand on
available component information. Our approach reduces the need for manual data
processing, paving the way for more accessible sustainability practices.
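A minimal sketch of the pipeline's shape, with the LLM expansion step mocked by a lookup table and matching done via stdlib string similarity. All component codes and database entries below are hypothetical, not from any real BOM or LCA database:

```python
from difflib import get_close_matches

# hypothetical LCA database entry names
lca_entries = [
    "aluminium sheet, primary",
    "steel, low-alloyed",
    "copper wire",
    "polyethylene, HDPE",
]

# in the real system an LLM expands terse BOM rows into descriptive
# names; here that step is mocked with a hand-written dictionary
llm_expansion = {
    "AL-SHT-01": "aluminium sheet, primary",
    "CU-W-22": "copper wire",
}

def link_component(bom_code):
    """Map a BOM component code to its closest LCA database entry."""
    expanded = llm_expansion.get(bom_code, bom_code)
    matches = get_close_matches(expanded, lca_entries, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(link_component("CU-W-22"))  # copper wire
```

The interesting engineering lives in the mocked step: the LLM turns an opaque code plus surrounding BOM context into a description rich enough for the matcher to link reliably.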
|
2502.07422
|
MoENAS: Mixture-of-Expert based Neural Architecture Search for jointly
Accurate, Fair, and Robust Edge Deep Neural Networks
|
cs.LG cs.CV
|
There has been a surge in optimizing edge Deep Neural Networks (DNNs) for
accuracy and efficiency using traditional optimization techniques such as
pruning, and more recently, employing automatic design methodologies. However,
the focus of these design techniques has often overlooked critical metrics such
as fairness, robustness, and generalization. As a result, when evaluating SOTA
edge DNNs' performance in image classification using the FACET dataset, we
found that they exhibit significant accuracy disparities (14.09%) across 10
different skin tones, alongside issues of non-robustness and poor
generalizability. In response to these observations, we introduce
Mixture-of-Experts-based Neural Architecture Search (MoENAS), an automatic
design technique that navigates through a space of mixture of experts to
discover accurate, fair, robust, and general edge DNNs. MoENAS improves the
accuracy by 4.02% compared to SOTA edge DNNs and reduces the skin tone accuracy
disparities from 14.09% to 5.60%, while enhancing robustness by 3.80% and
minimizing overfitting to 0.21%, all while keeping model size close to the
average size of state-of-the-art models (+0.4M). With these improvements, MoENAS
establishes a new benchmark for edge DNN design, paving the way for the
development of more inclusive and robust edge DNNs.
|
2502.07423
|
Towards a Formal Theory of the Need for Competence via Computational
Intrinsic Motivation
|
cs.AI
|
Computational models offer powerful tools for formalising psychological
theories, making them both testable and applicable in digital contexts.
However, they remain little used in the study of motivation within psychology.
We focus on the "need for competence", postulated as a key basic human need
within Self-Determination Theory (SDT) -- arguably the most influential
psychological framework for studying intrinsic motivation (IM). The need for
competence is treated as a single construct across SDT texts. Yet, recent
research has identified multiple, ambiguously defined facets of competence in
SDT. We propose that these inconsistencies may be alleviated by drawing on
computational models from the field of artificial intelligence, specifically
from the domain of reinforcement learning (RL). By aligning the aforementioned
facets of competence -- effectance, skill use, task performance, and capacity
growth -- with existing RL formalisms, we provide a foundation for advancing
competence-related theory in SDT and motivational psychology more broadly. The
formalisms reveal underlying preconditions that SDT fails to make explicit,
demonstrating how computational models can improve our understanding of IM.
Additionally, our work can support a cycle of theory development by inspiring
new computational models formalising aspects of the theory, which can then be
tested empirically to refine the theory. While our research lays a promising
foundation, empirical studies of these models in both humans and machines are
needed, inviting collaboration across disciplines.
|