| id | title | categories | abstract |
|---|---|---|---|
2502.02623
|
Sample Complexity of Bias Detection with Subsampled Point-to-Subspace
Distances
|
cs.LG cs.AI math.ST stat.TH
|
Sample complexity of bias estimation is a lower bound on the runtime of any
bias detection method. Many regulatory frameworks require the bias to be tested
for all subgroups, whose number grows exponentially with the number of
protected attributes. Unless one wishes to run bias detection with a
doubly-exponential run-time, one would like polynomial complexity of
bias detection for a single subgroup. At the same time, the reference data may
be based on surveys, and thus come with non-trivial uncertainty.
Here, we reformulate bias detection as a point-to-subspace problem on the
space of measures and show that, for supremum norm, it can be subsampled
efficiently. In particular, our probably approximately correct (PAC)
results are corroborated by tests on well-known instances.
|
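The practical upshot, that a supremum-norm distance can be estimated from a small random subsample of subgroup coordinates, can be sketched in a toy setting. This is an illustration only, not the paper's point-to-subspace construction; the function names and the encoding of subgroup rates as plain vectors are assumptions.

```python
import random

def sup_norm(p, q):
    """Exact supremum-norm distance between two subgroup-rate vectors."""
    return max(abs(a - b) for a, b in zip(p, q))

def subsampled_sup_norm(p, q, m, rng):
    """Estimate the sup norm from m randomly sampled coordinates.
    The estimate never exceeds the exact value; a modest subsample
    often suffices to come close to the maximizing coordinate."""
    idx = rng.sample(range(len(p)), m)
    return max(abs(p[i] - q[i]) for i in idx)

rng = random.Random(0)
p = [rng.random() for _ in range(10_000)]  # observed subgroup rates
q = [rng.random() for _ in range(10_000)]  # reference subgroup rates
estimate = subsampled_sup_norm(p, q, 200, rng)
exact = sup_norm(p, q)
assert 0.0 <= estimate <= exact  # subsampling can only under-estimate
```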
2502.02624
|
Muographic Image Upsampling with Machine Learning for Built
Infrastructure Applications
|
eess.IV cs.CV
|
The civil engineering industry faces a critical need for innovative
non-destructive evaluation methods, particularly for ageing critical
infrastructure, such as bridges, where current techniques fall short.
Muography, a non-invasive imaging technique, constructs three-dimensional
density maps by detecting interactions of naturally occurring cosmic-ray muons
within the scanned volume. Cosmic-ray muons provide deep penetration and
inherent safety due to their high momenta and natural source. However, the
technology's reliance on this source results in constrained muon flux, leading
to prolonged acquisition times, noisy reconstructions and image interpretation
challenges. To address these limitations, we developed a two-model deep
learning approach. First, we employed a conditional Wasserstein generative
adversarial network with gradient penalty (cWGAN-GP) to perform predictive
upsampling of undersampled muography images. Using the structural similarity
index measure (SSIM), 1-day sampled images matched the perceptual qualities of
a 21-day image, while the peak signal-to-noise ratio (PSNR) indicated noise
improvement equivalent to 31 days of sampling. A second cWGAN-GP model, trained
for semantic segmentation, quantitatively assessed the upsampling model's
impact on concrete sample features. This model achieved segmentation of rebar
grids and tendon ducts, with Dice-Sørensen accuracy coefficients of 0.8174
and 0.8663. Notably, it could mitigate or remove z-plane smearing artifacts
caused by muography's inverse imaging problem. Both models were trained on a
comprehensive Geant4 Monte-Carlo simulation dataset reflecting realistic civil
infrastructure scenarios. Our results demonstrate significant improvements in
acquisition speed and image quality, marking a substantial step toward making
muography more practical for reinforced concrete infrastructure monitoring
applications.
|
2502.02625
|
Bayesian Parameter Shift Rule in Variational Quantum Eigensolvers
|
cs.LG quant-ph
|
Parameter shift rules (PSRs) are key techniques for efficient gradient
estimation in variational quantum eigensolvers (VQEs). In this paper, we
propose a Bayesian variant, where Gaussian processes with appropriate kernels
are used to estimate the gradient of the VQE objective. Our Bayesian PSR offers
flexible gradient estimation from observations at arbitrary locations with
uncertainty information and reduces to the generalized PSR in special cases. In
stochastic gradient descent (SGD), the flexibility of Bayesian PSR allows the
reuse of observations in previous steps, which accelerates the optimization
process. Furthermore, access to the posterior uncertainty, along
with our proposed notion of gradient confident region (GradCoRe), enables us to
minimize the observation costs in each SGD step. Our numerical experiments show
that the VQE optimization with Bayesian PSR and GradCoRe significantly
accelerates SGD and outperforms the state-of-the-art methods, including
sequential minimal optimization.
|
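For context, the classical parameter shift rule that the Bayesian variant reduces to in special cases is easy to state: for an objective of the form f(θ) = A·cos(θ + φ) + C, as produced by a single Pauli rotation, two shifted evaluations give the exact gradient. A minimal sketch of that classical rule (the GP-based Bayesian machinery itself is not shown, and the toy objective is made up):

```python
import math

def psr_gradient(f, theta, shift=math.pi / 2):
    """Classical parameter shift rule: exact gradient for objectives of
    the form A*cos(theta + phi) + C, using two shifted evaluations."""
    return (f(theta + shift) - f(theta - shift)) / (2 * math.sin(shift))

# Toy VQE-style objective with a single rotation parameter.
f = lambda t: 0.7 * math.cos(t) + 0.1
g = psr_gradient(f, 0.3)
assert abs(g - (-0.7 * math.sin(0.3))) < 1e-12  # exact, not approximate
```

The Bayesian PSR described above generalizes this to observations at arbitrary locations, with posterior uncertainty attached to the gradient estimate.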
2502.02628
|
e-SimFT: Alignment of Generative Models with Simulation Feedback for
Pareto-Front Design Exploration
|
cs.LG cs.AI
|
Deep generative models have recently shown success in solving complex
engineering design problems where models predict solutions that address the
design requirements specified as input. However, there remains a challenge in
aligning such models for effective design exploration. For many design
problems, finding a solution that meets all the requirements is infeasible. In
such cases, engineers prefer to obtain a set of Pareto-optimal solutions with
respect to those requirements, but uniform sampling of generative models may
not yield a useful Pareto front. To address this gap, we introduce a new
framework for Pareto-front design exploration with simulation fine-tuned
generative models. First, the framework adopts preference alignment methods
developed for Large Language Models (LLMs) and showcases the first application
in fine-tuning a generative model for engineering design. The important
distinction here is that we use a simulator instead of humans to provide
accurate and scalable feedback. Next, we propose epsilon-sampling, inspired by
the epsilon-constraint method used for Pareto-front generation with classical
optimization algorithms, to construct a high-quality Pareto front with the
fine-tuned models. Our framework, named e-SimFT, is shown to produce
better-quality Pareto fronts than existing multi-objective alignment methods.
|
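The epsilon-constraint method that inspires epsilon-sampling can be illustrated on a plain candidate set: bound one objective by a sweep of epsilon values and keep the best remaining candidate on the other. This sketches the classical scalarization only, not e-SimFT's sampling of a fine-tuned generative model; the candidate designs below are made up.

```python
def epsilon_front(points, epsilons):
    """Trace a Pareto front (minimizing both objectives) by solving
    min f1 subject to f2 <= eps for a sweep of eps values."""
    front = []
    for eps in epsilons:
        feasible = [p for p in points if p[1] <= eps]
        if feasible:
            best = min(feasible, key=lambda p: p[0])
            if best not in front:
                front.append(best)
    return front

# Candidate designs as (objective1, objective2) pairs, both minimized.
designs = [(1, 5), (2, 3), (3, 4), (4, 2), (5, 1)]
front = epsilon_front(designs, epsilons=[5, 3, 2, 1])
assert front == [(1, 5), (2, 3), (4, 2), (5, 1)]  # (3, 4) is dominated
```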
2502.02629
|
Graph Structure Learning for Tumor Microenvironment with Cell Type
Annotation from non-spatial scRNA-seq data
|
q-bio.GN cs.AI cs.LG
|
The exploration of cellular heterogeneity within the tumor microenvironment
(TME) via single-cell RNA sequencing (scRNA-seq) is essential for understanding
cancer progression and response to therapy. Current scRNA-seq approaches,
however, lack spatial context and rely on incomplete datasets of
ligand-receptor interactions (LRIs), limiting accurate cell type annotation and
cell-cell communication (CCC) inference. This study addresses these challenges
using a novel graph neural network (GNN) model that enhances cell type
prediction and cell interaction analysis. Our study utilized a dataset
consisting of 49,020 cells from 19 patients across three cancer types:
Leukemia, Breast Invasive Carcinoma, and Colorectal Cancer. The proposed scGSL
model demonstrated robust performance, achieving an average accuracy of 84.83%,
precision of 86.23%, recall of 81.51%, and an F1 score of 80.92% across all
datasets. These metrics represent a significant enhancement over existing
methods, which typically exhibit lower performance metrics. Additionally, by
reviewing existing literature on gene interactions within the TME, we show that
the scGSL model robustly identifies biologically meaningful gene interactions
in an unsupervised manner, validated by significant expression differences in
key gene pairs across various cancers. The source code and data used in this
paper are available at https://github.com/LiYuechao1998/scGSL.
|
2502.02630
|
scBIT: Integrating Single-cell Transcriptomic Data into fMRI-based
Prediction for Alzheimer's Disease Diagnosis
|
q-bio.QM cs.AI cs.LG
|
Functional MRI (fMRI) and single-cell transcriptomics are pivotal in
Alzheimer's disease (AD) research, each providing unique insights into neural
function and molecular mechanisms. However, integrating these complementary
modalities remains largely unexplored. Here, we introduce scBIT, a novel method
for enhancing AD prediction by combining fMRI with single-nucleus RNA (snRNA).
scBIT leverages snRNA as an auxiliary modality, significantly improving
fMRI-based prediction models and providing comprehensive interpretability. It
employs a sampling strategy to segment snRNA data into cell-type-specific gene
networks and utilizes a self-explainable graph neural network to extract
critical subgraphs. Additionally, we use demographic and genetic similarities
to pair snRNA and fMRI data across individuals, enabling robust cross-modal
learning. Extensive experiments validate scBIT's effectiveness in revealing
intricate brain region-gene associations and enhancing diagnostic prediction
accuracy. By advancing brain imaging transcriptomics to the single-cell level,
scBIT sheds new light on biomarker discovery in AD research. Experimental
results show that incorporating snRNA data into the scBIT model significantly
boosts accuracy, improving binary classification by 3.39% and five-class
classification by 26.59%. The code is implemented in Python and has been
released on GitHub (https://github.com/77YQ77/scBIT) and Zenodo
(https://zenodo.org/records/11599030) with detailed instructions.
|
2502.02631
|
ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization
|
cs.LG cs.AI cs.CL cs.CV
|
The optimal bit-width for achieving the best trade-off between quantized
model size and accuracy has been a subject of ongoing debate. While some
advocate for 4-bit quantization, others propose that 1.58-bit offers superior
results. However, the lack of a cohesive framework for different bits has left
such conclusions relatively tenuous. We present ParetoQ, the first unified
framework that facilitates rigorous comparisons across 1-bit, 1.58-bit, 2-bit,
3-bit, and 4-bit quantization settings. Our findings reveal a notable learning
transition between 2 and 3 bits: at 3 bits and above, the fine-tuned models
stay close to their original pre-trained distributions, whereas at 2 bits and
below, the representations change drastically. By optimizing
training schemes and refining quantization functions, ParetoQ surpasses all
previous methods tailored to specific bit widths. Remarkably, our ParetoQ
ternary 600M-parameter model even outperforms the previous SoTA ternary
3B-parameter model in accuracy, using only one-fifth of the parameters.
Extensive experimentation shows that ternary, 2-bit, and 3-bit quantization
maintains comparable performance in the size-accuracy trade-off and generally
exceeds 4-bit and binary quantization. Considering hardware constraints, 2-bit
quantization offers promising potential for memory reduction and speedup.
|
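As a concrete reference point for the sub-4-bit regime discussed above, a generic ternary (1.58-bit) quantizer maps each weight to {-1, 0, +1} with a per-tensor scale. The 0.7-times-mean-magnitude threshold below is the classic ternary-weight-network heuristic, assumed here purely for illustration; it is not ParetoQ's refined quantization function.

```python
def ternary_quantize(w):
    """Quantize weights to {-1, 0, +1} with a per-tensor scale.
    Threshold = 0.7 * mean(|w|) (classic TWN heuristic); the scale is
    the mean magnitude of the weights that survive thresholding."""
    mean_abs = sum(abs(x) for x in w) / len(w)
    delta = 0.7 * mean_abs
    q = [1 if x > delta else -1 if x < -delta else 0 for x in w]
    kept = [abs(x) for x, qi in zip(w, q) if qi != 0]
    scale = sum(kept) / len(kept) if kept else 0.0
    return q, scale

q, scale = ternary_quantize([0.8, -0.6, 0.05])
assert q == [1, -1, 0]          # small weight rounds to zero
assert abs(scale - 0.7) < 1e-9  # scale = mean(|0.8|, |-0.6|)
```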
2502.02649
|
Fully Autonomous AI Agents Should Not be Developed
|
cs.AI
|
This paper argues that fully autonomous AI agents should not be developed. In
support of this position, we build from prior scientific literature and current
product marketing to delineate different AI agent levels and detail the ethical
values at play in each, documenting trade-offs in potential benefits and risks.
Our analysis reveals that risks to people increase with the autonomy of a
system: The more control a user cedes to an AI agent, the more risks to people
arise. Particularly concerning are safety risks, which affect human life and
impact further values.
|
2502.02657
|
SiLVR: Scalable Lidar-Visual Radiance Field Reconstruction with
Uncertainty Quantification
|
cs.RO cs.CV
|
We present a neural radiance field (NeRF) based large-scale reconstruction
system that fuses lidar and vision data to generate high-quality
reconstructions that are geometrically accurate and capture photorealistic
texture. Our system adopts the state-of-the-art NeRF representation to
additionally incorporate lidar. Adding lidar data adds strong geometric
constraints on the depth and surface normals, which is particularly useful when
modelling uniformly textured surfaces, which provide only ambiguous visual
reconstruction cues. Furthermore, we estimate the epistemic uncertainty of the
reconstruction as the spatial variance of each point location in the radiance
field given the sensor observations from camera and lidar. This enables the
identification of areas that are reliably reconstructed by each sensor
modality, allowing the map to be filtered according to the estimated
uncertainty. Our system can also exploit the trajectory produced by a real-time
pose-graph lidar SLAM system during online mapping to bootstrap a
(post-processed) Structure-from-Motion (SfM) reconstruction procedure reducing
SfM training time by up to 70%. It also helps to properly constrain the overall
metric scale which is essential for the lidar depth loss. The
globally-consistent trajectory can then be divided into submaps using Spectral
Clustering to group sets of co-visible images together. This submapping
approach is more suitable for visual reconstruction than distance-based
partitioning. Each submap is filtered according to point-wise uncertainty
estimates and merged to obtain the final large-scale 3D reconstruction. We
demonstrate the reconstruction system using a multi-camera, lidar sensor suite
in experiments involving both robot-mounted and handheld scanning. Our test
datasets cover a total area of more than 20,000 square metres, including
multiple university buildings and an aerial survey of a multi-storey.
|
2502.02659
|
A Training-Free Length Extrapolation Approach for LLMs: Greedy Attention
Logit Interpolation (GALI)
|
cs.CL cs.AI
|
Transformer-based Large Language Models (LLMs) struggle to process inputs
exceeding their training context window, with performance degrading due to
positional out-of-distribution (O.O.D.) inputs that disrupt attention
computations. Existing solutions, both fine-tuning and training-free methods,
are limited by computational inefficiency, attention logit outliers, or loss of
local positional information. To address this, we propose Greedy Attention Logit
Interpolation (GALI), a training-free length extrapolation method that
maximizes the utilization of pretrained positional intervals while avoiding
attention logit outliers through attention logit interpolation. Our results
demonstrate that GALI consistently outperforms state-of-the-art training-free
methods. Our findings reveal that LLMs interpret positional intervals unevenly
within their training context window, suggesting that extrapolating within a
smaller positional interval range yields superior results, even for
short-context tasks. GALI represents a significant step toward resolving the
positional O.O.D. challenge, enabling more reliable long-text understanding in
LLMs. Our implementation of GALI, along with the experiments from our paper, is
open-sourced at https://github.com/AcademyCityL/GALI.
|
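For orientation, the simplest training-free baseline in this space is plain positional interpolation: linearly squeeze the positions of a long input into the pretrained window. GALI's contribution is precisely to go beyond this uniform remapping, greedily and at the attention-logit level, so the sketch below shows only the baseline idea, with made-up sizes:

```python
def interpolate_positions(seq_len, train_window):
    """Linearly map positions 0..seq_len-1 into [0, train_window-1],
    so every position stays inside the pretrained range."""
    if seq_len <= train_window:
        return list(range(seq_len))
    scale = (train_window - 1) / (seq_len - 1)
    return [i * scale for i in range(seq_len)]

pos = interpolate_positions(8, 4)
assert pos[-1] == 3.0                # last position lands on the window edge
assert all(p <= 3.0 for p in pos)    # nothing is out of distribution
```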
2502.02663
|
Learning to Double Guess: An Active Perception Approach for Estimating
the Center of Mass of Arbitrary Objects
|
cs.RO cs.LG
|
Manipulating arbitrary objects in unstructured environments is a significant
challenge in robotics, primarily due to difficulties in determining an object's
center of mass. This paper introduces U-GRAPH: Uncertainty-Guided Rotational
Active Perception with Haptics, a novel framework to enhance the center of mass
estimation using active perception. Traditional methods often rely on single
interaction and are limited by the inherent inaccuracies of Force-Torque (F/T)
sensors. Our approach circumvents these limitations by integrating a Bayesian
Neural Network (BNN) to quantify uncertainty and guide the robotic system
through multiple, information-rich interactions via grid search and a neural
network that scores each action. We demonstrate the generalizability and
transferability of our method: trained on a small dataset with limited
variation, it still performs well on unseen, complex real-world objects.
|
2502.02664
|
Differentiable Composite Neural Signed Distance Fields for Robot
Navigation in Dynamic Indoor Environments
|
cs.RO
|
Neural Signed Distance Fields (SDFs) provide a differentiable environment
representation to readily obtain collision checks and well-defined gradients
for robot navigation tasks. However, updating neural SDFs as the scene evolves
entails re-training, which is tedious, time-consuming, and inefficient, making
it unsuitable for robot navigation with a limited field of view in dynamic
environments. To address this, we propose a compositional framework of
neural SDFs to solve robot navigation in indoor environments using only an
onboard RGB-D sensor. Our framework embodies a dual mode procedure for
trajectory optimization, with different modes using complementary methods of
modeling collision costs and collision avoidance gradients. The primary stage
queries the robot body's SDF, swept along the route to goal, at the obstacle
point cloud, enabling swift local optimization of trajectories. The secondary
stage infers the visible scene's SDF by aligning and composing the SDF
representations of its constituents, providing better informed costs and
gradients for trajectory optimization. The dual mode procedure combines the
best of both stages, achieving a success rate of 98%, 14.4% higher than the
baseline, with comparable amortized planning time on iGibson 2.0. We also
demonstrate its effectiveness in adapting to real-world indoor scenarios.
|
2502.02666
|
Deep Reinforcement Learning Enabled Persistent Surveillance with
Energy-Aware UAV-UGV Systems for Disaster Management Applications
|
cs.RO
|
Integrating Unmanned Aerial Vehicles (UAVs) with Unmanned Ground Vehicles
(UGVs) provides an effective solution for persistent surveillance in disaster
management. UAVs excel at covering large areas rapidly, but their range is
limited by battery capacity. UGVs, though slower, can carry larger batteries
for extended missions. By using UGVs as mobile recharging stations, UAVs can
extend their mission duration through periodic recharging, leveraging the
complementary strengths of both systems. To optimize this energy-aware UAV-UGV
cooperative routing problem, we propose a planning framework that determines
optimal routes and recharging points between a UAV and a UGV. Our solution
employs a deep reinforcement learning (DRL) framework built on an
encoder-decoder transformer architecture with multi-head attention mechanisms.
This architecture enables the model to sequentially select actions for visiting
mission points and coordinating recharging rendezvous between the UAV and UGV.
The DRL model is trained to minimize the age periods (the time gap between
consecutive visits) of mission points, ensuring effective surveillance. We
evaluate the framework across various problem sizes and distributions,
comparing its performance against heuristic methods and an existing
learning-based model. Results show that our approach consistently outperforms
these baselines in both solution quality and runtime. Additionally, we
demonstrate the DRL policy's applicability in a real-world disaster scenario as
a case study and explore its potential for online mission planning to handle
dynamic changes. Adapting the DRL policy for priority-driven surveillance
highlights the model's generalizability for real-time disaster response.
|
2502.02668
|
Recovering Imbalanced Clusters via Gradient-Based Projection Pursuit
|
cs.LG
|
Projection Pursuit is a classic exploratory technique for finding interesting
projections of a dataset. We propose a method for recovering projections
containing either Imbalanced Clusters or a Bernoulli-Rademacher distribution
using a gradient-based technique to optimize the projection index. As sample
complexity is a major limiting factor in Projection Pursuit, we analyze our
algorithm's sample complexity within a Planted Vector setting, where we observe
that Imbalanced Clusters can be recovered more easily than balanced
ones. Additionally, we give a generalized result that works for a variety of
data distributions and projection indices. We compare these results to
computational lower bounds in the Low-Degree-Polynomial Framework. Finally, we
experimentally evaluate our method's applicability to real-world data using
FashionMNIST and the Human Activity Recognition Dataset, where our algorithm
outperforms others when only a few samples are available.
|
2502.02669
|
Distributed Prescribed-Time Observer for Nonlinear Systems
|
eess.SY cs.SY
|
This paper proposes a distributed prescribed-time observer for nonlinear
systems representable in a block-triangular observable canonical form. Using a
weighted average of neighbor estimates exchanged over a strongly connected
digraph, each observer estimates the system state despite limited local sensor
measurements. The proposed design guarantees that distributed state estimation
errors converge to zero at a user-specified convergence time, irrespective of
observers' initial conditions. To achieve this prescribed-time convergence,
distributed observers implement time-varying local output injection gains that
asymptotically approach infinity as the prescribed time is approached. The
theoretical convergence is rigorously proven and validated through numerical
simulations.
|
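The mechanism of time-varying gains that grow unbounded near the deadline can be seen in a scalar toy example: the error dynamics e' = -(k/(T - t))·e have the closed-form solution e(t) = e(0)·((T - t)/T)^k, which vanishes at exactly t = T for any initial condition. A minimal Euler simulation of this scalar case (illustrative only, not the paper's distributed block-triangular design; gains and times are arbitrary):

```python
def simulate_prescribed_time(e0, k=3.0, T=1.0, dt=1e-4):
    """Euler-integrate e' = -(k/(T - t)) * e up to just before t = T.
    The gain k/(T - t) grows without bound as t -> T, driving the
    error to zero by the prescribed time regardless of e0."""
    e, t = e0, 0.0
    while t < T - dt:
        e += dt * (-(k / (T - t)) * e)
        t += dt
    return e

e_final = simulate_prescribed_time(e0=5.0)
assert abs(e_final) < 5e-3  # error essentially gone by the deadline T
```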
2502.02671
|
On Teacher Hacking in Language Model Distillation
|
cs.LG cs.AI cs.CL stat.ML
|
Post-training of language models (LMs) increasingly relies on the following
two stages: (i) knowledge distillation, where the LM is trained to imitate a
larger teacher LM, and (ii) reinforcement learning from human feedback (RLHF),
where the LM is aligned by optimizing a reward model. In the second RLHF stage,
a well-known challenge is reward hacking, where the LM over-optimizes the
reward model. This phenomenon is in line with Goodhart's law and can lead to
degraded performance on the true objective. In this paper, we investigate
whether a similar phenomenon, which we call teacher hacking, can occur during
knowledge distillation. This could arise because the teacher LM is itself an
imperfect approximation of the true distribution. To study this, we propose a
controlled experimental setup involving: (i) an oracle LM representing the
ground-truth distribution, (ii) a teacher LM distilled from the oracle, and
(iii) a student LM distilled from the teacher. Our experiments reveal the
following insights. When using a fixed offline dataset for distillation,
teacher hacking occurs; moreover, we can detect it by observing when the
optimization process deviates from polynomial convergence laws. In contrast,
employing online data generation techniques effectively mitigates teacher
hacking. More precisely, we identify data diversity as the key factor in
preventing hacking. Overall, our findings provide a deeper understanding of the
benefits and limitations of distillation for building robust and efficient LMs.
|
2502.02672
|
Transformers Boost the Performance of Decision Trees on Tabular Data
across Sample Sizes
|
cs.CL cs.LG
|
Large language models (LLMs) perform remarkably well on tabular datasets in
zero- and few-shot settings, since they can extract meaning from natural
language column headers that describe features and labels. Similarly, TabPFN, a
recent non-LLM transformer pretrained on numerous tables for in-context
learning, has demonstrated excellent performance for dataset sizes up to a
thousand samples. In contrast, gradient-boosted decision trees (GBDTs) are
typically trained from scratch on each dataset without benefiting from
pretraining data and must learn the relationships between columns from their
entries alone since they lack natural language understanding. LLMs and TabPFN
excel on small tabular datasets where a strong prior is essential, yet they are
not competitive with GBDTs on medium or large datasets, since their context
lengths are limited. In this paper, we propose a simple and lightweight
approach for fusing large language models and TabPFN with gradient-boosted
decision trees, which allows scalable GBDTs to benefit from the natural
language capabilities and pretraining of transformers. We name our fusion
methods LLM-Boost and PFN-Boost, respectively. While matching or surpassing the
performance of the transformer at sufficiently small dataset sizes and GBDTs at
sufficiently large sizes, LLM-Boost and PFN-Boost outperform both standalone
components on a wide range of dataset sizes in between. We demonstrate
state-of-the-art performance against numerous baselines and ensembling
algorithms. We find that PFN-Boost achieves the best average performance among
all methods we test for all but very small dataset sizes. We release our code
at http://github.com/MayukaJ/LLM-Boost .
|
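One simple way to realize such a fusion, shown purely as an illustration and not claimed to be LLM-Boost/PFN-Boost's exact mechanism, is residual boosting: treat the transformer's prediction as a prior and fit trees to its residuals, so the tree ensemble only has to learn what the prior misses. A single-stump sketch with a made-up constant prior:

```python
def fit_stump(xs, residuals):
    """Fit a one-split regression stump minimizing squared error."""
    best = None
    for thr in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= thr]
        right = [r for x, r in zip(xs, residuals) if x > thr]
        lm = sum(left) / len(left) if left else 0.0
        rm = sum(right) / len(right) if right else 0.0
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    _, thr, lm, rm = best
    return lambda x: lm if x <= thr else rm

prior = lambda x: 0.5  # stands in for an LLM/TabPFN prediction
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]
stump = fit_stump(xs, [y - prior(x) for x, y in zip(xs, ys)])
fused = [prior(x) + stump(x) for x in xs]
assert fused == ys  # the stump corrects exactly what the prior misses
```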
2502.02673
|
MedRAX: Medical Reasoning Agent for Chest X-ray
|
cs.LG cs.AI cs.MA
|
Chest X-rays (CXRs) play an integral role in driving critical decisions in
disease management and patient care. While recent innovations have led to
specialized models for various CXR interpretation tasks, these solutions often
operate in isolation, limiting their practical utility in clinical practice. We
present MedRAX, the first versatile AI agent that seamlessly integrates
state-of-the-art CXR analysis tools and multimodal large language models into a
unified framework. MedRAX dynamically leverages these models to address complex
medical queries without requiring additional training. To rigorously evaluate
its capabilities, we introduce ChestAgentBench, a comprehensive benchmark
containing 2,500 complex medical queries across 7 diverse categories. Our
experiments demonstrate that MedRAX achieves state-of-the-art performance
compared to both open-source and proprietary models, representing a significant
step toward the practical deployment of automated CXR interpretation systems.
Data and code are publicly available at
https://github.com/bowang-lab/MedRAX
|
2502.02676
|
Blind Visible Watermark Removal with Morphological Dilation
|
cs.CV cs.CR cs.LG
|
Visible watermarks pose significant challenges for image restoration
techniques, especially when the target background is unknown. To address this,
we present MorphoMod, a novel method for automated visible watermark removal
that operates in a blind setting -- without requiring target images. Unlike
existing methods, MorphoMod effectively removes opaque and transparent
watermarks while preserving semantic content, making it well-suited for
real-world applications. Evaluations on benchmark datasets, including the
Colored Large-scale Watermark Dataset (CLWD), LOGO-series, and the newly
introduced Alpha1 datasets, demonstrate that MorphoMod achieves up to a 50.8%
improvement in watermark removal effectiveness compared to state-of-the-art
methods. Ablation studies highlight the impact of prompts used for inpainting,
pre-removal filling strategies, and inpainting model performance on watermark
removal. Additionally, a case study on steganographic disorientation reveals
broader applications for watermark removal in disrupting high-level hidden
messages. MorphoMod offers a robust, adaptable solution for watermark removal
and opens avenues for further advancements in image restoration and adversarial
manipulation.
|
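The morphological dilation in the title is a standard mask operation: grow the detected watermark mask by a structuring element so that subsequent inpainting also covers watermark fringes. A minimal binary dilation with a 3x3 square element (illustrative only; MorphoMod's actual pipeline has more stages):

```python
def dilate(mask):
    """Binary dilation with a 3x3 square structuring element: a pixel
    becomes 1 if any of its 8-neighbours (or itself) is 1."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = int(any(
                mask[i + di][j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if 0 <= i + di < h and 0 <= j + dj < w
            ))
    return out

mask = [
    [0, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
assert dilate(mask) == [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
]
```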
2502.02679
|
Networks with Finite VC Dimension: Pro and Contra
|
stat.ML cs.LG
|
Approximation and learning of classifiers of large data sets by neural
networks are investigated in terms of high-dimensional geometry and
statistical learning theory. The influence of the VC dimension of sets of input-output
functions of networks on approximation capabilities is compared with its
influence on consistency in learning from samples of data. It is shown that,
whereas finite VC dimension is desirable for uniform convergence of empirical
errors, it may not be desirable for approximation of functions drawn from a
probability distribution modeling the likelihood that they occur in a given
type of application. Based on the concentration-of-measure properties of high
dimensional geometry, it is proven that both errors in approximation and
empirical errors behave almost deterministically for networks implementing sets
of input-output functions with finite VC dimension when processing large data
sets. Practical limitations of the universal approximation property, the
trade-offs between the accuracy of approximation and consistency in learning
from data, and the influence of depth of networks with ReLU units on their
accuracy and consistency are discussed.
|
2502.02681
|
Building Bridges between Users and Content across Multiple Platforms
during Natural Disasters
|
cs.SI
|
Social media is a primary medium for information diffusion during natural
disasters. The social media ecosystem has been used to identify destruction,
analyze opinions and organize aid. While the overall picture and aggregate
trends may be important, a crucial part of the picture is the connections on
these sites. These bridges are essential to facilitate information flow within
the network. In this work, we perform a multi-platform analysis (X, Reddit,
YouTube) of Hurricanes Helene and Milton, which occurred in quick succession in
the US in late 2024. We construct network graphs to understand
the properties of effective bridging content and users. We find that bridges
tend to exist on X, that bridging content is complex, and that bridging users
have relatable affiliations related to gender, race and job. Public
organizations can use these characteristics to manage their social media
personas during natural disasters more effectively.
|
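The graph-theoretic notion of a bridge invoked above is concrete: an edge whose removal disconnects the network, i.e. the only path carrying information between two communities. A brute-force bridge finder for a small undirected graph (illustrative; the paper's bridging analysis additionally considers content and user attributes):

```python
def connected(adj, nodes):
    """BFS/DFS reachability check over all nodes."""
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == set(nodes)

def bridges(edges):
    """Edges whose removal disconnects the graph (brute force)."""
    nodes = {u for e in edges for u in e}
    out = []
    for e in edges:
        rest = [x for x in edges if x != e]
        adj = {}
        for u, v in rest:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        for u in nodes:           # keep isolated endpoints as nodes
            adj.setdefault(u, [])
        if not connected(adj, nodes):
            out.append(e)
    return out

# A triangle a-b-c with a pendant node d: only c-d is a bridge.
assert bridges([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]) == [("c", "d")]
```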
2502.02682
|
Pseudo-Physics-Informed Neural Operators: Enhancing Operator Learning
from Limited Data
|
cs.LG physics.comp-ph
|
Neural operators have shown great potential in surrogate modeling. However,
training a well-performing neural operator typically requires a substantial
amount of data, which can pose a major challenge in complex applications. In
such scenarios, detailed physical knowledge can be unavailable or difficult to
obtain, and collecting extensive data is often prohibitively expensive. To
mitigate this challenge, we propose the Pseudo Physics-Informed Neural Operator
(PPI-NO) framework. PPI-NO constructs a surrogate physics system for the target
system using partial differential equations (PDEs) derived from simple,
rudimentary physics principles, such as basic differential operators. This
surrogate system is coupled with a neural operator model, using an alternating
update and learning process to iteratively enhance the model's predictive
power. While the physics derived via PPI-NO may not mirror the ground-truth
underlying physical laws -- hence the term ``pseudo physics'' -- this approach
significantly improves the accuracy of standard operator learning models in
data-scarce scenarios, which is evidenced by extensive evaluations across five
benchmark tasks and a fatigue modeling application.
|
2502.02683
|
Streaming Speaker Change Detection and Gender Classification for
Transducer-Based Multi-Talker Speech Translation
|
cs.SD cs.AI cs.CL eess.AS
|
Streaming multi-talker speech translation is a task that involves not only
generating accurate and fluent translations with low latency but also
recognizing when a speaker change occurs and what the speaker's gender is.
Speaker change information can be used to create audio prompts for a zero-shot
text-to-speech system, and gender can help to select speaker profiles in a
conventional text-to-speech model. We propose to tackle streaming speaker
change detection and gender classification by incorporating speaker embeddings
into a transducer-based streaming end-to-end speech translation model. Our
experiments demonstrate that the proposed methods can achieve high accuracy for
both speaker change detection and gender classification.
|
2502.02684
|
Three-dimensional signal processing: a new approach in dynamical
sampling via tensor products
|
eess.SP cs.IT cs.LG math.IT
|
The dynamical sampling problem is centered around reconstructing signals that
evolve over time according to a dynamical process, from spatial-temporal
samples that may be noisy. This topic has been thoroughly explored for
one-dimensional signals. Multidimensional signal recovery has also been
studied, but primarily in scenarios where the driving operator is a convolution
operator. In this work, we shift our focus to the dynamical sampling problem in
the context of three-dimensional signal recovery, where the evolution system
can be characterized by tensor products. Specifically, we provide a necessary
condition for the sampling set that ensures successful recovery of the
three-dimensional signal. Furthermore, we reformulate the reconstruction
problem as an optimization task, which can be solved efficiently. To
demonstrate the effectiveness of our approach, we include some straightforward
numerical simulations that showcase the reconstruction performance.
|
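The reconstruction-as-optimization idea in the abstract above can be illustrated in miniature. This is a hypothetical simplification, not the paper's tensor-product machinery: in dynamical sampling one observes samples of A^t x at several times t; stacking the sampled rows yields one linear system in the initial signal x, solved here by least squares.

```python
import numpy as np

def reconstruct_initial_signal(sample_rows, samples):
    """Illustrative least-squares sketch of dynamical sampling recovery:
    each entry of sample_rows is the sampled row(s) of A^t at time t, and
    samples holds the corresponding noisy measurements y_t."""
    M = np.vstack(sample_rows)      # stacked sampling rows S_t A^t
    y = np.concatenate(samples)     # stacked measurements
    x, *_ = np.linalg.lstsq(M, y, rcond=None)
    return x
```

With two time steps sampling rows [1, 0] and [1, 1] and measurements 1 and 3, the recovered initial signal is [1, 2].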
2502.02685
|
A Methodology for Process Design Kit Re-Centering Using TCAD and
Experimental Data for Cryogenic Temperatures
|
physics.app-ph cond-mat.mes-hall cs.SY eess.SY
|
In this work, we describe and demonstrate a novel Technology Computer Aided
Design (TCAD) driven methodology that allows measurement data from 'non-ideal'
silicon wafers to be used for re-centering a room temperature-based Process
Design Kit (PDK) to cryogenic temperatures. This comprehensive approach holds
promise for advancing cryogenic CMOS design in the absence of foundry supplied
cryogenic PDKs.
|
2502.02687
|
NDKF: A Neural-Enhanced Distributed Kalman Filter for Nonlinear
Multi-Sensor Estimation
|
eess.SY cs.SY
|
We propose a Neural-Enhanced Distributed Kalman Filter (NDKF) for
multi-sensor state estimation in nonlinear systems. Unlike traditional Kalman
filters that rely on explicit, linear models and centralized data fusion, the
NDKF leverages neural networks to learn both the system dynamics and
measurement functions directly from data. Each sensor node performs local
prediction and update steps using these learned models and exchanges only
compact summary information with its neighbors via a consensus-based fusion
process, which reduces communication overhead and eliminates a single point of
failure. Our theoretical convergence analysis establishes sufficient conditions
under which the local linearizations of the neural models guarantee overall
filter stability and provides a solid foundation for the proposed approach.
Simulation studies on a 2D system with four partially observing nodes
demonstrate that the NDKF significantly outperforms a distributed Extended
Kalman Filter, highlighting its potential to improve the scalability,
robustness, and accuracy of distributed state estimation in complex nonlinear
environments.
|
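The consensus-based fusion step mentioned above can be sketched generically. This is an illustrative averaging round, not the NDKF's exact fusion rule: each node nudges its local state estimate toward the mean of its neighbors' estimates, exchanging only these compact summaries.

```python
def consensus_fusion(estimates, neighbors, weight=0.5):
    """One consensus averaging round over scalar state estimates.
    estimates: {node: local estimate}; neighbors: {node: [neighbor ids]}.
    weight controls how far each node moves toward its neighbors' mean."""
    fused = {}
    for node, x in estimates.items():
        nbrs = neighbors[node]
        nbr_mean = sum(estimates[j] for j in nbrs) / len(nbrs)
        fused[node] = (1 - weight) * x + weight * nbr_mean
    return fused
```

Repeating such rounds drives all nodes toward a common estimate without routing every measurement through a central fusion center.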
2502.02688
|
Efficient Implementation of the Global Cardinality Constraint with Costs
|
cs.AI cs.DS
|
The success of Constraint Programming relies partly on the global constraints
and implementation of the associated filtering algorithms. Recently, new ideas
emerged to improve these implementations in practice, especially regarding the
all different constraint. In this paper, we consider the cardinality constraint
with costs. The cardinality constraint is a generalization of the all different
constraint that specifies the number of times each value must be taken by a
given set of variables in a solution. The version with costs introduces an
assignment cost and bounds the total sum of assignment costs. The arc
consistency filtering algorithm of this constraint is difficult to use in
practice, as it systematically searches for many shortest paths. We propose a
new approach that works with upper bounds on shortest paths based on landmarks.
This approach can be seen as a preprocessing. It is fast and avoids, in
practice, a large number of explicit computations of shortest paths.
|
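The landmark preprocessing idea described above rests on the triangle inequality. As a minimal sketch (illustrative form, not the paper's full filtering algorithm): for an undirected graph with precomputed distances from each landmark L, d(u, v) <= d_L(u) + d_L(v), so the minimum over landmarks is a valid shortest-path upper bound that requires no new search.

```python
def landmark_upper_bound(landmark_dists, u, v):
    """Upper bound on the shortest path d(u, v) via landmarks.
    landmark_dists is a list of precomputed distance maps, one per
    landmark L, mapping each vertex to its distance from L."""
    return min(d[u] + d[v] for d in landmark_dists)
```

In the arc consistency filtering described above, such cheap bounds can rule out many explicit shortest-path computations.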
2502.02689
|
Multidimensional Swarm Flight Approach For Chasing Unauthorized UAVs
Leveraging Asynchronous Deep Learning
|
eess.SY cs.SY
|
This paper introduces a novel unmanned aerial vehicles (UAV) chasing system
designed to track and chase unauthorized UAVs, significantly enhancing the
effectiveness of their neutralization.
|
2502.02690
|
Controllable Video Generation with Provable Disentanglement
|
cs.CV cs.AI cs.LG
|
Controllable video generation remains a significant challenge, despite recent
advances in generating high-quality and consistent videos. Most existing
methods for controlling video generation treat the video as a whole, neglecting
intricate fine-grained spatiotemporal relationships, which limits both control
precision and efficiency. In this paper, we propose Controllable Video
Generative Adversarial Networks (CoVoGAN) to disentangle the video concepts,
thus facilitating efficient and independent control over individual concepts.
Specifically, following the minimal change principle, we first disentangle
static and dynamic latent variables. We then leverage the sufficient change
property to achieve component-wise identifiability of dynamic latent variables,
enabling independent control over motion and identity. To establish the
theoretical foundation, we provide a rigorous analysis demonstrating the
identifiability of our approach. Building on these theoretical insights, we
design a Temporal Transition Module to disentangle latent dynamics. To enforce
the minimal change principle and sufficient change property, we minimize the
dimensionality of latent dynamic variables and impose temporal conditional
independence. To validate our approach, we integrate this module as a plug-in
for GANs. Extensive qualitative and quantitative experiments on various video
generation benchmarks demonstrate that our method significantly improves
generation quality and controllability across diverse real-world scenarios.
|
2502.02692
|
Intelligent Sensing-to-Action for Robust Autonomy at the Edge:
Opportunities and Challenges
|
cs.RO cs.CV cs.LG
|
Autonomous edge computing in robotics, smart cities, and autonomous vehicles
relies on the seamless integration of sensing, processing, and actuation for
real-time decision-making in dynamic environments. At its core is the
sensing-to-action loop, which iteratively aligns sensor inputs with
computational models to drive adaptive control strategies. These loops can
adapt to hyper-local conditions, enhancing resource efficiency and
responsiveness, but also face challenges such as resource constraints,
synchronization delays in multi-modal data fusion, and the risk of cascading
errors in feedback loops. This article explores how proactive, context-aware
sensing-to-action and action-to-sensing adaptations can enhance efficiency by
dynamically adjusting sensing and computation based on task demands, such as
sensing a very limited part of the environment and predicting the rest. By
guiding sensing through control actions, action-to-sensing pathways can improve
task relevance and resource use, but they also require robust monitoring to
prevent cascading errors and maintain reliability. Multi-agent sensing-action
loops further extend these capabilities through coordinated sensing and actions
across distributed agents, optimizing resource use via collaboration.
Additionally, neuromorphic computing, inspired by biological systems, provides
an efficient framework for spike-based, event-driven processing that conserves
energy, reduces latency, and supports hierarchical control--making it ideal for
multi-agent optimization. This article highlights the importance of end-to-end
co-design strategies that align algorithmic models with hardware and
environmental dynamics and improve cross-layer interdependencies to improve
throughput, precision, and adaptability for energy-efficient edge autonomy in
complex environments.
|
2502.02696
|
How Inclusively do LMs Perceive Social and Moral Norms?
|
cs.CL
|
This paper discusses and contains offensive content. Language models (LMs)
are used in decision-making systems and as interactive assistants. However,
how well do the judgements these models make align with the diversity of human
values, particularly regarding social and moral norms? In this work, we
investigate how inclusively LMs perceive norms across demographic groups (e.g.,
gender, age, and income). We prompt 11 LMs on rules-of-thumb (RoTs) and compare
their outputs with the existing responses of 100 human annotators. We introduce
the Absolute Distance Alignment Metric (ADA-Met) to quantify alignment on
ordinal questions. We find notable disparities in LM responses, with younger,
higher-income groups showing closer alignment, raising concerns about the
representation of marginalized perspectives. Our findings highlight the
importance of further efforts to make LMs more inclusive of diverse human
values. The code and prompts are available on GitHub under the CC BY-NC 4.0
license.
|
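The abstract does not spell out ADA-Met's formula, so the following is an assumed form for illustration only: alignment on ordinal (Likert-style) answers measured as one minus the normalized mean absolute distance between model and human responses.

```python
def ada_met(model_answers, human_answers, num_levels=5):
    """Hypothetical absolute-distance alignment score on ordinal answers.
    Answers are integers in [0, num_levels - 1]; returns 1.0 for perfect
    alignment and 0.0 for maximal disagreement."""
    assert len(model_answers) == len(human_answers)
    max_dist = num_levels - 1
    total = sum(abs(m - h) for m, h in zip(model_answers, human_answers))
    return 1.0 - total / (max_dist * len(model_answers))
```

Unlike exact-match accuracy, a distance-based score credits near-misses on ordinal scales, which matters when comparing against 100 annotators' graded responses.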
2502.02700
|
Scalable Higher Resolution Polar Sea Ice Classification and Freeboard
Calculation from ICESat-2 ATL03 Data
|
cs.LG
|
ICESat-2 (IS2) by NASA is an Earth-observing satellite that measures
high-resolution surface elevation. IS2's ATL07 and ATL10 sea ice elevation
and freeboard products consist of 10m-200m segments, each aggregating roughly
150 signal photons from the raw ATL03 (geolocated photon) data. These
aggregated products can
potentially overestimate local sea surface height, thus underestimating the
calculations of freeboard (sea ice height above sea surface). To achieve a
higher resolution of sea surface height and freeboard information, in this work
we utilize a 2m window to resample the ATL03 data. Then, we classify these 2m
segments into thick sea ice, thin ice, and open water using deep learning
methods (Long short-term memory and Multi-layer perceptron models). To obtain
labeled training data for our deep learning models, we use segmented Sentinel-2
(S2) multi-spectral imagery overlapping with IS2 tracks in space and time to
auto-label IS2 data, followed by some manual corrections in the regions of
transition between different ice/water types or cloudy regions. We employ a
parallel workflow for this auto-labeling using PySpark to scale, and we achieve
9-fold data loading and 16.25-fold map-reduce speedup. To train our models, we
employ a Horovod-based distributed deep-learning workflow on a DGX A100 8 GPU
cluster, achieving a 7.25-fold speedup. Next, we calculate the local sea
surface heights based on the open water segments. Finally, we scale the
freeboard calculation using the derived local sea level and achieve 8.54-fold
data loading and 15.7-fold map-reduce speedup. Compared with the ATL07 (local
sea level) and ATL10 (freeboard) data products, our results show higher
resolutions and accuracy (96.56%).
|
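The freeboard step described above can be sketched as follows. This is an assumed simplified form: the local sea surface height is estimated as the mean elevation of segments classified as open water, and each ice segment's freeboard is its elevation above that level.

```python
def freeboard_from_segments(elevations, labels):
    """Compute per-segment freeboard (ice height above local sea surface).
    elevations: segment surface elevations; labels: classifier outputs,
    e.g. "open_water", "thin_ice", "thick_ice"."""
    water = [e for e, lab in zip(elevations, labels) if lab == "open_water"]
    sea_level = sum(water) / len(water)   # local sea surface height
    return [e - sea_level if lab != "open_water" else 0.0
            for e, lab in zip(elevations, labels)]
```

Finer 2m segments give more open-water samples per neighborhood, which is why the higher-resolution classification above can reduce the freeboard underestimation of the coarser ATL07/ATL10 products.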
2502.02701
|
Practically Effective Adjustment Variable Selection in Causal Inference
|
cs.LG cs.AI physics.data-an stat.ME
|
In the estimation of causal effects, one common method for removing the
influence of confounders is to adjust the variables that satisfy the back-door
criterion. However, it is not always possible to uniquely determine sets of
such variables. Moreover, real-world data is almost always limited, which means
it may be insufficient for statistical estimation. Therefore, we propose
criteria for selecting variables from a list of candidate adjustment variables
along with an algorithm to prevent accuracy degradation in causal effect
estimation. We initially focus on directed acyclic graphs (DAGs) and then
outline specific steps for applying this method to completed partially
directed acyclic graphs (CPDAGs). We also present and prove a theorem on causal
effect computation possibility in CPDAGs. Finally, we demonstrate the practical
utility of our method using both existing and artificial data.
|
2502.02703
|
Developing multilingual speech synthesis system for Ojibwe, Mi'kmaq, and
Maliseet
|
cs.CL cs.AI cs.LG cs.SD eess.AS
|
We present lightweight flow matching multilingual text-to-speech (TTS)
systems for Ojibwe, Mi'kmaq, and Maliseet, three Indigenous languages in North
America. Our results show that training a multilingual TTS model on three
typologically similar languages can improve the performance over monolingual
models, especially when data are scarce. Attention-free architectures are
highly competitive with self-attention architectures while offering higher
memory efficiency. Our research not only advances technical development for the
revitalization of low-resource languages but also highlights the cultural gap
in human evaluation protocols, calling for a more community-centered approach
to human evaluation.
|
2502.02705
|
Rapidly Adapting Policies to the Real World via Simulation-Guided
Fine-Tuning
|
cs.RO cs.LG
|
Robot learning requires a considerable amount of high-quality data to realize
the promise of generalization. However, large data sets are costly to collect
in the real world. Physics simulators can cheaply generate vast data sets with
broad coverage over states, actions, and environments. However, physics engines
are fundamentally misspecified approximations to reality. This makes direct
zero-shot transfer from simulation to reality challenging, especially in tasks
where precise and force-sensitive manipulation is necessary. Thus, fine-tuning
these policies with small real-world data sets is an appealing pathway for
scaling robot learning. However, current reinforcement learning fine-tuning
frameworks leverage general, unstructured exploration strategies which are too
inefficient to make real-world adaptation practical. This paper introduces the
Simulation-Guided Fine-tuning (SGFT) framework, which demonstrates how to
extract structural priors from physics simulators to substantially accelerate
real-world adaptation. Specifically, our approach uses a value function learned
in simulation to guide real-world exploration. We demonstrate this approach
across five real-world dexterous manipulation tasks where zero-shot sim-to-real
transfer fails. We further demonstrate our framework substantially outperforms
baseline fine-tuning methods, requiring up to an order of magnitude fewer
real-world samples and succeeding at difficult tasks where prior approaches
fail entirely. Last but not least, we provide theoretical justification for
this new paradigm which underpins how SGFT can rapidly learn high-performance
policies in the face of large sim-to-real dynamics gaps. Project webpage:
https://weirdlabuw.github.io/sgft/
|
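One standard way a simulation-learned value function can guide real-world exploration is potential-based reward shaping; the sketch below is illustrative and not necessarily SGFT's exact mechanism. The shaping term gamma * V(s') - V(s) rewards transitions toward states the simulator's value function considers promising, without changing the optimal policy.

```python
def shaped_reward(r, v_sim, s, s_next, gamma=0.99):
    """Potential-based reward shaping with a simulation-learned value
    function v_sim: the real environment reward r is augmented by the
    change in estimated value along the transition s -> s_next."""
    return r + gamma * v_sim(s_next) - v_sim(s)
```

During real-world fine-tuning, this densifies the learning signal so far fewer real samples are needed than with unstructured exploration.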
2502.02707
|
Multiple Instance Learning with Coarse-to-Fine Self-Distillation
|
cs.CV
|
Multiple Instance Learning (MIL) for whole slide image (WSI) analysis in
computational pathology often neglects instance-level learning as supervision
is typically provided only at the bag level. In this work, we present PathMIL,
a framework designed to improve MIL through two perspectives: (1) employing
instance-level supervision and (2) learning inter-instance contextual
information on bag level. Firstly, we propose a novel Coarse-to-Fine
Self-Distillation (CFSD) paradigm, to probe and distil a classifier trained
with bag-level information to obtain instance-level labels which could
effectively provide the supervision for the same classifier in a finer way.
Secondly, to capture inter-instance contextual information in WSI, we propose
Two-Dimensional Positional Encoding (2DPE), which encodes the spatial
appearance of instances within a bag. We also theoretically and empirically
prove the instance-level learnability of CFSD. PathMIL is evaluated on multiple
benchmarking tasks, including subtype classification (TCGA-NSCLC), tumour
classification (CAMELYON16), and an internal benchmark for breast cancer
receptor status classification. Our method achieves state-of-the-art
performance, with AUC scores of 0.9152 and 0.8524 for estrogen and progesterone
receptor status classification, respectively, an AUC of 0.9618 for subtype
classification, and 0.8634 for tumour classification, surpassing existing
methods.
|
2502.02709
|
Enforcing Demographic Coherence: A Harms Aware Framework for Reasoning
about Private Data Release
|
cs.CR cs.DB
|
The technical literature about data privacy largely consists of two
complementary approaches: formal definitions of conditions sufficient for
privacy preservation and attacks that demonstrate privacy breaches.
Differential privacy is an accepted standard in the former sphere. However,
differential privacy's powerful adversarial model and worst-case guarantees may
make it too stringent in some situations, especially when achieving it comes at
a significant cost to data utility. Meanwhile, privacy attacks aim to expose
real and worrying privacy risks associated with existing data release processes
but often face criticism for being unrealistic. Moreover, the literature on
attacks generally does not identify what properties are necessary to defend
against them.
We address the gap between these approaches by introducing demographic
coherence, a condition inspired by privacy attacks that we argue is necessary
for data privacy. This condition captures privacy violations arising from
inferences about individuals that are incoherent with respect to the
demographic patterns in the data. Our framework focuses on confidence-rated
predictors, which can in turn be distilled from almost any data-informed
process. Thus, we capture privacy threats that exist even when no attack is
explicitly being carried out. Our framework not only provides a condition with
respect to which data release algorithms can be analysed but suggests natural
experimental evaluation methodologies that could be used to build practical
intuition and make tangible assessment of risks. Finally, we argue that
demographic coherence is weaker than differential privacy: we prove that every
differentially private data release is also demographically coherent, and that
there are demographically coherent algorithms which are not differentially
private.
|
2502.02710
|
Achievable distributional robustness when the robust risk is only
partially identified
|
stat.ML cs.LG
|
In safety-critical applications, machine learning models should generalize
well under worst-case distribution shifts, that is, have a small robust risk.
Invariance-based algorithms can provably take advantage of structural
assumptions on the shifts when the training distributions are heterogeneous
enough to identify the robust risk. However, in practice, such identifiability
conditions are rarely satisfied -- a scenario so far underexplored in the
theoretical literature. In this paper, we aim to fill the gap and propose to
study the more general setting when the robust risk is only partially
identifiable. In particular, we introduce the worst-case robust risk as a new
measure of robustness that is always well-defined regardless of
identifiability. Its minimum corresponds to an algorithm-independent
(population) minimax quantity that measures the best achievable robustness
under partial identifiability. While these concepts can be defined more
broadly, in this paper we introduce and derive them explicitly for a linear
model for concreteness of the presentation. First, we show that existing
robustness methods are provably suboptimal in the partially identifiable case.
We then evaluate these methods and the minimizer of the (empirical) worst-case
robust risk on real-world gene expression data and find a similar trend: the
test error of existing robustness methods grows increasingly suboptimal as the
fraction of data from unseen environments increases, whereas accounting for
partial identifiability allows for better generalization.
|
2502.02711
|
Tensor Network Structure Search Using Program Synthesis
|
cs.CE cs.PL
|
Tensor networks provide a powerful framework for compressing
multi-dimensional data. The optimal tensor network structure for a given data
tensor depends on both the inherent data properties and the specific optimality
criteria, making tensor network structure search a crucial research problem.
Existing solutions typically involve sampling and validating numerous candidate
structures; this is computationally expensive, limiting their practical
applications. We address this challenge by formulating tensor network structure
search as a program synthesis problem and proposing a highly efficient
validation method that is based on constraint solving. Specifically, we design
a domain specific language: it builds the correspondence between programs and
network structures, and uses a novel idea of output-directed splits to compress
the search space without hindering the expressiveness. We then propose a
synthesis algorithm that can prioritize promising candidates through constraint
solving. Experimental results show that our approach improves search speed by
$10\times$ and achieves compression ratios $1.5\times$ to $3\times$ better
than the state of the art. Notably, our approach scales to larger tensors that are
out of reach by prior work. Finally, we demonstrate that the discovered
topologies generalize to data from the same source, achieving compression
ratios up to $2.4\times$ better than hierarchical Tucker decompositions while maintaining
the runtime around $110$ seconds.
|
2502.02715
|
An Analysis of LLM Fine-Tuning and Few-Shot Learning for Flaky Test
Detection and Classification
|
cs.SE cs.AI
|
Flaky tests exhibit non-deterministic behavior during execution and they may
pass or fail without any changes to the program under test. Detecting and
classifying these flaky tests is crucial for maintaining the robustness of
automated test suites and ensuring the overall reliability and confidence in
the testing. However, flaky test detection and classification is challenging
due to the variability in test behavior, which can depend on environmental
conditions and subtle code interactions. Large Language Models (LLMs) offer
promising approaches to address this challenge, with fine-tuning and few-shot
learning (FSL) emerging as viable techniques. With enough data, fine-tuning a
pre-trained LLM can achieve high accuracy, making it suitable for organizations
with more resources. Alternatively, we introduce FlakyXbert, an FSL approach
that employs a Siamese network architecture to train efficiently with limited
data. To understand the performance and cost differences between these two
methods, we compare fine-tuning on larger datasets with FSL in scenarios
restricted by smaller datasets. Our evaluation involves two existing flaky test
datasets, FlakyCat and IDoFT. Our results suggest that while fine-tuning can
achieve high accuracy, FSL provides a cost-effective approach with competitive
accuracy, which is especially beneficial for organizations or projects with
limited historical data available for training. These findings underscore the
viability of both fine-tuning and FSL in flaky test detection and
classification with each suited to different organizational needs and resource
availability.
|
2502.02716
|
A Unified Understanding and Evaluation of Steering Methods
|
cs.LG cs.CL
|
Steering methods provide a practical approach to controlling large language
models by applying steering vectors to intermediate activations, guiding
outputs toward desired behaviors while avoiding retraining. Despite their
growing importance, the field lacks a unified understanding and consistent
evaluation across tasks and datasets, hindering progress. This paper introduces
a unified framework for analyzing and evaluating steering methods, formalizing
their core principles and offering theoretical insights into their
effectiveness. Through comprehensive empirical evaluations on multiple-choice
and open-ended text generation tasks, we validate these insights, identifying
key factors that influence performance and demonstrating the superiority of
certain methods. Our work bridges theoretical and practical perspectives,
offering actionable guidance for advancing the design, optimization, and
deployment of steering methods in LLMs.
|
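The core operation that the abstract above formalizes can be sketched generically (this is the common pattern in the steering literature, not any single paper's method): build a steering vector, often as a difference of mean activations over contrastive prompt sets, then add a scaled copy of it to a layer's intermediate activations.

```python
import numpy as np

def difference_of_means(pos_acts, neg_acts):
    """Build a steering vector as the difference of mean activations over
    contrastive prompt sets (e.g. desired vs. undesired behavior)."""
    return np.mean(pos_acts, axis=0) - np.mean(neg_acts, axis=0)

def apply_steering(hidden, steering_vec, alpha=1.0):
    """Add a scaled steering vector to intermediate activations, nudging
    the model toward the target behavior without any retraining."""
    return np.asarray(hidden, dtype=float) + alpha * np.asarray(steering_vec, dtype=float)
```

The scale alpha and the choice of layer are exactly the kind of design factors a unified evaluation framework, as proposed above, can compare across methods.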
2502.02717
|
Astromer 2
|
astro-ph.IM cs.AI cs.LG
|
Foundational models have emerged as a powerful paradigm in the deep learning
field, leveraging their capacity to learn robust representations from
large-scale datasets and transfer effectively to diverse downstream
applications such as classification. In this paper, we present Astromer 2, a
foundational model specifically designed for extracting light curve
embeddings. We introduce
Astromer 2 as an enhanced iteration of our self-supervised model for light
curve analysis. This paper highlights the advantages of its pre-trained
embeddings, compares its performance with that of its predecessor, Astromer 1,
and provides a detailed empirical analysis of its capabilities, offering deeper
insights into the model's representations. Astromer 2 is pretrained on 1.5
million single-band light curves from the MACHO survey using a self-supervised
learning task that predicts randomly masked observations within sequences.
Fine-tuning on a smaller labeled dataset allows us to assess its performance in
classification tasks. The quality of the embeddings is measured by the F1 score
of an MLP classifier trained on Astromer-generated embeddings. Our results
demonstrate that Astromer 2 significantly outperforms Astromer 1 across all
evaluated scenarios, including limited datasets of 20, 100, and 500 samples per
class. The use of weighted per-sample embeddings, which integrate intermediate
representations from Astromer's attention blocks, is particularly impactful.
Notably, Astromer 2 achieves a 15% improvement in F1 score on the ATLAS dataset
compared to prior models, showcasing robust generalization to new datasets.
This enhanced performance, especially with minimal labeled data, underscores
the potential of Astromer 2 for more efficient and scalable light curve
analysis.
|
2502.02719
|
Beyond Topological Self-Explainable GNNs: A Formal Explainability
Perspective
|
cs.LG
|
Self-Explainable Graph Neural Networks (SE-GNNs) are popular
explainable-by-design GNNs, but the properties and the limitations of their
explanations are not well understood. Our first contribution fills this gap by
formalizing the explanations extracted by SE-GNNs, referred to as Trivial
Explanations (TEs), and comparing them to established notions of explanations,
namely Prime Implicant (PI) and faithful explanations. Our analysis reveals
that TEs match PI explanations for a restricted but significant family of
tasks. In general, however, they can be less informative than PI explanations
and are surprisingly misaligned with widely accepted notions of faithfulness.
Although faithful and PI explanations are informative, they are intractable to
find and we show that they can be prohibitively large. Motivated by this, we
propose Dual-Channel GNNs that integrate a white-box rule extractor and a
standard SE-GNN, adaptively combining both channels when the task benefits. Our
experiments show that even a simple instantiation of Dual-Channel GNNs can
recover succinct rules and perform on par or better than widely used SE-GNNs.
Our code can be found in the supplementary material.
|
2502.02722
|
Cross-Lingual Transfer for Low-Resource Natural Language Processing
|
cs.CL
|
Natural Language Processing (NLP) has seen remarkable advances in recent
years, particularly with the emergence of Large Language Models that have
achieved unprecedented performance across many tasks. However, these
developments have mainly benefited a small number of high-resource languages
such as English. The majority of languages still face significant challenges
due to the scarcity of training data and computational resources. To address
this issue, this thesis focuses on cross-lingual transfer learning, a research
area aimed at leveraging data and models from high-resource languages to
improve NLP performance for low-resource languages. Specifically, we focus on
Sequence Labeling tasks such as Named Entity Recognition, Opinion Target
Extraction, and Argument Mining.
The research is structured around three main objectives: (1) advancing
data-based cross-lingual transfer learning methods through improved translation
and annotation projection techniques, (2) developing enhanced model-based
transfer learning approaches utilizing state-of-the-art multilingual models,
and (3) applying these methods to real-world problems while creating
open-source resources that facilitate future research in low-resource NLP.
More specifically, this thesis presents a new method to improve data-based
transfer with T-Projection, a state-of-the-art annotation projection method
that leverages text-to-text multilingual models and machine translation
systems. T-Projection significantly outperforms previous annotation projection
methods by a wide margin. For model-based transfer, we introduce a constrained
decoding algorithm that enhances cross-lingual Sequence Labeling in zero-shot
settings using text-to-text models. Finally, we develop Medical mT5, the first
multilingual text-to-text medical model, demonstrating the practical impact of
our research on real-world applications.
|
2502.02723
|
Dobi-SVD: Differentiable SVD for LLM Compression and Some New
Perspectives
|
cs.LG
|
We provide a new LLM-compression solution via SVD, unlocking new
possibilities for LLM compression beyond quantization and pruning. We point out
that the optimal use of SVD lies in truncating activations, rather than merely
using activations as an optimization distance. Building on this principle, we
address three critical challenges in SVD-based LLM compression: (1)
How can we determine the optimal activation truncation position for each weight
matrix in LLMs? (2) How can we efficiently reconstruct the weight matrices
based on truncated activations? (3) How can we address the inherent "injection"
nature that results in the information loss of the SVD? We propose Dobi-SVD,
which establishes a new, principled approach to SVD-based LLM compression.
|
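For context, the baseline that SVD-based LLM compression builds on can be shown in a few lines. This is generic truncated-SVD weight factorization, not the Dobi-SVD algorithm itself: a weight matrix W is replaced by two low-rank factors, cutting storage from m*n to (m + n) * rank values.

```python
import numpy as np

def svd_compress(W, rank):
    """Replace W with low-rank factors A @ B via truncated SVD, keeping
    only the top `rank` singular directions."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (m, rank), singular values folded in
    B = Vt[:rank, :]             # (rank, n)
    return A, B
```

Choosing the truncation point per weight matrix, and recovering the information this truncation discards, are precisely challenges (1)-(3) that the abstract raises.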
2502.02725
|
The Design of On-Body Robots for Older Adults
|
cs.RO cs.HC
|
Wearable technology has significantly improved the quality of life for older
adults, and the emergence of on-body, movable robots presents new opportunities
to further enhance well-being. Yet, the interaction design for these robots
remains under-explored, particularly from the perspective of older adults. We
present findings from a two-phase co-design process involving 13 older adults
to uncover design principles for on-body robots for this population. We
identify a rich spectrum of potential applications and characterize a design
space to inform how on-body robots should be built for older adults. Our
findings highlight the importance of considering factors like co-presence,
embodiment, and multi-modal communication. Our work offers design insights to
facilitate the integration of on-body robots into daily life and underscores
the value of involving older adults in the co-design process to promote
usability and acceptance of emerging wearable robotic technologies.
|
2502.02727
|
Parameter Tracking in Federated Learning with Adaptive Optimization
|
cs.LG cs.AI cs.DC
|
In Federated Learning (FL), model training performance is strongly impacted
by data heterogeneity across clients. Gradient Tracking (GT) has recently
emerged as a solution which mitigates this issue by introducing correction
terms to local model updates. To date, GT has only been considered under
Stochastic Gradient Descent (SGD)-based model training, while modern FL
frameworks increasingly employ adaptive optimizers for improved convergence. In
this work, we generalize the GT framework to a more flexible Parameter Tracking
(PT) paradigm and propose two novel adaptive optimization algorithms, {\tt
FAdamET} and {\tt FAdamGT}, that integrate PT into Adam-based FL. We provide a
rigorous convergence analysis of these algorithms under non-convex settings.
Our experimental results demonstrate that both proposed algorithms consistently
outperform existing methods when evaluating total communication cost and total
computation cost across varying levels of data heterogeneity, showing the
effectiveness of correcting first-order information in federated adaptive
optimization.
|
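The correction idea behind gradient/parameter tracking can be sketched generically. This is an illustrative corrected SGD-style step, not the paper's FAdamET or FAdamGT: each client adds a correction term, tracking the gap between global and local gradient information, to offset drift caused by data heterogeneity.

```python
def corrected_local_step(w, local_grad, correction, lr):
    """One corrected local update: the client descends along its local
    gradient plus a correction term supplied by the tracking mechanism."""
    return [wi - lr * (gi + ci)
            for wi, gi, ci in zip(w, local_grad, correction)]
```

With correction terms set to zero this reduces to plain local SGD, which is the setting where heterogeneous clients drift apart.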
2502.02732
|
Peri-LN: Revisiting Layer Normalization in the Transformer Architecture
|
cs.LG cs.AI cs.CL
|
Designing Transformer architectures with the optimal layer normalization (LN)
strategy that ensures large-scale training stability and expedites convergence
has remained elusive, even in this era of large language models (LLMs). To this
end, we present a comprehensive analytical foundation for understanding how
different LN strategies influence training dynamics in large-scale Transformer
training. Pre-LN and Post-LN have long dominated standard practice despite
their limitations in large-scale training. However, several
open-source large-scale models have recently begun silently adopting a third
strategy without much explanation. This strategy places layer normalization
(LN) peripherally around sublayers, a design we term Peri-LN. While Peri-LN has
demonstrated promising empirical performance, its precise mechanisms and
benefits remain almost unexplored. Our in-depth analysis shows that Peri-LN
strikes an ideal balance in variance growth -- unlike Pre-LN and Post-LN, which
are prone to vanishing gradients and ``massive activations.'' To validate our
theoretical insight, we conduct large-scale experiments on Transformers up to
3.2B parameters, showing that Peri-LN consistently achieves more balanced
variance growth, steadier gradient flow, and convergence stability. Our results
suggest that Peri-LN warrants broader consideration for large-scale Transformer
architectures, providing renewed insights into the optimal placement and
application of LN.
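The three placements can be contrasted in a small numpy sketch. The Peri-LN form below (normalizing both the input and the output of each sublayer before the residual add) is our reading of "LN placed peripherally around sublayers", and the over-scaled linear sublayer is an artificial stand-in used to provoke activation growth:

```python
import numpy as np

def ln(x, eps=1e-5):
    # plain layer normalization over the feature axis (no learned gain/bias)
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def post_ln(x, W):  return ln(x + x @ W)        # normalize after the residual add
def pre_ln(x, W):   return x + ln(x) @ W        # normalize the sublayer input only
def peri_ln(x, W):  return x + ln(ln(x) @ W)    # normalize input AND output

stds = {}
for block in (post_ln, pre_ln, peri_ln):
    rng = np.random.default_rng(0)
    h = rng.normal(size=(4, 64))
    for _ in range(32):                          # stack 32 residual blocks
        W = 4.0 * rng.normal(size=(64, 64)) / np.sqrt(64)   # over-scaled sublayer
        h = block(h, W)
    stds[block.__name__] = float(np.std(h))
print(stds)  # Post-LN pins the scale, Pre-LN blows up, Peri-LN grows moderately
```

In this toy, Post-LN fixes the hidden-state scale at every depth, Pre-LN lets the residual stream inherit the sublayer's (here inflated) scale, and Peri-LN caps each residual increment at unit variance, giving the intermediate, balanced growth the abstract describes.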
|
2502.02735
|
A Modal-Based Approach for System Frequency Response and Frequency Nadir
Prediction
|
eess.SY cs.SY
|
This letter introduces a novel approach for predicting system frequency
response and frequency nadir by leveraging modal information. It significantly
differentiates from traditional methods rooted in the average system frequency
model. The proposed methodology targets system modes associated with the slower
dynamics of the grid, enabling precise predictions through modal decomposition
applied to the full system model. This decomposition facilitates an analytical
solution for the frequency at the center of inertia, resulting in highly
accurate predictions of both frequency response and nadir. Numerical results
from a 39-bus, 10-machine test system verify the method's effectiveness and
accuracy. This methodology represents a shift from observing a simplified
average system frequency response to a more detailed analysis focusing on
system dynamics.
|
2502.02737
|
SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language
Model
|
cs.CL
|
While large language models have facilitated breakthroughs in many
applications of artificial intelligence, their inherent largeness makes them
computationally expensive and challenging to deploy in resource-constrained
settings. In this paper, we document the development of SmolLM2, a
state-of-the-art "small" (1.7 billion parameter) language model (LM). To attain
strong performance, we overtrain SmolLM2 on ~11 trillion tokens of data using a
multi-stage training process that mixes web text with specialized math, code,
and instruction-following data. We additionally introduce new specialized
datasets (FineMath, Stack-Edu, and SmolTalk) at stages where we found existing
datasets to be problematically small or low-quality. To inform our design
decisions, we perform both small-scale ablations and a manual refinement
process that updates the dataset mixing rates at each stage based on the
performance at the previous stage. Ultimately, we demonstrate that SmolLM2
outperforms other recent small LMs including Qwen2.5-1.5B and Llama3.2-1B. To
facilitate future research on LM development as well as applications of small
LMs, we release both SmolLM2 as well as all of the datasets we prepared in the
course of this project.
|
2502.02740
|
Vision-Language Model Dialog Games for Self-Improvement
|
cs.LG cs.AI
|
The increasing demand for high-quality, diverse training data poses a
significant bottleneck in advancing vision-language models (VLMs). This paper
presents VLM Dialog Games, a novel and scalable self-improvement framework for
VLMs. Our approach leverages self-play between two agents engaged in a
goal-oriented play centered around image identification. By filtering for
successful game interactions, we automatically curate a high-quality dataset of
interleaved images and text. We demonstrate that fine-tuning on this synthetic
data leads to performance gains on downstream tasks and generalises across
datasets. Moreover, as the improvements in the model lead to better game play,
this procedure can be applied iteratively. This work paves the way for
self-improving VLMs, with potential applications in various real-world
scenarios, especially when high-quality multimodal data is scarce.
|
2502.02741
|
RFMedSAM 2: Automatic Prompt Refinement for Enhanced Volumetric Medical
Image Segmentation with SAM 2
|
cs.CV
|
Segment Anything Model 2 (SAM 2), a prompt-driven foundation model extending
SAM to both image and video domains, has shown superior zero-shot performance
compared to its predecessor. Building on SAM's success in medical image
segmentation, SAM 2 presents significant potential for further advancement.
However, similar to SAM, SAM 2 is limited by its output of binary masks,
inability to infer semantic labels, and dependence on precise prompts for the
target object area. Additionally, direct application of SAM and SAM 2 to
medical image segmentation tasks yields suboptimal results. In this paper, we
explore the upper performance limit of SAM 2 using custom fine-tuning adapters,
achieving a Dice Similarity Coefficient (DSC) of 92.30% on the BTCV dataset,
surpassing the state-of-the-art nnUNet by 12%. Following this, we address the
prompt dependency by investigating various prompt generators. We introduce a
UNet to autonomously generate predicted masks and bounding boxes, which serve
as input to SAM 2. Subsequent dual-stage refinements by SAM 2 further enhance
performance. Extensive experiments show that our method achieves
state-of-the-art results on the AMOS2022 dataset, with a Dice improvement of
2.9% compared to nnUNet, and outperforms nnUNet by 6.4% on the BTCV dataset.
|
2502.02743
|
LLM Bandit: Cost-Efficient LLM Generation via Preference-Conditioned
Dynamic Routing
|
cs.LG
|
The rapid advancement in large language models (LLMs) has brought forth a
diverse range of models with varying capabilities that excel in different tasks
and domains. However, selecting the optimal LLM for user queries often involves
a challenging trade-off between accuracy and cost, a problem exacerbated by the
diverse demands of individual queries. In this work, we present a novel
framework that formulates the LLM selection process as a multi-armed bandit
problem, enabling dynamic and intelligent routing of queries to the most
appropriate model. Our approach incorporates a preference-conditioned dynamic
routing mechanism, allowing users to specify their preferences at inference
time, thereby offering a customizable balance between performance and cost.
Additionally, our selection policy is designed to generalize to unseen LLMs,
ensuring adaptability to new models as they emerge. Experimental results
demonstrate that our method achieves significant improvements in both accuracy
and cost-effectiveness across various LLM platforms, showcasing the potential
of our framework to adaptively optimize LLM selection in real-world scenarios.
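A minimal sketch of preference-conditioned routing as a bandit, using our own toy epsilon-greedy policy and made-up accuracy/cost numbers rather than the paper's method:

```python
import random
random.seed(0)

MODELS = {"small": (0.70, 1.0), "large": (0.90, 10.0)}  # assumed (accuracy, cost) per model

def run(lam, queries=2000, eps=0.1):
    acc_est = {m: 1.0 for m in MODELS}     # optimistic init to force exploration
    counts = {m: 0 for m in MODELS}
    for _ in range(queries):
        if random.random() < eps:
            m = random.choice(list(MODELS))
        else:  # preference-conditioned score: estimated accuracy minus cost penalty
            m = max(MODELS, key=lambda k: acc_est[k] - lam * MODELS[k][1] / 10.0)
        reward = 1.0 if random.random() < MODELS[m][0] else 0.0
        counts[m] += 1
        acc_est[m] += (reward - acc_est[m]) / counts[m]   # running accuracy estimate
    return counts

quality_first = run(lam=0.0)   # no cost penalty: traffic flows to "large"
cost_aware = run(lam=0.5)      # cost penalty: traffic flows to "small"
print(quality_first, cost_aware)
```

Changing the single preference weight `lam` at inference time shifts routing between the accurate-but-expensive and cheap-but-weaker model, which is the accuracy/cost dial the abstract describes.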
|
2502.02747
|
PatchPilot: A Stable and Cost-Efficient Agentic Patching Framework
|
cs.RO cs.AI cs.CR
|
Recent research builds various patching agents that combine large language
models (LLMs) with non-ML tools and achieve promising results on the
state-of-the-art (SOTA) software patching benchmark, SWE-Bench. Based on how the
patching workflow is determined, existing patching agents can be categorized
as agent-based planning methods, which rely on LLMs for planning, and
human-based planning methods, which follow a pre-defined workflow. At a high
level, agent-based planning methods achieve high patching performance but with
a high cost and limited stability. Human-based planning methods, on the other
hand, are more stable and efficient but have key workflow limitations that
compromise their patching performance. In this paper, we propose PatchPilot, an
agentic patcher that strikes a balance between patching efficacy, stability,
and cost-efficiency. PatchPilot proposes a novel human-based planning workflow
with five components: reproduction, localization, generation, validation, and
refinement (where refinement is unique to PatchPilot). We introduce novel and
customized designs to each component to optimize their effectiveness and
efficiency. Through extensive experiments on the SWE-Bench benchmarks,
PatchPilot shows superior performance compared to existing open-source methods
while maintaining low cost (less than $1 per instance) and higher stability.
We also conduct a detailed ablation study to validate the key designs in each
component.
|
2502.02748
|
ReGNet: Reciprocal Space-Aware Long-Range Modeling and Multi-Property
Prediction for Crystals
|
cs.LG cond-mat.mtrl-sci
|
Predicting properties of crystals from their structures is a fundamental yet
challenging task in materials science. Unlike molecules, crystal structures
exhibit infinite periodic arrangements of atoms, requiring methods capable of
capturing both local and global information effectively. However, most current
works fall short of capturing long-range interactions within periodic
structures. To address this limitation, we leverage reciprocal space to
efficiently encode long-range interactions with learnable filters within
Fourier transforms. We introduce Reciprocal Geometry Network (ReGNet), a novel
architecture that integrates geometric GNNs and reciprocal blocks to model
short-range and long-range interactions, respectively. Additionally, we
introduce ReGNet-MT, a multi-task extension that employs mixture of experts
(MoE) for multi-property prediction. Experimental results on the JARVIS and
Materials Project benchmarks demonstrate that ReGNet achieves significant
performance improvements. Moreover, ReGNet-MT attains state-of-the-art results
on two bandgap properties due to positive transfer, while maintaining high
computational efficiency. These findings highlight the potential of our model
as a scalable and accurate solution for crystal property prediction. The code
will be released upon paper acceptance.
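The reciprocal-space idea can be sketched generically: a filter applied in Fourier space couples all positions at O(n log n) cost. The 1D setup and fixed Gaussian filter below are our illustration, not ReGNet's learned filters:

```python
# Sketch: filtering in reciprocal (Fourier) space is a global operation, so a
# single multiply couples every position with every other one at O(n log n).
import numpy as np

n = 128
signal = np.zeros(n)
signal[::16] = 1.0                                # periodic "atomic" density

freq_filter = np.exp(-0.5 * (np.fft.fftfreq(n) / 0.05) ** 2)  # learnable in practice
smoothed = np.fft.ifft(np.fft.fft(signal) * freq_filter).real

print(signal.sum(), smoothed.sum())  # unit gain at zero frequency preserves total mass
```

In a learned model the filter coefficients would be parameters; here the fixed Gaussian simply shows how one elementwise product in reciprocal space realizes a long-range (circular) convolution in real space.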
|
2502.02753
|
MuST: Multi-Head Skill Transformer for Long-Horizon Dexterous
Manipulation with Skill Progress
|
cs.RO
|
Robot picking and packing tasks require dexterous manipulation skills, such
as rearranging objects to establish a good grasping pose, or placing and
pushing items to achieve tight packing. These tasks are challenging for robots
due to the complexity and variability of the required actions. To tackle the
difficulty of learning and executing long-horizon tasks, we propose a novel
framework called the Multi-Head Skill Transformer (MuST). This model is
designed to learn and sequentially chain together multiple motion primitives
(skills), enabling robots to perform complex sequences of actions effectively.
MuST introduces a "progress value" for each skill, guiding the robot on which
skill to execute next and ensuring smooth transitions between skills.
Additionally, our model is capable of expanding its skill set and managing
various sequences of sub-tasks efficiently. Extensive experiments in both
simulated and real-world environments demonstrate that MuST significantly
enhances the robot's ability to perform long-horizon dexterous manipulation
tasks.
|
2502.02756
|
Adaptive Voxel-Weighted Loss Using L1 Norms in Deep Neural Networks for
Detection and Segmentation of Prostate Cancer Lesions in PET/CT Images
|
eess.IV cs.AI cs.CV
|
This study proposes a new loss function for deep neural networks, L1-weighted
Dice Focal Loss (L1DFL), that leverages L1 norms for adaptive weighting of
voxels based on their classification difficulty, towards automated detection
and segmentation of metastatic prostate cancer lesions in PET/CT scans. We
obtained 380 PSMA [18-F] DCFPyL PET/CT scans of patients diagnosed with
biochemical recurrence metastatic prostate cancer. We trained two 3D
convolutional neural networks, Attention U-Net and SegResNet, and concatenated
the PET and CT volumes channel-wise as input. The performance of our custom
loss function was evaluated against the Dice and Dice Focal Loss functions. For
clinical significance, we considered a detected region of interest (ROI) as a
true positive if at least the voxel with the maximum standardized uptake value
falls within the ROI. We assessed the models' performance based on the number
of lesions in an image, tumour volume, activity, and extent of spread. The
L1DFL outperformed the comparative loss functions by at least 13% on the test
set. In addition, the F1 scores of the Dice Loss and the Dice Focal Loss were
lower than that of L1DFL by at least 6% and 34%, respectively. The Dice Focal
Loss yielded more false positives, whereas the Dice Loss was more sensitive to
smaller volumes and struggled to segment larger lesions accurately. They also
exhibited network-specific variations and yielded declines in segmentation
accuracy with increased tumour spread. Our results demonstrate the potential of
L1DFL to yield robust segmentation of metastatic prostate cancer lesions in
PSMA PET/CT images. The results further highlight potential complexities
arising from the variations in lesion characteristics that may influence
automated prostate cancer tumour detection and segmentation. The code is
publicly available at: https://github.com/ObedDzik/pca_segment.git.
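One plausible instantiation of the loss described above, assuming the L1 weight is the per-voxel distance |p - y| applied to a cross-entropy term alongside a Dice term (our reading, not the authors' exact formulation):

```python
import numpy as np

def l1_weighted_dice_focal(p, y, eps=1e-6):
    # Dice term over the volume plus a cross-entropy term whose per-voxel
    # weight is the L1 distance |p - y| (a proxy for classification difficulty).
    dice = 1.0 - (2.0 * np.sum(p * y) + eps) / (np.sum(p + y) + eps)
    w = np.abs(p - y)
    ce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return float(dice + np.mean(w * ce))

y = np.zeros((8, 8, 8)); y[2:5, 2:5, 2:5] = 1.0   # toy 3D lesion mask
good = np.clip(y, 0.05, 0.95)                     # confident, mostly-correct prediction
bad = np.full_like(y, 0.5)                        # maximally uncertain prediction
print(l1_weighted_dice_focal(good, y), l1_weighted_dice_focal(bad, y))
```

Easy voxels (small |p - y|) are down-weighted automatically, so hard voxels dominate the gradient without a hand-tuned focal exponent.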
|
2502.02761
|
Federated Low-Rank Tensor Estimation for Multimodal Image Reconstruction
|
cs.LG cs.CV cs.DC
|
Low-rank tensor estimation offers a powerful approach to addressing
high-dimensional data challenges and can substantially improve solutions to
ill-posed inverse problems, such as image reconstruction under noisy or
undersampled conditions. Meanwhile, tensor decomposition has gained prominence
in federated learning (FL) due to its effectiveness in exploiting latent space
structure and its capacity to enhance communication efficiency. In this paper,
we present a federated image reconstruction method that applies Tucker
decomposition, incorporating joint factorization and randomized sketching to
manage large-scale, multimodal data. Our approach avoids reconstructing
full-size tensors and supports heterogeneous ranks, allowing clients to select
personalized decomposition ranks based on prior knowledge or communication
capacity. Numerical results demonstrate that our method achieves superior
reconstruction quality and communication compression compared to existing
approaches, thereby highlighting its potential for multimodal inverse problems
in the FL setting.
|
2502.02763
|
Rethinking Vision Transformer for Object Centric Foundation Models
|
cs.CV
|
Recent state-of-the-art object segmentation mechanisms, such as the Segment
Anything Model (SAM) and FastSAM, first encode the full image over several
layers and then focus on generating the mask for one particular object or area.
We present an off-grid Fovea-Like Input Patching (FLIP) approach, which selects
image input and encodes it from the beginning in an object-focused manner.
While doing so, it separates locational encoding from an object-centric
perceptual code. FLIP is more data-efficient and yields improved segmentation
performance when masking relatively small objects in high-resolution visual
scenes. On standard benchmarks such as Hypersim, KITTI-360, and OpenImages,
FLIP achieves Intersection over Union (IoU) scores that approach the
performance of SAM with much less compute effort. It surpasses FastSAM in all
IoU measurements. We also introduce an additional semi-natural but highly
intuitive dataset where FLIP outperforms SAM and FastSAM overall and
particularly on relatively small objects. Since FLIP is an end-to-end
object-centric segmentation approach, it holds particular promise for
applications that benefit from computationally efficient, spatially highly
selective object tracking.
|
2502.02764
|
LLM-USO: Large Language Model-based Universal Sizing Optimizer
|
cs.AR cs.LG
|
The design of analog circuits is a cornerstone of integrated circuit (IC)
development, requiring the optimization of complex, interconnected
sub-structures such as amplifiers, comparators, and buffers. Traditionally,
this process relies heavily on expert human knowledge to refine design
objectives by carefully tuning sub-components while accounting for their
interdependencies. Existing methods, such as Bayesian Optimization (BO), offer
a mathematically driven approach for efficiently navigating large design
spaces. However, these methods fall short in two critical areas compared to
human expertise: (i) they lack the semantic understanding of the sizing
solution space and its direct correlation with design objectives before
optimization, and (ii) they fail to reuse knowledge gained from optimizing
similar sub-structures across different circuits. To overcome these
limitations, we propose the Large Language Model-based Universal Sizing
Optimizer (LLM-USO), which introduces a novel method for knowledge
representation to encode circuit design knowledge in a structured text format.
This representation enables the systematic reuse of optimization insights for
circuits with similar sub-structures. LLM-USO employs a hybrid framework that
integrates BO with large language models (LLMs) and a learning summary module.
This approach serves to: (i) infuse domain-specific knowledge into the BO
process and (ii) facilitate knowledge transfer across circuits, mirroring the
cognitive strategies of expert designers. Specifically, LLM-USO constructs a
knowledge summary mechanism to distill and apply design insights from one
circuit to related ones. It also incorporates a knowledge summary critiquing
mechanism to ensure the accuracy and quality of the summaries and employs
BO-guided suggestion filtering to identify optimal design points efficiently.
|
2502.02766
|
Theoretical Guarantees for Low-Rank Compression of Deep Neural Networks
|
cs.LG cs.IT math.IT
|
Deep neural networks have achieved state-of-the-art performance across
numerous applications, but their high memory and computational demands present
significant challenges, particularly in resource-constrained environments.
Model compression techniques, such as low-rank approximation, offer a promising
solution by reducing the size and complexity of these networks while only
minimally sacrificing accuracy. In this paper, we develop an analytical
framework for data-driven post-training low-rank compression. We prove three
recovery theorems under progressively weaker assumptions about the approximate
low-rank structure of activations, modeling deviations via noise. Our results
represent a step toward explaining why data-driven low-rank compression methods
outperform data-agnostic approaches and towards theoretically grounded
compression algorithms that reduce inference costs while maintaining
performance.
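The generic compression technique analyzed here, post-training low-rank factorization of a weight matrix via truncated SVD, can be sketched as follows (the data-agnostic variant; the data-driven methods the paper studies additionally weight the approximation by activations):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "weight matrix" that is approximately rank-8 plus small noise.
W = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 256)) \
    + 0.01 * rng.normal(size=(256, 256))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 8
W_hat = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r approximation U_r S_r V_r^T

rel_err = float(np.linalg.norm(W - W_hat) / np.linalg.norm(W))
params_full = W.size
params_low = U[:, :r].size + s[:r].size + Vt[:r].size
print(rel_err, params_low / params_full)   # large size reduction, tiny error
```

Storing the two factors in place of W cuts the parameter count by roughly 16x here while the relative error stays at the noise floor, the regime the recovery theorems formalize.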
|
2502.02768
|
Planning with affordances: Integrating learned affordance models and
symbolic planning
|
cs.AI cs.RO
|
Intelligent agents working in real-world environments must be able to learn
about the environment and their own capabilities, which enables them to take
actions that change the state of the world in order to complete complex
multi-step tasks in a photorealistic environment. Learning about the
environment is especially important for performing multiple-step tasks without
having to redefine an agent's action set for different tasks or environment
settings. In our work, we
augment an existing task and motion planning framework with learned affordance
models of objects in the world to enable planning and executing multi-step
tasks using learned models. Each task can be seen as changing the current state
of the world to a given goal state. The affordance models provide us with what
actions are possible and how to perform those actions in any given state. A
symbolic planning algorithm uses this information and the starting and goal
state to create a feasible plan to reach the desired goal state to complete a
given task. We demonstrate our approach in a virtual 3D photorealistic
environment, AI2-Thor, and evaluate it on real-world tasks. Our results show
that our agent quickly learns how to interact with the environment and is well
prepared to perform tasks such as "Moving an object out of the way to reach the
desired location."
|
2502.02770
|
Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning
|
cs.LG cs.CL
|
Leveraging attention sparsity to accelerate long-context large language
models (LLMs) has been a hot research topic. However, current algorithms such
as sparse attention or key-value (KV) cache compression tend to use a fixed
budget, which presents a significant challenge during deployment because it
fails to account for the dynamic nature of real-world scenarios, where the
optimal balance between accuracy and efficiency can vary greatly. In this
paper, we find that borrowing top-$p$ sampling (nucleus sampling) to sparse
attention can surprisingly achieve adaptive budgeting. Based on this, we
propose Twilight, a framework to bring adaptive sparsity to any existing sparse
attention algorithm without sacrificing their accuracy. Empirical results show
that Twilight can adaptively prune at most 98% of redundant tokens, leading to
$15.4\times$ acceleration in self-attention operations and $3.9\times$
acceleration in end-to-end per token latency in long context LLM decoding.
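The core mechanism we read from the abstract can be sketched in a few lines: rather than keeping a fixed-size set of attention entries, keep the smallest set whose softmax mass reaches p, so the kept budget adapts to how peaked the attention distribution is (a hypothetical `top_p_prune` helper, not the paper's kernel):

```python
import math

def top_p_prune(scores, p=0.95):
    # Keep the smallest set of keys whose softmax probabilities sum to >= p.
    probs = [math.exp(s) for s in scores]
    z = sum(probs)
    probs = [q / z for q in probs]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    keep, mass = [], 0.0
    for i in order:
        keep.append(i)
        mass += probs[i]
        if mass >= p:
            break
    return sorted(keep)

peaked = [8.0, 0.1, 0.0, -0.2, 0.1, 0.0]     # one dominant key
flat = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]        # uniform attention
print(top_p_prune(peaked), top_p_prune(flat))
```

A peaked query keeps a single key while a flat one keeps all of them, which is exactly the adaptive budgeting a fixed top-k cannot provide.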
|
2502.02771
|
When are Diffusion Priors Helpful in Sparse Reconstruction? A Study with
Sparse-view CT
|
physics.med-ph cs.CV cs.LG eess.IV stat.AP
|
Diffusion models demonstrate state-of-the-art performance on image
generation, and are gaining traction for sparse medical image reconstruction
tasks. However, compared to classical reconstruction algorithms relying on
simple analytical priors, diffusion models have the dangerous property of
producing realistic looking results \emph{even when incorrect}, particularly
with few observations. We investigate the utility of diffusion models as priors
for image reconstruction by varying the number of observations and comparing
their performance to classical priors (sparse and Tikhonov regularization)
using pixel-based, structural, and downstream metrics. We make comparisons on
low-dose chest wall computed tomography (CT) for fat mass quantification.
First, we find that classical priors are superior to diffusion priors when the
number of projections is ``sufficient''. Second, we find that diffusion priors
can capture a large amount of detail with very few observations, significantly
outperforming classical priors. However, they fall short of capturing all
details, even with many observations. Finally, we find that the performance of
diffusion priors plateaus after extremely few ($\approx$10-15) projections.
Ultimately, our work highlights potential issues with diffusion-based sparse
reconstruction and underscores the importance of further investigation,
particularly in high-stakes clinical settings.
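The classical baseline referenced above, Tikhonov regularization, can be illustrated on a toy 1D inverse problem with varying observation counts; the CT geometry, priors, and metrics in the paper are of course far richer:

```python
import numpy as np

def tikhonov_recon(m, n=64, lam=0.01, noise=0.01, seed=0):
    # Reconstruct a smooth 1D signal from m random linear measurements
    # using the Tikhonov (ridge) solution x = (A^T A + lam I)^{-1} A^T y.
    rng = np.random.default_rng(seed)
    x_true = np.sin(np.linspace(0, 3 * np.pi, n))
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x_true + noise * rng.normal(size=m)
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
    return float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

print(tikhonov_recon(m=128), tikhonov_recon(m=16))  # ample vs. few observations
```

With ample measurements the classical prior recovers the signal almost exactly; with few measurements its error is large but honest, whereas a generative prior would hallucinate a plausible-looking signal, which is the failure mode the abstract warns about.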
|
2502.02772
|
Cross-Modality Embedding of Force and Language for Natural Human-Robot
Communication
|
cs.RO cs.AI cs.HC
|
A method for cross-modality embedding of force profile and words is presented
for synergistic coordination of verbal and haptic communication. When two
people carry a large, heavy object together, they coordinate through verbal
communication about the intended movements and physical forces applied to the
object. This natural integration of verbal and physical cues enables effective
coordination. Similarly, human-robot interaction could achieve this level of
coordination by integrating verbal and haptic communication modalities. This
paper presents a framework for embedding words and force profiles in a unified
manner, so that the two communication modalities can be integrated and
coordinated in a way that is effective and synergistic. Here, it will be shown
that, although language and physical force profiles are deemed completely
different, the two can be embedded in a unified latent space and proximity
between the two can be quantified. In this latent space, a force profile and
words can a) supplement each other, b) integrate the individual effects, and c)
substitute in an exchangeable manner. First, the need for cross-modality
embedding is addressed, and the basic architecture and key building-block
technologies are presented. Methods for data collection and implementation
challenges are then discussed, followed by experimental results and discussion.
|
2502.02773
|
SD++: Enhancing Standard Definition Maps by Incorporating Road Knowledge
using LLMs
|
cs.RO cs.CV
|
High-definition maps (HD maps) are detailed and informative maps capturing
lane centerlines and road elements. Although very useful for autonomous
driving, HD maps are costly to build and maintain. Furthermore, access to these
high-quality maps is usually limited to the firms that build them. On the other
hand, standard definition (SD) maps provide road centerlines with an accuracy
of a few meters. In this paper, we explore the possibility of enhancing SD maps
by incorporating information from road manuals using LLMs. We develop SD++, an
end-to-end pipeline to enhance SD maps with location-dependent road information
obtained from a road manual. We suggest and compare several ways of using LLMs
for such a task. Furthermore, we show the generalization ability of SD++ by
showing results from both California and Japan.
|
2502.02774
|
Optimal Computational Secret Sharing
|
cs.IT cs.CR math.IT
|
In $(t, n)$-threshold secret sharing, a secret $S$ is distributed among $n$
participants such that any subset of size $t$ can recover $S$, while any subset
of size $t-1$ or fewer learns nothing about it. For information-theoretic
secret sharing, it is known that the share size must be at least as large as
the secret, i.e., $|S|$. When computational security is employed using
cryptographic encryption with a secret key $K$, previous work has shown that
the share size can be reduced to $\tfrac{|S|}{t} + |K|$.
In this paper, we present a construction achieving a share size of
$\tfrac{|S| + |K|}{t}$. Furthermore, we prove that, under reasonable
assumptions on the encryption scheme -- namely, the non-compressibility of
pseudorandom encryption and the non-redundancy of the secret key -- this share
size is optimal.
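The (t, n)-threshold definition can be made concrete with a toy Shamir scheme over a prime field; note each share below is a full field element, matching the information-theoretic bound |S| that the paper's computational construction improves on:

```python
import random

P = 2**31 - 1            # prime modulus; a share is one field element (~|S| bits)
random.seed(0)

def share(secret, t, n):
    # Random degree-(t-1) polynomial with the secret as constant term;
    # share i is the point (i, f(i)).
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]), reconstruct(shares[2:]))  # any 3 shares suffice
```

Any t = 3 of the n = 5 shares recover the secret, while any 2 reveal nothing; the computational schemes discussed in the abstract keep this access structure but shrink each share below |S|.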
|
2502.02777
|
Symmetry of information for space-bounded online Kolmogorov complexity
|
cs.CC cs.IT math.IT
|
The even online Kolmogorov complexity of a string $x = x_1 x_2 \cdots x_{n}$
is the minimal length of a program that for all $i\le n/2$, on input $x_1x_3
\cdots x_{2i-1}$ outputs $x_{2i}$. The odd complexity is defined similarly. The
sum of the odd and even complexities is called the dialogue complexity.
In [Bauwens, 2014] it is proven that for all $n$, there exist $n$-bit $x$ for
which the dialogue complexity exceeds the Kolmogorov complexity by $n\log
\frac{4}{3} + O(\log n)$. Let $\mathrm C^s(x)$ denote the Kolmogorov complexity with
space bound~$s$. Here, we prove that the space-bounded dialogue complexity with
bound $s + 6n + O(1)$ is at most $\mathrm C^{s}(x) + O(\log (sn))$, where
$n=|x|$.
|
2502.02779
|
3D Foundation AI Model for Generalizable Disease Detection in Head
Computed Tomography
|
cs.CV cs.AI
|
Head computed tomography (CT) imaging is a widely-used imaging modality with
multitudes of medical indications, particularly in assessing pathology of the
brain, skull, and cerebrovascular system. It is commonly the first-line imaging
in neurologic emergencies given its rapidity of image acquisition, safety,
cost, and ubiquity. Deep learning models may facilitate detection of a wide
range of diseases. However, the scarcity of high-quality labels and
annotations, particularly among less common conditions, significantly hinders
the development of powerful models. To address this challenge, we introduce
FM-CT: a Foundation Model for Head CT for generalizable disease detection,
trained using self-supervised learning. Our approach pre-trains a deep learning
model on a large, diverse dataset of 361,663 non-contrast 3D head CT scans
without the need for manual annotations, enabling the model to learn robust,
generalizable features. To investigate the potential of self-supervised
learning in head CT, we employ both discrimination with self-distillation and
masked image modeling, and we construct our model in 3D rather than at the
slice level (2D) to exploit the structure of head CT scans more comprehensively
and efficiently. The model's downstream classification performance is evaluated
using internal and three external datasets, encompassing both in-distribution
(ID) and out-of-distribution (OOD) data. Our results demonstrate that the
self-supervised foundation model significantly improves performance on
downstream diagnostic tasks compared to models trained from scratch and
previous 3D CT foundation models on scarce annotated datasets. This work
highlights the effectiveness of self-supervised learning in medical imaging and
sets a new benchmark for head CT image analysis in 3D, enabling broader use of
artificial intelligence for head CT-based diagnosis.
|
2502.02780
|
Classroom Simulacra: Building Contextual Student Generative Agents in
Online Education for Learning Behavioral Simulation
|
cs.HC cs.AI cs.LG
|
Student simulation supports educators to improve teaching by interacting with
virtual students. However, most existing approaches ignore the modulation
effects of course materials because of two challenges: the lack of datasets
with granularly annotated course materials, and the limitation of existing
simulation models in processing extremely long textual data. To solve the
challenges, we first run a 6-week education workshop with N = 60 students to
collect fine-grained data using a custom-built online education system, which
logs students' learning behaviors as they interact with lecture materials over
time. Second, we propose a transferable iterative reflection (TIR) module that
augments both prompting-based and finetuning-based large language models (LLMs)
for simulating learning behaviors. Our comprehensive experiments show that TIR
enables the LLMs to perform more accurate student simulation than classical
deep learning models, even with limited demonstration data. Our TIR approach
better captures the granular dynamism of learning performance and inter-student
correlations in classrooms, paving the way towards a ''digital twin'' for
online education.
|
2502.02783
|
Runway capacity expansion planning for public airports under demand
uncertainty
|
eess.SY cs.SY math.OC
|
Flight delay is a significant issue affecting air travel. The runway system,
frequently falling short of demand, serves as a bottleneck. As demand
increases, runway capacity expansion becomes imperative to mitigate congestion.
However, the decision to expand runway capacity is challenging due to inherent
uncertainties in demand forecasts. This paper presents a novel approach to
modeling air traffic demand growth as a jump diffusion process, incorporating
two layers of uncertainty: Geometric Brownian Motion (GBM) for continuous
variability and a Poisson process to capture the impact of crisis events, such
as natural disasters or public health emergencies, on decision-making. We
propose a real options model to jointly evaluate the interrelated factors of
optimal runway capacity and investment timing under uncertainty, with
investment timing linked to trigger demand. The findings suggest that increased
uncertainty leads to more conservative decision-making. Furthermore, the
relationship between optimal investment timing and expansion size is complex:
if the expansion size remains unchanged, the trigger demand decreases as the
demand growth rate increases; if the expansion size experiences a jump, the
trigger demand also exhibits a sharp rise. This work provides valuable insights
for airport authorities for informed capacity expansion decision-making.
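The jump diffusion demand process described above can be sketched numerically; a minimal Monte Carlo illustration, where the drift, volatility, crisis intensity, and average demand drop are illustrative assumptions, not values from the paper:

```python
import numpy as np

def simulate_demand(d0, mu, sigma, jump_rate, jump_mean, T, n_steps, rng):
    """Simulate one demand path under GBM with Poisson-driven downward jumps.

    d0: initial demand; mu, sigma: GBM drift and volatility;
    jump_rate: Poisson intensity of crisis events per unit time;
    jump_mean: average fractional demand drop per crisis (assumed here).
    """
    dt = T / n_steps
    d = np.empty(n_steps + 1)
    d[0] = d0
    for t in range(n_steps):
        # Continuous variability: geometric Brownian motion increment.
        diffusion = ((mu - 0.5 * sigma**2) * dt
                     + sigma * np.sqrt(dt) * rng.standard_normal())
        # Crisis events: Poisson-distributed number of jumps in this step,
        # each scaling demand down by (1 - jump_mean).
        n_jumps = rng.poisson(jump_rate * dt)
        jump = n_jumps * np.log(max(1.0 - jump_mean, 1e-9))
        d[t + 1] = d[t] * np.exp(diffusion + jump)
    return d

rng = np.random.default_rng(0)
path = simulate_demand(d0=100.0, mu=0.03, sigma=0.1,
                       jump_rate=0.2, jump_mean=0.3, T=20.0, n_steps=240,
                       rng=rng)
print(len(path), path[0])
```

Simulating many such paths and locating the demand level at which expansion becomes worthwhile is the essence of the trigger-demand analysis.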
|
2502.02785
|
OpenSTARLab: Open Approach for Spatio-Temporal Agent Data Analysis in
Soccer
|
cs.LG
|
Sports analytics has become both more professional and sophisticated, driven
by the growing availability of detailed performance data. This progress enables
applications such as match outcome prediction, player scouting, and tactical
analysis. In soccer, the effective utilization of event and tracking data is
fundamental for capturing and analyzing the dynamics of the game. However,
there are two primary challenges: the limited availability of event data,
primarily restricted to top-tier teams and leagues, and the scarcity and high
cost of tracking data, which complicates its integration with event data for
comprehensive analysis. Here we propose OpenSTARLab, an open-source framework
designed to democratize spatio-temporal agent data analysis in sports by
addressing these key challenges. OpenSTARLab includes the Pre-processing
Package that standardizes event and tracking data through Unified and
Integrated Event Data and State-Action-Reward formats, the Event Modeling
Package that implements deep learning-based event prediction, alongside the
RLearn Package for reinforcement learning tasks. These technical components
facilitate the handling of diverse data sources and support advanced analytical
tasks, thereby enhancing the overall functionality and usability of the
framework. To assess OpenSTARLab's effectiveness, we conducted several
experimental evaluations. These demonstrate the superior performance of the
framework's event prediction model in terms of action and time prediction
accuracy, while maintaining robust event simulation performance. Furthermore,
reinforcement learning experiments reveal a trade-off between action accuracy
and temporal difference loss and provide comprehensive visualizations. Overall,
OpenSTARLab serves as a robust platform for researchers and practitioners,
enhancing innovation and collaboration in the field of soccer data analytics.
|
2502.02786
|
When Machine Learning Gets Personal: Understanding Fairness of
Personalized Models
|
cs.LG
|
Personalization in machine learning involves tailoring models to individual
users by incorporating personal attributes such as demographic or medical data.
While personalization can improve prediction accuracy, it may also amplify
biases and reduce explainability. This work introduces a unified framework to
evaluate the impact of personalization on both prediction accuracy and
explanation quality across classification and regression tasks. We derive novel
upper bounds for the number of personal attributes that can be used to reliably
validate benefits of personalization. Our analysis uncovers key trade-offs. We
show that regression models can potentially utilize more personal attributes
than classification models. We also demonstrate that improvements in prediction
accuracy due to personalization do not necessarily translate to enhanced
explainability -- underscoring the importance of evaluating both metrics when
personalizing machine learning models in critical settings such as healthcare.
Validated with a real-world dataset, this framework offers practical guidance
for balancing accuracy, fairness, and interpretability in personalized models.
|
2502.02787
|
SimMark: A Robust Sentence-Level Similarity-Based Watermarking Algorithm
for Large Language Models
|
cs.CL cs.CR cs.CY cs.LG
|
The rapid proliferation of large language models (LLMs) has created an urgent
need for reliable methods to detect whether a text is generated by such models.
In this paper, we propose SimMark, a posthoc watermarking algorithm that makes
LLMs' outputs traceable without requiring access to the model's internal
logits, enabling compatibility with a wide range of LLMs, including API-only
models. By leveraging the similarity of semantic sentence embeddings and
rejection sampling to impose detectable statistical patterns imperceptible to
humans, and employing a soft counting mechanism, SimMark achieves robustness
against paraphrasing attacks. Experimental results demonstrate that SimMark
sets a new benchmark for robust watermarking of LLM-generated content,
surpassing prior sentence-level watermarking techniques in robustness, sampling
efficiency, and applicability across diverse domains, all while preserving the
text quality.
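The soft counting mechanism can be illustrated at detection time; a minimal sketch assuming a hypothetical watermark interval over consecutive-sentence embedding similarities (the interval bounds, the slack width, and the linear falloff are assumptions for illustration, not SimMark's actual parameters):

```python
def soft_count(similarities, lo=0.35, hi=0.65, slack=0.05):
    """Soft-count how many consecutive-sentence similarities fall inside
    the watermark interval [lo, hi]. Values just outside the interval
    still earn partial credit, which is what gives robustness to
    paraphrasing: a paraphrase shifts similarities slightly rather than
    arbitrarily. All parameter values here are illustrative assumptions.
    """
    total = 0.0
    for s in similarities:
        if lo <= s <= hi:
            total += 1.0  # fully inside the interval
        else:
            # Partial credit with linear falloff over `slack` outside it.
            dist = min(abs(s - lo), abs(s - hi))
            total += max(0.0, 1.0 - dist / slack)
    return total

sims = [0.40, 0.60, 0.66, 0.90]
print(soft_count(sims))  # 2.8: two hits, one near-miss, one clear miss
```

A hard count would score the near-miss (0.66) as zero, so light paraphrasing could erase the watermark signal; the soft count degrades gracefully instead.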
|
2502.02788
|
Inducing Diversity in Differentiable Search Indexing
|
cs.IR cs.AI cs.LG
|
Differentiable Search Indexing (DSI) is a recent paradigm for information
retrieval which uses a transformer-based neural network architecture as the
document index to simplify the retrieval process. A differentiable index has
many advantages enabling modifications, updates or extensions to the index. In
this work, we explore balancing relevance and novel information content
(diversity) for training DSI systems inspired by Maximal Marginal Relevance
(MMR), and show the benefits of our approach over the naive DSI training. We
present quantitative and qualitative evaluations of relevance and diversity
measures obtained using our method on NQ320K and MSMARCO datasets in comparison
to naive DSI. With our approach, it is possible to achieve diversity without
any significant impact on relevance. Since we induce diversity while training
DSI, the trained model has learned to diversify while being relevant. This
obviates the need for a post-processing step to induce diversity in the recall
set as typically performed using MMR. Our approach will be useful for
Information Retrieval problems where both relevance and diversity are important
such as in sub-topic retrieval. Our work can also be easily extended to the
incremental DSI settings which would enable fast updates to the index while
retrieving a diverse recall set.
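The Maximal Marginal Relevance criterion that inspires the training objective can be sketched as a greedy re-ranker; the relevance scores and similarity matrix below are toy values for illustration:

```python
import numpy as np

def mmr_select(relevance, similarity, k, lam=0.7):
    """Greedy Maximal Marginal Relevance over a candidate pool.

    relevance: (n,) relevance scores to the query.
    similarity: (n, n) pairwise candidate similarities.
    Returns indices of k items balancing relevance against redundancy.
    """
    selected = [int(np.argmax(relevance))]  # seed with the most relevant item
    n = len(relevance)
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # MMR score: relevance minus similarity to already-chosen items.
            redundancy = max(similarity[i][j] for j in selected)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

rel = np.array([0.9, 0.85, 0.3, 0.8])
sim = np.array([[1.0, 0.95, 0.1, 0.2],
                [0.95, 1.0, 0.1, 0.2],
                [0.1, 0.1, 1.0, 0.1],
                [0.2, 0.2, 0.1, 1.0]])
print(mmr_select(rel, sim, k=2))  # [0, 3]: item 1 is a near-duplicate of 0
```

Baking this trade-off into DSI training, rather than applying it as a post-processing step like the re-ranker above, is what lets the trained index return a diverse recall set directly.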
|
2502.02789
|
Speculative Prefill: Turbocharging TTFT with Lightweight and
Training-Free Token Importance Estimation
|
cs.CL cs.AI
|
Improving time-to-first-token (TTFT) is an essential objective in modern large
language model (LLM) inference engines, because optimizing TTFT directly
results in higher maximal QPS and meets the requirements of many
critical applications. However, improving TTFT is notoriously challenging since
it is purely compute-bounded and the performance bottleneck shifts from the
self-attention to the MLP part. We present SpecPrefill, a training-free
framework that accelerates the inference TTFT for both long and medium context
queries based on the following insight: LLMs generalize well enough to
preserve output quality given only a carefully chosen subset of prompt tokens. At
its core, SpecPrefill leverages a lightweight model to speculate locally
important tokens based on the context. These tokens, along with the necessary
positional information, are then sent to the main model for processing. We
evaluate SpecPrefill with a diverse set of tasks, followed by a comprehensive
benchmarking of performance improvement both in a real end-to-end setting and
ablation studies. SpecPrefill manages to serve Llama-3.1-405B-Instruct-FP8 with
up to $7\times$ maximal end-to-end QPS on real downstream tasks and
$7.66\times$ TTFT improvement during benchmarking.
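The core token-selection step can be sketched, assuming importance scores have already been produced by the lightweight speculator model; the keep ratio and the stand-in scores are illustrative, not the paper's exact procedure:

```python
import numpy as np

def select_prompt_subset(token_ids, importance, keep_ratio=0.25):
    """Keep only the highest-scoring prompt tokens, preserving positions.

    token_ids: (n,) prompt token ids.
    importance: (n,) per-token scores from a lightweight speculator
    (supplied externally here; producing them is the speculator's job).
    Returns (kept_ids, kept_positions) in original order, so the main
    model can apply the correct positional encodings to the subset.
    """
    n = len(token_ids)
    k = max(1, int(n * keep_ratio))
    top = np.argsort(importance)[-k:]  # indices of the k most important tokens
    positions = np.sort(top)           # restore original prompt order
    return token_ids[positions], positions

ids = np.arange(100, 116)              # toy prompt of 16 token ids
scores = np.linspace(0.0, 1.0, 16)     # stand-in for speculator scores
kept_ids, pos = select_prompt_subset(ids, scores, keep_ratio=0.25)
print(kept_ids, pos)
```

Because only the kept tokens enter the main model's prefill, the compute-bound attention and MLP work shrinks roughly in proportion to the keep ratio.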
|
2502.02790
|
Leveraging the true depth of LLMs
|
cs.LG cs.CL
|
Large Language Models demonstrate remarkable capabilities at the cost of high
compute requirements. While recent research has shown that intermediate layers
can be removed or have their order shuffled without impacting performance
significantly, these findings have not been employed to reduce the
computational cost of inference. We investigate several potential ways to
reduce the depth of pre-trained LLMs without significantly affecting
performance. Leveraging our insights, we present a novel approach that exploits
this decoupling between layers by grouping some of them into pairs that can be
evaluated in parallel.
This modification of the computational graph -- through better parallelism --
results in an average improvement of around 1.20x on the number of tokens
generated per second, without re-training or fine-tuning, while retaining
95%-99% of the original accuracy. Empirical evaluation demonstrates that this
approach significantly improves serving efficiency while maintaining model
performance, offering a practical improvement for large-scale LLM deployment.
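The pairing idea, evaluating two consecutive residual branches on the same input so they can run concurrently, can be illustrated with a toy residual network; the 4-layer tanh network below is a stand-in for a transformer, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W = [rng.standard_normal((8, 8)) * 0.05 for _ in range(4)]

def layer(i, x):
    """Stand-in for one residual branch (e.g., a transformer block)."""
    return np.tanh(x @ W[i])

def sequential(x):
    # Standard depth-wise evaluation: each layer sees the previous output.
    for i in range(4):
        x = x + layer(i, x)
    return x

def paired_parallel(x):
    # Pairs (0,1) and (2,3) are applied to the SAME input, so the two
    # residual branches in a pair can run concurrently on separate streams.
    for i in range(0, 4, 2):
        x = x + layer(i, x) + layer(i + 1, x)
    return x

x = rng.standard_normal(8)
err = (np.linalg.norm(sequential(x) - paired_parallel(x))
       / np.linalg.norm(sequential(x)))
print(f"relative deviation: {err:.4f}")  # small when residual updates are small
```

The two evaluations differ only by second-order terms in the residual updates, which is why weakly coupled adjacent layers can be parallelized with little accuracy loss.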
|
2502.02797
|
Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting
|
cs.LG cs.AI stat.ML
|
Fine-tuning a pre-trained model on a downstream task often degrades its
original capabilities, a phenomenon known as "catastrophic forgetting". This is
especially an issue when one does not have access to the data and recipe used
to develop the pre-trained model. Under this constraint, most existing methods
for mitigating forgetting are inapplicable. To address this challenge, we
propose a sample weighting scheme for the fine-tuning data solely based on the
pre-trained model's losses. Specifically, we upweight the easy samples on which
the pre-trained model's loss is low and vice versa to limit the drift from the
pre-trained model. Our approach is orthogonal and yet complementary to existing
methods; while such methods mostly operate on parameter or gradient space, we
concentrate on the sample space. We theoretically analyze the impact of
fine-tuning with our method in a linear setting, showing that it stalls
learning in a certain subspace which inhibits overfitting to the target task.
We empirically demonstrate the efficacy of our method on both language and
vision tasks. As an example, when fine-tuning Gemma 2 2B on MetaMathQA, our
method results in only a $0.8\%$ drop in accuracy on GSM8K (another math
dataset) compared to standard fine-tuning, while preserving $5.4\%$ more
accuracy on the pre-training datasets. Our code is publicly available at
https://github.com/sanyalsunny111/FLOW_finetuning .
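One simple instance of such loss-based upweighting, a softmax over negative pre-trained losses, can be sketched as follows; this is an illustrative choice, and the paper's exact weighting scheme may differ:

```python
import numpy as np

def easy_sample_weights(pretrained_losses, temperature=1.0):
    """Weight fine-tuning samples by the frozen pre-trained model's loss:
    low loss (easy sample) -> high weight, which limits drift away from
    the pre-trained model. A softmax over negative losses is one simple
    realization; temperature controls how sharply easy samples dominate.
    """
    logits = -np.asarray(pretrained_losses, dtype=float) / temperature
    logits -= logits.max()              # numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Per-sample losses under the frozen pre-trained model (toy values).
losses = [0.2, 0.5, 2.0, 4.0]
w = easy_sample_weights(losses)
print(np.round(w, 3))                   # weights decrease as loss increases
```

These weights would then multiply the per-sample fine-tuning losses, so that samples the pre-trained model already handles well dominate the gradient and the update stays close to the original model.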
|
2502.02802
|
Consistent Client Simulation for Motivational Interviewing-based
Counseling
|
cs.CL
|
Simulating human clients in mental health counseling is crucial for training
and evaluating counselors (both human and simulated) in a scalable manner.
Nevertheless, past research on client simulation did not focus on complex
conversation tasks such as mental health counseling. In these tasks, the
challenge is to ensure that the client's actions (i.e., interactions with the
counselor) are consistent with its stipulated profile and negative
behavior settings. In this paper, we propose a novel framework that supports
consistent client simulation for mental health counseling. Our framework tracks
the mental state of a simulated client, controls its state transitions, and
generates for each state behaviors consistent with the client's motivation,
beliefs, preferred plan to change, and receptivity. By varying the client
profile and receptivity, we demonstrate that consistent simulated clients for
different counseling scenarios can be effectively created. Both our automatic
and expert evaluations on the generated counseling sessions also show that our
client simulation method achieves higher consistency than previous methods.
|
2502.02807
|
CAMI: A Counselor Agent Supporting Motivational Interviewing through
State Inference and Topic Exploration
|
cs.CL
|
Conversational counselor agents have become essential tools for addressing
the rising demand for scalable and accessible mental health support. This paper
introduces CAMI, a novel automated counselor agent grounded in Motivational
Interviewing (MI) -- a client-centered counseling approach designed to address
ambivalence and facilitate behavior change. CAMI employs a novel STAR
framework, consisting of client's state inference, motivation topic
exploration, and response generation modules, leveraging large language models
(LLMs). These components work together to evoke change talk, aligning with MI
principles and improving counseling outcomes for clients from diverse
backgrounds. We evaluate CAMI's performance through both automated and manual
evaluations, utilizing simulated clients to assess MI skill competency,
client's state inference accuracy, topic exploration proficiency, and overall
counseling success. Results show that CAMI not only outperforms several
state-of-the-art methods but also shows more realistic counselor-like behavior.
Additionally, our ablation study underscores the critical roles of state
inference and topic exploration in achieving this performance.
|
2502.02810
|
Mol-LLM: Generalist Molecular LLM with Improved Graph Utilization
|
cs.LG cs.AI physics.chem-ph q-bio.BM
|
Recent advances in Large Language Models (LLMs) have motivated the
development of general LLMs for molecular tasks. While several studies have
demonstrated that fine-tuned LLMs can achieve impressive benchmark
performances, they are far from genuine generalist molecular LLMs due to a lack
of fundamental understanding of molecular structure. Specifically, when given
molecular task instructions, LLMs trained with naive next-token prediction
training assign similar likelihood scores to both original and negatively
corrupted molecules, revealing their lack of molecular structure understanding
that is crucial for reliable and general molecular LLMs. To overcome this
limitation and obtain a true generalist molecular LLM, we introduce a novel
multi-modal training method based on a thorough multi-modal instruction tuning
as well as a molecular structure preference optimization between chosen and
rejected graphs. On various molecular benchmarks, the proposed generalist
molecular LLM, called Mol-LLM, achieves state-of-the-art performance among
generalist LLMs on most tasks while, at the same time, surpassing or matching
state-of-the-art specialist LLMs. Moreover, Mol-LLM also shows superior
generalization performance in reaction prediction tasks, demonstrating the
effect of molecular structure understanding on generalization.
|
2502.02813
|
Covert Communications in Active-IOS Aided Uplink NOMA Systems With
Full-Duplex Receiver
|
cs.IT eess.SP math.IT
|
In this paper, an active intelligent omni-surface (A-IOS) is deployed to aid
uplink transmissions in a non-orthogonal multiple access (NOMA) system. In
order to shelter the covert signal embedded in the superposition transmissions,
a multi-antenna full-duplex (FD) receiver is utilized at the base-station to
recover the signal in addition to jamming the warden. With the aim of maximizing
the covert rate, the FD transmit and receive beamforming, A-IOS refraction and
reflection beamforming, NOMA transmit power, and FD jamming power are jointly
optimized. To tackle the non-convex covert rate maximization problem subject to
the highly coupled system parameters, an alternating optimization algorithm is
designed to iteratively solve the decoupled sub-problems of optimizing the
system parameters. The optimal solutions for the sub-problems of the NOMA
transmit power and FD jamming power optimizations are derived in closed-form.
To tackle the rank-one constrained non-convex fractional programming of the
A-IOS beamforming and FD beamforming, a penalized Dinkelbach transformation
approach is proposed to obtain the optimal solutions via semidefinite
programming. Numerical results clarify that the deployment of the A-IOS
significantly improves the covert rate compared with the passive-IOS aided
uplink NOMA system. It is also found that the proposed scheme provides better
covert communication performance with the optimized NOMA transmit power and FD
jamming power compared with the benchmark schemes.
|
2502.02817
|
A Decade of Action Quality Assessment: Largest Systematic Survey of
Trends, Challenges, and Future Directions
|
cs.AI cs.CV
|
Action Quality Assessment (AQA) -- the ability to quantify the quality of
human motion, actions, or skill levels and provide feedback -- has far-reaching
implications in areas such as low-cost physiotherapy, sports training, and
workforce development. As such, it has become a critical field in computer
vision and video understanding over the past decade. Significant progress has
been made in AQA methodologies, datasets, and applications, yet a pressing need
remains for a comprehensive synthesis of this rapidly evolving field. In this
paper, we present a thorough survey of the AQA landscape, systematically
reviewing over 200 research papers using the Preferred Reporting Items for
Systematic Reviews and Meta-Analyses (PRISMA) framework. We begin by covering
foundational concepts and definitions, then move to general frameworks and
performance metrics, and finally discuss the latest advances in methodologies
and datasets. This survey provides a detailed analysis of research trends,
performance comparisons, challenges, and future directions. Through this work,
we aim to offer a valuable resource for both newcomers and experienced
researchers, promoting further exploration and progress in AQA. Data are available at
https://haoyin116.github.io/Survey_of_AQA/
|
2502.02818
|
Accessible and Portable LLM Inference by Compiling Computational Graphs
into SQL
|
cs.DB cs.LG
|
Serving large language models (LLMs) often demands specialized hardware,
dedicated frameworks, and substantial development efforts, which restrict their
accessibility, especially for edge devices and organizations with limited
technical resources. We propose a novel compiler that translates LLM inference
graphs into SQL queries, enabling relational databases, one of the most widely
used and mature software systems globally, to serve as the runtime. By mapping
neural operators such as matrix multiplication and attention into relational
primitives like joins and aggregations, our approach leverages database
capabilities, including disk-based data management and native caching.
Supporting key transformer components, such as attention mechanisms and
key-value caching, our system generates SQL pipelines for end-to-end LLM
inference. Using the Llama3 family as a case study, we demonstrate up to a 30x
speedup in token generation in memory-constrained scenarios, remaining
comparable to competitive CPU-based frameworks. Our work offers an accessible, portable, and
efficient solution, facilitating the serving of LLMs across diverse deployment
environments.
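The mapping from a neural operator to relational primitives can be made concrete for matrix multiplication: in COO (sparse triplet) form, C = A @ B becomes a join on the inner dimension plus a grouped sum. A minimal sqlite3 sketch; the table layout is an assumption for illustration, not the compiler's actual schema:

```python
import sqlite3

# Matrices as (row, col, value) triplets; matmul = join + aggregation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE A (i INTEGER, j INTEGER, v REAL);
CREATE TABLE B (i INTEGER, j INTEGER, v REAL);
""")
conn.executemany("INSERT INTO A VALUES (?, ?, ?)",
                 [(0, 0, 1.0), (0, 1, 2.0), (1, 0, 3.0), (1, 1, 4.0)])
conn.executemany("INSERT INTO B VALUES (?, ?, ?)",
                 [(0, 0, 5.0), (0, 1, 6.0), (1, 0, 7.0), (1, 1, 8.0)])

# C[i, j] = sum_k A[i, k] * B[k, j]: join on the shared inner index,
# multiply the values, and aggregate per output cell.
rows = conn.execute("""
    SELECT A.i, B.j, SUM(A.v * B.v) AS v
    FROM A JOIN B ON A.j = B.i
    GROUP BY A.i, B.j
    ORDER BY A.i, B.j
""").fetchall()
print(rows)  # [[1,2],[3,4]] @ [[5,6],[7,8]] = [[19,22],[43,50]]
```

Attention and other transformer operators decompose into the same join/aggregate building blocks, which is what lets the database's own execution engine, caching, and disk management carry the inference workload.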
|
2502.02820
|
Slowing Learning by Erasing Simple Features
|
cs.LG
|
Prior work suggests that neural networks tend to learn low-order moments of
the data distribution first, before moving on to higher-order correlations. In
this work, we derive a novel closed-form concept erasure method, QLEACE, which
surgically removes all quadratically available information about a concept from
a representation. Through comparisons with linear erasure (LEACE) and two
approximate forms of quadratic erasure, we explore whether networks can still
learn when low-order statistics are removed from image classification datasets.
We find that while LEACE consistently slows learning, quadratic erasure can
exhibit both positive and negative effects on learning speed depending on the
choice of dataset, model architecture, and erasure method.
Use of QLEACE consistently slows learning in feedforward architectures, but
more sophisticated architectures learn to use injected higher order Shannon
information about class labels. Its approximate variants avoid injecting
information, but surprisingly act as data augmentation techniques on some
datasets, enhancing learning speed compared to LEACE.
|
2502.02821
|
AIoT-based smart traffic management system
|
cs.CV
|
This paper presents a novel AI-based smart traffic management system
designed to optimize traffic flow and reduce congestion in urban environments.
By analysing live footage from existing CCTV cameras, this approach eliminates
the need for additional hardware, thereby minimizing both deployment costs and
ongoing maintenance expenses. The AI model processes live video feeds to
accurately count vehicles and assess traffic density, allowing for adaptive
signal control that prioritizes directions with higher traffic volumes. This
real-time adaptability ensures smoother traffic flow, reduces congestion, and
minimizes waiting times for drivers. Additionally, the proposed system is
simulated using PyGame to evaluate its performance under various traffic
conditions. The simulation results demonstrate that the AI-based system
outperforms traditional static traffic light systems by 34%, leading to
significant improvements in traffic flow efficiency. The use of AI to optimize
traffic signals can play a crucial role in addressing urban traffic challenges,
offering a cost-effective, scalable, and efficient solution for modern cities.
This innovative system represents a key advancement in the field of smart city
infrastructure and intelligent transportation systems.
|
2502.02829
|
Global Contact-Rich Planning with Sparsity-Rich Semidefinite Relaxations
|
cs.RO math.OC
|
We show that contact-rich motion planning is also sparsity-rich when viewed
as polynomial optimization (POP). We can exploit not only the correlative and
term sparsity patterns that are general to all POPs, but also specialized
sparsity patterns from the robot kinematic structure and the separability of
contact modes. Such sparsity enables the design of high-order but sparse
semidefinite programming (SDP) relaxations--building upon Lasserre's moment
and sums of squares hierarchy--that (i) can be solved in seconds by
off-the-shelf SDP solvers, and (ii) compute near globally optimal solutions to
the nonconvex contact-rich planning problems with small certified
suboptimality. Through extensive experiments both in simulation (Push Bot, Push
Box, Push Box with Obstacles, and Planar Hand) and real world (Push T), we
demonstrate the power of using convex SDP relaxations to generate global
contact-rich motion plans. As a contribution of independent interest, we
release the Sparse Polynomial Optimization Toolbox (SPOT)--implemented in C++
with interfaces to both Python and Matlab--that automates sparsity exploitation
for robotics and beyond.
|
2502.02830
|
Multimodal Brain-Computer Interfaces: AI-powered Decoding Methodologies
|
cs.HC cs.LG q-bio.NC
|
Brain-computer interfaces (BCIs) enable direct communication between the
brain and external devices. This review highlights the core decoding algorithms
that enable multimodal BCIs, including a dissection of the elements, a unified
view of diversified approaches, and a comprehensive analysis of the present
state of the field. We emphasize algorithmic advancements in cross-modality
mapping and sequential modeling, in addition to classic multi-modality fusion,
illustrating how these novel AI approaches enhance the decoding of brain data.
The current literature on BCI applications in visual, speech, and affective
decoding is comprehensively explored. Looking forward, we draw attention to
the impact of emerging architectures like multimodal Transformers, and discuss
challenges such as brain data heterogeneity and common errors. This review also
serves as a bridge in this interdisciplinary field for experts with a
neuroscience background and experts who study AI, aiming to provide a
comprehensive understanding of AI-powered multimodal BCIs.
|
2502.02831
|
How the Stroop Effect Arises from Optimal Response Times in Laterally
Connected Self-Organizing Maps
|
q-bio.NC cs.NE
|
The Stroop effect refers to cognitive interference in a color-naming task:
When the color and the word do not match, the response is slower and more
likely to be incorrect. The Stroop task is used to assess cognitive
flexibility, selective attention, and executive function. This paper implements
the Stroop task with self-organizing maps (SOMs): Target color and the
competing word are inputs for the semantic and lexical maps, associative
connections bring color information to the lexical map, and lateral connections
combine their effects over time. The model achieved an overall accuracy of
84.2%, with significantly fewer errors and faster responses in congruent
compared to the no-input and incongruent conditions. In the model, the Stroop
effect arises as a side effect of optimizing response times, and can thus be
seen as a cost associated with overall efficient performance. The model can
further serve as a tool for studying neurologically inspired cognitive control
and related phenomena.
|
2502.02834
|
Task-Aware Virtual Training: Enhancing Generalization in
Meta-Reinforcement Learning for Out-of-Distribution Tasks
|
cs.LG cs.AI
|
Meta reinforcement learning aims to develop policies that generalize to
unseen tasks sampled from a task distribution. While context-based meta-RL
methods improve task representation using task latents, they often struggle
with out-of-distribution (OOD) tasks. To address this, we propose Task-Aware
Virtual Training (TAVT), a novel algorithm that accurately captures task
characteristics for both training and OOD scenarios using metric-based
representation learning. Our method successfully preserves task characteristics
in virtual tasks and employs a state regularization technique to mitigate
overestimation errors in state-varying environments. Numerical results
demonstrate that TAVT significantly enhances generalization to OOD tasks across
various MuJoCo and MetaWorld environments.
|
2502.02835
|
A Survey of Sample-Efficient Deep Learning for Change Detection in
Remote Sensing: Tasks, Strategies, and Challenges
|
cs.CV
|
In the last decade, the rapid development of deep learning (DL) has made it
possible to perform automatic, accurate, and robust Change Detection (CD) on
large volumes of Remote Sensing Images (RSIs). However, despite advances in CD
methods, their practical application in real-world contexts remains limited due
to the diversity of input data and application contexts. For example, the
collected RSIs can be time-series observations, and more informative results
are required to indicate the time of change or the specific change category.
Moreover, training a Deep Neural Network (DNN) requires a massive amount of
training samples, whereas in many cases these samples are difficult to collect.
To address these challenges, various specific CD methods have been developed
considering different application scenarios and training resources.
Additionally, recent advancements in image generation, self-supervision, and
visual foundation models (VFMs) have opened up new approaches to address the
'data-hungry' issue of DL-based CD. The development of these methods in broader
application scenarios requires further investigation and discussion. Therefore,
this article summarizes the literature methods for different CD tasks and the
available strategies and techniques to train and deploy DL-based CD methods in
sample-limited scenarios. We expect that this survey can provide new insights
and inspiration for researchers in this field to develop more effective CD
methods that can be applied in a wider range of contexts.
|
2502.02844
|
Wolfpack Adversarial Attack for Robust Multi-Agent Reinforcement
Learning
|
cs.LG cs.AI cs.CR cs.MA
|
Traditional robust methods in multi-agent reinforcement learning (MARL) often
struggle against coordinated adversarial attacks in cooperative scenarios. To
address this limitation, we propose the Wolfpack Adversarial Attack framework,
inspired by wolf hunting strategies, which targets an initial agent and its
assisting agents to disrupt cooperation. Additionally, we introduce the
Wolfpack-Adversarial Learning for MARL (WALL) framework, which trains robust
MARL policies to defend against the proposed Wolfpack attack by fostering
system-wide collaboration. Experimental results underscore the devastating
impact of the Wolfpack attack and the significant robustness improvements
achieved by WALL.
|
2502.02850
|
RS-YOLOX: A High Precision Detector for Object Detection in Satellite
Remote Sensing Images
|
cs.CV
|
Automatic object detection by satellite remote sensing images is of great
significance for resource exploration and natural disaster assessment. To solve
existing problems in remote sensing image detection, this article proposes an
improved YOLOX model for satellite remote sensing image automatic detection.
This model is named RS-YOLOX. To strengthen the feature learning ability of the
network, we used Efficient Channel Attention (ECA) in the backbone network of
YOLOX and combined the Adaptively Spatial Feature Fusion (ASFF) with the neck
network of YOLOX. To balance the numbers of positive and negative samples in
training, we used the Varifocal Loss function. Finally, to obtain a
high-performance remote sensing object detector, we combined the trained model
with an open-source framework called Slicing Aided Hyper Inference (SAHI). This
work evaluated models on three aerial remote sensing datasets (DOTA-v1.5,
TGRS-HRRSD, and RSOD). Our comparative experiments demonstrate that our model
has the highest accuracy in detecting objects in remote sensing image datasets.
|
2502.02853
|
Rethinking Latent Representations in Behavior Cloning: An Information
Bottleneck Approach for Robot Manipulation
|
cs.RO cs.LG
|
Behavior Cloning (BC) is a widely adopted visual imitation learning method in
robot manipulation. Current BC approaches often enhance generalization by
leveraging large datasets and incorporating additional visual and textual
modalities to capture more diverse information. However, these methods overlook
whether the learned representations contain redundant information and lack a
solid theoretical foundation to guide the learning process. To address these
limitations, we adopt an information-theoretic perspective and introduce mutual
information to quantify and mitigate redundancy in latent representations.
Building on this, we incorporate the Information Bottleneck (IB) principle into
BC, which extends the idea of reducing redundancy by providing a structured
framework for compressing irrelevant information while preserving task-relevant
features. This work presents the first comprehensive study on redundancy in
latent representations across various methods, backbones, and experimental
settings, while extending the generalizability of the IB to BC. Extensive
experiments and analyses on the CortexBench and LIBERO benchmarks demonstrate
significant performance improvements with IB, underscoring the importance of
reducing input data redundancy and highlighting its value for real-world
applications. Project Page:
https://baishuanghao.github.io/BC-IB.github.io.
|
2502.02854
|
TD3: Tucker Decomposition Based Dataset Distillation Method for
Sequential Recommendation
|
cs.IR cs.LG
|
In the era of data-centric AI, the focus of recommender systems has shifted
from model-centric innovations to data-centric approaches. The success of
modern AI models is built on large-scale datasets, but this also results in
significant training costs. Dataset distillation has emerged as a key solution,
condensing large datasets to accelerate model training while preserving model
performance. However, condensing discrete and sequentially correlated user-item
interactions, particularly with extensive item sets, presents considerable
challenges. This paper introduces \textbf{TD3}, a novel \textbf{T}ucker
\textbf{D}ecomposition based \textbf{D}ataset \textbf{D}istillation method
within a meta-learning framework, designed for sequential recommendation. TD3
distills a fully expressive \emph{synthetic sequence summary} from original
data. To efficiently reduce computational complexity and extract refined latent
patterns, Tucker decomposition decouples the summary into four factors:
\emph{synthetic user latent factor}, \emph{temporal dynamics latent factor},
\emph{shared item latent factor}, and a \emph{relation core} that models their
interconnections. Additionally, a surrogate objective in bi-level optimization
is proposed to align feature spaces extracted from models trained on both
original data and synthetic sequence summary beyond the na\"ive performance
matching approach. In the \emph{inner-loop}, an augmentation technique allows
the learner to closely fit the synthetic summary, ensuring its accurate
update in the \emph{outer-loop}. To accelerate the optimization process and
address long dependencies, RaT-BPTT is employed for bi-level optimization.
Experiments and analyses on multiple public datasets have confirmed the
superiority and cross-architecture generalizability of the proposed designs.
Codes are released at https://github.com/USTC-StarTeam/TD3.
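The four-factor structure described above can be illustrated with a generic
Tucker-style contraction. This is a minimal numpy sketch; the ranks,
dimensions, and variable names are illustrative assumptions, not the paper's
actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: synthetic users, time steps, items, and latent ranks.
n_users, n_steps, n_items = 8, 5, 20
r_u, r_t, r_i = 3, 2, 4

# Four factors of the synthetic sequence summary (randomly initialized here).
U = rng.normal(size=(n_users, r_u))   # synthetic user latent factor
T = rng.normal(size=(n_steps, r_t))   # temporal dynamics latent factor
V = rng.normal(size=(n_items, r_i))   # shared item latent factor
G = rng.normal(size=(r_u, r_t, r_i))  # relation core

# Tucker reconstruction: contract the core with each factor matrix.
summary = np.einsum('abc,ua,tb,ic->uti', G, U, T, V)
print(summary.shape)  # (8, 5, 20)
```

With these toy sizes the factored form stores 8*3 + 5*2 + 20*4 + 3*2*4 = 138
parameters instead of the 8*5*20 = 800 entries of the full tensor, which is
the kind of compression that makes learning the summary tractable.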
|
2502.02856
|
PH-VAE: A Polynomial Hierarchical Variational Autoencoder Towards
Disentangled Representation Learning
|
cs.LG
|
The variational autoencoder (VAE) is a simple and efficient generative
artificial intelligence method for modeling complex probability distributions
of various types of data, such as images and texts. However, it suffers from
several shortcomings, including a lack of interpretability in the latent
variables, difficulty in tuning hyperparameters during training, blurry or
unrealistic downstream outputs, loss of information in how it computes loss
functions and recovers data distributions, overfitting, and the origin
gravity effect on small data sets, among other issues. These limitations lead
to unsatisfactory generation for data with complex distributions. In this
work, we propose and develop a polynomial hierarchical variational
autoencoder (PH-VAE), which uses a polynomial hierarchical data format to
generate or reconstruct data distributions. We also propose a novel
Polynomial Divergence in the loss function to replace or generalize the
Kullback-Leibler (KL) divergence, yielding systematic and substantial
improvements in both the accuracy and reproducibility of the reconstructed
distribution function and the quality of reconstructed data images, while
keeping the dataset size the same but capturing fine resolution of the data.
Moreover, we show that the proposed PH-VAE exhibits a form of disentangled
representation learning.
|
2502.02858
|
Dexterous Safe Control for Humanoids in Cluttered Environments via
Projected Safe Set Algorithm
|
cs.RO
|
It is critical to ensure safety for humanoid robots in real-world
applications without compromising performance. In this paper, we consider the
problem of dexterous safety, featuring limb-level geometry constraints for
avoiding both external and self-collisions in cluttered environments. Compared
to safety with simplified bounding geometries in sparse environments,
dexterous safety produces numerous constraints which often lead to infeasible
constraint sets when solving for safe robot control. To address this issue,
we propose the Projected Safe Set Algorithm (p-SSA), an extension of
classical safe control
algorithms to multi-constraint cases. p-SSA relaxes conflicting constraints in
a principled manner, minimizing safety violations to guarantee feasible robot
control. We verify our approach in simulation and on a real Unitree G1 humanoid
robot performing complex collision avoidance tasks. Results show that p-SSA
enables the humanoid to operate robustly in challenging situations with minimal
safety violations and directly generalizes to various tasks with zero parameter
tuning.
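The abstract does not specify p-SSA's exact formulation, but the core idea of
relaxing conflicting constraints in a principled manner can be sketched
generically: when the safety constraints A @ u <= b have no common feasible
control, minimize the total squared violation while staying close to a
reference control. Everything below (the penalty form, weights, and names) is
an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

def relaxed_safe_control(u_ref, A, b, lam=100.0, steps=500, lr=1e-3):
    """Toy relaxation of safety constraints A @ u <= b: stay close to the
    reference control u_ref while penalizing total squared violation.
    When the constraints conflict (empty feasible set), the penalty trades
    violations off against each other instead of failing outright."""
    u = u_ref.astype(float).copy()
    for _ in range(steps):
        viol = np.maximum(A @ u - b, 0.0)        # per-constraint violation
        grad = (u - u_ref) + lam * (A.T @ viol)  # gradient of penalty objective
        u -= lr * grad
    return u

# Two conflicting 1-D constraints: u <= -1 and u >= 1 (written as -u <= -1).
A = np.array([[1.0], [-1.0]])
b = np.array([-1.0, -1.0])
u = relaxed_safe_control(np.array([0.0]), A, b)
print(u)  # stays at 0, balancing the two equally violated constraints
```

A hard-constrained solver would simply report infeasibility here; the relaxed
objective instead returns the control that minimizes the combined violation,
which is the behavior the paper's multi-constraint setting requires.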
|
2502.02859
|
Gap-Dependent Bounds for Federated $Q$-learning
|
stat.ML cs.LG
|
We present the first gap-dependent analysis of regret and communication cost
for on-policy federated $Q$-Learning in tabular episodic finite-horizon Markov
decision processes (MDPs). Existing FRL methods focus on worst-case scenarios,
leading to $\sqrt{T}$-type regret bounds and communication cost bounds with a
$\log T$ term scaling with the number of agents $M$, states $S$, and actions
$A$, where $T$ is the average total number of steps per agent. In contrast, our
novel framework leverages the benign structures of MDPs, such as a strictly
positive suboptimality gap, to achieve a $\log T$-type regret bound and a
refined communication cost bound that disentangles exploration and
exploitation. Our gap-dependent regret bound reveals a distinct multi-agent
speedup pattern, and our gap-dependent communication cost bound removes the
dependence on $MSA$ from the $\log T$ term. Notably, our gap-dependent
communication cost bound also yields a better global switching cost when $M=1$,
removing $SA$ from the $\log T$ term.
|
2502.02861
|
Algorithms with Calibrated Machine Learning Predictions
|
stat.ML cs.DS cs.LG
|
The field of algorithms with predictions incorporates machine learning advice
in the design of online algorithms to improve real-world performance. While
this theoretical framework often assumes uniform reliability across all
predictions, modern machine learning models can now provide instance-level
uncertainty estimates. In this paper, we propose calibration as a principled
and practical tool to bridge this gap, demonstrating the benefits of calibrated
advice through two case studies: the ski rental and online job scheduling
problems. For ski rental, we design an algorithm that achieves optimal
prediction-dependent performance and prove that, in high-variance settings,
calibrated advice offers more effective guidance than alternative methods for
uncertainty quantification. For job scheduling, we demonstrate that using a
calibrated predictor leads to significant performance improvements over
existing methods. Evaluations on real-world data validate our theoretical
findings, highlighting the practical impact of calibration for algorithms with
predictions.
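For the ski rental case study, the benefit of a calibrated forecast can be
illustrated with a small expected-cost computation: given a calibrated
distribution over the season length, choose the buy day minimizing expected
cost. This is a generic sketch of prediction-dependent ski rental, not the
paper's specific algorithm; the distribution and prices are made up.

```python
import numpy as np

def best_buy_day(b, probs):
    """Expected-cost-minimizing buy day for ski rental (rent = 1 per day,
    buy = b) under a calibrated distribution over the season length.
    probs[d-1] = P(season lasts exactly d days). Returning k = len(probs)+1
    encodes "never buy"."""
    days = np.arange(1, len(probs) + 1)
    best_k, best_cost = None, np.inf
    for k in range(1, len(probs) + 2):
        # If the season ends before day k we only paid rent; otherwise we
        # rented for k-1 days and then bought.
        cost = np.where(days < k, days, (k - 1) + b)
        expected = float(probs @ cost)
        if expected < best_cost:
            best_k, best_cost = k, expected
    return best_k, best_cost

# Calibrated forecast: 70% chance of a 2-day season, 30% chance of 30 days.
probs = np.zeros(30)
probs[1], probs[29] = 0.7, 0.3
k, cost = best_buy_day(b=10, probs=probs)
print(k, cost)  # buy on day 3, expected cost 5.0
```

Note how the high-variance forecast matters: buying immediately costs 10 and
never buying costs 10.4 in expectation, while renting through the likely
short season and buying only if it continues costs 5.0.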
|
2502.02862
|
Learning Generalizable Features for Tibial Plateau Fracture Segmentation
Using Masked Autoencoder and Limited Annotations
|
eess.IV cs.AI cs.CV
|
Accurate automated segmentation of tibial plateau fractures (TPF) from
computed tomography (CT) requires large amounts of annotated data to train deep
learning models, but obtaining such annotations presents unique challenges. The
process demands expert knowledge to identify diverse fracture patterns, assess
severity, and account for individual anatomical variations, making the
annotation process highly time-consuming and expensive. Although
semi-supervised learning methods can utilize unlabeled data, existing
approaches often struggle with the complexity and variability of fracture
morphologies, as well as limited generalizability across datasets. To tackle
these issues, we propose an effective training strategy based on masked
autoencoder (MAE) for accurate TPF segmentation in CT. Our method leverages
MAE pretraining to capture global skeletal structures and fine-grained fracture
details from unlabeled data, followed by fine-tuning with a small set of
labeled data. This strategy reduces the dependence on extensive annotations
while enhancing the model's ability to learn generalizable and transferable
features. The proposed method is evaluated on an in-house dataset containing
180 CT scans with TPF. Experimental results demonstrate that our method
consistently outperforms semi-supervised methods, achieving an average Dice
similarity coefficient (DSC) of 95.81%, average symmetric surface distance
(ASSD) of 1.91mm, and Hausdorff distance (95HD) of 9.42mm with only 20
annotated cases. Moreover, our method exhibits strong transferability when
applied to another public pelvic CT dataset with hip fractures, highlighting
its potential for broader applications in fracture segmentation tasks.
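The masking step at the heart of MAE pretraining can be sketched in a few
lines: partition the input into non-overlapping patches and hide a large
random fraction, so the network must reconstruct them from the visible
context. The patch size, mask ratio, and 3-D layout below are illustrative
assumptions, not the paper's actual settings.

```python
import numpy as np

def random_patch_mask(volume, patch=4, mask_ratio=0.75, seed=0):
    """Split a cubic CT sub-volume into non-overlapping patches and zero
    out a random subset; an autoencoder is then trained to reconstruct
    the masked patches from the visible ones."""
    rng = np.random.default_rng(seed)
    n = volume.shape[0] // patch               # patches per axis
    ids = np.arange(n ** 3)
    n_masked = int(mask_ratio * len(ids))
    masked = rng.choice(ids, size=n_masked, replace=False)
    out = volume.copy()
    for idx in masked:
        z, y, x = np.unravel_index(idx, (n, n, n))
        out[z*patch:(z+1)*patch,
            y*patch:(y+1)*patch,
            x*patch:(x+1)*patch] = 0.0
    return out, masked

vol = np.ones((16, 16, 16), dtype=np.float32)
masked_vol, masked_ids = random_patch_mask(vol)
print(masked_vol.mean())  # 0.25: three quarters of the volume is zeroed
```

Because the pretext task needs no labels, this step is what lets the method
exploit unlabeled CT scans before fine-tuning on the 20 annotated cases.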
|