| id | title | categories | abstract |
|---|---|---|---|
2502.05008
|
A Transformation-based Consistent Estimation Framework: Analysis, Design
and Applications
|
cs.RO
|
In this paper, we investigate the inconsistency problem arising from
observability mismatch that frequently occurs in nonlinear systems such as
multi-robot cooperative localization and simultaneous localization and mapping.
For a general nonlinear system, we discover and theoretically prove that the
unobservable subspace of the EKF estimator system is independent of the state
and belongs to the unobservable subspace of the original system. On this basis,
we establish the necessary and sufficient conditions for achieving
observability matching. These theoretical findings motivate us to introduce a
linear time-varying transformation to achieve a transformed system possessing a
state-independent unobservable subspace. We prove the existence of such
transformations and propose two design methodologies for constructing them.
Moreover, we propose two equivalent consistent transformation-based EKF
estimators, referred to as T-EKF 1 and T-EKF 2, respectively. T-EKF 1 employs
the transformed system for consistent estimation, whereas T-EKF 2 leverages the
original system but ensures consistency through state and covariance
corrections from transformations. To validate our proposed methods, we conduct
experiments on several representative examples, including multi-robot
cooperative localization, multi-source target tracking, and 3D visual-inertial
odometry, demonstrating that our approach achieves state-of-the-art performance
in terms of accuracy, consistency, computational efficiency, and practicality.
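The mechanism behind the "corrections from transformations" idea can be sketched generically: carry out the EKF update in transformed coordinates, then map the estimate back. The numpy sketch below illustrates that mechanism only, not the paper's T-EKF 1/2 designs; all matrices are placeholders.

```python
# Minimal sketch (not the paper's T-EKF): an EKF update performed in
# coordinates x' = T x, with the state and covariance mapped back afterwards.
# F, H, Q, R, T are illustrative placeholders.
import numpy as np

def ekf_predict(x, P, F, Q):
    """Standard EKF prediction with linearized dynamics F."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update with linearized observation H."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def transformed_update(x, P, z, H, R, T):
    """Update in transformed coordinates, then map the estimate back."""
    Ti = np.linalg.inv(T)
    xt, Pt = T @ x, T @ P @ T.T      # transform state and covariance
    Ht = H @ Ti                      # observation model in new coordinates
    xt, Pt = ekf_update(xt, Pt, z, Ht, R)
    return Ti @ xt, Ti @ Pt @ Ti.T   # map back to original coordinates
```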
|
2502.05011
|
Learning the Language of NVMe Streams for Ransomware Detection
|
cs.LG cs.CR
|
We apply language modeling techniques to detect ransomware activity in NVMe
command sequences. We design and train two types of transformer-based models:
the Command-Level Transformer (CLT) performs in-context token classification to
determine whether individual commands are initiated by ransomware, and the
Patch-Level Transformer (PLT) predicts the volume of data accessed by
ransomware within a patch of commands. We present both model designs and the
corresponding tokenization and embedding schemes and show that they improve
over state-of-the-art tabular methods by up to 24% in missed-detection rate,
66% in data loss prevention, and 84% in identifying data accessed by
ransomware.
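The abstract does not spell out the tokenization scheme; purely as an illustration, one plausible way to discretize NVMe-like commands into tokens is sketched below (the opcode vocabulary, bucket sizes, and field choices are all invented here, not the paper's).

```python
# Toy sketch of tokenizing NVMe-like commands for a transformer; every
# design choice below (fields, buckets, opcodes) is a hypothetical stand-in.
import math

OPCODES = {"read": 0, "write": 1, "flush": 2, "dealloc": 3}

def tokenize_command(opcode: str, lba: int, num_blocks: int) -> tuple:
    """Map a command to discrete tokens: opcode id, log-bucketed transfer
    size, and a coarse LBA region bucket."""
    size_bucket = min(int(math.log2(num_blocks + 1)), 15)  # 16 size buckets
    lba_bucket = (lba >> 20) & 0xFF                        # coarse region id
    return (OPCODES[opcode], size_bucket, lba_bucket)

stream = [("read", 0x1000, 8), ("write", 0x2000, 256), ("write", 0x2100, 256)]
print([tokenize_command(*cmd) for cmd in stream])
```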
|
2502.05013
|
Output-Feedback Full-State Targeting Model Predictive Control for
Station-Keeping on Near-Rectilinear Halo Orbits
|
eess.SY cs.SY math.OC
|
We develop a model predictive control (MPC) policy for station-keeping (SK)
on a Near-Rectilinear Halo Orbit (NRHO). The proposed policy achieves
full-state tracking of a reference NRHO via a two-maneuver control horizon
placed one revolution apart. Our method abides by the typical mission
requirement that at most one maneuver is used for SK during each NRHO
revolution. Simultaneously, the policy has sufficient controllability for
full-state tracking, making it immune to phase deviation issues in the
along-track direction of the reference NRHO, a common drawback of existing SK
methods with a single maneuver per revolution. We report numerical simulations
with a navigation filter to demonstrate the MPC's performance with output
feedback. Our approach successfully maintains the spacecraft's motion in the
vicinity of the reference in both space and phase, with tighter tracking than
state-of-the-art SK methods and comparable delta-V performance.
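A generic reference-tracking MPC with a sparsity-promoting maneuver penalty conveys the flavor of such a policy. The sketch below is an illustrative linear toy problem, not the paper's NRHO dynamics or its two-maneuver horizon; all matrices are placeholders.

```python
# Illustrative linear tracking MPC (cvxpy): minimize tracking error with an
# l2 penalty per maneuver so that few, small impulses are used.
import cvxpy as cp
import numpy as np

np.random.seed(0)
n, m, N = 6, 3, 20                                   # state, control, horizon
A = np.eye(n); A[:3, 3:] = 0.1 * np.eye(3)           # toy double integrator
B = np.vstack([np.zeros((3, 3)), np.eye(3)])
x0 = 0.1 * np.random.randn(n)
x_ref = np.zeros(n)                                  # track the origin

x = cp.Variable((N + 1, n))
u = cp.Variable((N, m))
cost = sum(cp.sum_squares(x[k] - x_ref) for k in range(N + 1))
cost += 10.0 * sum(cp.norm(u[k], 2) for k in range(N))   # sparse maneuvers
constraints = [x[0] == x0]
constraints += [x[k + 1] == A @ x[k] + B @ u[k] for k in range(N)]
cp.Problem(cp.Minimize(cost), constraints).solve()
print("per-step delta-V:", np.linalg.norm(u.value, axis=1).round(4))
```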
|
2502.05014
|
Seasonal Station-Keeping of Short Duration High Altitude Balloons using
Deep Reinforcement Learning
|
cs.LG cs.RO physics.ao-ph
|
Station-keeping of short-duration high-altitude balloons (HABs) in a region of
interest is a challenging path-planning problem due to partially observable,
complex, and dynamic wind flows. Deep reinforcement learning is a popular
strategy for solving the station-keeping problem. A custom simulation
environment was developed to train and evaluate Deep Q-Network (DQN) agents
for short-duration HABs. To train the agents on realistic winds, synthetic
wind forecasts were generated from aggregated historical radiosonde data to
apply horizontal kinematics to the simulated agents. The synthetic forecasts
correlated closely with ECMWF ERA5 reanalysis forecasts, providing a realistic
simulated wind field while preserving seasonal and altitudinal variation
between the wind models. DQN HAB agents were then trained
and evaluated across different seasonal months. To highlight differences and
trends in months with vastly different wind fields, a Forecast Score algorithm
was introduced to independently classify forecasts based on wind diversity, and
trends between station-keeping success and the Forecast Score were evaluated
across all seasons.
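The DQN agents referenced here follow the standard temporal-difference recipe; a minimal PyTorch sketch of that update is shown below (network sizes and the environment interface are illustrative, not the paper's custom HAB simulation).

```python
# Standard DQN temporal-difference update, sketched for context.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))
target_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 3))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

def dqn_step(s, a, r, s_next, done):
    """One TD update: fit Q(s,a) toward r + gamma * max_a' Q_target(s',a')."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# toy batch: 4-dim states (e.g. position/altitude/wind features), 3 actions
s = torch.randn(8, 4); a = torch.randint(0, 3, (8,))
print(dqn_step(s, a, torch.randn(8), torch.randn(8, 4), torch.zeros(8)))
```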
|
2502.05017
|
Bridging Voting and Deliberation with Algorithms: Field Insights from
vTaiwan and Kultur Komitee
|
cs.HC cs.AI econ.GN q-fin.EC
|
Democratic processes increasingly aim to integrate large-scale voting with
face-to-face deliberation, addressing the challenge of reconciling individual
preferences with collective decision-making. This work introduces new methods
that use algorithms and computational tools to bridge online voting with
face-to-face deliberation, tested in two real-world scenarios: Kultur Komitee
2024 (KK24) and vTaiwan. These case studies highlight the practical
applications and impacts of the proposed methods.
We present three key contributions: (1) Radial Clustering for Preference
Based Subgroups, which enables both in-depth and broad discussions in
deliberative settings by computing homogeneous and heterogeneous group
compositions with balanced and adjustable group sizes; (2) Human-in-the-loop
MES, a practical method that enhances the Method of Equal Shares (MES)
algorithm with real-time digital feedback. This builds algorithmic trust by
giving participants full control over how much decision-making is delegated to
the voting aggregation algorithm as compared to deliberation; and (3) the
ReadTheRoom deliberation method, which uses opinion space mapping to identify
agreement and divergence, along with spectrum-based preference visualisation to
track opinion shifts during deliberation. This approach enhances transparency
by clarifying collective sentiment and fosters collaboration by encouraging
participants to engage constructively with differing perspectives.
By introducing these actionable frameworks, this research extends in-person
deliberation with scalable digital methods that address the complexities of
modern decision-making in participatory processes.
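Contribution (2) builds on the Method of Equal Shares; its core approval-based allocation rule can be sketched as follows. The human-in-the-loop feedback layer and the exact variant used in the field studies are not shown.

```python
# Minimal approval-based Method of Equal Shares (MES) sketch.

def _min_rho(shares_sorted, cost):
    """Smallest rho with sum(min(s, rho)) == cost, or None if unaffordable."""
    paid_below, k = 0.0, len(shares_sorted)
    for i, s in enumerate(shares_sorted):
        rho = (cost - paid_below) / (k - i)   # voters i..k-1 each pay rho
        if rho <= s:
            return rho
        paid_below += s                       # poorer supporters pay in full
    return None

def equal_shares(approvals, costs, budget):
    """approvals: {voter: set(projects)}; costs: {project: cost}.
    Each voter starts with an equal share; fund the affordable project whose
    maximum per-supporter payment rho is smallest, then repeat."""
    share = {v: budget / len(approvals) for v in approvals}
    funded, remaining = [], set(costs)
    while True:
        best, best_rho = None, float("inf")
        for p in remaining:
            sup = [v for v in approvals if p in approvals[v]]
            rho = _min_rho(sorted(share[v] for v in sup), costs[p]) if sup else None
            if rho is not None and rho < best_rho:
                best, best_rho = p, rho
        if best is None:
            return funded                     # nothing affordable remains
        for v in approvals:
            if best in approvals[v]:
                share[v] -= min(share[v], best_rho)
        funded.append(best)
        remaining.remove(best)

print(equal_shares({"a": {"p1"}, "b": {"p1", "p2"}, "c": {"p2"}},
                   {"p1": 1.0, "p2": 1.5}, budget=3.0))   # ['p1', 'p2']
```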
|
2502.05019
|
$O(\sqrt{T})$ Static Regret and Instance Dependent Constraint Violation
for Constrained Online Convex Optimization
|
cs.LG cs.DS
|
The constrained version of the standard online convex optimization (OCO)
framework, called COCO, is considered, where on every round a convex cost
function and a convex constraint function are revealed to the learner after it
chooses the action for that round. The objective is to simultaneously minimize
the static regret and the cumulative constraint violation (CCV). An algorithm
is proposed that guarantees a static regret of $O(\sqrt{T})$ and a CCV of
$\min\{\mathcal{V}, O(\sqrt{T}\log T)\}$, where $\mathcal{V}$ depends on the
distance between consecutively revealed constraint sets, the shape of the
constraint sets, and the dimension and diameter of the action space. For
special cases of constraint sets, $\mathcal{V}=O(1)$. In contrast to the
state-of-the-art guarantees, a static regret of $O(\sqrt{T})$ and a CCV of
$O(\sqrt{T}\log T)$ that hold universally, the new CCV bound is instance
dependent and is derived by exploiting the geometric properties of the
constraint sets.
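For orientation, the quantities being bounded can be made concrete with a simple online gradient descent baseline. The sketch below only illustrates the static-regret and CCV bookkeeping of the COCO setting, not the paper's algorithm.

```python
# Baseline illustration of a COCO round: play x_t, then observe f_t and g_t,
# accumulate the constraint violation, and take a gradient step.
import numpy as np

def run_oco(cost_grads, constraints, x0, T, D=1.0):
    """cost_grads[t](x): gradient of f_t; constraints[t](x): g_t(x) <= 0."""
    x = np.array(x0, dtype=float)
    ccv, iterates = 0.0, []
    for t in range(T):
        iterates.append(x.copy())
        ccv += max(0.0, constraints[t](x))       # violation revealed after play
        eta = D / np.sqrt(t + 1)                 # standard O(1/sqrt(t)) step
        x = x - eta * cost_grads[t](x)
        x = x / max(1.0, np.linalg.norm(x) / D)  # project onto ball of radius D
    return iterates, ccv

T = 100
grads = [lambda x: 2 * x] * T                    # f_t(x) = ||x||^2 for all t
cons = [lambda x: x[0] - 0.5] * T                # g_t(x) = x_0 - 0.5 <= 0
iterates, ccv = run_oco(grads, cons, x0=[1.0, -1.0], T=T)
print("final x:", iterates[-1].round(3), "CCV:", round(ccv, 3))
```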
|
2502.05020
|
Analog and Multi-modal Manufacturing Datasets Acquired on the Future
Factories Platform V2
|
cs.LG
|
This paper presents two industry-grade datasets captured during an 8-hour
continuous operation of the manufacturing assembly line at the Future Factories
Lab, University of South Carolina, on 08/13/2024. The datasets adhere to
industry standards, covering communication protocols, actuators, control
mechanisms, transducers, sensors, and cameras. Data collection utilized both
integrated and external sensors throughout the laboratory, including sensors
embedded within the actuators and externally installed devices. Additionally,
high-performance cameras captured key aspects of the operation. In a prior
experiment [1], a 30-hour continuous run was conducted, during which all
anomalies were documented. Maintenance procedures were subsequently implemented
to reduce potential errors and operational disruptions. The two datasets
include: (1) a time-series analog dataset, and (2) a multi-modal time-series
dataset containing synchronized system data and images. These datasets aim to
support future research in advancing manufacturing processes by providing a
platform for testing novel algorithms without the need to recreate physical
manufacturing environments. Moreover, the datasets are open-source and designed
to facilitate the training of artificial intelligence models, streamlining
research by offering comprehensive, ready-to-use resources for various
applications and projects.
|
2502.05026
|
The Role of Science in the Climate Change Discussions on Reddit
|
cs.CY cs.SI
|
Collective and individual action necessary to address climate change hinges
on the public's understanding of the relevant scientific findings. In this
study, we examine the use of scientific sources in the course of 14 years of
public deliberation around climate change on one of the largest social media
platforms, Reddit. We find that only 4.0% of the links in the Reddit posts, and
6.5% in the comments, point to domains of scientific sources, although these
rates have been increasing in the past decades. These links are dwarfed,
however, by the citations of mass media, newspapers, and social media, the
latter of which peaked especially during 2019-2020. Further, scientific sources
are more likely to be posted by users who also post links to sources with a
center-left political leaning, and less so by those posting more polarized
sources. Unfortunately, scientific sources are not often used in response to
links to unreliable sources.
|
2502.05027
|
Trust-Aware Diversion for Data-Effective Distillation
|
cs.CV
|
Dataset distillation compresses a large dataset into a small synthetic subset
that retains essential information. Existing methods assume that all samples
are perfectly labeled, limiting their real-world applications where incorrect
labels are ubiquitous. These mislabeled samples introduce untrustworthy
information into the dataset, which misleads model optimization in dataset
distillation. To tackle this issue, we propose a Trust-Aware Diversion (TAD)
dataset distillation method. Our proposed TAD introduces an iterative dual-loop
optimization framework for data-effective distillation. Specifically, the outer
loop divides data into trusted and untrusted spaces, redirecting distillation
toward trusted samples to guarantee trust in the distillation process. This
step minimizes the impact of mislabeled samples on dataset distillation. The
inner loop maximizes the distillation objective by recalibrating untrusted
samples, thus transforming them into valuable ones for distillation. The two
loops iteratively refine and compensate for each other, gradually
expanding the trusted space and shrinking the untrusted space. Experiments
demonstrate that our method can significantly improve the performance of
existing dataset distillation methods on three widely used benchmarks (CIFAR10,
CIFAR100, and Tiny ImageNet) in three challenging mislabeled settings
(symmetric, asymmetric, and real-world).
|
2502.05028
|
Near-Optimal Online Learning for Multi-Agent Submodular Coordination:
Tight Approximation and Communication Efficiency
|
cs.MA cs.LG math.OC
|
Coordinating multiple agents to collaboratively maximize submodular functions
in unpredictable environments is a critical task with numerous applications in
machine learning, robot planning and control. The existing approaches, such as
the OSG algorithm, are often hindered by their poor approximation guarantees
and the rigid requirement for a fully connected communication graph. To address
these challenges, we first present the $\textbf{MA-OSMA}$ algorithm, which
employs the multi-linear extension to lift the discrete submodular
maximization problem into a continuous optimization problem, thereby allowing
us to relax the strict dependence on a complete graph through consensus
techniques.
Moreover, $\textbf{MA-OSMA}$ leverages a novel surrogate gradient to avoid
sub-optimal stationary points. To eliminate the computationally intensive
projection operations in $\textbf{MA-OSMA}$, we also introduce a
projection-free $\textbf{MA-OSEA}$ algorithm, which effectively utilizes the KL
divergence by mixing a uniform distribution. Theoretically, we confirm that
both algorithms achieve a regret bound of
$\widetilde{O}(\sqrt{\frac{C_{T}T}{1-\beta}})$ against a
$(\frac{1-e^{-c}}{c})$-approximation to the best comparator in hindsight, where
$C_{T}$ is the deviation of maximizer sequence, $\beta$ is the spectral gap of
the network and $c$ is the joint curvature of submodular objectives. This
result significantly improves the $(\frac{1}{1+c})$-approximation provided by
the state-of-the-art OSG algorithm. Finally, we demonstrate the effectiveness
of our proposed algorithms through simulation-based multi-target tracking.
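The multi-linear extension used by MA-OSMA is the standard continuous relaxation of a set function; a Monte Carlo sketch with a toy coverage function follows (the estimator and toy objective are illustrative, not the paper's implementation).

```python
# Multi-linear extension F(x) = E[f(S_x)], where S_x contains element i
# independently with probability x_i; estimated here by Monte Carlo.
import numpy as np

def multilinear_extension(f, x, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        S = {i for i in range(len(x)) if rng.random() < x[i]}
        total += f(S)
    return total / n_samples

def coverage(S):                       # toy submodular objective
    return len({j for s in S for j in (s, s + 1)})

print(multilinear_extension(coverage, [0.5, 0.5, 0.9]))
```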
|
2502.05032
|
News about Global North considered Truthful! The Geo-political Veracity
Gradient in Global South News
|
cs.LG
|
While there has been much research into developing AI techniques for fake
news detection aided by various benchmark datasets, it has often been pointed
out that fake news in different geo-political regions traces different
contours. In this work we uncover, through analytical arguments and empirical
evidence, the existence of an important characteristic in news originating from
the Global South viz., the geo-political veracity gradient. In particular, we
show that Global South news about topics from the Global North -- such as news
from an Indian news agency on US elections -- tends to be less likely to be
fake.
Observing through the prism of the political economy of fake news creation, we
posit that this pattern could be due to the relative lack of monetarily aligned
incentives in producing fake news about a different region than the regional
remit of the audience. We provide empirical evidence for this from benchmark
datasets. We also empirically analyze the consequences of this effect in
applying AI-based fake news detection models for fake news AI trained on one
region within another regional context. We locate our work within emerging
critical scholarship on geo-political biases within AI in general, particularly
with AI usage in fake news identification; we hope our insight into the
geo-political veracity gradient could help steer fake news AI scholarship
towards positively impacting Global South societies.
|
2502.05034
|
MindAligner: Explicit Brain Functional Alignment for Cross-Subject
Visual Decoding from Limited fMRI Data
|
cs.CV
|
Brain decoding aims to reconstruct the visual perception of a human subject
from fMRI signals, which is crucial for understanding the brain's perception
mechanisms. Existing methods are confined to the single-subject paradigm due
to substantial brain variability, which leads to weak generalization across
individuals and incurs high training costs, exacerbated by the limited
availability of fMRI data.
To address these challenges, we propose MindAligner, an explicit functional
alignment framework for cross-subject brain decoding from limited fMRI data.
The proposed MindAligner enjoys several merits. First, we learn a Brain
Transfer Matrix (BTM) that projects the brain signals of an arbitrary new
subject to one of the known subjects, enabling seamless use of pre-trained
decoding models. Second, to facilitate reliable BTM learning, a Brain
Functional Alignment module is proposed to perform soft cross-subject brain
alignment under different visual stimuli with a multi-level brain alignment
loss, uncovering fine-grained functional correspondences with high
interpretability. Experiments indicate that MindAligner not only outperforms
existing methods in visual decoding under data-limited conditions, but also
provides valuable neuroscience insights in cross-subject functional analysis.
The code will be made publicly available.
|
2502.05036
|
nvAgent: Automated Data Visualization from Natural Language via
Collaborative Agent Workflow
|
cs.CL
|
Natural Language to Visualization (NL2Vis) seeks to convert natural-language
descriptions into visual representations of given tables, empowering users to
derive insights from large-scale data. Recent advancements in Large Language
Models (LLMs) show promise in automating code generation to transform tabular
data into accessible visualizations. However, they often struggle with complex
queries that require reasoning across multiple tables. To address this
limitation, we propose a collaborative agent workflow, termed nvAgent, for
NL2Vis. Specifically, nvAgent comprises three agents: a processor agent for
database processing and context filtering, a composer agent for planning
visualization generation, and a validator agent for code translation and output
verification. Comprehensive evaluations on the new VisEval benchmark
demonstrate that nvAgent consistently surpasses state-of-the-art baselines,
achieving a 7.88% improvement in single-table and a 9.23% improvement in
multi-table scenarios. Qualitative analyses further highlight that nvAgent
maintains nearly a 20% performance margin over previous models, underscoring
its capacity to produce high-quality visual representations from complex,
heterogeneous data sources.
|
2502.05037
|
Leveraging a Simulator for Learning Causal Representations from
Post-Treatment Covariates for CATE
|
cs.LG
|
Treatment effect estimation involves assessing the impact of different
treatments on individual outcomes. Current methods estimate Conditional Average
Treatment Effect (CATE) using observational datasets where covariates are
collected before treatment assignment and outcomes are observed afterward,
under assumptions like positivity and unconfoundedness. In this paper, we
address a scenario where both covariates and outcomes are gathered after
treatment. We show that post-treatment covariates render CATE unidentifiable,
and recovering CATE requires learning treatment-independent causal
representations. Prior work shows that such representations can be learned
through contrastive learning if counterfactual supervision is available in
observational data. However, since counterfactuals are rare, other works have
explored using simulators that offer synthetic counterfactual supervision. Our
goal in this paper is to systematically analyze the role of simulators in
estimating CATE. We analyze the CATE error of several baselines and highlight
their limitations. We then establish a generalization bound that characterizes
the CATE error from jointly training on real and simulated distributions, as a
function of the real-simulator mismatch. Finally, we introduce SimPONet, a
novel method whose loss function is inspired from our generalization bound. We
further show how SimPONet adjusts the simulator's influence on the learning
objective based on the simulator's relevance to the CATE task. We experiment
with various DGPs, by systematically varying the real-simulator distribution
gap to evaluate SimPONet's efficacy against state-of-the-art CATE baselines.
|
2502.05038
|
FlightForge: Advancing UAV Research with Procedural Generation of
High-Fidelity Simulation and Integrated Autonomy
|
cs.RO cs.CV
|
Robotic simulators play a crucial role in the development and testing of
autonomous systems, particularly in the realm of Uncrewed Aerial Vehicles
(UAV). However, existing simulators often lack high-level autonomy, hindering
their immediate applicability to complex tasks such as autonomous navigation in
unknown environments. This limitation stems from the challenge of integrating
realistic physics, photorealistic rendering, and diverse sensor modalities into
a single simulation environment. At the same time, the existing photorealistic
UAV simulators use mostly hand-crafted environments with limited environment
sizes, which prevents the testing of long-range missions. This restricts the
usage of existing simulators to only low-level tasks such as control and
collision avoidance. To this end, we propose the novel FlightForge UAV
open-source simulator. FlightForge offers advanced rendering capabilities,
diverse control modalities, and, foremost, procedural generation of
environments. Moreover, the simulator is already integrated with a fully
autonomous UAV system capable of long-range flights in cluttered unknown
environments. The key innovation lies in novel procedural environment
generation and seamless integration of high-level autonomy into the simulation
environment. Experimental results demonstrate superior sensor rendering
capability compared to existing simulators, as well as the ability to navigate
autonomously in effectively unbounded environments.
|
2502.05040
|
GaussRender: Learning 3D Occupancy with Gaussian Rendering
|
cs.CV
|
Understanding the 3D geometry and semantics of driving scenes is critical for
the development of safe autonomous vehicles. While 3D occupancy models are
typically
trained using voxel-based supervision with standard losses (e.g.,
cross-entropy, Lovasz, dice), these approaches treat voxel predictions
independently, neglecting their spatial relationships. In this paper, we
propose GaussRender, a plug-and-play 3D-to-2D reprojection loss that enhances
voxel-based supervision. Our method projects 3D voxel representations into
arbitrary 2D perspectives and leverages Gaussian splatting as an efficient,
differentiable rendering proxy of voxels, introducing spatial dependencies
across projected elements. This approach improves semantic and geometric
consistency, handles occlusions more efficiently, and requires no architectural
modifications. Extensive experiments on multiple benchmarks
(SurroundOcc-nuScenes, Occ3D-nuScenes, SSCBench-KITTI360) demonstrate
consistent performance gains across various 3D occupancy models (TPVFormer,
SurroundOcc, Symphonies), highlighting the robustness and versatility of our
framework. The code is available at https://github.com/valeoai/GaussRender.
|
2502.05041
|
Federated Learning for Anomaly Detection in Energy Consumption Data:
Assessing the Vulnerability to Adversarial Attacks
|
cs.LG cs.AI cs.DC
|
Anomaly detection is crucial in the energy sector to identify irregular
patterns indicating equipment failures, energy theft, or other issues. Machine
learning techniques for anomaly detection have achieved great success, but are
typically centralized, involving sharing local data with a central server which
raises privacy and security concerns. Federated Learning (FL) has been gaining
popularity as it enables distributed learning without sharing local data.
However, FL depends on neural networks, which are vulnerable to adversarial
attacks that manipulate data, leading models to make erroneous predictions.
While adversarial attacks have been explored in the image domain, they remain
largely unexplored in time series problems, especially in the energy domain.
Moreover, the effect of adversarial attacks in the FL setting is also mostly
unknown. This paper assesses the vulnerability of FL-based anomaly detection in
energy data to adversarial attacks. Specifically, two state-of-the-art models,
Long Short Term Memory (LSTM) and Transformers, are used to detect anomalies in
an FL setting, and two white-box attack methods, Fast Gradient Sign Method
(FGSM) and Projected Gradient Descent (PGD), are employed to perturb the data.
The results show that FL is more sensitive to PGD attacks than to FGSM attacks,
attributed to PGD's iterative nature, resulting in an accuracy drop of over 10%
even with naive, weaker attacks. Moreover, FL is more affected by these attacks
than centralized learning, highlighting the need for defense mechanisms in FL.
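For context, the simpler of the two attack methods, FGSM, is a single signed-gradient step (PGD repeats such steps with projection onto the $\epsilon$-ball). A minimal PyTorch sketch:

```python
# Fast Gradient Sign Method: perturb the input in the direction of the sign
# of the loss gradient; PGD iterates this step with projection.
import torch

def fgsm(model, x, y, loss_fn, eps):
    """Return adversarial examples x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```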
|
2502.05044
|
Hybrid machine learning based scale bridging framework for permeability
prediction of fibrous structures
|
cs.LG
|
This study introduces a hybrid machine learning-based scale-bridging
framework for predicting the permeability of fibrous textile structures. By
addressing the computational challenges inherent to multiscale modeling, the
proposed approach evaluates the efficiency and accuracy of different
scale-bridging methodologies combining traditional surrogate models and even
integrating physics-informed neural networks (PINNs) with numerical solvers,
enabling accurate permeability predictions across micro- and mesoscales. Four
methodologies were evaluated: Single Scale Method (SSM), Simple Upscaling
Method (SUM), Scale-Bridging Method (SBM), and Fully Resolved Model (FRM). SSM,
the simplest method, neglects microscale permeability and exhibited
permeability values deviating by up to 150\% from the FRM model, which was
taken as ground truth, at an equivalent lower fiber volume content. SUM
improved
predictions by considering uniform microscale permeability, yielding closer
values under similar conditions, but still lacked structural variability. The
SBM method, incorporating segment-based microscale permeability assignments,
showed significant enhancements, achieving almost equivalent values while
maintaining computational efficiency and modeling runtimes of ~45 minutes per
simulation. In contrast, FRM, which provides the highest fidelity by fully
resolving microscale and mesoscale geometries, required up to 270 times more
computational time than SSM, with model files exceeding 300 GB. Additionally, a
hybrid dual-scale solver incorporating PINNs has been developed and shows the
potential to overcome the generalization errors and data scarcity of
data-driven surrogate approaches. The hybrid framework advances
permeability modelling by balancing computational cost and prediction
reliability, laying the foundation for further applications in fibrous
composite manufacturing.
|
2502.05049
|
On the Inference of Sociodemographics on Reddit
|
cs.SI cs.CY
|
Inference of sociodemographic attributes of social media users is an
essential step for computational social science (CSS) research to link online
and offline behavior. However, there is a lack of a systematic evaluation and
clear guidelines for optimal methodologies for this task on Reddit, one of
today's largest social media. In this study, we fill this gap by comparing
state-of-the-art (SOTA) and probabilistic models.
To this end, first we collect a novel data set of more than 850k
self-declarations on age, gender, and partisan affiliation from Reddit
comments. Then, we systematically compare alternatives to the widely used
embedding-based model and labeling techniques for the definition of the
ground-truth. We do so on two tasks: ($i$) predicting binary labels
(classification); and ($ii$)~predicting the prevalence of a demographic class
among a set of users (quantification).
Our findings reveal that Naive Bayes models not only offer transparency and
interpretability by design but also consistently outperform the SOTA.
Specifically, they achieve an improvement in ROC AUC of up to $19\%$ and
maintain a mean absolute error (MAE) below $15\%$ in quantification for
large-scale data settings. Finally, we discuss best practices for researchers
in CSS, emphasizing coverage, interpretability, reliability, and scalability.
The code and model weights used for the experiments are publicly
available.\footnote{https://anonymous.4open.science/r/SDI-submission-5234}
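A minimal sketch of the kind of transparent Naive Bayes pipeline the study finds competitive is shown below (toy data; the paper's features, preprocessing, and labeling scheme are not reproduced here).

```python
# Bag-of-words Naive Bayes classifier: per-class word log-probabilities are
# directly inspectable, which is the interpretability-by-design highlighted.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = ["I'm a 25 year old guy ...", "as a mom of two ..."]  # toy data
labels = ["male", "female"]

clf = make_pipeline(CountVectorizer(min_df=1), MultinomialNB())
clf.fit(comments, labels)
print(clf.predict(["speaking as a 30 yo woman"]))
```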
|
2502.05053
|
Gaze-Guided Robotic Vascular Ultrasound Leveraging Human Intention
Estimation
|
cs.RO
|
Medical ultrasound has been widely used to examine vascular structure in
modern clinical practice. However, traditional ultrasound examination often
faces challenges related to inter- and intra-operator variation. The robotic
ultrasound system (RUSS) appears as a potential solution for such challenges
because of its superiority in stability and reproducibility. Given the complex
anatomy of human vasculature, multiple vessels often appear in ultrasound
images, or a single vessel bifurcates into branches, complicating the
examination process. To tackle this challenge, this work presents a gaze-guided
RUSS for vascular applications. A gaze tracker captures the eye movements of
the operator. The extracted gaze signal guides the RUSS to follow the correct
vessel when it bifurcates. Additionally, a gaze-guided segmentation network is
proposed to enhance segmentation robustness by exploiting gaze information.
However, gaze signals are often noisy, requiring interpretation to accurately
discern the operator's true intentions. To this end, this study proposes a
stabilization module to process raw gaze data. The inferred attention heatmap
is utilized as a region proposal to aid segmentation and serve as a trigger
signal when the operator needs to adjust the scanning target, such as when a
bifurcation appears. To ensure appropriate contact between the probe and
surface during scanning, an automatic ultrasound confidence-based orientation
correction method is developed. In experiments, we demonstrated the efficiency
of the proposed gaze-guided segmentation pipeline by comparing it with other
methods. Besides, the performance of the proposed gaze-guided RUSS was also
validated as a whole on a realistic arm phantom with an uneven surface.
|
2502.05055
|
Differentiable Mobile Display Photometric Stereo
|
cs.CV cs.AI cs.GR cs.LG
|
Display photometric stereo uses a display as a programmable light source to
illuminate a scene with diverse illumination conditions. Recently,
differentiable display photometric stereo (DDPS) demonstrated improved normal
reconstruction accuracy by using learned display patterns. However, DDPS faced
limitations in practicality, requiring a fixed desktop imaging setup using a
polarization camera and a desktop-scale monitor. In this paper, we propose a
more practical physics-based photometric stereo, differentiable mobile display
photometric stereo (DMDPS), that leverages a mobile phone consisting of a
display and a camera. We overcome the limitations of using a mobile device by
developing a mobile app and method that simultaneously displays patterns and
captures high-quality HDR images. Using this technique, we capture real-world
3D-printed objects and learn display patterns via a differentiable learning
process. We demonstrate the effectiveness of DMDPS on both a 3D printed dataset
and a first dataset of fallen leaves. The leaf dataset contains reconstructed
surface normals and albedos of fallen leaves that may enable future research
beyond computer graphics and vision. We believe that DMDPS takes a step forward
for practical physics-based photometric stereo.
|
2502.05060
|
Preference-aware compensation policies for crowdsourced on-demand
services
|
cs.LG cs.AI math.OC
|
Crowdsourced on-demand services offer benefits such as reduced costs, faster
service fulfillment times, greater adaptability, and contributions to
sustainable urban transportation in on-demand delivery contexts. However, the
success of an on-demand platform that utilizes crowdsourcing relies on finding
a compensation policy that strikes a balance between creating attractive offers
for gig workers and ensuring profitability. In this work, we examine a dynamic
pricing problem for an on-demand platform that sets request-specific
compensation of gig workers in a discrete-time framework, where requests and
workers arrive stochastically. The operator's goal is to determine a
compensation policy that maximizes the total expected reward over the time
horizon. Our approach introduces compensation strategies that explicitly
account for gig worker request preferences. To achieve this, we employ the
Multinomial Logit model to represent the acceptance probabilities of gig
workers, and, as a result, derive an analytical solution that utilizes
post-decision states. Subsequently, we integrate this solution into an
approximate dynamic programming algorithm. We compare our algorithm against
benchmark algorithms, including formula-based policies and an upper bound
provided by the full information linear programming solution. Our algorithm
demonstrates consistent performance across diverse settings, achieving
improvements of at least 2.5-7.5% in homogeneous gig worker populations and 9%
in heterogeneous populations over benchmarks, based on fully synthetic data.
For real-world data, it surpasses benchmarks by 8% in weak and 20% in strong
location preference scenarios.
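For illustration, the binary-choice special case of the Multinomial Logit acceptance model looks as follows; the feature set and coefficients are invented for the sketch.

```python
# Logit acceptance probability for a single offered request (illustrative
# utility features: compensation and detour distance).
import numpy as np

def acceptance_prob(compensation, detour_km, beta=(0.8, -0.5), beta0=-1.0):
    """P(accept) = exp(v) / (1 + exp(v)), v = beta0 + beta . features."""
    v = beta0 + beta[0] * compensation + beta[1] * detour_km
    return 1.0 / (1.0 + np.exp(-v))

print(acceptance_prob(compensation=6.0, detour_km=3.0))  # ~0.91
```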
|
2502.05063
|
Computing and Learning on Combinatorial Data
|
cs.AI cs.DM cs.DS
|
The twenty-first century is a data-driven era in which human activities and
behavior, physical phenomena, scientific discoveries, technology advancements,
and almost everything else that happens in the world result in the massive
generation, collection, and utilization of data.
Connectivity in data is a crucial property. A straightforward example is the
World Wide Web, where every webpage is connected to other web pages through
hyperlinks, providing a form of directed connectivity. Combinatorial data
refers to combinations of data items based on certain connectivity rules. Other
forms of combinatorial data include social networks, meshes, community
clusters, set systems, and molecules.
This Ph.D. dissertation focuses on learning and computing with combinatorial
data. We study and examine topological and connectivity features within and
across connected data to improve the performance of learning and achieve high
algorithmic efficiency.
|
2502.05066
|
Beautiful Images, Toxic Words: Understanding and Addressing Offensive
Text in Generated Images
|
cs.CV
|
State-of-the-art visual generation models, such as Diffusion Models (DMs) and
Vision Auto-Regressive Models (VARs), produce highly realistic images. While
prior work has successfully mitigated Not Safe For Work (NSFW) content in the
visual domain, we identify a novel threat: the generation of NSFW text embedded
within images. This includes offensive language, such as insults, racial slurs,
and sexually explicit terms, posing significant risks to users. We show that
all state-of-the-art DMs (e.g., SD3, Flux, DeepFloyd IF) and VARs (e.g.,
Infinity) are vulnerable to this issue. Through extensive experiments, we
demonstrate that existing mitigation techniques, effective for visual content,
fail to prevent harmful text generation while substantially degrading benign
text generation. As an initial step toward addressing this threat, we explore
safety fine-tuning of the text encoder underlying major DM architectures using
a customized dataset. Thereby, we suppress NSFW generation while preserving
overall image and text generation quality. Finally, to advance research in this
area, we introduce ToxicBench, an open-source benchmark for evaluating NSFW
text generation in images. ToxicBench provides a curated dataset of harmful
prompts, new metrics, and an evaluation pipeline assessing both NSFW-ness and
generation quality. Our benchmark aims to guide future efforts in mitigating
NSFW text generation in text-to-image models.
|
2502.05069
|
Exploring the Generalizability of Geomagnetic Navigation: A Deep
Reinforcement Learning approach with Policy Distillation
|
cs.RO
|
Advances in autonomous vehicles have enabled navigation and
exploration in unknown environments. Geomagnetic navigation for autonomous
vehicles has drawn increasing attention with its independence from GPS or
inertial navigation devices. While geomagnetic navigation approaches have been
extensively investigated, the generalizability of learned geomagnetic
navigation strategies remains unexplored. The performance of a learned strategy
can degrade outside of its source domain where the strategy is learned, due to
a lack of knowledge about the geomagnetic characteristics in newly entered
areas. This paper explores the generalization of learned geomagnetic navigation
strategies via deep reinforcement learning (DRL). Particularly, we employ DRL
agents to learn multiple teacher models from distributed domains that represent
dispersed navigation strategies, and amalgamate the teacher models for
generalizability across navigation areas. We design a reward shaping mechanism
in training teacher models where we integrate both potential-based and
intrinsic-motivated rewards. The designed reward shaping can enhance the
exploration efficiency of the DRL agent and improve the representation of the
teacher models. Building on the trained teacher models, we employ
multi-teacher policy
distillation to merge the policies learned by individual teachers, leading to a
navigation strategy with generalizability across navigation domains. We conduct
numerical simulations, and the results demonstrate an effective transfer of the
learned DRL model from a source domain to new navigation areas. Compared to
existing evolutionary-based geomagnetic navigation methods, our approach
provides superior performance in terms of navigation length, duration, heading
deviation, and success rate in cross-domain navigation.
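The multi-teacher policy distillation step can be sketched as a KL-matching objective between each teacher's action distribution and the student's. This is a standard formulation; the details of the paper's amalgamation may differ.

```python
# Multi-teacher policy distillation loss: average KL(teacher || student)
# across domain-specific teachers (the DRL training loop is omitted).
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits_per_domain):
    """Average KL divergence from the student to each teacher's policy."""
    loss = 0.0
    for t_logits in teacher_logits_per_domain:
        loss += F.kl_div(F.log_softmax(student_logits, dim=-1),
                         F.softmax(t_logits, dim=-1),
                         reduction="batchmean")
    return loss / len(teacher_logits_per_domain)
```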
|
2502.05073
|
Noise Sensitivity of Hierarchical Functions and Deep Learning Lower
Bounds in General Product Measures
|
math.PR cs.CC cs.LG math.CO
|
Recent works explore deep learning's success by examining functions or data
with hierarchical structure. Complementarily, research on gradient descent
performance for deep nets has shown that noise sensitivity of functions under
independent and identically distributed (i.i.d.) Bernoulli inputs establishes
learning complexity bounds. This paper aims to bridge these research streams by
demonstrating that functions constructed through repeated composition of
non-linear functions are noise sensitive under general product measures.
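For reference, the standard Boolean-analysis definition being generalized, sketched here for the i.i.d. case; the paper's contribution extends such statements to general product measures.

```latex
% Draw x with i.i.d. coordinates, and form y by independently resampling
% each coordinate of x with probability \delta. Then
\[
  \mathrm{NS}_{\delta}(f) \;=\; \Pr_{x,\,y}\left[ f(x) \neq f(y) \right],
\]
% and a sequence of Boolean functions (f_n) is noise sensitive if, for every
% fixed \delta > 0,
\[
  \operatorname{Cov}\!\left( f_n(x),\, f_n(y) \right) \;\longrightarrow\; 0 .
\]
```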
|
2502.05074
|
Two-Point Deterministic Equivalence for Stochastic Gradient Dynamics in
Linear Models
|
cond-mat.dis-nn cs.LG stat.ML
|
We derive a novel deterministic equivalence for the two-point function of a
random matrix resolvent. Using this result, we give a unified derivation of the
performance of a wide variety of high-dimensional linear models trained with
stochastic gradient descent. This includes high-dimensional linear regression,
kernel regression, and random feature models. Our results include previously
known asymptotics as well as novel ones.
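For orientation, here is a sketch of the kind of object involved; this is a paraphrase of the standard random-matrix setup, not the paper's exact statement.

```latex
% For a random matrix A with resolvent G(z) = (A - z I)^{-1}, a one-point
% deterministic equivalence replaces traces tr[B G(z)] by a deterministic
% function of z. The two-point analogue does the same for quantities
% coupling two spectral arguments,
\[
  \operatorname{tr}\!\left[ B_1\, G(z_1)\, B_2\, G(z_2) \right]
  \;\approx\; \text{deterministic function of } (z_1, z_2, B_1, B_2),
\]
% which is the kind of quantity that SGD learning curves for linear models
% reduce to.
```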
|
2502.05075
|
Discrepancies are Virtue: Weak-to-Strong Generalization through Lens of
Intrinsic Dimension
|
cs.LG cs.NA math.NA stat.ML
|
Weak-to-strong (W2S) generalization is a type of finetuning (FT) where a
strong (large) student model is trained on pseudo-labels generated by a weak
teacher. Surprisingly, W2S FT often outperforms the weak teacher. We seek to
understand this phenomenon through the observation that FT often occurs in
intrinsically low-dimensional spaces. Leveraging the low intrinsic
dimensionality of FT, we analyze W2S in the ridgeless regression setting from a
variance reduction perspective. For a strong student - weak teacher pair with
sufficiently expressive low-dimensional feature subspaces $\mathcal{V}_s,
\mathcal{V}_w$, we provide an exact characterization of the variance that
dominates the generalization error of W2S. This unveils a virtue of discrepancy
between the strong and weak models in W2S: the variance of the weak teacher is
inherited by the strong student in $\mathcal{V}_s \cap \mathcal{V}_w$, while
reduced by a factor of $\dim(\mathcal{V}_s)/N$ in the subspace of discrepancy
$\mathcal{V}_w \setminus \mathcal{V}_s$ with $N$ pseudo-labels for W2S.
Further, our analysis casts light on the sample complexities and the scaling of
performance gap recovery in W2S. The analysis is supported with experiments on
both synthetic regression problems and real vision tasks.
|
2502.05076
|
Paying Attention to Facts: Quantifying the Knowledge Capacity of
Attention Layers
|
cs.LG cs.CL
|
In this paper, we investigate the ability of single-layer attention-only
transformers (i.e. attention layers) to memorize facts contained in databases
from a linear-algebraic perspective. We associate with each database a
3-tensor, propose the rank of this tensor as a measure of the size of the
database, and provide bounds on the rank in terms of properties of the
database. We also define a 3-tensor corresponding to an attention layer, and
empirically demonstrate the relationship between its rank and database rank on
a dataset of toy models and random databases. By highlighting the roles played
by the value-output and query-key weights, and the effects of argmax and
softmax on rank, our results shed light on the `additive motif' of factual
recall in transformers, while also suggesting a way of increasing layer
capacity without increasing the number of parameters.
|
2502.05078
|
Adaptive Graph of Thoughts: Test-Time Adaptive Reasoning Unifying Chain,
Tree, and Graph Structures
|
cs.AI cs.CL
|
Large Language Models (LLMs) have demonstrated impressive reasoning
capabilities, yet their performance is highly dependent on the prompting
strategy and model scale. While reinforcement learning and fine-tuning have
been deployed to boost reasoning, these approaches incur substantial
computational and data overhead. In this work, we introduce Adaptive Graph of
Thoughts (AGoT), a dynamic, graph-based inference framework that enhances LLM
reasoning solely at test time. Rather than relying on fixed-step methods like
Chain of Thought (CoT) or Tree of Thoughts (ToT), AGoT recursively decomposes
complex queries into structured subproblems, forming a dynamic directed
acyclic graph (DAG) of interdependent reasoning steps. By selectively expanding
only those subproblems that require further analysis, AGoT unifies the
strengths of chain, tree, and graph paradigms into a cohesive framework that
allocates computation where it is most needed. We validate our approach on
diverse benchmarks spanning multi-hop retrieval, scientific reasoning, and
mathematical problem-solving, achieving up to 46.2% improvement on scientific
reasoning tasks (GPQA) - comparable to gains achieved through computationally
intensive reinforcement learning approaches and outperforming state-of-the-art
iterative approaches. These results suggest that dynamic decomposition and
structured recursion offer a scalable, cost-effective alternative to
post-training modifications, paving the way for more robust, general-purpose
reasoning in LLMs.
|
2502.05084
|
ChallengeMe: An Adversarial Learning-enabled Text Summarization
Framework
|
cs.CL cs.AI
|
The astonishing performance of large language models (LLMs) and their
remarkable achievements in production and daily life have led to their
widespread application in collaborative tasks. However, current large models
face challenges such as hallucination and lack of specificity in content
generation in vertical domain tasks. Inspired by the contrast and
classification mechanisms in human cognitive processes, this paper constructs
an adversarial learning-based prompt framework named ChallengeMe, which
includes three cascaded solutions: generation prompts, evaluation prompts, and
feedback optimization. In this process, we designed seven core optimization
dimensions and set the threshold for adversarial learning. The results of mixed
case studies on the text summarization task show that the proposed framework
can generate more accurate and fluent text summaries compared to the current
advanced mainstream LLMs.
|
2502.05085
|
Causality can systematically address the monsters under the bench(marks)
|
cs.LG cs.AI
|
Effective and reliable evaluation is essential for advancing empirical
machine learning. However, the increasing accessibility of generalist models
and the progress towards ever more complex, high-level tasks make systematic
evaluation more challenging. Benchmarks are plagued by various biases,
artifacts, or leakage, while models may behave unreliably due to poorly
explored failure modes. Haphazard treatments and inconsistent formulations of
such "monsters" can contribute to a duplication of efforts, a lack of trust in
results, and unsupported inferences. In this position paper, we argue causality
offers an ideal framework to systematically address these challenges. By making
causal assumptions in an approach explicit, we can faithfully model phenomena,
formulate testable hypotheses with explanatory power, and leverage principled
tools for analysis. To make causal model design more accessible, we identify
several useful Common Abstract Topologies (CATs) in causal graphs which help
gain insight into the reasoning abilities in large language models. Through a
series of case studies, we demonstrate how the precise yet pragmatic language
of causality clarifies the strengths and limitations of a method and inspires
new approaches for systematic progress.
|
2502.05086
|
REASSEMBLE: A Multimodal Dataset for Contact-rich Robotic Assembly and
Disassembly
|
cs.RO
|
Robotic manipulation remains a core challenge in robotics, particularly for
contact-rich tasks such as industrial assembly and disassembly. Existing
datasets have significantly advanced learning in manipulation but are primarily
focused on simpler tasks like object rearrangement, falling short of capturing
the complexity and physical dynamics involved in assembly and disassembly. To
bridge this gap, we present REASSEMBLE (Robotic assEmbly disASSEMBLy datasEt),
a new dataset designed specifically for contact-rich manipulation tasks. Built
around the NIST Assembly Task Board 1 benchmark, REASSEMBLE includes four
actions (pick, insert, remove, and place) involving 17 objects. The dataset
contains 4,551 demonstrations, of which 4,035 were successful, spanning a total
of 781 minutes. Our dataset features multi-modal sensor data including event
cameras, force-torque sensors, microphones, and multi-view RGB cameras. This
diverse dataset supports research in areas such as learning contact-rich
manipulation, task condition identification, action segmentation, and more. We
believe REASSEMBLE will be a valuable resource for advancing robotic
manipulation in complex, real-world scenarios. The dataset is publicly
available on our project website:
https://dsliwowski1.github.io/REASSEMBLE_page.
|
2502.05087
|
Mitigating Unintended Memorization with LoRA in Federated Learning for
LLMs
|
cs.LG cs.AI cs.CL
|
Federated learning (FL) is a popular paradigm for collaborative training
which avoids direct data exposure between clients. However, data privacy issues
still remain: FL-trained large language models are capable of memorizing and
completing phrases and sentences contained in training data when given
their prefixes. Thus, it is possible for adversarial and honest-but-curious
clients to recover training data of other participants simply through targeted
prompting. In this work, we demonstrate that a popular and simple fine-tuning
strategy, low-rank adaptation (LoRA), reduces memorization during FL up to a
factor of 10. We study this effect by performing a medical question-answering
fine-tuning task and injecting multiple replicas of out-of-distribution
sensitive sequences drawn from an external clinical dataset. We observe a
reduction in memorization for a wide variety of Llama 2 and 3 models, and find
that LoRA can reduce memorization in centralized learning as well. Furthermore,
we show that LoRA can be combined with other privacy-preserving techniques such
as gradient clipping and Gaussian noising, secure aggregation, and Goldfish
loss to further improve record-level privacy while maintaining performance.
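The LoRA reparameterization studied here is standard; a minimal PyTorch sketch (rank, scaling, and layer choice are illustrative):

```python
# LoRA: the frozen weight W is adapted by a trainable low-rank update
# (B @ A) * (alpha / r), so far fewer parameters are updated than in full
# fine-tuning.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```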
|
2502.05091
|
DCFormer: Efficient 3D Vision-Language Modeling with Decomposed
Convolutions
|
cs.CV
|
Vision-language models (VLMs) align visual and textual representations,
enabling high-performance zero-shot classification and image-text retrieval in
2D medical imaging. However, extending VLMs to 3D medical imaging remains
computationally challenging. Existing 3D VLMs rely on Vision Transformers
(ViTs), which are computationally expensive due to self-attention's quadratic
complexity, or 3D convolutions, which demand excessive parameters and FLOPs as
kernel size increases. We introduce DCFormer, an efficient 3D medical image
encoder that factorizes 3D convolutions into three parallel 1D convolutions
along depth, height, and width. This design preserves spatial information while
significantly reducing computational cost. Integrated into a CLIP-based
vision-language framework, DCFormer is evaluated on CT-RATE, a dataset of
50,188 paired 3D chest CT volumes and radiology reports, for zero-shot
multi-abnormality detection across 18 pathologies. Compared to ViT, ConvNeXt,
PoolFormer, and TransUNet, DCFormer achieves superior efficiency and accuracy,
with DCFormer-Tiny reaching 62.0% accuracy and a 46.3% F1-score while using
significantly fewer parameters. These results highlight DCFormer's potential
for scalable, clinically deployable 3D medical VLMs. Our codes will be publicly
available.
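The factorization described in the abstract can be sketched directly in PyTorch. Details beyond the abstract, such as how the three branches are fused, are assumptions; summation is used here.

```python
# A k x k x k 3D convolution replaced by three parallel 1D convolutions
# along depth, height, and width, cutting per-channel-pair parameters from
# O(k^3) to O(3k).
import torch
import torch.nn as nn

class DecomposedConv3d(nn.Module):
    def __init__(self, c_in, c_out, k=7):
        super().__init__()
        p = k // 2
        self.d = nn.Conv3d(c_in, c_out, (k, 1, 1), padding=(p, 0, 0))
        self.h = nn.Conv3d(c_in, c_out, (1, k, 1), padding=(0, p, 0))
        self.w = nn.Conv3d(c_in, c_out, (1, 1, k), padding=(0, 0, p))

    def forward(self, x):                      # x: (B, C, D, H, W)
        return self.d(x) + self.h(x) + self.w(x)

x = torch.randn(1, 8, 16, 32, 32)
print(DecomposedConv3d(8, 16)(x).shape)        # torch.Size([1, 16, 16, 32, 32])
```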
|
2502.05092
|
Lost in Time: Clock and Calendar Understanding Challenges in Multimodal
LLMs
|
cs.CV cs.AI cs.CL
|
Understanding time from visual representations is a fundamental cognitive
skill, yet it remains a challenge for multimodal large language models (MLLMs).
In this work, we investigate the capabilities of MLLMs in interpreting time and
date through analogue clocks and yearly calendars. To facilitate this, we
curated a structured dataset comprising two subsets: 1) $\textit{ClockQA}$,
which comprises various types of clock styles$-$standard, black-dial,
no-second-hand, Roman numeral, and arrow-hand clocks$-$paired with
time-related
questions; and 2) $\textit{CalendarQA}$, which consists of yearly calendar
images with questions ranging from commonly known dates (e.g., Christmas, New
Year's Day) to computationally derived ones (e.g., the 100th or 153rd day of
the year). We aim to analyse how MLLMs can perform visual recognition,
numerical reasoning, and temporal inference when presented with time-related
visual data. Our evaluations show that despite recent advancements, reliably
understanding time remains a significant challenge for MLLMs.
|
2502.05094
|
Non-linear Quantum Monte Carlo
|
quant-ph cs.LG cs.NA math.NA stat.CO stat.ML
|
The mean of a random variable can be understood as a $\textit{linear}$
functional on the space of probability distributions. Quantum computing is
known to provide a quadratic speedup over classical Monte Carlo methods for
mean estimation. In this paper, we investigate whether a similar quadratic
speedup is achievable for estimating $\textit{non-linear}$ functionals of
probability distributions. We propose a quantum-inside-quantum Monte Carlo
algorithm that achieves such a speedup for a broad class of non-linear
estimation problems, including nested conditional expectations and stochastic
optimization. Our algorithm improves upon the direct application of the quantum
multilevel Monte Carlo algorithm introduced by An et al. The existing lower
bound indicates that our algorithm is optimal up to polylogarithmic factors. A
key
innovation of our approach is a new sequence of multilevel Monte Carlo
approximations specifically designed for quantum computing, which is central to
the algorithm's improved performance.
|
2502.05098
|
Learning Temporal Invariance in Android Malware Detectors
|
cs.CR cs.AI
|
Learning-based Android malware detectors degrade over time due to natural
distribution drift caused by malware variants and new families. This paper
systematically investigates the challenges classifiers trained with empirical
risk minimization (ERM) face against such distribution shifts and attributes
their shortcomings to their inability to learn stable discriminative features.
Invariant learning theory offers a promising solution by encouraging models to
generate stable representations across environments that expose the
instability of the training set. However, the lack of prior environment
labels,
the diversity of drift factors, and low-quality representations caused by
diverse families make this task challenging. To address these issues, we
propose TIF, the first temporal invariant training framework for malware
detection, which aims to enhance the ability of detectors to learn stable
representations across time. TIF organizes environments based on application
observation dates to reveal temporal drift, integrating specialized multi-proxy
contrastive learning and invariant gradient alignment to generate and align
environments with high-quality, stable representations. TIF can be seamlessly
integrated into any learning-based detector. Experiments on a decade-long
dataset show that TIF excels, particularly in early deployment stages,
addressing real-world needs and outperforming state-of-the-art methods.
|
2502.05104
|
Leveraging Hypernetworks and Learnable Kernels for Consumer Energy
Forecasting Across Diverse Consumer Types
|
cs.LG cs.AI
|
Consumer energy forecasting is essential for managing energy consumption and
planning, directly influencing operational efficiency, cost reduction,
personalized energy management, and sustainability efforts. In recent years,
deep learning techniques, especially LSTMs and transformers, have been greatly
successful in the field of energy consumption forecasting. Nevertheless, these
techniques have difficulties in capturing complex and sudden variations, and,
moreover, they are commonly examined only on a specific type of consumer (e.g.,
only offices, only schools). Consequently, this paper proposes HyperEnergy, a
consumer energy forecasting strategy that leverages hypernetworks for improved
modeling of complex patterns applicable across a diversity of consumers.
The hypernetwork is responsible for predicting the parameters of the primary
prediction network, in our case an LSTM. A learnable adaptive kernel, composed
of polynomial and radial basis function kernels, is incorporated to enhance
performance. The proposed HyperEnergy was evaluated on diverse consumers
including student residences, detached homes, a home with electric vehicle
charging, and a townhouse. Across all consumer types, HyperEnergy consistently
outperformed 10 other techniques, including state-of-the-art models such as
LSTM, AttentionLSTM, and transformer.
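The core hypernetwork mechanism can be sketched compactly. For brevity, the primary model below is a single linear layer rather than the paper's LSTM, and all dimensions are illustrative.

```python
# A small hypernetwork maps a consumer/context embedding to the weights of
# the primary predictor.
import torch
import torch.nn as nn

class HyperLinear(nn.Module):
    def __init__(self, ctx_dim, in_dim, out_dim):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        n_params = in_dim * out_dim + out_dim
        self.hyper = nn.Sequential(nn.Linear(ctx_dim, 64), nn.ReLU(),
                                   nn.Linear(64, n_params))

    def forward(self, x, ctx):
        params = self.hyper(ctx)                  # predicted weights + bias
        W = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim:]
        return x @ W.T + b

x, ctx = torch.randn(24), torch.randn(4)  # 24-step load window, 4-dim context
print(HyperLinear(ctx_dim=4, in_dim=24, out_dim=1)(x, ctx))
```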
|
2502.05107
|
3DMolFormer: A Dual-channel Framework for Structure-based Drug Discovery
|
cs.CE cs.LG
|
Structure-based drug discovery, encompassing the tasks of protein-ligand
docking and pocket-aware 3D drug design, represents a core challenge in drug
discovery. However, no existing work can deal with both tasks to effectively
leverage the duality between them, and current methods for each task are
hindered by challenges in modeling 3D information and the limitations of
available data. To address these issues, we propose 3DMolFormer, a unified
dual-channel transformer-based framework applicable to both docking and 3D drug
design tasks, which exploits their duality by utilizing docking functionalities
within the drug design process. Specifically, we represent 3D pocket-ligand
complexes using parallel sequences of discrete tokens and continuous numbers,
and we design a corresponding dual-channel transformer model to handle this
format, thereby overcoming the challenges of 3D information modeling.
Additionally, we alleviate data limitations through large-scale pre-training on
a mixed dataset, followed by supervised and reinforcement learning fine-tuning
techniques respectively tailored for the two tasks. Experimental results
demonstrate that 3DMolFormer outperforms previous approaches in both
protein-ligand docking and pocket-aware 3D drug design, highlighting its
promising application in structure-based drug discovery. The code is available
at: https://github.com/HXYfighter/3DMolFormer .
|
2502.05109
|
Graph Contrastive Learning for Connectome Classification
|
cs.LG
|
With recent advancements in non-invasive techniques for measuring brain
activity, such as magnetic resonance imaging (MRI), the study of structural and
functional brain networks through graph signal processing (GSP) has gained
notable prominence. GSP stands as a key tool in unraveling the interplay
between the brain's function and structure, enabling the analysis of graphs
defined by the connections between regions of interest -- referred to as
connectomes in this context. Our work represents a further step in this
direction by exploring supervised contrastive learning methods within the realm
of graph representation learning. The main objective of this approach is to
generate subject-level (i.e., graph-level) vector representations that bring
together subjects sharing the same label while separating those with different
labels. These connectome embeddings are derived from a graph neural network
Encoder-Decoder architecture, which jointly considers structural and functional
connectivity. By leveraging data augmentation techniques, the proposed
framework achieves state-of-the-art performance in a gender classification task
using Human Connectome Project data. More broadly, our connectome-centric
methodological advances support the promising prospect of using GSP to discover
more about brain function, with potential impact on understanding heterogeneity
in neurodegeneration for precision medicine and diagnosis.
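
A common formulation of the supervised contrastive objective over graph-level
embeddings is sketched below; the paper's encoder and augmentations are
omitted, and the temperature value is an assumption.

```python
# Supervised contrastive loss over connectome embeddings: pull together
# embeddings sharing a label, push apart the rest. A standard formulation,
# not necessarily the paper's exact variant.
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """z: (N, d) graph-level embeddings; labels: (N,) subject labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                      # pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float('-inf'))          # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # Mean log-probability of positives, for anchors that have positives.
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.any(1)].mean()
```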
|
2502.05110
|
ApplE: An Applied Ethics Ontology with Event Context
|
cs.CY cs.AI
|
Applied ethics is ubiquitous in most domains, requiring much deliberation due
to its philosophical nature. Varying views often lead to conflicting courses of
action where ethical dilemmas become challenging to resolve. Although many
factors contribute to such a decision, the major driving forces can be
discretized and thus simplified to provide an indicative answer. Knowledge
representation and reasoning offer a way to explicitly translate abstract
ethical concepts into applicable principles within the context of an event. To
achieve this, we propose ApplE, an Applied Ethics ontology that captures
philosophical theory and event context to holistically describe the morality of
an action. The development process adheres to a modified version of the
Simplified Agile Methodology for Ontology Development (SAMOD) and utilizes
standard design and publication practices. Using ApplE, we model a use case
from the bioethics domain that demonstrates our ontology's social and
scientific value. Apart from the ontological reasoning and quality checks,
ApplE is also evaluated using the three-fold testing process of SAMOD. ApplE
follows FAIR principles and aims to be a viable resource for applied ethicists
and ontology engineers.
|
2502.05111
|
Flexible and Efficient Grammar-Constrained Decoding
|
cs.CL cs.AI
|
Large Language Models (LLMs) are often asked to generate structured outputs
that obey precise syntactic rules, such as code snippets or formatted data.
Grammar-constrained decoding (GCD) can guarantee that LLM outputs match such
rules by masking out tokens that would provably lead to outputs outside a
specified context-free grammar (CFG). To guarantee soundness, GCD algorithms
have to compute how a given LLM subword tokenizer can align with the tokens
used by a given context-free grammar, and compute token masks based on this
information. Doing so efficiently is challenging, and existing GCD algorithms
require tens of minutes to preprocess common grammars. We present a new GCD
algorithm together with an implementation that offers 17.71x faster offline
preprocessing than existing approaches while preserving state-of-the-art
efficiency in online mask computation.
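
The online half of GCD reduces to masking illegal tokens before sampling, as in
the sketch below; computing `legal_token_mask` from the tokenizer/CFG alignment
is the expensive preprocessing step the paper accelerates, and is abstracted
away here.

```python
# Online step of grammar-constrained decoding: given a precomputed boolean
# mask of grammar-legal next tokens, forbid everything else before sampling.
import torch

def constrained_next_token(logits, legal_token_mask):
    """logits: (vocab,) model scores; legal_token_mask: (vocab,) bool."""
    logits = logits.masked_fill(~legal_token_mask, float('-inf'))
    return torch.distributions.Categorical(logits=logits).sample()
```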
|
2502.05113
|
GiesKaNe: Bridging Past and Present in Grammatical Theory and Practical
Application
|
cs.CL
|
This article explores the requirements for corpus compilation within the
GiesKaNe project (University of Giessen and Kassel, Syntactic Basic Structures
of New High German). The project is defined by three central characteristics:
it is a reference corpus, a historical corpus, and a syntactically deeply
annotated treebank. As a historical corpus, GiesKaNe aims to establish
connections with both historical and contemporary corpora, ensuring its
relevance across temporal and linguistic contexts. The compilation process
strikes a balance between innovation and adherence to standards, addressing
both internal project goals and the broader interests of the research
community. The methodological complexity of such a project is managed through a
complementary interplay of human expertise and machine-assisted processes. The
article discusses foundational topics such as tokenization, normalization,
sentence definition, tagging, parsing, and inter-annotator agreement, alongside
advanced considerations. These include comparisons between grammatical models,
annotation schemas, and established de facto annotation standards as well as
the integration of human and machine collaboration. Notably, a novel method for
machine-assisted classification of texts along the continuum of conceptual
orality and literacy is proposed, offering new perspectives on text selection.
Furthermore, the article introduces an approach to deriving de facto standard
annotations from existing ones, mediating between standardization and
innovation. In the course of describing the workflow the article demonstrates
that even ambitious projects like GiesKaNe can be effectively implemented using
existing research infrastructure, requiring no specialized annotation tools.
Instead, it is shown that the workflow can be based on the strategic use of a
simple spreadsheet and integrates the capabilities of the existing
infrastructure.
|
2502.05114
|
SpecTUS: Spectral Translator for Unknown Structures annotation from
EI-MS spectra
|
cs.LG physics.data-an
|
Compound identification and structure annotation from mass spectra is a
well-established task widely applied in drug detection, criminal forensics,
small molecule biomarker discovery and chemical engineering.
We propose SpecTUS: Spectral Translator for Unknown Structures, a deep neural
model that addresses the task of structural annotation of small molecules from
low-resolution gas chromatography electron ionization mass spectra (GC-EI-MS).
Our model analyzes the spectra in a \textit{de novo} manner -- a direct
translation from the spectra into 2D-structural representation. Our approach is
particularly useful for analyzing compounds unavailable in spectral libraries.
In a rigorous evaluation of our model on the novel structure annotation task
across different libraries, we outperformed standard database search techniques
by a wide margin. On a held-out testing set, including \numprint{28267} spectra
from the NIST database, we show that our model's single suggestion perfectly
reconstructs 43\% of the subset's compounds. This single suggestion is strictly
better than the candidate from the database hybrid search (a common method
among practitioners) in 76\% of cases. In a still affordable scenario of 10
suggestions, perfect reconstruction is achieved in 65\% of cases, and 84\% are
better than the hybrid search.
|
2502.05115
|
"It Felt Like I Was Left in the Dark": Exploring Information Needs and
Design Opportunities for Family Caregivers of Older Adult Patients in
Critical Care Settings
|
cs.HC cs.AI
|
Older adult patients constitute a rapidly growing subgroup of Intensive Care
Unit (ICU) patients. In these situations, their family caregivers are expected
to represent the unconscious patients to access and interpret patients' medical
information. However, caregivers currently have to rely on overloaded
clinicians for information updates and typically lack the health literacy to
understand complex medical information. Our project aims to explore the
information needs of caregivers of ICU older adult patients, from which we can
propose design opportunities to guide future AI systems. The project begins
with formative interviews with 11 caregivers to identify their challenges in
accessing and interpreting medical information. From these findings, we then
synthesize design requirements and propose an AI system prototype to cope with
caregivers' challenges. The system prototype has two key features: a timeline
visualization to show the AI extracted and summarized older adult patients' key
medical events; and an LLM-based chatbot to provide context-aware informational
support. We conclude our paper by reporting on the follow-up user evaluation of
the system and discussing future AI-based systems for ICU caregivers of older
adults.
|
2502.05116
|
Optimizing Wireless Resource Management and Synchronization in Digital
Twin Networks
|
cs.NI cs.LG cs.SY eess.SY
|
In this paper, we investigate an accurate synchronization between a physical
network and its digital network twin (DNT), which serves as a virtual
representation of the physical network. The considered network includes a set
of base stations (BSs) that must allocate their limited spectrum resources to
serve a set of users while also transmitting their partially observed physical
network information to a cloud server to generate the DNT. Since the DNT can
predict the physical network status based on its historical status, the BSs may
not need to send their physical network information at each time slot, allowing
them to conserve spectrum resources to serve the users. However, if the DNT
does not receive the physical network information of the BSs over a large time
period, the DNT's accuracy in representing the physical network may degrade. To
this end, each BS must decide when to send the physical network information to
the cloud server to update the DNT, while also determining the spectrum
resource allocation policy for both DNT synchronization and serving the users.
We formulate this resource allocation task as an optimization problem, aiming
to maximize the total data rate of all users while minimizing the
asynchronization between the physical network and the DNT. To address this
problem, we propose a method based on gated recurrent units (GRUs) and a value
decomposition network (VDN). Simulation results show that our GRU- and
VDN-based algorithm improves the weighted sum of data rates and the similarity
between the status of the DNT and the physical network by up to 28.96%,
compared to a baseline method combining a GRU with independent Q-learning.
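
The value-decomposition component can be sketched as follows: each BS runs a
GRU-based agent over its local observations, and the joint action-value is the
sum of the per-agent values. This is the standard VDN formulation; the reward
design and synchronization-specific details are omitted.

```python
# Sketch of value decomposition (standard VDN, not the paper's exact code):
# the joint action-value is the sum of per-BS utilities, each produced by a
# GRU-based agent over its local observation history.
import torch
import torch.nn as nn

class GRUAgent(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs, h):
        h = self.gru(obs, h)
        return self.q_head(h), h

def vdn_joint_q(per_agent_qs, actions):
    """per_agent_qs: list of (batch, n_actions); actions: (batch, n_agents)."""
    chosen = [q.gather(1, actions[:, i:i+1]) for i, q in enumerate(per_agent_qs)]
    return torch.cat(chosen, dim=1).sum(dim=1)  # Q_tot = sum_i Q_i
```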
|
2502.05118
|
Use of Winsome Robots for Understanding Human Feedback (UWU)
|
cs.RO cs.HC
|
As social robots become more common, many have adopted cute aesthetics aiming
to enhance user comfort and acceptance. However, the effect of this aesthetic
choice on human feedback in reinforcement learning scenarios remains unclear.
Previous research has shown that humans tend to give more positive than
negative feedback, which can cause failure to reach optimal robot behavior. We
hypothesize that this positive bias may be exacerbated by the robot's level of
perceived cuteness. To investigate, we conducted a user study where
participants critiqued a robot's trajectories while it performed a task. We then
analyzed the impact of the robot's aesthetic cuteness on the type of
participant feedback. Our results suggest that there is a shift in the ratio of
positive to negative feedback when perceived cuteness changes. In light of
this, we experiment with a stochastic version of TAMER which adapts based on
the user's level of positive feedback bias to mitigate these effects.
|
2502.05119
|
Investigating the impact of kernel harmonization and deformable
registration on inspiratory and expiratory chest CT images for people with
COPD
|
eess.IV cs.CV
|
Paired inspiratory-expiratory CT scans enable the quantification of gas
trapping due to small airway disease and emphysema by analyzing lung tissue
motion in COPD patients. Deformable image registration of these scans assesses
regional lung volumetric changes. However, variations in reconstruction kernels
between paired scans introduce errors in quantitative analysis. This work
proposes a two-stage pipeline to harmonize reconstruction kernels and perform
deformable image registration using data acquired from the COPDGene study. We
use a cycle generative adversarial network (GAN) to harmonize inspiratory scans
reconstructed with a hard kernel (BONE) to match expiratory scans reconstructed
with a soft kernel (STANDARD). We then deformably register the expiratory scans
to inspiratory scans. We validate harmonization by measuring emphysema using a
publicly available segmentation algorithm before and after harmonization.
Results show harmonization significantly reduces emphysema measurement
inconsistencies, decreasing median emphysema scores from 10.479% to 3.039%,
with a reference median score of 1.305% from the STANDARD kernel as the target.
Registration accuracy is evaluated via Dice overlap between emphysema regions
on inspiratory, expiratory, and deformed images. The Dice coefficient between
inspiratory emphysema masks and deformably registered emphysema masks increases
significantly across registration stages (p<0.001). Additionally, we
demonstrate that deformable registration is robust to kernel variations.
|
2502.05121
|
Refining Integration-by-Parts Reduction of Feynman Integrals with
Machine Learning
|
hep-th cs.LG hep-ph
|
Integration-by-parts reductions of Feynman integrals pose a frequent
bottleneck in state-of-the-art calculations in theoretical particle and
gravitational-wave physics, and rely on heuristic approaches for selecting
integration-by-parts identities, whose quality heavily influences the
performance. In this paper, we investigate the use of machine-learning
techniques to find improved heuristics. We use funsearch, a genetic programming
variant based on code generation by a Large Language Model, in order to explore
possible approaches, then use strongly typed genetic programming to zero in on
useful solutions. Both approaches manage to re-discover the state-of-the-art
heuristics recently incorporated into integration-by-parts solvers, and in one
example find a small advance on this state of the art.
|
2502.05122
|
Distinguishing Cause from Effect with Causal Velocity Models
|
stat.ML cs.LG stat.ME
|
Bivariate structural causal models (SCM) are often used to infer causal
direction by examining their goodness-of-fit under restricted model classes. In
this paper, we describe a parametrization of bivariate SCMs in terms of a
causal velocity by viewing the cause variable as time in a dynamical system.
The velocity implicitly defines counterfactual curves via the solution of
initial value problems where the observation specifies the initial condition.
Using tools from measure transport, we obtain a unique correspondence between
SCMs and the score function of the generated distribution via its causal
velocity. Based on this, we derive an objective function that directly
regresses the velocity against the score function, the latter of which can be
estimated non-parametrically from observational data. We use this to develop a
method for bivariate causal discovery that extends beyond known model classes
such as additive or location-scale noise, and that requires no assumptions on
the noise distributions. When the score is estimated well, the objective is
also useful for detecting model non-identifiability and misspecification. We
present positive results in simulation and benchmark experiments where many
existing methods fail, and perform ablation studies to examine the method's
sensitivity to accurate score estimation.
|
2502.05127
|
Self-supervised Conformal Prediction for Uncertainty Quantification in
Imaging Problems
|
cs.CV stat.ME
|
Most image restoration problems are ill-conditioned or ill-posed and hence
involve significant uncertainty. Quantifying this uncertainty is crucial for
reliably interpreting experimental results, particularly when reconstructed
images inform critical decisions and science. However, most existing image
restoration methods either fail to quantify uncertainty or provide estimates
that are highly inaccurate. Conformal prediction has recently emerged as a
flexible framework to equip any estimator with uncertainty quantification
capabilities that, by construction, have nearly exact marginal coverage. To
achieve this, conformal prediction relies on abundant ground truth data for
calibration. However, in image restoration problems, reliable ground truth data
is often expensive or impossible to acquire. Also, reliance on ground truth
data can introduce large biases in situations of distribution shift between
calibration and deployment. This paper seeks to develop a more robust approach
to conformal prediction for image restoration problems by proposing a
self-supervised conformal prediction method that leverages Stein's Unbiased
Risk Estimator (SURE) to calibrate itself directly from the observed noisy
measurements, bypassing the need for ground truth. The method is suitable for
any linear imaging inverse problem that is ill-conditioned, and it is
especially powerful when used with modern self-supervised image restoration
techniques that can also be trained directly from measurement data. The
proposed approach is demonstrated through numerical experiments on image
denoising and deblurring, where it delivers results that are remarkably
accurate and comparable to those obtained by supervised conformal prediction
with ground truth data.
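
For reference, a one-probe Monte Carlo form of SURE for a denoiser under
Gaussian noise of known variance is sketched below; the conformal calibration
built on top of it is the paper's contribution and is not shown.

```python
# Stein's Unbiased Risk Estimator for a denoiser f under Gaussian noise of
# known variance sigma^2, with the divergence term estimated by a single
# random +/-1 probe. Illustrative, not the paper's full pipeline.
import torch

def sure(f, y, sigma, eps=1e-3):
    """y: noisy image (any-shape tensor); f: denoiser; returns SURE value."""
    n = y.numel()
    fy = f(y)
    b = torch.randint(0, 2, y.shape, device=y.device).float() * 2 - 1
    div = (b * (f(y + eps * b) - fy)).sum() / eps   # ~ divergence of f at y
    return ((fy - y) ** 2).sum() / n - sigma**2 + (2 * sigma**2 / n) * div
```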
|
2502.05129
|
Counting Fish with Temporal Representations of Sonar Video
|
cs.CV
|
Accurate estimates of salmon escapement, the number of fish migrating upstream
to spawn, are key data for conservation and fishery management.
Existing methods for salmon counting using high-resolution imaging sonar
hardware are non-invasive and compatible with computer vision processing. Prior
work in this area has utilized object detection and tracking based methods for
automated salmon counting. However, these techniques remain inaccessible to
many sonar deployment sites due to limited compute and connectivity in the
field. We propose an alternative lightweight computer vision method for fish
counting based on analyzing echograms - temporal representations that compress
several hundred frames of imaging sonar video into a single image. We predict
upstream and downstream counts within 200-frame time windows directly from
echograms using a ResNet-18 model, and propose a set of domain-specific image
augmentations and a weakly-supervised training protocol to further improve
results. We achieve a count error of 23% on representative data from the Kenai
River in Alaska, demonstrating the feasibility of our approach.
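
A minimal version of the counting model can be written directly with
torchvision, as below; the single-channel input and the image size are
assumptions.

```python
# Sketch of the counting model: a ResNet-18 whose head is replaced to regress
# upstream and downstream counts from one echogram window. Assumes torchvision
# is available and echograms are stored as single-channel images.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # [upstream, downstream] counts

echogram = torch.randn(8, 1, 224, 224)  # batch of 200-frame echogram windows
counts = model(echogram)                # (8, 2) predicted counts
```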
|
2502.05130
|
Latent Swap Joint Diffusion for Long-Form Audio Generation
|
cs.SD cs.AI cs.CV cs.MM eess.AS
|
Previous work on long-form audio generation using global-view diffusion or
iterative generation demands significant training or inference costs. While
recent advancements in multi-view joint diffusion for panoramic generation
provide an efficient option, they struggle with spectrum generation, suffering severe
overlap distortions and high cross-view consistency costs. We initially explore
this phenomenon through the connectivity inheritance of latent maps and uncover
that averaging operations excessively smooth the high-frequency components of
the latent map. To address these issues, we propose Swap Forward (SaFa), a
frame-level latent swap framework that synchronizes multiple diffusions to
produce a globally coherent long audio with more spectrum details in a
forward-only manner. At its core, the bidirectional Self-Loop Latent Swap is
applied between adjacent views, leveraging stepwise diffusion trajectory to
adaptively enhance high-frequency components without disrupting low-frequency
components. Furthermore, to ensure cross-view consistency, the unidirectional
Reference-Guided Latent Swap is applied between the reference and the
non-overlap regions of each subview during the early stages, providing
centralized trajectory guidance. Quantitative and qualitative experiments
demonstrate that SaFa significantly outperforms existing joint diffusion
methods and even training-based long audio generation models. Moreover, we find
that it also adapts well to panoramic generation, achieving comparable
state-of-the-art performance with greater efficiency and model
generalizability. Project page is available at https://swapforward.github.io/.
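
One plausible instantiation of a frame-level bidirectional swap between
adjacent latent views is sketched below; the actual swap schedule and which
frames are exchanged are the paper's design choices, so treat this purely as an
illustration of the operation.

```python
# Illustrative Self-Loop Latent Swap: in the overlap between adjacent latent
# views, exchange alternating frames bidirectionally at a diffusion step, so
# detail propagates without the smoothing caused by averaging. The alternating
# pattern here is an assumption for demonstration.
import torch

def self_loop_latent_swap(lat_a, lat_b, overlap):
    """lat_a, lat_b: (C, T) latent maps of adjacent views; overlap: # frames."""
    a_tail = lat_a[:, -overlap:].clone()
    b_head = lat_b[:, :overlap].clone()
    even = torch.arange(overlap) % 2 == 0
    lat_a[:, -overlap:][:, even] = b_head[:, even]
    lat_b[:, :overlap][:, even] = a_tail[:, even]
    return lat_a, lat_b
```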
|
2502.05133
|
Data-Parallel Neural Network Training via Nonlinearly Preconditioned
Trust-Region Method
|
cs.LG cs.NA math.NA
|
Parallel training methods are increasingly relevant in machine learning (ML)
due to the continuing growth in model and dataset sizes. We propose a variant
of the Additively Preconditioned Trust-Region Strategy (APTS) for training deep
neural networks (DNNs). The proposed APTS method utilizes a data-parallel
approach to construct a nonlinear preconditioner employed in the nonlinear
optimization strategy. In contrast to the common employment of Stochastic
Gradient Descent (SGD) and Adaptive Moment Estimation (Adam), which are both
variants of gradient descent (GD) algorithms, the APTS method implicitly
adjusts the step sizes in each iteration, thereby removing the need for costly
hyperparameter tuning. We demonstrate the performance of the proposed APTS
variant using the MNIST and CIFAR-10 datasets. The results obtained indicate
that the APTS variant proposed here achieves comparable validation accuracy to
SGD and Adam, all while allowing for parallel training and obviating the need
for expensive hyperparameter tuning.
|
2502.05134
|
Information-Theoretic Guarantees for Recovering Low-Rank Tensors from
Symmetric Rank-One Measurements
|
math.ST cs.IT math.IT math.PR stat.ML stat.TH
|
In this paper, we investigate the sample complexity of recovering tensors
with low symmetric rank from symmetric rank-one measurements. This setting is
particularly motivated by the study of higher-order interactions and the
analysis of two-layer neural networks with polynomial activations (polynomial
networks). Using a covering numbers argument, we analyze the performance of the
symmetric rank minimization program and establish near-optimal sample
complexity bounds when the underlying distribution is log-concave. Our
measurement model involves random symmetric rank-one tensors, which lead to
involved probability calculations. To address these challenges, we employ the
Carbery-Wright inequality, a powerful tool for studying anti-concentration
properties of random polynomials, and leverage orthogonal polynomials.
Additionally, we provide a sample complexity lower bound based on Fano's
inequality, and discuss broader implications of our results for two-layer
polynomial networks.
|
2502.05139
|
Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for
Speech, Music, and Sound
|
cs.SD cs.LG eess.AS
|
The quantification of audio aesthetics remains a complex challenge in audio
processing, primarily due to its subjective nature, which is influenced by
human perception and cultural context. Traditional methods often depend on
human listeners for evaluation, leading to inconsistencies and high resource
demands. This paper addresses the growing need for automated systems capable of
predicting audio aesthetics without human intervention. Such systems are
crucial for applications like data filtering, pseudo-labeling large datasets,
and evaluating generative audio models, especially as these models become more
sophisticated. In this work, we introduce a novel approach to audio aesthetic
evaluation by proposing new annotation guidelines that decompose human
listening perspectives into four distinct axes. We develop and train
no-reference, per-item prediction models that offer a more nuanced assessment
of audio quality. Our models are evaluated against human mean opinion scores
(MOS) and existing methods, demonstrating comparable or superior performance.
This research not only advances the field of audio aesthetics but also provides
open-source models and datasets to facilitate future work and benchmarking. We
release our code and pre-trained model at:
https://github.com/facebookresearch/audiobox-aesthetics
|
2502.05142
|
Chest X-ray Foundation Model with Global and Local Representations
Integration
|
eess.IV cs.CV
|
Chest X-ray (CXR) is the most frequently ordered imaging test, supporting
diverse clinical tasks from thoracic disease detection to postoperative
monitoring. However, task-specific classification models are limited in scope,
require costly labeled data, and lack generalizability to out-of-distribution
datasets. To address these challenges, we introduce CheXFound, a
self-supervised vision foundation model that learns robust CXR representations
and generalizes effectively across a wide range of downstream tasks. We
pretrain CheXFound on a curated CXR-1M dataset, comprising over one million
unique CXRs from publicly available sources. We propose a Global and Local
Representations Integration (GLoRI) module for downstream adaptations, by
incorporating disease-specific local features with global image features for
enhanced performance in multilabel classification. Our experimental results
show that CheXFound outperforms state-of-the-art models in classifying 40
disease findings across different prevalence levels on the CXR-LT 24 dataset
and exhibits superior label efficiency on downstream tasks with limited
training data. Additionally, CheXFound achieved significant improvements on new
tasks with out-of-distribution datasets, including opportunistic cardiovascular
disease risk estimation and mortality prediction. These results highlight
CheXFound's strong generalization capabilities, enabling diverse adaptations
with improved label efficiency. The project source code is publicly available
at https://github.com/RPIDIAL/CheXFound.
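
The global-local integration idea can be sketched as per-disease query tokens
attending over patch features, with the pooled local evidence concatenated to
the global feature for each label's classifier; the shapes and module choices
below are assumptions, not the released GLoRI code.

```python
# Sketch of a global-local classification head: learnable per-disease queries
# attend over patch tokens to pool disease-specific local features, which are
# concatenated with the global image feature for multilabel prediction.
import torch
import torch.nn as nn

class GlobalLocalHead(nn.Module):
    def __init__(self, d_model, n_diseases):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_diseases, d_model))
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(2 * d_model, 1)

    def forward(self, patch_tokens, global_token):
        # patch_tokens: (B, N, d); global_token: (B, d)
        B = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)      # (B, D, d)
        local, _ = self.attn(q, patch_tokens, patch_tokens)  # local evidence
        g = global_token.unsqueeze(1).expand_as(local)
        return self.classifier(torch.cat([local, g], -1)).squeeze(-1)  # (B, D)
```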
|
2502.05145
|
From Restless to Contextual: A Thresholding Bandit Approach to Improve
Finite-horizon Performance
|
cs.LG
|
Online restless bandits extend classic contextual bandits by incorporating
state transitions and budget constraints, representing each agent as a Markov
Decision Process (MDP). This framework is crucial for finite-horizon strategic
resource allocation, optimizing limited costly interventions for long-term
benefits. However, learning the underlying MDP for each agent poses a major
challenge in finite-horizon settings. To facilitate learning, we reformulate
the problem as a scalable budgeted thresholding contextual bandit problem,
carefully integrating the state transitions into the reward design and focusing
on identifying agents with action benefits exceeding a threshold. We establish
the optimality of an oracle greedy solution in a simple two-state setting, and
propose an algorithm that achieves minimax optimal constant regret in the
online multi-state setting with heterogeneous agents and knowledge of outcomes
under no intervention. We numerically show that our algorithm outperforms
existing online restless bandit methods, offering significant improvements in
finite-horizon performance.
|
2502.05147
|
LP-DETR: Layer-wise Progressive Relations for Object Detection
|
cs.CV cs.AI
|
This paper presents LP-DETR (Layer-wise Progressive DETR), a novel approach
that enhances DETR-based object detection through multi-scale relation
modeling. Our method introduces learnable spatial relationships between object
queries through a relation-aware self-attention mechanism, which adaptively
learns to balance different scales of relations (local, medium and global)
across decoder layers. This progressive design enables the model to effectively
capture evolving spatial dependencies throughout the detection pipeline.
Extensive experiments on COCO 2017 dataset demonstrate that our method improves
both convergence speed and detection accuracy compared to the standard
self-attention module. The proposed method achieves competitive results,
reaching 52.3\% AP with 12 epochs and 52.5\% AP with 24 epochs using a
ResNet-50 backbone, and further improving to 58.0\% AP with a Swin-L backbone.
Furthermore,
our analysis reveals an interesting pattern: the model naturally learns to
prioritize local spatial relations in early decoder layers while gradually
shifting attention to broader contexts in deeper layers, providing valuable
insights for future research in object detection.
|
2502.05148
|
An Annotated Reading of 'The Singer of Tales' in the LLM Era
|
cs.CY cs.CL
|
The Parry-Lord oral-formulaic theory was a breakthrough in understanding how
oral narrative poetry is learned, composed, and transmitted by illiterate
bards. In this paper, we provide an annotated reading of the mechanism
underlying this theory from the lens of large language models (LLMs) and
generative artificial intelligence (AI). We point out the similarities and
differences between oral composition and LLM generation, and comment on the
implications for society and AI policy.
|
2502.05150
|
CodeSCM: Causal Analysis for Multi-Modal Code Generation
|
cs.CL
|
In this paper, we propose CodeSCM, a Structural Causal Model (SCM) for
analyzing multi-modal code generation using large language models (LLMs). By
applying interventions to CodeSCM, we measure the causal effects of different
prompt modalities, such as natural language, code, and input-output examples,
on the model. CodeSCM introduces latent mediator variables to separate the code
and natural language semantics of a multi-modal code generation prompt. Using
the principles of Causal Mediation Analysis on these mediators, we quantify
direct effects representing the model's spurious leanings. We find that, in
addition to natural language instructions, input-output examples significantly
influence code generation.
|
2502.05151
|
Transforming Science with Large Language Models: A Survey on AI-assisted
Scientific Discovery, Experimentation, Content Generation, and Evaluation
|
cs.CL cs.AI cs.CV cs.LG
|
With the advent of large multimodal language models, science is now on the
threshold of an AI-based technological transformation. Recently, a plethora of
new AI models and tools has been proposed, promising to empower researchers and
academics worldwide to conduct their research more effectively and efficiently.
This includes all aspects of the research cycle, especially (1) searching for
relevant literature; (2) generating research ideas and conducting
experimentation; generating (3) text-based and (4) multimodal content (e.g.,
scientific figures and diagrams); and (5) AI-based automatic peer review. In
this survey, we provide an in-depth overview of these exciting recent
developments, which promise to fundamentally alter the scientific research
process for good. Our survey covers the five aspects outlined above, indicating
relevant datasets, methods and results (including evaluation) as well as
limitations and scope for future research. Ethical concerns regarding
shortcomings of these tools and potential for misuse (fake science, plagiarism,
harms to research integrity) take a particularly prominent place in our
discussion. We hope that our survey will not only become a reference guide for
newcomers to the field but also a catalyst for new AI-based initiatives in the
area of "AI4Science".
|
2502.05153
|
Hummingbird: High Fidelity Image Generation via Multimodal Context
Alignment
|
cs.CV
|
While diffusion models are powerful in generating high-quality, diverse
synthetic data for object-centric tasks, existing methods struggle with
scene-aware tasks such as Visual Question Answering (VQA) and Human-Object
Interaction (HOI) Reasoning, where it is critical to preserve scene attributes
in generated images consistent with a multimodal context, i.e., a reference
image with an accompanying text guidance query. To address this, we introduce
Hummingbird, the first diffusion-based image generator which, given a
multimodal context, generates highly diverse images w.r.t. the reference image
while ensuring high fidelity by accurately preserving scene attributes, such as
object interactions and spatial relationships from the text guidance.
Hummingbird employs a novel Multimodal Context Evaluator that simultaneously
optimizes our formulated Global Semantic and Fine-grained Consistency Rewards
to ensure generated images preserve the scene attributes of reference images in
relation to the text guidance while maintaining diversity. As the first model
to address the task of maintaining both diversity and fidelity given a
multimodal context, we introduce a new benchmark formulation incorporating MME
Perception and Bongard HOI datasets. Benchmark experiments show Hummingbird
outperforms all existing methods by achieving superior fidelity while
maintaining diversity, validating Hummingbird's potential as a robust
multimodal context-aligned image generator in complex visual tasks.
|
2502.05155
|
Deep Dynamic Probabilistic Canonical Correlation Analysis
|
cs.LG stat.ML
|
This paper presents Deep Dynamic Probabilistic Canonical Correlation Analysis
(D2PCCA), a model that integrates deep learning with probabilistic modeling to
analyze nonlinear dynamical systems. Building on the probabilistic extensions
of Canonical Correlation Analysis (CCA), D2PCCA captures nonlinear latent
dynamics and supports enhancements such as KL annealing for improved
convergence and normalizing flows for a more flexible posterior approximation.
D2PCCA naturally extends to multiple observed variables, making it a versatile
tool for encoding prior knowledge about sequential datasets and providing a
probabilistic understanding of the system's dynamics. Experimental validation
on real financial datasets demonstrates the effectiveness of D2PCCA and its
extensions in capturing latent dynamics.
|
2502.05157
|
Efficient distributional regression trees learning algorithms for
calibrated non-parametric probabilistic forecasts
|
cs.LG cs.DS
|
The perspective of developing trustworthy AI for critical applications in
science and engineering requires machine learning techniques that are capable
of estimating their own uncertainty. In the context of regression, instead of
estimating a conditional mean, this can be achieved by producing a predictive
interval for the output, or to even learn a model of the conditional
probability $p(y|x)$ of an output $y$ given input features $x$. While this can
be done under parametric assumptions with, e.g., generalized linear models,
such assumptions are typically too strong, and non-parametric models offer flexible
alternatives. In particular, for scalar outputs, learning directly a model of
the conditional cumulative distribution function of $y$ given $x$ can lead to
more precise probabilistic estimates, and the use of proper scoring rules such
as the weighted interval score (WIS) and the continuous ranked probability
score (CRPS) leads to better coverage and calibration properties.
This paper introduces novel algorithms for learning probabilistic regression
trees for the WIS or CRPS loss functions. These algorithms are made
computationally efficient thanks to an appropriate use of known data
structures, namely min-max heaps, weight-balanced binary trees, and Fenwick
trees. Through
numerical experiments, we demonstrate that the performance of our methods is
competitive with alternative approaches. Additionally, our methods benefit from
the inherent interpretability and explainability of trees. As a by-product, we
show how our trees can be used in the context of conformal prediction and
explain why they are particularly well-suited for achieving group-conditional
coverage guarantees.
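
Of the data structures mentioned, the Fenwick (binary indexed) tree is the
simplest to illustrate: it maintains prefix sums under point updates in
O(log n), the primitive that makes repeated evaluation of CRPS/WIS-style split
criteria cheap. A minimal implementation (how the paper deploys it inside the
tree-learning loop is not shown here):

```python
# Fenwick (binary indexed) tree: point updates and prefix sums in O(log n).
class Fenwick:
    def __init__(self, n):
        self.t = [0.0] * (n + 1)

    def add(self, i, delta):          # 1-indexed point update
        while i < len(self.t):
            self.t[i] += delta
            i += i & (-i)

    def prefix_sum(self, i):          # sum of positions 1..i
        s = 0.0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)
        return s
```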
|
2502.05159
|
A Lightweight Method to Disrupt Memorized Sequences in LLM
|
cs.LG cs.CL
|
Large language models (LLMs) demonstrate impressive capabilities across many
tasks yet risk reproducing copyrighted content verbatim, raising legal and
ethical concerns. Although methods like differential privacy or neuron editing
can reduce memorization, they typically require costly retraining or direct
access to model weights and may degrade performance. To address these
challenges, we propose TokenSwap, a lightweight, post-hoc approach that
replaces the probabilities of grammar-related tokens with those from a small
auxiliary model (e.g., DistilGPT-2). We run extensive experiments on
commercial-grade models such as Pythia-6.9b and LLaMA-3-8b and demonstrate that
our method effectively reduces well-known cases of memorized generation by up
to 10x with
little to no impact on downstream tasks. Our approach offers a uniquely
accessible and effective solution to users of real-world systems.
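
The swap step itself is simple to sketch: replace the large model's
probabilities on a designated set of grammar-related tokens with the auxiliary
model's, then renormalize. Vocabulary alignment and the construction of the
token set are simplified away here.

```python
# Schematic of the swap step: overwrite the large model's probabilities on
# grammar-related tokens with a small model's, then renormalize. Assumes a
# shared vocabulary; names are illustrative.
import torch

def token_swap(p_large, p_small, grammar_token_ids):
    """p_large, p_small: (vocab,) next-token distributions."""
    p = p_large.clone()
    p[grammar_token_ids] = p_small[grammar_token_ids]
    return p / p.sum()  # renormalize to a valid distribution
```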
|
2502.05163
|
DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM
Guardrails
|
cs.CL cs.LG
|
The rapid advancement of large language models (LLMs) has increased the need
for guardrail models to ensure responsible use, particularly in detecting
unsafe and illegal content. While substantial safety data exist in English,
multilingual guardrail modeling remains underexplored due to the scarcity of
open-source safety data in other languages. To address this gap, we propose a
novel two-player Reinforcement Learning (RL) framework, where a generator and a
guardrail model co-evolve adversarially to produce high-quality synthetic data
for multilingual guardrail training. We theoretically formalize this
interaction as a two-player game, proving convergence to a Nash equilibrium.
Empirical evaluations show that our model, DuoGuard, outperforms state-of-the-art
models, achieving nearly 10% improvement over LlamaGuard3 (8B) on English
benchmarks while being 4.5x faster at inference with a significantly smaller
model (0.5B). We achieve substantial advancements in multilingual safety tasks,
particularly in addressing the imbalance for lower-resource languages in a
collected real dataset. Ablation studies emphasize the critical role of
synthetic data generation in bridging the imbalance in open-source data between
English and other languages. These findings establish a scalable and efficient
approach to synthetic data generation, paving the way for improved multilingual
guardrail models to enhance LLM safety. Code, model, and data will be
open-sourced at https://github.com/yihedeng9/DuoGuard.
|
2502.05164
|
In-context denoising with one-layer transformers: connections between
attention and associative memory retrieval
|
cs.LG cond-mat.dis-nn
|
We introduce in-context denoising, a task that refines the connection between
attention-based architectures and dense associative memory (DAM) networks, also
known as modern Hopfield networks. Using a Bayesian framework, we show
theoretically and empirically that certain restricted denoising problems can be
solved optimally even by a single-layer transformer. We demonstrate that a
trained attention layer processes each denoising prompt by performing a single
gradient descent update on a context-aware DAM energy landscape, where context
tokens serve as associative memories and the query token acts as an initial
state. This one-step update yields better solutions than exact retrieval of
either a context token or a spurious local minimum, providing a concrete
example of DAM networks extending beyond the standard retrieval paradigm.
Overall, this work solidifies the link between associative memory and attention
mechanisms first identified by Ramsauer et al., and demonstrates the relevance
of associative memory models in the study of in-context learning.
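
The correspondence can be made concrete in a few lines: a softmax attention
readout over context tokens is exactly the update direction of one gradient
step on a log-sum-exp (DAM) energy. The step size and inverse temperature below
are free parameters of the illustration.

```python
# One-step dense-associative-memory update as an attention readout: the query
# moves toward a softmax-weighted combination of stored patterns, mirroring a
# single gradient-descent step on the DAM energy landscape.
import torch

def dam_one_step(query, keys, values, beta=1.0, lr=1.0):
    """query: (d,); keys, values: (n, d) context tokens as stored memories."""
    weights = torch.softmax(beta * keys @ query, dim=0)  # attention weights
    target = weights @ values                            # attention readout
    return query + lr * (target - query)                 # one-step denoising
```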
|
2502.05165
|
Multitwine: Multi-Object Compositing with Text and Layout Control
|
cs.CV
|
We introduce the first generative model capable of simultaneous multi-object
compositing, guided by both text and layout. Our model allows for the addition
of multiple objects within a scene, capturing a range of interactions from
simple positional relations (e.g., next to, in front of) to complex actions
requiring reposing (e.g., hugging, playing guitar). When an interaction implies
additional props, like `taking a selfie', our model autonomously generates
these supporting objects. By jointly training for compositing and
subject-driven generation, also known as customization, we achieve a more
balanced integration of textual and visual inputs for text-driven object
compositing. As a result, we obtain a versatile model with state-of-the-art
performance in both tasks. We further present a data generation pipeline
leveraging visual and language models to effortlessly synthesize multimodal,
aligned training data.
|
2502.05167
|
NoLiMa: Long-Context Evaluation Beyond Literal Matching
|
cs.CL
|
Recent large language models (LLMs) support long contexts ranging from 128K
to 1M tokens. A popular method for evaluating these capabilities is the
needle-in-a-haystack (NIAH) test, which involves retrieving a "needle"
(relevant information) from a "haystack" (long irrelevant context). Extensions
of this approach include increasing distractors, fact chaining, and in-context
reasoning. However, in these benchmarks, models can exploit existing literal
matches between the needle and haystack to simplify the task. To address this,
we introduce NoLiMa, a benchmark extending NIAH with a carefully designed
needle set, where questions and needles have minimal lexical overlap, requiring
models to infer latent associations to locate the needle within the haystack.
We evaluate 12 popular LLMs that claim to support contexts of at least 128K
tokens. While they perform well in short contexts (<1K), performance degrades
significantly as context length increases. At 32K, for instance, 10 models drop
below 50% of their strong short-length baselines. Even GPT-4o, one of the
top-performing exceptions, experiences a reduction from an almost-perfect
baseline of 99.3% to 69.7%. Our analysis suggests these declines stem from the
increased difficulty the attention mechanism faces in longer contexts when
literal matches are absent, making it harder to retrieve relevant information.
|
2502.05169
|
Flopping for FLOPs: Leveraging equivariance for computational efficiency
|
cs.CV cs.LG
|
Incorporating geometric invariance into neural networks enhances parameter
efficiency but typically increases computational costs. This paper introduces
new equivariant neural networks that preserve symmetry while maintaining a
comparable number of floating-point operations (FLOPs) per parameter to
standard non-equivariant networks. We focus on horizontal mirroring (flopping)
invariance, common in many computer vision tasks. The main idea is to
parametrize the feature spaces in terms of mirror-symmetric and
mirror-antisymmetric features, i.e., irreps of the flopping group. This
decomposes the linear layers to be block-diagonal, requiring half the number of
FLOPs. Our approach reduces both FLOPs and wall-clock time, providing a
practical solution for efficient, scalable symmetry-aware architectures.
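
A minimal sketch of the irrep parametrization: features are split into
mirror-symmetric and mirror-antisymmetric channels, and an equivariant linear
layer acts block-diagonally on them, halving the FLOPs of a dense layer at the
same width. The bias is dropped on the antisymmetric block since a bias there
would break the sign-flip symmetry.

```python
# Block-diagonal linear layer over flopping-group irreps: symmetric features
# map to symmetric, antisymmetric to antisymmetric, so the map commutes with
# horizontal mirroring (which fixes f_sym and negates f_anti).
import torch
import torch.nn as nn

class FloppingLinear(nn.Module):
    def __init__(self, half_dim_in, half_dim_out):
        super().__init__()
        self.w_sym = nn.Linear(half_dim_in, half_dim_out)               # s -> s
        self.w_anti = nn.Linear(half_dim_in, half_dim_out, bias=False)  # a -> a

    def forward(self, f_sym, f_anti):
        return self.w_sym(f_sym), self.w_anti(f_anti)
```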
|
2502.05171
|
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth
Approach
|
cs.LG cs.CL
|
We study a novel language model architecture that is capable of scaling
test-time computation by implicitly reasoning in latent space. Our model works
by iterating a recurrent block, thereby unrolling to arbitrary depth at
test-time. This stands in contrast to mainstream reasoning models that scale up
compute by producing more tokens. Unlike approaches based on chain-of-thought,
our approach does not require any specialized training data, can work with
small context windows, and can capture types of reasoning that are not easily
represented in words. We scale a proof-of-concept model to 3.5 billion
parameters and 800 billion tokens. We show that the resulting model can improve
its performance on reasoning benchmarks, sometimes dramatically, up to a
computation load equivalent to 50 billion parameters.
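
Conceptually, inference looks like the sketch below: a prelude embeds the
input, a shared core block is iterated a test-time-chosen number of times in
latent space, and a coda decodes logits. Module internals and the
input-injection scheme are simplified placeholders, not the paper's exact
architecture.

```python
# Conceptual recurrent-depth inference: more core iterations = more test-time
# compute, with no extra parameters. Internals are illustrative placeholders.
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    def __init__(self, vocab, d_model=512):
        super().__init__()
        self.prelude = nn.Embedding(vocab, d_model)
        self.core = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.coda = nn.Linear(d_model, vocab)

    def forward(self, tokens, num_iterations=8):
        e = self.prelude(tokens)
        s = torch.randn_like(e)           # random initial latent state
        for _ in range(num_iterations):   # unroll to arbitrary depth
            s = self.core(s + e)          # re-inject the input each step
        return self.coda(s)
```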
|
2502.05172
|
Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient
|
cs.LG cs.AI cs.CL
|
Mixture of Experts (MoE) architectures have significantly increased
computational efficiency in both research and real-world applications of
large-scale machine learning models. However, their scalability and efficiency
under memory constraints remain relatively underexplored. In this work, we
present joint scaling laws for dense and MoE models, incorporating key factors
such as the number of active parameters, dataset size, and the number of
experts. Our findings provide a principled framework for selecting the optimal
MoE configuration under fixed memory and compute budgets. Surprisingly, we show
that MoE models can be more memory-efficient than dense models, contradicting
conventional wisdom. To derive and validate the theoretical predictions of our
scaling laws, we conduct over 280 experiments with up to 2.7B active parameters
and up to 5B total parameters. These results offer actionable insights for
designing and deploying MoE models in practical large-scale training scenarios.
|
2502.05173
|
VideoRoPE: What Makes for Good Video Rotary Position Embedding?
|
cs.CV
|
While Rotary Position Embedding (RoPE) and its variants are widely adopted
for their long-context capabilities, the extension of the 1D RoPE to video,
with its complex spatio-temporal structure, remains an open challenge. This
work first introduces a comprehensive analysis that identifies four key
characteristics essential for the effective adaptation of RoPE to video, which
have not been fully considered in prior work. As part of our analysis, we
introduce a challenging V-NIAH-D (Visual Needle-In-A-Haystack with Distractors)
task, which adds periodic distractors into V-NIAH. The V-NIAH-D task
demonstrates that previous RoPE variants, lacking appropriate temporal
dimension allocation, are easily misled by distractors. Based on our analysis,
we introduce \textbf{VideoRoPE}, with a \textit{3D structure} designed to
preserve spatio-temporal relationships. VideoRoPE features
\textit{low-frequency temporal allocation} to mitigate periodic oscillations, a
\textit{diagonal layout} to maintain spatial symmetry, and \textit{adjustable
temporal spacing} to decouple temporal and spatial indexing. VideoRoPE
consistently surpasses previous RoPE variants, across diverse downstream tasks
such as long video retrieval, video understanding, and video hallucination. Our
code will be available at
\href{https://github.com/Wiselnn570/VideoRoPE}{https://github.com/Wiselnn570/VideoRoPE}.
|
2502.05174
|
MELON: Indirect Prompt Injection Defense via Masked Re-execution and
Tool Comparison
|
cs.CR cs.AI
|
Recent research has shown that LLM agents are vulnerable to indirect
prompt injection (IPI) attacks, where malicious tasks embedded in
tool-retrieved information can redirect the agent to take unauthorized actions.
Existing defenses against IPI have significant limitations: they either require
substantial model training resources, lack effectiveness against sophisticated
attacks, or harm normal utility. We present MELON (Masked re-Execution
and TooL comparisON), a novel IPI defense. Our approach builds on the
observation that under a successful attack, the agent's next action becomes
less dependent on user tasks and more on malicious tasks. Following this, we
design MELON to detect attacks by re-executing the agent's trajectory with a
masked user prompt modified through a masking function. We identify an attack
if the actions generated in the original and masked executions are similar. We
also include three key designs to reduce the potential false positives and
false negatives. Extensive evaluation on the IPI benchmark AgentDojo
demonstrates that MELON outperforms SOTA defenses in both attack prevention and
utility preservation. Moreover, we show that combining MELON with a SOTA prompt
augmentation defense (denoted as MELON-Aug) further improves its performance.
We also conduct a detailed ablation study to validate our key designs.
|
2502.05175
|
Fillerbuster: Multi-View Scene Completion for Casual Captures
|
cs.CV cs.GR
|
We present Fillerbuster, a method that completes unknown regions of a 3D
scene by utilizing a novel large-scale multi-view latent diffusion transformer.
Casual captures are often sparse and miss surrounding content behind objects or
above the scene. Existing methods are not suitable for handling this challenge
as they focus on making the known pixels look good with sparse-view priors, or
on creating the missing sides of objects from just one or two photos. In
reality, we often have hundreds of input frames and want to complete areas that
are missing and unobserved from the input frames. Additionally, the images
often do not have known camera parameters. Our solution is to train a
generative model that can consume a large context of input frames while
generating unknown target views and recovering image poses when desired. We
show results where we complete partial captures on two existing datasets. We
also present an uncalibrated scene completion task where our unified model
predicts both poses and creates new content. Our model is the first to predict
many images and poses together for scene completion.
|
2502.05176
|
AuraFusion360: Augmented Unseen Region Alignment for Reference-based
360{\deg} Unbounded Scene Inpainting
|
cs.CV
|
Three-dimensional scene inpainting is crucial for applications from virtual
reality to architectural visualization, yet existing methods struggle with view
consistency and geometric accuracy in 360{\deg} unbounded scenes. We present
AuraFusion360, a novel reference-based method that enables high-quality object
removal and hole filling in 3D scenes represented by Gaussian Splatting. Our
approach introduces (1) depth-aware unseen mask generation for accurate
occlusion identification, (2) Adaptive Guided Depth Diffusion, a zero-shot
method for accurate initial point placement without requiring additional
training, and (3) SDEdit-based detail enhancement for multi-view coherence. We
also introduce 360-USID, the first comprehensive dataset for 360{\deg}
unbounded scene inpainting with ground truth. Extensive experiments demonstrate
that AuraFusion360 significantly outperforms existing methods, achieving
superior perceptual quality while maintaining geometric accuracy across
dramatic viewpoint changes. See our project page for video results and the
dataset at https://kkennethwu.github.io/aurafusion360/.
|
2502.05177
|
Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with
Leading Short-Context Accuracy
|
cs.CV
|
We introduce Long-VITA, a simple yet effective large multi-modal model for
long-context visual-language understanding tasks. It is adept at concurrently
processing and analyzing modalities of image, video, and text over 4K frames or
1M tokens while delivering advanced performance on short-context multi-modal
tasks. We propose an effective multi-modal training schema that starts with
large language models and proceeds through vision-language alignment, general
knowledge learning, and two sequential stages of long-sequence fine-tuning. We
further implement context-parallel distributed inference and a logits-masked
language modeling head to scale Long-VITA to infinitely long inputs of images
and texts during model inference. Regarding training data, Long-VITA is built
on a mix of 17M samples from public datasets only and demonstrates the
state-of-the-art performance on various multi-modal benchmarks, compared
against recent cutting-edge models with internal data. Long-VITA is fully
reproducible and supports both NPU and GPU platforms for training and testing.
By leveraging our inference designs, Long-VITA models achieve a remarkable 2x
prefill speedup and 4x context-length extension on a single node with 8 GPUs. We
hope Long-VITA can serve as a competitive baseline and offer valuable insights
for the open-source community in advancing long-context multi-modal
understanding.
|
2502.05178
|
QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive
Multimodal Understanding and Generation
|
cs.CV
|
We introduce Quantized Language-Image Pretraining (QLIP), a visual
tokenization method that combines state-of-the-art reconstruction quality with
state-of-the-art zero-shot image understanding. QLIP trains a
binary-spherical-quantization-based autoencoder with reconstruction and
language-image alignment objectives. We are the first to show that the two
objectives do not need to be at odds. We balance the two loss terms dynamically
during training and show that a two-stage training pipeline effectively mixes
the large-batch requirements of image-language pre-training with the memory
bottleneck imposed by the reconstruction objective. We validate the
effectiveness of QLIP for multimodal understanding and text-conditioned image
generation with a single model. Specifically, QLIP serves as a drop-in
replacement for the visual encoder for LLaVA and the image tokenizer for
LlamaGen with comparable or even better performance. Finally, we demonstrate
that QLIP enables a unified mixed-modality auto-regressive model for
understanding and generation.
|
2502.05179
|
FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution
Video Generation
|
cs.CV
|
DiT diffusion models have achieved great success in text-to-video generation,
leveraging their scalability in model capacity and data scale. High content and
motion fidelity aligned with text prompts, however, often require large model
parameters and a substantial number of function evaluations (NFEs). Realistic
and visually appealing details are typically reflected in high-resolution
outputs, further amplifying computational demands, especially for single-stage
DiT models. To address these challenges, we propose a novel two-stage
framework, FlashVideo, which strategically allocates model capacity and NFEs
across stages to balance generation fidelity and quality. In the first stage,
prompt fidelity is prioritized through a low-resolution generation process
utilizing large parameters and sufficient NFEs to enhance computational
efficiency. The second stage establishes flow matching between low and high
resolutions, effectively generating fine details with minimal NFEs.
Quantitative and visual results demonstrate that FlashVideo achieves
state-of-the-art high-resolution video generation with superior computational
efficiency. Additionally, the two-stage design enables users to preview the
initial output before committing to full resolution generation, thereby
significantly reducing computational costs and wait times as well as enhancing
commercial viability.
|
2502.05181
|
Enhancing Team Diversity with Generative AI: A Novel Project Management
Framework
|
cs.CY cs.AI cs.LG
|
This research-in-progress paper presents a new project management framework
that utilises GenAI technology. The framework is designed to address the common
challenge of uniform team compositions in academic and research project teams,
particularly in universities and research institutions. It does so by
integrating sociologically identified patterns of successful team member
personalities and roles, using GenAI agents to fill gaps in team dynamics. This
approach adds an additional layer of analysis to conventional project
management processes by evaluating team members' personalities and roles and
employing GenAI agents, fine-tuned on personality datasets, to fill specific
team roles. Our initial experiments have shown improvements in the model's
ability to understand and process personality traits, suggesting the potential
effectiveness of GenAI teammates in real-world project settings. This paper
aims to explore the practical application of AI in enhancing team diversity and
project management.
|
2502.05183
|
Modelling hydrogen integration in energy system models: Best practices
for policy insights
|
physics.soc-ph cs.CE math.OC
|
The rapid emergence of hydrogen in long-term energy strategies requires a
broad understanding on how hydrogen is currently modelled in national energy
system models. This study provides a review on hydrogen representation within
selected energy system models that are tailored towards providing policy
insights. The paper adopts a multi-layered review approach and selects eleven
notable models for the review. The review covers hydrogen production, storage,
transportation, trade, demand, modeling strategies, and hydrogen policies. The
review suggests that existing models often opt for a simplified representation
that can capture each stage of the hydrogen supply chain. This approach allows
models to strike a balance between accuracy and preserving computational
resources. The paper provides several suggestions for modeling hydrogen in
national energy system models.
|
2502.05186
|
Multimodal Stock Price Prediction
|
q-fin.ST cs.AI cs.LG
|
In an era where financial markets are heavily influenced by many static and
dynamic factors, it has become increasingly critical to carefully integrate
diverse data sources with machine learning for accurate stock price prediction.
This paper explores a multimodal machine learning approach for stock price
prediction by combining data from diverse sources, including traditional
financial metrics, tweets, and news articles. We capture real-time market
dynamics and investor mood through sentiment analysis on these textual data
using both ChatGPT-4o and FinBERT models. We look at how these integrated data
streams augment predictions made with a standard Long Short-Term Memory (LSTM)
model to illustrate the extent of performance gains. Our study's results
indicate that incorporating the mentioned data sources considerably increases
the forecast effectiveness of the reference model by up to 5%. We also provide
insights into the individual and combined predictive capacities of these
modalities, highlighting the substantial impact of incorporating sentiment
analysis from tweets and news articles. This research offers a systematic and
effective framework for applying multimodal data analytics techniques in
financial time series forecasting that provides a new view for investors to
leverage data for decision-making.
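As a hedged illustration of how such modalities can be fused, the sketch below concatenates daily price features with aggregated sentiment scores before an LSTM; the architecture and feature counts are assumptions, not the paper's exact model.

```python
# Illustrative multimodal LSTM; feature counts and names are assumptions.
import torch
import torch.nn as nn

class MultimodalLSTM(nn.Module):
    def __init__(self, n_price_feats=5, n_sent_feats=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_price_feats + n_sent_feats, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # next-day return (or price)

    def forward(self, prices, sentiment):
        # prices:    (B, T, n_price_feats) traditional financial metrics
        # sentiment: (B, T, n_sent_feats) daily scores aggregated from
        #            tweets and news (e.g., FinBERT outputs)
        x = torch.cat([prices, sentiment], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the last time step
```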
|
2502.05187
|
An Adaptable Budget Planner for Enhancing Budget-Constrained
Auto-Bidding in Online Advertising
|
cs.GT cs.LG
|
In online advertising, advertisers commonly utilize auto-bidding services to
bid for impression opportunities. A typical objective of the auto-bidder is to
optimize the advertiser's cumulative value of winning impressions within
specified budget constraints. However, such a problem is challenging due to the
complex bidding environment faced by diverse advertisers. To address this
challenge, we introduce ABPlanner, a few-shot adaptable budget planner designed
to improve budget-constrained auto-bidding. ABPlanner is based on a
hierarchical bidding framework that decomposes the bidding process into
shorter, manageable stages. Within this framework, ABPlanner allocates the
budget across all stages, allowing a low-level auto-bidder to bid based on the
budget allocation plan. The adaptability of ABPlanner is achieved through a
sequential decision-making approach, inspired by in-context reinforcement
learning. For each advertiser, ABPlanner adjusts the budget allocation plan
episode by episode, using data from previous episodes as a prompt for current
decisions. This enables ABPlanner to quickly adapt to different advertisers
with few-shot data, providing a sample-efficient solution. Extensive simulation
experiments and real-world A/B testing validate the effectiveness of ABPlanner,
demonstrating its capability to enhance the cumulative value achieved by
auto-bidders.
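The stage-wise allocation idea can be caricatured as follows; this is a deliberately simplified sketch (proportional allocation with an episode-by-episode update), not ABPlanner's actual in-context policy.

```python
# Simplified stand-in for stage-wise budget planning; not ABPlanner itself.
import numpy as np

def allocate(budget, value_per_cost):
    """Split the budget across stages in proportion to each stage's
    estimated value-per-cost."""
    w = np.maximum(np.asarray(value_per_cost, dtype=float), 1e-8)
    return budget * w / w.sum()

def update(estimates, observed, lr=0.3):
    """After each episode, blend in the realized value-per-cost, a crude
    stand-in for conditioning on previous episodes as a prompt."""
    return (1 - lr) * np.asarray(estimates) + lr * np.asarray(observed)
```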
|
2502.05189
|
Physics-Driven Self-Supervised Deep Learning for Free-Surface Multiple
Elimination
|
physics.geo-ph cs.LG
|
In recent years, deep learning (DL) has emerged as a promising alternative
approach for various seismic processing tasks, including primary estimation (or
multiple elimination), a crucial step for accurate subsurface imaging. In
geophysics, DL methods are commonly based on supervised learning from large
amounts of high-quality labelled data. Instead of relying on traditional
supervised learning, in the context of free-surface multiple elimination, we
propose a method in which the DL model learns to effectively parameterize the
free-surface multiple-free wavefield from the full wavefield by incorporating
the underlying physics into the loss computation. This, in turn, yields
high-quality estimates without ever being shown any ground truth data.
Currently, the network reparameterization is performed independently for each
dataset. We demonstrate its effectiveness through tests on both synthetic and
field data. We employ two industry-standard Surface-Related Multiple
Elimination (SRME) benchmarks, using global and local least-squares adaptive
subtraction, respectively. The comparison shows that the
proposed method outperforms the benchmarks in estimation accuracy, achieving
the most complete primary estimation and the least multiple energy leakage, but
at the cost of a higher computational burden.
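The physics-based loss can be sketched, under strong simplifying assumptions, with the frequency-domain SRME relation p ≈ p0 + r·p0·p: the network's primary estimate, plus the multiples predicted by convolving it with the data, must reproduce the recording. The shapes, the scalar reflectivity r, and the network interface below are assumptions of this sketch, not the paper's code.

```python
# Conceptual physics loss; assumes co-located sources/receivers so the
# per-frequency data matrices are square, and a scalar reflectivity r.
import torch

def physics_loss(net, p_obs, r=-1.0):
    # p_obs: observed wavefield per frequency, complex, shape (F, S, S)
    p0 = net(p_obs)                              # estimated primaries
    p_model = p0 + r * torch.matmul(p0, p_obs)   # primaries + surface multiples
    return (torch.abs(p_model - p_obs) ** 2).mean()
```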
|
2502.05190
|
Physics-Trained Neural Network as Inverse Problem Solver for Potential
Fields: An Example of Downward Continuation between Arbitrary Surfaces
|
physics.geo-ph cs.LG
|
Downward continuation is a critical task in potential field processing,
including gravity and magnetic fields, which aims to transfer data from one
observation surface to another that is closer to the source of the field. Its
effectiveness directly impacts the success of detecting and highlighting
subsurface anomalous sources. We treat downward continuation as an inverse
problem that relies on solving a forward problem defined by the formula for
upward continuation, and we propose a new physics-trained deep neural network
(DNN)-based solution for this task. We hard-code the upward continuation
process into the DNN's learning framework, where the DNN itself learns to act
as the inverse problem solver and can perform downward continuation without
ever being shown any ground truth data. We test the proposed method on both
synthetic magnetic data and real-world magnetic data from West Antarctica. The
preliminary results demonstrate its effectiveness through comparison with
selected benchmarks, opening future avenues for the combined use of DNNs and
established geophysical theories to address broader potential field inverse
problems, such as density and geometry modelling.
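A minimal sketch of the idea, assuming flat parallel surfaces (the paper handles arbitrary surfaces) and the classical wavenumber-domain upward-continuation filter exp(-|k|Δz); the network interface is illustrative.

```python
# Physics-trained inverse solver sketch: the forward (upward continuation)
# operator is hard-coded, so no ground truth on the lower surface is needed.
import torch

def upward_continue(field, dz, d=1.0):
    """Upward-continue a gridded field by dz via the wavenumber-domain
    filter exp(-|k| dz) (flat-surface simplification of the paper's setup)."""
    F = torch.fft.fft2(field)
    kx = 2 * torch.pi * torch.fft.fftfreq(field.shape[-2], d=d)
    ky = 2 * torch.pi * torch.fft.fftfreq(field.shape[-1], d=d)
    k = torch.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    return torch.fft.ifft2(F * torch.exp(-k * dz)).real

def physics_loss(net, z, observed_high, dz):
    predicted_low = net(z)  # candidate field on the lower surface
    return ((upward_continue(predicted_low, dz) - observed_high) ** 2).mean()
```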
|
2502.05195
|
Using Large Language Models for Solving Thermodynamic Problems
|
cs.CE
|
Large Language Models (LLMs) have made significant progress in reasoning,
demonstrating their capability to generate human-like responses. This study
analyzes the problem-solving capabilities of LLMs in the domain of
thermodynamics. A benchmark of 22 thermodynamic problems to evaluate LLMs is
presented that contains both simple and advanced problems. Five different LLMs
are assessed: GPT-3.5, GPT-4, and GPT-4o from OpenAI, Llama 3.1 from Meta, and
le Chat from MistralAI. The answers of these LLMs were evaluated by trained
human experts, following a methodology akin to the grading of academic exam
responses. The scores and the consistency of the answers are discussed,
together with the analytical skills of the LLMs. Both strengths and weaknesses
of the LLMs become evident. They generally yield good results for the simple
problems, but limitations also become clear: the LLMs do not provide consistent
results, often fail to fully comprehend the context, and make wrong
assumptions. Given the complexity and domain-specific nature of the problems,
the statistical language modeling approach of the LLMs struggles with the
accurate interpretation and the required reasoning. The present results
highlight the need for more systematic integration of thermodynamic knowledge
with LLMs, for example, by using knowledge-based methods.
|
2502.05196
|
LLMs Provide Unstable Answers to Legal Questions
|
cs.CL cs.CY
|
An LLM is stable if it reaches the same conclusion when asked the identical
question multiple times. We find leading LLMs like gpt-4o, claude-3.5, and
gemini-1.5 are unstable when providing answers to hard legal questions, even
when made as deterministic as possible by setting temperature to 0. We curate
and release a novel dataset of 500 legal questions distilled from real cases,
involving two parties, with facts, competing legal arguments, and the question
of which party should prevail. When provided with the exact same question, we
observe that LLMs sometimes say one party should win while at other times
saying the other party should win. This instability has implications for the
increasing numbers of legal AI products, legal processes, and lawyers relying
on these LLMs.
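A stability probe in the spirit of the paper can be written in a few lines; the model name, prompt handling, and answer parsing below are illustrative assumptions, not the paper's exact protocol.

```python
# Ask the identical question n times at temperature 0 and tally verdicts.
# Assumes the OPENAI_API_KEY environment variable is set.
from collections import Counter
from openai import OpenAI

client = OpenAI()

def verdicts(question: str, n: int = 10, model: str = "gpt-4o") -> Counter:
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,
            temperature=0,  # as deterministic as the API allows
            messages=[{"role": "user", "content": question}],
        )
        answers.append(resp.choices[0].message.content.strip())
    return Counter(answers)

# The model is "stable" on the question iff the Counter has a single key.
```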
|
2502.05198
|
A finite element-based machine learning model for hydro-mechanical
analysis of swelling behavior in clay-sulfate rocks
|
physics.geo-ph cs.LG physics.comp-ph
|
The hydro-mechanical behavior of clay-sulfate rocks, especially their
swelling properties, poses significant challenges in geotechnical engineering.
This study presents a hybrid constrained machine learning (ML) model developed
using the categorical boosting algorithm (CatBoost) tuned with a Bayesian
optimization algorithm to predict and analyze the swelling behavior of these
complex geological materials. Initially, a coupled hydro-mechanical model based
on Richards' equation coupled to a deformation process with linear
kinematics implemented within the finite element framework OpenGeoSys was used
to simulate the observed ground heave in Staufen, Germany, caused by water
inflow into the clay-sulfate bearing Triassic Grabfeld Formation. A systematic
parametric analysis using Gaussian distributions of key parameters, including
Young's modulus, Poisson's ratio, maximum swelling pressure, permeability, and
air entry pressure, was performed to construct a synthetic database. The ML
model takes time, spatial coordinates, and these parameter values as inputs,
while water saturation, porosity, and vertical displacement are outputs. In
addition, penalty terms were incorporated into the CatBoost objective function
to enforce physically meaningful predictions. Results show that the hybrid
approach effectively captures the nonlinear and dynamic interactions that
govern hydro-mechanical processes. The study demonstrates the ability of the
model to predict the swelling behavior of clay-sulfate rocks, providing a
robust tool for risk assessment and management in affected regions. The results
highlight the potential of ML-driven models to address complex geotechnical
challenges.
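At its core, the ML stage amounts to fitting a multi-output gradient-boosted regressor on the synthetic database; the sketch below omits the paper's physics-based penalty terms, and the file and column names are hypothetical.

```python
# Multi-output CatBoost fit on a synthetic FE database (names hypothetical;
# the paper's physics-based penalty terms are omitted in this sketch).
import pandas as pd
from catboost import CatBoostRegressor

df = pd.read_csv("synthetic_heave_database.csv")
X = df[["t", "x", "y", "E", "nu", "p_swell_max", "k", "p_entry"]]
Y = df[["saturation", "porosity", "u_z"]]

model = CatBoostRegressor(loss_function="MultiRMSE", iterations=2000,
                          learning_rate=0.05, verbose=200)
model.fit(X, Y)
```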
|
2502.05202
|
Accelerating LLM Inference with Lossless Speculative Decoding Algorithms
for Heterogeneous Vocabularies
|
cs.CL cs.AI cs.LG
|
Accelerating the inference of large language models (LLMs) is a critical
challenge in generative AI. Speculative decoding (SD) methods offer substantial
efficiency gains by generating multiple tokens using a single target forward
pass. However, existing SD approaches require the drafter and target models to
share the same vocabulary, thus limiting the pool of possible drafters, often
necessitating the training of a drafter from scratch. We present three new SD
methods that remove this shared-vocabulary constraint. All three methods
preserve the target distribution (i.e., they are lossless) and work with
off-the-shelf models without requiring additional training or modifications.
Empirically, on summarization, programming, and long-context tasks, our
algorithms achieve significant speedups over standard autoregressive decoding.
By enabling any off-the-shelf model to serve as drafter and requiring no
retraining, this work substantially broadens the applicability of the SD
framework in practice.
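For context, the shared-vocabulary setting that these methods generalize uses the standard lossless acceptance rule of speculative sampling, sketched below; this is the classical algorithm, not the paper's new heterogeneous-vocabulary variants.

```python
# Standard lossless speculative-sampling acceptance test (shared vocabulary).
import numpy as np

rng = np.random.default_rng(0)

def accept_or_resample(token, p_target, p_draft):
    """Accept the drafted token with prob min(1, p_target/p_draft); on
    rejection, resample from the residual max(p_target - p_draft, 0).
    This preserves the target distribution exactly (losslessness)."""
    if rng.random() < min(1.0, p_target[token] / p_draft[token]):
        return token
    residual = np.maximum(p_target - p_draft, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(residual), p=residual))
```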
|
2502.05203
|
Adversarial Machine Learning: Attacking and Safeguarding Image Datasets
|
cs.LG cs.CV
|
This paper examines the vulnerabilities of convolutional neural networks
(CNNs) to adversarial attacks and explores a method for their safeguarding. In
this study, CNNs were implemented on four of the most common image datasets,
namely CIFAR-10, ImageNet, MNIST, and Fashion-MNIST, and achieved high baseline
accuracy. To assess the robustness of these models, the Fast Gradient Sign
Method (FGSM) was used, an attack that degrades a model's accuracy by adding a
minimal perturbation to the input image. To counter the FGSM attack, a
safeguarding approach was applied that retrains the models on a mixture of
clean and adversarial images to increase their resistance. FGSM is then applied
again, this time to the adversarially trained models, to measure the remaining
drop in accuracy and evaluate the effectiveness of the defense. While the
models achieve substantial robustness after adversarial training, some
performance loss against adversarial perturbations remains. This work
emphasizes the need
to create better defenses for models deployed in real-world scenarios against
adversaries.
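FGSM itself is essentially a one-line attack; a standard PyTorch sketch follows (the epsilon value and pixel range are illustrative choices, not the paper's settings).

```python
# Standard FGSM: one signed-gradient step on the input image.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()      # minimal worst-case perturbation
    return x_adv.clamp(0, 1).detach()    # keep pixels in the valid range
```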
|
2502.05204
|
Invariant Measures for Data-Driven Dynamical System Identification:
Analysis and Application
|
math.DS cs.LG nlin.CD physics.data-an
|
We propose a novel approach for performing dynamical system identification,
based upon the comparison of simulated and observed physical invariant
measures. While standard methods adopt a Lagrangian perspective by directly
treating time-trajectories as inference data, we take on an Eulerian
perspective and instead seek models fitting the observed global time-invariant
statistics. With this change in perspective, we gain robustness against
pervasive challenges in system identification including noise, chaos, and slow
sampling. In the first half of this paper, we pose the system identification
task as a partial differential equation (PDE) constrained optimization problem,
in which synthetic stationary solutions of the Fokker-Planck equation, obtained
as fixed points of a finite-volume discretization, are compared to physical
invariant measures extracted from observed trajectory data. In the latter half
of the paper, we improve upon this approach in two crucial directions. First,
we develop a Galerkin-inspired modification to the finite-volume surrogate
model, based on data-adaptive unstructured meshes and Monte-Carlo integration,
enabling the approach to efficiently scale to high-dimensional problems.
Second, we leverage Takens' seminal time-delay embedding theory to introduce a
critical data-dependent coordinate transformation which can guarantee unique
system identifiability from the invariant measure alone. This contribution
resolves a major challenge of system identification through invariant measures,
as systems exhibiting distinct transient behaviors may still share the same
time-invariant statistics in their state-coordinates. Throughout, we present
comprehensive numerical tests which highlight the effectiveness of our approach
on a variety of challenging system identification tasks.
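The time-delay embedding step invoked via Takens' theorem has a compact form; the sketch below builds delay coordinates from a scalar series (the embedding dimension and lag are illustrative).

```python
# Takens-style delay embedding of a scalar time series.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Return an (N, dim) array of delay vectors
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```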
|
2502.05206
|
Safety at Scale: A Comprehensive Survey of Large Model Safety
|
cs.CR cs.AI cs.CL cs.CV
|
The rapid advancement of large models, driven by their exceptional abilities
in learning and generalization through large-scale pre-training, has reshaped
the landscape of Artificial Intelligence (AI). These models are now
foundational to a wide range of applications, including conversational AI,
recommendation systems, autonomous driving, content generation, medical
diagnostics, and scientific discovery. However, their widespread deployment
also exposes them to significant safety risks, raising concerns about
robustness, reliability, and ethical implications. This survey provides a
systematic review of current safety research on large models, covering Vision
Foundation Models (VFMs), Large Language Models (LLMs), Vision-Language
Pre-training (VLP) models, Vision-Language Models (VLMs), Diffusion Models
(DMs), and large-model-based Agents. Our contributions are summarized as
follows: (1) We present a comprehensive taxonomy of safety threats to these
models, including adversarial attacks, data poisoning, backdoor attacks,
jailbreak and prompt injection attacks, energy-latency attacks, data and model
extraction attacks, and emerging agent-specific threats. (2) We review defense
strategies proposed for each type of attack, where available, and summarize the
commonly used datasets and benchmarks for safety research. (3) Building on
this, we identify and discuss the open challenges in large model safety,
emphasizing the need for comprehensive safety evaluations, scalable and
effective defense mechanisms, and sustainable data practices. More importantly,
we highlight the necessity of collective efforts from the research community
and international collaboration. Our work can serve as a useful reference for
researchers and practitioners, fostering the ongoing development of
comprehensive defense systems and platforms to safeguard AI models.
|
2502.05208
|
Mitigation of Camouflaged Adversarial Attacks in Autonomous Vehicles--A
Case Study Using CARLA Simulator
|
cs.CR cs.AI cs.LG
|
Autonomous vehicles (AVs) rely heavily on cameras and artificial intelligence
(AI) to make safe and accurate driving decisions. However, since AI is the core
enabling technology, this raises serious cyber threats that hinder the
large-scale adoption of AVs. Therefore, it becomes crucial to analyze the
resilience of AV security systems against sophisticated attacks that manipulate
camera inputs, deceiving AI models. In this paper, we develop
camera-camouflaged adversarial attacks targeting traffic sign recognition (TSR)
in AVs. Specifically, the attack is initiated by modifying the texture of a
stop sign to fool the AV's object detection system, thereby affecting the AV's
actuators. The attack's effectiveness is tested using the CARLA AV simulator
and the results show that such an attack can delay the auto-braking response to
the stop sign, resulting in potential safety issues. We conduct extensive
experiments under various conditions, confirming that our new attack is
effective and robust. Additionally, we address the attack by presenting
mitigation strategies. The proposed attack and defense methods are applicable
to other end-to-end trained autonomous cyber-physical systems.
|
2502.05209
|
Model Tampering Attacks Enable More Rigorous Evaluations of LLM
Capabilities
|
cs.CR cs.AI
|
Evaluations of large language model (LLM) risks and capabilities are
increasingly being incorporated into AI risk management and governance
frameworks. Currently, most risk evaluations are conducted by designing inputs
that elicit harmful behaviors from the system. However, a fundamental
limitation of this approach is that the harmfulness of the behaviors identified
during any particular evaluation can only lower bound the model's
worst-possible-case behavior. As a complementary method for eliciting harmful
behaviors, we propose evaluating LLMs with model tampering attacks which allow
for modifications to latent activations or weights. We pit state-of-the-art
techniques for removing harmful LLM capabilities against a suite of 5
input-space and 6 model tampering attacks. In addition to benchmarking these
methods against each other, we show that (1) model resilience to capability
elicitation attacks lies on a low-dimensional robustness subspace; (2) the
attack success rate of model tampering attacks can empirically predict and
offer conservative estimates for the success of held-out input-space attacks;
and (3) state-of-the-art unlearning methods can easily be undone within 16
steps of fine-tuning. Together these results highlight the difficulty of
removing harmful LLM capabilities and show that model tampering attacks enable
substantially more rigorous evaluations than input-space attacks alone. We
release models at https://huggingface.co/LLM-GAT.
|
2502.05210
|
Regression and Forecasting of U.S. Stock Returns Based on LSTM
|
q-fin.ST cs.LG
|
This paper analyses the investment returns of three stock sectors, Manuf,
Hitec, and Other, in the U.S. stock market, based on the Fama-French
three-factor model, the Carhart four-factor model, and the Fama-French
five-factor model, in order to test the validity of these three models for the
three sectors of the market. In addition, an LSTM model is used to explore
additional factors affecting stock returns. The empirical results show that the
Fama-French five-factor model has better validity for the three segments of the
market under study, and that the LSTM model is able to capture factors
affecting the returns of certain industries and can better regress and predict
the stock returns of the relevant industries. Keywords: Fama-French model;
Carhart model; factor model; LSTM model.
|
2502.05211
|
Decoding FL Defenses: Systemization, Pitfalls, and Remedies
|
cs.CR cs.AI
|
While the community has designed various defenses to counter the threat of
poisoning attacks in Federated Learning (FL), there are no guidelines for
evaluating these defenses. These defenses are prone to subtle pitfalls in their
experimental setups that lead to a false sense of security, rendering them
unsuitable for practical deployment. In this paper, we systematically identify
these pitfalls and provide a better approach to addressing them. First, we
design a comprehensive systemization of FL defenses along
three dimensions: i) how client updates are processed, ii) what the server
knows, and iii) at what stage the defense is applied. Next, we thoroughly
survey 50 top-tier defense papers and identify the commonly used components in
their evaluation setups. Based on this survey, we uncover six distinct pitfalls
and study their prevalence. For example, we discover that around 30% of these
works solely use the intrinsically robust MNIST dataset, and 40% employ
simplistic attacks, which may inadvertently portray their defense as robust.
Using three representative defenses as case studies, we perform a critical
reevaluation to study the impact of the identified pitfalls and show how they
lead to incorrect conclusions about robustness. We provide actionable
recommendations to help researchers overcome each pitfall.
|
2502.05213
|
DERMARK: A Dynamic, Efficient and Robust Multi-bit Watermark for Large
Language Models
|
cs.CR cs.AI
|
Well-trained large language models (LLMs) present significant risks,
including potential malicious use and copyright infringement. Current studies
aim to trace the distribution of LLM-generated texts by implicitly embedding
watermarks. Among these, the single-bit watermarking method can only determine
whether a given text was generated by an LLM. In contrast, the multi-bit
watermarking method embeds richer information into the generated text, which
can identify which LLM generated a given text and to which user it was
distributed.
However, existing efforts embed the multi-bit watermark directly into the
generated text without accounting for its watermarking capacity. This approach
can result in embedding failures when the text's watermarking capacity is
insufficient. In this paper, we derive the watermark embedding distribution
based on the logits of LLMs and propose a formal inequality to segment the text
optimally for watermark embedding. Building on this foundation, we propose
DERMARK, a dynamic, efficient, and robust multi-bit watermarking method.
DERMARK divides the text into segments of varying lengths for each bit
embedding, adaptively matching the text's capacity. It achieves this with
negligible overhead and robust performance against text editing by minimizing
watermark extraction loss. Comprehensive experiments demonstrate that, compared
to the SOTA method, our method reduces the number of tokens required for
embedding each bit by 20%, reduces watermark embedding time by 50%, and is
robust to text editing and watermark erasure attacks.
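The capacity-aware segmentation can be caricatured as a greedy scan; the sketch below uses per-token entropy as a proxy for capacity and a fixed threshold, which is a simplification of the paper's formal inequality, not DERMARK itself.

```python
# Greedy capacity-aware segmentation (a simplification, not DERMARK's
# exact inequality): close a segment once enough capacity accumulates.
def segment_by_capacity(token_entropies, threshold=4.0):
    segments, start, acc = [], 0, 0.0
    for i, h in enumerate(token_entropies):
        acc += h                      # entropy as a proxy for capacity
        if acc >= threshold:          # enough room to embed one bit
            segments.append((start, i + 1))
            start, acc = i + 1, 0.0
    return segments
```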
|
2502.05214
|
CoRPA: Adversarial Image Generation for Chest X-rays Using Concept
Vector Perturbations and Generative Models
|
eess.IV cs.AI cs.CV
|
Deep learning models for medical image classification tasks are becoming
widely implemented in AI-assisted diagnostic tools, aiming to enhance
diagnostic accuracy, reduce clinician workloads, and improve patient outcomes.
However, their vulnerability to adversarial attacks poses significant risks to
patient safety. Current attack methodologies use general techniques such as
model querying or pixel value perturbations to generate adversarial examples
designed to fool a model. These approaches may not adequately address the
unique characteristics of clinical errors stemming from missed or incorrectly
identified clinical features. We propose the Concept-based Report Perturbation
Attack (CoRPA), a clinically-focused black-box adversarial attack framework
tailored to the medical imaging domain. CoRPA leverages clinical concepts to
generate adversarial radiological reports and images that closely mirror
realistic clinical misdiagnosis scenarios. We demonstrate the utility of CoRPA
using the MIMIC-CXR-JPG dataset of chest X-rays and radiological reports. Our
evaluation reveals that deep learning models exhibiting strong resilience to
conventional adversarial attacks are significantly less robust when subjected
to CoRPA's clinically-focused perturbations. This underscores the importance of
addressing domain-specific vulnerabilities in medical AI systems. By
introducing a specialized adversarial attack framework, this study provides a
foundation for developing robust, real-world-ready AI models in healthcare,
ensuring their safe and reliable deployment in high-stakes clinical
environments.
|