| id | title | categories | abstract |
|---|---|---|---|
2501.08807
|
Multi-visual modality micro drone-based structural damage detection
|
cs.CV
|
Accurate and resilient object detectors for structural damage detection are
important for ensuring the continued use of civil infrastructure. However,
achieving robustness in object detectors remains a persistent challenge,
limiting their ability to generalize effectively. This study
proposes DetectorX, a robust framework for structural damage detection coupled
with a micro drone. DetectorX addresses the challenges of object detector
robustness by incorporating two innovative modules: a stem block and a spiral
pooling technique. The stem block introduces a dynamic visual modality by
leveraging the outputs of two Deep Convolutional Neural Network (DCNN) models.
The framework employs the proposed event-based reward reinforcement learning to
constrain the actions of the parent and child DCNN models, leading to a reward.
This results in the induction of two dynamic visual modalities alongside the
Red, Green, and Blue (RGB) data. This enhancement significantly augments
DetectorX's perception and adaptability in diverse environmental situations.
Further, a spiral pooling technique, an online image augmentation method,
strengthens the framework by increasing feature representations by
concatenating spiraled and average/max pooled features. In three extensive
experiments, (1) a comparative study and (2) a robustness study, both using the
Pacific Earthquake Engineering Research Hub ImageNet dataset, and (3) a field
experiment, DetectorX performed satisfactorily across varying metrics,
including precision (0.88), recall (0.84), average precision (0.91), mean
average precision (0.76), and mean average recall (0.73), compared to competing
detectors including You Only Look Once X-medium (YOLOX-m) and others. The
study's findings indicate
that DetectorX can provide satisfactory results and demonstrate resilience in
challenging environments.
|
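The spiral pooling described in the abstract above concatenates spiraled and average/max pooled features. As a hedged sketch only (the exact spiral traversal, feature shapes, and pooling order are assumptions; the abstract does not specify them), one way such a concatenation could look:

```python
def avg_pool(xs):
    return sum(xs) / len(xs)

def max_pool(xs):
    return max(xs)

def spiral_order(grid):
    """Read a 2-D feature patch in clockwise spiral order (consumes its argument)."""
    out = []
    while grid:
        out += grid.pop(0)                       # top row, left to right
        if grid and grid[0]:
            for row in grid:                     # right column, downward
                out.append(row.pop())
        if grid:
            out += grid.pop()[::-1]              # bottom row, right to left
        if grid and grid[0]:
            for row in grid[::-1]:               # left column, upward
                out.append(row.pop(0))
    return out

patch = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
spiraled = spiral_order([row[:] for row in patch])   # copy so patch survives
features = spiraled + [avg_pool(spiraled), max_pool(spiraled)]
print(spiraled)  # [1, 2, 3, 6, 9, 8, 7, 4, 5]
```

Concatenating the spiral-ordered values with their pooled summaries yields a longer feature vector, matching the abstract's description of increased feature representation.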
2501.08808
|
A Bayesian Hierarchical Model for Generating Synthetic Unbalanced Power
Distribution Grids
|
eess.SY cs.SY
|
The real-world data of power networks is often inaccessible due to privacy
and security concerns, highlighting the need for tools to generate realistic
synthetic network data. Existing methods leverage geographic tools like
OpenStreetMap with heuristic rules to model system topology and typically focus
on single-phase, balanced systems, limiting their applicability to real-world
distribution systems, which are usually unbalanced. This work proposes a
Bayesian Hierarchical Model (BHM) to generate unbalanced three-phase
distribution systems by learning from existing networks. The scheme takes as input
the base topology and aggregated demand per node and outputs a three-phase
unbalanced system. The proposed scheme achieves a Mean Absolute Percentage
Error (MAPE) of less than $8\%$ across all phases, with computation times of
20.4 seconds for model training and 3.1 seconds per sample generation. We demonstrate the transfer
learning capability of the proposed tool by leveraging a model trained on an
observed system to generate a synthetic network for an unobserved system.
Specifically, the tool is trained using the publicly available SMART-DS dataset
and subsequently applied to generate synthetic networks for the European
906-bus system and the IEEE 123-bus system. This tool allows researchers to
simulate realistic unbalanced three-phase power data with high accuracy and
speed, enhancing planning and operational analysis for modern power grids.
|
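The abstract above reports a Mean Absolute Percentage Error (MAPE) below 8% across phases. MAPE itself is a standard metric; as a minimal sketch (the per-phase demand values below are hypothetical, not from the paper):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent; assumes no zero actuals."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# hypothetical per-phase demands (kW) and model outputs
actual = [10.0, 20.0, 40.0]
predicted = [10.5, 19.0, 42.0]
print(round(mape(actual, predicted), 2))  # 5.0, i.e. within the reported 8% bound
```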
2501.08809
|
XMusic: Towards a Generalized and Controllable Symbolic Music Generation
Framework
|
cs.SD cs.AI eess.AS
|
In recent years, remarkable advancements in artificial intelligence-generated
content (AIGC) have been achieved in the fields of image synthesis and text
generation, generating content comparable to that produced by humans. However,
the quality of AI-generated music has not yet reached this standard, primarily
due to the challenge of effectively controlling musical emotions and ensuring
high-quality outputs. This paper presents a generalized symbolic music
generation framework, XMusic, which supports flexible prompts (i.e., images,
videos, texts, tags, and humming) to generate emotionally controllable and
high-quality symbolic music. XMusic consists of two core components, XProjector
and XComposer. XProjector parses the prompts of various modalities into
symbolic music elements (i.e., emotions, genres, rhythms and notes) within the
projection space to generate matching music. XComposer contains a Generator and
a Selector. The Generator generates emotionally controllable and melodious
music based on our innovative symbolic music representation, whereas the
Selector identifies high-quality symbolic music by constructing a multi-task
learning scheme involving quality assessment, emotion recognition, and genre
recognition tasks. In addition, we build XMIDI, a large-scale symbolic music
dataset that contains 108,023 MIDI files annotated with precise emotion and
genre labels. Objective and subjective evaluations show that XMusic
significantly outperforms the current state-of-the-art methods with impressive
music quality. Our XMusic has been awarded as one of the nine Highlights of
Collectibles at WAIC 2023. The project homepage of XMusic is
https://xmusic-project.github.io.
|
2501.08814
|
SAIF: A Comprehensive Framework for Evaluating the Risks of Generative
AI in the Public Sector
|
cs.AI cs.CL cs.CY
|
The rapid adoption of generative AI in the public sector, encompassing
diverse applications ranging from automated public assistance to welfare
services and immigration processes, highlights its transformative potential
while underscoring the pressing need for thorough risk assessments. Despite its
growing presence, evaluations of risks associated with AI-driven systems in the
public sector remain insufficiently explored. Building upon an established
taxonomy of AI risks derived from diverse government policies and corporate
guidelines, we investigate the critical risks posed by generative AI in the
public sector while extending the scope to account for its multimodal
capabilities. In addition, we propose a Systematic dAta generatIon Framework
for evaluating the risks of generative AI (SAIF). SAIF involves four key
stages: breaking down risks, designing scenarios, applying jailbreak methods,
and exploring prompt types. It ensures the systematic and consistent generation
of prompt data, facilitating a comprehensive evaluation while providing a solid
foundation for mitigating the risks. Furthermore, SAIF is designed to
accommodate emerging jailbreak methods and evolving prompt types, thereby
enabling effective responses to unforeseen risk scenarios. We believe that this
study can play a crucial role in fostering the safe and responsible integration
of generative AI into the public sector.
|
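SAIF's four stages above (breaking down risks, designing scenarios, applying jailbreak methods, exploring prompt types) suggest a combinatorial grid of prompt specifications. A minimal sketch, with purely illustrative stage labels that are not taken from the paper:

```python
from itertools import product

# illustrative stage values; the paper's actual taxonomy is richer
risks = ["privacy_leak", "misinformation"]
scenarios = ["welfare_services", "immigration"]
jailbreaks = ["role_play", "none"]
prompt_types = ["direct", "indirect"]

# systematic, consistent enumeration of every stage combination
prompts = [
    {"risk": r, "scenario": s, "jailbreak": j, "prompt_type": t}
    for r, s, j, t in product(risks, scenarios, jailbreaks, prompt_types)
]
print(len(prompts))  # 16 prompt specifications
```

New jailbreak methods or prompt types extend the grid by appending to a list, which mirrors the abstract's claim that the framework accommodates emerging methods.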
2501.08815
|
Human Pose-Constrained UV Map Estimation
|
cs.CV
|
UV map estimation is used in computer vision for detailed analysis of human
posture or activity. Previous methods assign pixels to body model vertices by
comparing pixel descriptors independently, without enforcing global coherence
or plausibility in the UV map. We propose Pose-Constrained Continuous Surface
Embeddings (PC-CSE), which integrates estimated 2D human pose into the
pixel-to-vertex assignment process. The pose provides global anatomical
constraints, ensuring that UV maps remain coherent while preserving local
precision. Evaluation on DensePose COCO demonstrates consistent improvement,
regardless of the chosen 2D human pose model. Whole-body poses offer better
constraints by incorporating additional details about the hands and feet.
Conditioning UV maps with human pose reduces invalid mappings and enhances
anatomical plausibility. In addition, we highlight inconsistencies in the
ground-truth annotations.
|
2501.08816
|
IDEA: Image Description Enhanced CLIP-Adapter
|
cs.CV cs.AI cs.LG
|
CLIP (Contrastive Language-Image Pre-training) has attained great success in
pattern recognition and computer vision. Transferring CLIP to downstream tasks
(e.g. zero- or few-shot classification) is a hot topic in multimodal learning.
However, current studies primarily focus on either prompt learning for text or
adapter tuning for vision, without fully exploiting the complementary
information and correlations among image-text pairs. In this paper, we propose
an Image Description Enhanced CLIP-Adapter (IDEA) method to adapt CLIP to
few-shot image classification tasks. This method captures fine-grained features
by leveraging both visual features and textual descriptions of images. IDEA is
a training-free method for CLIP, and it is comparable to, or even exceeds,
state-of-the-art models on multiple tasks. Furthermore, we introduce
Trainable-IDEA (T-IDEA), which extends IDEA by adding two lightweight learnable
components (i.e., a projector and a learnable latent space), further enhancing
the model's performance and achieving SOTA results on 11 datasets. As one
important contribution, we employ the Llama model and design a comprehensive
pipeline to generate textual descriptions for images of 11 datasets, resulting
in a total of 1,637,795 image-text pairs, named "IMD-11". Our code and data are
released at https://github.com/FourierAI/IDEA.
|
2501.08819
|
Boosting Diffusion Guidance via Learning Degradation-Aware Models for
Blind Super Resolution
|
eess.IV cs.CV
|
Recently, diffusion-based blind super-resolution (SR) methods have shown
great ability to generate high-resolution images with abundant high-frequency
detail, but the detail is often achieved at the expense of fidelity. Meanwhile,
another line of research focusing on rectifying the reverse process of
diffusion models (i.e., diffusion guidance), has demonstrated the power to
generate high-fidelity results for non-blind SR. However, these methods rely on
known degradation kernels, making them difficult to apply to blind SR. To
address these issues, we present DADiff in this paper. DADiff incorporates
degradation-aware models into the diffusion guidance framework, eliminating the
need to know degradation kernels. Additionally, we propose two novel
techniques: input perturbation and guidance scalar, to further improve our
performance. Extensive experimental results show that our proposed method has
superior performance over state-of-the-art methods on blind SR benchmarks.
|
2501.08821
|
A Closer Look at the Learnability of Out-of-Distribution (OOD) Detection
|
cs.LG
|
Machine learning algorithms often encounter different or
"out-of-distribution" (OOD) data at deployment time, and OOD detection is
frequently employed to detect these examples. While it works reasonably well in
practice, existing theoretical results on OOD detection are highly pessimistic.
In this work, we take a closer look at this problem, and make a distinction
between uniform and non-uniform learnability, following PAC learning theory. We
characterize under what conditions OOD detection is uniformly and non-uniformly
learnable, and we show that in several cases, non-uniform learnability turns a
number of negative results into positive ones. In all cases where OOD detection is
learnable, we provide concrete learning algorithms and a sample-complexity
analysis.
|
2501.08822
|
Deep Learning Meets Queue-Reactive: A Framework for Realistic Limit
Order Book Simulation
|
q-fin.TR cs.LG
|
The Queue-Reactive model introduced by Huang et al. (2015) has become a
standard tool for limit order book modeling, widely adopted by both researchers
and practitioners for its simplicity and effectiveness. We present the
Multidimensional Deep Queue-Reactive (MDQR) model, which extends this framework
in three ways: it relaxes the assumption of queue independence, enriches the
state space with market features, and models the distribution of order sizes.
Through a neural network architecture, the model learns complex dependencies
between different price levels and adapts to varying market conditions, while
preserving the interpretable point-process foundation of the original
framework. Using data from the Bund futures market, we show that MDQR captures
key market properties including the square-root law of market impact,
cross-queue correlations, and realistic order size patterns. The model
demonstrates particular strength in reproducing both conditional and stationary
distributions of order sizes, as well as various stylized facts of market
microstructure. The model achieves this while maintaining the computational
efficiency needed for practical applications such as strategy development
through reinforcement learning or realistic backtesting.
|
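Among the market properties MDQR reproduces is the square-root law of market impact, commonly stated as impact ≈ Y · σ · sqrt(Q/V) for order size Q and daily volume V. A minimal numeric sketch (the values of Y, σ, Q, and V below are hypothetical, not from the paper):

```python
import math

def sqrt_law_impact(sigma_daily, traded_qty, daily_volume, y=1.0):
    """Square-root law: price impact ~ Y * sigma * sqrt(Q / V)."""
    return y * sigma_daily * math.sqrt(traded_qty / daily_volume)

# order equal to 4% of daily volume, 1% daily volatility
impact = sqrt_law_impact(sigma_daily=0.01, traded_qty=4_000, daily_volume=100_000)
print(impact)  # roughly 0.002, i.e. about 20 bps
```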
2501.08828
|
MMDocIR: Benchmarking Multi-Modal Retrieval for Long Documents
|
cs.IR cs.AI cs.CL cs.CV
|
Multi-modal document retrieval is designed to identify and retrieve various
forms of multi-modal content, such as figures, tables, charts, and layout
information from extensive documents. Despite its significance, there is a
notable lack of a robust benchmark to effectively evaluate the performance of
systems in multi-modal document retrieval. To address this gap, this work
introduces a new benchmark, named MMDocIR, encompassing two distinct tasks:
page-level and layout-level retrieval. The former focuses on localizing the
most relevant pages within a long document, while the latter targets the
detection of specific layouts, offering finer granularity than
whole-page analysis. A layout can refer to a variety of elements such as
textual paragraphs, equations, figures, tables, or charts. The MMDocIR
benchmark comprises a rich dataset featuring expertly annotated labels for
1,685 questions and bootstrapped labels for 173,843 questions, making it a
pivotal resource for advancing multi-modal document retrieval for both training
and evaluation. Through rigorous experiments, we reveal that (i) visual
retrievers significantly outperform their text counterparts, (ii) the MMDocIR
training set can effectively benefit the training of multi-modal document
retrieval models, and (iii) text retrievers leveraging VLM-text perform much
better than those using OCR-text. These findings underscore the potential
advantages of integrating visual elements in multi-modal document retrieval.
|
2501.08837
|
MANTA: Diffusion Mamba for Efficient and Effective Stochastic Long-Term
Dense Anticipation
|
cs.CV
|
Our work addresses the problem of stochastic long-term dense anticipation.
The goal of this task is to predict actions and their durations several minutes
into the future based on provided video observations. Anticipation over
extended horizons introduces high uncertainty, as a single observation can lead
to multiple plausible future outcomes. To address this uncertainty, stochastic
models are designed to predict several potential future action sequences.
Recent work has further proposed to incorporate uncertainty modelling for
observed frames by simultaneously predicting per-frame past and future actions
in a unified manner. While such joint modelling of actions is beneficial, it
requires long-range temporal capabilities to connect events across distant past
and future time points. However, previous work struggles to achieve such
long-range understanding due to limited and/or sparse receptive fields. To
alleviate this issue, we propose a novel MANTA (MAmba for ANTicipation)
network. Our model enables effective long-term temporal modelling even for very
long sequences while maintaining linear complexity in sequence length. We
demonstrate that our approach achieves state-of-the-art results on three
datasets - Breakfast, 50Salads, and Assembly101 - while also significantly
improving computational and memory efficiency.
|
2501.08838
|
ToMATO: Verbalizing the Mental States of Role-Playing LLMs for
Benchmarking Theory of Mind
|
cs.CL cs.AI
|
Existing Theory of Mind (ToM) benchmarks diverge from real-world scenarios in
three aspects: 1) they assess a limited range of mental states such as beliefs,
2) false beliefs are not comprehensively explored, and 3) the diverse
personality traits of characters are overlooked. To address these challenges,
we introduce ToMATO, a new ToM benchmark formulated as multiple-choice QA over
conversations. ToMATO is generated via LLM-LLM conversations featuring
information asymmetry. By employing a prompting method that requires
role-playing LLMs to verbalize their thoughts before each utterance, we capture
both first- and second-order mental states across five categories: belief,
intention, desire, emotion, and knowledge. These verbalized thoughts serve as
answers to questions designed to assess the mental states of characters within
conversations. Furthermore, the information asymmetry introduced by hiding
thoughts from others induces the generation of false beliefs about various
mental states. Assigning distinct personality traits to LLMs further
diversifies both utterances and thoughts. ToMATO consists of 5.4k questions,
753 conversations, and 15 personality trait patterns. Our analysis shows that
this dataset construction approach frequently generates false beliefs due to
the information asymmetry between role-playing LLMs, and effectively reflects
diverse personalities. We evaluate nine LLMs on ToMATO and find that even
GPT-4o mini lags behind human performance, especially in understanding false
beliefs, and lacks robustness to various personality traits.
|
2501.08841
|
Exploring Task-Level Optimal Prompts for Visual In-Context Learning
|
cs.AI cs.CV
|
With the development of Vision Foundation Models (VFMs) in recent years,
Visual In-Context Learning (VICL) has become a better choice compared to
modifying models in most scenarios. Unlike retraining or fine-tuning a model,
VICL does not require modifications to the model's weights or architecture, and
only needs a prompt with demonstrations to teach the VFM how to solve tasks.
Currently, the significant computational cost of finding optimal prompts for
every test sample hinders the deployment of VICL, as determining which
demonstrations to use for constructing prompts is very costly.
paper, however, we find a counterintuitive phenomenon that most test samples
actually achieve optimal performance under the same prompts, and searching for
sample-level prompts only costs more time but results in completely identical
prompts. Therefore, we propose task-level prompting to reduce the cost of
searching for prompts during the inference stage and introduce two time-saving
yet effective task-level prompt search strategies. Extensive experimental
results show that our proposed method can identify near-optimal prompts and
reach the best VICL performance with a minimal cost that prior work has never
achieved.
|
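The task-level prompting idea above amounts to searching for one prompt on a small validation set and reusing it for every test sample. A hedged toy sketch (the scoring function and candidate prompts are hypothetical; the paper's two search strategies are more elaborate):

```python
def task_level_prompt(candidates, val_samples, score):
    """Pick a single prompt for the whole task by mean validation score."""
    return max(
        candidates,
        key=lambda p: sum(score(p, s) for s in val_samples) / len(val_samples),
    )

# hypothetical scores: prompt "B" is consistently better across samples
scores = {("A", 1): 0.4, ("A", 2): 0.5, ("B", 1): 0.7, ("B", 2): 0.9}
best = task_level_prompt(["A", "B"], [1, 2], lambda p, s: scores[(p, s)])
print(best)  # B
```

Once chosen, the same prompt serves every test sample, avoiding the per-sample search cost the abstract identifies.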
2501.08847
|
Automatic tuning of communication protocols for vehicular ad hoc
networks using metaheuristics
|
cs.NE cs.AI cs.NI
|
The emerging field of vehicular ad hoc networks (VANETs) deals with a set of
communicating vehicles which are able to spontaneously interconnect without any
pre-existing infrastructure. In such kind of networks, it is crucial to make an
optimal configuration of the communication protocols previously to the final
network deployment. This way, a human designer can obtain an optimal QoS of the
network beforehand. The problem we consider in this work lies in configuring
the File Transfer protocol Configuration (FTC) with the aim of optimizing the
transmission time, the number of lost packets, and the amount of data
transferred in realistic VANET scenarios. We face the FTC with five
representative state-of-the-art optimization techniques and compare their
performance. These algorithms are: Particle Swarm Optimization (PSO),
Differential Evolution (DE), Genetic Algorithm (GA), Evolutionary Strategy
(ES), and Simulated Annealing (SA). For our tests, two typical environment
instances of VANETs for Urban and Highway scenarios have been defined. The
experiments using ns-2 (a well-known realistic VANET simulator) reveal that
PSO outperforms all the compared algorithms for both studied VANET instances.
|
2501.08848
|
RouteNet-Gauss: Hardware-Enhanced Network Modeling with Machine Learning
|
cs.NI cs.AI cs.LG
|
Network simulation is pivotal in network modeling, assisting with tasks
ranging from capacity planning to performance estimation. Traditional
approaches such as Discrete Event Simulation (DES) face limitations in terms of
computational cost and accuracy. This paper introduces RouteNet-Gauss, a novel
integration of a testbed network with a Machine Learning (ML) model to address
these challenges. By using the testbed as a hardware accelerator,
RouteNet-Gauss generates training datasets rapidly and simulates network
scenarios with high fidelity to real-world conditions. Experimental results
show that RouteNet-Gauss significantly reduces prediction errors by up to 95%
and achieves a 488x speedup in inference time compared to state-of-the-art
DES-based methods. RouteNet-Gauss's modular architecture is dynamically
constructed based on the specific characteristics of the network scenario, such
as topology and routing. This enables it to understand and generalize to
different network configurations beyond those seen during training, including
networks up to 10x larger. Additionally, it supports Temporal Aggregated
Performance Estimation (TAPE), providing configurable temporal granularity and
maintaining high accuracy in flow performance metrics. This approach shows
promise in improving both simulation efficiency and accuracy, offering a
valuable tool for network operators.
|
2501.08850
|
Graph Counterfactual Explainable AI via Latent Space Traversal
|
cs.LG cs.AI stat.ML
|
Explaining the predictions of a deep neural network is a nontrivial task, yet
high-quality explanations for predictions are often a prerequisite for
practitioners to trust these models. Counterfactual explanations aim to explain
predictions by finding the "nearest" in-distribution alternative input whose
prediction changes in a pre-specified way. However, it remains an open question
how to define this nearest alternative input, whose solution depends on both
the domain (e.g. images, graphs, tabular data, etc.) and the specific
application considered. For graphs, this problem is complicated i) by their
discrete nature, as opposed to the continuous nature of state-of-the-art graph
classifiers; and ii) by the node permutation group acting on the graphs. We
propose a method to generate counterfactual explanations for any differentiable
black-box graph classifier, utilizing a case-specific permutation equivariant
graph variational autoencoder. We generate counterfactual explanations in a
continuous fashion by traversing the latent space of the autoencoder across the
classification boundary of the classifier, allowing for seamless integration of
discrete graph structure and continuous graph attributes. We empirically
validate the approach on three graph datasets, showing that our model is
consistently high-performing and more robust than the baselines.
|
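The latent-space traversal above can be illustrated in one dimension: step a latent code along the classifier's gradient until the decision boundary is crossed. A toy sketch with stand-in decoder and classifier (both are assumptions made here for illustration; the paper uses a permutation-equivariant graph VAE and a graph classifier):

```python
import math

def decode(z):                 # toy decoder: latent -> feature
    return 2.0 * z - 1.0

def classify(x):               # toy differentiable classifier: P(class 1)
    return 1.0 / (1.0 + math.exp(-4.0 * x))

def counterfactual(z0, boundary=0.5, lr=0.05, steps=200):
    """Traverse the latent space by gradient ascent on P(class 1)
    until the classification boundary is crossed."""
    z = z0
    for _ in range(steps):
        p = classify(decode(z))
        if p >= boundary:
            break
        # analytic gradient: dp/dz = 4*p*(1-p) * decode'(z), with decode'(z) = 2
        z += lr * 4.0 * p * (1.0 - p) * 2.0
    return z, classify(decode(z))

z_cf, p_cf = counterfactual(z0=0.0)   # z0 starts firmly in class 0
print(p_cf >= 0.5)  # True: the traversal crossed the boundary
```

Because the walk happens in the continuous latent space, the discrete graph structure is only materialized by decoding the final latent, which is the seamless-integration point the abstract emphasizes.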
2501.08851
|
Digital Phenotyping for Adolescent Mental Health: A Feasibility Study
Employing Machine Learning to Predict Mental Health Risk From Active and
Passive Smartphone Data
|
cs.LG cs.AI
|
Background: Adolescents are particularly vulnerable to mental disorders, with
over 75% of cases manifesting before the age of 25. Research indicates that
only 18 to 34% of young people experiencing high levels of depression or
anxiety symptoms seek support. Digital tools leveraging smartphones offer
scalable and early intervention opportunities. Objective: Using a novel machine
learning framework, this study evaluated the feasibility of integrating active
and passive smartphone data to predict mental disorders in non-clinical
adolescents. Specifically, we investigated the utility of the Mindcraft app in
predicting risks for internalising and externalising disorders, eating
disorders, insomnia and suicidal ideation. Methods: Participants (N=103; mean
age 16.1 years) were recruited from three London schools. Participants
completed the Strengths and Difficulties Questionnaire, the Eating Disorders-15
Questionnaire, the Sleep Condition Indicator Questionnaire, and indicated the
presence/absence of suicidal ideation. They used the Mindcraft app for 14 days,
contributing active data via self-reports and passive data from smartphone
sensors. A contrastive pretraining phase was applied to enhance user-specific
feature stability, followed by supervised fine-tuning. The model evaluation
employed leave-one-subject-out cross-validation using balanced accuracy as the
primary metric. Results: The integration of active and passive data achieved
superior performance compared to individual data sources, with mean balanced
accuracies of 0.71 for SDQ-High risk, 0.67 for insomnia, 0.77 for suicidal
ideation and 0.70 for eating disorders. The contrastive learning framework
stabilised daily behavioural representations, enhancing predictive robustness.
This study demonstrates the potential of integrating active and passive
smartphone data with advanced machine-learning techniques for predicting mental
health risks.
|
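The study above uses balanced accuracy under leave-one-subject-out cross-validation. Balanced accuracy is the mean of per-class recalls, which guards against inflated scores on imbalanced labels; a minimal binary-case sketch (the labels below are hypothetical, not from the study):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

# hypothetical imbalanced labels: 2 positives, 4 negatives
y_true = [1, 1, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0]
print(balanced_accuracy(y_true, y_pred))  # 0.625
```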
2501.08853
|
Achieving Stability and Optimality: Control Strategy for a Wind Turbine
Supplying an Electrolyzer in the Islanded Storage-less Microgrid
|
eess.SY cs.SY
|
Wind power generation supplying electrolyzers in islanded microgrids is an
essential technical pathway for green hydrogen production, attracting growing
attention in the transition towards net zero carbon emissions. Both academia
and industry widely recognize that islanded AC microgrids normally rely on
battery energy storage systems (BESSs) for grid-forming functions. However, the
high cost of BESS significantly increases the levelized cost of hydrogen
(LCOH), compromising economic feasibility. To address this challenge and reduce
the LCOH, this paper focuses on a wind turbine (WT) supplying an electrolyzer
in a storage-less microgrid and identifies a unique characteristic that
challenges the conventional understanding of this microgrid: active power is
coupled with microgrid voltage rather than frequency, the latter being entirely
decoupled from active power balance. Based on this unique characteristic, this
paper develops a new control strategy that maintains power balance, stabilizes
the voltage and frequency, and maximizes hydrogen production. The effectiveness
of the control strategy is validated through case studies conducted in
Matlab/Simulink, especially its capability to maintain stability while
maximizing hydrogen production under various conditions.
|
2501.08861
|
Generative Planning with 3D-vision Language Pre-training for End-to-End
Autonomous Driving
|
cs.CV
|
Autonomous driving is a challenging task that requires perceiving and
understanding the surrounding environment for safe trajectory planning. While
existing vision-based end-to-end models have achieved promising results, these
methods still face challenges in visual understanding, decision reasoning, and
scene generalization. To address these issues, a generative
planning with 3D-vision language pre-training model named GPVL is proposed for
end-to-end autonomous driving. The proposed paradigm has two significant
aspects. On one hand, a 3D-vision language pre-training module is designed to
bridge the gap between visual perception and linguistic understanding in the
bird's eye view. On the other hand, a cross-modal language model is introduced
to generate holistic driving decisions and fine-grained trajectories with
perception and navigation information in an auto-regressive manner. Experiments
on the challenging nuScenes dataset demonstrate that the proposed scheme
achieves excellent performance compared with state-of-the-art methods.
Besides, the proposed GPVL presents strong generalization ability and real-time
potential when handling high-level commands in various scenarios. It is
believed that the effective, robust and efficient performance of GPVL is
crucial for the practical application of future autonomous driving systems.
Code is available at https://github.com/ltp1995/GPVL
|
2501.08862
|
ARMOR: Shielding Unlearnable Examples against Data Augmentation
|
cs.LG cs.AI cs.CR
|
Private data, when published online, may be collected by unauthorized parties
to train deep neural networks (DNNs). To protect privacy, defensive noises can
be added to original samples to degrade their learnability by DNNs. Recently,
unlearnable examples are proposed to minimize the training loss such that the
model learns almost nothing. However, raw data are often pre-processed before
being used for training, which may restore the private information of protected
data. In this paper, we reveal the data privacy violation induced by data
augmentation, a commonly used data pre-processing technique to improve model
generalization capability; to the best of our knowledge, this work is the first
of its kind. We demonstrate that data augmentation can significantly raise the
accuracy of the model trained on unlearnable examples from 21.3% to 66.1%. To
address this issue, we propose a defense framework, dubbed ARMOR, to protect
data privacy from potential breaches of data augmentation. To overcome the
difficulty of having no access to the model training process, we design a
non-local module-assisted surrogate model that better captures the effect of
data augmentation. In addition, we design a surrogate augmentation selection
strategy that maximizes distribution alignment between augmented and
non-augmented samples, to choose the optimal augmentation strategy for each
class. We also use a dynamic step size adjustment algorithm to enhance the
defensive noise generation process. Extensive experiments are conducted on 4
datasets and 5 data augmentation methods to verify the performance of ARMOR.
Comparisons with 6 state-of-the-art defense methods have demonstrated that
ARMOR can preserve the unlearnability of protected private data under data
augmentation. ARMOR reduces the test accuracy of the model trained on augmented
protected samples by as much as 60% more than baselines.
|
2501.08865
|
The geometry of moral decision making
|
cs.IT math.IT physics.data-an
|
We show how (resource) bounded rationality can be understood as the interplay
of two fundamental moral principles: deontology and utilitarianism. In
particular, we interpret deontology as a regularisation function in an optimal
control problem, coupled with a free parameter, the inverse temperature, to
shield the individual from expected utility. We discuss the information
geometry of bounded rationality and aspects of its relation to rate distortion
theory. A central role is played by Markov kernels and regular conditional
probability, which are also studied geometrically. A gradient equation is used
to determine the utility expansion path. Finally, the framework is applied to
the analysis of a disutility model of the restriction of constitutional rights
that we derive from legal doctrine. The methods discussed here are also
relevant to the theory of autonomous agents.
|
2501.08868
|
Processing and Analyzing Real-World Driving Data: Insights on Trips,
Scenarios, and Human Driving Behaviors
|
eess.SY cs.HC cs.SY
|
Analyzing large volumes of real-world driving data is essential for providing
meaningful and reliable insights into real-world trips, scenarios, and human
driving behaviors. To this end, we developed a multi-level data processing
approach that adds new information, segments data, and extracts desired
parameters. Leveraging a confidential but extensive dataset (over 1 million
km), this approach leads to three levels of in-depth analysis: trip, scenario,
and driving. The trip-level analysis explains representative properties
observed in real-world trips, while the scenario-level analysis focuses on
scenario conditions resulting from road events that reduce vehicle speed. The
driving-level analysis identifies the cause of driving regimes for specific
situations and characterizes typical human driving behaviors. Such analyses can
support the design of both trip- and scenario-based tests, the modeling of
human drivers, and the establishment of guidelines for connected and automated
vehicles.
|
2501.08869
|
Silent Abandonment in Text-Based Contact Centers: Identifying,
Quantifying, and Mitigating its Operational Impacts
|
cs.SI cs.AI
|
In the quest to improve services, companies offer customers the option to
interact with agents via texting. Such contact centers face unique challenges
compared to traditional call centers, as measuring customer experience proxies
like abandonment and patience involves uncertainty. A key source of this
uncertainty is silent abandonment, where customers leave without notifying the
system, wasting agent time and leaving their status unclear. Silent abandonment
also obscures whether a customer was served or left. Our goals are to measure
the magnitude of silent abandonment and mitigate its effects. Classification
models show that 3%-70% of customers across 17 companies abandon silently. In
one study, 71.3% of abandoning customers did so silently, reducing agent
efficiency by 3.2% and system capacity by 15.3%, incurring $5,457 in annual
costs per agent. We develop an expectation-maximization (EM) algorithm to
estimate customer patience under uncertainty and identify influencing
covariates. We find that companies should use classification models to estimate
abandonment scope and our EM algorithm to assess patience. We suggest
strategies to operationally mitigate the impact of silent abandonment by
predicting suspected silent-abandonment behavior or changing service design.
Specifically, we show that while allowing customers to write while waiting in
the queue creates a missing data challenge, it also significantly increases
patience and reduces service time, leading to reduced abandonment and lower
staffing requirements.
|
2501.08871
|
Joint Detection and Decoding: A Graph Neural Network Approach
|
cs.IT math.IT
|
Narrowing the performance gap between optimal and feasible detection in
inter-symbol interference (ISI) channels, this paper proposes to use graph
neural networks (GNNs) for detection that can also be used to perform joint
detection and decoding (JDD). For detection, the GNN is built upon the factor
graph representations of the channel, while for JDD, the factor graph is
expanded by the Tanner graph of the parity-check matrix (PCM) of the channel
code, sharing the variable nodes (VNs). Particularly advantageous properties of
the GNN are a) its robustness against cycles in the factor graphs, which are the
main problem for sum-product algorithm (SPA)-based detection, and b) its
robustness against channel state information (CSI) uncertainty at the receiver.
Additionally, we propose using an input embedding resulting in a GNN
independent of the channel impulse response (CIR). Consequently, a fully deep
learning-based receiver enables joint optimization instead of individual
optimization of the components, so-called end-to-end learning. Furthermore, we
propose a parallel flooding schedule that also reduces the latency, which turns
out to improve the error correcting performance. The proposed approach is
analyzed and compared to state-of-the-art baselines for different modulations
and codes in terms of error correcting capability and latency. The gain
compared to SPA-based detection might be explained with improved messages
between nodes and adaptive damping of messages. For a higher-order modulation
in a high-rate turbo detection and decoding (TDD) scenario, the GNN shows a
gain of 6.25 dB over the best feasible non-neural baseline, which is, at first
glance, surprisingly high.
|
2501.08878
|
Incrementally Learning Multiple Diverse Data Domains via Multi-Source
Dynamic Expansion Model
|
cs.LG cs.AI
|
Continual Learning seeks to develop a model capable of incrementally
assimilating new information while retaining prior knowledge. However, current
research predominantly addresses a straightforward learning context, wherein
all data samples originate from a singular data domain. This paper shifts focus
to a more complex and realistic learning environment, characterized by data
samples sourced from multiple distinct domains. We tackle this intricate
learning challenge by introducing a novel methodology, termed the Multi-Source
Dynamic Expansion Model (MSDEM), which leverages various pre-trained models as
backbones and progressively establishes new experts based on them to adapt to
emerging tasks. Additionally, we propose an innovative dynamic expandable
attention mechanism designed to selectively harness knowledge from multiple
backbones, thereby accelerating the new task learning. Moreover, we introduce a
dynamic graph weight router that strategically reuses all previously acquired
parameters and representations for new task learning, maximizing the positive
knowledge transfer effect, which further improves generalization performance.
We conduct a comprehensive series of experiments, and the empirical findings
indicate that our proposed approach achieves state-of-the-art performance.
|
2501.08880
|
SLC$^2$-SLAM: Semantic-guided Loop Closure with Shared Latent Code for
NeRF SLAM
|
cs.RO
|
Targeting the notorious cumulative drift errors in NeRF SLAM, we propose a
Semantic-guided Loop Closure with Shared Latent Code, dubbed SLC$^2$-SLAM.
Especially, we argue that latent codes stored in many NeRF SLAM systems are not
fully exploited, as they are only used for better reconstruction. In this
paper, we propose a simple yet effective way to detect potential loops using
the same latent codes as local features. To further improve the loop detection
performance, we use the semantic information, which is also decoded from the
same latent codes, to guide the aggregation of local features. Finally, with the
potential loops detected, we close them with a graph optimization followed by
bundle adjustment to refine both the estimated poses and the reconstructed
scene. To evaluate the performance of our SLC$^2$-SLAM, we conduct extensive
experiments on Replica and ScanNet datasets. Our proposed semantic-guided loop
closure significantly outperforms the pre-trained NetVLAD and ORB combined with
Bag-of-Words, which are used in all other NeRF SLAM systems with loop closure.
As a result, our SLC$^2$-SLAM also demonstrates better tracking and reconstruction
performance, especially in larger scenes with more loops, like ScanNet.
|
2501.08883
|
Increasing Batch Size Improves Convergence of Stochastic Gradient
Descent with Momentum
|
cs.LG
|
Stochastic gradient descent with momentum (SGDM), which is defined by adding
a momentum term to SGD, has been well studied in both theory and practice.
Theoretical results have shown that the settings of the learning rate and
momentum weight affect the convergence of SGDM. Meanwhile, practical results
have shown that the performance of SGDM strongly depends on the batch size
setting. In this paper, we focus on mini-batch SGDM with constant
learning rate and constant momentum weight, which is frequently used to train
deep neural networks in practice. The contribution of this paper is showing
theoretically that using a constant batch size does not always minimize the
expectation of the full gradient norm of the empirical loss in training a deep
neural network, whereas using an increasing batch size definitely minimizes it,
that is, increasing batch size improves convergence of mini-batch SGDM. We also
provide numerical results supporting our analyses, indicating specifically that
mini-batch SGDM with an increasing batch size converges to stationary points
faster than with a constant batch size. Python implementations of the
optimizers used in the numerical experiments are available at
https://anonymous.4open.science/r/momentum-increasing-batch-size-888C/.
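The increasing-batch-size scheme can be sketched in a few lines (a minimal
illustration on a scalar quadratic loss; the geometric schedule and all names
below are assumptions for demonstration, not the paper's exact recipe):

```python
import numpy as np

def sgdm_increasing_batch(grad_fn, data, w0, lr=0.1, beta=0.9,
                          b0=8, growth=2.0, epochs=4, seed=0):
    """Mini-batch SGDM with a geometrically increasing batch size.

    Constant learning rate `lr` and momentum weight `beta`, as in the
    setting studied; the geometric schedule here is an illustrative
    assumption, not the paper's exact prescription.
    """
    rng = np.random.default_rng(seed)
    w = float(w0)
    v = 0.0                              # momentum buffer
    b = b0
    for _ in range(epochs):
        idx = rng.permutation(len(data))
        for start in range(0, len(data), int(b)):
            batch = data[idx[start:start + int(b)]]
            g = grad_fn(w, batch)        # stochastic gradient on this batch
            v = beta * v + g
            w = w - lr * v               # constant step size throughout
        b = min(b * growth, len(data))   # grow the batch each epoch
    return w
```

As the batch grows, the gradient-noise variance shrinks, which is the informal
intuition behind the convergence result stated above.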
|
2501.08884
|
Improved Compression Bounds for Scenario Decision Making
|
math.OC cs.LG
|
Scenario decision making offers a flexible way of making decisions in an
uncertain environment while obtaining probabilistic guarantees on the risk of
failure of the decision. The idea of this approach is to draw samples of the
uncertainty and make a decision based on the samples, called "scenarios". The
probabilistic guarantees take the form of a bound on the probability of
sampling a set of scenarios that will lead to a decision whose risk of failure
is above a given maximum tolerance. This bound can be expressed as a function
of the number of sampled scenarios, the maximum tolerated risk, and some
intrinsic property of the problem called the "compression size". Several such
bounds have been proposed in the literature under various assumptions on the
problem. We propose new bounds that improve upon the existing ones without
requiring stronger assumptions on the problem.
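For illustration, a classical bound of the kind the paper improves upon (an
existing scenario-approach expression from the literature, not the paper's new
bound) can be evaluated directly:

```python
from math import comb

def scenario_violation_bound(N, d, eps):
    """Classical scenario-approach bound (existing in the literature;
    not the improved bound of this paper): probability that N sampled
    scenarios yield a decision whose violation risk exceeds eps, given
    compression size d. Equals the Binomial(N, eps) CDF at d - 1."""
    return sum(comb(N, i) * eps**i * (1 - eps)**(N - i) for i in range(d))
```

The bound is the CDF of a Binomial(N, eps) variable at d - 1, so for a fixed
compression size d it tightens as the number of sampled scenarios N grows.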
|
2501.08885
|
Feature-based One-For-All: A Universal Framework for Heterogeneous
Knowledge Distillation
|
cs.CV
|
Knowledge distillation (KD) involves transferring knowledge from a
pre-trained heavy teacher model to a lighter student model, thereby reducing
the inference cost while maintaining comparable effectiveness. Prior KD
techniques typically assume homogeneity between the teacher and student models.
However, as technology advances, a wide variety of architectures have emerged,
ranging from initial Convolutional Neural Networks (CNNs) to Vision
Transformers (ViTs), and Multi-Layer Perceptrons (MLPs). Consequently,
developing a universal KD framework compatible with any architecture has become
an important research topic. In this paper, we introduce a feature-based
one-for-all (FOFA) KD framework to enable feature distillation across diverse
architectures. Our framework comprises two key components. First, we design
prompt tuning blocks that incorporate student feedback, allowing teacher
features to adapt to the student model's learning process. Second, we propose
region-aware attention to mitigate the view mismatch problem between
heterogeneous architectures. By leveraging these two modules, effective
distillation of intermediate features can be achieved across heterogeneous
architectures. Extensive experiments on CIFAR, ImageNet, and COCO demonstrate
the superiority of the proposed method.
|
2501.08887
|
PAC Learnability of Scenario Decision-Making Algorithms: Necessary and
Sufficient Conditions
|
cs.LG math.OC
|
We study the PAC property of scenario decision-making algorithms, that is,
the ability to make a decision that has an arbitrarily low risk of violating an
unknown safety constraint, provided sufficiently many realizations (called
scenarios) of the safety constraint are sampled. Sufficient conditions for
scenario decision-making algorithms to be PAC are available in the literature,
such as finiteness of the VC dimension of its associated classifier and
existence of a compression scheme. We study the question of whether these
sufficient conditions are also necessary. We show with counterexamples that
this is not the case in general. This contrasts with binary classification
learning, for which the analogous conditions are sufficient and necessary.
Popular scenario decision-making algorithms, such as scenario optimization,
enjoy additional properties, such as stability and consistency. We show that
even under these additional assumptions the above conclusions hold. Finally, we
derive a necessary condition for scenario decision-making algorithms to be PAC,
inspired by the VC dimension and the so-called no-free-lunch theorem.
|
2501.08888
|
A Partial Initialization Strategy to Mitigate the Overfitting Problem in
CATE Estimation with Hidden Confounding
|
cs.LG
|
Estimating the conditional average treatment effect (CATE) from observational
data plays a crucial role in areas such as e-commerce, healthcare, and
economics. Existing studies mainly rely on the strong ignorability assumption
that there are no hidden confounders, whose existence cannot be tested from
observational data and can invalidate any causal conclusion. In contrast, data
collected from randomized controlled trials (RCT) do not suffer from
confounding but are usually limited by a small sample size. To avoid
overfitting caused by the small-scale RCT data, we propose a novel two-stage
pretraining-finetuning (TSPF) framework with a partial parameter initialization
strategy to estimate the CATE in the presence of hidden confounding. In the
first stage, a foundational representation of covariates is trained to estimate
counterfactual outcomes through large-scale observational data. In the second
stage, we propose to train an augmented representation of the covariates, which
is concatenated with the foundational representation obtained in the first
stage to adjust for the hidden confounding. Rather than training a separate
network from scratch, part of the prediction heads are initialized from the
first stage. The superiority of our approach is validated on two datasets with
extensive experiments.
|
2501.08889
|
Karatsuba Matrix Multiplication and its Efficient Custom Hardware
Implementations
|
cs.AR cs.AI cs.PF
|
While the Karatsuba algorithm reduces the complexity of large integer
multiplication, the extra additions required diminish its benefits for smaller
integers of more commonly-used bitwidths. In this work, we propose the
extension of the scalar Karatsuba multiplication algorithm to matrix
multiplication, showing how this maintains the reduction in multiplication
complexity of the original Karatsuba algorithm while reducing the complexity of
the extra additions. Furthermore, we propose new matrix multiplication hardware
architectures for efficiently exploiting this extension of the Karatsuba
algorithm in custom hardware. We show that the proposed algorithm and hardware
architectures can provide real area or execution time improvements for integer
matrix multiplication compared to scalar Karatsuba or conventional matrix
multiplication algorithms, while also supporting implementation through proven
systolic array and conventional multiplier architectures at the core. We
provide a complexity analysis of the algorithm and architectures and evaluate
the proposed designs both in isolation and in an end-to-end deep learning
accelerator system compared to baseline designs and prior state-of-the-art
works implemented on the same type of compute platform, demonstrating their
ability to increase the performance-per-area of matrix multiplication hardware.
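The matrix-level extension can be sketched in software (a minimal illustration,
not the proposed hardware architecture; the split point k is an assumed
parameter): because matrix multiplication is bilinear, the scalar Karatsuba
identity lifts directly to matrices whose integer entries are split into high
and low halves, using three block products instead of four.

```python
import numpy as np

def karatsuba_matmul(A, B, k):
    """Integer matrix product A @ B via a Karatsuba-style split.

    Entries are split at bit position k: A = Ah * 2^k + Al. Since
    matrix multiplication is bilinear, the scalar Karatsuba identity
    lifts to block form, computing three matrix products instead of
    the four needed by the schoolbook split.
    """
    mask = (1 << k) - 1
    Ah, Al = A >> k, A & mask
    Bh, Bl = B >> k, B & mask
    P1 = Ah @ Bh                    # high x high
    P2 = Al @ Bl                    # low x low
    P3 = (Ah + Al) @ (Bh + Bl)      # combined term
    # P3 - P1 - P2 == Ah @ Bl + Al @ Bh (the cross terms)
    return (P1 << (2 * k)) + ((P3 - P1 - P2) << k) + P2
```

The savings grow with the cost of elementwise multiplication relative to
addition, which is what the proposed hardware architectures exploit.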
|
2501.08896
|
Parallel Query Processing with Heterogeneous Machines
|
cs.DB
|
We study the problem of computing a full Conjunctive Query in parallel using
$p$ heterogeneous machines. Our computational model is similar to the MPC
model, but each machine has its own cost function mapping from the number of
bits it receives to a cost. An optimal algorithm should minimize the maximum
cost across all machines. We consider algorithms over a single communication
round and give a lower bound and matching upper bound for databases where each
relation has the same cardinality. We do this both for linear cost functions,
as in previous work, and for more general cost functions. For databases
with relations of different cardinalities, we also find a lower bound, and give
matching upper bounds for specific queries like the cartesian product, the
join, the star query, and the triangle query. Our approach is inspired by the
HyperCube algorithm, but there are additional challenges involved when machines
have heterogeneous cost functions.
|
2501.08897
|
Leveraging Large Language Models as Knowledge-Driven Agents for Reliable
Retrosynthesis Planning
|
cs.AI
|
Identifying reliable synthesis pathways in materials chemistry is a complex
task, particularly in polymer science, due to the intricate and often
non-unique nomenclature of macromolecules. To address this challenge, we
propose an agent system that integrates large language models (LLMs) and
knowledge graphs (KGs). By leveraging LLMs' powerful capabilities for
extracting and recognizing chemical substance names, and storing the extracted
data in a structured knowledge graph, our system fully automates the retrieval
of relevant literature, extraction of reaction data, database querying,
construction of retrosynthetic pathway trees, further expansion through the
retrieval of additional literature, and recommendation of optimal reaction
pathways. A novel Multi-branched Reaction Pathway Search (MBRPS) algorithm
enables the exploration of all pathways, with a particular focus on
multi-branched ones, helping LLMs overcome weak reasoning in multi-branched
paths. This work represents the first attempt to develop an LLM-powered, fully
automated retrosynthesis planning agent tailored specifically to
macromolecules. Applied to polyimide synthesis, our new approach constructs a
retrosynthetic pathway tree with hundreds of pathways and recommends optimized
routes, including both known and novel pathways, demonstrating its
effectiveness and potential for broader applications.
|
2501.08900
|
Enhanced Multi-Scale Cross-Attention for Person Image Generation
|
cs.CV
|
In this paper, we propose a novel cross-attention-based generative
adversarial network (GAN) for the challenging person image generation task.
Cross-attention is a novel and intuitive multi-modal fusion method in which an
attention/correlation matrix is calculated between two feature maps of
different modalities. Specifically, we propose the novel XingGAN (or
CrossingGAN), which consists of two generation branches that capture the
person's appearance and shape, respectively. Moreover, we propose two novel
cross-attention blocks to effectively transfer and update the person's shape
and appearance embeddings for mutual improvement. This has not been considered
by any other existing GAN-based image generation work. To further learn the
long-range correlations between different person poses at different scales and
sub-regions, we propose two novel multi-scale cross-attention blocks. To tackle
the issue of independent correlation computations within the cross-attention
mechanism leading to noisy and ambiguous attention weights, which hinder
performance improvements, we propose a module called enhanced attention (EA).
Lastly, we introduce a novel densely connected co-attention module to fuse
appearance and shape features at different stages effectively. Extensive
experiments on two public datasets demonstrate that the proposed method
outperforms current GAN-based methods and performs on par with diffusion-based
methods. However, our method is significantly faster than diffusion-based
methods in both training and inference.
|
2501.08902
|
Multi-View Transformers for Airway-To-Lung Ratio Inference on Cardiac CT
Scans: The C4R Study
|
eess.IV cs.CV cs.LG
|
The ratio of airway tree lumen to lung size (ALR), assessed at full
inspiration on high resolution full-lung computed tomography (CT), is a major
risk factor for chronic obstructive pulmonary disease (COPD). There is growing
interest to infer ALR from cardiac CT images, which are widely available in
epidemiological cohorts, to investigate the relationship of ALR to severe
COVID-19 and post-acute sequelae of SARS-CoV-2 infection (PASC). Previously,
cardiac scans included approximately 2/3 of the total lung volume with 5-6x
greater slice thickness than high-resolution (HR) full-lung (FL) CT. In this
study, we present a novel attention-based Multi-view Swin Transformer to infer
FL ALR values from segmented cardiac CT scans. For the supervised training we
exploit paired full-lung and cardiac CTs acquired in the Multi-Ethnic Study of
Atherosclerosis (MESA). Our network significantly outperforms a proxy direct
ALR inference on segmented cardiac CT scans and achieves accuracy and
reproducibility comparable with the scan-rescan reproducibility of the FL ALR
ground truth.
|
2501.08905
|
Computing Game Symmetries and Equilibria That Respect Them
|
cs.GT cs.AI cs.CC cs.MA
|
Strategic interactions can be represented more concisely, and analyzed and
solved more efficiently, if we are aware of the symmetries within the
multiagent system. Symmetries also have conceptual implications, for example
for equilibrium selection. We study the computational complexity of identifying
and using symmetries. Using the classical framework of normal-form games, we
consider game symmetries that can be across some or all players and/or actions.
We find a strong connection between game symmetries and graph automorphisms,
yielding graph automorphism and graph isomorphism completeness results for
characterizing the symmetries present in a game. On the other hand, we also
show that the problem becomes polynomial-time solvable when we restrict the
consideration of actions in one of two ways.
Next, we investigate when exactly game symmetries can be successfully
leveraged for Nash equilibrium computation. We show that finding a Nash
equilibrium that respects a given set of symmetries is PPAD- and CLS-complete
in general-sum and team games respectively -- that is, exactly as hard as
Brouwer fixed point and gradient descent problems. Finally, we present
polynomial-time methods for the special cases where we are aware of a vast
number of symmetries, or where the game is two-player zero-sum and we do not
even know the symmetries.
|
2501.08907
|
Projection Implicit Q-Learning with Support Constraint for Offline
Reinforcement Learning
|
cs.LG cs.AI
|
Offline Reinforcement Learning (RL) faces a critical challenge of
extrapolation errors caused by out-of-distribution (OOD) actions. Implicit
Q-Learning (IQL) algorithm employs expectile regression to achieve in-sample
learning, effectively mitigating the risks associated with OOD actions.
However, the fixed hyperparameter in policy evaluation and the density-based
policy improvement method limit its overall efficiency. In this paper, we propose
Proj-IQL, a projective IQL algorithm enhanced with the support constraint. In
the policy evaluation phase, Proj-IQL generalizes the one-step approach to a
multi-step approach through vector projection, while maintaining in-sample
learning and expectile regression framework. In the policy improvement phase,
Proj-IQL introduces a support constraint that is more aligned with the policy
evaluation approach. Furthermore, we theoretically demonstrate that Proj-IQL
guarantees monotonic policy improvement and enjoys a progressively more
rigorous criterion for superior actions. Empirical results demonstrate that
Proj-IQL achieves state-of-the-art performance on D4RL benchmarks, especially
in challenging navigation domains.
|
2501.08908
|
When Uncertainty Leads to Unsafety: Empirical Insights into the Role of
Uncertainty in Unmanned Aerial Vehicle Safety
|
cs.SE cs.RO
|
Despite the recent developments in obstacle avoidance and other safety
features, autonomous Unmanned Aerial Vehicles (UAVs) continue to face safety
challenges. No previous work investigated the relationship between the
behavioral uncertainty of a UAV and the unsafety of its flight. By quantifying
uncertainty, it is possible to develop a predictor for unsafety, which acts as
a flight supervisor. We conducted a large-scale empirical investigation of
safety violations using PX4-Autopilot, an open-source UAV software platform.
Our dataset of over 5,000 simulated flights, created to challenge obstacle
avoidance, allowed us to explore the relation between uncertain UAV decisions
and safety violations: up to 89% of unsafe UAV states exhibit significant
decision uncertainty, and up to 74% of uncertain decisions lead to unsafe
states. Based on these findings, we implemented Superialist (Supervising
Autonomous Aerial Vehicles), a runtime uncertainty detector based on
autoencoders, the state-of-the-art technology for anomaly detection.
Superialist achieved high performance in detecting uncertain behaviors with up
to 96% precision and 93% recall. Despite the observed performance degradation
when using the same approach for predicting unsafety (up to 74% precision and
87% recall), Superialist enabled early prediction of unsafe states up to 50
seconds in advance.
|
2501.08910
|
Lights, Camera, Matching: The Role of Image Illumination in Fair Face
Recognition
|
cs.CV
|
Facial brightness is a key image quality factor impacting face recognition
accuracy differentials across demographic groups. In this work, we aim to
decrease the accuracy gap between the similarity score distributions for
Caucasian and African American female mated image pairs, as measured by d'
between distributions. To balance brightness across demographic groups, we
conduct three experiments, interpreting brightness in the face skin region
either as median pixel value or as the distribution of pixel values. Balancing
based on median brightness alone yields up to a 46.8% decrease in d', while
balancing based on brightness distribution yields up to a 57.6% decrease. In
all three cases, the similarity scores of the individual distributions improve,
with mean scores maximally improving 5.9% for Caucasian females and 3.7% for
African American females.
|
2501.08912
|
Empowering Agricultural Insights: RiceLeafBD -- A Novel Dataset and
Optimal Model Selection for Rice Leaf Disease Diagnosis through Transfer
Learning Technique
|
cs.CV
|
The number of people living in this agricultural nation of ours, which is
surrounded by lush greenery, is growing on a daily basis. As a result of this,
the level of arable land is decreasing, as well as residential houses and
industrial factories. The food crisis is becoming the main threat in the
upcoming days: on the one hand, the population is increasing, while on the
other, food crop production is decreasing due to the attack of diseases. Rice
is one of the most significant cultivated crops since
it provides food for more than half of the world's population. Bangladesh is
dependent on rice (Oryza sativa) as a vital crop for its agriculture, but it
faces a significant problem as a result of the ongoing decline in rice yield
brought on by common diseases. Early disease detection is the main difficulty
in rice crop cultivation. In this paper, we propose our own dataset, collected
from fields in Bangladesh, and apply deep learning and transfer learning models
to evaluate it. We elaborately
explain our dataset and also give direction for further research work to serve
society using this dataset. We applied a light CNN model and pre-trained
InceptionNet-V2, EfficientNet-V2, and MobileNet-V2 models, with the
EfficientNet-V2 model achieving the best performance of 91.5% in this work. The
results surpassed other models and even exceeded approaches considered part of
the state of the art. This study demonstrates that diseases affecting rice
leaves can be identified precisely and effectively using this unbiased dataset.
Based on the performance analysis of the different models, the proposed dataset
is a significant resource for research toward solutions that reduce rice leaf
disease.
|
2501.08913
|
GenAI Content Detection Task 3: Cross-Domain Machine-Generated Text
Detection Challenge
|
cs.CL cs.LG
|
Recently there have been many shared tasks targeting the detection of
generated text from Large Language Models (LLMs). However, these shared tasks
tend to focus either on cases where text is limited to one particular domain or
cases where text can be from many domains, some of which may not be seen during
test time. In this shared task, using the newly released RAID benchmark, we aim
to answer whether or not models can detect generated text from a large, yet
fixed, number of domains and LLMs, all of which are seen during training. Over
the course of three months, our task was attempted by 9 teams with 23 detector
submissions. We find that multiple participants were able to obtain accuracies
of over 99% on machine-generated text from RAID while maintaining a 5% False
Positive Rate -- suggesting that detectors are able to robustly detect text
from many domains and models simultaneously. We discuss potential
interpretations of this result and provide directions for future research.
|
2501.08916
|
Integrating Cybersecurity in Predictive Cost-Benefit Power Scheduling: A
DeepStack Model with Dynamic Defense Mechanism
|
eess.SY cs.SY
|
This paper introduces a novel, deep learning-based predictive model tailored
to address wind curtailment in contemporary power systems, while enhancing
cybersecurity measures through the implementation of a Dynamic Defense
Mechanism (DDM). The augmented BiLSTM architecture facilitates accurate
short-term predictions for wind power. In addition, a ConvGAN-driven step for
stochastic scenario generation and a hierarchical, multi-stage optimization
framework, which includes cases with and without Battery Energy Storage (BES),
significantly minimizes operational costs. The inclusion of DDM strategically
alters network reactances, thereby obfuscating the system's operational
parameters to deter cyber threats. This robust solution not only integrates
wind power more efficiently into power grids, leveraging BES potential to
improve the economic efficiency of the system, but also boosts the system's
cybersecurity. Validation using the Illinois 200-bus system
demonstrates the model's potential, achieving a 98% accuracy in forecasting and
substantial cost reductions of over 3.8%. The results underscore the dual
benefits of enhancing system reliability and security through advanced deep
learning architectures and the strategic application of cybersecurity measures.
|
2501.08918
|
Efficient Planning in Large-scale Systems Using Hierarchical Finite
State Machines
|
eess.SY cs.SY
|
We consider optimal planning in a large-scale system formalised as a
hierarchical finite state machine (HFSM). A planning algorithm is proposed
computing an optimal plan between any two states in the HFSM, consisting of two
steps: A pre-processing step that computes optimal exit costs of the machines
in the HFSM, with time complexity scaling with the number of machines; and a
query step that efficiently computes an optimal plan by removing irrelevant
subtrees of the HFSM using the optimal exit costs. The algorithm is
reconfigurable in the sense that changes in the HFSM are handled with ease,
where the pre-processing step recomputes only the optimal exit costs affected
by the change. The algorithm can also exploit compact representations that
group together identical machines in the HFSM, where the algorithm only needs
to compute the optimal exit costs for one of the identical machines within each
group, thereby avoiding unnecessary recomputations. We validate the algorithm on
large systems with millions of states and a robotic application. It is shown
that our approach outperforms Dijkstra's algorithm, Bidirectional Dijkstra and
Contraction Hierarchies.
|
2501.08922
|
Discovery of Spatter Constitutive Models in Additive Manufacturing Using
Machine Learning
|
cs.LG cs.AI
|
Additive manufacturing (AM) is a rapidly evolving technology that has
attracted applications across a wide range of fields due to its ability to
fabricate complex geometries. However, one of the key challenges in AM is
achieving consistent print quality. This inconsistency is often attributed to
uncontrolled melt pool dynamics, partly caused by spatter which can lead to
defects. Therefore, capturing and controlling the evolution of the melt pool is
crucial for enhancing process stability and part quality. In this study, we
developed a framework to support decision-making towards efficient AM process
operations, capable of facilitating quality control and minimizing defects via
machine learning (ML) and polynomial symbolic regression models. We implemented
experimentally validated computational tools, specifically for laser powder bed
fusion (LPBF) processes as a cost-effective approach to collect large datasets.
For a dataset consisting of 281 varying process conditions, parameters such as
melt pool dimensions (length, width, depth), melt pool geometry (area, volume),
and volume indicated as spatter were extracted. Using machine learning (ML) and
polynomial symbolic regression models, a high R2 of over 95 % was achieved in
predicting the melt pool dimensions and geometry features on both the training
and testing datasets, with either process conditions (power and velocity) or
melt pool dimensions as the model inputs. For the volume indicated as spatter,
the R2 improved after logarithmically transforming the model inputs, which were
either the process conditions or the melt pool dimensions.
Among the investigated ML models, the ExtraTree model achieved the highest R2
values of 96.7 % and 87.5 %.
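The benefit of log-transforming inputs before regression can be reproduced on synthetic data. The power-law relationship, coefficients, and noise below are invented stand-ins, and a plain least-squares fit replaces the paper's ML and symbolic-regression models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for (power, velocity) -> spatter volume: a multiplicative
# power-law relationship, which becomes linear after a log transform.
power = rng.uniform(100.0, 400.0, 200)
velocity = rng.uniform(0.2, 2.0, 200)
spatter = 1e-3 * power**1.5 / velocity**0.8 * rng.lognormal(0.0, 0.05, 200)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def fit_linear(X, y):
    """Least-squares fit with intercept; returns in-sample predictions."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

X_raw = np.column_stack([power, velocity])
r2_raw = r2(spatter, fit_linear(X_raw, spatter))

# Log-transforming inputs (and here also the target) linearises the power law.
r2_log = r2(np.log(spatter), fit_linear(np.log(X_raw), np.log(spatter)))
print(r2_raw, r2_log)  # the log-space fit explains far more of the variance
```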
|
2501.08924
|
Learning Joint Denoising, Demosaicing, and Compression from the Raw
Natural Image Noise Dataset
|
cs.CV eess.IV
|
This paper introduces the Raw Natural Image Noise Dataset (RawNIND), a
diverse collection of paired raw images designed to support the development of
denoising models that generalize across sensors, image development workflows,
and styles. Two denoising methods are proposed: one operates directly on raw
Bayer data, offering computational efficiency, while the other processes
linear RGB images for improved generalization to different sensors, with both
preserving flexibility for subsequent development. Both methods outperform
traditional approaches which rely on developed images. Additionally, the
integration of denoising and compression at the raw data level significantly
enhances rate-distortion performance and computational efficiency. These
findings suggest a paradigm shift toward raw data workflows for efficient and
flexible image processing.
|
2501.08925
|
Disentangling Exploration of Large Language Models by Optimal
Exploitation
|
cs.LG cs.AI cs.CL
|
Exploration is a crucial skill for self-improvement and open-ended
problem-solving. However, it remains unclear if large language models can
effectively explore the state-space within an unknown environment. This work
isolates exploration as the sole objective, tasking the agent with delivering
information that enhances future returns. Within this framework, we argue that
measuring agent returns is not sufficient for a fair evaluation and decompose
missing rewards into exploration and exploitation components based on the
optimal achievable return. Comprehensive experiments with various models reveal
that most models struggle to explore the state-space sufficiently, and that
weak exploration alone is insufficient. We observe a positive correlation between parameter count and
exploration performance, with larger models demonstrating superior
capabilities. Furthermore, we show that our decomposition provides insights
into differences in behaviors driven by prompt engineering, offering a valuable
tool for refining performance in exploratory tasks.
|
2501.08927
|
Continuous Approach to Phase (Norm) Retrieval Frames
|
math.FA cs.IR cs.NA math-ph math.MP math.NA physics.optics
|
This paper investigates the properties of continuous frames, with a
particular focus on phase retrieval and norm retrieval in the context of
Hilbert spaces. We introduce the concept of continuous near-Riesz bases and
prove their invariance under invertible operators. Some equivalent conditions
for phase and norm retrieval property of continuous frames are presented. We
study the stability of phase retrieval under perturbations. Furthermore, tensor
product frames for separable Hilbert spaces are studied, and we establish the
equivalence of phase retrieval and norm retrieval properties between components
and their tensor products.
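For readers unfamiliar with the terminology, the two retrieval properties have standard definitions, stated here for a continuous frame $\{\varphi_\omega\}_{\omega\in\Omega}$ of a Hilbert space $\mathcal{H}$ over a measure space $(\Omega,\mu)$; the paper's precise setting may differ slightly:

```latex
% Phase retrieval: measurement magnitudes determine the vector up to a
% unimodular scalar.
|\langle x,\varphi_\omega\rangle| = |\langle y,\varphi_\omega\rangle|
\ \text{for } \mu\text{-a.e. } \omega
\;\Longrightarrow\; x = \alpha y \ \text{for some scalar with } |\alpha| = 1.

% Norm retrieval: the same hypothesis implies only equality of norms,
\|x\| = \|y\|,
% so phase retrieval implies norm retrieval but not conversely.
```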
|
2501.08931
|
Visual WetlandBirds Dataset: Bird Species Identification and Behavior
Recognition in Videos
|
cs.CV cs.AI
|
The current biodiversity loss crisis makes animal monitoring a relevant field
of study. In light of this, data collected through monitoring can provide
essential insights, and information for decision-making aimed at preserving
global biodiversity. Despite the importance of such data, there is a notable
scarcity of datasets featuring videos of birds, and none of the existing
datasets offer detailed annotations of bird behaviors in video format. In
response to this gap, our study introduces the first fine-grained video dataset
specifically designed for bird behavior detection and species classification.
This dataset addresses the need for comprehensive bird video datasets and
provides detailed data on bird actions, facilitating the development of deep
learning models to recognize these behaviors, similar to the advancements made in human
action recognition. The proposed dataset comprises 178 videos recorded in
Spanish wetlands, capturing 13 different bird species performing 7 distinct
behavior classes. In addition, we present baseline results using
state-of-the-art models on two tasks: bird behavior recognition and species
classification.
|
2501.08933
|
Separation Assurance in Urban Air Mobility Systems using Shared
Scheduling Protocols
|
cs.MA
|
Ensuring safe separation between aircraft is a critical challenge in air
traffic management, particularly in urban air mobility (UAM) environments where
high traffic density and low altitudes require precise control. In these
environments, conflicts often arise at the intersections of flight corridors,
posing significant risks. We propose a tactical separation approach leveraging
shared scheduling protocols, originally designed for Ethernet networks and
operating systems, to coordinate access to these intersections. Using a
decentralized Markov decision process framework, the proposed approach enables
aircraft to autonomously adjust their speed and timing as they navigate these
critical areas, maintaining safe separation without a central controller. We
evaluate the effectiveness of this approach in simulated UAM scenarios,
demonstrating its ability to reduce separation violations to zero while
acknowledging trade-offs in flight times as traffic density increases.
Additionally, we explore the impact of non-compliant aircraft, showing that
while shared scheduling protocols can no longer guarantee safe separation, they
still provide significant improvements over systems without scheduling
protocols.
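A minimal flavour of intersection slot scheduling, not the paper's Ethernet-derived protocols or its decentralized MDP formulation, can be sketched as a greedy first-come-first-served assignment with a fixed separation requirement. The 30-second separation value is an assumption:

```python
MIN_SEP = 30.0  # required separation (seconds) at the intersection (assumed)

def schedule_crossings(desired_times, min_sep=MIN_SEP):
    """Greedy shared schedule: serve requests in order of desired crossing
    time, delaying each flight just enough to keep min_sep from the
    previously granted slot (realised via speed/timing adjustments)."""
    granted = []
    for t in sorted(desired_times):
        if granted and t < granted[-1] + min_sep:
            t = granted[-1] + min_sep
        granted.append(t)
    return granted

slots = schedule_crossings([0.0, 10.0, 12.0, 100.0])
print(slots)  # [0.0, 30.0, 60.0, 100.0]
```

Note how the two closely spaced requests (10 s and 12 s) are spread out, while the late arrival at 100 s is unaffected; a non-compliant aircraft would simply ignore its granted slot, which is why the paper's guarantees weaken in that case.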
|
2501.08941
|
A Reinforcement Learning Approach to Quiet and Safe UAM Traffic
Management
|
cs.MA cs.LG cs.RO
|
Urban air mobility (UAM) is a transformative system that operates various
small aerial vehicles in urban environments to reshape urban transportation.
However, integrating UAM into existing urban environments presents a variety of
complex challenges. Recent analyses of UAM's operational constraints highlight
aircraft noise and system safety as key hurdles to UAM system implementation.
Future UAM air traffic management schemes must ensure that the system is both
quiet and safe. We propose a multi-agent reinforcement learning approach to
manage UAM traffic, aiming at both vertical separation assurance and noise
mitigation. Through extensive training, the reinforcement learning agent learns
to balance the two primary objectives by employing altitude adjustments in a
multi-layer UAM network. The results reveal the tradeoffs among noise impact,
traffic congestion, and separation. Overall, our findings demonstrate the
potential of reinforcement learning in mitigating UAM's noise impact while
maintaining safe separation using altitude adjustments.
|
2501.08943
|
Neuromorphic Retina: An FPGA-based Emulator
|
eess.IV cs.NE
|
Implementing accurate models of the retina is a challenging task,
particularly in the context of creating visual prosthetics and devices.
Notwithstanding the presence of diverse artificial renditions of the retina,
the imperative task persists to pursue a more realistic model. In this work, we
are emulating a neuromorphic retina model on an FPGA. The key feature of this
model is its powerful adaptation to luminance and contrast, which allows it to
accurately emulate the sensitivity of the biological retina to changes in light
levels. Phasic and tonic cells are realized in the model in the simplest way
possible. Our FPGA implementation of the proposed biologically inspired digital
retina, incorporating a receptive field with a center-surround structure, is
reconfigurable and can support 128x128-pixel images at a frame rate of 200 fps.
It consumes 1720 slices, approximately 3.7k Look-Up Tables (LUTs), and
Flip-Flops (FFs) on the FPGA. This implementation provides a high-performance,
low-power, and small-area solution and could be a significant step forward in
the development of biologically plausible retinal prostheses with enhanced
information processing capabilities.
|
2501.08944
|
Physical AI Agents: Integrating Cognitive Intelligence with Real-World
Action
|
cs.MA
|
Vertical AI Agents are revolutionizing industries by delivering
domain-specific intelligence and tailored solutions. However, many sectors,
such as manufacturing, healthcare, and logistics, demand AI systems capable of
extending their intelligence into the physical world, interacting directly with
objects, environments, and dynamic conditions. This need has led to the
emergence of Physical AI Agents--systems that integrate cognitive reasoning,
powered by specialized LLMs, with precise physical actions to perform
real-world tasks.
This work introduces Physical AI Agents as an evolution of shared principles
with Vertical AI Agents, tailored for physical interaction. We propose a
modular architecture with three core blocks--perception, cognition, and
actuation--offering a scalable framework for diverse industries. Additionally,
we present the Physical Retrieval Augmented Generation (Ph-RAG) design pattern,
which connects physical intelligence to industry-specific LLMs for real-time
decision-making and reporting informed by physical context.
Through case studies, we demonstrate how Physical AI Agents and the Ph-RAG
framework are transforming industries like autonomous vehicles, warehouse
robotics, healthcare, and manufacturing, offering businesses a pathway to
integrate embodied AI for operational efficiency and innovation.
|
2501.08946
|
Applying General Turn-taking Models to Conversational Human-Robot
Interaction
|
cs.CL cs.RO
|
Turn-taking is a fundamental aspect of conversation, but current Human-Robot
Interaction (HRI) systems often rely on simplistic, silence-based models,
leading to unnatural pauses and interruptions. This paper investigates, for the
first time, the application of general turn-taking models, specifically TurnGPT
and Voice Activity Projection (VAP), to improve conversational dynamics in HRI.
These models are trained on human-human dialogue data using self-supervised
learning objectives, without requiring domain-specific fine-tuning. We propose
methods for using these models in tandem to predict when a robot should begin
preparing responses, take turns, and handle potential interruptions. We
evaluated the proposed system in a within-subject study against a traditional
baseline system, using the Furhat robot with 39 adults in a conversational
setting, in combination with a large language model for autonomous response
generation. The results show that participants significantly prefer the
proposed system, and it significantly reduces response delays and
interruptions.
|
2501.08950
|
Computing Approximated Fixpoints via Dampened Mann Iteration
|
cs.LO cs.LG
|
Fixpoints are ubiquitous in computer science and when dealing with
quantitative semantics and verification one is commonly led to consider least
fixpoints of (higher-dimensional) functions over the nonnegative reals. We show
how to approximate the least fixpoint of such functions, focusing on the case
in which they are not known precisely, but represented by a sequence of
approximating functions that converge to them. We concentrate on monotone and
non-expansive functions, for which uniqueness of fixpoints is not guaranteed
and standard fixpoint iteration schemes might get stuck at a fixpoint that is
not the least. Our main contribution is the identification of an iteration
scheme, a variation of Mann iteration with a dampening factor, which, under
suitable conditions, is shown to guarantee convergence to the least fixpoint of
the function of interest. We then argue that these results are relevant in the
context of model-based reinforcement learning for Markov decision processes
(MDPs), showing that the proposed iteration scheme instantiates to MDPs and
allows us to derive convergence to the optimal expected return. More generally,
we show that our results can be used to iterate to the least fixpoint almost
surely for systems where the function of interest can be approximated with
given probabilistic error bounds, as it happens for probabilistic systems, such
as simple stochastic games, that can be explored via sampling.
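The role of the dampening factor can be seen on a one-dimensional toy function with a continuum of fixpoints, where plain fixpoint iteration gets stuck at whichever fixpoint it starts on. The update rule and schedule below are illustrative assumptions, not the paper's exact scheme:

```python
def dampened_mann(f, x0, steps=20000, alpha=0.5):
    """Mann-style iteration x_{k+1} = (1 - alpha) * g_k * x_k + alpha * f(x_k)
    with a dampening factor g_k < 1 tending to 1; the shrinkage pulls the
    iterates below spurious fixpoints so they settle at the least one.
    (Schedule chosen for illustration only.)"""
    x = x0
    for k in range(steps):
        gamma = 1.0 - 1.0 / (k + 2)  # dampening factor -> 1
        x = (1.0 - alpha) * gamma * x + alpha * f(x)
    return x

# f is monotone and non-expansive with fixpoints [0.5, inf); the least is 0.5.
f = lambda x: max(x, 0.5)
print(round(dampened_mann(f, 2.0), 3))  # 0.5
# Plain iteration x_{k+1} = f(x_k) from x0 = 2.0 would stay stuck at the
# non-least fixpoint 2.0.
```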
|
2501.08951
|
Analyzing the Ethical Logic of Six Large Language Models
|
cs.AI cs.CY
|
This study examines the ethical reasoning of six prominent generative large
language models: OpenAI GPT-4o, Meta LLaMA 3.1, Perplexity, Anthropic Claude
3.5 Sonnet, Google Gemini, and Mistral 7B. The research explores how these
models articulate and apply ethical logic, particularly in response to moral
dilemmas such as the Trolley Problem and the Heinz Dilemma. Departing from
traditional alignment studies, the study adopts an explainability-transparency
framework, prompting models to explain their ethical reasoning. This approach
is analyzed through three established ethical typologies: the
consequentialist-deontological analytic, Moral Foundations Theory, and the
Kohlberg Stages of Moral Development Model. Findings reveal that LLMs exhibit
largely convergent ethical logic, marked by a rationalist, consequentialist
emphasis, with decisions often prioritizing harm minimization and fairness.
Despite similarities in pre-training and model architecture, a mixture of
nuanced and significant differences in ethical reasoning emerge across models,
reflecting variations in fine-tuning and post-training processes. The models
consistently display erudition, caution, and self-awareness, presenting ethical
reasoning akin to a graduate-level discourse in moral philosophy. In striking
uniformity, these systems all describe their ethical reasoning as more
sophisticated than what is characteristic of typical human moral logic.
|
2501.08958
|
Kolmogorov-Arnold Networks for Time Series Granger Causality Inference
|
cs.LG cs.AI
|
We propose the Granger causality inference Kolmogorov-Arnold Networks
(KANGCI), a novel architecture that extends the recently proposed
Kolmogorov-Arnold Networks (KAN) to the domain of causal inference. By
extracting base weights from KAN layers and incorporating the sparsity-inducing
penalty and ridge regularization, KANGCI effectively infers the Granger
causality from time series. Additionally, we propose an algorithm based on
time-reversed Granger causality that automatically selects causal relationships
with better inference performance from the original or time-reversed time
series or integrates the results to mitigate spurious connectivities.
Comprehensive experiments conducted on Lorenz-96, Gene regulatory networks,
fMRI BOLD signals, VAR, and real-world EEG datasets demonstrate that the
proposed model achieves competitive performance to state-of-the-art methods in
inferring Granger causality from nonlinear, high-dimensional, and
limited-sample time series.
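As background, the classical linear notion of Granger causality that KANGCI generalizes can be checked with a plain VAR-style regression: x Granger-causes y if adding x's past reduces the residual error of predicting y. The synthetic system below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal(scale=0.5)
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.5)  # x drives y

def rss(target, lags):
    """Residual sum of squares of regressing target_t on the lagged series."""
    A = np.column_stack([np.ones(n - 1)] + [s[:-1] for s in lags])
    coef, *_ = np.linalg.lstsq(A, target[1:], rcond=None)
    return float(np.sum((target[1:] - A @ coef) ** 2))

rss_restricted = rss(y, [y])      # y's own past only
rss_full = rss(y, [y, x])         # y's past plus x's past
print(rss_full < 0.75 * rss_restricted)  # True: x's past clearly helps
```

Time-reversal, as used in the paper's selection algorithm, would repeat this test on the reversed series and keep whichever direction gives the cleaner inference.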
|
2501.08962
|
An analysis of data variation and bias in image-based dermatological
datasets for machine learning classification
|
cs.CV cs.AI
|
AI algorithms have become valuable in aiding professionals in healthcare. The
increasing confidence obtained by these models is helpful in critical decision
demands. In clinical dermatology, classification models can detect malignant
lesions on patients' skin using only RGB images as input. However, most
learning-based methods employ data acquired from dermoscopic datasets on
training, which are large and validated by a gold standard. Clinical models aim
to deal with classification on users' smartphone cameras that do not contain
the corresponding resolution provided by dermoscopy. Also, clinical
applications bring new challenges: they can contain captures from uncontrolled
environments, skin tone variations, viewpoint changes, noise in data and
labels, and unbalanced classes. A possible alternative would be to use transfer
learning to deal with the clinical images. However, as the number of samples is
low, performance can degrade because the source
distribution used in training differs from the test set. This work aims to
evaluate the gap between dermoscopic and clinical samples and understand how
the dataset variations impact training. It assesses the main differences
between distributions that disturb the model's prediction. Finally, from
experiments on different architectures, we argue how to combine the data from
divergent distributions, decreasing the impact on the model's final accuracy.
|
2501.08963
|
Training-Aware Risk Control for Intensity Modulated Radiation Therapies
Quality Assurance with Conformal Prediction
|
cs.LG
|
Measurement quality assurance (QA) practices play a key role in the safe use
of Intensity Modulated Radiation Therapies (IMRT) for cancer treatment. These
practices have reduced measurement-based IMRT QA failure below 1%. However,
these practices are time and labor intensive which can lead to delays in
patient care. In this study, we examine how conformal prediction methodologies
can be used to robustly triage plans. We propose a new training-aware conformal
risk control method by combining the benefit of conformal risk control and
conformal training. We incorporate the decision making thresholds based on the
gamma passing rate, along with the risk functions used in clinical evaluation,
into the design of the risk control framework. Our method achieves high
sensitivity and specificity and significantly reduces the number of plans
needing measurement without generating a huge confidence interval. Our results
demonstrate the validity and applicability of conformal prediction methods for
improving efficiency and reducing the workload of the IMRT QA process.
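A plain split-conformal threshold, rather than the paper's training-aware risk control, already illustrates the triage idea: calibrate a score cutoff so that, with the usual finite-sample guarantee, at most an alpha fraction of truly failing plans would skip measurement. The scores and alpha below are hypothetical:

```python
import math

def conformal_threshold(cal_scores_fail, alpha=0.1):
    """Choose a triage threshold from calibration scores of *failing* plans so
    that at most an alpha fraction of new failing plans score above it (and
    would thus be wrongly exempted from measurement)."""
    s = sorted(cal_scores_fail)
    n = len(s)
    k = math.ceil((n + 1) * (1 - alpha)) - 1  # conservative order statistic
    return s[min(k, n - 1)]

# Hypothetical calibration scores: higher score = model predicts "will pass QA".
failing_plan_scores = [0.1, 0.2, 0.15, 0.3, 0.05, 0.25, 0.4, 0.35, 0.12, 0.22]
tau = conformal_threshold(failing_plan_scores, alpha=0.2)
# Only plans scoring above tau skip measurement; all others are measured.
print(tau)  # 0.35
```

The paper's contribution goes further by folding the gamma-passing-rate decision thresholds and clinical risk functions into training itself, which this sketch does not attempt.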
|
2501.08970
|
Trusted Machine Learning Models Unlock Private Inference for Problems
Currently Infeasible with Cryptography
|
cs.CR cs.AI cs.LG
|
We often interact with untrusted parties. Prioritization of privacy can limit
the effectiveness of these interactions, as achieving certain goals
necessitates sharing private data. Traditionally, addressing this challenge has
involved either seeking trusted intermediaries or constructing cryptographic
protocols that restrict how much data is revealed, such as multi-party
computations or zero-knowledge proofs. While significant advances have been
made in scaling cryptographic approaches, they remain limited in terms of the
size and complexity of applications they can be used for. In this paper, we
argue that capable machine learning models can fulfill the role of a trusted
third party, thus enabling secure computations for applications that were
previously infeasible. In particular, we describe Trusted Capable Model
Environments (TCMEs) as an alternative approach for scaling secure computation,
where capable machine learning model(s) interact under input/output
constraints, with explicit information flow control and explicit statelessness.
This approach aims to achieve a balance between privacy and computational
efficiency, enabling private inference where classical cryptographic solutions
are currently infeasible. We describe a number of use cases that are enabled by
TCME, and show that even some simple classic cryptographic problems can already
be solved with TCME. Finally, we outline current limitations and discuss the
path forward in implementing TCMEs.
|
2501.08974
|
Learning to Extract Cross-Domain Aspects and Understanding Sentiments
Using Large Language Models
|
cs.CL
|
Aspect-based sentiment analysis (ABSA) is a refined approach to sentiment
analysis that aims to extract and classify sentiments based on specific aspects
or features of a product, service, or entity. Unlike traditional sentiment
analysis, which assigns a general sentiment score to entire reviews or texts,
ABSA focuses on breaking down the text into individual components or aspects
(e.g., quality, price, service) and evaluating the sentiment towards each. This
allows for a more granular level of understanding of customer opinions,
enabling businesses to pinpoint specific areas of strength and improvement. The
process involves several key steps, including aspect extraction, sentiment
classification, and aspect-level sentiment aggregation for a review paragraph
or any other form that the users have provided. ABSA has significant
applications in areas such as product reviews, social media monitoring,
customer feedback analysis, and market research. By leveraging techniques from
natural language processing (NLP) and machine learning, ABSA facilitates the
extraction of valuable insights, enabling companies to make data-driven
decisions that enhance customer satisfaction and optimize offerings. As ABSA
evolves, it holds the potential to greatly improve personalized customer
experiences by providing a deeper understanding of sentiment across various
product aspects. In this work, we have analyzed the strength of LLMs for a
complete cross-domain aspect-based sentiment analysis with the aim of defining
the framework for certain products and using it for other similar situations.
We argue that an accuracy of 92% is achievable on the
aspect-based sentiment analysis dataset of SemEval-2015 Task 12.
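The aspect-level sentiment aggregation step described above can be sketched independently of any LLM; the (aspect, polarity) pairs below are hypothetical outputs of the extraction and classification stages:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical aspect-level extractions from one review paragraph:
# (aspect, polarity in [-1, 1]) pairs produced sentence by sentence.
extractions = [("price", -0.6), ("quality", 0.8), ("service", 0.4),
               ("price", -0.2), ("quality", 0.9)]

def aggregate(pairs):
    """Aspect-level sentiment aggregation: average polarity per aspect."""
    buckets = defaultdict(list)
    for aspect, score in pairs:
        buckets[aspect].append(score)
    return {a: round(mean(v), 2) for a, v in buckets.items()}

print(aggregate(extractions))  # {'price': -0.4, 'quality': 0.85, 'service': 0.4}
```

This granular view is what lets a business see, for example, that "price" is the weak aspect even when the overall review polarity is positive.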
|
2501.08977
|
Development and Validation of the Provider Documentation Summarization
Quality Instrument for Large Language Models
|
cs.AI
|
As Large Language Models (LLMs) are integrated into electronic health record
(EHR) workflows, validated instruments are essential to evaluate their
performance before implementation. Existing instruments for provider
documentation quality are often unsuitable for the complexities of
LLM-generated text and lack validation on real-world data. The Provider
Documentation Summarization Quality Instrument (PDSQI-9) was developed to
evaluate LLM-generated clinical summaries. Multi-document summaries were
generated from real-world EHR data across multiple specialties using several
LLMs (GPT-4o, Mixtral 8x7b, and Llama 3-8b). Validation included Pearson
correlation for substantive validity, factor analysis and Cronbach's alpha for
structural validity, inter-rater reliability (ICC and Krippendorff's alpha) for
generalizability, a semi-Delphi process for content validity, and comparisons
of high-versus low-quality summaries for discriminant validity. Seven physician
raters evaluated 779 summaries and answered 8,329 questions, achieving over 80%
power for inter-rater reliability. The PDSQI-9 demonstrated strong internal
consistency (Cronbach's alpha = 0.879; 95% CI: 0.867-0.891) and high
inter-rater reliability (ICC = 0.867; 95% CI: 0.867-0.868), supporting
structural validity and generalizability. Factor analysis identified a 4-factor
model explaining 58% of the variance, representing organization, clarity,
accuracy, and utility. Substantive validity was supported by correlations
between note length and scores for Succinct (rho = -0.200, p = 0.029) and
Organized (rho = -0.190, p = 0.037). Discriminant validity distinguished
high- from low-quality summaries (p < 0.001). The PDSQI-9 demonstrates robust
construct validity, supporting its use in clinical practice to evaluate
LLM-generated summaries and facilitate safer integration of LLMs into
healthcare workflows.
|
2501.08982
|
CityLoc: 6DoF Pose Distributional Localization for Text Descriptions in
Large-Scale Scenes with Gaussian Representation
|
cs.CV
|
Localizing textual descriptions within large-scale 3D scenes presents
inherent ambiguities, such as identifying all traffic lights in a city.
Addressing this, we introduce a method to generate distributions of camera
poses conditioned on textual descriptions, facilitating robust reasoning for
broadly defined concepts.
Our approach employs a diffusion-based architecture to refine noisy 6DoF
camera poses towards plausible locations, with conditional signals derived from
pre-trained text encoders. Integration with the pretrained Vision-Language
Model, CLIP, establishes a strong linkage between text descriptions and pose
distributions. Enhancement of localization accuracy is achieved by rendering
candidate poses using 3D Gaussian splatting, which corrects misaligned samples
through visual reasoning.
We validate our method's superiority by comparing it against standard
distribution estimation methods across five large-scale datasets, demonstrating
consistent outperformance. Code, datasets and more information will be publicly
available at our project page.
|
2501.08983
|
CityDreamer4D: Compositional Generative Model of Unbounded 4D Cities
|
cs.CV
|
3D scene generation has garnered growing attention in recent years and has
made significant progress. Generating 4D cities is more challenging than 3D
scenes due to the presence of structurally complex, visually diverse objects
like buildings and vehicles, and heightened human sensitivity to distortions in
urban environments. To tackle these issues, we propose CityDreamer4D, a
compositional generative model specifically tailored for generating unbounded
4D cities. Our main insights are 1) 4D city generation should separate dynamic
objects (e.g., vehicles) from static scenes (e.g., buildings and roads), and 2)
all objects in the 4D scene should be composed of different types of neural
fields for buildings, vehicles, and background stuff. Specifically, we propose
Traffic Scenario Generator and Unbounded Layout Generator to produce dynamic
traffic scenarios and static city layouts using a highly compact BEV
representation. Objects in 4D cities are generated by combining stuff-oriented
and instance-oriented neural fields for background stuff, buildings, and
vehicles. To suit the distinct characteristics of background stuff and
instances, the neural fields employ customized generative hash grids and
periodic positional embeddings as scene parameterizations. Furthermore, we
offer a comprehensive suite of datasets for city generation, including OSM,
GoogleEarth, and CityTopia. The OSM dataset provides a variety of real-world
city layouts, while the Google Earth and CityTopia datasets deliver
large-scale, high-quality city imagery complete with 3D instance annotations.
Leveraging its compositional design, CityDreamer4D supports a range of
downstream applications, such as instance editing, city stylization, and urban
simulation, while delivering state-of-the-art performance in generating
realistic 4D cities.
|
2501.08985
|
Personality Modeling for Persuasion of Misinformation using AI Agent
|
cs.CL cs.AI cs.GT
|
The proliferation of misinformation on social media platforms has highlighted
the need to understand how individual personality traits influence
susceptibility to and propagation of misinformation. This study employs an
innovative agent-based modeling approach to investigate the relationship
between personality traits and misinformation dynamics. Using six AI agents
embodying different dimensions of the Big Five personality traits
(Extraversion, Agreeableness, and Neuroticism), we simulated interactions
across six diverse misinformation topics. The experiment, implemented through
the AgentScope framework using the GLM-4-Flash model, generated 90 unique
interactions, revealing complex patterns in how personality combinations affect
persuasion and resistance to misinformation. Our findings demonstrate that
analytical and critical personality traits enhance effectiveness in
evidence-based discussions, while non-aggressive persuasion strategies show
unexpected success in misinformation correction. Notably, agents with critical
traits achieved a 59.4% success rate in HIV-related misinformation discussions,
while those employing non-aggressive approaches maintained consistent
persuasion rates above 40% across different personality combinations. The study
also revealed a non-transitive pattern in persuasion effectiveness, challenging
conventional assumptions about personality-based influence. These results
provide crucial insights for developing personality-aware interventions in
digital environments and suggest that effective misinformation countermeasures
should prioritize emotional connection and trust-building over confrontational
approaches. The findings contribute to both theoretical understanding of
personality-misinformation dynamics and practical strategies for combating
misinformation in social media contexts.
|
2501.08987
|
Degradedness Under Cooperation
|
cs.IT math.IT
|
We study cooperation problems in broadcast and relay networks, where the
receivers do not satisfy the classical physical degradedness assumptions. New
notions of degradedness, strongly less noisy and strongly more capable are
introduced. We show that under these conditions, decode and forward (D&F) is
optimal for classes of cooperative systems with limited conference rates, thus
yielding new capacity results for these systems. In particular, we derive
bounds on the capacity region of a class of broadcast channels with
cooperation, that are tight on part of the capacity region. It is shown that
the cut-set bound is tight for classes of primitive relay and diamond channels,
beyond the physically or stochastically degraded models.
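For orientation, the classical orderings that the paper's "strongly" variants strengthen have standard definitions for a broadcast channel p(y, z | x); the strengthened versions are defined in the paper itself:

```latex
% Y is (classically) less noisy than Z if
I(U;Y) \;\ge\; I(U;Z)
\quad \text{for every } U \to X \to (Y,Z),

% and Y is more capable than Z if
I(X;Y) \;\ge\; I(X;Z)
\quad \text{for every input distribution } p(x).

% Physical degradedness (X -> Y -> Z a Markov chain) implies less noisy,
% which in turn implies more capable; the converses fail in general.
```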
|
2501.08994
|
RepVideo: Rethinking Cross-Layer Representation for Video Generation
|
cs.CV
|
Video generation has achieved remarkable progress with the introduction of
diffusion models, which have significantly improved the quality of generated
videos. However, recent research has primarily focused on scaling up model
training, while offering limited insights into the direct impact of
representations on the video generation process. In this paper, we initially
investigate the characteristics of features in intermediate layers, finding
substantial variations in attention maps across different layers. These
variations lead to unstable semantic representations and contribute to
cumulative differences between features, which ultimately reduce the similarity
between adjacent frames and negatively affect temporal coherence. To address
this, we propose RepVideo, an enhanced representation framework for
text-to-video diffusion models. By accumulating features from neighboring
layers to form enriched representations, this approach captures more stable
semantic information. These enhanced representations are then used as inputs to
the attention mechanism, thereby improving semantic expressiveness while
ensuring feature consistency across adjacent frames. Extensive experiments
demonstrate that our RepVideo not only significantly enhances the ability to
generate accurate spatial appearances, such as capturing complex spatial
relationships between multiple objects, but also improves temporal consistency
in video generation.
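The feature-accumulation idea, averaging neighbouring layers' features to form a more stable input to attention, can be sketched as a sliding-window mean. The shapes, window size, and toy features are assumptions, not RepVideo's actual implementation:

```python
import numpy as np

def accumulate_layers(layer_feats, window=3):
    """Average each layer's feature map with its neighbours (sliding window
    over the layer axis), yielding more stable semantic representations to
    feed into the attention blocks."""
    L = len(layer_feats)
    out = []
    for i in range(L):
        lo, hi = max(0, i - window // 2), min(L, i + window // 2 + 1)
        out.append(np.mean(layer_feats[lo:hi], axis=0))
    return out

feats = [np.full((2, 2), float(i)) for i in range(5)]  # toy per-layer features
smoothed = accumulate_layers(feats)
print([float(f[0, 0]) for f in smoothed])  # [0.5, 1.0, 2.0, 3.0, 3.5]
```

The interior layers become weighted blends of their neighbours, which is the stabilising effect the abstract attributes to the enriched representations.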
|
2501.08995
|
VECT-GAN: A variationally encoded generative model for overcoming data
scarcity in pharmaceutical science
|
cs.LG
|
Data scarcity in pharmaceutical research has led to reliance on
labour-intensive trial-and-error approaches for development rather than
data-driven methods. While Machine Learning offers a solution, existing
datasets are often small and noisy, limiting their utility. To address this, we
developed a Variationally Encoded Conditional Tabular Generative Adversarial
Network (VECT-GAN), a novel generative model specifically designed for
augmenting small, noisy datasets. We introduce a pipeline where data is
augmented before regression model development and demonstrate that this
consistently and significantly improves performance over other state-of-the-art
tabular generative models. We apply this pipeline across six pharmaceutical
datasets, and highlight its real-world applicability by developing novel
polymers with medically desirable mucoadhesive properties, which we made and
experimentally characterised. Additionally, we pre-train the model on the
ChEMBL database of drug-like molecules, leveraging knowledge distillation to
enhance its generalisability, making it readily available for use on
pharmaceutical datasets containing small molecules, an extremely common
pharmaceutical task. We demonstrate the power of synthetic data for
regularising small tabular datasets, highlighting its potential to become
standard practice in pharmaceutical model development, and make our method,
including VECT-GAN pre-trained on ChEMBL available as a pip package.
|
2501.08998
|
CrystalGRW: Generative Modeling of Crystal Structures with Targeted
Properties via Geodesic Random Walks
|
cond-mat.mtrl-sci cond-mat.stat-mech cs.LG physics.comp-ph
|
Determining whether a candidate crystalline material is thermodynamically
stable depends on identifying its true ground-state structure, a central
challenge in computational materials science. We introduce CrystalGRW, a
diffusion-based generative model on Riemannian manifolds that proposes novel
crystal configurations and can predict stable phases validated by density
functional theory. The crystal properties, such as fractional coordinates,
atomic types, and lattice matrices, are represented on suitable Riemannian
manifolds, ensuring that new predictions generated through the diffusion
process preserve the periodicity of crystal structures. We incorporate an
equivariant graph neural network to also account for rotational and
translational symmetries during the generation process. CrystalGRW demonstrates
the ability to generate realistic crystal structures that are close to their
ground states with accuracy comparable to existing models, while also enabling
conditional control, such as specifying a desired crystallographic point group.
These features help accelerate materials discovery and inverse design by
offering stable, symmetry-consistent crystal candidates for experimental
validation.
|
2501.09001
|
Vision Foundation Models for Computed Tomography
|
eess.IV cs.CV
|
Foundation models (FMs) have shown transformative potential in radiology by
performing diverse, complex tasks across imaging modalities. Here, we developed
CT-FM, a large-scale 3D image-based pre-trained model designed explicitly for
various radiological tasks. CT-FM was pre-trained using 148,000 computed
tomography (CT) scans from the Imaging Data Commons through label-agnostic
contrastive learning. We evaluated CT-FM across four categories of tasks,
namely, whole-body and tumor segmentation, head CT triage, medical image
retrieval, and semantic understanding, showing superior performance against
state-of-the-art models. Beyond quantitative success, CT-FM demonstrated the
ability to cluster regions anatomically and identify similar anatomical and
structural concepts across scans. Furthermore, it remained robust across
test-retest settings and indicated reasonable salient regions attached to its
embeddings. This study demonstrates the value of large-scale medical imaging
foundation models and by open-sourcing the model weights, code, and data, aims
to support more adaptable, reliable, and interpretable AI solutions in
radiology.
|
2501.09004
|
Aegis2.0: A Diverse AI Safety Dataset and Risks Taxonomy for Alignment
of LLM Guardrails
|
cs.CL
|
As Large Language Models (LLMs) and generative AI become increasingly
widespread, concerns about content safety have grown in parallel. Currently,
there is a clear lack of high-quality, human-annotated datasets that address
the full spectrum of LLM-related safety risks and are usable for commercial
applications. To bridge this gap, we propose a comprehensive and adaptable
taxonomy for categorizing safety risks, structured into 12 top-level hazard
categories with an extension to 9 fine-grained subcategories. This taxonomy is
designed to meet the diverse requirements of downstream users, offering more
granular and flexible tools for managing various risk types. Using a hybrid
data generation pipeline that combines human annotations with a multi-LLM
"jury" system to assess the safety of responses, we obtain Aegis 2.0, a
carefully curated collection of 34,248 samples of human-LLM interactions,
annotated according to our proposed taxonomy. To validate its effectiveness, we
demonstrate that several lightweight models, trained using parameter-efficient
techniques on Aegis 2.0, achieve performance competitive with leading safety
models fully fine-tuned on much larger, non-commercial datasets. In addition,
we introduce a novel training blend that combines safety with topic following
data. This approach enhances the adaptability of guard models, enabling them to
generalize to new risk categories defined during inference. We plan to
open-source Aegis 2.0 data and models to the research community to aid in the
safety guardrailing of LLMs.
|
2501.09005
|
Lightweight Security for Ambient-Powered Programmable Reflections with
Reconfigurable Intelligent Surfaces
|
cs.IT cs.ET math.IT
|
Ambient Internet-of-Things (AIoT) form a new class of emerging technology
that promises to deliver pervasive wireless connectivity to previously
disconnected devices and products, assisting dependent industries (for example,
supply chain, clothing, remote surveillance, climate monitoring, and sensors)
to obtain granular real-time service visibility. Such ultra-low-complexity,
low-power devices, which are either battery-less or capable of only limited
energy storage, can provide data feeds about the condition of any
aspect (e.g., an environment or an item) that is being monitored, enabling
proactive or reactive control by any application server. Although the security
of data involving AIoT devices is critical for key decisions of any dependent
operational system, the implementation of resource intensive cryptographic
algorithms and other security mechanisms becomes nearly infeasible, or very
challenging, due to the device energy and computational limitations. In this
article, we present a lightweight security solution that enables
confidentiality, integrity, and privacy protection in wireless links including
AIoT. We consider, as a case study, an ambient-powered Reconfigurable
Intelligent Surface (RIS) that harvests energy from its incident radio waves to
realize programmable reflective beamforming, enabling the communication between
a Base Station (BS) and end-user terminals. The proposed lightweight security
solution is applied to the control channel between the BS and the RIS
controller which is responsible for the metasurface's dynamic management and
phase configuration optimization.
|
2501.09006
|
Improving Stability Estimates in Adversarial Explainable AI through
Alternate Search Methods
|
cs.LG
|
Advances in the effectiveness of machine learning models have come at the
cost of enormous complexity resulting in a poor understanding of how they
function. Local surrogate methods have been used to approximate the workings of
these complex models, but recent work has revealed their vulnerability to
adversarial attacks where the explanation produced is appreciably different
while the meaning and structure of the complex model's output remains similar.
This prior work has focused on the existence of these weaknesses but not on
their magnitude. Here we explore using an alternate search method with the goal
of finding minimum viable perturbations, the fewest perturbations necessary to
achieve a fixed similarity value between the original and altered text's
explanation. Intuitively, a method that requires fewer perturbations to expose
a given level of instability is inferior to one which requires more. This
nuance allows for superior comparisons of the stability of explainability
methods.
|
2501.09007
|
AI-RAN: Transforming RAN with AI-driven Computing Infrastructure
|
cs.AI cs.NI eess.SP
|
The radio access network (RAN) landscape is undergoing a transformative shift
from traditional, communication-centric infrastructures towards converged
compute-communication platforms. This article introduces AI-RAN which
integrates both RAN and artificial intelligence (AI) workloads on the same
infrastructure. By doing so, AI-RAN not only meets the performance demands of
future networks but also improves asset utilization. We begin by examining how
RANs have evolved beyond mobile broadband towards AI-RAN, and by articulating
three manifestations of AI-RAN: AI-for-RAN, AI-on-RAN, and
AI-and-RAN. Next, we identify the key requirements and enablers for the
convergence of communication and computing in AI-RAN. We then provide a
reference architecture for advancing AI-RAN from concept to practice. To
illustrate the practical potential of AI-RAN, we present a proof-of-concept
that concurrently processes RAN and AI workloads utilizing NVIDIA Grace-Hopper
GH200 servers. Finally, we conclude the article by outlining future work
directions to guide further developments of AI-RAN.
|
2501.09008
|
SimGen: A Diffusion-Based Framework for Simultaneous Surgical Image and
Segmentation Mask Generation
|
cs.CV
|
Acquiring and annotating surgical data is often resource-intensive, ethically
constrained, and dependent on significant expert involvement. While generative
AI models such as text-to-image models can alleviate data scarcity,
incorporating spatial
annotations, such as segmentation masks, is crucial for precision-driven
surgical applications, simulation, and education. This study introduces both a
novel task and method, SimGen, for Simultaneous Image and Mask Generation.
SimGen is a diffusion model based on the DDPM framework and Residual U-Net,
designed to jointly generate high-fidelity surgical images and their
corresponding segmentation masks. The model leverages cross-correlation priors
to capture dependencies between continuous image and discrete mask
distributions. Additionally, a Canonical Fibonacci Lattice (CFL) is employed to
enhance class separability and uniformity in the RGB space of the masks. SimGen
delivers high-fidelity images and accurate segmentation masks, outperforming
baselines across six public datasets assessed on image and semantic inception
distance metrics. An ablation study shows that the CFL improves mask quality
and spatial separation. Downstream experiments suggest that the generated
image-mask pairs remain usable when regulations limit the release of human data
for research. This work
offers a cost-effective solution for generating paired surgical images and
complex labels, advancing surgical AI development by reducing the need for
expensive manual annotations.
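The Canonical Fibonacci Lattice mentioned above can be sketched as follows. This is a hedged illustration: the standard canonical Fibonacci lattice places near-uniform points on the unit sphere via the golden angle, and the rescaling of those points into the RGB cube is a hypothetical mapping assumed here, not necessarily the paper's exact construction.

```python
import math

GOLDEN_RATIO = (1 + math.sqrt(5)) / 2

def fibonacci_lattice_colors(n):
    """Spread n class colors via a canonical Fibonacci lattice on the unit
    sphere, then rescale each point into [0, 1]^3 as a toy RGB assignment."""
    colors = []
    for i in range(n):
        t = (i + 0.5) / n
        theta = math.acos(1 - 2 * t)             # polar angle, uniform in cos
        phi = 2 * math.pi * i / GOLDEN_RATIO     # golden-angle azimuth
        x = math.sin(theta) * math.cos(phi)
        y = math.sin(theta) * math.sin(phi)
        z = math.cos(theta)
        colors.append(tuple(0.5 * (c + 1) for c in (x, y, z)))
    return colors
```

The intent matches the abstract's stated motivation: well-separated, near-uniform class colors make discrete masks easier to model alongside continuous images.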
|
2501.09009
|
Towards Fast, Specialized Machine Learning Force Fields: Distilling
Foundation Models via Energy Hessians
|
physics.chem-ph cond-mat.mtrl-sci cs.LG physics.bio-ph
|
The foundation model (FM) paradigm is transforming Machine Learning Force
Fields (MLFFs), leveraging general-purpose representations and scalable
training to perform a variety of computational chemistry tasks. Although MLFF
FMs have begun to close the accuracy gap relative to first-principles methods,
there is still a strong need for faster inference speed. Additionally, while
research is increasingly focused on general-purpose models which transfer
across chemical space, practitioners typically only study a small subset of
systems at a given time. This underscores the need for fast, specialized MLFFs
relevant to specific downstream applications, which preserve test-time physical
soundness while maintaining train-time scalability. In this work, we introduce
a method for transferring general-purpose representations from MLFF foundation
models to smaller, faster MLFFs specialized to specific regions of chemical
space. We formulate our approach as a knowledge distillation procedure, where
the smaller "student" MLFF is trained to match the Hessians of the energy
predictions of the "teacher" foundation model. Our specialized MLFFs can be up
to 20 $\times$ faster than the original foundation model, while retaining, and
in some cases exceeding, its performance and that of undistilled models. We
also show that distilling from a teacher model with a direct force
parameterization into a student model trained with conservative forces (i.e.,
computed as derivatives of the potential energy) successfully leverages the
representations from the large-scale teacher for improved accuracy, while
maintaining energy conservation during test-time molecular dynamics
simulations. More broadly, our work suggests a new paradigm for MLFF
development, in which foundation models are released along with smaller,
specialized simulation "engines" for common chemical subsets.
|
2501.09012
|
Multimodal LLMs Can Reason about Aesthetics in Zero-Shot
|
cs.CV cs.AI cs.CL cs.MM
|
We present the first study on how Multimodal LLMs' (MLLMs) reasoning ability
shall be elicited to evaluate the aesthetics of artworks. To facilitate this
investigation, we construct MM-StyleBench, a novel high-quality dataset for
benchmarking artistic stylization. We then develop a principled method for
human preference modeling and perform a systematic correlation analysis between
MLLMs' responses and human preference. Our experiments reveal an inherent
hallucination issue of MLLMs in art evaluation, associated with response
subjectivity. We propose ArtCoT, demonstrating that art-specific task
decomposition and the use of concrete language boost MLLMs' reasoning ability
for aesthetics. Our findings offer valuable insights into MLLMs for art and can
benefit a wide range of downstream applications, such as style transfer and
artistic image generation. Code available at
https://github.com/songrise/MLLM4Art.
|
2501.09014
|
How Do Generative Models Draw a Software Engineer? A Case Study on
Stable Diffusion Bias
|
cs.SE cs.AI
|
Generative models are nowadays widely used to generate graphical content used
for multiple purposes, e.g. web, art, advertisement. However, it has been shown
that the images generated by these models could reinforce societal biases
already existing in specific contexts. In this paper, we focus on understanding
if this is the case when one generates images related to various software
engineering tasks. In fact, the Software Engineering (SE) community is not
immune from gender and ethnicity disparities, which could be amplified by the
use of these models. Hence, if used without consciousness, artificially
generated images could reinforce these biases in the SE domain. Specifically,
we perform an extensive empirical evaluation of the gender and ethnicity bias
exposed by three versions of the Stable Diffusion (SD) model (a very popular
open-source text-to-image model) - SD 2, SD XL, and SD 3 - towards SE tasks. We
obtain 6,720 images by feeding each model with two sets of prompts describing
different software-related tasks: one set includes the Software Engineer
keyword, and one set does not include any specification of the person
performing the task. Next, we evaluate the gender and ethnicity disparities in
the generated images. Results show how all models are significantly biased
towards male figures when representing software engineers. On the contrary,
while SD 2 and SD XL are strongly biased towards White figures, SD 3 is
slightly more biased towards Asian figures. Nevertheless, all models
significantly under-represent Black and Arab figures, regardless of the prompt
style used. The results of our analysis highlight severe concerns about
adopting those models to generate content for SE tasks and open the field for
future research on bias mitigation in this context.
|
2501.09019
|
Ouroboros-Diffusion: Exploring Consistent Content Generation in
Tuning-free Long Video Diffusion
|
cs.CV
|
The first-in-first-out (FIFO) video diffusion, built on a pre-trained
text-to-video model, has recently emerged as an effective approach for
tuning-free long video generation. This technique maintains a queue of video
frames with progressively increasing noise, continuously producing clean frames
at the queue's head while Gaussian noise is enqueued at the tail. However,
FIFO-Diffusion often struggles to maintain long-range temporal consistency in the
generated videos due to the lack of correspondence modeling across frames. In
this paper, we propose Ouroboros-Diffusion, a novel video denoising framework
designed to enhance structural and content (subject) consistency, enabling the
generation of consistent videos of arbitrary length. Specifically, we introduce
a new latent sampling technique at the queue tail to improve structural
consistency, ensuring perceptually smooth transitions among frames. To enhance
subject consistency, we devise a Subject-Aware Cross-Frame Attention (SACFA)
mechanism, which aligns subjects across frames within short segments to achieve
better visual coherence. Furthermore, we introduce self-recurrent guidance.
This technique leverages information from all previous cleaner frames at the
front of the queue to guide the denoising of noisier frames at the end,
fostering rich and contextual global information interaction. Extensive
experiments of long video generation on the VBench benchmark demonstrate the
superiority of our Ouroboros-Diffusion, particularly in terms of subject
consistency, motion smoothness, and temporal consistency.
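The FIFO queue mechanics that both FIFO-Diffusion and Ouroboros-Diffusion build on can be sketched in a toy form. Everything below is a hedged simplification: latents are scalars, `denoise_step` is a stub, and the diagonal-denoising schedule is assumed from the abstract's description (clean frames dequeued at the head, fresh Gaussian noise enqueued at the tail).

```python
from collections import deque
import random

def make_noisy_latent():
    """Stand-in for a freshly sampled Gaussian latent frame (toy scalar)."""
    return random.gauss(0.0, 1.0)

def denoise_step(latent, level):
    """Toy one-step denoiser: shrinks the latent as its noise level drops."""
    return latent * (level - 1) / max(level, 1)

def fifo_generate(total_frames, queue_len):
    """FIFO-style diagonal denoising: noise level grows from head to tail."""
    queue = deque((make_noisy_latent(), lvl) for lvl in range(1, queue_len + 1))
    clean = []
    while len(clean) < total_frames:
        # One diagonal pass: every queued frame becomes one level cleaner.
        queue = deque((denoise_step(x, lvl), lvl - 1) for x, lvl in queue)
        head, _ = queue.popleft()               # head has reached level 0
        clean.append(head)
        queue.append((make_noisy_latent(), queue_len))  # fresh noise at tail
    return clean
```

Ouroboros-Diffusion's contributions slot into this loop: the tail enqueue is replaced by its structure-aware latent sampling, and the denoising pass is guided by the cleaner frames near the head (self-recurrent guidance) plus SACFA for subject alignment.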
|
2501.09021
|
Navigating Ethical Challenges in Generative AI-Enhanced Research: The
ETHICAL Framework for Responsible Generative AI Use
|
cs.CY cs.AI
|
The rapid adoption of generative artificial intelligence (GenAI) in research
presents both opportunities and ethical challenges that should be carefully
navigated. Although GenAI tools can enhance research efficiency through
automation of tasks such as literature review and data analysis, their use
raises concerns about aspects such as data accuracy, privacy, bias, and
research integrity. This paper develops the ETHICAL framework, which is a
practical guide for responsible GenAI use in research. Employing a
constructivist case study examining multiple GenAI tools in real research
contexts, the framework consists of seven key principles: Examine policies and
guidelines, Think about social impacts, Harness understanding of the
technology, Indicate use, Critically engage with outputs, Access secure
versions, and Look at user agreements. Applying these principles will enable
researchers to uphold research integrity while leveraging GenAI benefits. The
framework addresses a critical gap between awareness of ethical issues and
practical action steps, providing researchers with concrete guidance for
ethical GenAI integration. This work has implications for research practice,
institutional policy development, and the broader academic community while
adapting to an AI-enhanced research landscape. The ETHICAL framework can serve
as a foundation for developing AI literacy in academic settings and promoting
responsible innovation in research methodologies.
|
2501.09022
|
Generative Models with ELBOs Converging to Entropy Sums
|
stat.ML cs.IT cs.LG math.IT math.PR math.ST stat.TH
|
The evidence lower bound (ELBO) is one of the most central objectives for
probabilistic unsupervised learning. For the ELBOs of several generative models
and model classes, we here prove convergence to entropy sums. As one result, we
provide a list of generative models for which entropy convergence has been
shown, so far, along with the corresponding expressions for entropy sums. Our
considerations include very prominent generative models such as probabilistic
PCA, sigmoid belief nets or Gaussian mixture models. However, we treat more
models and entire model classes such as general mixtures of exponential family
distributions. Our main contributions are the proofs for the individual models.
For each given model we show that the conditions stated in Theorem 1 or Theorem
2 of [arXiv:2209.03077] are fulfilled such that by virtue of the theorems the
given model's ELBO is equal to an entropy sum at all stationary points. The
equality of the ELBO at stationary points applies under realistic conditions:
for finite numbers of data points, for model/data mismatches, at any stationary
point (including saddle points), and it applies for any well-behaved family
of variational distributions.
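The entropy-sum form the abstract refers to can be sketched as follows; the notation here is assumed, and the precise conditions are those of Theorems 1 and 2 of [arXiv:2209.03077]. At all stationary points of learning, the ELBO decomposes into a sum of entropies:

```latex
\mathcal{F}(\Phi,\Theta)
  = \frac{1}{N}\sum_{n=1}^{N}\mathcal{H}\!\left[q_{\Phi}^{(n)}(z)\right]
  \;-\; \mathcal{H}\!\left[p_{\Theta}(z)\right]
  \;-\; \frac{1}{N}\sum_{n=1}^{N}
        \mathbb{E}_{q_{\Phi}^{(n)}}\!\left[\mathcal{H}\!\left[p_{\Theta}(x\mid z)\right]\right]
```

that is, the average entropy of the variational distributions, minus the entropy of the prior, minus the expected entropy of the observable (noise) distribution.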
|
2501.09024
|
Social-LLaVA: Enhancing Robot Navigation through Human-Language
Reasoning in Social Spaces
|
cs.CV cs.HC cs.RO
|
Most existing social robot navigation techniques either leverage hand-crafted
rules or human demonstrations to connect robot perception to socially compliant
actions. However, there remains a significant gap in effectively translating
perception into socially compliant actions, much like how human reasoning
naturally occurs in dynamic environments. Considering the recent success of
Vision-Language Models (VLMs), we propose using language to bridge the gap in
human-like reasoning between perception and socially aware robot actions. We
create a vision-language dataset, Social robot Navigation via Explainable
Interactions (SNEI), featuring 40K human-annotated Visual Question Answers
(VQAs) based on 2K human-robot social interactions in unstructured, crowded
public spaces, spanning perception, prediction, chain-of-thought reasoning,
action, and explanation. We fine-tune a VLM, Social-LLaVA, using SNEI to
demonstrate the practical application of our dataset. Social-LLaVA outperforms
state-of-the-art models like GPT-4V and Gemini, based on the average of fifteen
different human-judge scores across 50 VQAs. Deployed onboard a mobile robot,
Social-LLaVA enables human-like reasoning, marking a promising step toward
socially compliant robot navigation in dynamic public spaces through language
reasoning.
|
2501.09025
|
Cyber Shadows: Neutralizing Security Threats with AI and Targeted Policy
Measures
|
cs.CR cs.AI cs.CY econ.GN q-fin.EC
|
The digital age, driven by the AI revolution, brings significant
opportunities but also conceals security threats, which we refer to as cyber
shadows. These threats pose risks at individual, organizational, and societal
levels. This paper examines the systemic impact of these cyber threats and
proposes a comprehensive cybersecurity strategy that integrates AI-driven
solutions, such as Intrusion Detection Systems (IDS), with targeted policy
interventions. By combining technological and regulatory measures, we create a
multilevel defense capable of addressing both direct threats and indirect
negative externalities. We emphasize that the synergy between AI-driven
solutions and policy interventions is essential for neutralizing cyber threats
and mitigating their negative impact on the digital economy. Finally, we
underscore the need for continuous adaptation of these strategies, especially
in response to the rapid advancement of autonomous AI-driven attacks, to ensure
the creation of secure and resilient digital ecosystems.
|
2501.09026
|
Intelligent Anti-Money Laundering Solution Based upon Novel Community
Detection in Massive Transaction Networks on Spark
|
cs.SI cs.AI cs.CY
|
Criminals are using every means available to launder the profits from their
illegal activities into ostensibly legitimate assets. Meanwhile, most
commercial anti-money laundering systems are still rule-based, which cannot
adapt to the ever-changing tricks. Although some machine learning methods have
been proposed, they mainly focus on detecting abnormal behavior of single
accounts. Since money laundering is often carried out by criminal gangs, these
methods are not yet capable of comprehensively cracking down on such gangs. In
this paper, a systematic solution is
presented to find suspicious money laundering gangs. A temporal-directed
Louvain algorithm has been proposed to detect communities according to relevant
anti-money laundering patterns. All processes are implemented and optimized on
Spark platform. This solution can greatly improve the efficiency of anti-money
laundering work for financial regulation agencies.
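The temporal-directed Louvain variant is not specified in this abstract; as a reference point, the directed modularity that Louvain-style methods greedily optimize can be sketched below. Unit edge weights and the node-community assignment are toy assumptions.

```python
from collections import defaultdict

def directed_modularity(edges, community):
    """Directed modularity Q = (1/m) * sum_ij [A_ij - kout_i * kin_j / m]
    * delta(c_i, c_j), for unit-weight edges given as (source, target)."""
    m = len(edges)
    kout, kin = defaultdict(int), defaultdict(int)
    for u, v in edges:
        kout[u] += 1
        kin[v] += 1
    q = 0.0
    for u, v in edges:                      # observed within-community edges
        if community[u] == community[v]:
            q += 1.0
    nodes = set(kout) | set(kin)
    for u in nodes:                         # expected within-community edges
        for v in nodes:
            if community.get(u) == community.get(v):
                q -= kout[u] * kin[v] / m
    return q / m
```

A temporal-directed variant would additionally weight or filter edges by transaction timing so that detected communities respect money-flow order, per the abstract's anti-money-laundering patterns.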
|
2501.09027
|
Unveiling Behavioral Differences in Bilingual Information Operations: A
Network-Based Approach
|
cs.SI
|
Twitter has become a pivotal platform for conducting information operations
(IOs), particularly during high-stakes political events. In this study, we
analyze over a million tweets about the 2024 U.S. presidential election to
explore an under-studied area: the behavioral differences of IO drivers from
English- and Spanish-speaking communities. Using similarity graphs constructed
from behavioral patterns, we identify IO drivers in both languages and evaluate
the clustering quality of these graphs in an unsupervised setting. Our analysis
demonstrates how different network dismantling strategies, such as node pruning
and edge filtering, can impact clustering quality and the identification of
coordinated IO drivers. We also reveal significant differences in the topics
and political indicators between English and Spanish IO drivers. Additionally,
we investigate bilingual users who post in both languages, systematically
uncovering their distinct roles and behaviors compared to monolingual users.
These findings underscore the importance of robust, culturally and
linguistically adaptable IO detection methods to mitigate the risks of
influence campaigns on social media. Our code and data are available on GitHub:
https://github.com/bowenyi-pierre/humans-lab-hackathon-24.
|
2501.09028
|
Emergence of the Traffic Autonomous Zone (TAZ) for Telecommunication
Operations from Spatial Heterogeneity in Cellular Networks
|
cs.SI
|
In the field of telecommunications, various operations are driven by
different physical quantities. Each has its own patterns in time and space, but
all show some clustered structures in their spatial distribution. This reflects
a unified rule of human mobility, suggesting consistency among different
telecommunication regionalization objectives. With this in mind,
regionalization can be used to identify these patterns and can be applied to
improve management efficiency in the context of "autonomous networks". This
article introduces the "Traffic Autonomous Zone (TAZ)" concept. This approach
aims to create a reasonable unified regionalization scheme by identifying
spatial clusters. It is not just a practical way to partition cities based on
telecommunications needs, but it also captures the essential self-organizing
structure of cities. We present examples of this regionalization method using
real data. Compared to the popular Louvain community detection method, our
approach is on the Pareto frontier, allowing for a balance among various
metrics in telecommunications.
|
2501.09029
|
Enhancing Data Integrity through Provenance Tracking in Semantic Web
Frameworks
|
cs.CR cs.AI
|
This paper explores the integration of provenance tracking systems within the
context of Semantic Web technologies to enhance data integrity in diverse
operational environments. SURROUND Australia Pty Ltd demonstrates innovative
applications of the PROV Data Model (PROV-DM) and its Semantic Web variant,
PROV-O, to systematically record and manage provenance information across
multiple data processing domains. By employing RDF and Knowledge Graphs,
SURROUND addresses the critical challenges of shared entity identification and
provenance granularity. The paper highlights the company's architecture for
capturing comprehensive provenance data, enabling robust validation,
traceability, and knowledge inference. Through the examination of two projects,
we illustrate how provenance mechanisms not only improve data reliability but
also facilitate seamless integration across heterogeneous systems. Our findings
underscore the importance of sophisticated provenance solutions in maintaining
data integrity, serving as a reference for industry peers and academics engaged
in provenance research and implementation.
|
2501.09031
|
Synthetic Data and Health Privacy
|
cs.CR cs.AI cs.CY
|
This Viewpoint discusses generative artificial intelligence and safeguarding
privacy by using synthetic data as a substitute for private health data.
|
2501.09034
|
Physics-Informed Machine Learning for Microscale Drying of Plant-Based
Foods: A Systematic Review of Computational Models and Experimental Insights
|
cs.LG physics.bio-ph physics.comp-ph
|
This review examines the current state of research on microscale cellular
changes during the drying of plant-based food materials (PBFM), with particular
emphasis on computational modelling approaches. The review addresses the
critical need for advanced computational methods in microscale investigations.
We systematically analyse experimental studies in PBFM drying, highlighting
their contributions and limitations in capturing cellular-level phenomena,
including challenges in data acquisition and measurement accuracy under varying
drying conditions. The evolution of computational models for microstructural
investigations is thoroughly examined, from traditional numerical methods to
contemporary state-of-the-art approaches, with specific focus on their ability
to handle the complex, nonlinear properties of plant cellular materials.
Special attention is given to the emergence of data-driven models and their
limitations in predicting microscale cellular behaviour during PBFM drying,
particularly addressing challenges in dataset acquisition and model
generalization. The review provides an in-depth analysis of Physics-Informed
Machine Learning (PIML) frameworks, examining their theoretical foundations,
current applications in related fields, and unique advantages in combining
physical principles with neural network architectures. Through this
comprehensive assessment, we identify critical gaps in existing methodologies,
evaluate the trade-offs between different modelling approaches, and provide
insights into future research directions for improving our understanding of
cellular-level transformations during PBFM drying processes. The review
concludes with recommendations for integrating experimental and computational
approaches to advance the field of food preservation technology.
|
2501.09035
|
DomainDemo: a dataset of domain-sharing activities among different
demographic groups on Twitter
|
cs.SI cs.CY
|
Social media play a pivotal role in disseminating web content, particularly
during elections, yet our understanding of the association between demographic
factors and political discourse online remains limited. Here, we introduce a
unique dataset, DomainDemo, linking domains shared on Twitter (X) with the
demographic characteristics of associated users, including age, gender, race,
political affiliation, and geolocation, from 2011 to 2022. This new resource
was derived from a panel of over 1.5 million Twitter users matched against
their U.S. voter registration records, facilitating a better understanding of a
decade of information flows on one of the most prominent social media platforms
and trends in political and public discourse among registered U.S. voters from
different sociodemographic groups. By aggregating user demographic information
onto the domains, we derive five metrics that provide critical insights into
over 129,000 websites. In particular, the localness and partisan audience
metrics quantify the domains' geographical reach and ideological orientation,
respectively. These metrics show substantial agreement with existing
classifications, suggesting the effectiveness and reliability of DomainDemo's
approach.
|
2501.09038
|
Do generative video models learn physical principles from watching
videos?
|
cs.CV cs.AI cs.GR cs.LG
|
AI video generation is undergoing a revolution, with quality and realism
advancing rapidly. These advances have led to a passionate scientific debate:
Do video models learn "world models" that discover laws of physics -- or,
alternatively, are they merely sophisticated pixel predictors that achieve
visual realism without understanding the physical principles of reality? We
address this question by developing Physics-IQ, a comprehensive benchmark
dataset that can only be solved by acquiring a deep understanding of various
physical principles, like fluid dynamics, optics, solid mechanics, magnetism
and thermodynamics. We find that across a range of current models (Sora,
Runway, Pika, Lumiere, Stable Video Diffusion, and VideoPoet), physical
understanding is severely limited, and unrelated to visual realism. At the same
time, some test cases can already be successfully solved. This indicates that
acquiring certain physical principles from observation alone may be possible,
but significant challenges remain. While we expect rapid advances ahead, our
work demonstrates that visual realism does not imply physical understanding.
Our project page is at https://physics-iq.github.io; code at
https://github.com/google-deepmind/physics-IQ-benchmark.
|
2501.09039
|
Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in
Large Vision-Language Models
|
cs.CR cs.AI cs.CY
|
The rapid advancement of Large Vision-Language Models (LVLMs) has enhanced
their capabilities, offering potential applications ranging from content
creation to productivity enhancement. Despite their innovative potential, LVLMs
exhibit vulnerabilities, especially in generating potentially toxic or unsafe
responses. Malicious actors can exploit these vulnerabilities to propagate
toxic content in an automated (or semi-automated) manner, leveraging the
susceptibility of LVLMs to deception via strategically crafted prompts, without
fine-tuning or compute-intensive procedures. Despite red-teaming efforts and
the inherent risks associated with LVLMs, the exploration of their
vulnerabilities remains nascent and has yet to be addressed in a systematic manner. This
study systematically examines the vulnerabilities of open-source LVLMs,
including LLaVA, InstructBLIP, Fuyu, and Qwen, using adversarial prompt
strategies that simulate real-world social manipulation tactics informed by
social theories. Our findings show that (i) toxicity and insulting are the most
prevalent behaviors, with mean rates of 16.13% and 9.75%, respectively;
(ii) Qwen-VL-Chat, LLaVA-v1.6-Vicuna-7b, and InstructBLIP-Vicuna-7b are the
most vulnerable models, exhibiting toxic response rates of 21.50%, 18.30% and
17.90%, and insulting responses of 13.40%, 11.70% and 10.10%, respectively;
(iii) prompting strategies incorporating dark humor and multimodal toxic prompt
completion significantly elevated these vulnerabilities. Despite being
fine-tuned for safety, these models still generate content with varying degrees
of toxicity when prompted with adversarial inputs, highlighting the urgent need
for enhanced safety mechanisms and robust guardrails in LVLM development.
|
2501.09040
|
Pseudolabel guided pixels contrast for domain adaptive semantic
segmentation
|
cs.CV cs.LG
|
Semantic segmentation is essential for comprehending images, but the process
necessitates a substantial amount of detailed annotations at the pixel level.
Acquiring such annotations can be costly in the real world. Unsupervised domain
adaptation (UDA) for semantic segmentation is a technique that uses virtual
data with labels to train a model and adapts it to real data without labels.
Some recent works use contrastive learning, which is a powerful method for
self-supervised learning, to help with this technique. However, these works do
not take into account the diversity of features within each class when using
contrastive learning, which leads to errors in class prediction. We analyze the
limitations of these works and propose a novel framework called Pseudo-label
Guided Pixel Contrast (PGPC), which overcomes the disadvantages of previous
methods. We also investigate how to use more information from target images
without adding noise from pseudo-labels. We test our method on two standard UDA
benchmarks and show that it outperforms existing methods. Specifically, we
achieve relative improvements of 5.1% mIoU and 4.6% mIoU on the Grand Theft
Auto V (GTA5) to Cityscapes and SYNTHIA to Cityscapes tasks based on DAFormer,
respectively. Furthermore, our approach can enhance the performance of other
UDA approaches without increasing model complexity. Code is available at
https://github.com/embar111/pgpc
|
2501.09041
|
Generative Visual Commonsense Answering and Explaining with Generative
Scene Graph Constructing
|
cs.CV cs.CL
|
Visual Commonsense Reasoning, regarded as a challenging task in the pursuit
of advanced visual scene comprehension, has been used to diagnose the
reasoning ability of AI systems. However, reliable reasoning requires a good
grasp of the scene's details. Existing work fails to effectively exploit the
real-world object relationship information present within the scene, and
instead overly relies on knowledge from training memory. Based on these
observations, we propose a novel scene-graph-enhanced visual commonsense
reasoning generation method named \textit{\textbf{G2}}, which first utilizes
the image patches and LLMs to construct a location-free scene graph, and then
answers and explains based on the scene graph's information. We also propose
automatic scene graph filtering and selection strategies to absorb valuable
scene graph information during training. Extensive experiments are conducted on
the tasks and datasets of scene graph constructing and visual commonsense
answering and explaining, respectively. Experimental results and ablation
analysis demonstrate the effectiveness of our proposed framework.
|
2501.09042
|
CookingDiffusion: Cooking Procedural Image Generation with Stable
Diffusion
|
cs.CV cs.GR cs.LG
|
Recent advancements in text-to-image generation models have excelled in
creating diverse and realistic images. This success extends to food imagery,
where various conditional inputs like cooking styles, ingredients, and recipes
are utilized. However, a yet-unexplored challenge is generating a sequence of
procedural images based on cooking steps from a recipe. This could enhance the
cooking experience with visual guidance and possibly lead to an intelligent
cooking simulation system. To fill this gap, we introduce a novel task called
\textbf{cooking procedural image generation}. This task is inherently
demanding, as it strives to create photo-realistic images that align with
cooking steps while preserving sequential consistency. To collectively tackle
these challenges, we present \textbf{CookingDiffusion}, a novel approach that
leverages Stable Diffusion and three innovative Memory Nets to model procedural
prompts. These prompts encompass text prompts (representing cooking steps),
image prompts (corresponding to cooking images), and multi-modal prompts
(mixing cooking steps and images), ensuring the consistent generation of
cooking procedural images. To validate the effectiveness of our approach, we
preprocess the YouCookII dataset, establishing a new benchmark. Our
experimental results demonstrate that our model excels at generating
high-quality cooking procedural images with remarkable consistency across
sequential cooking steps, as measured by both the FID and the proposed Average
Procedure Consistency metrics. Furthermore, CookingDiffusion demonstrates the
ability to manipulate ingredients and cooking methods in a recipe. We will make
our code, models, and dataset publicly accessible.
|
2501.09044
|
TCMM: Token Constraint and Multi-Scale Memory Bank of Contrastive
Learning for Unsupervised Person Re-identification
|
cs.CV cs.AI
|
This paper proposes the ViT Token Constraint and Multi-scale Memory bank
(TCMM) method to address patch noise and feature inconsistency in
unsupervised person re-identification. Many excellent methods use ViT
features to obtain pseudo labels and clustering prototypes, then train the
model with contrastive learning. However, ViT processes images by performing
patch embedding, which inevitably introduces noise in patches and may
compromise the performance of the re-identification model. On the other hand,
previous memory-bank-based contrastive methods may lead to data inconsistency
due to batch size limitations. Furthermore, existing pseudo-label methods
often discard outlier samples that are difficult to cluster. This sacrifices the
potential value of outlier samples, limiting model diversity and
robustness. This paper introduces the ViT Token Constraint to mitigate the
damage caused by patch noise to the ViT architecture. The proposed Multi-scale
Memory enhances the exploration of outlier samples and maintains feature
consistency. Experimental results demonstrate that our system achieves
state-of-the-art performance on common benchmarks. The project is available at
\href{https://github.com/andy412510/TCMM}{https://github.com/andy412510/TCMM}.
|
2501.09045
|
Spatio-Temporal Foundation Models: Vision, Challenges, and Opportunities
|
cs.CV cs.AI cs.ET
|
Foundation models have revolutionized artificial intelligence, setting new
benchmarks in performance and enabling transformative capabilities across a
wide range of vision and language tasks. However, despite the prevalence of
spatio-temporal data in critical domains such as transportation, public health,
and environmental monitoring, spatio-temporal foundation models (STFMs) have
not yet achieved comparable success. In this paper, we articulate a vision for
the future of STFMs, outlining their essential characteristics and the
generalization capabilities necessary for broad applicability. We critically
assess the current state of research, identifying gaps relative to these ideal
traits, and highlight key challenges that impede their progress. Finally, we
explore potential opportunities and directions to advance research towards the
aim of effective and broadly applicable STFMs.
|
2501.09046
|
Learning Hemodynamic Scalar Fields on Coronary Artery Meshes: A
Benchmark of Geometric Deep Learning Models
|
eess.IV cs.CV cs.LG
|
Coronary artery disease, caused by the narrowing of coronary vessels due to
atherosclerosis, is the leading cause of death worldwide. The diagnostic gold
standard, fractional flow reserve (FFR), measures the trans-stenotic pressure
ratio during maximal vasodilation but is invasive and costly. This has driven
the development of virtual FFR (vFFR) using computational fluid dynamics (CFD)
to simulate coronary flow. Geometric deep learning algorithms have shown
promise for learning features on meshes, including cardiovascular research
applications. This study empirically analyzes various backends for predicting
vFFR fields in coronary arteries as CFD surrogates, comparing six backends for
learning hemodynamics on meshes using CFD solutions as ground truth.
The study has two parts: i) Using 1,500 synthetic left coronary artery
bifurcations, models were trained to predict pressure-related fields for vFFR
reconstruction, comparing different learning variables. ii) Using 427
patient-specific CFD simulations, experiments were repeated focusing on the
best-performing learning variable from the synthetic dataset.
Most backends performed well on the synthetic dataset, especially when
predicting pressure drop over the manifold. Transformer-based backends
outperformed others when predicting pressure and vFFR fields and were the only
models achieving strong performance on patient-specific data, excelling in both
average per-point error and vFFR accuracy in stenotic lesions.
These results suggest geometric deep learning backends can effectively
replace CFD for simple geometries, while transformer-based networks are
superior for complex, heterogeneous datasets. Pressure drop was identified as
the optimal network output for learning pressure-related fields.
|
2501.09048
|
Anthropomorphic Features for On-Line Signatures
|
cs.CV cs.LG
|
Many features have been proposed in on-line signature verification.
Generally, these features rely on the position of the on-line signature samples
and their dynamic properties, as recorded by a tablet. This paper proposes a
novel feature space to efficiently describe on-line signatures. Since producing
a signature requires a skeletal arm system and its associated muscles, the new
feature space is based on characterizing the movement of the shoulder, the
elbow and the wrist joints when signing. As this motion is not directly
obtained from a digital tablet, the new features are calculated by means of a
virtual skeletal arm (VSA) model, which simulates the architecture of a real
arm and forearm. Specifically, the VSA motion is described by its 3D joint
position and its joint angles. These anthropomorphic features are worked out
from both pen position and orientation through the VSA forward and direct
kinematic model. The anthropomorphic features' robustness is proved by
achieving state-of-the-art performance with several verifiers and multiple
benchmarks on third party signature databases, which were collected with
different devices and in different languages and scripts.
|
2501.09049
|
Dynamic-Aware Spatio-temporal Representation Learning for Dynamic MRI
Reconstruction
|
eess.IV cs.AI cs.CV
|
Dynamic MRI reconstruction, an inverse problem, has seen a surge in the
use of deep learning techniques. In particular, the practical difficulty of
obtaining ground truth data has led to the emergence of unsupervised learning
approaches. A recent promising method among them is implicit neural
representation (INR), which defines the data as a continuous function that maps
coordinate values to the corresponding signal values. This allows for filling
in missing information only with incomplete measurements and solving the
inverse problem effectively. Nevertheless, previous works incorporating this
method have faced drawbacks such as long optimization time and the need for
extensive hyperparameter tuning. To address these issues, we propose
Dynamic-Aware INR (DA-INR), an INR-based model for dynamic MRI reconstruction
that captures the spatial and temporal continuity of dynamic MRI data in the
image domain and explicitly incorporates the temporal redundancy of the data
into the model structure. As a result, DA-INR outperforms other models in
reconstruction quality even at extreme undersampling ratios while significantly
reducing optimization time and requiring minimal hyperparameter tuning.
|