| id | title | categories | abstract |
|---|---|---|---|
2501.06720
|
Multi-Label Scene Classification in Remote Sensing Benefits from Image
Super-Resolution
|
cs.CV cs.AI
|
Satellite imagery is a cornerstone for numerous Remote Sensing (RS)
applications; however, limited spatial resolution frequently hinders the
precision of such systems, especially in multi-label scene classification
tasks, which require a higher level of detail and feature differentiation. In this
study, we explore the efficacy of image Super-Resolution (SR) as a
pre-processing step to enhance the quality of satellite images and thus improve
downstream classification performance. We investigate four SR models -
SRResNet, HAT, SeeSR, and RealESRGAN - and evaluate their impact on multi-label
scene classification across various CNN architectures, including ResNet-50,
ResNet-101, ResNet-152, and Inception-v4. Our results show that applying SR
significantly improves downstream classification performance across various
metrics, demonstrating its ability to preserve spatial details critical for
multi-label tasks. Overall, this work offers valuable insights into the
selection of SR techniques for multi-label prediction in remote sensing and
presents an easy-to-integrate framework to improve existing RS systems.
|
2501.06721
|
On the effect of the average clustering coefficient on topology-based
link prediction in featureless graphs
|
cs.SI
|
Link prediction is a fundamental problem in graph theory with diverse
applications, including recommender systems, community detection, and
identifying spurious connections. While feature-based methods achieve high
accuracy, their reliance on node attributes limits their applicability in
featureless graphs. For such graphs, structure-based approaches, including
common neighbor-based and degree-dependent methods, are commonly employed.
However, the effectiveness of these methods depends on graph density, with
common neighbor-based algorithms performing well in dense graphs and
degree-dependent methods being more suitable for sparse or tree-like graphs.
Despite this, the literature lacks a clear criterion to distinguish between
dense and sparse graphs. This paper introduces the average clustering
coefficient as a criterion for assessing graph density to assist with the
choice of link prediction algorithms. To address the scarcity of datasets for
empirical analysis, we propose a novel graph generation method based on the
Barabasi-Albert model, which enables controlled variation of graph density
while preserving structural heterogeneity. Through comprehensive experiments on
synthetic and real-world datasets, we establish an empirical boundary for the
average clustering coefficient that facilitates the selection of effective link
prediction techniques.
|
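The abstract above contrasts common-neighbor scores with the average clustering coefficient as a density criterion. A minimal, self-contained sketch of both quantities on a made-up toy graph (the graph, threshold, and variable names are our illustration, not the paper's):

```python
# Toy illustration: average clustering coefficient of a featureless graph,
# plus the common-neighbors score for one candidate link. Graph is made up.
from itertools import combinations

adj = {  # small undirected graph as an adjacency dict
    0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2, 4}, 4: {3},
}

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return 2 * links / (k * (k - 1))

avg_cc = sum(clustering(n) for n in adj) / len(adj)

def common_neighbors(u, v):
    """Common-neighbors score for a candidate edge (u, v)."""
    return len(adj[u] & adj[v])

score = common_neighbors(1, 3)  # candidate link (1, 3) shares neighbors {0, 2}
```

In the paper's setting, `avg_cc` would be compared against an empirically established boundary to decide between common-neighbor-based and degree-dependent predictors.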
2501.06724
|
Wavelet Integrated Convolutional Neural Network for ECG Signal Denoising
|
eess.SP cs.CV
|
Wearable electrocardiogram (ECG) measurement using dry electrodes suffers
from high-intensity noise distortion, so a robust noise reduction method is
required. However, the overlapping frequency bands of ECG and noise make noise
reduction difficult; it is therefore necessary to provide a mechanism that
accounts for the characteristics of the noise based on its intensity and type. This
study proposes a convolutional neural network (CNN) model with an additional
wavelet transform layer that extracts the specific frequency features in a
clean ECG. Testing confirms that the proposed method effectively predicts
accurate ECG behavior with reduced noise by accounting for all frequency
domains. In an experiment, noisy signals in the signal-to-noise ratio (SNR)
range of -10 to 10 are evaluated, demonstrating that the efficiency of the
proposed method is higher when the SNR is small.
|
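To make the wavelet-layer idea concrete: a single level of a Haar discrete wavelet transform splits a signal into low-frequency approximation and high-frequency detail coefficients, which a CNN can then process per band. This is a generic sketch of the transform, not the paper's layer; the signal values are made up:

```python
# One level of a Haar DWT: band-separate a 1-D signal so that noise energy
# concentrated in one frequency band can be handled separately downstream.
import math

def haar_dwt(signal):
    """Single-level Haar wavelet transform of an even-length sequence."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]  # made-up samples
approx, detail = haar_dwt(sig)
# The transform is orthonormal, so signal energy is preserved across bands.
```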
2501.06726
|
Integrated Sensing and Edge AI: Realizing Intelligent Perception in 6G
|
cs.IT eess.SP math.IT
|
Sensing and edge artificial intelligence (AI) are envisioned as two essential
and interconnected functions in sixth-generation (6G) mobile networks. On the
one hand, sensing-empowered applications rely on powerful AI models to extract
features and understand semantics from ubiquitous wireless sensors. On the
other hand, the massive amount of sensory data serves as the fuel to
continuously refine edge AI models. This deep integration of sensing and edge
AI has given rise to a new task-oriented paradigm known as integrated sensing
and edge AI (ISEA), which features a holistic design approach to communication,
AI computation, and sensing for optimal sensing-task performance. In this
article, we present a comprehensive survey for ISEA. We first provide technical
preliminaries for sensing, edge AI, and new communication paradigms in ISEA.
Then, we study several use cases of ISEA to demonstrate its practical relevance
and introduce current standardization and industrial progress. Next, the design
principles, metrics, tradeoffs, and architectures of ISEA are established,
followed by a thorough overview of ISEA techniques, including digital air
interface, over-the-air computation, and advanced signal processing. Its
interplay with various 6G advancements, e.g., new physical-layer and networking
techniques, is presented. Finally, we present future research opportunities in
ISEA, including the integration of foundation models, convergence of ISEA and
integrated sensing and communications (ISAC), and ultra-low-latency ISEA.
|
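Among the techniques the survey covers, over-the-air computation (AirComp) is easy to illustrate: when sensors transmit analog values simultaneously, the channel's superposition delivers their sum, so the fusion center obtains an average in one shot rather than via per-sensor uplinks. A hedged toy simulation (readings and noise level are made up):

```python
# Toy AirComp: the superposed signal is the sum of all sensor transmissions
# plus receiver noise; dividing by the sensor count yields the average.
import random

random.seed(0)
readings = [0.8, 1.1, 0.9, 1.2]        # local sensor observations
noise = random.gauss(0, 0.01)          # receiver noise on the superposed signal
superposed = sum(readings) + noise     # what the multiple-access channel delivers
estimate = superposed / len(readings)  # over-the-air average
true_mean = sum(readings) / len(readings)
```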
2501.06728
|
Measuring the Robustness of Reference-Free Dialogue Evaluation Systems
|
cs.CL
|
Advancements in dialogue systems powered by large language models (LLMs) have
outpaced the development of reliable evaluation metrics, particularly for
diverse and creative responses. We present a benchmark for evaluating the
robustness of reference-free dialogue metrics against four categories of
adversarial attacks: speaker tag prefixes, static responses, ungrammatical
responses, and repeated conversational context. We analyze metrics such as
DialogRPT, UniEval, and PromptEval -- a prompt-based method leveraging LLMs --
across grounded and ungrounded datasets. By examining both their correlation
with human judgment and susceptibility to adversarial attacks, we find that
these two axes are not always aligned; metrics that appear to be equivalent
when judged by traditional benchmarks may, in fact, vary in their scores of
adversarial responses. These findings motivate the development of nuanced
evaluation frameworks to address real-world dialogue challenges.
|
2501.06730
|
Better Prompt Compression Without Multi-Layer Perceptrons
|
cs.CL cs.LG
|
Prompt compression is a promising approach to speeding up language model
inference without altering the generative model. Prior works compress prompts
into smaller sequences of learned tokens using an encoder that is trained as a
Low-Rank Adaptation (LoRA) of the inference language model. However, we show
that the encoder does not need to keep the original language model's
architecture to achieve useful compression. We introduce the Attention-Only
Compressor (AOC), which learns a prompt compression encoder after removing the
multilayer perceptron (MLP) layers in the Transformer blocks of a language
model, resulting in an encoder with roughly 67% fewer parameters compared to the
original model. Intriguingly, we find that, across a range of compression ratios
up to 480x, AOC can better regenerate prompts and outperform a baseline
compression encoder that is a LoRA of the inference language model without
removing MLP layers. These results demonstrate that the architecture of prompt
compression encoders does not need to be identical to that of the original
decoder language model, paving the way for further research into architectures
and approaches for prompt compression.
|
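The roughly 67% figure follows from standard Transformer parameter counting. In a block with hidden size d, the attention projections (Q, K, V, output) hold 4·d² parameters while an MLP with expansion factor 4 holds 8·d²; dropping the MLP removes 8/12 = 2/3 of the block. A back-of-the-envelope check (biases and layer norms ignored; the hidden size is an illustrative choice, not the paper's):

```python
# Parameter count of one Transformer block with and without its MLP.
d = 768                      # hidden size, e.g. a GPT-2-small-like model
attn = 4 * d * d             # Q, K, V, O projection matrices
mlp = 2 * d * (4 * d)        # up- and down-projections of the feed-forward net
full_block = attn + mlp
aoc_block = attn             # an attention-only block keeps just attention
reduction = 1 - aoc_block / full_block   # fraction of parameters removed
```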
2501.06736
|
ZOQO: Zero-Order Quantized Optimization
|
cs.LG cs.CL
|
The increasing computational and memory demands in deep learning present
significant challenges, especially in resource-constrained environments. We
introduce a zero-order quantized optimization (ZOQO) method designed for
training models with quantized parameters and operations. Our approach
leverages zero-order approximations of the gradient sign and adapts the
learning process to maintain the parameters' quantization without the need for
full-precision gradient calculations. We demonstrate the effectiveness of ZOQO
through experiments on fine-tuning large language models and black-box
adversarial attacks. Despite the limitations of zero-order, quantized-operation
training, our method achieves competitive performance compared to
full-precision methods, highlighting its potential for low-resource
environments.
|
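A minimal sketch of the ingredients ZOQO combines, on a one-dimensional toy problem (this is our illustration, not the paper's algorithm): the gradient sign is estimated from two function evaluations only, and every update is snapped back onto a fixed quantization grid, so no full-precision gradients or parameters are ever needed.

```python
# Zero-order sign-based descent on a quantized parameter (toy objective).
def quantize(w, step=0.25):
    """Snap a value onto a uniform quantization grid of the given step."""
    return round(w / step) * step

def loss(w):
    return (w - 3.0) ** 2     # made-up objective with minimum at w = 3

w, step = quantize(0.0), 0.25
for _ in range(50):
    # zero-order sign estimate: no backprop, only two function evaluations
    g_sign = 1.0 if loss(w + step) > loss(w - step) else -1.0
    w = quantize(w - step * g_sign)   # the update stays on the quantized grid
```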
2501.06740
|
Rice Leaf Disease Detection: A Comparative Study Between CNN,
Transformer and Non-neural Network Architectures
|
cs.CV
|
In nations such as Bangladesh, agriculture plays a vital role in providing
livelihoods for a significant portion of the population. Identifying and
classifying plant diseases early is critical to prevent their spread and
minimize their impact on crop yield and quality. Various computer vision
techniques can be used for such detection and classification. While CNNs have
long been dominant in such image classification tasks, vision transformers
(ViTs) have recently become equally competitive. In this paper, we study
various computer vision techniques for Bangladeshi rice leaf disease detection.
We use Dhan-Shomadhan, a Bangladeshi rice leaf disease dataset, to experiment
with various CNN and ViT models. We also compare the performance of these deep
neural network architectures with a traditional machine learning approach, the
Support Vector Machine (SVM). We leverage transfer learning for better
generalization with a smaller amount of training data. Among the models tested,
ResNet50 exhibited the best performance over other CNN and transformer-based
models making it the optimal choice for this task.
|
2501.06741
|
Hierarchical Divide-and-Conquer for Fine-Grained Alignment in LLM-Based
Medical Evaluation
|
cs.CL
|
In the rapidly evolving landscape of large language models (LLMs) for medical
applications, ensuring the reliability and accuracy of these models in clinical
settings is paramount. Existing benchmarks often focus on fixed-format tasks
like multiple-choice QA, which fail to capture the complexity of real-world
clinical diagnostics. Moreover, traditional evaluation metrics and LLM-based
evaluators struggle with misalignment, often providing oversimplified
assessments that do not adequately reflect human judgment. To address these
challenges, we introduce HDCEval, a Hierarchical Divide-and-Conquer Evaluation
framework tailored for fine-grained alignment in medical evaluation. HDCEval is
built on a set of fine-grained medical evaluation guidelines developed in
collaboration with professional doctors, encompassing Patient Question
Relevance, Medical Knowledge Correctness, and Expression. The framework
decomposes complex evaluation tasks into specialized subtasks, each evaluated
by expert models trained through Attribute-Driven Token Optimization (ADTO) on
a meticulously curated preference dataset. This hierarchical approach ensures
that each aspect of the evaluation is handled with expert precision, leading to
a significant improvement in alignment with human evaluators.
|
2501.06746
|
Diversified Augmentation with Domain Adaptation for Debiased Video
Temporal Grounding
|
cs.CV
|
Temporal sentence grounding in videos (TSGV) faces challenges due to public
TSGV datasets containing significant temporal biases, which are attributed to
the uneven temporal distributions of target moments. Existing methods generate
augmented videos, where target moments are forced to have varying temporal
locations. However, since the video lengths of the given datasets have small
variations, only changing the temporal locations results in poor generalization
ability in videos with varying lengths. In this paper, we propose a novel
training framework complemented by diversified data augmentation and a domain
discriminator. The data augmentation generates videos with various lengths and
target moment locations to diversify temporal distributions. However, augmented
videos inevitably exhibit distinct feature distributions which may introduce
noise. To address this, we design a domain adaptation auxiliary task to
diminish feature discrepancies between original and augmented videos. We also
encourage the model to produce distinct predictions for videos with the same
text queries but different moment locations to promote debiased training.
Experiments on Charades-CD and ActivityNet-CD datasets demonstrate the
effectiveness and generalization abilities of our method in multiple grounding
structures, achieving state-of-the-art results.
|
2501.06749
|
Static Segmentation by Tracking: A Frustratingly Label-Efficient
Approach to Fine-Grained Segmentation
|
cs.CV cs.AI
|
We study image segmentation in the biological domain, particularly trait and
part segmentation from specimen images (e.g., butterfly wing stripes or beetle
body parts). This is a crucial, fine-grained task that aids in understanding
the biology of organisms. The conventional approach involves hand-labeling
masks, often for hundreds of images per species, and training a segmentation
model to generalize these labels to other images, which can be exceedingly
laborious. We present a label-efficient method named Static Segmentation by
Tracking (SST). SST is built upon the insight that, while specimens of the same
species have inherent variations, the traits and parts we aim to segment show
up consistently. This motivates us to concatenate specimen images into a
``pseudo-video'' and reframe trait and part segmentation as a tracking problem.
Concretely, SST generates masks for unlabeled images by propagating annotated
or predicted masks from the ``pseudo-preceding'' images. Powered by the Segment
Anything Model 2 (SAM~2), initially developed for video segmentation, we show
that SST can achieve high-quality trait and part segmentation with merely one
labeled image per species -- a breakthrough for analyzing specimen images. We
further develop a cycle-consistent loss to fine-tune the model, again using one
labeled image. Additionally, we highlight the broader potential of SST,
including one-shot instance segmentation on images taken in the wild and
trait-based image retrieval.
|
2501.06751
|
Padding Tone: A Mechanistic Analysis of Padding Tokens in T2I Models
|
cs.CL cs.CV
|
Text-to-image (T2I) diffusion models rely on encoded prompts to guide the
image generation process. Typically, these prompts are extended to a fixed
length by adding padding tokens before text encoding. Despite being a default
practice, the influence of padding tokens on the image generation process has
not been investigated. In this work, we conduct the first in-depth analysis of
the role padding tokens play in T2I models. We develop two causal techniques to
analyze how information is encoded in the representation of tokens across
different components of the T2I pipeline. Using these techniques, we
investigate when and how padding tokens impact the image generation process.
Our findings reveal three distinct scenarios: padding tokens may affect the
model's output during text encoding, during the diffusion process, or be
effectively ignored. Moreover, we identify key relationships between these
scenarios and the model's architecture (cross or self-attention) and its
training process (frozen or trained text encoder). These insights contribute to
a deeper understanding of the mechanisms of padding tokens, potentially
informing future model design and training practices in T2I systems.
|
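The "default practice" the abstract refers to is easy to picture: tokenized prompts are truncated or right-padded with a pad token to a fixed length before reaching the text encoder. A hypothetical toy version (the token IDs and length of 8 are made up; real T2I pipelines commonly pad to 77 positions):

```python
# Fixed-length prompt padding as used before text encoding in T2I pipelines.
PAD, MAX_LEN = 0, 8

def pad_prompt(token_ids, max_len=MAX_LEN):
    """Truncate or right-pad a token-ID sequence to a fixed length."""
    return (token_ids + [PAD] * max_len)[:max_len]

padded = pad_prompt([101, 7592, 2088, 102])  # a short made-up prompt
# The encoder sees 4 content tokens followed by 4 padding tokens; the paper
# asks whether those trailing positions end up carrying usable information.
```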
2501.06753
|
Procedural Fairness and Its Relationship with Distributive Fairness in
Machine Learning
|
cs.LG cs.CY
|
Fairness in machine learning (ML) has garnered significant attention in
recent years. While existing research has predominantly focused on the
distributive fairness of ML models, there has been limited exploration of
procedural fairness. This paper proposes a novel method to achieve procedural
fairness during the model training phase. The effectiveness of the proposed
method is validated through experiments conducted on one synthetic and six
real-world datasets. Additionally, this work studies the relationship between
procedural fairness and distributive fairness in ML models. On the one hand, the
impact of dataset bias and the procedural fairness of an ML model on its
distributive fairness is examined. The results highlight a significant
influence of both dataset bias and procedural fairness on distributive
fairness. On the other hand, the distinctions between optimizing procedural and
distributive fairness metrics are analyzed. Experimental results demonstrate
that optimizing procedural fairness metrics mitigates biases introduced or
amplified by the decision-making process, thereby ensuring fairness in the
decision-making process itself, as well as improving distributive fairness. In
contrast, optimizing distributive fairness metrics encourages the ML model's
decision-making process to favor disadvantaged groups, counterbalancing the
inherent preferences for advantaged groups present in the dataset and
ultimately achieving distributive fairness.
|
2501.06756
|
Generative AI Enabled Robust Sensor Placement in Cyber-Physical Power
Systems: A Graph Diffusion Approach
|
eess.SY cs.SY
|
With advancements in physical power systems and network technologies,
integrated Cyber-Physical Power Systems (CPPS) have significantly enhanced
system monitoring and control efficiency and reliability. This integration,
however, introduces complex challenges in designing coherent CPPS, particularly
as few studies concurrently address the deployment of physical layers and
communication connections in the cyber layer. This paper addresses these
challenges by proposing a framework for robust sensor placement to optimize
anomaly detection in the physical layer and enhance communication resilience in
the cyber layer. We model the CPPS as an interdependent network via a graph,
allowing for simultaneous consideration of both layers. Then, we adopt the
Log-normal Shadowing Path Loss (LNSPL) model to ensure reliable data
transmission. Additionally, we leverage the Fiedler value to measure graph
resilience against line failures and three anomaly detectors to fortify system
safety. However, the optimization problem is NP-hard. Therefore, we introduce
the Experience Feedback Graph Diffusion (EFGD) algorithm, which utilizes a
diffusion process to generate optimal sensor placement strategies. This
algorithm incorporates cross-entropy gradient and experience feedback
mechanisms to expedite convergence and generate higher reward strategies.
Extensive simulations demonstrate that the EFGD algorithm enhances model
convergence by 18.9% over existing graph diffusion methods and improves average
reward by 22.90% compared to Denoising Diffusion Policy Optimization (DDPO) and
19.57% compared to Graph Diffusion Policy Optimization (GDPO), thereby
significantly bolstering the robustness and reliability of CPPS operations.
|
2501.06760
|
Metaprism Design for Wireless Communications: Angle-Frequency Analysis,
Physical Realizability Constraints, and Performance Optimization
|
cs.IT math.IT
|
Recent advancements in smart radio environment technologies aim to enhance
wireless network performance through the use of low-cost electromagnetic (EM)
devices. Among these, reconfigurable intelligent surfaces (RIS) have garnered
attention for their ability to modify incident waves via programmable
scattering elements. An RIS is a nearly passive device, in which the tradeoff
between performance, power consumption, and optimization overhead depends on how
often the RIS needs to be reconfigured. This paper focuses on the metaprism
(MTP), a static frequency-selective metasurface which relaxes the
reconfiguration requirements of RISs and allows for the creation of different
beams at various frequencies. In particular, we address the design of an ideal
MTP based on its frequency-dependent reflection coefficients, defining the
general properties necessary to achieve the desired beam steering function in
the angle-frequency domain. We also discuss the limitations of previous studies
that employed oversimplified models, which may compromise performance. Key
contributions include a detailed exploration of the equivalence of the MTP to
an ideal S-parameter multiport model and an analysis of its implementation
using Foster's circuits. Additionally, we introduce a realistic multiport
network model that incorporates aspects overlooked by ideal scattering models,
along with an ad hoc optimization strategy for this model. The performance of
the proposed optimization approach and circuit implementation is validated
through simulations using a commercial full-wave EM simulator, confirming the
effectiveness of the proposed method.
|
2501.06761
|
VidChain: Chain-of-Tasks with Metric-based Direct Preference
Optimization for Dense Video Captioning
|
cs.CV
|
Despite the advancements of Video Large Language Models (VideoLLMs) in
various tasks, they struggle with fine-grained temporal understanding, such as
Dense Video Captioning (DVC). DVC is a complicated task of describing all
events within a video while also temporally localizing them, which integrates
multiple fine-grained tasks, including video segmentation, video captioning,
and temporal video grounding. Previous VideoLLMs attempt to solve DVC in a
single step, failing to utilize their reasoning capability. Moreover, previous
training objectives for VideoLLMs do not fully reflect the evaluation metrics,
therefore not providing supervision directly aligned to target tasks. To
address such a problem, we propose a novel framework named VidChain comprised
of Chain-of-Tasks (CoTasks) and Metric-based Direct Preference Optimization
(M-DPO). CoTasks decompose a complex task into a sequence of sub-tasks,
allowing VideoLLMs to leverage their reasoning capabilities more effectively.
M-DPO aligns a VideoLLM with evaluation metrics, providing fine-grained
supervision to each task that is well-aligned with metrics. Applied to two
different VideoLLMs, VidChain consistently improves their fine-grained video
understanding, thereby outperforming previous VideoLLMs on two different DVC
benchmarks and also on the temporal video grounding task. Code is available at
\url{https://github.com/mlvlab/VidChain}.
|
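M-DPO builds on Direct Preference Optimization. As context, here is the standard DPO loss in its usual per-pair form (this is generic DPO, not the paper's metric-based variant): the policy is pushed to prefer the higher-ranked completion relative to a frozen reference model.

```python
# Standard DPO pairwise loss: -log sigmoid(beta * (policy margin - ref margin)).
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """logp_w/logp_l: policy log-probs of preferred/dispreferred completions."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1 / (1 + math.exp(-margin)))

# Made-up log-probs where the preferred completion is relatively more likely
# under the policy than under the reference, so the loss falls below log 2.
loss = dpo_loss(logp_w=-4.0, logp_l=-6.0, ref_logp_w=-5.0, ref_logp_l=-5.0)
```

In the paper's setting, the preferred/dispreferred pair would be chosen by the evaluation metric for each sub-task, which is what makes the supervision metric-aligned.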
2501.06762
|
Improving the adaptive and continuous learning capabilities of
artificial neural networks: Lessons from multi-neuromodulatory dynamics
|
q-bio.NC cs.LG cs.NE
|
Continuous, adaptive learning-the ability to adapt to the environment and
improve performance-is a hallmark of both natural and artificial intelligence.
Biological organisms excel in acquiring, transferring, and retaining knowledge
while adapting to dynamic environments, making them a rich source of
inspiration for artificial neural networks (ANNs). This study explores how
neuromodulation, a fundamental feature of biological learning systems, can help
address challenges such as catastrophic forgetting and enhance the robustness
of ANNs in continuous learning scenarios. Driven by neuromodulators including
dopamine (DA), acetylcholine (ACh), serotonin (5-HT) and noradrenaline (NA),
neuromodulatory processes in the brain operate at multiple scales, facilitating
dynamic responses to environmental changes through mechanisms ranging from
local synaptic plasticity to global network-wide adaptability. Importantly, the
relationships between neuromodulators, and their interplay in the modulation of
sensory and cognitive processes, are more complex than expected, demonstrating a
"many-to-one" neuromodulator-to-task mapping. To inspire the design of novel
neuromodulation-aware learning rules, we highlight (i) how
multi-neuromodulatory interactions enrich single-neuromodulator-driven
learning, (ii) the impact of neuromodulators at multiple spatial and temporal
scales, and correspondingly, (iii) strategies to integrate neuromodulated
learning into or approximate it in ANNs. To illustrate these principles, we
present a case study to demonstrate how neuromodulation-inspired mechanisms,
such as DA-driven reward processing and NA-based cognitive flexibility, can
enhance ANN performance in a Go/No-Go task. By integrating multi-scale
neuromodulation, we aim to bridge the gap between biological learning and
artificial systems, paving the way for ANNs with greater flexibility,
robustness, and adaptability.
|
2501.06764
|
MTPareto: A MultiModal Targeted Pareto Framework for Fake News Detection
|
cs.LG
|
Multimodal fake news detection is essential for maintaining the authenticity
of Internet multimedia information. Significant differences in the form and
content of multimodal information lead to intensified optimization conflicts,
hindering effective model training and reducing the effectiveness of existing
bimodal fusion methods. To address this problem, we propose the MTPareto
framework to optimize multimodal fusion, using a Targeted Pareto (TPareto)
optimization algorithm for fusion-level-specific objective learning with a
certain focus. Based on the designed hierarchical fusion network, the algorithm
defines three fusion levels with corresponding losses and implements
all-modal-oriented Pareto gradient integration for each. This approach
accomplishes superior multimodal fusion by utilizing the information obtained
from intermediate fusion to provide positive effects to the entire process.
Experiment results on FakeSV and FVC datasets show that the proposed framework
outperforms baselines and the TPareto optimization algorithm achieves 2.40% and
1.89% accuracy improvement respectively.
|
2501.06766
|
On the Complexity of Global Necessary Reasons to Explain Classification
|
cs.AI
|
Explainable AI has garnered considerable attention in recent years, as
understanding the reasons behind decisions or predictions made by AI systems is
crucial for their successful adoption. Explaining classifiers' behavior is one
prominent problem. Work in this area has proposed notions of both local and
global explanations, where the former are concerned with explaining a
classifier's behavior for a specific instance, while the latter are concerned
with explaining the overall classifier's behavior regardless of any specific
instance. In this paper, we focus on global explanations, and explain
classification in terms of ``minimal'' necessary conditions for the classifier
to assign a specific class to a generic instance. We carry out a thorough
complexity analysis of the problem for natural minimality criteria and
important families of classifiers considered in the literature.
|
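The notion of a global necessary reason can be made concrete with a brute-force toy (our construction, not the paper's algorithm): a literal is a necessary condition for class 1 if it holds in every instance the classifier maps to class 1. The classifier below is made up:

```python
# Enumerate single-literal global necessary reasons for a toy boolean classifier.
from itertools import product

def classify(x):                      # x = (x0, x1, x2), booleans
    return x[0] and (x[1] or x[2])    # made-up classifier

positives = [x for x in product([False, True], repeat=3) if classify(x)]

# Candidate literals: "feature i is True" / "feature i is False"; a literal is
# necessary iff it holds in every positively classified instance.
necessary = [(i, val)
             for i in range(3) for val in (True, False)
             if all(x[i] == val for x in positives)]
# Here x0 = True is necessary: the classifier never assigns class 1 without it.
```

The paper's complexity analysis concerns richer ("minimal") conjunctive conditions and realistic classifier families, where such brute force is infeasible.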
2501.06769
|
ODPG: Outfitting Diffusion with Pose Guided Condition
|
cs.CV
|
Virtual Try-On (VTON) technology allows users to visualize how clothes would
look on them without physically trying them on, gaining traction with the rise
of digitalization and online shopping. Traditional VTON methods, often using
Generative Adversarial Networks (GANs) and Diffusion models, face challenges in
achieving high realism and handling dynamic poses. This paper introduces
Outfitting Diffusion with Pose Guided Condition (ODPG), a novel approach that
leverages a latent diffusion model with multiple conditioning inputs during the
denoising process. By transforming garment, pose, and appearance images into
latent features and integrating these features in a UNet-based denoising model,
ODPG achieves non-explicit synthesis of garments on dynamically posed human
images. Our experiments on the FashionTryOn dataset and a subset of the
DeepFashion dataset demonstrate that ODPG generates realistic VTON images with fine-grained
texture details across various poses, utilizing an end-to-end architecture
without the need for explicit garment warping processes. Future work will focus
on generating VTON outputs in video format and on applying our attention
mechanism, as detailed in the Method section, to other domains with limited
data.
|
2501.06770
|
SuperNeRF-GAN: A Universal 3D-Consistent Super-Resolution Framework for
Efficient and Enhanced 3D-Aware Image Synthesis
|
cs.CV
|
Neural volume rendering techniques, such as NeRF, have revolutionized
3D-aware image synthesis by enabling the generation of images of a single scene
or object from various camera poses. However, the high computational cost of
NeRF presents challenges for synthesizing high-resolution (HR) images. Most
existing methods address this issue by leveraging 2D super-resolution, which
compromises 3D-consistency. Other methods propose radiance manifolds or
two-stage generation to achieve 3D-consistent HR synthesis, yet they are
limited to specific synthesis tasks, reducing their universality. To tackle
these challenges, we propose SuperNeRF-GAN, a universal framework for
3D-consistent super-resolution. A key highlight of SuperNeRF-GAN is its
seamless integration with NeRF-based 3D-aware image synthesis methods: it
can simultaneously enhance the resolution of generated images while preserving
3D-consistency and reducing computational cost. Specifically, given a
pre-trained generator capable of producing a NeRF representation such as
tri-plane, we first perform volume rendering to obtain a low-resolution image
with corresponding depth and normal map. Then, we employ a NeRF
Super-Resolution module which learns a network to obtain a high-resolution
NeRF. Next, we propose a novel Depth-Guided Rendering process which contains
three simple yet effective steps, including the construction of a
boundary-correct multi-depth map through depth aggregation, a normal-guided
depth super-resolution and a depth-guided NeRF rendering. Experimental results
demonstrate the superior efficiency, 3D-consistency, and quality of our
approach. Additionally, ablation studies confirm the effectiveness of our
proposed components.
|
2501.06773
|
Pareto Set Learning for Multi-Objective Reinforcement Learning
|
cs.LG
|
Multi-objective decision-making problems have emerged in numerous real-world
scenarios, such as video games, navigation and robotics. Considering the clear
advantages of Reinforcement Learning (RL) in optimizing decision-making
processes, researchers have delved into the development of Multi-Objective RL
(MORL) methods for solving multi-objective decision problems. However, previous
methods either cannot obtain the entire Pareto front, or employ only a single
policy network for all the preferences over multiple objectives, which may not
produce personalized solutions for each preference. To address these
limitations, we propose a novel decomposition-based framework for MORL, Pareto
Set Learning for MORL (PSL-MORL), that harnesses the generation capability of
hypernetwork to produce the parameters of the policy network for each
decomposition weight, generating relatively distinct policies for various
scalarized subproblems with high efficiency. PSL-MORL is a general framework
compatible with any RL algorithm. The theoretical result guarantees the
superiority of the model capacity of PSL-MORL and the optimality of the
obtained policy network. Through extensive experiments on diverse benchmarks,
we demonstrate the effectiveness of PSL-MORL in achieving dense coverage of the
Pareto front, significantly outperforming state-of-the-art MORL methods in the
hypervolume and sparsity indicators.
|
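Decomposition-based MORL, the family PSL-MORL belongs to, reduces each preference over objectives to a scalarized subproblem. A generic sketch of that step (not PSL-MORL's hypernetwork; the policies, reward vectors, and preference weights are made up):

```python
# Weighted-sum scalarization: each preference vector defines its own subproblem,
# and different preferences can select different policies on the Pareto front.
def scalarize(rewards, weights):
    """Weighted-sum scalarization of a multi-objective reward vector."""
    return sum(r * w for r, w in zip(rewards, weights))

candidates = {"policy_a": (3.0, 1.0), "policy_b": (1.0, 3.0)}  # (speed, safety)
for prefs in [(0.9, 0.1), (0.1, 0.9)]:
    best = max(candidates, key=lambda p: scalarize(candidates[p], prefs))
    # a speed-leaning preference picks policy_a; a safety-leaning one, policy_b
```

PSL-MORL's contribution is to learn, via a hypernetwork, a distinct policy network for every such decomposition weight rather than sharing one network across all preferences.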
2501.06775
|
Hierarchy-Boosted Funnel Learning for Identifying Semiconductors with
Ultralow Lattice Thermal Conductivity
|
cond-mat.mtrl-sci cs.LG
|
Data-driven machine learning (ML) has demonstrated tremendous potential in
material property predictions. However, the scarcity of materials data with
costly property labels in the vast chemical space presents a significant
challenge for ML in efficiently predicting properties and uncovering
structure-property relationships. Here, we propose a novel hierarchy-boosted
funnel learning (HiBoFL) framework, which is successfully applied to identify
semiconductors with ultralow lattice thermal conductivity
($\kappa_\mathrm{L}$). By training on only a few hundred materials targeted by
unsupervised learning from a pool of hundreds of thousands, we achieve
efficient and interpretable supervised predictions of ultralow
$\kappa_\mathrm{L}$, thereby circumventing large-scale brute-force calculations
without clear objectives. As a result, we provide a list of candidates with
ultralow $\kappa_\mathrm{L}$ for potential thermoelectric applications and
discover a new factor that significantly influences structural anharmonicity.
This study offers a novel practical pathway for accelerating the discovery of
functional materials.
|
2501.06780
|
COMPASS: A Compiler Framework for Resource-Constrained Crossbar-Array
Based In-Memory Deep Learning Accelerators
|
cs.AR cs.DC cs.ET cs.LG cs.PL
|
Recently, crossbar array based in-memory accelerators have been gaining
interest due to their high throughput and energy efficiency. While software and
compiler support for the in-memory accelerators has also been introduced, they
are currently limited to the case where all weights are assumed to be on-chip.
This limitation becomes apparent as network sizes increasingly outgrow the
available in-memory footprint.
Weight replacement schemes are essential to address this issue. We propose
COMPASS, a compiler framework for resource-constrained crossbar-based
processing-in-memory (PIM) deep neural network (DNN) accelerators. COMPASS is
specially targeted for networks that exceed the capacity of PIM crossbar
arrays, necessitating access to external memories. We propose an algorithm to
determine the optimal partitioning that divides the layers so that each
partition can be accelerated on chip. Our scheme takes into account the data
dependence between layers, core utilization, and the number of write
instructions to minimize latency and memory accesses and to improve energy
efficiency. Simulation results demonstrate that COMPASS can accommodate many
more networks with a minimal memory footprint, while improving throughput by
1.78X and providing 1.28X savings in energy-delay product (EDP) over baseline
partitioning methods.
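One way to see the partitioning step: given per-layer weight footprints and a fixed crossbar capacity, contiguous partitions that each fit on-chip can be found by dynamic programming. The sketch below minimizes only the partition count (a rough proxy for weight-rewrite cost); the paper's objective also models data dependence, core utilization, and write instructions, and all numbers here are made up:

```python
def min_partitions(layer_sizes, capacity):
    """Fewest contiguous layer partitions such that each partition's
    weights fit the on-chip crossbar capacity (a proxy objective; the
    real cost model also covers latency, writes, and utilization)."""
    n = len(layer_sizes)
    best = [0] + [float("inf")] * n   # best[i]: min partitions for layers 1..i
    for i in range(1, n + 1):
        total = 0
        for j in range(i, 0, -1):      # candidate partition covers layers j..i
            total += layer_sizes[j - 1]
            if total > capacity:
                break
            best[i] = min(best[i], best[j - 1] + 1)
    return best[n]

print(min_partitions([3, 2, 2, 5, 1], capacity=5))  # → 4
```

The layer of size 5 must sit alone (5+1 exceeds capacity), which forces four partitions here.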
|
2501.06781
|
Eliza: A Web3 friendly AI Agent Operating System
|
cs.AI
|
AI Agent, powered by large language models (LLMs) as its cognitive core, is
an intelligent agentic system capable of autonomously controlling and
determining its execution paths under the user's instructions. With the surge
in LLM capabilities and various plugins, such as RAG, text-to-image/video/3D,
and more, the potential of AI Agents has been vastly expanded, with their
capabilities growing stronger by the day. However, at the intersection between
AI and web3, there is currently no ideal agentic framework that can seamlessly
integrate web3 applications into AI agent functionalities. In this paper, we
propose Eliza, the first open-source web3-friendly Agentic framework that makes
the deployment of web3 applications effortless. We emphasize that every aspect
of Eliza is a regular TypeScript program under the full control of its user,
and it seamlessly integrates with web3 (i.e., reading and writing blockchain
data, interacting with smart contracts, etc.). Furthermore, we show how stable
performance is achieved through the pragmatic implementation of the key
components of Eliza's runtime. Our code is publicly available at
https://github.com/ai16z/eliza.
|
2501.06783
|
Cost-Effective Robotic Handwriting System with AI Integration
|
cs.RO cs.AI cs.SY eess.SY
|
This paper introduces a cost-effective robotic handwriting system designed to
replicate human-like handwriting with high precision. Combining a Raspberry Pi
Pico microcontroller, 3D-printed components, and a machine learning-based
handwriting generation model implemented via TensorFlow, the system converts
user-supplied text into realistic stroke trajectories. By leveraging
lightweight 3D-printed materials and efficient mechanical designs, the system
achieves a total hardware cost of approximately \$56, significantly
undercutting commercial alternatives. Experimental evaluations demonstrate
handwriting precision within $\pm$0.3 millimeters and a writing speed of
approximately 200 mm/min, positioning the system as a viable solution for
educational, research, and assistive applications. This study seeks to lower
the barriers to personalized handwriting technologies, making them accessible
to a broader audience.
|
2501.06785
|
3DCoMPaT200: Language-Grounded Compositional Understanding of Parts and
Materials of 3D Shapes
|
cs.CV cs.CL
|
Understanding objects in 3D at the part level is essential for humans and
robots to navigate and interact with the environment. Current datasets for
part-level 3D object understanding encompass a limited range of categories. For
instance, the ShapeNet-Part and PartNet datasets include only 16 and 24 object
categories, respectively. The 3DCoMPaT dataset, specifically designed for
compositional understanding of parts and materials, contains only 42 object
categories. To foster richer and fine-grained part-level 3D understanding, we
introduce 3DCoMPaT200, a large-scale dataset tailored for compositional
understanding of object parts and materials, covering 200 object categories:
an object vocabulary $\approx$5 times larger than 3DCoMPaT's, along with
$\approx$4 times more part categories. Concretely, 3DCoMPaT200 significantly expands
upon 3DCoMPaT, featuring 1,031 fine-grained part categories and 293 distinct
material classes for compositional application to 3D object parts.
Additionally, to address the complexities of compositional 3D modeling, we
propose a novel task of Compositional Part Shape Retrieval using ULIP to
provide a strong 3D foundational model for 3D Compositional Understanding. This
method evaluates the model's shape-retrieval performance given one, three, or six
parts described in text format. These results show that the model's performance
improves with an increasing number of style compositions, highlighting the
critical role of the compositional dataset. Such results underscore the
dataset's effectiveness in enhancing models' capability to understand complex
3D shapes from a compositional perspective. Code and Data can be found at
http://github.com/3DCoMPaT200/3DCoMPaT200
|
2501.06786
|
Temporal-Aware Spiking Transformer Hashing Based on 3D-DWT
|
cs.CV
|
With the rapid growth of dynamic vision sensor (DVS) data, constructing a
low-energy, efficient data retrieval system has become an urgent task. Hash
learning is one of the most important retrieval technologies which can keep the
distance between hash codes consistent with the distance between DVS data. As
spiking neural networks (SNNs) can encode information through spikes, they
demonstrate great potential in promoting energy efficiency. Based on the binary
characteristics of SNNs, we first propose a novel supervised hashing method
named Spikinghash with a hierarchical lightweight structure. Spiking WaveMixer
(SWM) is deployed in shallow layers, utilizing a multilevel 3D discrete wavelet
transform (3D-DWT) to decouple spatiotemporal features into various
low-frequency and high-frequency components, and then employing efficient
spectral feature fusion. SWM can effectively capture the temporal dependencies
and local spatial features. Spiking Self-Attention (SSA) is deployed in deeper
layers to further extract global spatiotemporal information. We also design a
hash layer utilizing the binary characteristics of SNNs, which integrates
information over multiple time steps to generate final hash codes. Furthermore,
we propose a new dynamic soft similarity loss for SNNs, which utilizes membrane
potentials to construct a learnable similarity matrix as soft labels to fully
capture the similarity differences between classes and compensate for
information loss in SNNs, thereby improving retrieval performance. Experiments on multiple
datasets demonstrate that Spikinghash can achieve state-of-the-art results with
low energy consumption and fewer parameters.
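The frequency decoupling that SWM relies on comes from the discrete wavelet transform. A one-level 1D Haar step below shows the low-/high-frequency split that a separable 3D-DWT applies along time, height, and width (illustrative only, not the paper's multilevel filter bank):

```python
def haar_1d(x):
    """One-level Haar DWT: pairwise averages (low-frequency) and pairwise
    differences (high-frequency). A separable 3D-DWT applies this step
    along time, height, and width in turn."""
    low = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return low, high

print(haar_1d([4, 2, 5, 7]))  # → ([3.0, 6.0], [1.0, -1.0])
```

The low band keeps the smooth trend; the high band keeps local detail — the two kinds of components SWM fuses.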
|
2501.06787
|
Improving Pain Classification using Spatio-Temporal Deep Learning
Approaches with Facial Expressions
|
cs.CV cs.AI
|
Pain management and severity detection are crucial for effective treatment,
yet traditional self-reporting methods are subjective and may be unsuitable for
non-verbal individuals (people with limited speaking skills). To address this
limitation, we explore automated pain detection using facial expressions. Our
study leverages deep learning techniques to improve pain assessment by
analyzing facial images from the Pain Emotion Faces Database (PEMF). We propose
two novel approaches: (1) a hybrid ConvNeXt model combined with Long
Short-Term Memory (LSTM) blocks to analyze video frames and predict pain
presence, and (2) a Spatio-Temporal Graph Convolution Network (STGCN)
integrated with LSTM to process landmarks from facial images for pain
detection. Our work represents the first use of the PEMF dataset for binary
pain classification and demonstrates the effectiveness of these models through
extensive experimentation. The results highlight the potential of combining
spatial and temporal features for enhanced pain detection, offering a promising
advancement in objective pain assessment methodologies.
|
2501.06793
|
Differentially Private Gradient-Tracking-Based Distributed Stochastic
Optimization over Directed Graphs
|
eess.SY cs.SY
|
This paper proposes a new differentially private gradient-tracking-based
distributed stochastic optimization algorithm over directed graphs.
Specifically, privacy noises are added to each agent's state and tracking
variable to prevent information leakage, and then perturbed states and tracking
variables are transmitted to neighbors. We design two novel schemes of the
iteration step-sizes and the sampling number for the algorithm. By using the
sampling parameter-controlled subsampling method, both schemes enhance the
differential privacy level, and achieve the finite cumulative privacy budget
even over infinite iterations. The convergence rate of the algorithm is
established for both nonconvex objectives satisfying the Polyak-Lojasiewicz
condition and strongly convex objectives: Scheme (S1) achieves a polynomial
convergence rate, and Scheme (S2) achieves an exponential convergence rate.
The trade-off between the
privacy and the convergence rate is presented. The algorithm's effectiveness
and superior performance over the existing works are demonstrated through
numerical examples of distributed training on benchmark datasets "MNIST" and
"CIFAR-10".
|
2501.06795
|
Bridging the Fairness Gap: Enhancing Pre-trained Models with
LLM-Generated Sentences
|
cs.CL cs.AI
|
Pre-trained language models (PLMs) are trained on data that inherently
contains gender biases, leading to undesirable impacts. Traditional debiasing
methods often rely on external corpora, which may lack quality, diversity, or
demographic balance, affecting the effectiveness of debiasing. With the rise of
large language models and their extensive knowledge, we propose enhancing
fairness (Fair-Gender) in PLMs by absorbing coherent, attribute-balanced, and
semantically rich sentences. However, these sentences cannot be directly used
for debiasing due to alignment issues and the risk of negative transfer. We
address this by applying causal analysis to estimate causal effects, filtering
out unaligned sentences, and identifying aligned ones for incorporation into
PLMs, thereby ensuring positive transfer. Experiments show that our approach
significantly reduces gender biases in PLMs while preserving their language
expressiveness.
|
2501.06801
|
Optimizing Sequencing Coverage Depth in DNA Storage: Insights From DNA
Storage Data
|
cs.IT math.IT
|
DNA data storage is now being considered as a new archival storage method for
its durability and high information density, but it still faces challenges
such as high costs and low throughput. By reducing the sequencing sample size for
decoding digital data, minimizing DNA coverage depth helps lower both costs and
system latency. Previous studies have mainly focused on minimizing coverage
depth in uniform distribution channels under theoretical assumptions. In
contrast, our work uses real DNA storage experimental data to extend this
problem to log-normal distribution channels, a conclusion derived from our PCR
and sequencing data analysis. In this framework, we investigate both noiseless
and noisy channels. We first demonstrate a detailed negative correlation
between linear coding redundancy and the expected minimum sequencing coverage
depth. Moreover, we observe that the probability of successfully decoding all
data in a single sequencing run increases and then decreases as coding
redundancy rises, when the sample size is optimized for complete decoding. Then
we extend the lower bounds of DNA coverage depth from uniform to log-normal
noisy channels. The findings of this study provide valuable insights for the
efficient execution of DNA storage experiments.
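A quick way to appreciate why the abundance distribution matters: simulate how many reads are needed to observe every strand at least once when strand abundances are log-normal versus uniform. This coupon-collector toy (arbitrary sizes and seed) is only an intuition aid, not the paper's analysis:

```python
import random

random.seed(0)

def reads_to_cover_all(n_strands, sigma):
    """Count sequencing reads drawn until every strand is seen at least
    once, with per-strand abundances drawn log-normally (sigma=0 gives
    the uniform channel). A coupon-collector toy, not the paper's model."""
    weights = [random.lognormvariate(0.0, sigma) for _ in range(n_strands)]
    seen, reads = set(), 0
    while len(seen) < n_strands:
        seen.add(random.choices(range(n_strands), weights=weights)[0])
        reads += 1
    return reads

# Skew in strand abundance typically inflates the required coverage depth.
print(reads_to_cover_all(100, sigma=0.0), reads_to_cover_all(100, sigma=1.0))
```

Rare strands under the log-normal channel dominate the waiting time, which is why coverage-depth bounds differ from the uniform case.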
|
2501.06802
|
Unifying Two Types of Scaling Laws from the Perspective of Conditional
Kolmogorov Complexity
|
cs.AI
|
In 2020, OpenAI proposed the first type of Scaling Laws, describing the
relationships between model loss and the scale of parameters, data, and
training computation. In 2024, OpenAI proposed the second type of Scaling Laws,
describing the relationship between model inference performance and inference
computation. In this paper, we analyze LLMs training and inference processes
from the perspective of lossless compression using conditional Kolmogorov
complexity, and unify these two types of Scaling Laws. We find that both types
of Scaling Laws improve the approximation of conditional Kolmogorov complexity
by increasing the execution steps of a Turing machine. The first type of
Scaling Laws increases execution steps by increasing the number of model
parameters. The second type of Scaling Laws increases execution steps by
increasing the number of intermediate tokens.
|
2501.06805
|
A Pan-cancer Classification Model using Multi-view Feature Selection
Method and Ensemble Classifier
|
cs.LG q-bio.GN
|
Accurately identifying cancer samples is crucial for precise diagnosis and
effective patient treatment. Traditional methods falter on high-dimensional
data with high feature-to-sample ratios, a critical obstacle to classifying
cancer samples. This study aims to develop a novel feature selection framework
specifically for transcriptome data and propose two ensemble classifiers. For
feature selection, we partition the transcriptome dataset vertically based on
feature types. We then apply the Boruta feature selection process to each
partition, combine the results, and apply Boruta again to the combined result.
We repeat the process with different Boruta parameters to prepare the final
feature set. Finally, we construct two ensemble ML models based on LR, SVM,
and XGBoost classifiers using max-voting and averaging-probability approaches. We
used 10-fold cross-validation to ensure robust and reliable classification
performance. With 97.11\% accuracy and 0.9996 AUC value, our approach performs
better compared to existing state-of-the-art methods to classify 33 types of
cancers. A set of 12 cancer types is traditionally challenging to
differentiate due to their similarity in tissue of origin.
Our method accurately identifies over 90\% of samples from these 12 types of
cancers, which outperforms all known methods presented in existing literature.
The gene set enrichment analysis reveals that our framework's selected features
have enriched the pathways highly related to cancers. This study develops a
feature selection framework to select features highly related to cancer
development, leading to the identification of different cancer types with
higher accuracy.
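The two ensemble rules mentioned — max voting over hard labels and averaging of predicted probabilities — reduce to a few lines. The class names and scores below are hypothetical:

```python
from collections import Counter

def max_vote(labels):
    """Hard voting: each base classifier casts one class label."""
    return Counter(labels).most_common(1)[0][0]

def avg_probability(prob_lists):
    """Soft voting: average per-class probabilities, return the argmax class."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / len(prob_lists)
           for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Hypothetical outputs of the LR, SVM, and XGBoost base classifiers on one sample:
print(max_vote(["LUAD", "LUAD", "BRCA"]))                     # → LUAD
print(avg_probability([[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]]))  # → 1
```

Soft voting can overrule a hard-vote majority when the dissenting classifier is much more confident, which is why both variants are worth evaluating.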
|
2501.06806
|
Soft Vision-Based Tactile-Enabled SixthFinger: Advancing Daily Objects
Manipulation for Stroke Survivors
|
cs.RO
|
The presence of post-stroke grasping deficiencies highlights the critical
need for the development and implementation of advanced compensatory
strategies. This paper introduces a novel system to aid chronic stroke
survivors through the development of a soft, vision-based, tactile-enabled
extra robotic finger. By incorporating vision-based tactile sensing, the system
autonomously adjusts grip force in response to slippage detection. This synergy
not only ensures mechanical stability but also enriches tactile feedback,
mimicking the dynamics of human-object interactions. At the core of our
approach is a transformer-based framework trained on a comprehensive tactile
dataset encompassing objects with a wide range of morphological properties,
including variations in shape, size, weight, texture, and hardness.
Furthermore, we validated the system's robustness in real-world applications,
where it successfully manipulated various everyday objects. The promising
results highlight the potential of this approach to improve the quality of life
for stroke survivors.
|
2501.06808
|
Semantic-CD: Remote Sensing Image Semantic Change Detection towards
Open-vocabulary Setting
|
cs.CV
|
Remote sensing image semantic change detection analyzes images of the same
location taken at different times to identify areas of change and categorize
those changes.
Traditional change detection methods often face challenges in generalizing
across semantic categories in practical scenarios. To address this issue, we
introduce a novel approach called Semantic-CD, specifically designed for
semantic change detection in remote sensing images. This method incorporates
the open-vocabulary semantics of the vision-language foundation model CLIP.
By utilizing CLIP's extensive vocabulary knowledge, our model enhances its
ability to generalize across categories and improves segmentation through fully
decoupled multi-task learning, which includes both binary change detection and
semantic change detection tasks. Semantic-CD consists of four main components:
a bi-temporal CLIP visual encoder for extracting features from bi-temporal
images, an open semantic prompter for creating semantic cost volume maps with
open vocabulary, a binary change detection decoder for generating binary change
detection masks, and a semantic change detection decoder for producing semantic
labels. Experimental results on the SECOND dataset demonstrate that Semantic-CD
achieves more accurate masks and reduces semantic classification errors,
illustrating its effectiveness in applying semantic priors from vision-language
foundation models to SCD tasks.
|
2501.06809
|
RSRefSeg: Referring Remote Sensing Image Segmentation with Foundation
Models
|
cs.CV
|
Referring remote sensing image segmentation is crucial for achieving
fine-grained visual understanding through free-format textual input, enabling
enhanced scene and object extraction in remote sensing applications. Current
research primarily utilizes pre-trained language models to encode textual
descriptions and align them with visual modalities, thereby facilitating the
expression of relevant visual features. However, these approaches often
struggle to establish robust alignments between fine-grained semantic concepts,
leading to inconsistent representations across textual and visual information.
To address these limitations, we introduce a referring remote sensing image
segmentation foundational model, RSRefSeg. RSRefSeg leverages CLIP for visual
and textual encoding, employing both global and local textual semantics as
filters to generate referring-related visual activation features in the latent
space. These activated features then serve as input prompts for SAM, which
refines the segmentation masks through its robust visual generalization
capabilities. Experimental results on the RRSIS-D dataset demonstrate that
RSRefSeg outperforms existing methods, underscoring the effectiveness of
foundational models in enhancing multimodal task comprehension. The code is
available at \url{https://github.com/KyanChen/RSRefSeg}.
|
2501.06810
|
Improving Cross-Lingual Phonetic Representation of Low-Resource
Languages Through Language Similarity Analysis
|
eess.AS cs.CL cs.SD
|
This paper examines how linguistic similarity affects cross-lingual phonetic
representation in speech processing for low-resource languages, emphasizing
effective source language selection. Previous cross-lingual research has used
various source languages to enhance performance for the target low-resource
language without giving that selection thorough consideration. Our study stands out by
providing an in-depth analysis of language selection, supported by a practical
approach to assess phonetic proximity among multiple language families. We
investigate how within-family similarity impacts performance in multilingual
training, which aids in understanding language dynamics. We also evaluate the
effect of using phonologically similar languages, regardless of family. For the
phoneme recognition task, utilizing phonologically similar languages
consistently achieves a relative improvement of 55.6% over monolingual
training, even surpassing the performance of a large-scale self-supervised
learning model. Multilingual training within the same language family
demonstrates that higher phonological similarity enhances performance, while
lower similarity results in degraded performance compared to monolingual
training.
|
2501.06813
|
Pareto Optimization with Robust Evaluation for Noisy Subset Selection
|
cs.NE
|
Subset selection is a fundamental problem in combinatorial optimization,
which has a wide range of applications such as influence maximization and
sparse regression. The goal is to select a subset of limited size from a ground
set in order to maximize a given objective function. However, the evaluation of
the objective function in real-world scenarios is often noisy. Previous
algorithms, including the greedy algorithm and multi-objective evolutionary
algorithms POSS and PONSS, either struggle in noisy environments or consume
excessive computational resources. In this paper, we focus on the noisy subset
selection problem with a cardinality constraint, where the evaluation of a
subset is noisy. We propose a novel approach based on Pareto Optimization with
Robust Evaluation for noisy subset selection (PORE), which maximizes a robust
evaluation function and minimizes the subset size simultaneously. PORE
efficiently identifies well-structured solutions and uses computational
resources economically, addressing the limitations observed in PONSS. Our experiments,
conducted on real-world datasets for influence maximization and sparse
regression, demonstrate that PORE significantly outperforms previous methods,
including the classical greedy algorithm, POSS, and PONSS. Further validation
through ablation studies confirms the effectiveness of our robust evaluation
function.
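The bi-objective core of such Pareto-optimization approaches — keep a solution only if nothing in the archive dominates it on (robust evaluation, subset size) — can be sketched as follows. The numbers are invented, and the robust evaluation itself (e.g., aggregating repeated noisy evaluations) is abstracted away:

```python
def dominates(a, b):
    """a, b = (robust_eval, subset_size): maximize the first, minimize the second."""
    return a != b and a[0] >= b[0] and a[1] <= b[1]

def update_archive(archive, cand):
    """Bi-objective archive update: reject dominated candidates,
    evict archive members the candidate dominates."""
    if any(dominates(s, cand) for s in archive):
        return archive
    return [s for s in archive if not dominates(cand, s)] + [cand]

archive = []
for sol in [(0.5, 3), (0.7, 3), (0.7, 2), (0.6, 4), (0.8, 5)]:
    archive = update_archive(archive, sol)
print(archive)  # → [(0.7, 2), (0.8, 5)]
```

Minimizing size alongside the evaluation keeps small, well-structured subsets in play, which a pure greedy criterion would discard.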
|
2501.06818
|
UR2P-Dehaze: Learning a Simple Image Dehaze Enhancer via Unpaired Rich
Physical Prior
|
cs.CV
|
Image dehazing techniques aim to enhance contrast and restore details, which
are essential for preserving visual information and improving image processing
accuracy. Existing methods rely on a single manual prior, which cannot
effectively reveal image details. To overcome this limitation, we propose an
unpaired image dehazing network, called the Simple Image Dehaze Enhancer via
Unpaired Rich Physical Prior (UR2P-Dehaze). First, to accurately estimate the
illumination, reflectance, and color information of the hazy image, we design a
shared prior estimator (SPE) that is iteratively trained to ensure the
consistency of illumination and reflectance, generating clear, high-quality
images. Additionally, a self-monitoring mechanism is introduced to eliminate
undesirable features, providing reliable priors for image reconstruction. Next,
we propose Dynamic Wavelet Separable Convolution (DWSC), which effectively
integrates key features across both low and high frequencies, significantly
enhancing the preservation of image details and ensuring global consistency.
Finally, to effectively restore the color information of the image, we propose
an Adaptive Color Corrector that addresses the problem of unclear colors. The
PSNR, SSIM, LPIPS, FID and CIEDE2000 metrics on the benchmark dataset show that
our method achieves state-of-the-art performance. It also contributes to the
performance improvement of downstream tasks. The project code will be available
at https://github.com/Fan-pixel/UR2P-Dehaze.
|
2501.06819
|
A Study on Educational Data Analysis and Personalized Feedback Report
Generation Based on Tags and ChatGPT
|
cs.AI
|
This study introduces a novel method that employs tag annotation coupled with
the ChatGPT language model to analyze student learning behaviors and generate
personalized feedback. Central to this approach is the conversion of complex
student data into an extensive set of tags, which are then decoded through
tailored prompts to deliver constructive feedback that encourages rather than
discourages students. This methodology focuses on accurately feeding student
data into large language models and crafting prompts that enhance the
constructive nature of feedback. The effectiveness of this approach was
validated through surveys conducted with over 20 mathematics teachers, who
confirmed the reliability of the generated reports. This method can be
seamlessly integrated into intelligent adaptive learning systems or provided as
a tool that significantly reduces teachers' workload while delivering accurate
and timely feedback to students. By transforming raw educational data into
interpretable tags, this method supports the provision of efficient and timely
personalized learning feedback that offers constructive suggestions tailored to
individual learner needs.
|
2501.06823
|
MEXA-CTP: Mode Experts Cross-Attention for Clinical Trial Outcome
Prediction
|
cs.LG cs.AI q-bio.QM
|
Clinical trials are the gold standard for assessing the effectiveness and
safety of drugs for treating diseases. Given the vast design space of drug
molecules, elevated financial cost, and multi-year timeline of these trials,
research on clinical trial outcome prediction has gained immense traction.
Accurate predictions must leverage data of diverse modes such as drug
molecules, target diseases, and eligibility criteria to infer successes and
failures. Previous Deep Learning approaches for this task, such as HINT, often
require wet lab data from synthesized molecules and/or rely on prior knowledge
to encode interactions as part of the model architecture. To address these
limitations, we propose a light-weight attention-based model, MEXA-CTP, to
integrate readily-available multi-modal data and generate effective
representations via specialized modules dubbed "mode experts", while avoiding
human biases in model design. We optimize MEXA-CTP with the Cauchy loss to
capture relevant interactions across modes. Our experiments on the Trial
Outcome Prediction (TOP) benchmark demonstrate that MEXA-CTP improves upon
existing approaches by, respectively, up to 11.3% in F1 score, 12.2% in PR-AUC,
and 2.5% in ROC-AUC, compared to HINT. Ablation studies are provided to
quantify the effectiveness of each component in our proposed method.
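The Cauchy loss used for optimization has the standard robust form log(1 + (r/c)^2), which grows only logarithmically in the residual r and therefore damps outliers relative to squared error; the scale c below is a hypothetical choice:

```python
import math

def cauchy_loss(pred, target, c=1.0):
    """Cauchy (Lorentzian) loss: log(1 + (r/c)^2) for residual r.
    Logarithmic growth damps outliers relative to squared error;
    the scale c = 1.0 is a hypothetical choice."""
    r = pred - target
    return math.log1p((r / c) ** 2)

print(cauchy_loss(1.0, 1.0))   # → 0.0 (zero residual)
print(cauchy_loss(5.0, 1.0))   # ~2.83, versus 16.0 for squared error
```

Near zero residual it behaves quadratically, so well-fit examples still receive ordinary gradients.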
|
2501.06825
|
Event Argument Extraction with Enriched Prompts
|
cs.CL
|
This work aims to delve deeper into prompt-based event argument extraction
(EAE) models. We explore the impact of incorporating various types of
information into the prompt on model performance, including the trigger, other
role arguments of the same event, and role arguments across multiple events
within the same document. Further, we report the best possible performance that the
prompt-based EAE model can attain and demonstrate such models can be further
optimized from the perspective of the training objective. Experiments are
carried out on three small language models and two large language models on
the RAMS dataset.
|
2501.06826
|
Correcting Annotator Bias in Training Data: Population-Aligned Instance
Replication (PAIR)
|
stat.ME cs.CL
|
Models trained on crowdsourced labels may not reflect broader population
views when annotator pools are not representative. Since collecting
representative labels is challenging, we propose Population-Aligned Instance
Replication (PAIR), a method to address this bias through statistical
adjustment. Using a simulation study of hate speech and offensive language
detection, we create two types of annotators with different labeling tendencies
and generate datasets with varying proportions of the types. Models trained on
unbalanced annotator pools show poor calibration compared to those trained on
representative data. However, PAIR, which duplicates labels from
underrepresented annotator groups to match population proportions,
significantly reduces bias without requiring new data collection. These results
suggest statistical techniques from survey research can help align model
training with target populations even when representative annotator pools are
unavailable. We conclude with three practical recommendations for improving
training data quality.
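A minimal sketch of the replication idea — duplicate labels from underrepresented annotator groups until group shares match target population proportions — using integer replication factors. The grouping and targets are invented, and the paper's procedure may differ in detail:

```python
def pair_replicate(labels_by_group, target_props):
    """Replicate each group's labels an integer number of times so that
    group shares approximate the target population proportions."""
    total = sum(len(v) for v in labels_by_group.values())
    # Anchor on the group most over-represented relative to its target.
    anchor = max(labels_by_group,
                 key=lambda g: len(labels_by_group[g]) / total / target_props[g])
    base = len(labels_by_group[anchor]) / target_props[anchor]
    return {g: labels * max(1, round(base * target_props[g] / len(labels)))
            for g, labels in labels_by_group.items()}

# 80% of labels come from annotator group A, 20% from group B,
# but the target population is an even 50/50 split.
pool = {"A": [0, 1, 0, 1, 0, 1, 0, 1], "B": [1, 1]}
balanced = pair_replicate(pool, {"A": 0.5, "B": 0.5})
print(len(balanced["A"]), len(balanced["B"]))  # → 8 8
```

Replication is a statistical adjustment only: no new annotations are collected, the existing minority-group labels simply carry proportionally more weight.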
|
2501.06827
|
Leveraging Taxonomy and LLMs for Improved Multimodal Hierarchical
Classification
|
cs.AI
|
Multi-level Hierarchical Classification (MLHC) tackles the challenge of
categorizing items within a complex, multi-layered class structure. However,
traditional MLHC classifiers often rely on a backbone model with independent
output layers, which tend to ignore the hierarchical relationships between
classes. This oversight can lead to inconsistent predictions that violate the
underlying taxonomy. Leveraging Large Language Models (LLMs), we propose a
novel taxonomy-embedded, transitional, LLM-agnostic framework for multimodal
classification. The cornerstone of this advancement is the model's ability to
enforce consistency across hierarchical levels. Our evaluations on the MEP-3M
dataset - a multi-modal e-commerce product dataset with various hierarchical
levels - demonstrated a significant performance improvement compared to
conventional LLM structures.
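One simple way to enforce the cross-level consistency the framework targets is taxonomy masking: pick the top parent class, then restrict the child prediction to that parent's children. This sketch (with invented categories and scores) illustrates the constraint itself, not the paper's LLM-based mechanism:

```python
def consistent_predict(parent_scores, child_scores, taxonomy):
    """Choose the best parent, then the best child *allowed* under it,
    so the predicted pair can never violate the taxonomy."""
    parent = max(parent_scores, key=parent_scores.get)
    child = max(taxonomy[parent], key=lambda c: child_scores[c])
    return parent, child

taxonomy = {"electronics": ["phone", "laptop"], "apparel": ["shirt", "shoe"]}
parent_scores = {"electronics": 0.7, "apparel": 0.3}
child_scores = {"phone": 0.2, "laptop": 0.3, "shirt": 0.4, "shoe": 0.1}
print(consistent_predict(parent_scores, child_scores, taxonomy))
# → ('electronics', 'laptop'): 'shirt' scores highest overall but is masked out
```

Independent output layers would emit the inconsistent pair (electronics, shirt) here, which is exactly the failure mode the abstract describes.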
|
2501.06828
|
GeoPix: Multi-Modal Large Language Model for Pixel-level Image
Understanding in Remote Sensing
|
cs.CV
|
Multi-modal large language models (MLLMs) have achieved remarkable success in
image- and region-level remote sensing (RS) image understanding tasks, such as
image captioning, visual question answering, and visual grounding. However,
existing RS MLLMs lack the pixel-level dialogue capability, which involves
responding to user instructions with segmentation masks for specific instances.
In this paper, we propose GeoPix, a RS MLLM that extends image understanding
capabilities to the pixel level. This is achieved by equipping the MLLM with a
mask predictor, which transforms visual features from the vision encoder into
masks conditioned on the LLM's segmentation token embeddings. To facilitate the
segmentation of multi-scale objects in RS imagery, a class-wise learnable
memory module is integrated into the mask predictor to capture and store
class-wise geo-context at the instance level across the entire dataset. In
addition, to address the absence of large-scale datasets for training
pixel-level RS MLLMs, we construct the GeoPixInstruct dataset, comprising
65,463 images and 140,412 instances, with each instance annotated with text
descriptions, bounding boxes, and masks. Furthermore, we develop a two-stage
training strategy to balance the distinct requirements of text generation and
mask prediction in multi-modal multi-task optimization. Extensive experiments
verify the effectiveness and superiority of GeoPix in pixel-level segmentation
tasks, while also maintaining competitive performance in image- and
region-level benchmarks.
|
2501.06831
|
Towards Counterfactual and Contrastive Explainability and Transparency
of DCNN Image Classifiers
|
cs.CV cs.AI
|
Explainability of deep convolutional neural networks (DCNNs) is an important
research topic that tries to uncover the reasons behind a DCNN model's
decisions and improve their understanding and reliability in high-risk
environments. In this regard, we propose a novel method for generating
interpretable counterfactual and contrastive explanations for DCNN models. The
proposed method is model-intrusive: it probes the internal workings of a DCNN
instead of altering the input image to generate explanations. Given an input
image, we provide contrastive explanations by identifying the most important
filters in the DCNN representing features and concepts that separate the
model's decision between classifying the image to the original inferred class
or some other specified alter class. On the other hand, we provide
counterfactual explanations by specifying the minimal changes necessary in such
filters so that a contrastive output is obtained.
Using these identified filters and concepts, our method can provide
contrastive and counterfactual reasons behind a model's decisions and makes the
model more transparent. One of the interesting applications of this method is
misclassification analysis, where we extract the identified concepts from a
particular input image and compare them with class-specific concepts to
establish the validity of the model's decisions. The proposed method is
compared with state-of-the-art and evaluated on the Caltech-UCSD Birds (CUB)
2011 dataset to show the usefulness of the explanations provided.
|
2501.06832
|
A novel multi-agent dynamic portfolio optimization learning system based
on hierarchical deep reinforcement learning
|
cs.LG cs.MA
|
Deep Reinforcement Learning (DRL) has been extensively used to address
portfolio optimization problems. The DRL agents acquire knowledge and make
decisions through unsupervised interactions with their environment without
requiring explicit knowledge of the joint dynamics of portfolio assets. Among
these DRL algorithms, the combination of actor-critic algorithms and deep
function approximators is the most widely used DRL algorithm. Here, we find
that training the DRL agent using the actor-critic algorithm and deep function
approximators may lead to scenarios where the improvement in the DRL agent's
risk-adjusted profitability is not significant. We propose that such situations
primarily arise from the following two problems: sparsity in positive reward
and the curse of dimensionality. These limitations prevent DRL agents from
comprehensively learning asset price change patterns in the training
environment. As a result, the DRL agents cannot explore the dynamic portfolio
optimization policy to improve the risk-adjusted profitability in the training
process. To address these problems, we propose a novel multi-agent Hierarchical
Deep Reinforcement Learning (HDRL) algorithmic framework in this research.
Under this framework, the agents work together as a learning system for
portfolio optimization. Specifically, by designing an auxiliary agent that
works together with the executive agent for optimal policy exploration, the
learning system can focus on exploring the policy with higher risk-adjusted
return in the action space with positive return and low variance. In this way,
we can overcome the issue of the curse of dimensionality and improve the
training efficiency in sparse positive-reward environments.
|
2501.06833
|
Unveiling Temporal Trends in 19th Century Literature: An Information
Retrieval Approach
|
cs.DL cs.IR
|
In English literature, the 19th century witnessed a significant transition in
styles, themes, and genres. Consequently, the novels from this period display
remarkable diversity. This paper explores these variations by examining the
evolution of term usage in 19th century English novels through the lens of
information retrieval. By applying a query expansion-based approach to a
decade-segmented collection of fiction from the British Library, we examine how
related terms vary over time. Our analysis employs multiple standard metrics
including Kendall's tau, Jaccard similarity, and Jensen-Shannon divergence to
assess overlaps and shifts in expanded query term sets. Our results indicate a
significant degree of divergence in the related terms across decades as
selected by the query expansion technique, suggesting substantial linguistic
and conceptual changes across 19th century novels.
|
2501.06834
|
LLMs Model Non-WEIRD Populations: Experiments with Synthetic Cultural
Agents
|
cs.AI cs.CL econ.GN q-fin.EC
|
Despite its importance, studying economic behavior across diverse, non-WEIRD
(Western, Educated, Industrialized, Rich, and Democratic) populations presents
significant challenges. We address this issue by introducing a novel
methodology that uses Large Language Models (LLMs) to create synthetic cultural
agents (SCAs) representing these populations. We subject these SCAs to classic
behavioral experiments, including the dictator and ultimatum games. Our results
demonstrate substantial cross-cultural variability in experimental behavior.
Notably, for populations with available data, SCAs' behaviors qualitatively
resemble those of real human subjects. For unstudied populations, our method
can generate novel, testable hypotheses about economic behavior. By integrating
AI into experimental economics, this approach offers an effective and ethical
method to pilot experiments and refine protocols for hard-to-reach populations.
Our study provides a new tool for cross-cultural economic studies and
demonstrates how LLMs can help experimental behavioral research.
|
2501.06835
|
X-LeBench: A Benchmark for Extremely Long Egocentric Video Understanding
|
cs.CV
|
Long-form egocentric video understanding provides rich contextual information
and unique insights into long-term human behaviors, holding significant
potential for applications in embodied intelligence, long-term activity
analysis, and personalized assistive technologies. However, existing benchmark
datasets primarily focus on single, short-duration videos or moderately long
videos up to dozens of minutes, leaving a substantial gap in evaluating
extensive, ultra-long egocentric video recordings. To address this, we
introduce X-LeBench, a novel benchmark dataset specifically crafted for
evaluating tasks on extremely long egocentric video recordings. Leveraging the
advanced text processing capabilities of large language models (LLMs),
X-LeBench develops a life-logging simulation pipeline that produces realistic,
coherent daily plans aligned with real-world video data. This approach enables
the flexible integration of synthetic daily plans with real-world footage from
Ego4D, a massive-scale egocentric video dataset covering a wide range of daily
life scenarios, resulting in 432 simulated video life logs that mirror realistic
daily activities in contextually rich scenarios. The video life-log durations
span from 23 minutes to 16.4 hours. The evaluation of several baseline systems
and multimodal large language models (MLLMs) reveals their poor performance
across the board, highlighting the inherent challenges of long-form egocentric
video understanding and underscoring the need for more advanced models.
|
2501.06836
|
SAM-DA: Decoder Adapter for Efficient Medical Domain Adaptation
|
cs.CV
|
This paper addresses the domain adaptation challenge for semantic
segmentation in medical imaging. Despite the impressive performance of recent
foundational segmentation models like SAM on natural images, they struggle with
medical domain images. Beyond this, recent approaches that perform end-to-end
fine-tuning of models are simply not computationally tractable. To address
this, we propose a novel SAM adapter approach that minimizes the number of
trainable parameters while achieving comparable performance to full
fine-tuning. The proposed SAM adapter is strategically placed in the mask
decoder, offering excellent and broad generalization capabilities and improved
segmentation across both fully supervised and test-time domain adaptation
tasks. Extensive validation on four datasets showcases the adapter's efficacy,
outperforming existing methods while training less than 1% of SAM's total
parameters.
|
2501.06837
|
An efficient approach to represent enterprise web application structure
using Large Language Model in the service of Intelligent Quality Engineering
|
cs.AI cs.SE
|
This paper presents a novel approach to represent enterprise web application
structures using Large Language Models (LLMs) to enable intelligent quality
engineering at scale. We introduce a hierarchical representation methodology
that optimizes the few-shot learning capabilities of LLMs while preserving the
complex relationships and interactions within web applications. The approach
encompasses five key phases: comprehensive DOM analysis, multi-page synthesis,
test suite generation, execution, and result analysis. Our methodology
addresses existing challenges around the use of Generative AI techniques in
automated software testing by developing a structured format that enables LLMs
to understand web application architecture through in-context learning. We
evaluated our approach using two distinct web applications: an e-commerce
platform (Swag Labs) and a healthcare application (MediBox), which is deployed
within the Atalgo engineering environment. The results demonstrate success rates of
90\% and 70\%, respectively, in achieving automated testing, with high
relevance scores for test cases across multiple evaluation criteria. The
findings suggest that our representation approach significantly enhances LLMs'
ability to generate contextually relevant test cases and provide better quality
assurance overall, while reducing the time and effort required for testing.
|
2501.06838
|
Generalized and Efficient 2D Gaussian Splatting for Arbitrary-scale
Super-Resolution
|
eess.IV cs.CV
|
Equipped with the continuous representation capability of Multi-Layer
Perceptron (MLP), Implicit Neural Representation (INR) has been successfully
employed for Arbitrary-scale Super-Resolution (ASR). However, the limited
receptive field of the linear layers in MLP restricts the representation
capability of INR, while it is computationally expensive to query the MLP
numerous times to render each pixel. Recently, Gaussian Splatting (GS) has
shown its advantages over INR in both visual quality and rendering speed in 3D
tasks, which motivates us to explore whether GS can be employed for the ASR
task. However, directly applying GS to ASR is exceptionally challenging because
the original GS is an optimization-based method through overfitting each single
scene, while in ASR we aim to learn a single model that can generalize to
different images and scaling factors. We overcome these challenges by
developing two novel techniques. Firstly, to generalize GS for ASR, we
elaborately design an architecture to predict the corresponding
image-conditioned Gaussians of the input low-resolution image in a feed-forward
manner. Secondly, we implement an efficient differentiable 2D GPU/CUDA-based
scale-aware rasterization to render super-resolved images by sampling discrete
RGB values from the predicted continuous Gaussians. Via end-to-end training,
our optimized network, namely GSASR, can perform ASR for any image and unseen
scaling factors. Extensive experiments validate the effectiveness of our
proposed method. The project page can be found at
\url{https://mt-cly.github.io/GSASR.github.io/}.
|
2501.06841
|
Faithful Counterfactual Visual Explanations (FCVE)
|
cs.CV
|
Deep learning models in computer vision have made remarkable progress, but
their lack of transparency and interpretability remains a challenge. The
development of explainable AI can enhance the understanding and performance of
these models. However, existing techniques often struggle to provide convincing
explanations that non-experts easily understand, and they cannot accurately
identify models' intrinsic decision-making processes. To address these
challenges, we propose to develop a counterfactual explanation (CE) model that
balances plausibility and faithfulness. This model generates easy-to-understand
visual explanations by making the minimum necessary changes in images without
altering the pixel data. Instead, the proposed method identifies internal
concepts and filters learned by models and leverages them to produce plausible
counterfactual explanations. The provided explanations reflect the internal
decision-making process of the model, thus ensuring faithfulness to the model.
|
2501.06842
|
SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training
|
cs.LG cs.AI cs.CL
|
Large Language Models (LLMs) have demonstrated exceptional performance across
diverse tasks, yet their training remains highly resource-intensive and
susceptible to critical challenges such as training instability. A predominant
source of this instability stems from gradient and loss spikes, which disrupt
the learning process, often leading to costly interventions like checkpoint
recovery and experiment restarts, further amplifying inefficiencies. This paper
presents a comprehensive investigation into gradient spikes observed during LLM
training, revealing their prevalence across multiple architectures and
datasets. Our analysis shows that these spikes can be up to $1000\times$ larger
than typical gradients, substantially deteriorating model performance. To
address this issue, we propose Spike-Aware Adam with Momentum Reset (SPAM), a
novel optimizer designed to counteract gradient spikes through momentum reset
and spike-aware gradient clipping. Extensive experiments, including both
pre-training and fine-tuning, demonstrate that SPAM consistently surpasses Adam
and its variants across various tasks, including (1) LLM pre-training from 60M
to 1B, (2) 4-bit LLM pre-training, (3) reinforcement learning, and (4) Time
Series Forecasting. Additionally, SPAM facilitates memory-efficient training by
enabling sparse momentum, where only a subset of momentum terms are maintained
and updated. When operating under memory constraints, SPAM outperforms
state-of-the-art memory-efficient optimizers such as GaLore and Adam-Mini. Our
work underscores the importance of mitigating gradient spikes in LLM training
and introduces an effective optimization strategy that enhances both training
stability and resource efficiency at scale. Code is available at
https://github.com/TianjinYellow/SPAM-Optimizer.git
|
2501.06843
|
Leveraging the Global Research Infrastructure to Characterize the Impact
of National Science Foundation Research
|
cs.DL cs.SI
|
The Global Research infrastructure (GRI) is made up of the repositories and
organizations that provide persistent identifiers (PIDs) and metadata for many
kinds of research objects and connect these objects to funders, research
institutions, researchers, and one another using PIDs. The INFORMATE Project
has combined three data sources to focus on understanding how the global
research infrastructure might help the US National Science Foundation (NSF) and
other federal agencies identify and characterize the impact of their support.
In this paper we present INFORMATE observations of three data systems. The NSF
Award database represents NSF funding while the NSF Public Access Repository
(PAR) and CHORUS, as a proxy for the GRI, represent two different views of the
results of that funding. We compare the first at the level of awards and the
second two at the level of published research articles. Our findings
demonstrate that CHORUS datasets include significantly more NSF awards and more
related papers than does PAR. Our findings also suggest that time plays a
significant role in the inclusion of award metadata across the sources
analyzed. Data in those sources travel very different journeys, each presenting
different obstacles to metadata completeness and suggesting necessary actions
on the parts of authors and publishers to ensure that publication and funding
metadata are captured. We discuss these actions, as well as implications our
findings have for emergent technologies such as artificial intelligence and
natural language processing.
|
2501.06847
|
Accelerating Discovery in Natural Science Laboratories with AI and
Robotics: Perspectives and Challenges from the 2024 IEEE ICRA Workshop,
Yokohama, Japan
|
cs.RO
|
Science laboratory automation enables accelerated discovery in life sciences
and materials. However, it requires interdisciplinary collaboration to address
challenges such as robust and flexible autonomy, reproducibility, throughput,
standardization, the role of human scientists, and ethics. This article
highlights these issues, reflecting perspectives from leading experts in
laboratory automation across different disciplines of the natural sciences.
|
2501.06848
|
A General Framework for Inference-time Scaling and Steering of Diffusion
Models
|
cs.LG cs.CL cs.CV
|
Diffusion models produce impressive results in modalities ranging from images
and video to protein design and text. However, generating samples with
user-specified properties remains a challenge. Recent research proposes
fine-tuning models to maximize rewards that capture desired properties, but
these methods require expensive training and are prone to mode collapse. In
this work, we propose Feynman Kac (FK) steering, an inference-time framework
for steering diffusion models with reward functions. FK steering works by
sampling a system of multiple interacting diffusion processes, called
particles, and resampling particles at intermediate steps based on scores
computed using functions called potentials. Potentials are defined using
rewards for intermediate states and are selected such that a high value
indicates that the particle will yield a high-reward sample. We explore various
choices of potentials, intermediate rewards, and samplers. We evaluate FK
steering on text-to-image and text diffusion models. For steering text-to-image
models with a human preference reward, we find that FK steering a 0.8B
parameter model outperforms a 2.6B parameter fine-tuned model on prompt
fidelity, with faster sampling and no training. For steering text diffusion
models with rewards for text quality and specific text attributes, we find that
FK steering generates lower-perplexity, more linguistically acceptable outputs
and enables gradient-free control of attributes like toxicity. Our results
demonstrate that inference-time scaling and steering of diffusion models, even
with off-the-shelf rewards, can provide significant sample quality gains and
controllability benefits. Code is available at
https://github.com/zacharyhorvitz/Fk-Diffusion-Steering .
|
2501.06857
|
What Is a Counterfactual Cause in Action Theories?
|
cs.AI
|
Since the proposal by Halpern and Pearl, reasoning about actual causality has
gained increasing attention in artificial intelligence, ranging from domains
such as model-checking and verification to reasoning about actions and
knowledge. More recently, Batusov and Soutchanski proposed a notion of actual
achievement cause in the situation calculus; amongst other things, it can
determine the cause of quantified effects in a given action history. While
intuitively appealing, this notion of cause is not defined from a counterfactual perspective.
In this paper, we propose a notion of cause based on counterfactual analysis.
In the context of an action history, we show that our notion of cause generalizes
naturally to a notion of achievement cause. We analyze the relationship between
our notion of the achievement cause and the achievement cause by Batusov and
Soutchanski. Finally, we relate our account of cause to Halpern and Pearl's
account of actual causality. Particularly, we note some nuances in applying a
counterfactual viewpoint to disjunctive goals, a common thorn to definitions of
actual causes.
|
2501.06859
|
A Comprehensive Evaluation of Large Language Models on Mental Illnesses
in Arabic Context
|
cs.CL cs.AI
|
Mental health disorders pose a growing public health concern in the Arab
world, emphasizing the need for accessible diagnostic and intervention tools.
Large language models (LLMs) offer a promising approach, but their application
in Arabic contexts faces challenges including limited labeled datasets,
linguistic complexity, and translation biases. This study comprehensively
evaluates 8 LLMs, including general multi-lingual models, as well as bi-lingual
ones, on diverse mental health datasets (such as AraDepSu, Dreaddit, MedMCQA),
investigating the impact of prompt design, language configuration (native
Arabic vs. translated English, and vice versa), and few-shot prompting on
diagnostic performance. We find that prompt engineering significantly
influences LLM scores mainly due to reduced instruction following, with our
structured prompt outperforming a less structured variant on multi-class
datasets, with an average difference of 14.5\%. While language influence on
performance was modest, model selection proved crucial: Phi-3.5 MoE excelled in
balanced accuracy, particularly for binary classification, while Mistral NeMo
showed superior performance in mean absolute error for severity prediction
tasks. Few-shot prompting consistently improved performance, with particularly
substantial gains observed for GPT-4o Mini on multi-class classification,
boosting accuracy by an average factor of 1.58. These findings underscore the
importance of prompt optimization, multilingual analysis, and few-shot learning
for developing culturally sensitive and effective LLM-based mental health tools
for Arabic-speaking populations.
|
2501.06862
|
LarvSeg: Exploring Image Classification Data For Large Vocabulary
Semantic Segmentation via Category-wise Attentive Classifier
|
cs.CV cs.AI
|
Scaling up the vocabulary of semantic segmentation models is extremely
challenging because annotating large-scale mask labels is labour-intensive and
time-consuming. Recently, language-guided segmentation models have been
proposed to address this challenge. However, their performance drops
significantly when applied to out-of-distribution categories. In this paper, we
propose a new large vocabulary semantic segmentation framework, called LarvSeg.
Different from previous works, LarvSeg leverages image classification data to
scale the vocabulary of semantic segmentation models as large-vocabulary
classification datasets usually contain balanced categories and are much easier
to obtain. However, for classification tasks, the category is image-level,
while for segmentation we need to predict the label at the pixel level. To address
this issue, we first propose a general baseline framework to incorporate
image-level supervision into the training process of a pixel-level segmentation
model, making the trained network perform semantic segmentation on newly
introduced categories in the classification data. We then observe that a model
trained on segmentation data can group pixel features of categories beyond the
training vocabulary. Inspired by this finding, we design a category-wise
attentive classifier to apply supervision to the precise regions of
corresponding categories to improve the model performance. Extensive
experiments demonstrate that LarvSeg significantly improves the large
vocabulary semantic segmentation performance, especially in the categories
without mask labels. For the first time, we provide a 21K-category semantic
segmentation model with the help of ImageNet21K. The code is available at
https://github.com/HaojunYu1998/large_voc_seg.
|
2501.06863
|
Transfer Learning of Tabular Data by Finetuning Large Language Models
|
cs.LG cs.AI cs.CL
|
Despite the artificial intelligence (AI) revolution, deep learning has yet to
achieve much success with tabular data due to heterogeneous feature spaces and
limited sample sizes without viable transfer learning. The new era of
generative AI, powered by large language models (LLM), brings unprecedented
learning opportunities to diverse data and domains. This paper investigates the
effectiveness of an LLM application programming interface (API) and transfer
learning of LLM in tabular data classification. LLM APIs respond to input text
prompts with tokenized data and instructions, whereas transfer learning
finetunes an LLM for a target classification task. This paper proposes an
end-to-end finetuning of LLM to demonstrate cross-data transfer learning on ten
benchmark data sets when large pre-trained tabular data models do not exist to
facilitate transfer learning. The proposed LLM finetuning method outperforms
state-of-the-art machine and deep learning methods on tabular data with less
than ten features - a standard feature size for tabular data sets. The transfer
learning approach uses a fraction of the computational cost of other deep
learning or API-based solutions while ensuring competitive or superior
classification performance.
|
2501.06867
|
Toward a Universal Concept of Artificial Personality: Implementing
Robotic Personality in a Kinova Arm
|
cs.RO cs.HC
|
The fundamental role of personality in shaping interactions is increasingly
being exploited in robotics. A carefully designed robotic personality has been
shown to improve several key aspects of Human-Robot Interaction (HRI). However,
the fragmentation and rigidity of existing approaches reveal even greater
challenges when applied to non-humanoid robots. On one hand, the state of the
art is very dispersed; on the other hand, Industry 4.0 is moving towards a
future where humans and industrial robots are going to coexist. In this
context, the proper design of a robotic personality can lead to more successful
interactions. This research takes a first step in that direction by integrating
a comprehensive cognitive architecture built upon the definition of robotic
personality - validated on humanoid robots - into a robotic Kinova Jaco2 arm.
The robot personality is defined through the cognitive architecture as a vector
in the three-dimensional space encompassing Conscientiousness, Extroversion,
and Agreeableness, affecting how actions are executed, the action selection
process, and the internal reaction to environmental stimuli. Our main objective
is to determine whether users perceive distinct personalities in the robot,
regardless of its shape, and to understand the role language plays in shaping
these perceptions. To achieve this, we conducted a user study comprising 144
sessions of a collaborative game between a Kinova Jaco2 arm and participants,
where the robot's behavior was influenced by its assigned personality.
Furthermore, we compared two conditions: in the first, the robot communicated
solely through gestures and action choices, while in the second, it also
utilized verbal interaction.
|
2501.06868
|
Variable Selection Methods for Multivariate, Functional, and Complex
Biomedical Data in the AI Age
|
stat.ML cs.LG stat.AP stat.ME
|
Many problems within personalized medicine and digital health rely on the
analysis of continuous-time functional biomarkers and other complex data
structures emerging from high-resolution patient monitoring. In this context,
this work proposes new optimization-based variable selection methods for
multivariate, functional, and even more general outcomes in metric spaces
based on best-subset selection. Our framework applies to several types of
regression models, including linear, quantile, or nonparametric additive
models, and to a broad range of random responses, such as univariate,
multivariate Euclidean data, functional, and even random graphs. Our analysis
demonstrates that our proposed methodology outperforms state-of-the-art methods
in accuracy and, especially, in speed, achieving several orders of magnitude
improvement over competitors across various types of statistical responses,
such as mathematical functions. While our framework is general and not
designed for a specific regression or scientific problem, the article is
self-contained and focuses on biomedical applications. In clinical areas, it
serves as a valuable resource for professionals in biostatistics, statistics,
and artificial intelligence interested in variable selection problems in this
new technological AI era.
|
2501.06869
|
A Foundational Generative Model for Breast Ultrasound Image Analysis
|
cs.AI cs.CV cs.HC cs.LG
|
Foundational models have emerged as powerful tools for addressing various
tasks in clinical settings. However, their potential for breast ultrasound
analysis remains untapped. In this paper, we present BUSGen, the
first foundational generative model specifically designed for breast ultrasound
image analysis. Pretrained on over 3.5 million breast ultrasound images, BUSGen
has acquired extensive knowledge of breast structures, pathological features,
and clinical variations. With few-shot adaptation, BUSGen can generate
repositories of realistic and informative task-specific data, facilitating the
development of models for a wide range of downstream tasks. Extensive
experiments highlight BUSGen's exceptional adaptability, significantly
exceeding real-data-trained foundational models in breast cancer screening,
diagnosis, and prognosis. In breast cancer early diagnosis, our approach
outperformed all board-certified radiologists (n=9), achieving an average
sensitivity improvement of 16.5% (P-value<0.0001). Additionally, we
characterized the scaling effect of using generated data, which was as effective
as collected real-world data for training diagnostic models. Moreover,
extensive experiments demonstrated that our approach improved the
generalization ability of downstream models. Importantly, BUSGen protected
patient privacy by enabling fully de-identified data sharing, advancing
secure medical data utilization. An online demo of BUSGen is
available at https://aibus.bio.
|
2501.06873
|
Causal Claims in Economics
|
econ.GN cs.CL cs.IR cs.SI q-fin.EC stat.ME
|
We analyze over 44,000 NBER and CEPR working papers from 1980 to 2023 using a
custom language model to construct knowledge graphs that map economic concepts
and their relationships. We distinguish between general claims and those
documented via causal inference methods (e.g., DiD, IV, RDD, RCTs). We document
a substantial rise in the share of causal claims-from roughly 4% in 1990 to
nearly 28% in 2020-reflecting the growing influence of the "credibility
revolution." We find that causal narrative complexity (e.g., the depth of
causal chains) strongly predicts both publication in top-5 journals and higher
citation counts, whereas non-causal complexity tends to be uncorrelated or
negatively associated with these outcomes. Novelty is also pivotal for top-5
publication, but only when grounded in credible causal methods: introducing
genuinely new causal edges or paths markedly increases both the likelihood of
acceptance at leading outlets and long-run citations, while non-causal novelty
exhibits weak or even negative effects. Papers engaging with central, widely
recognized concepts tend to attract more citations, highlighting a divergence
between factors driving publication success and long-term academic impact.
Finally, bridging underexplored concept pairs is rewarded primarily when
grounded in causal methods, yet such gap filling exhibits no consistent link
with future citations. Overall, our findings suggest that methodological rigor
and causal innovation are key drivers of academic recognition, but sustained
impact may require balancing novel contributions with conceptual integration
into established economic discourse.
|
2501.06878
|
Uncertainty-Aware Online Extrinsic Calibration: A Conformal Prediction
Approach
|
cs.CV
|
Accurate sensor calibration is crucial for autonomous systems, yet its
uncertainty quantification remains underexplored. We present the first approach
to integrate uncertainty awareness into online extrinsic calibration, combining
Monte Carlo Dropout with Conformal Prediction to generate prediction intervals
with a guaranteed level of coverage. Our method proposes a framework to enhance
existing calibration models with uncertainty quantification, compatible with
various network architectures. Validated on KITTI (RGB Camera-LiDAR) and DSEC
(Event Camera-LiDAR) datasets, we demonstrate effectiveness across different
visual sensor types, measuring performance with adapted metrics to evaluate the
efficiency and reliability of the intervals. By providing calibration
parameters with quantifiable confidence measures, we offer insights into the
reliability of calibration estimates, which can greatly improve the robustness
of sensor fusion in dynamic environments and usefully serve the Computer Vision
community.
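The split-conformal step at the heart of this approach can be sketched as follows (a minimal illustration assuming exchangeable absolute calibration residuals; the function name and interval shape are illustrative, not the paper's implementation):

```python
import numpy as np

def split_conformal_halfwidth(cal_errors, alpha=0.1):
    """Split conformal prediction: from absolute residuals on a held-out
    calibration set, return the half-width q such that intervals
    [pred - q, pred + q] cover a new point with probability >= 1 - alpha."""
    errors = np.sort(np.asarray(cal_errors, dtype=float))
    n = len(errors)
    # Finite-sample corrected quantile level: ceil((n+1)(1-alpha)) / n
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(errors, level, method="higher")
```

In the calibration setting described above, the residuals would come from comparing predicted extrinsic parameters (sampled, e.g., via Monte Carlo Dropout) against ground truth on the calibration split; the returned half-width then yields intervals with the stated coverage guarantee.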
|
2501.06879
|
Defect Detection Network In PCB Circuit Devices Based on GAN Enhanced
YOLOv11
|
cs.CE cs.AI cs.CV
|
This study proposes an advanced method for surface defect detection in
printed circuit boards (PCBs) using an improved YOLOv11 model enhanced with a
generative adversarial network (GAN). The approach focuses on identifying six
common defect types: missing hole, rat bite, open circuit, short circuit, burr,
and virtual welding. By employing a GAN to generate synthetic defect images, the
dataset is augmented with diverse and realistic patterns, improving the model's
ability to generalize, particularly for complex and infrequent defects like
burrs. The enhanced YOLOv11 model is evaluated on a PCB defect dataset,
demonstrating significant improvements in accuracy, recall, and robustness,
especially when dealing with defects in complex environments or small targets.
This research contributes to the broader field of electronic design automation
(EDA), where efficient defect detection is a crucial step in ensuring
high-quality PCB manufacturing. By integrating advanced deep learning
techniques, this approach enhances the automation and precision of defect
detection, reducing reliance on manual inspection and accelerating
design-to-production workflows. The findings underscore the importance of
incorporating GAN-based data augmentation and optimized detection architectures
in EDA processes, providing valuable insights for improving reliability and
efficiency in PCB defect detection within industrial applications.
|
2501.06880
|
Real-Time Neural-Enhancement for Online Cloud Gaming
|
cs.NI cs.CV
|
Online Cloud gaming demands real-time, high-quality video transmission across
variable wide-area networks (WANs). Neural-enhanced video transmission
algorithms that employ super-resolution (SR) for video quality enhancement have
proven effective in challenging WAN environments. However, these SR-based
methods require intensive fine-tuning over the whole video, making them
infeasible for diverse online cloud gaming. To address this, we introduce
River, a cloud
gaming delivery framework designed based on the observation that video segment
features in cloud gaming are typically repetitive and redundant. This permits a
significant opportunity to reuse fine-tuned SR models, reducing fine-tuning
latency from minutes to query latency of milliseconds. To realize this idea, we
design a practical system that addresses several challenges, such as model
organization, online model scheduler, and transfer strategy. River first builds
a content-aware encoder that fine-tunes SR models for diverse video segments
and stores them in a lookup table. When delivering cloud gaming video streams
online, River checks the video features and retrieves the most relevant SR
models to enhance the frame quality. Meanwhile, if no existing SR model
performs well enough for some video segments, River will further fine-tune new
models and update the lookup table. Finally, to avoid the overhead of streaming
model weight to the clients, River designs a prefetching strategy that predicts
the models with the highest possibility of being retrieved. Our evaluation
based on real video game streaming demonstrates River can reduce redundant
training overhead by 44% and improve the Peak-Signal-to-Noise-Ratio by 1.81dB
compared to the SOTA solutions. Practical deployment shows River meets
real-time requirements, achieving approximately 720p 20fps on mobile devices.
|
2501.06884
|
Transforming Vision Transformer: Towards Efficient Multi-Task
Asynchronous Learning
|
cs.CV
|
Multi-Task Learning (MTL) for Vision Transformer aims at enhancing the model
capability by tackling multiple tasks simultaneously. Most recent works have
predominantly focused on designing Mixture-of-Experts (MoE) structures and
integrating Low-Rank Adaptation (LoRA) to efficiently perform multi-task
learning. However, their rigid combination hampers both the optimization of MoE
and the effectiveness of reparameterization of LoRA, leading to sub-optimal
performance and low inference speed. In this work, we propose a novel approach
dubbed Efficient Multi-Task Learning (EMTAL) by transforming a pre-trained
Vision Transformer into an efficient multi-task learner during training, and
reparameterizing the learned structure for efficient inference. Specifically,
we first develop the MoEfied LoRA structure, which decomposes the pre-trained
Transformer into a low-rank MoE structure and employs LoRA to fine-tune the
parameters. Subsequently, we take into account the intrinsic asynchronous
nature of multi-task learning and devise a learning Quality Retaining (QR)
optimization mechanism, by leveraging the historical high-quality class logits
to prevent a well-trained task from performance degradation. Finally, we design
a router fading strategy to integrate the learned parameters into the original
Transformer, achieving efficient inference. Extensive experiments on public
benchmarks demonstrate the superiority of our method, compared to the
state-of-the-art multi-task learning approaches.
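The reparameterization step builds on the standard LoRA weight merge, which can be sketched generically (this is the generic merge, not the paper's specific router-fading procedure):

```python
import numpy as np

def merge_lora(W, A, B, scale=1.0):
    """Fold a low-rank adapter update into the frozen base weight for
    inference: W' = W + scale * (B @ A), so no extra matmuls or routing
    remain at test time."""
    return W + scale * (B @ A)
```

After merging, the adapter branch can be dropped entirely, which is what makes reparameterized inference as fast as the original Transformer.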
|
2501.06887
|
MedGrad E-CLIP: Enhancing Trust and Transparency in AI-Driven Skin
Lesion Diagnosis
|
cs.CV cs.AI cs.ET cs.LG
|
As deep learning models gain traction in medical data analysis, ensuring transparent
and trustworthy decision-making is essential. In skin cancer diagnosis, while
advancements in lesion detection and classification have improved accuracy, the
black-box nature of these methods poses challenges in understanding their
decision processes, leading to trust issues among physicians. This study
leverages the CLIP (Contrastive Language-Image Pretraining) model, trained on
different skin lesion datasets, to capture meaningful relationships between
visual features and diagnostic criteria terms. To further enhance transparency,
we propose a method called MedGrad E-CLIP, which builds on gradient-based
E-CLIP by incorporating a weighted entropy mechanism designed for complex
medical imaging like skin lesions. This approach highlights critical image
regions linked to specific diagnostic descriptions. The developed integrated
pipeline not only classifies skin lesions by matching corresponding
descriptions but also adds an essential layer of explainability developed
especially for medical data. By visually explaining how different features in
an image relate to diagnostic criteria, this approach demonstrates the
potential of advanced vision-language models in medical image analysis,
ultimately improving transparency, robustness, and trust in AI-driven
diagnostic systems.
|
2501.06892
|
Language Fusion for Parameter-Efficient Cross-lingual Transfer
|
cs.CL
|
Limited availability of multilingual text corpora for training language
models often leads to poor performance on downstream tasks due to undertrained
representation spaces for languages other than English. This
'under-representation' has motivated recent cross-lingual transfer methods to
leverage the English representation space by e.g. mixing English and
'non-English' tokens at the input level or extending model parameters to
accommodate new languages. However, these approaches often come at the cost of
increased computational complexity. We propose Fusion for Language
Representations (FLARE) in adapters, a novel method that enhances
representation quality and downstream performance for languages other than
English while maintaining parameter efficiency. FLARE integrates source and
target language representations within low-rank (LoRA) adapters using
lightweight linear transformations, maintaining parameter efficiency while
improving transfer performance. A series of experiments across representative
cross-lingual natural language understanding tasks, including natural language
inference, question-answering and sentiment analysis, demonstrate FLARE's
effectiveness. FLARE achieves performance improvements of 4.9% for Llama 3.1
and 2.2% for Gemma~2 compared to standard LoRA fine-tuning on
question-answering tasks, as measured by the exact match metric.
|
2501.06896
|
Introduction to the Usage of Open Data from the Large Hadron Collider
for Computer Scientists in the Context of Machine Learning
|
cs.LG hep-ex physics.data-an
|
Deep learning techniques have evolved rapidly in recent years, significantly
impacting various scientific fields, including experimental particle physics.
To effectively leverage the latest developments in computer science for
particle physics, a strengthened collaboration between computer scientists and
physicists is essential. As all machine learning techniques depend on the
availability and comprehensibility of extensive data, clear data descriptions
and commonly used data formats are prerequisites for successful collaboration.
In this study, we converted open data from the Large Hadron Collider, recorded
in the ROOT data format commonly used in high-energy physics, to pandas
DataFrames, a well-known format in computer science. Additionally, we provide a
brief introduction to the data's content and interpretation. This paper aims to
serve as a starting point for future interdisciplinary collaborations between
computer scientists and physicists, fostering closer ties and facilitating
efficient knowledge exchange.
|
2501.06897
|
ActiveGAMER: Active GAussian Mapping through Efficient Rendering
|
cs.CV cs.RO
|
We introduce ActiveGAMER, an active mapping system that utilizes 3D Gaussian
Splatting (3DGS) to achieve high-quality, real-time scene mapping and
exploration. Unlike traditional NeRF-based methods, which are computationally
demanding and restrict active mapping performance, our approach leverages the
efficient rendering capabilities of 3DGS, allowing effective and efficient
exploration in complex environments. The core of our system is a
rendering-based information gain module that dynamically identifies the most
informative viewpoints for next-best-view planning, enhancing both geometric
and photometric reconstruction accuracy. ActiveGAMER also integrates a
carefully balanced framework, combining coarse-to-fine exploration,
post-refinement, and a global-local keyframe selection strategy to maximize
reconstruction completeness and fidelity. Our system autonomously explores and
reconstructs environments with state-of-the-art geometric and photometric
accuracy and completeness, significantly surpassing existing approaches in both
aspects. Extensive evaluations on benchmark datasets such as Replica and MP3D
highlight ActiveGAMER's effectiveness in active mapping tasks.
|
2501.06903
|
Synthetic Prior for Few-Shot Drivable Head Avatar Inversion
|
cs.CV
|
We present SynShot, a novel method for the few-shot inversion of a drivable
head avatar based on a synthetic prior. We tackle two major challenges. First,
training a controllable 3D generative network requires a large number of
diverse sequences, for which pairs of images and high-quality tracked meshes
are not always available. Second, state-of-the-art monocular avatar models
struggle to generalize to new views and expressions, lacking a strong prior and
often overfitting to a specific viewpoint distribution. Inspired by machine
learning models trained solely on synthetic data, we propose a method that
learns a prior model from a large dataset of synthetic heads with diverse
identities, expressions, and viewpoints. With few input images, SynShot
fine-tunes the pretrained synthetic prior to bridge the domain gap, modeling a
photorealistic head avatar that generalizes to novel expressions and
viewpoints. We model the head avatar using 3D Gaussian splatting and a
convolutional encoder-decoder that outputs Gaussian parameters in UV texture
space. To account for the different modeling complexities over parts of the
head (e.g., skin vs hair), we embed the prior with explicit control for
upsampling the number of per-part primitives. Compared to state-of-the-art
monocular methods that require thousands of real training images, SynShot
significantly improves novel view and expression synthesis.
|
2501.06904
|
From Simulation to Field: Learning Terrain Traversability for Real-World
Deployment
|
cs.RO
|
The challenge of traversability estimation is a crucial aspect of autonomous
navigation in unstructured outdoor environments such as forests. It involves
determining whether certain areas are passable or risky for robots, taking into
account factors like terrain irregularities, slopes, and potential obstacles.
The majority of current methods for traversability estimation operate on the
assumption of an offline computation, overlooking the significant influence of
the robot's heading direction on accurate traversability estimates. In this
work, we introduce a deep neural network that uses detailed geometric
environmental data together with the robot's recent movement characteristics.
This fusion enables the generation of robot direction awareness and continuous
traversability estimates, essential for enhancing robot autonomy in challenging
terrains like dense forests. The efficacy and significance of our approach are
underscored by experiments conducted on both simulated and real robotic
platforms in various environments, yielding quantitatively superior performance
results compared to existing methods. Moreover, we demonstrate that our method,
trained exclusively in a high-fidelity simulated setting, can accurately
predict traversability in real-world applications without any real data
collection. Our experiments showcase the advantages of our method for
optimizing path-planning and exploration tasks within difficult outdoor
environments, underscoring its practicality for effective, real-world robotic
navigation. In the spirit of collaborative advancement, we have made the code
implementation available to the public.
|
2501.06907
|
Deep Learning and Foundation Models for Weather Prediction: A Survey
|
cs.LG
|
Physics-based numerical models have been the bedrock of atmospheric sciences
for decades, offering robust solutions but often at the cost of significant
computational resources. Deep learning (DL) models have emerged as powerful
tools in meteorology, capable of analyzing complex weather and climate data by
learning intricate dependencies and providing rapid predictions once trained.
While these models demonstrate promising performance in weather prediction,
often surpassing traditional physics-based methods, they still face critical
challenges. This paper presents a comprehensive survey of recent deep learning
and foundation models for weather prediction. We propose a taxonomy to classify
existing models based on their training paradigms: deterministic predictive
learning, probabilistic generative learning, and pre-training and fine-tuning.
For each paradigm, we delve into the underlying model architectures, address
major challenges, offer key insights, and propose targeted directions for
future research. Furthermore, we explore real-world applications of these
methods and provide a curated summary of open-source code repositories and
widely used datasets, aiming to bridge research advancements with practical
implementations while fostering open and trustworthy scientific practices in
adopting cutting-edge artificial intelligence for weather prediction. The
related sources are available at
https://github.com/JimengShi/DL-Foundation-Models-Weather.
|
2501.06909
|
Local Foreground Selection aware Attentive Feature Reconstruction for
few-shot fine-grained plant species classification
|
cs.CV
|
Plant species exhibit significant intra-class variation and minimal
inter-class variation. To enhance classification accuracy, it is essential to
reduce intra-class variation while maximizing inter-class variation. This paper
addresses plant species classification using a limited number of labelled
samples and introduces a novel Local Foreground Selection (LFS) attention
mechanism. LFS is a straightforward module designed to generate discriminative
support and query feature maps. It operates by integrating two types of
attention: local attention, which captures local spatial details to enhance
feature discrimination and increase inter-class differentiation, and foreground
selection attention, which emphasizes the foreground plant object while
mitigating background interference. By focusing on the foreground, the query
and support features selectively highlight relevant feature sequences and
disregard less significant background sequences, thereby reducing intra-class
differences. Experimental results from three plant species datasets demonstrate
the effectiveness of the proposed LFS attention mechanism and its complementary
advantages over previous feature reconstruction methods.
|
2501.06910
|
A General Framework for Error-controlled Unstructured Scientific Data
Compression
|
cs.IT math.IT
|
Data compression plays a key role in reducing storage and I/O costs.
Traditional lossy methods primarily target data on rectilinear grids and cannot
leverage the spatial coherence in unstructured mesh data, leading to suboptimal
compression ratios. We present a multi-component, error-bounded compression
framework designed to enhance the compression of floating-point unstructured
mesh data, which is common in scientific applications. Our approach involves
interpolating mesh data onto a rectilinear grid and then separately compressing
the grid interpolation and the interpolation residuals. This method is general,
independent of mesh types and topologies, and can be seamlessly integrated with
existing lossy compressors for improved performance. We evaluated our framework
across twelve variables from two synthetic datasets and two real-world
simulation datasets. The results indicate that the multi-component framework
consistently outperforms state-of-the-art lossy compressors on unstructured
data, achieving, on average, a $2.3-3.5\times$ improvement in compression
ratios, with error bounds ranging from $10^{-6}$ to $10^{-2}$. We further
investigate the impact of hyperparameters, such as grid spacing and error
allocation, to deliver optimal compression ratios in diverse datasets.
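The multi-component idea, a coarse predictor plus error-bounded residual coding, can be illustrated with a toy one-dimensional sketch (here a rounding step stands in for the grid interpolation, and a uniform quantizer stands in for the off-the-shelf lossy compressor):

```python
import numpy as np

def compress_with_residuals(values, error_bound):
    """Two-component sketch of error-bounded compression: a coarse
    predictor plus quantized residuals that restore the pointwise bound."""
    # Component 1: coarse approximation (stand-in for the grid interpolant)
    coarse = np.round(values)
    residual = values - coarse
    # Component 2: quantize residuals with step 2*eb, so the
    # reconstruction error is at most error_bound pointwise
    q = np.round(residual / (2 * error_bound)).astype(int)
    recon = coarse + q * (2 * error_bound)
    return q, recon
```

In the actual framework the two components are each fed to an existing lossy compressor; the sketch only shows why splitting a field into a smooth predictor plus small residuals preserves the pointwise error bound.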
|
2501.06911
|
Risk-Averse Finetuning of Large Language Models
|
cs.AI cs.CL
|
We consider the challenge of mitigating the generation of negative or toxic
content by Large Language Models (LLMs) in response to certain prompts. We
propose integrating risk-averse principles into LLM fine-tuning to minimize the
occurrence of harmful outputs, particularly rare but significant events. By
optimizing the risk measure of Conditional Value at Risk (CVaR), our
methodology trains LLMs to exhibit superior performance in avoiding toxic
outputs while maintaining effectiveness in generative tasks. Empirical
evaluations on sentiment modification and toxicity mitigation tasks demonstrate
the efficacy of risk-averse reinforcement learning with human feedback (RLHF)
in promoting a safer and more constructive online discourse environment.
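The CVaR objective underlying this fine-tuning can be illustrated numerically (a generic empirical CVaR estimator; the paper's RLHF integration, e.g. how per-sample losses are defined, is not reproduced):

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: the mean loss in the worst (1 - alpha)
    tail. Risk-averse training minimizes this instead of the plain mean,
    penalizing rare but severe outcomes (e.g., highly toxic outputs)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)   # Value at Risk threshold
    tail = losses[losses >= var]       # worst-case tail events
    return tail.mean()
```

Because CVaR is always at least the mean loss, optimizing it concentrates effort on the rare, harmful tail rather than on average-case behavior.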
|
2501.06916
|
Black-box optimization and quantum annealing for filtering out
mislabeled training instances
|
cs.LG cond-mat.stat-mech quant-ph
|
This study proposes an approach for removing mislabeled instances from
contaminated training datasets by combining surrogate model-based black-box
optimization (BBO) with postprocessing and quantum annealing. Mislabeled
training instances, a common issue in real-world datasets, often degrade model
generalization, necessitating robust and efficient noise-removal strategies.
The proposed method evaluates filtered training subsets based on validation
loss, iteratively refines loss estimates through surrogate model-based BBO with
postprocessing, and leverages quantum annealing to efficiently sample diverse
training subsets with low validation error. Experiments on a noisy majority bit
task demonstrate the method's ability to prioritize the removal of high-risk
mislabeled instances. Integrating D-Wave's clique sampler running on a physical
quantum annealer achieves faster optimization and higher-quality training
subsets compared to OpenJij's simulated quantum annealing sampler or Neal's
simulated annealing sampler, offering a scalable framework for enhancing
dataset quality. This work highlights the effectiveness of the proposed method
for supervised learning tasks, with future directions including its application
to unsupervised learning, real-world datasets, and large-scale implementations.
|
2501.06917
|
Optimizing Phase Allocation in Unbalanced Power Distribution Networks
using a Linearized DistFlow Formulation
|
eess.SY cs.SY
|
Power distribution networks, especially in North America, are often
unbalanced but are designed to keep unbalance levels within the limits
specified by IEEE, IEC, and NEMA standards. However, rapid integration of
unbalanced devices, such as electric vehicle (EV) chargers and single-phase
solar plants, can exacerbate these imbalances. This increase can trigger
protection devices, increase losses, and potentially damage devices. To address
this issue, phase swapping (or phase allocation) has been proposed. Existing
approaches predominantly rely on heuristic methods. In this work, we develop a
mixed integer linear programming (MILP) approach for phase allocation. Our
approach uses linearized DistFlow equations to represent the distribution
network and incorporates a phase consistency constraint, enforced with binary
variables, to ensure that downstream phase configurations align with upstream
configurations. We validate the proposed approach on multiple benchmark test
cases and demonstrate that it effectively improves network balance, as
quantified by various metrics.
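For reference, the single-phase LinDistFlow relations that the linearization builds on can be written as follows (generic textbook form; the paper's per-phase version and binary phase-consistency constraints are not reproduced here):

```latex
% Branch power balance at bus j (net load p_j + j q_j; k ranges over
% children of j in the radial network):
P_{ij} = p_j + \sum_{k:\, j \to k} P_{jk}, \qquad
Q_{ij} = q_j + \sum_{k:\, j \to k} Q_{jk}
% Linearized voltage drop, with v = |V|^2 and line impedance r_{ij} + j x_{ij}:
v_j = v_i - 2\,\bigl(r_{ij} P_{ij} + x_{ij} Q_{ij}\bigr)
```

Because these relations are linear in the flow and squared-voltage variables, adding binary phase-assignment variables yields a mixed integer linear program rather than a nonconvex problem.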
|
2501.06918
|
Driver Age and Its Effect on Key Driving Metrics: Insights from Dynamic
Vehicle Data
|
stat.ME cs.CV
|
By 2030, the senior population aged 65 and older is expected to increase by
over 50%, significantly raising the number of older drivers on the road.
Drivers over 70 face higher crash death rates compared to those in their
forties and fifties, underscoring the importance of developing more effective
safety interventions for this demographic. Although the impact of aging on
driving behavior has been studied, there is limited research on how these
behaviors translate into real-world driving scenarios. This study addresses
this need by leveraging Naturalistic Driving Data (NDD) to analyze driving
performance measures - specifically, speed limit adherence on interstates and
deceleration at stop intersections, both of which may be influenced by
age-related declines. Using NDD, we developed Cumulative Distribution Functions
(CDFs) to establish benchmarks for key driving behaviors among senior and young
drivers. Our analysis, which included anomaly detection, benchmark comparisons,
and accuracy evaluations, revealed significant differences in driving patterns
primarily related to speed limit adherence at 75 mph. While our approach shows
promising potential for enhancing Advanced Driver Assistance Systems (ADAS) by
providing tailored interventions based on age-specific adherence to speed limit
driving patterns, we recognize the need for additional data to refine and
validate metrics for other driving behaviors. By establishing precise
benchmarks for various driving performance metrics, ADAS can effectively
identify anomalies, such as abrupt deceleration, which may indicate impaired
driving or other safety concerns. This study lays a strong foundation for
future research aimed at improving safety interventions through detailed
driving behavior analysis.
|
2501.06919
|
Shake-VLA: Vision-Language-Action Model-Based System for Bimanual
Robotic Manipulations and Liquid Mixing
|
cs.RO
|
This paper introduces Shake-VLA, a Vision-Language-Action (VLA) model-based
system designed to enable bimanual robotic manipulation for automated cocktail
preparation. The system integrates a vision module for detecting ingredient
bottles and reading labels, a speech-to-text module for interpreting user
commands, and a language model to generate task-specific robotic instructions.
Force Torque (FT) sensors are employed to precisely measure the quantity of
liquid poured, ensuring accuracy in ingredient proportions during the mixing
process. The system architecture includes a Retrieval-Augmented Generation
(RAG) module for accessing and adapting recipes, an anomaly detection mechanism
to address ingredient availability issues, and bimanual robotic arms for
dexterous manipulation. Experimental evaluations demonstrated a high success
rate across system components: the speech-to-text module achieved a 93%
success rate in noisy environments, the vision module attained a 91% success
rate in object and label detection in cluttered environments, the anomaly
module identified 95% of discrepancies between detected ingredients and recipe
requirements, and the system achieved an overall success rate of 100% in
preparing cocktails, from recipe formulation to action generation.
|
2501.06922
|
Benchmarking YOLOv8 for Optimal Crack Detection in Civil Infrastructure
|
cs.CV
|
Ensuring the structural integrity and safety of bridges is crucial for the
reliability of transportation networks and public safety. Traditional crack
detection methods are increasingly being supplemented or replaced by advanced
artificial intelligence (AI) techniques. However, most of the models rely on
two-stage target detection algorithms, which pose concerns for real-time
applications due to their lower speed. Models such as YOLO (You Only Look
Once) have emerged as transformative tools due to their remarkable speed and
accuracy; however, the potential of the latest YOLOv8 framework in this domain
remains underexplored. This study bridges that gap by rigorously evaluating
YOLOv8's performance across five model scales (nano, small, medium, large, and
extra-large) using a high-quality Roboflow dataset. A comprehensive
hyperparameter optimization was performed, testing six state-of-the-art
optimizers: Stochastic Gradient Descent, Adaptive Moment Estimation, Adam with
Decoupled Weight Decay, Root Mean Square Propagation, Rectified Adam, and
Nesterov-accelerated Adam. Results revealed that YOLOv8, optimized with
Stochastic Gradient Descent, delivered exceptional accuracy and speed, setting
a new benchmark for real-time crack detection. Beyond its immediate
application, this research positions YOLOv8 as a foundational approach for
integrating advanced computer vision techniques into infrastructure monitoring.
By enabling more reliable and proactive maintenance of aging bridge networks,
this work paves the way for safer, more efficient transportation systems
worldwide.
|
2501.06923
|
Optimal Online Bookmaking for Binary Games
|
cs.GT cs.IT cs.LG math.IT math.OC
|
In online betting, the bookmaker can update the payoffs it offers on a
particular event many times before the event takes place, and the updated
payoffs may depend on the bets accumulated thus far. We study the problem of
bookmaking with the goal of maximizing the return in the worst-case, with
respect to the gamblers' behavior and the event's outcome. We formalize this
problem as the \emph{Optimal Online Bookmaking game}, and provide the exact
solution for the binary case. To this end, we develop the optimal bookmaking
strategy, which relies on a new technique called bi-balancing trees that
ensures the house loss is the same for all \emph{decisive} betting
sequences, where the gambler bets all its money on a single outcome in each
round.
|
2501.06925
|
A Hybrid Virtual Element Method and Deep Learning Approach for Solving
One-Dimensional Euler-Bernoulli Beams
|
cs.LG
|
A hybrid framework integrating the Virtual Element Method (VEM) with deep
learning is presented as an initial step toward developing efficient and
flexible numerical models for one-dimensional Euler-Bernoulli beams. The
primary aim is to explore a data-driven surrogate model capable of predicting
displacement fields across varying material and geometric parameters while
maintaining computational efficiency. Building upon VEM's ability to handle
higher-order polynomials and non-conforming discretizations, the method offers
a robust numerical foundation for structural mechanics. A neural network
architecture is introduced to separately process nodal and material-specific
data, effectively capturing complex interactions with minimal reliance on large
datasets. To address challenges in training, the model incorporates Sobolev
training and GradNorm techniques, ensuring balanced loss contributions and
enhanced generalization. While this framework is in its early stages, it
demonstrates the potential for further refinement and development into a
scalable alternative to traditional methods. The proposed approach lays the
groundwork for advancing numerical and data-driven techniques in beam modeling,
offering a foundation for future research in structural mechanics.
|
2501.06926
|
Automatic Double Reinforcement Learning in Semiparametric Markov
Decision Processes with Applications to Long-Term Causal Inference
|
stat.ML cs.LG stat.ME
|
Double reinforcement learning (DRL) enables statistically efficient inference
on the value of a policy in a nonparametric Markov Decision Process (MDP) given
trajectories generated by another policy. However, this approach necessarily
requires stringent overlap between the state distributions, which is often
violated in practice. To relax this requirement and extend DRL, we study
efficient inference on linear functionals of the $Q$-function (of which policy
value is a special case) in infinite-horizon, time-invariant MDPs under
semiparametric restrictions on the $Q$-function. These restrictions can reduce
the overlap requirement and lower the efficiency bound, yielding more precise
estimates. As an important example, we study the evaluation of long-term value
under domain adaptation, given a few short trajectories from the new domain and
restrictions on the difference between the domains. This can be used for
long-term causal inference. Our method combines flexible estimates of the
$Q$-function and the Riesz representer of the functional of interest (e.g., the
stationary state density ratio for policy value) and is automatic in that we do
not need to know the form of the latter, only the functional we care about. To
address potential model misspecification bias, we extend the adaptive debiased
machine learning (ADML) framework of \citet{van2023adaptive} to construct
nonparametrically valid and superefficient estimators that adapt to the
functional form of the $Q$-function. As a special case, we propose a novel
adaptive debiased plug-in estimator that uses isotonic-calibrated fitted
$Q$-iteration - a new calibration algorithm for MDPs - to circumvent the
computational challenges of estimating debiasing nuisances from min-max
objectives.
|
2501.06927
|
CULTURE3D: Cultural Landmarks and Terrain Dataset for 3D Applications
|
cs.CV
|
In this paper, we present a large-scale fine-grained dataset using
high-resolution images captured from locations worldwide. Compared to existing
datasets, our dataset offers a significantly larger size and includes a higher
level of detail, making it uniquely suited for fine-grained 3D applications.
Notably, our dataset is built using drone-captured aerial imagery, which
provides a more accurate perspective for capturing real-world site layouts and
architectural structures. By reconstructing environments with these detailed
images, our dataset supports applications such as Gaussian Splatting (via the
COLMAP format) and Structure-from-Motion (SfM) pipelines. It is compatible
with widely used techniques including SLAM, Multi-View Stereo, and Neural
Radiance Fields (NeRF), enabling accurate 3D reconstructions and point clouds.
This makes it a benchmark for reconstruction and segmentation tasks. The
dataset enables seamless integration with multi-modal data, supporting a range
of 3D applications, from architectural reconstruction to virtual tourism. Its
flexibility promotes innovation, facilitating breakthroughs in 3D modeling and
analysis.
|
2501.06929
|
Why are we living the age of AI applications right now? The long
innovation path from AI's birth to a child's bedtime magic
|
cs.CY cs.AI
|
Today, a four-year-old child who cannot yet read or write can create bedtime
stories with graphical illustrations and narrated audio, using

AI tools that seamlessly transform speech into text, generate visuals, and
convert text back into speech in a natural and engaging manner. This remarkable
example demonstrates why we are living in the age of AI applications. This
paper examines contemporary leading AI applications and traces their historical
development, highlighting the major advancements that have enabled their
realization. Five key factors are identified: 1) The evolution of computational
hardware (CPUs and GPUs), enabling the training of complex AI models 2) The
vast digital archives provided by the World Wide Web, which serve as a
foundational data resource for AI systems 3) The ubiquity of mobile computing,
with smartphones acting as powerful, accessible small computers in the hands of
billions 4) The rise of industrial-scale cloud infrastructures, offering
elastic computational power for AI training and deployment 5) Breakthroughs in
AI research, including neural networks, backpropagation, and the "Attention is
All You Need" framework, which underpin modern AI capabilities. These
innovations have elevated AI from solving narrow tasks to enabling applications
like ChatGPT that are adaptable for numerous use cases, redefining
human-computer interaction. By situating these developments within a historical
context, the paper highlights the critical milestones that have made AI's
current capabilities both possible and widely accessible, offering profound
implications for society.
|
2501.06932
|
Harnessing Large Language Models for Disaster Management: A Survey
|
cs.CL cs.CY cs.LG
|
Large language models (LLMs) have revolutionized scientific research with
their exceptional capabilities and transformed various fields. Among their
practical applications, LLMs have been playing a crucial role in mitigating
threats to human life, infrastructure, and the environment. Despite growing
research on LLMs for disaster-related applications, there remains a lack of
systematic review and in-depth analysis of LLMs for natural disaster
management. To address this gap,
this paper presents a comprehensive survey of existing LLMs in natural disaster
management, along with a taxonomy that categorizes existing works based on
disaster phases and application scenarios. By collecting public datasets and
identifying key challenges and opportunities, this study aims to guide the
professional community in developing advanced LLMs for disaster management to
enhance resilience against natural disasters.
|
2501.06933
|
Neural equilibria for long-term prediction of nonlinear conservation
laws
|
cs.LG physics.comp-ph physics.flu-dyn
|
We introduce Neural Discrete Equilibrium (NeurDE), a machine learning (ML)
approach for long-term forecasting of flow phenomena that relies on a "lifting"
of physical conservation laws into the framework of kinetic theory. The kinetic
formulation provides an excellent structure for ML algorithms by separating
nonlinear, non-local physics into a nonlinear but local relaxation to
equilibrium and a linear non-local transport. This separation allows the ML to
focus on the local nonlinear components while addressing the simpler linear
transport with efficient classical numerical algorithms. To accomplish this, we
design an operator network that maps macroscopic observables to equilibrium
states in a manner that maximizes entropy, yielding expressive BGK-type
collisions. By incorporating our surrogate equilibrium into the lattice
Boltzmann (LB) algorithm, we achieve accurate flow forecasts for a wide range
of challenging flows. We show that NeurDE enables accurate prediction of
compressible flows, including supersonic flows, while tracking shocks over
hundreds of time steps, using a small velocity lattice - a feat heretofore
unattainable without expensive numerical root finding.
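The collision/transport split that makes this kinetic lifting attractive can be seen in a minimal lattice Boltzmann update. The sketch below uses a D1Q3 lattice with a zero-velocity BGK equilibrium, i.e. plain diffusion rather than the compressible flows and learned equilibria of the paper; all names and constants are illustrative:

```python
# D1Q3 lattice Boltzmann with a BGK collision: a nonlinear-but-local
# relaxation toward equilibrium, followed by a linear non-local streaming.
W = [2 / 3, 1 / 6, 1 / 6]   # lattice weights for velocities 0, +1, -1
TAU = 0.8                   # BGK relaxation time

def step(f):
    """One collide-and-stream update on a periodic grid; f holds 3 populations."""
    n = len(f[0])
    rho = [f[0][x] + f[1][x] + f[2][x] for x in range(n)]
    # Collision: local relaxation toward the equilibrium f_eq_i = w_i * rho.
    post = [[fi[x] - (fi[x] - W[i] * rho[x]) / TAU for x in range(n)]
            for i, fi in enumerate(f)]
    # Streaming: shift each moving population by its lattice velocity.
    return [post[0],
            [post[1][(x - 1) % n] for x in range(n)],
            [post[2][(x + 1) % n] for x in range(n)]]

def density(f):
    return [f[0][x] + f[1][x] + f[2][x] for x in range(len(f[0]))]
```

NeurDE replaces the hand-written equilibrium above with an entropy-maximizing operator network, but the collide-then-stream loop is the same.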
|
2501.06934
|
A group-theoretic framework for machine learning in hyperbolic spaces
|
cs.LG
|
Embedding the data in hyperbolic spaces can preserve complex relationships in
very few dimensions, thus enabling compact models and improving efficiency of
machine learning (ML) algorithms. The underlying idea is that hyperbolic
representations can prevent the loss of important structural information for
certain ubiquitous types of data. However, further advances in hyperbolic ML
require more principled mathematical approaches and adequate geometric methods.
The present study aims at enhancing mathematical foundations of hyperbolic ML
by combining group-theoretic and conformal-geometric arguments with
optimization and statistical techniques. Precisely, we introduce the notion of
the mean (barycenter) and the novel family of probability distributions on
hyperbolic balls. We further propose efficient optimization algorithms for
computation of the barycenter and for maximum likelihood estimation. One can
build upon basic concepts presented here in order to design more demanding
algorithms and implement hyperbolic deep learning pipelines.
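The paper's dedicated optimization algorithms are not reproduced here, but the central object, the barycenter (Frechet mean) minimizing summed squared geodesic distances, can be illustrated on the Poincare disk with a naive finite-difference gradient descent. This is an assumption-laden sketch, not the authors' method:

```python
import math

def poincare_dist(p, q):
    """Geodesic distance on the Poincare disk; points are (x, y) with |p| < 1."""
    dp = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    den = (1 - p[0] ** 2 - p[1] ** 2) * (1 - q[0] ** 2 - q[1] ** 2)
    return math.acosh(1 + 2 * dp / den)

def barycenter(points, lr=0.05, iters=2000):
    """Frechet mean: minimize the sum of squared distances by finite-difference
    gradient descent in chart coordinates (illustrative only)."""
    p = [0.1, 0.1]  # arbitrary interior starting point
    h = 1e-5
    for _ in range(iters):
        grad = []
        for k in range(2):
            pp = list(p); pp[k] += h
            pm = list(p); pm[k] -= h
            fp = sum(poincare_dist(pp, q) ** 2 for q in points)
            fm = sum(poincare_dist(pm, q) ** 2 for q in points)
            grad.append((fp - fm) / (2 * h))
        p = [p[k] - lr * grad[k] for k in range(2)]
    return p
```

For points placed symmetrically around the origin, the minimizer is the origin itself, which makes the sketch easy to sanity-check.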
|
2501.06937
|
An Empirical Study of Deep Reinforcement Learning in Continuing Tasks
|
cs.AI
|
In reinforcement learning (RL), continuing tasks refer to tasks where the
agent-environment interaction is ongoing and cannot be broken down into
episodes. These tasks are suitable when environment resets are unavailable,
agent-controlled, or predefined but where all rewards - including those beyond
resets - are critical. These scenarios frequently occur in real-world
applications and cannot be modeled as episodic tasks. While modern deep RL
algorithms have been extensively studied and well understood in episodic tasks,
their behavior in continuing tasks remains underexplored. To address this gap,
we provide an empirical study of several well-known deep RL algorithms using a
suite of continuing task testbeds based on Mujoco and Atari environments,
highlighting several key insights concerning continuing tasks. Using these
testbeds, we also investigate the effectiveness of a method for improving
temporal-difference-based RL algorithms in continuing tasks by centering
rewards, as introduced by Naik et al. (2024). While their work primarily
focused on this method in conjunction with Q-learning, our results extend their
findings by demonstrating that this method is effective across a broader range
of algorithms, scales to larger tasks, and outperforms two other
reward-centering approaches.
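Reward centering itself is easy to illustrate: learn a running estimate of the average reward and subtract it before the TD update, which keeps value estimates bounded as the discount approaches 1. The sketch below applies a simplified running-mean variant on a toy two-state continuing MDP (Naik et al. also study a TD-error-based update; this is not their exact method, and all names and constants are illustrative):

```python
import random

def centered_q_learning(steps=20000, gamma=0.99, alpha=0.1, beta=0.01, seed=0):
    """Discounted Q-learning with centered rewards r - rbar on a toy
    2-state continuing MDP whose reward is always 1."""
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # Q[state][action]
    rbar = 0.0
    s = 0
    for _ in range(steps):
        a = rng.randrange(2)       # explore uniformly
        r = 1.0                    # constant reward in this toy task
        s2 = a                     # the action deterministically picks the next state
        rbar += beta * (r - rbar)  # running estimate of the average reward
        target = (r - rbar) + gamma * max(q[s2])
        q[s][a] += alpha * (target - q[s][a])
        s = s2
    return q, rbar
```

Without centering, the Q-values in this task would approach r/(1-gamma) = 100; with centering they stay near zero while rbar converges to the true average reward of 1.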
|
2501.06938
|
Evaluating unsupervised contrastive learning framework for MRI sequences
classification
|
cs.CV eess.IV
|
The automatic identification of Magnetic Resonance Imaging (MRI) sequences
can streamline clinical workflows by reducing the time radiologists spend
manually sorting and identifying sequences, thereby enabling faster diagnosis
and treatment planning for patients. However, the lack of standardization in
the parameters of MRI scans poses challenges for automated systems and
complicates the generation and utilization of datasets for machine learning
research. To address this issue, we propose a system for MRI sequence
identification using an unsupervised contrastive deep learning framework. By
training a convolutional neural network based on the ResNet-18 architecture,
our system classifies nine common MRI sequence types. The network was trained
on an in-house dataset and validated on several public datasets, including
BraTS, ADNI, the Fused Radiology-Pathology Prostate Dataset, and the Breast
Cancer Dataset (ACRIN), encompassing diverse acquisition protocols and
requiring only 2D slices for training. Our system achieves a classification
accuracy of over 0.95 across
the nine most common MRI sequence types.
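The abstract does not name the contrastive objective; a common choice in unsupervised contrastive frameworks is the SimCLR-style NT-Xent loss, sketched below in pure Python under that assumption (function name, temperature, and toy embeddings are illustrative, not from the paper):

```python
import math

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for two lists of embeddings, where (z1[i], z2[i]) are the
    two augmented views of sample i. A generic sketch of the objective."""
    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    z = [norm(v) for v in z1 + z2]
    n = len(z1)
    total = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)  # index of the positive pair
        # Denominator: similarities to every other embedding in the batch.
        sims = [math.exp(sum(a * b for a, b in zip(z[i], z[k])) / tau)
                for k in range(2 * n) if k != i]
        pos = math.exp(sum(a * b for a, b in zip(z[i], z[j])) / tau)
        total += -math.log(pos / sum(sims))
    return total / (2 * n)
```

Minimizing this loss pulls the two views of each slice together while pushing apart views of different slices, which is what lets sequence types cluster without labels.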
|
2501.06939
|
Super-Resolution of 3D Micro-CT Images Using Generative Adversarial
Networks: Enhancing Resolution and Segmentation Accuracy
|
eess.IV cs.CV cs.LG
|
We develop a procedure for substantially improving the quality of segmented
3D micro-Computed Tomography (micro-CT) images of rocks with a Machine Learning
(ML) Generative Model. The proposed model enhances the resolution eightfold
(8x) and addresses segmentation inaccuracies caused by the overlapping X-ray
attenuation of different rock minerals and phases in micro-CT measurements. The
proposed generative model is a 3D Deep Convolutional Wasserstein Generative
Adversarial Network with Gradient Penalty (3D DC WGAN-GP). The algorithm is
trained on segmented 3D low-resolution micro-CT images and segmented unpaired
complementary 2D high-resolution Laser Scanning Microscope (LSM) images. The
algorithm was demonstrated on multiple samples of Berea sandstones. We achieved
high-quality super-resolved 3D images with a resolution of 0.4375 micro-m/voxel
and accurate segmentation of the constituent minerals and pore space. The
described procedure can significantly expand the modern capabilities of digital
rock physics.
|
2501.06940
|
Collaborative Human Activity Recognition with Passive Inter-Body
Electrostatic Field
|
eess.SY cs.SY
|
The passive body-area electrostatic field has recently been explored for
wearable motion sensing, harnessing its two appealing characteristics:
full-body motion sensitivity and environmental sensitivity. These potentially
enable human activity recognition both independently and jointly from a single
sensing front-end, and give the modality a theoretical edge over traditional
inertial sensors, which cannot sense environmental
variations. While most works focus on exploring the
electrostatic field of a single body as the target, this work, for the first
time, quantitatively evaluates the mutual effect of inter-body electrostatic
fields and its contribution to collaborative activity recognition. A wearable
electrostatic field sensing front-end and wrist-worn prototypes are built, and
a sixteen-hour, manually annotated dataset is collected, involving an
experiment of manipulating objects both independently and collaboratively. A
regression model is finally used to recognize the collaborative activities
among users. Despite the theoretical advantages of the body electrostatic
field, recognition of both single and collaborative activities shows
unexpectedly less competitive performance compared with the
accelerometer. However, it is worth noting that this novel sensing modality
improves the recognition F-score of user collaboration by 16\% in the fusion
result of the two wearable motion sensing modalities, demonstrating the
potential of bringing body electrostatic field as a complementary
power-efficient signal for collaborative activity tracking using wearables.
|
2501.06942
|
Comparison of Autoencoders for tokenization of ASL datasets
|
cs.LG cs.CV
|
Generative AI, powered by large language models (LLMs), has revolutionized
applications across text, audio, images, and video. This study focuses on
developing and evaluating encoder-decoder architectures for the American Sign
Language (ASL) image dataset, consisting of 87,000 images across 29 hand sign
classes. Three approaches were compared: Feedforward Autoencoders,
Convolutional Autoencoders, and Diffusion Autoencoders. The Diffusion
Autoencoder outperformed the others, achieving the lowest mean squared error
(MSE) and highest Mean Opinion Score (MOS) due to its probabilistic noise
modeling and iterative denoising capabilities. The Convolutional Autoencoder
demonstrated effective spatial feature extraction but lacked the robustness of
the diffusion process, while the Feedforward Autoencoder served as a baseline
with limitations in handling complex image data. Objective and subjective
evaluations confirmed the superiority of the Diffusion Autoencoder for
high-fidelity image reconstruction, emphasizing its potential in multimodal AI
applications such as sign language recognition and generation. This work
provides critical insights into designing robust encoder-decoder systems to
advance multimodal AI capabilities.
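As a toy illustration of the Feedforward Autoencoder baseline (not the paper's architecture, data, or hyperparameters), a one-hidden-layer linear autoencoder with hand-derived gradients fits in a few lines:

```python
import random

def train_linear_autoencoder(data, hidden=2, lr=0.01, epochs=200, seed=0):
    """Tiny linear autoencoder trained with per-sample gradient descent;
    returns the final mean reconstruction error."""
    rng = random.Random(seed)
    d = len(data[0])
    w1 = [[rng.uniform(-0.1, 0.1) for _ in range(d)] for _ in range(hidden)]
    w2 = [[rng.uniform(-0.1, 0.1) for _ in range(hidden)] for _ in range(d)]

    def loss():
        total = 0.0
        for x in data:
            h = [sum(w1[i][k] * x[k] for k in range(d)) for i in range(hidden)]
            xh = [sum(w2[j][i] * h[i] for i in range(hidden)) for j in range(d)]
            total += sum((xh[j] - x[j]) ** 2 for j in range(d))
        return total / len(data)

    for _ in range(epochs):
        for x in data:
            h = [sum(w1[i][k] * x[k] for k in range(d)) for i in range(hidden)]
            xh = [sum(w2[j][i] * h[i] for i in range(hidden)) for j in range(d)]
            e = [xh[j] - x[j] for j in range(d)]  # reconstruction error
            # Backprop: dL/dw2[j][i] = 2 e_j h_i; dL/dw1[i][k] = 2 (W2^T e)_i x_k
            bh = [sum(w2[j][i] * e[j] for j in range(d)) for i in range(hidden)]
            for j in range(d):
                for i in range(hidden):
                    w2[j][i] -= lr * 2 * e[j] * h[i]
            for i in range(hidden):
                for k in range(d):
                    w1[i][k] -= lr * 2 * bh[i] * x[k]
    return loss()
```

When the data lie in a subspace of dimension at most `hidden`, this bottleneck can reconstruct them almost perfectly, which is the property the paper's MSE comparison measures at much larger scale.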
|
2501.06946
|
Learning Implicit Social Navigation Behavior using Deep Inverse
Reinforcement Learning
|
cs.RO
|
This paper reports on learning a reward map for social navigation in dynamic
environments where the robot can reason about its path at any time, given
agents' trajectories and scene geometry. Humans navigating in dense and dynamic
indoor environments often work with several implied social rules. A rule-based
approach fails to model all possible interactions between humans, robots, and
scenes. We propose a novel Smooth Maximum Entropy Deep Inverse Reinforcement
Learning (S-MEDIRL) algorithm that can extrapolate beyond expert demos to
better encode scene navigability from few-shot demonstrations. The agent learns
to predict cost maps by reasoning over trajectory data and scene geometry. The
agent samples a trajectory that is then executed using a local crowd navigation
controller. We present results in a photo-realistic simulation environment,
with a robot and a human navigating a narrow crossing scenario. The robot
implicitly learns to exhibit social behaviors such as yielding to oncoming
traffic and avoiding deadlocks. We compare the proposed approach to the popular
model-based crowd navigation algorithm ORCA and a rule-based agent that
exhibits yielding.
|