| id | title | categories | abstract |
|---|---|---|---|
2501.13954
|
Chat3GPP: An Open-Source Retrieval-Augmented Generation Framework for
3GPP Documents
|
cs.CL cs.AI cs.DC cs.IR
|
The 3rd Generation Partnership Project (3GPP) documents are key standards in global telecommunications, yet they pose significant challenges for engineers and researchers in the field due to their large volume, complexity, and frequent updates. Large language
models (LLMs) have shown promise in natural language processing tasks, but
their general-purpose nature limits their effectiveness in specific domains
like telecommunications. To address this, we propose Chat3GPP, an open-source
retrieval-augmented generation (RAG) framework tailored for 3GPP
specifications. By combining chunking strategies, hybrid retrieval and
efficient indexing methods, Chat3GPP can efficiently retrieve relevant
information and generate accurate responses to user queries without requiring domain-specific fine-tuning. This design is both flexible and scalable, offering significant potential for adaptation to other technical standards beyond 3GPP. We
evaluate Chat3GPP on two telecom-specific datasets and demonstrate its superior
performance compared to existing methods, showcasing its potential for
downstream tasks like protocol generation and code automation.
|
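The hybrid retrieval idea in the Chat3GPP abstract above can be sketched by fusing a sparse keyword score with a dense similarity score. This is an illustrative stand-in, not the paper's implementation: the keyword-overlap score substitutes for BM25, the bag-of-words cosine substitutes for a learned embedding model, and `hybrid_retrieve` plus the example chunks are hypothetical.

```python
import math

def sparse_score(query, doc):
    # Keyword-overlap score (a BM25 stand-in for illustration).
    q, d = set(query.lower().split()), doc.lower().split()
    return sum(d.count(t) for t in q) / (len(d) + 1)

def dense_score(query, doc):
    # Bag-of-words cosine similarity (a stand-in for an embedding model).
    def vec(text):
        v = {}
        for t in text.lower().split():
            v[t] = v.get(t, 0) + 1
        return v
    qv, dv = vec(query), vec(doc)
    dot = sum(qv[t] * dv.get(t, 0) for t in qv)
    norm = math.sqrt(sum(x * x for x in qv.values())) * \
           math.sqrt(sum(x * x for x in dv.values()))
    return dot / norm if norm else 0.0

def hybrid_retrieve(query, chunks, alpha=0.5, k=2):
    # Weighted fusion of sparse and dense scores; return the top-k chunks.
    scored = [(alpha * sparse_score(query, c) + (1 - alpha) * dense_score(query, c), c)
              for c in chunks]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

chunks = [
    "The NR physical layer supports flexible numerology.",
    "Handover procedures are defined in TS 38.331.",
    "This section describes handover signalling between gNBs.",
]
print(hybrid_retrieve("handover procedure signalling", chunks, k=2))
```

A real pipeline would chunk the 3GPP specification text first and index the dense vectors for efficiency, as the abstract's "efficient indexing methods" suggests.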
2501.13955
|
Guided Persona-based AI Surveys: Can we replicate personal mobility
preferences at scale using LLMs?
|
cs.CL cs.AI cs.CY
|
This study explores the potential of Large Language Models (LLMs) to generate
artificial surveys, with a focus on personal mobility preferences in Germany.
By leveraging LLMs for synthetic data creation, we aim to address the
limitations of traditional survey methods, such as high costs, inefficiency and
scalability challenges. A novel approach incorporating "Personas" -
combinations of demographic and behavioural attributes - is introduced and
compared to five other synthetic survey methods, which vary in their use of
real-world data and methodological complexity. The MiD 2017 dataset, a
comprehensive mobility survey in Germany, serves as a benchmark to assess the
alignment of synthetic data with real-world patterns. The results demonstrate
that LLMs can effectively capture complex dependencies between demographic
attributes and preferences while offering flexibility to explore hypothetical
scenarios. This approach presents valuable opportunities for transportation
planning and social science research, enabling scalable, cost-efficient and
privacy-preserving data generation.
|
2501.13956
|
Zep: A Temporal Knowledge Graph Architecture for Agent Memory
|
cs.CL cs.AI cs.IR
|
We introduce Zep, a novel memory layer service for AI agents that outperforms
the current state-of-the-art system, MemGPT, in the Deep Memory Retrieval (DMR)
benchmark. Additionally, Zep excels in more comprehensive and challenging
evaluations than DMR that better reflect real-world enterprise use cases. While
existing retrieval-augmented generation (RAG) frameworks for large language
model (LLM)-based agents are limited to static document retrieval, enterprise
applications demand dynamic knowledge integration from diverse sources
including ongoing conversations and business data. Zep addresses this
fundamental limitation through its core component Graphiti -- a
temporally-aware knowledge graph engine that dynamically synthesizes both
unstructured conversational data and structured business data while maintaining
historical relationships. In the DMR benchmark, which the MemGPT team
established as their primary evaluation metric, Zep demonstrates superior
performance (94.8% vs 93.4%). Beyond DMR, Zep's capabilities are further
validated through the more challenging LongMemEval benchmark, which better
reflects enterprise use cases through complex temporal reasoning tasks. In this
evaluation, Zep achieves substantial results with accuracy improvements of up
to 18.5% while simultaneously reducing response latency by 90% compared to
baseline implementations. These results are particularly pronounced in
enterprise-critical tasks such as cross-session information synthesis and
long-term context maintenance, demonstrating Zep's effectiveness for deployment
in real-world applications.
|
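The temporally-aware knowledge graph behind Zep's Graphiti engine can be illustrated with edges that carry validity intervals, so a query can ask what was true at a given time. The `TemporalEdge` schema and `facts_at` helper below are assumptions for illustration, not Zep's actual data model.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TemporalEdge:
    # A fact with a validity interval, so the graph can answer
    # "what was true at time t" rather than only "what is true now".
    subject: str
    predicate: str
    obj: str
    valid_from: int
    valid_to: Optional[int] = None   # None means still valid

def facts_at(edges: List[TemporalEdge], t: int) -> List[Tuple[str, str, str]]:
    # Point-in-time query over the temporal knowledge graph.
    return [(e.subject, e.predicate, e.obj) for e in edges
            if e.valid_from <= t and (e.valid_to is None or t < e.valid_to)]

edges = [
    TemporalEdge("alice", "works_at", "AcmeCo", valid_from=2020, valid_to=2023),
    TemporalEdge("alice", "works_at", "InitechInc", valid_from=2023),
]
print(facts_at(edges, 2021))
print(facts_at(edges, 2024))
```

Maintaining historical relationships this way is what allows cross-session synthesis: superseded facts are closed out rather than overwritten.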
2501.13957
|
Benchmarking Generative AI for Scoring Medical Student Interviews in
Objective Structured Clinical Examinations (OSCEs)
|
cs.CL cs.AI
|
Introduction. Objective Structured Clinical Examinations (OSCEs) are widely
used to assess medical students' communication skills, but scoring
interview-based assessments is time-consuming and potentially subject to human
bias. This study explored the potential of large language models (LLMs) to
automate OSCE evaluations using the Master Interview Rating Scale (MIRS).
Methods. We compared the performance of four state-of-the-art LLMs (GPT-4o,
Claude 3.5, Llama 3.1, and Gemini 1.5 Pro) in evaluating OSCE transcripts
across all 28 items of the MIRS under the conditions of zero-shot,
chain-of-thought (CoT), few-shot, and multi-step prompting. The models were
benchmarked against a dataset of 10 OSCE cases with 174 expert consensus scores
available. Model performance was measured using three accuracy metrics (exact,
off-by-one, thresholded).
Results. Averaging across all MIRS items and OSCE cases, LLMs performed with
low exact accuracy (0.27 to 0.44), and moderate to high off-by-one accuracy
(0.67 to 0.87) and thresholded accuracy (0.75 to 0.88). A zero temperature
parameter ensured high intra-rater reliability ($\alpha = 0.98$ for GPT-4o).
CoT, few-shot, and multi-step techniques proved valuable when tailored to
specific assessment items. The performance was consistent across MIRS items
independent of encounter phases and communication domains.
Conclusion. We demonstrated the feasibility of AI-assisted OSCE evaluation
and provided benchmarking of multiple LLMs across multiple prompt techniques.
Our work provides a baseline performance assessment for LLMs that lays a
foundation for future research in automated assessment of clinical
communication skills.
|
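The three accuracy metrics named in the OSCE abstract (exact, off-by-one, thresholded) can be sketched as follows. The pass/fail cutoff of 3 on a 1-5 scale is an assumption, since the abstract does not define the threshold.

```python
def exact_acc(pred, gold):
    # Fraction of items where the model score equals the expert score.
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

def off_by_one_acc(pred, gold):
    # Credit predictions within +/-1 of the expert score.
    return sum(abs(p - g) <= 1 for p, g in zip(pred, gold)) / len(gold)

def thresholded_acc(pred, gold, cutoff=3):
    # Credit predictions on the same side of a pass/fail cutoff
    # (the cutoff value here is an assumption, not from the paper).
    return sum((p >= cutoff) == (g >= cutoff) for p, g in zip(pred, gold)) / len(gold)

pred = [3, 4, 2, 5, 1]   # hypothetical LLM scores for five MIRS items
gold = [3, 3, 4, 5, 2]   # hypothetical expert consensus scores
print(exact_acc(pred, gold), off_by_one_acc(pred, gold), thresholded_acc(pred, gold))
```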
2501.13958
|
A Survey of Graph Retrieval-Augmented Generation for Customized Large
Language Models
|
cs.CL cs.AI cs.IR
|
Large language models (LLMs) have demonstrated remarkable capabilities in a
wide range of tasks, yet their application to specialized domains remains
challenging due to the need for deep expertise. Retrieval-augmented generation
(RAG) has emerged as a promising solution to customize LLMs for professional
fields by seamlessly integrating external knowledge bases, enabling real-time
access to domain-specific expertise during inference. Despite its potential,
traditional RAG systems, based on flat text retrieval, face three critical
challenges: (i) complex query understanding in professional contexts, (ii)
difficulties in knowledge integration across distributed sources, and (iii)
system efficiency bottlenecks at scale. This survey presents a systematic
analysis of Graph-based Retrieval-Augmented Generation (GraphRAG), a new
paradigm that revolutionizes domain-specific LLM applications. GraphRAG
addresses traditional RAG limitations through three key innovations: (i)
graph-structured knowledge representation that explicitly captures entity
relationships and domain hierarchies, (ii) efficient graph-based retrieval
techniques that enable context-preserving knowledge retrieval with multihop
reasoning ability, and (iii) structure-aware knowledge integration algorithms
that leverage retrieved knowledge for accurate and logical coherent generation
of LLMs. In this survey, we systematically analyze the technical foundations of
GraphRAG and examine current implementations across various professional
domains, identifying key technical challenges and promising research
directions. All the related resources of GraphRAG, including research papers,
open-source data, and projects, are collected for the community in
\textcolor{blue}{\url{https://github.com/DEEP-PolyU/Awesome-GraphRAG}}.
|
2501.13959
|
Assisting Mathematical Formalization with A Learning-based Premise
Retriever
|
cs.CL cs.AI cs.IR
|
Premise selection is a crucial yet challenging step in mathematical
formalization, especially for users with limited experience. Due to the lack of
available formalization projects, existing approaches that leverage language
models often suffer from data scarcity. In this work, we introduce an
innovative method for training a premise retriever to support the formalization
of mathematics. Our approach employs a BERT model to embed proof states and
premises into a shared latent space. The retrieval model is trained within a
contrastive learning framework and incorporates a domain-specific tokenizer
along with a fine-grained similarity computation method. Experimental results
show that our model is highly competitive compared to existing baselines,
achieving strong performance while requiring fewer computational resources.
Performance is further enhanced through the integration of a re-ranking module.
To streamline the formalization process, we will release a search engine that
enables users to query Mathlib theorems directly using proof states,
significantly improving accessibility and efficiency. Codes are available at
https://github.com/ruc-ai4math/Premise-Retrieval.
|
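The contrastive setup described above, embedding proof states and premises into a shared space, typically uses an in-batch InfoNCE-style loss where each state's true premise is the positive and other premises in the batch are negatives. The sketch below uses toy 2-D embeddings and is an assumption about the general technique, not the paper's exact loss or tokenizer.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(state_embs, premise_embs, temperature=0.1):
    # In-batch contrastive loss: the positive premise for state i sits at
    # index i; every other premise in the batch acts as a negative.
    loss = 0.0
    for i, s in enumerate(state_embs):
        logits = [cosine(s, p) / temperature for p in premise_embs]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += log_z - logits[i]
    return loss / len(state_embs)

states   = [[1.0, 0.0], [0.0, 1.0]]
premises = [[0.9, 0.1], [0.1, 0.9]]   # aligned state/premise pairs
shuffled = [[0.1, 0.9], [0.9, 0.1]]   # mismatched pairs
print(info_nce_loss(states, premises), info_nce_loss(states, shuffled))
```

Correct pairings should yield a much lower loss than mismatched ones, which is the signal the retriever is trained on.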
2501.13960
|
LiCAR: pseudo-RGB LiDAR image for CAR segmentation
|
eess.IV cs.CV cs.RO
|
With the advancement of computing resources, an increasing number of Neural Networks (NNs) for image detection and segmentation are appearing. However, these methods usually accept an RGB 2D image as input. On the other hand, Light Detection And Ranging (LiDAR) sensors with many layers provide images similar to those obtained from a traditional low-resolution RGB camera. Following this principle, a new dataset for segmenting cars in
pseudo-RGB images has been generated. This dataset combines the information
given by the LiDAR sensor into a Spherical Range Image (SRI), concretely the
reflectivity, near infrared and signal intensity 2D images. These images are
then fed into instance segmentation NNs. These NNs segment the cars appearing in the images, achieving a Bounding Box (BB) precision of 88% and a mask precision of 81.5% with You Only Look Once (YOLO)-v8 large. Building on this segmentation NN, several trackers were applied to follow each segmented car instance along a video feed, showing strong performance in real-world experiments.
|
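The pseudo-RGB construction described above, packing reflectivity, near-infrared, and signal-intensity channels of a Spherical Range Image into one 3-channel image, might look like the sketch below. The per-channel min-max normalisation to 0-255 is an assumption, and the tiny 2x2 inputs are made up for illustration.

```python
def make_pseudo_rgb(reflectivity, near_ir, signal):
    # Stack three per-pixel LiDAR channels into a 3-channel pseudo-RGB image,
    # min-max normalising each channel to the 0-255 byte range.
    def norm(ch):
        lo = min(min(r) for r in ch)
        hi = max(max(r) for r in ch)
        span = (hi - lo) or 1
        return [[round(255 * (v - lo) / span) for v in row] for row in ch]
    r, g, b = norm(reflectivity), norm(near_ir), norm(signal)
    h, w = len(r), len(r[0])
    return [[(r[i][j], g[i][j], b[i][j]) for j in range(w)] for i in range(h)]

refl = [[0.1, 0.9], [0.5, 0.3]]   # hypothetical reflectivity SRI
nir  = [[10, 20], [30, 40]]       # hypothetical near-infrared SRI
sig  = [[100, 100], [200, 50]]    # hypothetical signal-intensity SRI
img = make_pseudo_rgb(refl, nir, sig)
print(img)
```

The resulting image can then be passed to any off-the-shelf RGB instance-segmentation network, which is the dataset's whole point.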
2501.13961
|
A Fast, Scalable, and Robust Deep Learning-based Iterative
Reconstruction Framework for Accelerated Industrial Cone-beam X-ray Computed
Tomography
|
cs.CV cs.LG
|
Cone-beam X-ray Computed Tomography (XCT) with large detectors and
corresponding large-scale 3D reconstruction plays a pivotal role in
micron-scale characterization of materials and parts across various industries.
In this work, we present a novel deep neural network-based iterative algorithm
that integrates an artifact reduction-trained CNN as a prior model with
automated regularization parameter selection, tailored for large-scale
industrial cone-beam XCT data. Our method achieves high-quality 3D
reconstructions even for extremely dense thick metal parts - which
traditionally pose challenges to industrial CT images - in just a few
iterations. Furthermore, we show the generalizability of our approach to
out-of-distribution scans obtained under diverse scanning conditions. Our
method effectively handles significant noise and streak artifacts, surpassing
state-of-the-art supervised learning methods trained on the same data.
|
2501.13962
|
Adaptive Cyber-Attack Detection in IIoT Using Attention-Based LSTM-CNN
Models
|
cs.CR cs.AI cs.LG cs.SY eess.SY
|
The rapid expansion of the industrial Internet of things (IIoT) has
introduced new challenges in securing critical infrastructures against
sophisticated cyberthreats. This study presents the development and evaluation of an advanced intrusion detection system (IDS) based on a hybrid LSTM-convolutional neural network (CNN)-Attention architecture, specifically designed to detect and classify cyberattacks in IIoT environments. The research focuses on two key classification tasks: binary and multi-class classification. The proposed model was rigorously tested using the Edge-IIoTset dataset. To mitigate the
class imbalance in the dataset, the synthetic minority over-sampling technique
(SMOTE) was employed to generate synthetic samples for the underrepresented
classes. This ensured that the model could learn effectively from all classes,
thereby improving the overall classification performance. Through systematic
experimentation, various deep learning (DL) models were compared, ultimately
demonstrating that the LSTM-CNN-Attention model consistently outperformed
others across key performance metrics. In binary classification, the model
achieved near-perfect accuracy, while in multi-class classification, it
maintained a high accuracy level (99.04%), effectively categorizing different
attack types with a loss value of 0.0220.
|
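The SMOTE step mentioned in the abstract generates synthetic minority-class samples by interpolating between a sample and one of its nearest neighbours. A minimal sketch of that core idea (not the paper's pipeline, which would operate on Edge-IIoTset feature vectors):

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    # Core SMOTE idea: pick a minority point, pick one of its k nearest
    # neighbours, and emit a random point on the segment between them.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        by_dist = sorted(minority,
                         key=lambda y: sum((a - b) ** 2 for a, b in zip(x, y)))
        neigh = rng.choice(by_dist[1:k + 1])   # skip x itself at index 0
        t = rng.random()
        synthetic.append([a + t * (b - a) for a, b in zip(x, neigh)])
    return synthetic

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # toy minority class
new_pts = smote_sample(minority)
print(new_pts)
```

Every synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its original convex region.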
2501.13963
|
Procedural Generation of 3D Maize Plant Architecture from LIDAR Data
|
cs.CV cs.LG
|
This study introduces a robust framework for generating procedural 3D models
of maize (Zea mays) plants from LiDAR point cloud data, offering a scalable
alternative to traditional field-based phenotyping. Our framework leverages
Non-Uniform Rational B-Spline (NURBS) surfaces to model the leaves of maize
plants, combining Particle Swarm Optimization (PSO) for an initial
approximation of the surface and a differentiable programming framework for
precise refinement of the surface to fit the point cloud data. In the first
optimization phase, PSO generates an approximate NURBS surface by optimizing
its control points, aligning the surface with the LiDAR data, and providing a
reliable starting point for refinement. The second phase uses NURBS-Diff, a
differentiable programming framework, to enhance the accuracy of the initial
fit by refining the surface geometry and capturing intricate leaf details. Our
results demonstrate that, while PSO establishes a robust initial fit, the
integration of differentiable NURBS significantly improves the overall quality
and fidelity of the reconstructed surface. This hierarchical optimization
strategy enables accurate 3D reconstruction of maize leaves across diverse
genotypes, facilitating the subsequent extraction of complex traits like
phyllotaxy. We demonstrate our approach on diverse genotypes of field-grown
maize plants. All our codes are open-source to democratize these phenotyping
approaches.
|
2501.13964
|
Advancing the Understanding and Evaluation of AR-Generated Scenes: When
Vision-Language Models Shine and Stumble
|
cs.CV cs.AI cs.HC
|
Augmented Reality (AR) enhances the real world by integrating virtual
content, yet ensuring the quality, usability, and safety of AR experiences
presents significant challenges. Could Vision-Language Models (VLMs) offer a solution for the automated evaluation of AR-generated scenes? In this study, we evaluate the capabilities of three
state-of-the-art commercial VLMs -- GPT, Gemini, and Claude -- in identifying
and describing AR scenes. For this purpose, we use DiverseAR, the first AR
dataset specifically designed to assess VLMs' ability to analyze virtual
content across a wide range of AR scene complexities. Our findings demonstrate
that VLMs are generally capable of perceiving and describing AR scenes,
achieving a True Positive Rate (TPR) of up to 93% for perception and 71% for
description. While they excel at identifying obvious virtual objects, such as a
glowing apple, they struggle when faced with seamlessly integrated content,
such as a virtual pot with realistic shadows. Our results highlight both the
strengths and the limitations of VLMs in understanding AR scenarios. We
identify key factors affecting VLM performance, including virtual content
placement, rendering quality, and physical plausibility. This study underscores
the potential of VLMs as tools for evaluating the quality of AR experiences.
|
2501.13965
|
ZKLoRA: Efficient Zero-Knowledge Proofs for LoRA Verification
|
cs.CR cs.AI cs.LG
|
Low-Rank Adaptation (LoRA) is a widely adopted method for customizing
large-scale language models. In distributed, untrusted training environments,
an open source base model user may want to use LoRA weights created by an
external contributor, leading to two requirements: (1) the base model user must
confirm that the LoRA weights are effective when paired with the intended base
model, and (2) the LoRA contributor must keep their proprietary weights private
until compensation is assured.
We present ZKLoRA, a zero-knowledge verification protocol that relies on
succinct proofs and our novel Multi-Party Inference procedure to verify
LoRA-base model compatibility without exposing LoRA weights. ZKLoRA produces
deterministic correctness guarantees and validates each LoRA module in only 1-2
seconds on state-of-the-art large language models. This low-latency approach
enables nearly real-time verification and promotes secure collaboration among
geographically decentralized teams and contract-based training pipelines. The
protocol ensures that the delivered LoRA module works as claimed, safeguarding
the contributor's intellectual property while providing the base model user
with verification of compatibility and lineage.
|
2501.13967
|
FedDAG: Federated Domain Adversarial Generation Towards Generalizable
Medical Image Analysis
|
cs.CV cs.AI
|
Federated domain generalization aims to train a global model from multiple
source domains and ensure its generalization ability to unseen target domains.
Because the target domain involves unknown domain shifts, approximating these gaps using the source domains may be the key to improving model generalization capability. Existing works mainly focus on sharing and
recombining local domain-specific attributes to increase data diversity and
simulate potential domain shifts. However, these methods may be insufficient, since recombining local attributes alone can hardly reach the out-of-distribution regions of the global data. In this paper, we propose a
simple-yet-efficient framework named Federated Domain Adversarial Generation
(FedDAG). It aims to simulate the domain shift and improve the model
generalization by adversarially generating novel domains different from local
and global source domains. Specifically, it generates novel-style images by
maximizing the instance-level feature discrepancy between original and
generated images and trains a generalizable task model by minimizing their
feature discrepancy. Further, we observed that FedDAG could cause different
performance improvements for local models. It may be due to inherent data
isolation and heterogeneity among clients, exacerbating the imbalance in their
generalization contributions to the global model. Ignoring this imbalance can
lead the global model's generalization ability to be sub-optimal, further
limiting the novel domain generation procedure. Thus, to mitigate this
imbalance, FedDAG hierarchically aggregates local models at the within-client
and across-client levels by using the sharpness concept to evaluate client
model generalization contributions. Extensive experiments across four medical
benchmarks demonstrate FedDAG's ability to enhance generalization in federated
medical scenarios.
|
2501.13968
|
Triplet Synthesis For Enhancing Composed Image Retrieval via
Counterfactual Image Generation
|
cs.CV cs.LG eess.IV
|
Composed Image Retrieval (CIR) provides an effective way to manage and access
large-scale visual data. Construction of the CIR model utilizes triplets that
consist of a reference image, modification text describing desired changes, and
a target image that reflects these changes. Effectively training CIR models requires extensive manual annotation to construct high-quality training datasets, which is time-consuming and labor-intensive. To deal
with this problem, this paper proposes a novel triplet synthesis method by
leveraging counterfactual image generation. By controlling visual feature
modifications via counterfactual image generation, our approach automatically
generates diverse training triplets without any manual intervention. This
approach facilitates the creation of larger and more expressive datasets, leading to improved CIR model performance.
|
2501.13969
|
InsTex: Indoor Scenes Stylized Texture Synthesis
|
cs.CV cs.GR cs.LG
|
Generating high-quality textures for 3D scenes is crucial for applications in
interior design, gaming, and augmented/virtual reality (AR/VR). Although recent
advancements in 3D generative models have enhanced content creation,
significant challenges remain in achieving broad generalization and maintaining
style consistency across multiple viewpoints. Current methods, such as 2D
diffusion models adapted for 3D texturing, suffer from lengthy processing times
and visual artifacts, while approaches driven by 3D data often fail to
generalize effectively. To overcome these challenges, we introduce InsTex, a
two-stage architecture designed to generate high-quality, style-consistent
textures for 3D indoor scenes. InsTex utilizes depth-to-image diffusion priors
in a coarse-to-fine pipeline, first generating multi-view images with a
pre-trained 2D diffusion model and subsequently refining the textures for
consistency. Our method supports both textual and visual prompts, achieving
state-of-the-art results in visual quality and quantitative metrics, and
demonstrates its effectiveness across various 3D texturing applications.
|
2501.13970
|
Patch-Based and Non-Patch-Based inputs Comparison into Deep Neural
Models: Application for the Segmentation of Retinal Diseases on Optical
Coherence Tomography Volumes
|
eess.IV cs.CV cs.LG
|
Worldwide, sight loss is commonly caused by retinal diseases, with age-related macular degeneration (AMD) being a notable condition that affects elderly patients. Approximately 170 million people worldwide have been diagnosed with AMD, a figure anticipated to rise to 288 million by 2040. For visualizing retinal layers, optical coherence tomography (OCT) provides the most compelling non-invasive method. Frequent patient visits have increased the
demand for automated analysis of retinal diseases, and deep learning networks
have shown promising results in both image and pixel-level 2D scan
classification. However, when relying solely on 2D data, accuracy may be
impaired, especially when localizing fluid volume diseases. The goal of
automatic techniques is to outperform humans in manually recognizing illnesses
in medical data. In order to further understand the benefit of deep learning
models, we studied the effects of the input size. The dice similarity
coefficient (DSC) metric showed a human performance score of 0.71 for
segmenting various retinal diseases. Yet, the deep models surpassed human performance in segmenting these diseases on medical images. Furthermore, feeding overlapping patches enhanced the performance of the deep models compared to feeding the full image. The highest score for a patch-based model in the DSC
metric was 0.88, compared to 0.71 for the same model with non-patch-based input for SRF fluid segmentation. The objective of this article is to
show a fair comparison between deep learning models in relation to the input
(Patch-Based vs. NonPatch-Based).
|
2501.13971
|
GS-LiDAR: Generating Realistic LiDAR Point Clouds with Panoramic
Gaussian Splatting
|
cs.CV cs.GR eess.IV
|
LiDAR novel view synthesis (NVS) has emerged as a new task within LiDAR
simulation, offering valuable simulated point cloud data from novel viewpoints
to aid in autonomous driving systems. However, existing LiDAR NVS methods
typically rely on neural radiance fields (NeRF) as their 3D representation,
which incurs significant computational costs in both training and rendering.
Moreover, NeRF and its variants are designed for symmetrical scenes, making
them ill-suited for driving scenarios. To address these challenges, we propose
GS-LiDAR, a novel framework for generating realistic LiDAR point clouds with
panoramic Gaussian splatting. Our approach employs 2D Gaussian primitives with
periodic vibration properties, allowing for precise geometric reconstruction of
both static and dynamic elements in driving scenarios. We further introduce a
novel panoramic rendering technique with explicit ray-splat intersection,
guided by panoramic LiDAR supervision. By incorporating intensity and ray-drop
spherical harmonic (SH) coefficients into the Gaussian primitives, we enhance
the realism of the rendered point clouds. Extensive experiments on KITTI-360
and nuScenes demonstrate the superiority of our method in terms of quantitative
metrics, visual quality, as well as training and rendering efficiency.
|
2501.13972
|
Synthetic CT image generation from CBCT: A Systematic Review
|
eess.IV cs.CV
|
The generation of synthetic CT (sCT) images from cone-beam CT (CBCT) data
using deep learning methodologies represents a significant advancement in
radiation oncology. This systematic review, following PRISMA guidelines and
using the PICO model, comprehensively evaluates the literature from 2014 to
2024 on the generation of sCT images for radiation therapy planning in
oncology. A total of 35 relevant studies were identified and analyzed,
revealing the prevalence of deep learning approaches in the generation of sCT.
This review comprehensively covers synthetic CT generation based on CBCT and
proton-based studies. Some of the commonly employed architectures explored are
convolutional neural networks (CNNs), generative adversarial networks (GANs),
transformers, and diffusion models. Evaluation metrics including mean absolute
error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR)
and structural similarity index (SSIM) consistently demonstrate the
comparability of sCT images with gold-standard planning CTs (pCT), indicating
their potential to improve treatment precision and patient outcomes. Challenges
such as field-of-view (FOV) disparities and integration into clinical workflows
are discussed, along with recommendations for future research and
standardization efforts. In general, the findings underscore the promising role
of sCT-based approaches in personalized treatment planning and adaptive
radiation therapy, with potential implications for improved oncology treatment
delivery and patient care.
|
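Several of the evaluation metrics listed in the review (MAE, RMSE, PSNR) can be computed from flattened image arrays as below. The HU-like pixel values are made up for illustration, and SSIM is omitted since it requires windowed local statistics rather than a per-pixel formula.

```python
import math

def mae(a, b):
    # Mean absolute error between synthetic CT and planning CT pixels.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def rmse(a, b):
    # Root mean square error; penalises large deviations more than MAE.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the reference.
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

pct = [100.0, 150.0, 200.0, 250.0]   # flattened reference planning CT (toy)
sct = [102.0, 149.0, 205.0, 248.0]   # flattened synthetic CT (toy)
print(mae(pct, sct), rmse(pct, sct), psnr(pct, sct))
```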
2501.13973
|
A Spatio-temporal Graph Network Allowing Incomplete Trajectory Input for
Pedestrian Trajectory Prediction
|
cs.CV cs.AI cs.LG cs.RO
|
Pedestrian trajectory prediction is important in the research of mobile robot
navigation in environments with pedestrians. Most pedestrian trajectory
prediction algorithms require the input historical trajectories to be complete.
If a pedestrian is unobservable in any past frame, its historical trajectory becomes incomplete, and the algorithm cannot predict its future trajectory. To address this limitation, we propose STGN-IT, a
spatio-temporal graph network allowing incomplete trajectory input, which can
predict the future trajectories of pedestrians with incomplete historical
trajectories. STGN-IT uses the spatio-temporal graph with an additional
encoding method to represent the historical trajectories and observation states
of pedestrians. Moreover, STGN-IT introduces static obstacles in the
environment that may affect the future trajectories as nodes to further improve
the prediction accuracy. A clustering algorithm is also applied in the
construction of spatio-temporal graphs. Experiments on public datasets show that STGN-IT outperforms state-of-the-art algorithms.
|
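One simple way to represent incomplete historical trajectories, in the spirit of the additional encoding of observation states the abstract describes, is to pad missing frames and carry a binary observation mask. This encoding is an assumption for illustration, not the exact STGN-IT scheme.

```python
def encode_trajectory(traj):
    # Replace missing frames (None) with a zero placeholder and record a
    # binary mask marking which frames were actually observed.
    coords, mask = [], []
    for p in traj:
        if p is None:
            coords.append((0.0, 0.0))
            mask.append(0)
        else:
            coords.append(p)
            mask.append(1)
    return coords, mask

# Hypothetical 3-frame history with the middle frame unobserved.
traj = [(0.0, 0.0), None, (2.0, 1.0)]
print(encode_trajectory(traj))
```

Downstream, the mask lets the network distinguish a pedestrian standing at the origin from one that simply was not seen in that frame.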
2501.13974
|
Absolute Governance: A Framework for Synchronization and Certification
of the Corporate Contractual State
|
cs.CR cs.IT math.IT
|
This dissertation addresses the challenge of ensuring transactional integrity
and reducing costs in corporate governance through blockchain technology. We
propose an on-chain methodology for certifying, registering, and querying
institutional transactional status. Our decentralized governance approach
utilizes consensus mechanisms and smart contracts to automate and enforce
business rules. The framework aims to reduce the transaction costs associated
with contractual measurement reports and enhance overall transactional
integrity. We provide a detailed exploration of how blockchain technology can
be effectively harnessed to offer a robust solution to these challenges,
setting the stage for our proposed solution and its potential impact on
corporate governance. The application of the methodology resulted in an average overbilling reduction of 2%.
|
2501.13975
|
3DGS$^2$: Near Second-order Converging 3D Gaussian Splatting
|
cs.CV cs.GR
|
3D Gaussian Splatting (3DGS) has emerged as a mainstream solution for novel
view synthesis and 3D reconstruction. By explicitly encoding a 3D scene using a
collection of Gaussian kernels, 3DGS achieves high-quality rendering with
superior efficiency. As a learning-based approach, 3DGS has typically been trained with the standard stochastic gradient descent (SGD) method, which offers at
most linear convergence. Consequently, training often requires tens of minutes,
even with GPU acceleration. This paper introduces a (near) second-order
convergent training algorithm for 3DGS, leveraging its unique properties. Our
approach is inspired by two key observations. First, the attributes of a
Gaussian kernel contribute independently to the image-space loss, which
endorses isolated and local optimization algorithms. We exploit this by
splitting the optimization at the level of individual kernel attributes,
analytically constructing small-size Newton systems for each parameter group,
and efficiently solving these systems on GPU threads. This achieves Newton-like
convergence per training image without relying on the global Hessian. Second,
kernels exhibit sparse and structured coupling across input images. This
property allows us to effectively utilize spatial information to mitigate
overshoot during stochastic training. Our method converges an order of magnitude faster than standard GPU-based 3DGS training, requiring over $10\times$ fewer iterations while maintaining or surpassing the quality of SGD-based 3DGS reconstructions.
|
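The contrast between Newton-like and first-order convergence that motivates the paper can be seen on a scalar toy loss: a damped Newton step on a parameter group uses the local curvature, while SGD only follows the gradient. The functions below are purely illustrative, not the paper's per-attribute GPU solver.

```python
def newton_step(grad, hess, x, damping=1e-6):
    # Damped Newton update for one parameter group: x_new = x - g / (H + eps).
    return x - grad / (hess + damping)

def sgd_step(grad, x, lr=0.1):
    # Plain gradient descent update for comparison.
    return x - lr * grad

# Toy quadratic loss L(x) = (x - 3)^2, so grad = 2(x - 3) and hess = 2.
x = 0.0
x = newton_step(2 * (x - 3), 2.0, x)   # one curvature-aware step

y = 0.0
for _ in range(10):                     # ten first-order steps
    y = sgd_step(2 * (y - 3), y)

print(x, y)
```

On a quadratic, the single Newton step lands essentially at the minimum, while SGD is still closing the gap geometrically after ten iterations, which mirrors the iteration-count gap the abstract reports.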
2501.13976
|
Towards Safer Social Media Platforms: Scalable and Performant Few-Shot
Harmful Content Moderation Using Large Language Models
|
cs.CL cs.AI cs.CY cs.SI
|
The prevalence of harmful content on social media platforms poses significant
risks to users and society, necessitating more effective and scalable content
moderation strategies. Current approaches rely on human moderators, supervised
classifiers, and large volumes of training data, and often struggle with
scalability, subjectivity, and the dynamic nature of harmful content (e.g.,
violent content, dangerous challenge trends, etc.). To bridge these gaps, we
utilize Large Language Models (LLMs) to undertake few-shot dynamic content
moderation via in-context learning. Through extensive experiments on multiple
LLMs, we demonstrate that our few-shot approaches can outperform existing
proprietary baselines (Perspective and OpenAI Moderation) as well as prior
state-of-the-art few-shot learning methods, in identifying harm. We also
incorporate visual information (video thumbnails) and assess if different
multimodal techniques improve model performance. Our results underscore the
significant benefits of employing LLM based methods for scalable and dynamic
harmful content moderation online.
|
2501.13977
|
Re-ranking Using Large Language Models for Mitigating Exposure to
Harmful Content on Social Media Platforms
|
cs.CL cs.AI cs.CY cs.SI
|
Social media platforms utilize Machine Learning (ML) and Artificial
Intelligence (AI) powered recommendation algorithms to maximize user
engagement, which can result in inadvertent exposure to harmful content.
Current moderation efforts, reliant on classifiers trained with extensive
human-annotated data, struggle with scalability and adapting to new forms of
harm. To address these challenges, we propose a novel re-ranking approach using
Large Language Models (LLMs) in zero-shot and few-shot settings. Our method
dynamically assesses and re-ranks content sequences, effectively mitigating
harmful content exposure without requiring extensive labeled data. Alongside
traditional ranking metrics, we also introduce two new metrics to evaluate the
effectiveness of re-ranking in reducing exposure to harmful content. Through
experiments on three datasets, three models and across three configurations, we
demonstrate that our LLM-based approach significantly outperforms existing
proprietary moderation approaches, offering a scalable and adaptable solution
for harm mitigation.
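The re-ranking idea above can be sketched minimally: given per-item harm scores (in practice produced by an LLM judge), demote harmful items and measure exposure with a position-discounted metric. The paper's two metrics are not specified here; the discounted-exposure formula below is a hypothetical stand-in:

```python
import math

def rerank_by_harm(items, harm_scores):
    """Stable re-rank: least-harmful items first; ties keep feed order."""
    order = sorted(range(len(items)), key=lambda i: harm_scores[i])
    return [items[i] for i in order]

def discounted_exposure(harm_scores):
    """Hypothetical exposure metric: harm weighted by a log-position
    discount, so harmful items near the top of the feed cost more."""
    return sum(h / math.log2(rank + 2) for rank, h in enumerate(harm_scores))

feed = ["a", "b", "c", "d"]
scores = [0.9, 0.1, 0.7, 0.2]          # per-item harm scores in [0, 1]
reranked = rerank_by_harm(feed, scores)
before = discounted_exposure(scores)            # exposure of original feed
after = discounted_exposure(sorted(scores))     # exposure after re-ranking
```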
|
2501.13978
|
Chain of Grounded Objectives: Bridging Process and Goal-oriented
Prompting for Code Generation
|
cs.CL cs.AI cs.SE
|
The use of Large Language Models (LLMs) for code generation has gained
significant attention in recent years. Existing methods often aim to improve
the quality of generated code by incorporating additional contextual
information or guidance into input prompts. Many of these approaches adopt
sequential reasoning strategies, mimicking human-like step-by-step thinking.
However, such strategies may constrain flexibility, as they do not always align
with the structured characteristics of programming languages. This paper
introduces the Chain of Grounded Objectives (CGO), a method that embeds
functional objectives into input prompts to enhance code generation. By
leveraging appropriately structured objectives as input and avoiding explicit
sequential procedures, CGO adapts effectively to the structured nature of
programming tasks. Empirical evaluations demonstrate that CGO effectively
enhances code generation, addressing limitations of existing approaches.
|
2501.13981
|
Enhanced PEC-YOLO for Detecting Improper Safety Gear Wearing Among Power
Line Workers
|
cs.CV eess.IV
|
To address the high risks associated with improper use of safety gear in
complex power line environments, where target occlusion and large variance are
prevalent, this paper proposes an enhanced PEC-YOLO object detection algorithm.
The method integrates deep perception with multi-scale feature fusion,
utilizing PConv and EMA attention mechanisms to enhance feature extraction
efficiency and minimize model complexity. The CPCA attention mechanism is
incorporated into the SPPF module, improving the model's ability to focus on
critical information and enhance detection accuracy, particularly in
challenging conditions. Furthermore, the introduction of the BiFPN neck
architecture optimizes the utilization of low-level and high-level features,
enhancing feature representation through adaptive fusion and context-aware
mechanisms. Experimental results demonstrate that the proposed PEC-YOLO achieves
a 2.7% improvement in detection accuracy compared to YOLOv8s, while reducing
model parameters by 42.58%. Under identical conditions, PEC-YOLO outperforms
other models in detection speed, meeting the stringent accuracy requirements
for safety gear detection on construction sites. This study contributes to the
development of efficient and accurate intelligent monitoring systems for
ensuring worker safety in hazardous environments.
|
2501.13982
|
Attribute-based Visual Reprogramming for Image Classification with CLIP
|
cs.CV cs.LG
|
Visual reprogramming (VR) reuses pre-trained vision models for downstream
image classification tasks by adding trainable noise patterns to inputs. When
applied to vision-language models (e.g., CLIP), existing VR approaches follow
the same pipeline used in vision models (e.g., ResNet, ViT), where ground-truth
class labels are inserted into fixed text templates to guide the optimization
of VR patterns. This label-based approach, however, overlooks the rich
information and diverse attribute-guided textual representations that CLIP can
exploit, which may lead to the misclassification of samples. In this paper, we
propose Attribute-based Visual Reprogramming (AttrVR) for CLIP, utilizing
descriptive attributes (DesAttrs) and distinctive attributes (DistAttrs), which
respectively represent common and unique feature descriptions for different
classes. Besides, as images of the same class may reflect different attributes
after VR, AttrVR iteratively refines patterns using the $k$-nearest DesAttrs
and DistAttrs for each image sample, enabling more dynamic and sample-specific
optimization. Theoretically, AttrVR is shown to reduce intra-class variance and
increase inter-class separation. Empirically, it achieves superior performance
in 12 downstream tasks for both ViT-based and ResNet-based CLIP. The success of
AttrVR facilitates more effective integration of VR from unimodal vision models
into vision-language models. Our code is available at
https://github.com/tmlr-group/AttrVR.
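The per-sample attribute selection described above can be sketched as picking the k attribute embeddings nearest the image embedding. Cosine similarity and the toy 2-D embeddings are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def k_nearest_attrs(img_emb, attr_embs, k):
    """Return indices of the k attribute embeddings closest to the image
    embedding under cosine similarity (a common choice for CLIP features)."""
    img = img_emb / np.linalg.norm(img_emb)
    attrs = attr_embs / np.linalg.norm(attr_embs, axis=1, keepdims=True)
    sims = attrs @ img
    return np.argsort(-sims)[:k]

# Toy 2-D embeddings: attributes 0 and 2 point roughly along the image vector.
image = np.array([1.0, 0.0])
attributes = np.array([[0.9, 0.1],
                       [0.0, 1.0],
                       [1.0, -0.1],
                       [-1.0, 0.0]])
nearest = k_nearest_attrs(image, attributes, k=2)
```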
|
2501.13983
|
AdEval: Alignment-based Dynamic Evaluation to Mitigate Data
Contamination in Large Language Models
|
cs.CL cs.AI
|
As Large Language Models (LLMs) are pretrained on massive-scale corpora, the
issue of data contamination has become increasingly severe, leading to
potential overestimation of model performance during evaluation. To address
this, we propose AdEval (Alignment-based Dynamic Evaluation), a dynamic data
evaluation method aimed at mitigating the impact of data contamination on
evaluation reliability. AdEval extracts key knowledge points and main ideas to
align dynamically generated questions with static data's core concepts. It also
leverages online search to provide detailed explanations of related knowledge
points, thereby creating high-quality evaluation samples with robust knowledge
support. Furthermore, AdEval incorporates mechanisms to control the number and
complexity of questions, enabling dynamic alignment and flexible adjustment.
This ensures that the generated questions align with the complexity of static
data while supporting varied complexity levels. Based on Bloom's taxonomy,
AdEval conducts a multi-dimensional evaluation of LLMs across six cognitive
levels: remembering, understanding, applying, analyzing, evaluating, and
creating. Experimental results on multiple datasets demonstrate that AdEval
effectively reduces the impact of data contamination on evaluation outcomes,
enhancing both the fairness and reliability of the evaluation process.
|
2501.13984
|
Comprehensive Modeling and Question Answering of Cancer Clinical
Practice Guidelines using LLMs
|
cs.CL cs.AI cs.LG
|
The updated recommendations on diagnostic procedures and treatment pathways
for a medical condition are documented as graphical flows in Clinical Practice
Guidelines (CPGs). For effective use of the CPGs in helping medical
professionals in the treatment decision process, it is necessary to fully
capture the guideline knowledge, particularly the contexts and their
relationships in the graph. While several existing works have utilized these
guidelines to create rule bases for Clinical Decision Support Systems, limited
work has been done toward directly capturing the full medical knowledge
contained in CPGs. This work proposes an approach to create a contextually
enriched, faithful digital representation of National Comprehensive Cancer
Network (NCCN) Cancer CPGs in the form of graphs using automated extraction and
node & relationship classification. We also implement semantic enrichment of
the model by using Large Language Models (LLMs) for node classification,
achieving an accuracy of 80.86% and 88.47% with zero-shot learning and few-shot
learning, respectively. Additionally, we introduce a methodology for answering
natural language questions with constraints to guideline text by leveraging
LLMs to extract the relevant subgraph from the guideline knowledge base. By
generating natural language answers based on subgraph paths and semantic
information, we mitigate the risk of incorrect answers and hallucination
associated with LLMs, ensuring factual accuracy in medical domain Question
Answering.
|
2501.13985
|
Pilot: Building the Federated Multimodal Instruction Tuning Framework
|
cs.LG cs.AI cs.CV
|
In this paper, we explore a novel federated multimodal instruction tuning
task (FedMIT), which is significant for collaboratively fine-tuning MLLMs on
different types of multimodal instruction data on distributed devices. To solve
the new task, we propose a federated multimodal instruction tuning
framework (Pilot). Our framework integrates two stages of "adapter on adapter"
into the connector of the vision encoder and the LLM. In stage 1, we extract
task-specific features and client-specific features from visual information. In
stage 2, we build the cross-task Mixture-of-Adapters (CT-MoA) module to perform
cross-task interaction. Each client can not only capture personalized
information of local data and learn task-related multimodal information, but
also learn general knowledge from other tasks. In addition, we introduce an
adaptive parameter aggregation strategy for text training parameters, which
optimizes parameter aggregation by calculating weights based on the Euclidean
distance between parameters, so that parameter aggregation can benefit from
positive effects to the greatest extent while effectively reducing negative
effects. Our framework can collaboratively exploit distributed data from
different local clients to learn cross-task knowledge without being affected by
the task heterogeneity during instruction tuning. The effectiveness of our
method is verified in two different cross-task scenarios.
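The distance-based aggregation described above can be sketched as follows. The abstract only states that weights are computed from Euclidean distances between parameters; the softmax-over-negative-distance mapping and the temperature are illustrative assumptions:

```python
import numpy as np

def aggregate(client_params, temperature=1.0):
    """Aggregate client parameter vectors, down-weighting clients whose
    parameters lie far (in Euclidean distance) from the client mean, so
    outlier updates contribute less to the merged parameters."""
    params = np.stack(client_params)
    center = params.mean(axis=0)
    dists = np.linalg.norm(params - center, axis=1)
    logits = -dists / temperature                 # closer -> larger logit
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights, (weights[:, None] * params).sum(axis=0)

# Two agreeing clients and one outlier: the outlier gets the smallest weight.
clients = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([5.0, 5.0])]
weights, merged = aggregate(clients)
```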
|
2501.13986
|
An Efficient Sparse Kernel Generator for O(3)-Equivariant Deep Networks
|
cs.LG cs.AI
|
Rotation equivariant graph neural networks, i.e., networks designed to
guarantee certain geometric relations between their inputs and outputs, yield
state-of-the-art performance on spatial deep learning tasks. They exhibit high
data efficiency during training and significantly reduced inference time for
interatomic potential calculations compared to classical approaches. Key to
these models is the Clebsch-Gordon (CG) tensor product, a kernel that contracts
two dense feature vectors with a highly structured sparse tensor to produce a
dense output vector. The operation, which may be repeated millions of times for
typical equivariant models, is a costly and inefficient bottleneck. We
introduce a GPU sparse kernel generator for the CG tensor product that provides
significant speedup over the best existing open and closed-source
implementations. Our implementation achieves high performance by carefully
managing GPU shared memory through static analysis at model compile-time,
minimizing reads and writes to global memory. We break the tensor product into
a series of kernels with operands that fit entirely into registers, enabling us
to emit long arithmetic instruction streams that maximize instruction-level
parallelism. By fusing the CG tensor product with a subsequent graph
convolution, we reduce both intermediate storage and global memory traffic over
naive approaches that duplicate input data. We also provide optimized kernels
for the gradient of the CG tensor product and a novel identity for the higher
partial derivatives required to predict interatomic forces. Our fused kernels
offer up to 4.5x speedup for the forward pass and 3x for the backward pass over
NVIDIA cuEquivariance, as well as >10x speedup over the widely-used e3nn
package. We offer up to 5.3x inference-time speedup for the MACE chemistry
foundation model over the original unoptimized version.
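The operation these kernels accelerate — contracting two dense feature vectors with a highly structured sparse tensor — can be written as a naive reference in a few lines. The COO representation and the toy coefficients below are illustrative, not real Clebsch-Gordon values:

```python
import numpy as np

def cg_tensor_product(u, v, sparse_cg, out_dim):
    """Reference (unoptimized) CG tensor product:
    out[k] = sum over nonzeros (i, j, k, c) of c * u[i] * v[j],
    where sparse_cg lists the sparse tensor's (i, j, k, coeff) entries.
    The optimized kernels compute exactly this contraction, but with
    register-resident operands and fused graph convolution."""
    out = np.zeros(out_dim)
    for i, j, k, c in sparse_cg:
        out[k] += c * u[i] * v[j]
    return out

# Toy sparse tensor (structure only; not genuine CG coefficients).
cg = [(0, 0, 0, 1.0), (0, 1, 1, 0.5), (1, 0, 1, -0.5)]
u = np.array([2.0, 3.0])
v = np.array([1.0, 4.0])
out = cg_tensor_product(u, v, cg, out_dim=2)
```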
|
2501.13987
|
OstQuant: Refining Large Language Model Quantization with Orthogonal and
Scaling Transformations for Better Distribution Fitting
|
cs.LG cs.AI
|
Post-training quantization (PTQ) has emerged as a widely adopted technique
for compressing and accelerating Large Language Models (LLMs). The major
challenge in LLM quantization is that uneven and heavy-tailed data
distributions can expand the quantization range, thereby reducing bit precision
for most values. Recent methods attempt to eliminate outliers and balance
inter-channel differences by employing linear transformations; however, they
remain heuristic and often overlook optimizing the data distribution across
the entire quantization space. In this paper, we introduce Quantization Space
Utilization Rate (QSUR), a novel metric that effectively assesses the
quantizability of transformed data by measuring the space utilization of the
data in the quantization space. We complement QSUR with mathematical
derivations that examine the effects and limitations of various
transformations, guiding our development of Orthogonal and Scaling
Transformation-based Quantization (OSTQuant). OSTQuant employs a learnable
equivalent transformation, consisting of an orthogonal transformation and a
scaling transformation, to optimize the distributions of weights and
activations across the entire quantization space. Furthermore, we propose the
KL-Top loss function, designed to mitigate noise during optimization while
retaining richer semantic information within the limited calibration data
imposed by PTQ. OSTQuant outperforms existing work on various LLMs and
benchmarks. In the W4-only setting, it retains 99.5\% of the floating-point
accuracy. In the more challenging W4A4KV4 configuration, OSTQuant reduces the
performance gap by 32\% on the LLaMA-3-8B model compared to state-of-the-art
methods.
\href{https://github.com/BrotherHappy/OSTQuant}{https://github.com/BrotherHappy/OSTQuant}.
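The intuition behind QSUR — heavy-tailed distributions waste most of the quantization range on outliers — can be illustrated with a simplified proxy: the fraction of integer levels the data actually occupies after symmetric uniform quantization. This proxy is an assumption of this sketch, not the paper's QSUR formula:

```python
import numpy as np

def qsur_proxy(x, bits=4):
    """Fraction of available quantization levels the data occupies after
    symmetric uniform quantization (simplified stand-in for QSUR)."""
    levels = 2 ** bits
    qmax = levels // 2 - 1
    scale = np.abs(x).max() / qmax          # outliers inflate the scale
    q = np.clip(np.round(x / scale), -levels // 2, qmax)
    return np.unique(q).size / levels

# A heavy-tailed tensor wastes range on one outlier and uses few levels...
heavy = np.concatenate([np.random.default_rng(0).normal(0, 0.01, 1000), [8.0]])
# ...while a well-spread tensor uses most of them.
uniform = np.linspace(-1, 1, 1000)
```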
|
2501.13988
|
MCRL4OR: Multimodal Contrastive Representation Learning for Off-Road
Environmental Perception
|
cs.RO cs.AI cs.CV
|
Most studies on environmental perception for autonomous vehicles (AVs) focus
on urban traffic environments, where the objects/stuff to be perceived are
mainly from man-made scenes and scalable datasets with dense annotations can be
used to train supervised learning models. By contrast, it is hard to densely
annotate a large-scale off-road driving dataset manually due to the inherently
unstructured nature of off-road environments. In this paper, we propose a
Multimodal Contrastive Representation Learning approach for Off-Road
environmental perception, namely MCRL4OR. This approach aims to jointly learn
three encoders for processing visual images, locomotion states, and control
actions by aligning the locomotion states with the fused features of visual
images and control actions within a contrastive learning framework. The
rationale behind this alignment strategy is that the inertial locomotion state
is the result of taking a certain control action under the current
landform/terrain condition perceived by visual sensors. In experiments, we
pre-train the MCRL4OR with a large-scale off-road driving dataset and adopt the
learned multimodal representations for various downstream perception tasks in
off-road driving scenarios. The superior performance in downstream tasks
demonstrates the advantages of the pre-trained multimodal representations. The
codes can be found in \url{https://github.com/1uciusy/MCRL4OR}.
|
2501.13989
|
FreEformer: Frequency Enhanced Transformer for Multivariate Time Series
Forecasting
|
cs.LG cs.AI
|
This paper presents \textbf{FreEformer}, a simple yet effective model that
leverages a \textbf{Fre}quency \textbf{E}nhanced Trans\textbf{former} for
multivariate time series forecasting. Our work is based on the assumption that
the frequency spectrum provides a global perspective on the composition of
series across various frequencies and is highly suitable for robust
representation learning. Specifically, we first convert time series into the
complex frequency domain using the Discrete Fourier Transform (DFT). The
Transformer architecture is then applied to the frequency spectra to capture
cross-variate dependencies, with the real and imaginary parts processed
independently. However, we observe that the vanilla attention matrix exhibits a
low-rank characteristic, thus limiting representation diversity. This could be
attributed to the inherent sparsity of the frequency domain and the
strong-value-focused nature of Softmax in vanilla attention. To address this,
we enhance the vanilla attention mechanism by introducing an additional
learnable matrix to the original attention matrix, followed by row-wise L1
normalization. Theoretical analysis demonstrates that this enhanced attention
mechanism improves both feature diversity and gradient flow. Extensive
experiments demonstrate that FreEformer consistently outperforms
state-of-the-art models on eighteen real-world benchmarks covering electricity,
traffic, weather, healthcare and finance. Notably, the enhanced attention
mechanism also consistently improves the performance of state-of-the-art
Transformer-based forecasters.
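The enhanced attention described above — vanilla softmax attention plus a learnable matrix, followed by row-wise L1 normalization — can be sketched as follows. Taking the learnable matrix through abs() so rows stay non-negative is an assumption of this sketch:

```python
import numpy as np

def enhanced_attention(scores, learnable):
    """Softmax attention augmented with an additive learnable matrix and
    row-wise L1 renormalization, so each row remains a distribution."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)          # vanilla (low-rank-prone)
    mixed = attn + np.abs(learnable)                  # full-rank additive term
    return mixed / mixed.sum(axis=-1, keepdims=True)  # row-wise L1 norm

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4))
learnable = rng.normal(size=(4, 4))
A = enhanced_attention(scores, learnable)
```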
|
2501.13991
|
CGI: Identifying Conditional Generative Models with Example Images
|
cs.CV cs.AI
|
Generative models have achieved remarkable performance recently, and thus
model hubs have emerged. Existing model hubs typically assume basic text
matching is sufficient to search for models. However, in reality, due to
different abstractions and the large number of models in model hubs, it is not
easy for users to review model descriptions and example images to determine
which model best meets their needs. Therefore, it is necessary to describe model
functionality wisely so that future users can efficiently search for the most
suitable model for their needs. Efforts to address this issue remain limited.
In this paper, we propose Conditional Generative Model Identification (CGI),
which aims to provide an effective way to identify the most suitable model
using user-provided example images rather than requiring users to manually
review a large number of models with example images. To address this problem,
we propose Prompt-Based Model Identification (PMI), which can adequately
describe model functionality and precisely match requirements with
specifications. To evaluate the PMI approach and promote related research, we
provide a benchmark comprising 65 models and 9100 identification tasks.
Extensive experimental and human evaluation results demonstrate that PMI is
effective. For instance, 92% of models are correctly identified with
significantly better FID scores when four example images are provided.
|
2501.13992
|
Dual-Branch HNSW Approach with Skip Bridges and LID-Driven Optimization
|
cs.LG cs.AI
|
The Hierarchical Navigable Small World (HNSW) algorithm is widely used for
approximate nearest neighbor (ANN) search, leveraging the principles of
navigable small-world graphs. However, it faces some limitations. The first is
the local optima problem, which arises from the algorithm's greedy search
strategy, selecting neighbors based solely on proximity at each step. This
often leads to cluster disconnections. The second limitation is that HNSW
frequently fails to achieve logarithmic complexity, particularly in
high-dimensional datasets, due to the exhaustive traversal through each layer.
To address these limitations, we propose a novel algorithm that mitigates local
optima and cluster disconnections while enhancing construction speed and
maintaining inference speed. The first component is a dual-branch HNSW
structure with LID-based insertion mechanisms, enabling traversal from multiple
directions. This improves outlier node capture, enhances cluster connectivity,
accelerates construction speed and reduces the risk of local minima. The second
component incorporates a bridge-building technique that bypasses redundant
intermediate layers, maintaining inference speed and offsetting the additional
computational overhead introduced by the dual-branch structure. Experiments on
various benchmarks and datasets showed that our algorithm outperforms the
original HNSW in both accuracy and speed. We evaluated six datasets across
Computer Vision (CV) and Natural Language Processing (NLP), showing recall
improvements of 18\% in NLP and up to 30\% in CV tasks while reducing the
construction time by up to 20\% and maintaining the inference speed. We did not
observe any trade-offs in our algorithm. Ablation studies revealed that
LID-based insertion had the greatest impact on performance, followed by the
dual-branch structure and bridge-building components.
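The LID quantity driving the insertion mechanism above is commonly estimated by maximum likelihood from nearest-neighbor distances. The paper does not name its estimator; the standard Levina-Bickel MLE below is used as an assumption:

```python
import math

def lid_mle(distances):
    """Maximum-likelihood LID estimate from a point's sorted distances to
    its k nearest neighbors (Levina-Bickel estimator):
    LID = -k / sum_i ln(r_i / r_k)."""
    r_max = distances[-1]
    s = sum(math.log(r / r_max) for r in distances)
    return -len(distances) / s

# Distances doubling at each step give LID = 2 / (3 ln 2).
lid = lid_mle([1.0, 2.0, 4.0, 8.0])
```

High-LID points sit in locally high-dimensional (harder) neighborhoods, which is the kind of signal an LID-based insertion rule can exploit.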
|
2501.13993
|
CAPRAG: A Large Language Model Solution for Customer Service and
Automatic Reporting using Vector and Graph Retrieval-Augmented Generation
|
cs.CL cs.AI cs.IR
|
The introduction of new features and services in the banking sector often
overwhelms customers, creating an opportunity for banks to enhance user
experience through financial chatbots powered by large language models (LLMs).
We initiated an AI agent designed to provide customers with relevant
information about banking services and insights from annual reports. We
proposed a hybrid Customer Analysis Pipeline Retrieval-Augmented Generation
(CAPRAG) that effectively addresses both relationship-based and contextual
queries, thereby improving customer engagement in the digital banking
landscape. To implement this, we developed a processing pipeline to refine text
data, which we utilized in two main frameworks: Vector RAG and Graph RAG. This
dual approach enables us to populate both vector and graph databases with
processed data for efficient retrieval. The Cypher query component is employed
to effectively query the graph database. When a user submits a query, it is
first expanded by a query expansion module before being routed to construct a
final query from the hybrid Knowledge Base (KB). This final query is then sent
to an open-source LLM for response generation. Overall, our innovative
framework, designed for international banks, serves banks' customers in an increasingly
complex digital environment, enhancing clarity and accessibility of
information.
|
2501.13994
|
CSAOT: Cooperative Multi-Agent System for Active Object Tracking
|
cs.CV cs.AI cs.RO
|
Object Tracking is essential for many computer vision applications, such as
autonomous navigation, surveillance, and robotics. Unlike Passive Object
Tracking (POT), which relies on static camera viewpoints to detect and track
objects across consecutive frames, Active Object Tracking (AOT) requires a
controller agent to actively adjust its viewpoint to maintain visual contact
with a moving target in complex environments. Existing AOT solutions are
predominantly single-agent-based, which struggle in dynamic and complex
scenarios due to limited information gathering and processing capabilities,
often resulting in suboptimal decision-making. Alleviating these limitations
necessitates the development of a multi-agent system where different agents
perform distinct roles and collaborate to enhance learning and robustness in
dynamic and complex environments. Although some multi-agent approaches exist
for AOT, they typically rely on external auxiliary agents, which require
additional devices, making them costly. In contrast, we introduce the
Collaborative System for Active Object Tracking (CSAOT), a method that
leverages multi-agent deep reinforcement learning (MADRL) and a Mixture of
Experts (MoE) framework to enable multiple agents to operate on a single
device, thereby improving tracking performance and reducing costs. Our approach
enhances robustness against occlusions and rapid motion while optimizing camera
movements to extend tracking duration. We validated the effectiveness of CSAOT
on various interactive maps with dynamic and stationary obstacles.
|
2501.13996
|
Integrating Persian Lip Reading in Surena-V Humanoid Robot for
Human-Robot Interaction
|
cs.CV cs.RO
|
Lip reading is vital for robots in social settings, improving their ability
to understand human communication. This skill allows them to communicate more
easily in crowded environments, especially in caregiving and customer service
roles. After generating a Persian lip-reading dataset, this study integrates Persian
lip-reading technology into the Surena-V humanoid robot to improve its speech
recognition capabilities. Two complementary methods are explored, an indirect
method using facial landmark tracking and a direct method leveraging
convolutional neural networks (CNNs) and long short-term memory (LSTM)
networks. The indirect method focuses on tracking key facial landmarks,
especially around the lips, to infer movements, while the direct method
processes raw video data for action and speech recognition. The best-performing
model, LSTM, achieved 89\% accuracy and has been successfully implemented into
the Surena-V robot for real-time human-robot interaction. The study highlights
the effectiveness of these methods, particularly in environments where verbal
communication is limited.
|
2501.13997
|
Predictive Learning in Energy-based Models with Attractor Structures
|
cs.LG cs.AI
|
Predictive models are highly advanced in understanding the mechanisms of
brain function. Recent advances in machine learning further underscore the
power of prediction for optimal representation in learning. However, there
remains a gap in creating a biologically plausible model that explains how the
neural system achieves prediction. In this paper, we introduce a framework that
employs an energy-based model (EBM) to capture the nuanced processes of
predicting observation after action within the neural system, encompassing
prediction, learning, and inference. We implement the EBM with a hierarchical
structure and integrate a continuous attractor neural network for memory,
constructing a biologically plausible model. In experimental evaluations, our
model demonstrates efficacy across diverse scenarios. The range of actions
includes eye movement, motion in environments, head turning, and static
observation while the environment changes. Our model not only makes accurate
predictions for environments it was trained on, but also provides reasonable
predictions for unseen environments, matching the performances of machine
learning methods in multiple tasks. We hope that this study contributes to a
deep understanding of how the neural system performs prediction.
|
2501.13999
|
Framework for Progressive Knowledge Fusion in Large Language Models
Through Structured Conceptual Redundancy Analysis
|
cs.CL cs.AI
|
The organization of latent knowledge within large-scale models poses unique
challenges when addressing overlapping representations and optimizing
contextual accuracy. Conceptual redundancies embedded across layers often
result in inefficiencies that affect both computational demands and
task-specific outcomes. A framework was proposed to restructure these
redundancies through advanced clustering techniques and dynamic thresholding,
ensuring that critical semantic relationships are preserved while removing
unnecessary overlaps. Evaluations revealed improved memory efficiency and
faster inference times, alongside better alignment in latent knowledge clusters
that enhanced interpretability. Improvements in error rates and adversarial
robustness suggest that restructuring redundancies has broader implications for
increasing model reliability across diverse applications. Comparative analyses
highlighted reductions in resource consumption and notable gains in
performance, particularly in translation and summarization tasks. Energy
metrics demonstrated significant savings during training phases, further
validating the practicality of the approach for real-world deployments.
Representational fidelity was also enhanced, with latent space evaluations
indicating better cluster alignment and higher semantic consistency. The
methodology bridges a key gap in model optimization through directly addressing
redundancies at the structural level. Its application opens avenues for
scalable, efficient, and contextually aware systems that can adapt to complex,
domain-specific tasks without compromising on performance.
|
2501.14000
|
Local Control Networks (LCNs): Optimizing Flexibility in Neural Network
Data Pattern Capture
|
cs.LG cs.AI
|
The widespread use of Multi-layer perceptrons (MLPs) often relies on a fixed
activation function (e.g., ReLU, Sigmoid, Tanh) for all nodes within the hidden
layers. While effective in many scenarios, this uniformity may limit the
network's ability to capture complex data patterns. We argue that employing the
same activation function at every node is suboptimal and propose leveraging
different activation functions at each node to increase flexibility and
adaptability. To achieve this, we introduce Local Control Networks (LCNs),
which leverage B-spline functions to enable distinct activation curves at each
node. Our mathematical analysis demonstrates the properties and benefits of
LCNs over conventional MLPs. In addition, we demonstrate that more complex
architectures, such as Kolmogorov-Arnold Networks (KANs), are unnecessary in
certain scenarios, and LCNs can be a more efficient alternative. Empirical
experiments on various benchmarks and datasets validate our theoretical
findings. In computer vision tasks, LCNs achieve marginal improvements over
MLPs and outperform KANs by approximately 5\%, while also being more
computationally efficient than KANs. In basic machine learning tasks, LCNs show
a 1\% improvement over MLPs and a 0.6\% improvement over KANs. For symbolic
formula representation tasks, LCNs perform on par with KANs, with both
architectures outperforming MLPs. Our findings suggest that diverse activations
at the node level can lead to improved performance and efficiency.
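The per-node activation idea above can be sketched with each node owning its own control values on shared knots. Using a degree-1 (piecewise-linear) spline via np.interp is a simplification of the paper's B-splines, and the knot grid below is an assumption:

```python
import numpy as np

def lcn_layer(x, knots, node_values):
    """Apply a distinct learnable activation to each node's pre-activation.

    Each node n owns its own control values node_values[n] on shared knots,
    defining a piecewise-linear curve (a degree-1 B-spline).
    x: (batch, nodes) pre-activations.
    """
    out = np.empty_like(x)
    for n in range(x.shape[1]):
        out[:, n] = np.interp(x[:, n], knots, node_values[n])
    return out

knots = np.array([-2.0, 0.0, 2.0])
# Node 0 learns something ReLU-like, node 1 something tanh-like.
values = np.array([[0.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
pre = np.array([[1.0, 1.0],
                [-1.0, -1.0]])
act = lcn_layer(pre, knots, values)
```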
|
2501.14001
|
Enhancing kelp forest detection in remote sensing images using
crowdsourced labels with Mixed Vision Transformers and ConvNeXt segmentation
models
|
cs.CV cs.AI cs.LG
|
Kelp forests, as foundation species, are vital to marine ecosystems,
providing essential food and habitat for numerous organisms. This study
explores the integration of crowdsourced labels with advanced artificial
intelligence models to develop a fast and accurate kelp canopy detection
pipeline using Landsat images. Building on the success of a machine learning
competition, where this approach ranked third and performed consistently well
on both local validation and public and private leaderboards, the research
highlights the effectiveness of combining Mixed Vision Transformers (MIT) with
ConvNeXt models. Training these models on various image sizes significantly
enhanced the accuracy of the ensemble results. U-Net emerged as the best
segmentation architecture, with UpperNet also contributing to the final
ensemble. Key Landsat bands, such as ShortWave InfraRed (SWIR1) and
Near-InfraRed (NIR), were crucial, while altitude data was used in
postprocessing to eliminate false positives on land. The methodology achieved a
high detection rate, accurately identifying about three out of four pixels
containing kelp canopy while keeping false positives low. Despite the medium
resolution of Landsat satellites, their extensive historical coverage makes
them effective for studying kelp forests. This work also underscores the
potential of combining machine learning models with crowdsourced data for
effective and scalable environmental monitoring. All running code for training
all models and inference can be found at
https://github.com/IoannisNasios/Kelp_Forests.
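The altitude postprocessing step above amounts to zeroing kelp predictions on land. A minimal sketch, where the sea-level threshold of 0 is an assumption:

```python
import numpy as np

def mask_land(pred_mask, altitude, land_threshold=0.0):
    """Zero out kelp-canopy predictions wherever altitude indicates land.
    Kelp canopy floats at the sea surface, so any pixel above the assumed
    sea-level threshold is a false positive by construction."""
    cleaned = pred_mask.copy()
    cleaned[altitude > land_threshold] = 0
    return cleaned

pred = np.array([[1, 1],
                 [0, 1]])
alt = np.array([[5.0, -3.0],
                [0.0, -10.0]])  # metres; positive = land
cleaned = mask_land(pred, alt)
```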
|
2501.14002
|
Advancing Math Reasoning in Language Models: The Impact of
Problem-Solving Data, Data Synthesis Methods, and Training Stages
|
cs.CL cs.AI
|
Mathematical reasoning remains a challenging area for large language models
(LLMs), prompting the development of math-specific LLMs such as LLEMMA,
DeepSeekMath, and Qwen2-Math, among others. These models typically follow a
two-stage training paradigm: pre-training with math-related corpora and
post-training with problem datasets for supervised fine-tuning (SFT). Despite
these efforts, the improvements in mathematical reasoning achieved through
continued pre-training (CPT) are often less significant compared to those
obtained via SFT. This study addresses this discrepancy by exploring
alternative strategies during the pre-training phase, focusing on the use of
problem-solving data over general mathematical corpora. We investigate three
primary research questions: (1) Can problem-solving data enhance the model's
mathematical reasoning capabilities more effectively than general mathematical
corpora during CPT? (2) Are synthetic data from the same source equally
effective, and which synthesis methods are most efficient? (3) How do the
capabilities developed from the same problem-solving data differ between the
CPT and SFT stages, and what factors contribute to these differences? Our
findings indicate that problem-solving data significantly enhances the model's
mathematical capabilities compared to general mathematical corpora. We also
identify effective data synthesis methods, demonstrating that the tutorship
amplification synthesis method achieves the best performance. Furthermore,
while SFT facilitates instruction-following abilities, it underperforms
compared to CPT with the same data, which can be partially attributed to its
poor learning capacity for more challenging problem-solving data. These
insights provide valuable guidance for optimizing the mathematical reasoning
capabilities of LLMs, culminating in our development of a powerful mathematical
base model called MathGPT-8B.
|
2501.14003
|
PaMMA-Net: Plasmas magnetic measurement evolution based on data-driven
incremental accumulative prediction
|
physics.plasm-ph cs.AI
|
An accurate evolution model is crucial for effective control and in-depth
study of fusion plasmas. Evolution methods based on physical models often
encounter challenges such as insufficient robustness or excessive computational
costs. Given the proven strong fitting capabilities of deep learning methods
across various fields, including plasma research, this paper introduces a deep
learning-based magnetic measurement evolution method named PaMMA-Net (Plasma
Magnetic Measurements Incremental Accumulative Prediction Network). This
network is capable of evolving magnetic measurements in tokamak discharge
experiments over extended periods or, in conjunction with equilibrium
reconstruction algorithms, evolving macroscopic parameters such as plasma
shape. Leveraging an incremental prediction approach and data augmentation
techniques tailored for magnetic measurements, PaMMA-Net achieves superior
evolution results compared to existing studies. The tests conducted on real
experimental data from EAST validate the high generalization capability of the
proposed method.
|
2501.14004
|
ME-CPT: Multi-Task Enhanced Cross-Temporal Point Transformer for Urban
3D Change Detection
|
cs.CV cs.AI
|
The point clouds collected by the Airborne Laser Scanning (ALS) system
provide accurate 3D information of urban land covers. By utilizing
multi-temporal ALS point clouds, semantic changes in urban areas can be
captured, demonstrating significant potential in urban planning, emergency
management, and infrastructure maintenance. Existing 3D change detection
methods struggle to efficiently extract multi-class semantic information and
change features, still facing the following challenges: (1) the difficulty of
accurately modeling the spatial relationships of cross-temporal point clouds for
effective change feature extraction; (2) class imbalance of change samples
which hinders distinguishability of semantic features; (3) the lack of
real-world datasets for 3D semantic change detection. To resolve these
challenges, we propose the Multi-task Enhanced Cross-temporal Point Transformer
(ME-CPT) network. ME-CPT establishes spatiotemporal correspondences between
point clouds across different epochs and employs attention mechanisms to jointly
extract semantic change features, facilitating information exchange and change
comparison. Additionally, we incorporate a semantic segmentation task and,
through a multi-task training strategy, further enhance the
distinguishability of semantic features, reducing the impact of class imbalance
in change types. Moreover, we release a 22.5 $km^2$ 3D semantic change
detection dataset, offering diverse scenes for comprehensive evaluation.
Experiments on multiple datasets show that the proposed ME-CPT achieves
superior performance compared to existing state-of-the-art methods. The source
code and dataset will be released upon acceptance at
https://github.com/zhangluqi0209/ME-CPT.
|
2501.14005
|
Device-aware Optical Adversarial Attack for a Portable Projector-camera
System
|
cs.CV cs.AI
|
Deep-learning-based face recognition (FR) systems are susceptible to
adversarial examples in both digital and physical domains. Physical attacks
present a greater threat to deployed systems as adversaries can easily access
the input channel, allowing them to provide malicious inputs to impersonate a
victim. This paper addresses the limitations of existing projector-camera-based
adversarial light attacks in practical FR setups. By incorporating device-aware
adaptations into the digital attack algorithm, such as resolution-aware and
color-aware adjustments, we mitigate the degradation from digital to physical
domains. Experimental validation showcases the efficacy of our proposed
algorithm against real and spoof adversaries, achieving high physical
similarity scores in FR models and state-of-the-art commercial systems. On
average, there is only a 14% reduction in scores from digital to physical
attacks, with high attack success rate in both white- and black-box scenarios.
|
2501.14006
|
Asymmetrical Latent Representation for Individual Treatment Effect
Modeling
|
cs.LG cs.AI
|
Conditional Average Treatment Effect (CATE) estimation, at the heart of
counterfactual reasoning, is a crucial challenge for causal modeling both
theoretically and applicatively, in domains such as healthcare, sociology, or
advertising. Borrowing domain adaptation principles, a popular design maps the
sample representation to a latent space that balances control and treated
populations while enabling the prediction of the potential outcomes. This paper
presents a new CATE estimation approach based on the asymmetrical search for
two latent spaces called Asymmetrical Latent Representation for Individual
Treatment Effect (ALRITE), where the two latent spaces are respectively
intended to optimize the counterfactual prediction accuracy on the control and
the treated samples. Under moderate assumptions, ALRITE admits an upper bound
on the precision of the estimation of heterogeneous effects (PEHE), and the
approach is empirically validated, comparing favorably to the state-of-the-art.
|
2501.14007
|
Adaptive Genetic Algorithms for Pulse-Level Quantum Error Mitigation
|
quant-ph cs.AI cs.AR
|
Noise remains a fundamental challenge in quantum computing, significantly
affecting pulse fidelity and overall circuit performance. This paper introduces
an adaptive algorithm for pulse-level quantum error mitigation, designed to
enhance fidelity by dynamically responding to noise conditions without
modifying circuit gates. By targeting pulse parameters directly, this method
reduces the impact of various noise sources, improving algorithm resilience in
quantum circuits. We show the latter by applying our protocol to Grover's and
Deutsch-Jozsa algorithms. Experimental results show that this pulse-level
strategy provides a flexible and efficient solution for increasing fidelity
during the noisy execution of quantum circuits. Our work contributes to
advancements in error mitigation techniques, essential for robust quantum
computing.
|
2501.14009
|
Scalable and Explainable Verification of Image-based Neural Network
Controllers for Autonomous Vehicles
|
cs.LG cs.AI cs.SY eess.SY
|
Existing formal verification methods for image-based neural network
controllers in autonomous vehicles often struggle with high-dimensional inputs,
computational inefficiency, and a lack of explainability. These challenges make
it difficult to ensure safety and reliability, as processing high-dimensional
image data is computationally intensive and neural networks are typically
treated as black boxes. To address these issues, we propose SEVIN
(Scalable and Explainable Verification of Image-Based Neural Network
Controllers), a framework that leverages a Variational Autoencoder (VAE) to
encode high-dimensional images into a lower-dimensional, explainable latent
space. By annotating latent variables with corresponding control actions, we
generate convex polytopes that serve as structured input spaces for
verification, significantly reducing computational complexity and enhancing
scalability. Integrating the VAE's decoder with the neural network controller
allows for formal and robustness verification using these explainable
polytopes. Our approach also incorporates robustness verification under
real-world perturbations by augmenting the dataset and retraining the VAE to
capture environmental variations. Experimental results demonstrate that SEVIN
achieves efficient and scalable verification while providing explainable
insights into controller behavior, bridging the gap between formal verification
techniques and practical applications in safety-critical systems.
|
2501.14011
|
QuanTaxo: A Quantum Approach to Self-Supervised Taxonomy Expansion
|
cs.SI cs.CL
|
A taxonomy is a hierarchical graph containing knowledge to provide valuable
insights for various web applications. Online retail organizations like
Microsoft and Amazon utilize taxonomies to improve product recommendations and
optimize advertisement by enhancing query interpretation. However, the manual
construction of taxonomies requires significant human effort. As web content
continues to expand at an unprecedented pace, existing taxonomies risk becoming
outdated, struggling to incorporate new and emerging information effectively.
As a consequence, there is a growing need for dynamic taxonomy expansion to
keep them relevant and up-to-date. Existing taxonomy expansion methods often
rely on classical word embeddings to represent entities. However, these
embeddings fall short in capturing hierarchical polysemy, where an entity's
meaning can vary based on its position in the hierarchy and its surrounding
context. To address this challenge, we introduce QuanTaxo, an innovative
quantum-inspired framework for taxonomy expansion. QuanTaxo encodes entity
representations in quantum space, effectively modeling hierarchical polysemy by
leveraging the principles of Hilbert space to capture interference effects
between entities, yielding richer and more nuanced representations.
Comprehensive experiments on four real-world benchmark datasets show that
QuanTaxo significantly outperforms classical embedding models, achieving
substantial improvements of 18.45% in accuracy, 20.5% in Mean Reciprocal Rank,
and 17.87% in Wu & Palmer metrics across eight classical embedding-based
baselines. We further highlight the superiority of QuanTaxo through extensive
ablation and case studies.
|
2501.14012
|
Transfer Learning of Surrogate Models via Domain Affine Transformation
Across Synthetic and Real-World Benchmarks
|
cs.LG cs.AI
|
Surrogate models are frequently employed as efficient substitutes for the
costly execution of real-world processes. However, constructing a high-quality
surrogate model often demands extensive data acquisition. A solution to this
issue is to transfer pre-trained surrogate models to new tasks, provided that
certain invariances exist between tasks. This study focuses on transferring
non-differentiable surrogate models (e.g., random forest) from a source
function to a target function, where we assume their domains are related by an
unknown affine transformation, using only a limited amount of transfer data
points evaluated on the target. Previous research attempts to tackle this
challenge for differentiable models, e.g., Gaussian process regression, which
minimizes the empirical loss on the transfer data by tuning the affine
transformations. In this paper, we extend the previous work to the random
forest model and assess its effectiveness on a widely-used artificial problem
set - Black-Box Optimization Benchmark (BBOB) testbed, and on four real-world
transfer learning problems. The results highlight the significant practical
advantages of the proposed method, particularly in reducing both the data
requirements and computational costs of training surrogate models for complex
real-world scenarios.
|
2501.14013
|
Leveraging Multiphase CT for Quality Enhancement of Portal Venous CT:
Utility for Pancreas Segmentation
|
eess.IV cs.AI cs.CV
|
Multiphase CT studies are routinely obtained in clinical practice for
diagnosis and management of various diseases, such as cancer. However, the CT
studies can be acquired with low radiation doses, different scanners, and are
frequently affected by motion and metal artifacts. Prior approaches have
targeted the quality improvement of one specific CT phase (e.g., non-contrast
CT). In this work, we hypothesized that leveraging multiple CT phases for the
quality enhancement of one phase may prove advantageous for downstream tasks,
such as segmentation. A 3D progressive fusion and non-local (PFNL) network was
developed. It was trained with three degraded (low-quality) phases
(non-contrast, arterial, and portal venous) to enhance the quality of the
portal venous phase. Then, the effect of scan quality enhancement was evaluated
using a proxy task of pancreas segmentation, which is useful for tracking
pancreatic cancer. The proposed approach improved the pancreas segmentation by
3% over the corresponding low-quality CT scan. To the best of our knowledge, we
are the first to harness multiphase CT for scan quality enhancement and
improved pancreas segmentation.
|
2501.14014
|
INDIGO+: A Unified INN-Guided Probabilistic Diffusion Algorithm for
Blind and Non-Blind Image Restoration
|
cs.CV eess.IV
|
Generative diffusion models are becoming one of the most popular priors in
image restoration (IR) tasks due to their remarkable ability to generate
realistic natural images. Despite achieving satisfactory results, IR methods
based on diffusion models present several limitations. First of all, most
non-blind approaches require an analytical expression of the degradation model
to guide the sampling process. Secondly, most existing blind approaches rely on
families of pre-defined degradation models for training their deep networks.
The above issues limit the flexibility of these approaches and so their ability
to handle real-world degradation tasks. In this paper, we propose a novel
INN-guided probabilistic diffusion algorithm for non-blind and blind image
restoration, namely INDIGO and BlindINDIGO, which combines the merits of the
perfect reconstruction property of invertible neural networks (INN) with the
strong generative capabilities of pre-trained diffusion models. Specifically,
we train the forward process of the INN to simulate an arbitrary degradation
process and use the inverse to obtain an intermediate image that we use to
guide the reverse diffusion sampling process through a gradient step. We also
introduce an initialization strategy to further improve the performance and
inference speed of our algorithm. Experiments demonstrate that our algorithm
obtains competitive results compared with recently leading methods both
quantitatively and visually on synthetic and real-world low-quality images.
|
2501.14035
|
Human-Alignment Influences the Utility of AI-assisted Decision Making
|
cs.AI
|
Whenever an AI model is used to predict a relevant (binary) outcome in
AI-assisted decision making, it is widely agreed that, together with each
prediction, the model should provide an AI confidence value. However, it has
been unclear why decision makers often have difficulty developing a good
sense of when to trust a prediction using AI confidence values. Very recently,
Corvelo Benz and Gomez Rodriguez have argued that, for rational decision
makers, the utility of AI-assisted decision making is inherently bounded by the
degree of alignment between the AI confidence values and the decision maker's
confidence on their own predictions. In this work, we empirically investigate
to what extent the degree of alignment actually influences the utility of
AI-assisted decision making. To this end, we design and run a large-scale human
subject study (n=703) where participants solve a simple decision making task -
an online card game - assisted by an AI model with a steerable degree of
alignment. Our results show a positive association between the degree of
alignment and the utility of AI-assisted decision making. In addition, our
results also show that post-processing the AI confidence values to achieve
multicalibration with respect to the participants' confidence on their own
predictions increases both the degree of alignment and the utility of
AI-assisted decision making.
|
2501.14036
|
Efficient Precision Control in Object Detection Models for Enhanced and
Reliable Ovarian Follicle Counting
|
cs.LG
|
Image analysis is a key tool for describing the detailed mechanisms of
folliculogenesis, such as evaluating the quantity of mouse Primordial ovarian
Follicles (PMF) in the ovarian reserve. The development of high-resolution
virtual slide scanners offers the possibility of quantifying, robustifying and
accelerating the histopathological procedure. A major challenge for machine
learning is to control the precision of predictions while enabling a high
recall, in order to provide reproducibility. We use a multiple testing
procedure that improves upon the standard Precision-Recall trade-off by
giving probabilistic guarantees on the precision. In addition, we
significantly improve the overall performance of the
models (increase of F1-score) by selecting the decision threshold using
contextual biological information or using an auxiliary model. As it is
model-agnostic, this contextual selection procedure paves the way to the
development of a strategy that can improve the performance of any model without
the need to retrain it.
|
2501.14037
|
Leveraging Large Language Models to Analyze Emotional and Contextual
Drivers of Teen Substance Use in Online Discussions
|
cs.CL
|
Adolescence is a critical stage often linked to risky behaviors, including
substance use, with significant developmental and public health implications.
Social media provides a lens into adolescent self-expression, but interpreting
emotional and contextual signals remains complex. This study applies Large
Language Models (LLMs) to analyze adolescents' social media posts, uncovering
emotional patterns (e.g., sadness, guilt, fear, joy) and contextual factors
(e.g., family, peers, school) related to substance use. Heatmap and machine
learning analyses identified key predictors of substance use-related posts.
Negative emotions like sadness and guilt were significantly more frequent in
substance use contexts, with guilt acting as a protective factor, while shame
and peer influence heightened substance use risk. Joy was more common in
non-substance use discussions. Peer influence correlated strongly with sadness,
fear, and disgust, while family and school environments aligned with
non-substance use. Findings underscore the importance of addressing emotional
vulnerabilities and contextual influences, suggesting that collaborative
interventions involving families, schools, and communities can reduce risk
factors and foster healthier adolescent development.
|
2501.14038
|
Implicit Neural Surface Deformation with Explicit Velocity Fields
|
cs.CV
|
In this work, we introduce the first unsupervised method that simultaneously
predicts time-varying neural implicit surfaces and deformations between pairs
of point clouds. We propose to model the point movement using an explicit
velocity field and directly deform a time-varying implicit field using the
modified level-set equation. This equation utilizes an iso-surface evolution
with Eikonal constraints in a compact formulation, ensuring the integrity of
the signed distance field. By applying a smooth, volume-preserving constraint
to the velocity field, our method successfully recovers physically plausible
intermediate shapes. Our method is able to handle both rigid and non-rigid
deformations without any intermediate shape supervision. Our experimental
results demonstrate that our method significantly outperforms existing works,
delivering superior results in both quality and efficiency.
|
2501.14046
|
LLM-guided Instance-level Image Manipulation with Diffusion U-Net
Cross-Attention Maps
|
cs.CV
|
The advancement of text-to-image synthesis has introduced powerful generative
models capable of creating realistic images from textual prompts. However,
precise control over image attributes remains challenging, especially at the
instance level. While existing methods offer some control through fine-tuning
or auxiliary information, they often face limitations in flexibility and
accuracy. To address these challenges, we propose a pipeline leveraging Large
Language Models (LLMs), open-vocabulary detectors, cross-attention maps and
intermediate activations of diffusion U-Net for instance-level image
manipulation. Our method detects objects mentioned in the prompt and present in
the generated image, enabling precise manipulation without extensive training
or input masks. By incorporating cross-attention maps, our approach ensures
coherence in manipulated images while controlling object positions. Our method
enables precise manipulations at the instance level without fine-tuning or
auxiliary information such as masks or bounding boxes. Code is available at
https://github.com/Palandr123/DiffusionU-NetLLM
|
2501.14048
|
SIDDA: SInkhorn Dynamic Domain Adaptation for Image Classification with
Equivariant Neural Networks
|
cs.LG astro-ph.GA cs.AI cs.CV
|
Modern neural networks (NNs) often do not generalize well in the presence of
a "covariate shift"; that is, in situations where the training and test data
distributions differ, but the conditional distribution of classification labels
remains unchanged. In such cases, NN generalization can be reduced to a problem
of learning more domain-invariant features. Domain adaptation (DA) methods
include a range of techniques aimed at achieving this; however, these methods
have struggled with the need for extensive hyperparameter tuning, which then
incurs significant computational costs. In this work, we introduce SIDDA, an
out-of-the-box DA training algorithm built upon the Sinkhorn divergence, that
can achieve effective domain alignment with minimal hyperparameter tuning and
computational overhead. We demonstrate the efficacy of our method on multiple
simulated and real datasets of varying complexity, including simple shapes,
handwritten digits, and real astronomical observations. SIDDA is compatible
with a variety of NN architectures, and it works particularly well in improving
classification accuracy and model calibration when paired with equivariant
neural networks (ENNs). We find that SIDDA enhances the generalization
capabilities of NNs, achieving up to a $\approx40\%$ improvement in
classification accuracy on unlabeled target data. We also study the efficacy of
DA on ENNs with respect to the varying group orders of the dihedral group
$D_N$, and find that the model performance improves as the degree of
equivariance increases. Finally, we find that SIDDA enhances model calibration
on both source and target data--achieving over an order of magnitude
improvement in the ECE and Brier score. SIDDA's versatility, combined with its
automated approach to domain alignment, has the potential to advance
multi-dataset studies by enabling the development of highly generalizable
models.
|
2501.14050
|
GraphRAG under Fire
|
cs.LG cs.AI cs.CR
|
GraphRAG advances retrieval-augmented generation (RAG) by structuring
external knowledge as multi-scale knowledge graphs, enabling language models to
integrate both broad context and granular details in their reasoning. While
GraphRAG has demonstrated success across domains, its security implications
remain largely unexplored. To bridge this gap, this work examines GraphRAG's
vulnerability to poisoning attacks, uncovering an intriguing security paradox:
compared to conventional RAG, GraphRAG's graph-based indexing and retrieval
enhance resilience against simple poisoning attacks; meanwhile, the same
features also create new attack surfaces. We present GRAGPoison, a novel attack
that exploits shared relations in the knowledge graph to craft poisoning text
capable of compromising multiple queries simultaneously. GRAGPoison employs
three key strategies: i) relation injection to introduce false knowledge, ii)
relation enhancement to amplify poisoning influence, and iii) narrative
generation to embed malicious content within coherent text. Empirical
evaluation across diverse datasets and models shows that GRAGPoison
substantially outperforms existing attacks in terms of effectiveness (up to 98%
success rate) and scalability (using less than 68% poisoning text). We also
explore potential defensive measures and their limitations, identifying
promising directions for future research.
|
2501.14051
|
Revisiting CLIP: Efficient Alignment of 3D MRI and Tabular Data using
Domain-Specific Foundation Models
|
cs.CV cs.AI cs.LG
|
Multi-modal models require aligned, shared embedding spaces. However, common
CLIP-based approaches require large numbers of samples and do not natively support
3D or tabular data, both of which are crucial in the medical domain. To address
these issues, we revisit CLIP-style alignment by training a domain-specific 3D
foundation model as an image encoder and demonstrate that modality alignment is
feasible with only 62 MRI scans. Our approach is enabled by a simple embedding
accumulation strategy required for training in 3D, which scales the amount of
negative pairs across batches in order to stabilize training. We perform a
thorough evaluation of various design choices, including the choice of backbone
and loss functions, and evaluate the proposed methodology on zero-shot
classification and image-retrieval tasks. While zero-shot image-retrieval
remains challenging, zero-shot classification results demonstrate that the
proposed approach can meaningfully align the representations of 3D MRI with
tabular data.
|
2501.14053
|
The Redundancy of Non-Singular Channel Simulation
|
cs.IT math.IT
|
Channel simulation is an alternative to quantization and entropy coding for
performing lossy source coding. Recently, channel simulation has gained
significant traction in both the machine learning and information theory
communities, as it integrates better with machine learning-based data
compression algorithms and has better rate-distortion-perception properties
than quantization. As the practical importance of channel simulation increases,
it is vital to understand its fundamental limitations. Recently, Sriramu and
Wagner provided an almost complete characterisation of the redundancy of
channel simulation algorithms. In this paper, we complete this
characterisation. First, we significantly extend a result of Li and El Gamal,
and show that the redundancy of any instance of a channel simulation problem is
lower bounded by the channel simulation divergence. Second, we give two proofs
that the asymptotic redundancy of simulating iid non-singular channels is
lower-bounded by $1/2$: one using a direct approach based on the asymptotic
expansion of the channel simulation divergence and one using large deviations
theory.
|
2501.14056
|
Prior Knowledge Injection into Deep Learning Models Predicting Gene
Expression from Whole Slide Images
|
cs.CV
|
Cancer diagnosis and prognosis primarily depend on clinical parameters such
as age and tumor grade, and are increasingly complemented by molecular data,
such as gene expression, from tumor sequencing. However, sequencing is costly
and delays oncology workflows. Recent advances in Deep Learning make it possible to
predict molecular information from morphological features within Whole Slide
Images (WSIs), offering a cost-effective proxy of the molecular markers. While
promising, current methods lack the robustness to fully replace direct
sequencing. Here we aim to improve existing methods by introducing a
model-agnostic framework that allows the injection of prior knowledge on gene-gene
interactions into Deep Learning architectures, thereby increasing accuracy and
robustness. We design the framework to be generic and flexibly adaptable to a
wide range of architectures. In a case study on breast cancer, our strategy
leads to an average increase of 983 significant genes (out of 25,761) across
all 18 experiments, with 14 generalizing to an increase on an independent
dataset. Our findings reveal a high potential for injection of prior knowledge
to increase gene expression prediction performance from WSIs across a wide
range of architectures.
|
2501.14064
|
Switched Feedback for the Multiple-Access Channel
|
cs.IT math.IT
|
A mechanism called switched feedback is introduced; under switched feedback,
each channel output goes forward to the receiver(s) or backwards to the
transmitter(s) but never both. By studying the capacity of the Multiple Access
Channel (MAC) with switched feedback, this work investigates the potential
benefits of feedback in the MAC and explores strategies for maximizing that
benefit under reliable and unreliable feedback scenarios. The study is
motivated by an exploration of the tradeoffs between cooperation and
transmission in the context of communication systems. Results include upper and
lower bounds on the capacity region of the MAC with switched feedback.
|
2501.14066
|
Efficient 2D CT Foundation Model for Contrast Phase Classification
|
eess.IV cs.CV
|
Purpose: The purpose of this study is to harness the efficiency of a 2D
foundation model to develop a robust phase classifier that is resilient to
domain shifts.
Materials and Methods: This retrospective study utilized three public
datasets from separate institutions. A 2D foundation model was trained on the
DeepLesion dataset (mean age: 51.2, s.d.: 17.6; 2398 males) to generate
embeddings from 2D CT slices for downstream contrast phase classification. The
classifier was trained on the VinDr Multiphase dataset and externally validated
on the WAW-TACE dataset. The 2D model was also compared to three 3D supervised
models.
Results: On the VinDr dataset (146 male, 63 female, 56 unidentified), the
model achieved near-perfect AUROC scores and F1 scores of 99.2%, 94.2%, and
93.1% for non-contrast, arterial, and venous phases, respectively. The 'Other'
category scored lower (F1: 73.4%) due to combining multiple contrast phases
into one class. On the WAW-TACE dataset (mean age: 66.1, s.d.: 10.0; 185
males), the model showed strong performance with AUROCs of 91.0% and 85.6%, and
F1 scores of 87.3% and 74.1% for non-contrast and arterial phases. Venous phase
performance was lower, with AUROC and F1 scores of 81.7% and 70.2%
respectively, due to label mismatches. Compared to 3D supervised models, the
approach trained faster, performed as well or better, and showed greater
robustness to domain shifts.
Conclusion: The robustness of the 2D foundation model may prove
useful for automation of hanging protocols and data orchestration for clinical
deployment of AI algorithms.
|
2501.14070
|
Expanding on the BRIAR Dataset: A Comprehensive Whole Body Biometric
Recognition Resource at Extreme Distances and Real-World Scenarios
(Collections 1-4)
|
cs.CV cs.AI cs.LG
|
The state-of-the-art in biometric recognition algorithms and operational
systems has advanced quickly in recent years providing high accuracy and
robustness in more challenging collection environments and consumer
applications. However, the technology still suffers greatly when applied to
non-conventional settings such as those seen when performing identification at
extreme distances or from elevated cameras on buildings or mounted to UAVs.
This paper summarizes an extension to the largest dataset currently focused on
addressing these operational challenges, and describes its composition as well
as methodologies of collection, curation, and annotation.
|
2501.14073
|
LLMs are Vulnerable to Malicious Prompts Disguised as Scientific
Language
|
cs.CL
|
As large language models (LLMs) have been deployed in various real-world
settings, concerns about the harm they may propagate have grown. Various
jailbreaking techniques have been developed to expose the vulnerabilities of
these models and improve their safety. This work reveals that many
state-of-the-art LLMs are vulnerable to malicious requests hidden behind
scientific language. Specifically, our experiments with GPT4o, GPT4o-mini,
GPT-4, LLama3-405B-Instruct, Llama3-70B-Instruct, Cohere, Gemini models
demonstrate that, the models' biases and toxicity substantially increase when
prompted with requests that deliberately misinterpret social science and
psychological studies as evidence supporting the benefits of stereotypical
biases. Alarmingly, these models can also be manipulated to generate fabricated
scientific arguments claiming that biases are beneficial, which can be used by
ill-intended actors to systematically jailbreak these strong LLMs. Our analysis
studies various factors that contribute to the models' vulnerabilities to
malicious requests in academic language. Mentioning author names and venues
enhances the persuasiveness of models, and the bias scores increase as
dialogues progress. Our findings call for a more careful investigation on the
use of scientific data for training LLMs.
|
2501.14079
|
Enhancing Biomedical Relation Extraction with Directionality
|
cs.CL
|
Biological relation networks contain rich information for understanding the
biological mechanisms behind the relationship of entities such as genes,
proteins, diseases, and chemicals. The vast growth of biomedical literature
poses significant challenges in updating the network knowledge. The recent
Biomedical Relation Extraction Dataset (BioRED) provides valuable manual
annotations, facilitating the development of machine-learning and pre-trained
language model approaches for automatically identifying novel document-level
(inter-sentence context) relationships. Nonetheless, its annotations lack
directionality (subject/object) for the entity roles, essential for studying
complex biological networks. Herein we annotate the entity roles of the
relationships in the BioRED corpus and subsequently propose a novel multi-task
language model with soft-prompt learning to jointly identify the relationship,
novel findings, and entity roles. Our results include an enriched BioRED
corpus with 10,864 directionality annotations. Moreover, our proposed method
outperforms existing large language models such as the state-of-the-art GPT-4
and Llama-3 on two benchmarking tasks. Our source code and dataset are
available at https://github.com/ncbi-nlp/BioREDirect.
|
2501.14081
|
Single-Letter Characterization of the Mismatched Distortion-Rate
Function
|
cs.IT math.IT
|
The mismatched distortion-rate problem has remained open since its
formulation by Lapidoth in 1997. In this paper, we characterize the mismatched
distortion-rate function. Our single-letter solution highlights the adequate
conditional distributions for the encoder and the decoder. The achievability
result relies on a time-sharing argument that allows us to convexify the upper
bound of Lapidoth. We show that it is sufficient to consider two regimes, one
with a large rate and another one with a small rate. Our main contribution is
the converse proof. Supposing that the encoder selects a single-letter
conditional distribution distinct from the one in the solution, we construct an
encoding strategy that leads to the same expected cost for both encoder and
decoder. This ensures that the encoder cannot gain by changing the
single-letter conditional distribution. This argument relies on a careful
identification of the sequence of auxiliary random variables. By building on
Caratheodory's Theorem, we show that the cardinality of the auxiliary random
variables equals that of the source alphabet plus three.
|
2501.14082
|
Communicating Activations Between Language Model Agents
|
cs.CL cs.AI cs.LG
|
Communication between multiple language model (LM) agents has been shown to
scale up the reasoning ability of LMs. While natural language has been the
dominant medium for inter-LM communication, it is not obvious this should be
the standard: not only does natural language communication incur high inference
costs that scale quickly with the number of both agents and messages, but also
the decoding process abstracts away too much rich information that could be
otherwise accessed from the internal activations. In this work, we propose a
simple technique whereby LMs communicate via activations; concretely, we pause
an LM $\textit{B}$'s computation at an intermediate layer, combine its current
activation with another LM $\textit{A}$'s intermediate activation via some
function $\textit{f}$, then pass $\textit{f}$'s output into the next layer of
$\textit{B}$ and continue the forward pass till decoding is complete. This
approach scales up LMs on new tasks with zero additional parameters and data,
and saves a substantial amount of compute over natural language communication.
We test our method with various functional forms $\textit{f}$ on two
experimental setups--multi-player coordination games and reasoning
benchmarks--and find that it achieves up to $27.0\%$ improvement over natural
language communication across datasets with $<$$1/4$ the compute, illustrating
the superiority and robustness of activations as an alternative "language" for
communication between LMs.
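The pause-combine-resume scheme can be sketched with toy feed-forward stand-ins for the two LMs; the tiny two-layer MLPs, the weight names, and the element-wise averaging choice of f below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical stand-ins for two language models: tiny two-layer MLPs.
# W*1 maps input -> hidden activation; W*2 maps hidden -> output logits.
d_in, d_hid, d_out = 8, 16, 4
WA1, WA2 = rng.normal(size=(d_in, d_hid)), rng.normal(size=(d_hid, d_out))
WB1, WB2 = rng.normal(size=(d_in, d_hid)), rng.normal(size=(d_hid, d_out))

def forward_with_activation_sharing(x, f):
    """Run A to its intermediate layer, pause B at the same depth,
    merge the two hidden activations with f, then finish B's pass."""
    h_a = relu(x @ WA1)          # A's intermediate activation
    h_b = relu(x @ WB1)          # B's forward pass, paused mid-way
    h_merged = f(h_a, h_b)       # communicate via activations, not text
    return h_merged @ WB2        # resume B's remaining layer(s)

# One simple choice of f: element-wise averaging of the two activations.
x = rng.normal(size=(d_in,))
logits = forward_with_activation_sharing(x, lambda a, b: 0.5 * (a + b))
print(logits.shape)  # (4,)
```

Because f operates on hidden states, no extra parameters are introduced when f is parameter-free, matching the zero-additional-parameter claim.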
|
2501.14084
|
The Role of Generative AI in Software Student CollaborAItion
|
cs.SE cs.AI cs.CY cs.HC
|
Collaboration is a crucial part of computing education. The increase in AI
capabilities over the last couple of years is bound to profoundly affect all
aspects of systems and software engineering, including collaboration. In this
position paper, we consider a scenario where AI agents would be able to take on
any role in collaborative processes in computing education. We outline these
roles, the activities and group dynamics that software development currently
includes, and discuss if and in what way AI could facilitate these roles and
activities. The goal of our work is to envision and critically examine
potential futures. We present scenarios suggesting how AI can be integrated
into existing collaborations. These are contrasted by design fictions that help
demonstrate the new possibilities and challenges for computing education in the
AI era.
|
2501.14090
|
Making Reliable and Flexible Decisions in Long-tailed Classification
|
cs.LG stat.ML
|
Long-tailed classification is challenging due to its heavy imbalance in class
probabilities. While existing methods often focus on overall accuracy or
accuracy for tail classes, they overlook a critical aspect: certain types of
errors can carry greater risks than others in real-world long-tailed problems.
For example, misclassifying patients (a tail class) as healthy individuals (a
head class) entails far more serious consequences than the reverse scenario. To
address this critical issue, we introduce Making Reliable and Flexible
Decisions in Long-tailed Classification (RF-DLC), a novel framework aimed at
reliable predictions in long-tailed problems. Leveraging Bayesian Decision
Theory, we introduce an integrated gain to seamlessly combine long-tailed data
distributions and the decision-making procedure. We further propose an
efficient variational optimization strategy for the decision risk objective.
Our method adapts readily to diverse utility matrices, which can be designed
for specific tasks, ensuring its flexibility for different problem settings. In
empirical evaluation, we design a new metric, False Head Rate, to quantify
tail-sensitivity risk, along with comprehensive experiments on multiple
real-world tasks, including large-scale image classification and uncertainty
quantification, to demonstrate the reliability and flexibility of our method.
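The asymmetric-risk idea can be illustrated with a plain Bayes-optimal decision rule under a utility matrix; the two-class setup and utility values below are hypothetical, and this is the generic textbook rule rather than RF-DLC's integrated gain:

```python
import numpy as np

# Illustrative utility matrix U[d, y]: rows are decisions, columns true classes.
# Class 0 = "healthy" (head), class 1 = "patient" (tail). These hypothetical
# utilities penalize deciding "healthy" when the truth is "patient" far more
# than the reverse, mirroring the asymmetric-risk example in the abstract.
U = np.array([
    [ 1.0, -10.0],   # decide healthy: fine if healthy, very costly if patient
    [-1.0,   5.0],   # decide patient: small cost if healthy, correct if patient
])

def bayes_decision(posterior, utility):
    """Pick the decision maximizing expected utility under the posterior."""
    expected = utility @ posterior      # E_y[U[d, y]] for each decision d
    return int(np.argmax(expected))

# Even with only 20% posterior mass on "patient", the asymmetric utilities
# make "patient" the utility-maximizing call.
print(bayes_decision(np.array([0.8, 0.2]), U))  # 1
```

Swapping in a task-specific utility matrix is what gives this style of decision rule its flexibility across problem settings.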
|
2501.14094
|
Datasheets for AI and medical datasets (DAIMS): a data validation and
documentation framework before machine learning analysis in medical research
|
cs.LG
|
Despite progress in data engineering, data validation and documentation
procedures remain inconsistent in some areas, causing confusion and technical
problems in research involving machine learning. Frameworks such as
"Datasheets for Datasets" have advanced the field, yet there is still room for
improvement in preparing datasets that are ready for ML pipelines. Here, we
extend the framework to "Datasheets for AI and medical datasets" (DAIMS). Our
publicly available solution, DAIMS, provides a checklist of data
standardization requirements, a software tool to assist the data preparation
process, an extended form for documenting data and posing research questions,
a table serving as a data dictionary, and a flowchart that suggests ML
analyses to address the research questions. The checklist consists of 24
common data standardization requirements, a subset of which the tool checks
and validates. In addition, we provide a flowchart mapping research questions
to suggested ML methods. DAIMS can serve as a reference for standardizing
datasets and a roadmap for researchers aiming to apply effective ML techniques
in their medical research endeavors. DAIMS is available on GitHub and as an
online app to automate key aspects of dataset evaluation, facilitating
efficient preparation of datasets for ML studies.
|
2501.14095
|
Improved subsample-and-aggregate via the private modified winsorized
mean
|
stat.ME cs.LG
|
We develop a univariate, differentially private mean estimator, called the
private modified winsorized mean, designed to be used as the aggregator in
subsample-and-aggregate. We demonstrate, via real data analysis, that common
differentially private multivariate mean estimators may not perform well as the
aggregator, even with a dataset with 8000 observations, motivating our
developments. We show that the modified winsorized mean is minimax optimal for
several, large classes of distributions, even under adversarial contamination.
We also demonstrate that, empirically, the modified winsorized mean performs
well compared to other private mean estimates. We consider the modified
winsorized mean as the aggregator in subsample-and-aggregate, deriving a finite
sample deviations bound for a subsample-and-aggregate estimate generated with
the new aggregator. This result yields two important insights: (i) the optimal
choice of subsamples depends on the bias of the estimator computed on the
subsamples, and (ii) the rate of convergence of the subsample-and-aggregate
estimator depends on the robustness of the estimator computed on the
subsamples.
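A minimal sketch of the general pattern: a textbook differentially private winsorized mean used as the aggregator in subsample-and-aggregate. The clipping bounds, Laplace calibration, and median estimator below are generic illustrative choices, not the paper's modified winsorized mean:

```python
import numpy as np

rng = np.random.default_rng(1)

def private_winsorized_mean(x, lo, hi, eps):
    """Illustrative DP mean: winsorize to [lo, hi], average, and add Laplace
    noise calibrated to the clipped range. A generic textbook mechanism,
    not the paper's minimax-optimal estimator."""
    clipped = np.clip(x, lo, hi)
    sensitivity = (hi - lo) / len(x)   # L1 sensitivity of the clipped mean
    return clipped.mean() + rng.laplace(scale=sensitivity / eps)

def subsample_and_aggregate(data, estimator, k, lo, hi, eps):
    """Split data into k disjoint subsamples, run the (non-private)
    estimator on each, and aggregate the k results with the private mean."""
    blocks = np.array_split(rng.permutation(data), k)
    estimates = np.array([estimator(b) for b in blocks])
    return private_winsorized_mean(estimates, lo, hi, eps)

data = rng.normal(loc=5.0, scale=1.0, size=8000)
est = subsample_and_aggregate(data, np.median, k=40, lo=0.0, hi=10.0, eps=1.0)
print(est)  # typically near 5.0
```

The structure makes the paper's two insights concrete: the per-subsample estimator's bias and robustness both enter through the `estimates` array that the aggregator sees.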
|
2501.14099
|
The Perceived Danger (PD) Scale: Development and Validation
|
cs.RO
|
There are currently no psychometrically valid tools to measure the perceived
danger of robots. To fill this gap, we provided a definition of perceived
danger and developed and validated a 12-item bifactor scale through four
studies. An exploratory factor analysis revealed four subdimensions of
perceived danger: affective states, physical vulnerability, ominousness, and
cognitive readiness. A confirmatory factor analysis confirmed the bifactor
model. We then compared the perceived danger scale to the Godspeed perceived
safety scale and found that the perceived danger scale is a better predictor of
empirical data. We also validated the scale in an in-person setting and found
that the perceived danger scale is sensitive to robot speed manipulations,
consistent with previous empirical findings. Results across experiments suggest
that the perceived danger scale is reliable, valid, and an adequate predictor
of both perceived safety and perceived danger in human-robot interaction
contexts.
|
2501.14101
|
StreamingRAG: Real-time Contextual Retrieval and Generation Framework
|
cs.CV
|
Extracting real-time insights from multi-modal data streams from various
domains such as healthcare, intelligent transportation, and satellite remote
sensing remains a challenge. High computational demands and limited knowledge
scope restrict the applicability of Multi-Modal Large Language Models (MM-LLMs)
on these data streams. Traditional Retrieval-Augmented Generation (RAG) systems
address knowledge limitations of these models, but suffer from slow
preprocessing, making them unsuitable for real-time analysis. We propose
StreamingRAG, a novel RAG framework designed for streaming data. StreamingRAG
constructs evolving knowledge graphs capturing scene-object-entity
relationships in real time. The knowledge graph achieves temporally aware scene
representations using MM-LLMs and enables timely responses for specific events
or user queries. StreamingRAG addresses limitations in existing methods,
achieving significant improvements in real-time analysis (5-6x faster
throughput), contextual accuracy (through a temporal knowledge graph), and
reduced resource consumption (using lightweight models by 2-3x).
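The evolving, temporally aware graph can be sketched as triples tagged with observation time, so queries can be restricted to a recent window; the class name, relations, and traffic example below are hypothetical, not StreamingRAG's implementation:

```python
class TemporalKnowledgeGraph:
    """Minimal sketch of an evolving scene graph: edges are
    (subject, relation, object) triples tagged with the frame time at which
    they were observed, so lookups can be time-windowed."""

    def __init__(self):
        self.edges = []

    def observe(self, subj, rel, obj, t):
        """Record a scene-object-entity relationship extracted at time t."""
        self.edges.append((subj, rel, obj, t))

    def query(self, subj, since):
        """Return relations involving subj observed at or after `since`."""
        return [(s, r, o, t) for (s, r, o, t) in self.edges
                if s == subj and t >= since]

kg = TemporalKnowledgeGraph()
kg.observe("ambulance", "entering", "intersection", t=10)
kg.observe("ambulance", "leaving", "intersection", t=42)
print(kg.query("ambulance", since=30))
```

Keeping only lightweight structured triples, rather than re-running an MM-LLM per query, is what enables the kind of throughput and resource savings the abstract reports.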
|
2501.14102
|
5G LDPC Linear Transformer for Channel Decoding
|
cs.LG cs.IT math.IT
|
This work introduces a novel, fully differentiable linear-time complexity
transformer decoder and a transformer decoder to correct 5G New Radio (NR)
LDPC. We propose a scalable approach to decode linear block codes with $O(n)$
complexity rather than $O(n^2)$ for regular transformers. The architectures'
performances are compared to Belief Propagation (BP), the production-level
decoding algorithm used for 5G New Radio (NR) LDPC codes. We achieve bit error
rate performance that matches a regular Transformer decoder and surpasses
one-iteration BP, also achieving competitive time performance against BP, even for
larger block codes. We utilize Sionna, Nvidia's 5G & 6G physical layer research
software, for reproducible results.
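The O(n) versus O(n^2) distinction comes from the standard kernelized (linear) attention trick of reassociating the matrix product; the sketch below shows that generic mechanism with an assumed feature map, not the paper's decoder:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 128, 16                       # sequence length, head dimension
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

def softmax_attention(Q, K, V):
    """Standard attention: the n x n score matrix costs O(n^2)."""
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernelized attention: associativity lets us form phi(K).T @ V
    (a d x d summary) once, so the cost is O(n d^2) -- linear in n."""
    KV = phi(K).T @ V                          # d x d summary, built once
    normalizer = phi(Q) @ phi(K).sum(axis=0)   # per-row normalization
    return (phi(Q) @ KV) / normalizer[:, None]

print(linear_attention(Q, K, V).shape)  # (128, 16)
```

The positive feature map `phi` here is an illustrative ReLU-plus-epsilon choice; any positive kernel feature map yields the same complexity argument.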
|
2501.14103
|
Personalized Interpolation: An Efficient Method to Tame Flexible
Optimization Window Estimation
|
cs.LG
|
In the realm of online advertising, optimizing conversions is crucial for
delivering relevant products to users and enhancing business outcomes.
Predicting conversion events is challenging due to variable delays between user
interactions, such as impressions or clicks, and the actual conversions. These
delays differ significantly across various advertisers and products,
necessitating distinct optimization time windows for targeted conversions. To
address this, we introduce a novel approach named the \textit{Personalized
Interpolation} method, which innovatively builds upon existing fixed conversion
window models to estimate flexible conversion windows. This method allows for
the accurate estimation of conversions across a variety of delay ranges, thus
meeting the diverse needs of advertisers without increasing system complexity.
To validate the efficacy of our proposed method, we conducted comprehensive
experiments using an ads conversion model. Our experiments demonstrate that this
method not only achieves high prediction accuracy but also does so more
efficiently than other existing solutions. This validation underscores the
potential of our Personalized Interpolation method to significantly enhance
conversion optimization in real-world online advertising systems, promising
improved targeting and effectiveness in advertising strategies.
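One way to read "building on fixed conversion window models to estimate flexible windows" is interpolating between two fixed-window predictions; the linear-in-window assumption and the example numbers below are ours, and the paper's actual scheme may differ:

```python
def interpolate_conversion(p_short, p_long, w_short, w_long, w_target):
    """Estimate the cumulative conversion probability for a flexible target
    window by interpolating between two fixed-window model outputs.
    Assumes the cumulative probability grows linearly in window length
    between the two anchors -- an illustrative simplification."""
    assert w_short <= w_target <= w_long
    frac = (w_target - w_short) / (w_long - w_short)
    return p_short + frac * (p_long - p_short)

# Hypothetical fixed-window models predict 2% conversion by day 1 and 6% by
# day 7; a 4-day optimization window is estimated by interpolation.
print(round(interpolate_conversion(0.02, 0.06, 1, 7, 4), 3))  # 0.04
```

Because only the two existing fixed-window models are queried, the system complexity stays unchanged, which matches the abstract's stated goal.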
|
2501.14105
|
MedSlice: Fine-Tuned Large Language Models for Secure Clinical Note
Sectioning
|
cs.CL cs.AI cs.IR cs.LG
|
Extracting sections from clinical notes is crucial for downstream analysis
but is challenging due to variability in formatting and the labor-intensive nature
of manual sectioning. While proprietary large language models (LLMs) have shown
promise, privacy concerns limit their accessibility. This study develops a
pipeline for automated note sectioning using open-source LLMs, focusing on
three sections: History of Present Illness, Interval History, and Assessment
and Plan. We fine-tuned three open-source LLMs to extract sections using a
curated dataset of 487 progress notes, comparing results relative to
proprietary models (GPT-4o, GPT-4o mini). Internal and external validity were
assessed via precision, recall and F1 score. Fine-tuned Llama 3.1 8B
outperformed GPT-4o (F1=0.92). On the external validity test set, performance
remained high (F1= 0.85). Fine-tuned open-source LLMs can surpass proprietary
models in clinical note sectioning, offering advantages in cost, performance,
and accessibility.
|
2501.14107
|
EFiGP: Eigen-Fourier Physics-Informed Gaussian Process for Inference of
Dynamic Systems
|
stat.ML cs.LG
|
Parameter estimation and trajectory reconstruction for data-driven dynamical
systems governed by ordinary differential equations (ODEs) are essential tasks
in fields such as biology, engineering, and physics. These inverse problems --
estimating ODE parameters from observational data -- are particularly
challenging when the data are noisy, sparse, and the dynamics are nonlinear. We
propose the Eigen-Fourier Physics-Informed Gaussian Process (EFiGP), an
algorithm that integrates Fourier transformation and eigen-decomposition into a
physics-informed Gaussian Process framework. This approach eliminates the need
for numerical integration, significantly enhancing computational efficiency and
accuracy. Built on a principled Bayesian framework, EFiGP incorporates the ODE
system through probabilistic conditioning, enforcing governing equations in the
Fourier domain while truncating high-frequency terms to achieve denoising and
computational savings. The use of eigen-decomposition further simplifies
Gaussian Process covariance operations, enabling efficient recovery of
trajectories and parameters even in dense-grid settings. We validate the
practical effectiveness of EFiGP on three benchmark examples, demonstrating its
potential for reliable and interpretable modeling of complex dynamical systems
while addressing key challenges in trajectory recovery and computational cost.
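The high-frequency truncation step can be illustrated in isolation: transform a noisy trajectory, zero the coefficients above a cutoff, and invert. The test signal, noise level, and cutoff below are illustrative assumptions; the full EFiGP additionally conditions a Gaussian Process on the ODE system:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)                 # smooth true trajectory
noisy = clean + 0.3 * rng.normal(size=t.size)     # noisy observations

def truncate_fourier(y, keep):
    """Keep only the lowest `keep` frequency components (rfft ordering),
    zeroing the rest -- the truncation that yields both denoising and
    computational savings."""
    coeffs = np.fft.rfft(y)
    coeffs[keep:] = 0.0
    return np.fft.irfft(coeffs, n=len(y))

denoised = truncate_fourier(noisy, keep=10)
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())  # True
```

Since the retained frequencies capture the smooth dynamics while most of the noise energy lives in the discarded band, the truncated representation is both cleaner and lower-dimensional.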
|
2501.14111
|
Collaborating in a competitive world: Heterogeneous Multi-Agent Decision
Making in Symbiotic Supply Chain Environments
|
cs.MA
|
Supply networks require collaboration in a competitive environment. To
achieve this, nodes in the network often form symbiotic relationships as they
can be adversely affected by the closure of companies in the network,
especially where products are niche. However, balancing support for other nodes
in the network against profit is challenging. Agents are increasingly being
explored to define optimal strategies in these complex networks. However, to
date much of the literature focuses on homogeneous agents where a single policy
controls all of the nodes. This isn't realistic for many supply chains as this
level of information sharing would require an exceptionally close relationship.
This paper therefore compares the behaviour of this type of agent to a
heterogeneous structure, where the agents each have separate policies, to solve
the product ordering and pricing problem. An approach to reward sharing is
developed that doesn't require sharing profit. The homogeneous and heterogeneous
agents exhibit different behaviours, with the homogeneous retailer retaining
high inventories and witnessing high levels of backlog while the heterogeneous
agents show a typical order strategy. This leads to the heterogeneous agents
mitigating the bullwhip effect whereas the homogeneous agents do not. In the
high demand environment, the agent architecture dominates performance with the
Soft Actor-Critic (SAC) agents outperforming the Proximal Policy Optimisation
(PPO) agents. Here, the factory controls the supply chain. In the low demand
environment the homogeneous agents outperform the heterogeneous agents. Control
of the supply chain shifts significantly, with the retailer outperforming the
factory by a significant margin.
|
2501.14112
|
CoPERLex: Content Planning with Event-based Representations for Legal
Case Summarization
|
cs.CL
|
Legal professionals often struggle with lengthy judgments and require
efficient summarization for quick comprehension. To address this challenge, we
investigate the need for structured planning in legal case summarization,
particularly through event-centric representations that reflect the narrative
nature of legal case documents. We propose our framework, CoPERLex, which
operates in three stages: first, it performs content selection to identify
crucial information from the judgment; second, the selected content is utilized
to generate intermediate plans through event-centric representations modeled as
Subject-Verb-Object tuples; and finally, it generates coherent summaries based
on both the content and the structured plan. Our experiments on four legal
summarization datasets demonstrate the effectiveness of integrating content
selection and planning components, highlighting the advantages of event-centric
plans over traditional entity-centric approaches in the context of legal
judgements.
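The three-stage structure can be sketched as a pipeline over Subject-Verb-Object tuples; the selection heuristic, the naive word-split "parser", and the template realizer below are hand-written stubs for illustration, not CoPERLex's trained components:

```python
from typing import List, Tuple

SVO = Tuple[str, str, str]

def select_content(judgment_sentences: List[str]) -> List[str]:
    """Stage 1 (stub): keep crucial sentences. A real system uses a trained
    selector; here we keep sentences mentioning the court."""
    return [s for s in judgment_sentences if "court" in s.lower()]

def plan_events(selected: List[str]) -> List[SVO]:
    """Stage 2 (stub): map each selected sentence to a Subject-Verb-Object
    tuple via a naive first-two-words split, for illustration only."""
    plans = []
    for s in selected:
        subj, verb, *rest = s.rstrip(".").split(" ", 2) + [""]
        plans.append((subj, verb, rest[0] if rest else ""))
    return plans

def summarize(selected: List[str], plan: List[SVO]) -> str:
    """Stage 3 (stub): realize the plan as text; a real system conditions a
    generator on both the selected content and the SVO plan."""
    return " ".join(f"{s} {v} {o}." for s, v, o in plan)

sents = ["Court dismissed the appeal.", "The hearing lasted two days."]
plan = plan_events(select_content(sents))
print(plan)  # [('Court', 'dismissed', 'the appeal')]
```

The point of the intermediate SVO plan is that it carries the narrative's events, which is what the abstract argues gives event-centric plans an edge over entity-centric ones.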
|
2501.14113
|
RELexED: Retrieval-Enhanced Legal Summarization with Exemplar Diversity
|
cs.CL
|
This paper addresses the task of legal summarization, which involves
distilling complex legal documents into concise, coherent summaries. Current
approaches often struggle with content theme deviation and inconsistent writing
styles due to their reliance solely on source documents. We propose RELexED, a
retrieval-augmented framework that utilizes exemplar summaries along with the
source document to guide the model. RELexED employs a two-stage exemplar
selection strategy, leveraging a determinantal point process to balance the
trade-off between similarity of exemplars to the query and diversity among
exemplars, with scores computed via influence functions. Experimental results
on two legal summarization datasets demonstrate that RELexED significantly
outperforms models that do not utilize exemplars and those that rely solely on
similarity-based exemplar selection.
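The similarity-diversity trade-off can be sketched with greedy MAP inference for a quality-diversity DPP kernel; the kernel construction, the toy scores, and the naive greedy loop below are illustrative, and the paper's influence-function scoring and two-stage strategy are not reproduced:

```python
import numpy as np

def greedy_dpp_select(query_sim, item_sim, k):
    """Greedy MAP inference for a DPP with kernel L = diag(q) S diag(q):
    q_i scores similarity of exemplar i to the query, S encodes pairwise
    exemplar similarity, so the determinant objective trades query
    relevance against diversity. Naive re-computation per step; fine for
    illustration."""
    n = len(query_sim)
    L = np.outer(query_sim, query_sim) * item_sim
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            det = np.linalg.det(L[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
    return selected

# Items 0 and 1 are near-duplicates; both are query-relevant, yet the DPP
# picks the diverse pair instead of the two redundant exemplars.
S = np.array([[1.0, 0.98, 0.1],
              [0.98, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
q = np.array([0.9, 0.88, 0.6])
print(greedy_dpp_select(q, S, k=2))  # [0, 2]
```

A purely similarity-based selector would return items 0 and 1 here, which is exactly the redundancy failure mode the DPP formulation avoids.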
|
2501.14114
|
LeCoPCR: Legal Concept-guided Prior Case Retrieval for European Court of
Human Rights cases
|
cs.CL
|
Prior case retrieval (PCR) is crucial for legal practitioners to find
relevant precedent cases given the facts of a query case. Existing approaches
often overlook the underlying semantic intent in determining relevance with
respect to the query case. In this work, we propose LeCoPCR, a novel approach
that explicitly generates intents in the form of legal concepts from the facts
of a given query case and then augments the query with these concepts to enhance
the model's understanding of the semantic intent that dictates relevance. To overcome
the unavailability of annotated legal concepts, we employ a weak supervision
approach to extract key legal concepts from the reasoning section using
Determinantal Point Process (DPP) to balance quality and diversity.
Experimental results on the ECtHR-PCR dataset demonstrate the effectiveness of
leveraging legal concepts and DPP-based key concept extraction.
|
2501.14115
|
Passivity-Based Robust Shape Control of a Cable-Driven Solar Sail Boom
for the CABLESSail Concept
|
eess.SY cs.SY physics.space-ph
|
Solar sails provide a means of propulsion using solar radiation pressure,
which offers the possibility of exciting new spacecraft capabilities. However,
solar sails have attitude control challenges because of the significant
disturbance torques that they encounter due to imperfections in the sail and
its supporting structure, as well as limited actuation capabilities. The
Cable-Actuated Bio-inspired Lightweight Elastic Solar Sail (CABLESSail) concept
was previously proposed to overcome these challenges by controlling the shape
of the sail through cable actuation. The structural flexibility of CABLESSail
introduces control challenges, which necessitate the design of a robust
feedback controller for this system. The purpose of this work is to design a
robust controller that ensures precise and reliable control of
CABLESSail's boom. Taking into account the system dynamics and the dynamic
properties of the CABLESSail concept, a passivity-based proportional-derivative
(PD) controller for a single boom on the CABLESSail system is designed. To
reach the nonzero desired setpoints, a feedforward input is additionally
applied to the control law and a time-varying feedforward input is used instead
of the constant one to effectively track a time-varying desired boom tip
deflection. This control law is assessed by numerical simulations and by tests
using a smaller-scale prototype of Solar Cruiser. Both the simulation and the
test results show that this PD control with the time-varying feedforward input
robustly controls the flexible cable-actuated solar sail.
|
2501.14118
|
Selecting Critical Scenarios of DER Adoption in Distribution Grids Using
Bayesian Optimization
|
cs.LG stat.AP stat.ML
|
We develop a new methodology to select scenarios of DER adoption most
critical for distribution grids. Anticipating risks of future voltage and line
flow violations due to additional PV adopters is central for utility investment
planning but continues to rely on deterministic or ad hoc scenario selection.
We propose a highly efficient search framework based on multi-objective
Bayesian Optimization. We treat underlying grid stress metrics as
computationally expensive black-box functions, approximated via Gaussian
Process surrogates, and design an acquisition function based on the probability of
scenarios being Pareto-critical across a collection of line- and bus-based
violation objectives. Our approach provides a statistical guarantee and offers
an order of magnitude speed-up relative to a conservative exhaustive search.
Case studies on realistic feeders with 200-400 buses demonstrate the
effectiveness and accuracy of our approach.
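The Pareto-criticality notion at the heart of the acquisition function can be illustrated with a plain non-domination check over violation objectives; the two-objective scores below are hypothetical, and the full method wraps this in GP surrogates and a probabilistic acquisition:

```python
def is_pareto_critical(candidate, others):
    """A scenario is Pareto-critical (here: non-dominated) if no other
    scenario is at least as stressful on every violation objective and
    strictly more stressful on at least one. Convention: higher = more
    grid stress."""
    for o in others:
        dominates = all(oj >= cj for oj, cj in zip(o, candidate)) and any(
            oj > cj for oj, cj in zip(o, candidate))
        if dominates:
            return False
    return True

# Hypothetical (line-violation, bus-violation) stress scores per scenario.
scenarios = [(0.9, 0.2), (0.5, 0.8), (0.4, 0.3)]
print([is_pareto_critical(s, [o for o in scenarios if o != s])
       for s in scenarios])  # [True, True, False]
```

In the full method, the objectives are expensive black-box evaluations, so the GP surrogate supplies a probability of this non-domination outcome rather than computing it exactly.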
|
2501.14119
|
Autonomous Structural Memory Manipulation for Large Language Models
Using Hierarchical Embedding Augmentation
|
cs.CL cs.AI
|
Transformative innovations in model architectures have introduced
hierarchical embedding augmentation as a means to redefine the representation
of tokens through multi-level semantic structures, offering enhanced
adaptability to complex linguistic inputs. Autonomous structural memory
manipulation further advances this paradigm through dynamic memory reallocation
mechanisms that prioritize critical contextual features while suppressing less
relevant information, enabling scalable and efficient performance across
diverse tasks. Experimental results reveal substantial improvements in
computational efficiency, with marked reductions in processing overhead for
longer input sequences, achieved through memory reorganization strategies that
adapt to evolving contextual requirements. Hierarchical embeddings not only
improved contextual alignment but also facilitated task generalization by
capturing relationships at varying semantic granularities, ensuring coherence
across layers without introducing significant computational redundancies.
Comparative analysis against baseline models demonstrated unique advantages in
accuracy, efficiency, and interpretability, particularly in tasks requiring
complex contextual understanding or domain-specific adaptability. The ability
to dynamically adjust token representations and memory configurations
contributed to the model's robustness under varied and unpredictable input
conditions. Applications benefiting from these advancements include
multi-domain generalization, interactive systems, and scenarios involving
real-time decision-making, where traditional static memory architectures often
face limitations. The proposed methodology combines advanced embedding and
memory management strategies into a cohesive framework that addresses
scalability challenges while preserving task-specific relevance.
|
2501.14120
|
On the Transfer of Knowledge in Quantum Algorithms
|
quant-ph cs.AI
|
The field of quantum computing is generating significant anticipation within
the scientific and industrial communities due to its potential to revolutionize
computing paradigms. Recognizing this potential, this paper explores the
integration of transfer of knowledge techniques, traditionally used in
classical artificial intelligence, into quantum computing. We present a
comprehensive classification of the transfer models, focusing on Transfer
Learning and Transfer Optimization. Additionally, we analyze relevant schemes
in quantum computing that can benefit from knowledge sharing, and we delve into
the potential synergies, supported by theoretical insights and initial
experimental results. Our findings suggest that leveraging the transfer of
knowledge can enhance the efficiency and effectiveness of quantum algorithms,
particularly in the context of hybrid solvers. This approach not only
accelerates the optimization process but also reduces the computational burden
on quantum processors, making it a valuable tool for advancing quantum
computing technologies.
|
2501.14122
|
Reinforcement Learning Platform for Adversarial Black-box Attacks with
Custom Distortion Filters
|
cs.LG cs.AI cs.CR cs.CV
|
We present a Reinforcement Learning Platform for Adversarial Black-box
untargeted and targeted attacks, RLAB, that allows users to select from various
distortion filters to create adversarial examples. The platform uses a
Reinforcement Learning agent to add minimum distortion to input images while
still causing misclassification by the target model. The agent uses a novel
dual-action method to explore the input image at each step to identify
sensitive regions for adding distortions while removing noises that have less
impact on the target model. This dual action leads to faster and more efficient
convergence of the attack. The platform can also be used to measure the
robustness of image classification models against specific distortion types.
Also, retraining the model with adversarial samples significantly improved
robustness when evaluated on benchmark datasets. The proposed platform
outperforms state-of-the-art methods in terms of the average number of queries
required to cause misclassification. This advances trustworthiness with a
positive social impact.
|
2501.14133
|
Development of a Validation and Inspection Tool for Armband-based
Lifelog Data (VITAL) to Facilitate the Clinical Use of Wearable Data: A
Prototype and Usability Evaluation
|
cs.HC cs.SY eess.SY
|
Background: The rise of mobile technology and health apps has increased the
use of person-generated health data (PGHD). PGHD holds significant potential
for clinical decision-making but remains challenging to manage. Objective: This
study aimed to enhance the clinical utilization of wearable health data by
developing the Validation and Inspection Tool for Armband-Based Lifelog Data
(VITAL), a pipeline for data integration, visualization, and quality
management, and evaluating its usability. Methods: The study followed a
structured process of requirement gathering, tool implementation, and usability
evaluation. Requirements were identified through input from four clinicians.
Wearable health data from Samsung, Apple, Fitbit, and Xiaomi devices were
integrated into a standardized dataframe at 10-minute intervals, focusing on
biometrics, activity, and sleep. Features of VITAL support data integration,
visualization, and quality management. Usability evaluation involved seven
clinicians performing tasks, completing the Unified Theory of Acceptance and
Use of Technology (UTAUT) survey, and participating in interviews to identify
usability issues. Results: VITAL successfully integrated the wearable data,
enabling all participants to complete tasks with minimal errors and without
prior training. UTAUT survey results were positive, with average scores
of 4.2 for performance expectancy, 3.96 for effort expectancy, and 4.14 for
intention to use, indicating high user satisfaction and intent to adopt the
tool. Conclusions: By enhancing wearable data integration, visualization, and
quality management, the VITAL prototype shows significant potential for
clinical application. Positive feedback highlights its promise, while
emphasizing the need for further studies to confirm its real-world
effectiveness.
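The 10-minute standardization step described above can be sketched as a simple bucketing pass; the (timestamp, metric, value) record schema and the mean aggregation are illustrative assumptions, not VITAL's actual implementation:

```python
from datetime import datetime
from collections import defaultdict

def bucket_10min(records):
    """Average (timestamp, metric, value) readings within 10-minute bins.
    The record schema and mean aggregation are illustrative assumptions."""
    sums, counts = defaultdict(float), defaultdict(int)
    for ts, metric, value in records:
        # Floor the timestamp to the start of its 10-minute interval.
        binned_ts = ts.replace(minute=(ts.minute // 10) * 10,
                               second=0, microsecond=0)
        sums[(binned_ts, metric)] += value
        counts[(binned_ts, metric)] += 1
    return {k: sums[k] / counts[k] for k in sums}

readings = [
    (datetime(2024, 1, 1, 8, 2), "heart_rate", 62.0),
    (datetime(2024, 1, 1, 8, 7), "heart_rate", 66.0),
    (datetime(2024, 1, 1, 8, 14), "heart_rate", 70.0),
]
binned = bucket_10min(readings)
```

Readings at 08:02 and 08:07 fall into the 08:00 bin and are averaged; the 08:14 reading lands in the 08:10 bin on its own.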
|
2501.14136
|
Saliency Maps are Ambiguous: Analysis of Logical Relations on First and
Second Order Attributions
|
cs.LG
|
Recent work has uncovered potential flaws in, e.g., attribution- or
heatmap-based saliency methods. A typical flaw is confirmation bias, where the
scores are compared to human expectations. Since measuring the quality of saliency methods
is hard due to missing ground truth model reasoning, finding general
limitations is also hard. This is further complicated, because masking-based
evaluation on complex data can easily introduce a bias, as most methods cannot
fully ignore inputs. In this work, we extend our previous analysis on the
logical dataset framework ANDOR, where we showed that all analysed saliency
methods fail to grasp all needed classification information for all possible
scenarios. Specifically, this paper extends our previous work using analysis on
more datasets, in order to better understand in which scenarios the saliency
methods fail. Further, we apply the Global Coherence Representation as an
additional evaluation method in order to enable actual input omission.
|
2501.14143
|
An Extensive and Methodical Review of Smart Grids for Sustainable Energy
Management-Addressing Challenges with AI, Renewable Energy Integration and
Leading-edge Technologies
|
cs.LG cs.CY
|
Energy management decreases energy expenditures and consumption while
simultaneously increasing energy efficiency, reducing carbon emissions, and
enhancing operational performance. Smart grids are sophisticated energy
infrastructures that increase the sustainability, dependability, and efficiency
of electricity generation and distribution by utilizing digital communication
technologies. They combine a number of cutting-edge techniques and technologies
to improve energy resource management. A large body of research on smart grids
for energy management has been published in recent years. The present study
covers a number of topics, including smart grid benefits and components,
technical developments, integrating renewable energy sources, using artificial
intelligence and data analytics, cybersecurity, and privacy. Smart grids for
energy management form an innovative field of study aimed at tackling various
difficulties and improving the efficiency, dependability, and sustainability of
energy systems, including: 1) renewable power sources such as solar and wind
are intermittent and unpredictable; 2) smart grid systems must be defended
against various cyber-attacks; 3) a growing number of electric vehicles must be
incorporated into the power grid without overwhelming it. The study also looks
into how AI and data analytics can be used to optimize grid performance,
enhance reliability, and improve energy management. The authors explore these
significant challenges and the ongoing research. Lastly, significant open
issues in this field are noted, and recommendations for further work are
provided.
|
2501.14144
|
Test-Time Code-Switching for Cross-lingual Aspect Sentiment Triplet
Extraction
|
cs.CL
|
Aspect Sentiment Triplet Extraction (ASTE) is a thriving research area with
impressive outcomes being achieved on high-resource languages. However, the
application of cross-lingual transfer to the ASTE task has been relatively
unexplored, and current code-switching methods still suffer from term boundary
detection issues and out-of-dictionary problems. In this study, we introduce a
novel Test-Time Code-SWitching (TT-CSW) framework, which bridges the gap
between the bilingual training phase and the monolingual test-time prediction.
During training, a generative model is developed based on bilingual
code-switched training data and can produce bilingual ASTE triplets for
bilingual inputs. In the testing stage, we employ an alignment-based
code-switching technique for test-time augmentation. Extensive experiments on
cross-lingual ASTE datasets validate the effectiveness of our proposed method.
We achieve an average improvement of 3.7% in terms of weighted-averaged F1 in
four datasets with different languages. Additionally, we set a benchmark using
ChatGPT and GPT-4, and demonstrate that even smaller generative models
fine-tuned with our proposed TT-CSW framework surpass ChatGPT and GPT-4 by
14.2% and 5.0% respectively.
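The weighted-averaged F1 reported above builds on the standard exact-match triplet F1 for ASTE, which can be sketched as follows (the paper's exact matching criterion may differ):

```python
def triplet_f1(pred, gold):
    """Exact-match F1 over (aspect, opinion, sentiment) triplets, the usual
    ASTE metric (sketch; the paper's matching criterion may differ)."""
    tp = len(set(pred) & set(gold))
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = [("battery", "long", "POS"), ("screen", "dim", "NEG")]
pred = [("battery", "long", "POS"), ("screen", "bright", "POS")]
f1 = triplet_f1(pred, gold)   # precision 0.5, recall 0.5 -> F1 0.5
```

The weighted average then weights each dataset's F1 by its size.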
|
2501.14147
|
HAMMER: Heterogeneous, Multi-Robot Semantic Gaussian Splatting
|
cs.RO
|
3D Gaussian Splatting offers expressive scene reconstruction, modeling a
broad range of visual, geometric, and semantic information. However, efficient
real-time map reconstruction with data streamed from multiple robots and
devices remains a challenge. To that end, we propose HAMMER, a server-based
collaborative Gaussian Splatting method that leverages widely available ROS
communication infrastructure to generate 3D, metric-semantic maps from
asynchronous robot data-streams with no prior knowledge of initial robot
positions and varying on-device pose estimators. HAMMER consists of (i) a frame
alignment module that transforms local SLAM poses and image data into a global
frame and requires no prior relative pose knowledge, and (ii) an online module
for training semantic 3DGS maps from streaming data. HAMMER handles mixed
perception modes, adjusts automatically for variations in image pre-processing
among different devices, and distills CLIP semantic codes into the 3D scene for
open-vocabulary language queries. In our real-world experiments, HAMMER creates
higher-fidelity maps (2x) compared to competing baselines and is useful for
downstream tasks, such as semantic goal-conditioned navigation (e.g., "go to
the couch"). Accompanying content available at hammer-project.github.io.
|
2501.14148
|
SelfPrompt: Confidence-Aware Semi-Supervised Tuning for Robust
Vision-Language Model Adaptation
|
cs.CV
|
We present SelfPrompt, a novel prompt-tuning approach for vision-language
models (VLMs) in a semi-supervised learning setup. Existing methods for tuning
VLMs in semi-supervised setups struggle with the negative impact of the
miscalibrated VLMs on pseudo-labelling, and the accumulation of noisy
pseudo-labels. SelfPrompt addresses these challenges by introducing a
cluster-guided pseudo-labelling method that improves pseudo-label accuracy, and
a confidence-aware semi-supervised learning module that maximizes the
utilization of unlabelled data by combining supervised learning and
weakly-supervised learning. Additionally, we investigate our method in an
active semi-supervised learning setup, where the labelled set is strategically
selected to ensure the best utilization of a limited labelling budget. To this
end, we propose a weakly-supervised sampling technique that selects a diverse
and representative labelled set, which can be seamlessly integrated into
existing methods to enhance their performance. We conduct extensive evaluations
across 13 datasets, significantly surpassing state-of-the-art performances with
average improvements of 6.23% in standard semi-supervised learning, 6.25% in
active semi-supervised learning, and 4.9% in base-to-novel generalization,
using a 2-shot setup. Furthermore, SelfPrompt shows excellent generalization in
single-shot settings, achieving an average improvement of 11.78%.
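The confidence-aware routing of unlabelled data can be sketched as below; the threshold and candidate-label-set mechanism are illustrative assumptions, not SelfPrompt's exact module:

```python
def split_by_confidence(probs, tau=0.8, top_k=3):
    """Route unlabelled examples by prediction confidence: confident ones get
    a hard pseudo-label (supervised branch), uncertain ones keep a candidate
    label set (weakly-supervised branch). tau and top_k are illustrative."""
    supervised, weak = [], []
    for i, p in enumerate(probs):
        best = max(range(len(p)), key=p.__getitem__)
        if p[best] >= tau:
            supervised.append((i, best))          # hard pseudo-label
        else:
            ranked = sorted(range(len(p)), key=p.__getitem__, reverse=True)
            weak.append((i, ranked[:top_k]))      # candidate label set
    return supervised, weak

probs = [[0.9, 0.05, 0.05], [0.4, 0.35, 0.25]]
sup, weak = split_by_confidence(probs)
```

The first example is confident enough for a hard pseudo-label; the second falls back to a top-3 candidate set for weakly-supervised training.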
|
2501.14149
|
Effective Defect Detection Using Instance Segmentation for NDI
|
cs.CV cs.LG
|
Ultrasonic testing is a common Non-Destructive Inspection (NDI) method used
in aerospace manufacturing. However, the complexity and size of the ultrasonic
scans make it challenging to identify defects through visual inspection or
machine learning models. Using computer vision techniques to identify defects
from ultrasonic scans is an evolving research area. In this study, we used
instance segmentation to identify the presence of defects in the ultrasonic
scan images of composite panels that are representative of real components
manufactured in aerospace. We used two models based on Mask-RCNN (Detectron 2)
and YOLO 11 respectively. Additionally, we implemented a simple statistical
pre-processing technique that reduces the burden of requiring custom-tailored
pre-processing techniques. Our study demonstrates the feasibility and
effectiveness of using instance segmentation in the NDI pipeline by
significantly reducing data pre-processing time, inspection time, and overall
costs.
|
2501.14151
|
RaccoonBot: An Autonomous Wire-Traversing Solar-Tracking Robot for
Persistent Environmental Monitoring
|
cs.RO
|
Environmental monitoring is used to characterize the health and relationship
between organisms and their environments. In forest ecosystems, robots can
serve as platforms to acquire such data, even in hard-to-reach places where
wire-traversing platforms are particularly promising due to their efficient
displacement. This paper presents the RaccoonBot, which is a novel autonomous
wire-traversing robot for persistent environmental monitoring, featuring a
fail-safe mechanical design with a self-locking mechanism in case of electrical
shortage. The robot also features energy-aware mobility through a novel
solar-tracking algorithm that allows it to find a position on the wire with
direct exposure to sunlight, increasing the energy harvested. Experimental
results validate the electro-mechanical features of the RaccoonBot, showing
that it can handle wire perturbations and different inclinations while
achieving energy autonomy.
|
2501.14152
|
Multimodal Prescriptive Deep Learning
|
cs.LG stat.ML
|
We introduce a multimodal deep learning framework, Prescriptive Neural
Networks (PNNs), that combines ideas from optimization and machine learning,
and is, to the best of our knowledge, the first prescriptive method to handle
multimodal data. The PNN is a feedforward neural network trained on embeddings
to output an outcome-optimizing prescription. In two real-world multimodal
datasets, we demonstrate that PNNs prescribe treatments that are able to
significantly improve estimated outcomes in transcatheter aortic valve
replacement (TAVR) procedures by reducing estimated postoperative complication
rates by 32% and in liver trauma injuries by reducing estimated mortality rates
by over 40%. In four real-world, unimodal tabular datasets, we demonstrate that
PNNs outperform or perform comparably to other well-known, state-of-the-art
prescriptive models; importantly, on tabular datasets, we also recover
interpretability through knowledge distillation, fitting interpretable Optimal
Classification Tree models onto the PNN prescriptions as classification
targets, which is critical for many real-world applications. Finally, we
demonstrate that our multimodal PNN models achieve stability across randomized
data splits comparable to other prescriptive methods and produce realistic
prescriptions across the different datasets.
|
2501.14155
|
Learning to Price with Resource Constraints: From Full Information to
Machine-Learned Prices
|
math.OC cs.LG
|
We study the dynamic pricing problem with knapsack, addressing the challenge
of balancing exploration and exploitation under resource constraints. We
introduce three algorithms tailored to different informational settings: a
Boundary Attracted Re-solve Method for full information, an online learning
algorithm for scenarios with no prior information, and an estimate-then-select
re-solve algorithm that leverages machine-learned informed prices with a known
upper bound on the estimation error. The Boundary Attracted Re-solve Method
achieves logarithmic regret without requiring the non-degeneracy condition,
while the online learning algorithm attains an optimal $O(\sqrt{T})$ regret.
Our estimate-then-select approach bridges the gap between these settings,
providing improved regret bounds when reliable offline data is available.
Numerical experiments validate the effectiveness and robustness of our
algorithms across various scenarios. This work advances the understanding of
online resource allocation and dynamic pricing, offering practical solutions
adaptable to different informational structures.
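A stylized version of a full-information re-solve step is sketched below; the linear demand curve, candidate price grid, and per-period inventory budgeting are illustrative assumptions, not the paper's Boundary Attracted Re-solve Method itself:

```python
def resolve_price(prices, demand, inventory, remaining_T):
    """One re-solve step: pick the price maximizing expected revenue this
    period, with sales capped by the per-period inventory budget. A stylized
    full-information sketch."""
    best_p, best_rev = None, -1.0
    for p in prices:
        # Sales cannot exceed the inventory spread over the remaining horizon.
        sold = min(demand(p), inventory / remaining_T)
        if p * sold > best_rev:
            best_p, best_rev = p, p * sold
    return best_p

demand = lambda p: max(0.0, 1.0 - p)   # assumed known linear demand curve
price = resolve_price([0.3, 0.5, 0.7], demand, inventory=2.0, remaining_T=10)
```

With a binding inventory cap of 0.2 units per period, the highest price earns the most per unit sold, so the step selects 0.7.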
|
2501.14158
|
Advancing MRI Reconstruction: A Systematic Review of Deep Learning and
Compressed Sensing Integration
|
cs.CV cs.AI physics.med-ph
|
Magnetic resonance imaging (MRI) is a non-invasive imaging modality and
provides comprehensive anatomical and functional insights into the human body.
However, its long acquisition times can lead to patient discomfort and motion
artifacts, and can limit real-time applications. To address these challenges,
strategies such as parallel imaging have been applied, which utilize multiple
receiver coils to speed up the data acquisition process. Additionally,
compressed sensing (CS) is a method that facilitates image reconstruction from
sparse data, significantly reducing image acquisition time by minimizing the
amount of data collection needed. Recently, deep learning (DL) has emerged as a
powerful tool for improving MRI reconstruction. It has been integrated with
parallel imaging and CS principles to achieve faster and more accurate MRI
reconstructions. This review comprehensively examines DL-based techniques for
MRI reconstruction. We categorize and discuss various DL-based methods,
including end-to-end approaches, unrolled optimization, and federated learning,
highlighting their potential benefits. Our systematic review highlights
significant contributions and underscores the potential of DL in MRI
reconstruction. Additionally, we summarize key results and trends in DL-based
MRI reconstruction, including quantitative metrics, datasets, acceleration
factors, and the progress of and research interest in DL techniques over time.
Finally, we discuss potential future directions and the importance of DL-based
MRI reconstruction in advancing medical imaging. To facilitate further research
in this area, we provide a GitHub repository that includes up-to-date DL-based
MRI reconstruction publications and public
datasets-https://github.com/mosaf/Awesome-DL-based-CS-MRI.
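The CS reconstruction principle that unrolled DL methods imitate can be sketched with the classic ISTA iteration; this toy dense version is an illustration only, since a real MRI forward operator would be an undersampled Fourier transform:

```python
def soft_threshold(x, lam):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def ista(A, y, lam=0.01, step=0.5, iters=100):
    """Iterative Shrinkage-Thresholding (ISTA) for the sparse recovery
    problem min 0.5*||A x - y||^2 + lam*||x||_1 (toy dense version)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - y, then gradient g = A^T r.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], lam * step)
    return x

# With A = identity, ISTA converges to the soft-thresholded measurements.
x_hat = ista([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

Unrolled DL methods replace a fixed number of these iterations with learned layers, keeping the same data-consistency-plus-regularization structure.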
|