| id | title | categories | abstract |
|---|---|---|---|
2501.14646
|
SyncAnimation: A Real-Time End-to-End Framework for Audio-Driven Human
Pose and Talking Head Animation
|
cs.CV
|
Generating a talking avatar driven by audio remains a significant challenge.
Existing methods typically require high computational costs and often lack
sufficient facial detail and realism, making them unsuitable for applications
that demand high real-time performance and visual quality. Additionally, while
some methods can synchronize lip movement, they still face issues with
consistency between facial expressions and upper-body movement, particularly
during silent periods. In this paper, we introduce SyncAnimation, the first
NeRF-based method that achieves audio-driven, stable, and real-time generation
of a speaking avatar by combining generalized audio-to-pose matching and
audio-to-expression synchronization. By integrating the AudioPose Syncer and
AudioEmotion Syncer, SyncAnimation achieves high-precision pose and expression
generation, progressively producing audio-synchronized upper body, head, and
lip shapes. Furthermore, the High-Synchronization Human Renderer ensures
seamless integration of the head and upper body, and achieves audio-synced lips.
The project page can be found at https://syncanimation.github.io
|
2501.14649
|
Investigating the (De)Composition Capabilities of Large Language Models
in Natural-to-Formal Language Conversion
|
cs.CL
|
To achieve generalized and robust natural-to-formal language conversion
(N2F), large language models (LLMs) need to have strong capabilities of
decomposition and composition in N2F when faced with an unfamiliar formal
language and be able to cope with compositional gaps and counter-intuitive
symbolic names. To investigate whether LLMs have this set of basic capabilities
in N2F, we propose the DEDC framework. This framework semi-automatically
performs sample and task construction, allowing decoupled evaluation of the set
of decomposition and composition capabilities of LLMs in N2F. Based on this
framework, we evaluate and analyze the most advanced LLMs. The main findings
are: (1) the LLMs are deficient in both decomposition and
composition; (2) the LLMs show a wide coverage of error types that can be
attributed to deficiencies in natural language understanding and the learning
and use of symbolic systems; (3) compositional gaps and counter-intuitive
symbolic names both affect the decomposition and composition of the LLMs. Our
work provides a new perspective for investigating the basic capabilities of
decomposition and composition of LLMs in N2F. The detailed analysis of
deficiencies and attributions can help subsequent improvements of LLMs.
|
2501.14652
|
Decoupled SGDA for Games with Intermittent Strategy Communication
|
cs.LG
|
We focus on reducing communication overhead in multiplayer games, where
frequently exchanging strategies between players is not feasible and players
have noisy or outdated strategies of the other players. We introduce Decoupled
SGDA, a novel adaptation of Stochastic Gradient Descent Ascent (SGDA). In this
approach, players independently update their strategies based on outdated
opponent strategies, with periodic synchronization to align strategies. For
Strongly-Convex-Strongly-Concave (SCSC) games, we demonstrate that Decoupled
SGDA achieves near-optimal communication complexity comparable to the
best-known GDA rates. For weakly coupled games, where the interaction between
players is weak relative to the non-interactive part of the game, Decoupled
SGDA significantly reduces communication costs compared to standard SGDA. Our
findings extend to multi-player games. To provide insight into the effect of
communication frequency on convergence, we extensively study the convergence
of Decoupled SGDA for quadratic minimax problems. Lastly, in settings where the
noise over the players is imbalanced, Decoupled SGDA significantly outperforms
federated minimax methods.
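The update scheme described above (local SGDA steps against a stale copy of the opponent's strategy, with periodic synchronization) can be sketched on a toy quadratic SCSC game. The coupling constant, step size, and synchronization period below are illustrative choices, not the paper's:

```python
import random

def decoupled_sgda(c=0.5, lr=0.1, rounds=50, local_steps=10, noise=0.0, seed=0):
    """Toy sketch of Decoupled SGDA on f(x, y) = 0.5*x^2 + c*x*y - 0.5*y^2.

    Each player updates against a stale copy of the opponent's strategy,
    and the copies are refreshed only at periodic synchronization points.
    All parameters here are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    x, y = 1.0, -1.0            # current strategies
    y_stale, x_stale = y, x     # opponents' last communicated strategies
    for _ in range(rounds):
        for _ in range(local_steps):
            gx = x + c * y_stale + noise * rng.gauss(0, 1)  # grad_x f
            gy = c * x_stale - y + noise * rng.gauss(0, 1)  # grad_y f
            x -= lr * gx        # minimizing player: gradient descent
            y += lr * gy        # maximizing player: gradient ascent
        y_stale, x_stale = y, x  # periodic synchronization
    return x, y
```

With zero noise and a coupling |c| < 1, the iterates contract toward the unique saddle point at the origin, illustrating why infrequent synchronization can suffice when the game is weakly coupled.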
|
2501.14653
|
Federated Domain Generalization with Data-free On-server Gradient
Matching
|
cs.LG cs.AI cs.DC cs.MA
|
Domain Generalization (DG) aims to learn from multiple known source domains a
model that can generalize well to unknown target domains. One of the key
approaches in DG is training an encoder which generates domain-invariant
representations. However, this approach is not applicable in Federated Domain
Generalization (FDG), where data from various domains are distributed across
different clients. In this paper, we introduce a novel approach, dubbed
Federated Learning via On-server Matching Gradient (FedOMG), which can
\emph{efficiently leverage domain information from distributed domains}.
Specifically, we utilize the local gradients as information about the
distributed models to find an invariant gradient direction across all domains
through gradient inner product maximization. The advantages are two-fold: 1)
FedOMG can aggregate the characteristics of distributed models on the
centralized server without incurring any additional communication cost, and 2)
FedOMG is orthogonal to many existing FL/FDG methods, allowing for additional
performance improvements by being seamlessly integrated with them. Extensive
experimental evaluations in various settings demonstrate the robustness of
FedOMG compared to other FL/FDG baselines. Our method outperforms recent SOTA
baselines on four FL benchmark datasets (MNIST, EMNIST, CIFAR-10, and
CIFAR-100), and three FDG benchmark datasets (PACS, VLCS, and OfficeHome).
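A minimal illustration of the gradient inner-product objective: if the server seeks a unit-norm direction d maximizing the summed inner products sum_i <d, g_i> with the client gradients, the maximizer has a closed form, the normalized gradient sum. This is only a toy surrogate for FedOMG's on-server matching, which is more elaborate:

```python
import math

def invariant_direction(client_grads):
    """Toy illustration of gradient inner-product maximization.

    With an l2-ball constraint ||d|| <= 1, the direction maximizing
    sum_i <d, g_i> is the normalized sum of the client gradients.
    The local gradients are the only messages sent to the server,
    matching the communication pattern described in the abstract.
    """
    dim = len(client_grads[0])
    g_sum = [sum(g[j] for g in client_grads) for j in range(dim)]
    norm = math.sqrt(sum(v * v for v in g_sum)) or 1.0  # guard zero sum
    return [v / norm for v in g_sum]
```

For two orthogonal unit gradients the result bisects them, i.e. a direction with equal positive inner product with both clients.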
|
2501.14654
|
MedAgentBench: A Realistic Virtual EHR Environment to Benchmark Medical
LLM Agents
|
cs.LG cs.AI cs.MA
|
Recent large language models (LLMs) have demonstrated significant
advancements, particularly in their ability to serve as agents, thereby
surpassing their traditional role as chatbots. These agents can leverage their
planning and tool utilization capabilities to address tasks specified at a high
level. However, a standardized dataset to benchmark the agent capabilities of
LLMs in medical applications is currently lacking, making the evaluation of
LLMs on complex tasks in interactive healthcare environments challenging. To
address this gap, we introduce MedAgentBench, a broad evaluation suite designed
to assess the agent capabilities of large language models within medical
records contexts. MedAgentBench encompasses 300 patient-specific
clinically-derived tasks from 10 categories written by human physicians,
realistic profiles of 100 patients with over 700,000 data elements, a
FHIR-compliant interactive environment, and an accompanying codebase. The
environment uses the standard APIs and communication infrastructure used in
modern EMR systems, so it can be easily migrated into live EMR systems.
MedAgentBench presents an unsaturated agent-oriented benchmark that current
state-of-the-art LLMs exhibit some ability to succeed at. The best model
(Claude 3.5 Sonnet v2) achieves a success rate of 69.67%. However, there is
still substantial room for improvement, giving the community a clear direction
for further optimization. Furthermore, there is significant variation in
performance across task categories. MedAgentBench is
publicly available at https://github.com/stanfordmlgroup/MedAgentBench ,
offering a valuable framework for model developers to track progress and drive
continuous improvements in the agent capabilities of large language models
within the medical domain.
|
2501.14659
|
Towards Unified Structured Light Optimization
|
cs.CV
|
Structured light (SL) 3D reconstruction captures the precise surface shape of
objects, providing high-accuracy 3D data essential for industrial inspection
and robotic vision systems. However, current research on optimizing projection
patterns in SL 3D reconstruction faces two main limitations: each scene
requires separate training of calibration parameters, and optimization is
restricted to specific types of SL, which limits the application range. To
tackle these limitations, we present a unified framework for SL optimization,
adaptable to diverse lighting conditions, object types, and different types of
SL. Our framework quickly determines the optimal projection pattern using only
a single projected image. Key contributions include a novel global matching
method for projectors, enabling precise projector-camera alignment with just
one projected image, and a new projection compensation model with a photometric
adjustment module to reduce artifacts from out-of-gamut clipping. Experimental
results show our method achieves superior decoding accuracy across various
objects, SL patterns, and lighting conditions, significantly outperforming
previous methods.
|
2501.14660
|
Mean-field limit from general mixtures of experts to quantum neural
networks
|
math-ph cs.LG math.MP math.PR
|
In this work, we study the asymptotic behavior of Mixture of Experts (MoE)
trained via gradient flow on supervised learning problems. Our main result
establishes the propagation of chaos for a MoE as the number of experts
diverges. We demonstrate that the corresponding empirical measure of their
parameters is close to a probability measure that solves a nonlinear continuity
equation, and we provide an explicit convergence rate that depends solely on
the number of experts. We apply our results to a MoE generated by a quantum
neural network.
|
2501.14661
|
Neural-Symbolic Message Passing with Dynamic Pruning
|
cs.LG cs.AI
|
Complex Query Answering (CQA) over incomplete Knowledge Graphs (KGs) is a
challenging task. Recently, a line of message-passing-based research has been
proposed to solve CQA. However, they perform unsatisfactorily on negative
queries and fail to address the noisy messages between variable nodes in the
query graph. Moreover, they offer little interpretability and require complex
query data and resource-intensive training. In this paper, we propose a
Neural-Symbolic Message Passing (NSMP) framework based on pre-trained neural
link predictors. By introducing symbolic reasoning and fuzzy logic, NSMP can
generalize to arbitrary existential first-order logic queries without requiring
training while providing interpretable answers. Furthermore, we introduce a
dynamic pruning strategy to filter out noisy messages between variable nodes.
Experimental results show that NSMP achieves strong performance.
Additionally, through complexity analysis and empirical verification, we
demonstrate the superiority of NSMP in inference time over the current
state-of-the-art neural-symbolic method. Compared to this approach, NSMP
demonstrates faster inference times across all query types on benchmark
datasets, with speedup ranging from 2$\times$ to over 150$\times$.
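The fuzzy-logic side of such neural-symbolic scoring can be illustrated with standard operators on truth values in [0, 1]. The particular t-norm/t-conorm pair below is a common choice and an assumption here, not necessarily the one used by NSMP:

```python
def fuzzy_and(a, b):
    """Product t-norm: conjunction of two fuzzy truth values in [0, 1]."""
    return a * b

def fuzzy_or(a, b):
    """Probabilistic-sum t-conorm: disjunction of two fuzzy truth values."""
    return a + b - a * b

def fuzzy_not(a):
    """Standard fuzzy negation, used to score negative query atoms."""
    return 1.0 - a
```

Scores produced by a pre-trained link predictor can then be combined along the query graph with these operators, which is what makes training-free, interpretable answers possible.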
|
2501.14663
|
End-to-end workflow for machine learning-based qubit readout with QICK
and hls4ml
|
quant-ph cs.LG
|
We present an end-to-end workflow for superconducting qubit readout that
embeds co-designed Neural Networks (NNs) into the Quantum Instrumentation
Control Kit (QICK). Capitalizing on the custom firmware and software of the
QICK platform, which is built on Xilinx RFSoC FPGAs, we aim to leverage machine
learning (ML) to address critical challenges in qubit readout accuracy and
scalability. The workflow utilizes the hls4ml package and employs
quantization-aware training to translate ML models into hardware-efficient FPGA
implementations via user-friendly Python APIs. We experimentally demonstrate
the design, optimization, and integration of an ML algorithm for single
transmon qubit readout, achieving 96% single-shot fidelity with a latency of
32ns and less than 16% FPGA look-up table resource utilization. Our results
offer the community an accessible workflow to advance ML-driven readout and
adaptive control in quantum information processing applications.
|
2501.14664
|
Predictive Position Estimation for Remote Surgery under Packet Loss
Using the Informer Framework
|
eess.SY cs.SY
|
Accurate and real-time position estimation of the robotic arm on the
patient's side is crucial for the success of remote robotic surgery in Tactile
Internet environments. This paper proposes a predictive approach using the
computationally efficient Transformer-based Informer model for position
estimation, combined with a Four-State Hidden Markov Model (4-State HMM) to
simulate realistic packet loss scenarios. The method effectively addresses
network-induced delays, jitter, and packet loss, ensuring reliable performance
in remote robotic surgery. The study evaluates the Informer model on the
JIGSAWS dataset, demonstrating its capability to handle sequential data
challenges caused by network uncertainties. Key features, including ProbSparse
attention and a generative-style decoder, enhance prediction accuracy,
computational speed, and memory efficiency. Results indicate that the proposed
method achieves over 90 percent accuracy across varying network conditions.
Furthermore, the Informer framework outperforms traditional models such as TCN,
RNN, and LSTM, highlighting its suitability for real-time remote surgery
applications.
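A 4-state Markov packet-loss channel of the kind used to stress-test the predictor can be simulated directly. The per-state loss probabilities and transition matrix below are hypothetical placeholders, not the paper's fitted HMM parameters:

```python
import random

def simulate_packet_loss(n, seed=0):
    """Toy 4-state Markov packet-loss simulator (parameters hypothetical).

    Each state has its own loss probability and a row of transition
    probabilities, loosely in the style of extended-Gilbert channel
    models; the paper's actual 4-state HMM is not reproduced here.
    """
    rng = random.Random(seed)
    # Loss probability per state: 0 = good channel ... 3 = deep outage.
    loss_prob = [0.01, 0.10, 0.50, 0.95]
    transition = [
        [0.90, 0.07, 0.02, 0.01],
        [0.30, 0.50, 0.15, 0.05],
        [0.10, 0.20, 0.50, 0.20],
        [0.05, 0.10, 0.25, 0.60],
    ]
    state, losses = 0, []
    for _ in range(n):
        losses.append(rng.random() < loss_prob[state])
        r, cum = rng.random(), 0.0
        for nxt, p in enumerate(transition[state]):
            cum += p
            if r < cum:
                state = nxt
                break
        else:  # guard against floating-point rounding in the row sum
            state = len(transition) - 1
    return losses
```

Feeding such a loss mask over the measurement stream is one simple way to evaluate how a position predictor copes with bursty packet loss.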
|
2501.14672
|
Gaussian-Process-based Adaptive Tracking Control with Dynamic Active
Learning for Autonomous Ground Vehicles
|
eess.SY cs.RO cs.SY
|
This article proposes an active-learning-based adaptive trajectory tracking
control method for autonomous ground vehicles to compensate for modeling errors
and unmodeled dynamics. The nominal vehicle model is decoupled into lateral and
longitudinal subsystems, which are augmented with online Gaussian Processes
(GPs), using measurement data. The estimated mean functions of the GPs are used
to construct a feedback compensator, which, together with an LPV state feedback
controller designed for the nominal system, gives the adaptive control
structure. To assist exploration of the dynamics, the paper proposes a new,
dynamic active learning method to collect the most informative samples to
accelerate the training process. To analyze the performance of the controller
provided by the overall learning toolchain, a novel iterative,
counterexample-based algorithm is proposed for calculating the induced L2 gain
between the reference trajectory and the tracking error. The analysis can be
executed for a set of possible realizations of the to-be-controlled system,
giving a robust performance certificate of the learning method under variation of
the vehicle dynamics. The efficiency of the proposed control approach is shown
on a high-fidelity physics simulator and in real experiments using a 1/10 scale
F1TENTH electric car.
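The core of such a GP compensator is the posterior mean m(x) = k(x, X) K^{-1} y used as a feedback term. A minimal 1-D sketch with an RBF kernel follows; the kernel, length scale, and jitter are illustrative choices, not the paper's identified vehicle models:

```python
import math

def gp_mean(x_train, y_train, x_query, ell=1.0, jitter=1e-9):
    """Posterior mean of a 1-D GP with an RBF kernel (pure-Python sketch).

    m(x) = k(x, X) @ K^{-1} @ y, computed here via Gaussian elimination.
    A toy stand-in for the online GP compensators in the paper.
    """
    k = lambda a, b: math.exp(-0.5 * ((a - b) / ell) ** 2)
    n = len(x_train)
    # Augmented system [K + jitter*I | y]; solve K @ alpha = y.
    aug = [[k(x_train[i], x_train[j]) + (jitter if i == j else 0.0)
            for j in range(n)] + [y_train[i]] for i in range(n)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            f = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= f * aug[col][c]
    alpha = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        s = aug[r][n] - sum(aug[r][c] * alpha[c] for c in range(r + 1, n))
        alpha[r] = s / aug[r][r]
    return sum(alpha[i] * k(x_query, x_train[i]) for i in range(n))
```

The mean interpolates the training data (up to the jitter) and decays back toward the prior far from it, which is exactly the behavior exploited when using it as a model-error compensator.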
|
2501.14673
|
State Space Models for Extractive Summarization in Low Resource
Scenarios
|
cs.CL cs.AI
|
Extractive summarization involves selecting the most relevant sentences from
a text. Recently, researchers have focused on advancing methods to improve
state-of-the-art results in low-resource settings. Motivated by these
advancements, we propose the MPoincareSum method. This method applies the Mamba
state space model to generate the semantics of reviews and sentences, which are
then concatenated. A Poincare compression is used to select the most meaningful
features, followed by the application of a linear layer to predict sentence
relevance based on the corresponding review. Finally, we paraphrase the
relevant sentences to create the final summary. To evaluate the effectiveness
of MPoincareSum, we conducted extensive experiments using the Amazon review
dataset. The performance of the method was assessed using ROUGE scores. The
experimental results demonstrate that MPoincareSum outperforms several existing
approaches in the literature.
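The hyperbolic geometry behind the Poincare compression step can be made concrete via the geodesic distance on the open unit ball. This is a standard formula, not the MPoincareSum selection module itself:

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arcosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    Illustrates the geometry used for hyperbolic feature compression.
    """
    sq = lambda w: sum(x * x for x in w)
    diff = sq([a - b for a, b in zip(u, v)])
    arg = 1.0 + 2.0 * diff / ((1.0 - sq(u)) * (1.0 - sq(v)))
    return math.acosh(arg)
```

Distances blow up near the boundary of the ball, which is what lets hyperbolic embeddings pack hierarchical structure into few dimensions.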
|
2501.14677
|
MatAnyone: Stable Video Matting with Consistent Memory Propagation
|
cs.CV
|
Auxiliary-free human video matting methods, which rely solely on input
frames, often struggle with complex or ambiguous backgrounds. To address this,
we propose MatAnyone, a robust framework tailored for target-assigned video
matting. Specifically, building on a memory-based paradigm, we introduce a
consistent memory propagation module via region-adaptive memory fusion, which
adaptively integrates memory from the previous frame. This ensures semantic
stability in core regions while preserving fine-grained details along object
boundaries. For robust training, we present a larger, high-quality, and diverse
dataset for video matting. Additionally, we incorporate a novel training
strategy that efficiently leverages large-scale segmentation data, boosting
matting stability. With this new network design, dataset, and training
strategy, MatAnyone delivers robust and accurate video matting results in
diverse real-world scenarios, outperforming existing methods.
|
2501.14678
|
A Predictive Approach for Enhancing Accuracy in Remote Robotic Surgery
Using Informer Model
|
cs.RO cs.AI
|
Precise and real-time estimation of the robotic arm's position on the
patient's side is essential for the success of remote robotic surgery in
Tactile Internet (TI) environments. This paper presents a prediction model
based on the Transformer-based Informer framework for accurate and efficient
position estimation. Additionally, it combines a Four-State Hidden Markov Model
(4-State HMM) to simulate realistic packet loss scenarios. The proposed
approach addresses challenges such as network delays, jitter, and packet loss
to ensure reliable and precise operation in remote surgical applications. The
method integrates the optimization problem into the Informer model by embedding
constraints such as energy efficiency, smoothness, and robustness into its
training process using a differentiable optimization layer. The Informer
framework uses features such as ProbSparse attention, attention distilling, and
a generative-style decoder to focus on position-critical features while
maintaining a low computational complexity of O(L log L). The method is
evaluated using the JIGSAWS dataset, achieving a prediction accuracy of over 90
percent under various network scenarios. A comparison with models such as TCN,
RNN, and LSTM demonstrates the Informer framework's superior performance in
handling position prediction and meeting real-time requirements, making it
suitable for Tactile Internet-enabled robotic surgery.
|
2501.14679
|
Surface Vision Mamba: Leveraging Bidirectional State Space Model for
Efficient Spherical Manifold Representation
|
cs.CV cs.AI
|
Attention-based methods have demonstrated exceptional performance in
modelling long-range dependencies on spherical cortical surfaces, surpassing
traditional Geometric Deep Learning (GDL) models. However, their extensive
inference time and high memory demands pose challenges for application to large
datasets with limited computing resources. Inspired by the state space model in
computer vision, we introduce the attention-free Vision Mamba (Vim) to
spherical surfaces, presenting a domain-agnostic architecture for analyzing
data on spherical manifolds. Our method achieves surface patching by
representing spherical data as a sequence of triangular patches derived from a
subdivided icosphere. The proposed Surface Vision Mamba (SiM) is evaluated on
multiple neurodevelopmental phenotype regression tasks using cortical surface
metrics from neonatal brains. Experimental results demonstrate that SiM
outperforms both attention- and GDL-based methods, delivering 4.8 times faster
inference and achieving 91.7% lower memory consumption compared to the Surface
Vision Transformer (SiT) under the Ico-4 grid partitioning. Sensitivity
analysis further underscores the potential of SiM to identify subtle cognitive
developmental patterns. The code is available at
https://github.com/Rongzhao-He/surface-vision-mamba.
|
2501.14685
|
Rethinking Foundation Models for Medical Image Classification through a
Benchmark Study on MedMNIST
|
eess.IV cs.AI cs.CV cs.LG
|
Foundation models are widely employed in medical image analysis, due to their
high adaptability and generalizability for downstream tasks. With the
increasing number of foundation models being released, model selection has
become an important issue. In this work, we study the capabilities of
foundation models in medical image classification tasks by conducting a
benchmark study on the MedMNIST dataset. Specifically, we adopt various
foundation models ranging from convolutional to Transformer-based models and
implement both end-to-end training and linear probing for all classification
tasks. The results demonstrate the significant potential of these pre-trained
models when transferred for medical image classification. We further conduct
experiments with different image sizes and various sizes of training data. By
analyzing all the results, we provide preliminary, yet useful insights and
conclusions on this topic.
|
2501.14687
|
Decoding Generalization from Memorization in Deep Neural Networks
|
cs.LG cs.AI
|
Overparameterized Deep Neural Networks that generalize well have been key to
the dramatic success of Deep Learning in recent years. The reasons for their
remarkable ability to generalize are not well understood yet. It has also been
known that deep networks possess the ability to memorize training data, as
evidenced by perfect or high training accuracies on models trained with
corrupted data that have class labels shuffled to varying degrees.
Concomitantly, such models are known to generalize poorly, i.e. they suffer
from poor test accuracies, due to which it is thought that the act of
memorizing substantially degrades the ability to generalize. It has, however,
been unclear why the poor generalization that accompanies such memorization
comes about. One possibility is that, in the process of training with corrupted
data, the layers of the network irretrievably reorganize their representations
in a manner that makes generalization difficult. The other possibility is that
the network retains significant ability to generalize, but the trained network
somehow chooses to read out in a manner that is detrimental to generalization.
Here, we provide evidence for the latter possibility by demonstrating,
empirically, that such models possess information in their representations for
substantially improved generalization, even in the face of memorization.
Furthermore, such generalization abilities can be easily decoded from the
internals of the trained model, and we build a technique to do so from the
outputs of specific layers of the network. We demonstrate results on multiple
models trained with a number of standard datasets.
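The decoding idea, freeze the trained representation and fit only a linear readout on a chosen layer's outputs, can be sketched with a toy logistic readout. The features and hyperparameters below are synthetic stand-ins for actual layer activations, not the paper's technique in detail:

```python
import math

def train_linear_readout(feats, labels, lr=0.5, epochs=200):
    """Logistic-regression readout trained on frozen features.

    A toy stand-in for decoding generalization from the outputs of a
    specific layer of a trained network: the representation is fixed
    and only a linear classifier on top of it is fit.
    """
    w, b = [0.0] * len(feats[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))    # sigmoid probability
            g = p - y                         # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b
```

If the frozen features are linearly separable by class, the readout recovers the labels even when the network's own output head does not, which is the phenomenon the abstract describes.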
|
2501.14689
|
Approach to Designing CV Systems for Medical Applications: Data,
Architecture and AI
|
cs.CV cs.AI
|
This paper introduces an innovative software system for fundus image analysis
that deliberately diverges from the conventional screening approach, opting not
to predict specific diagnoses. Instead, our methodology mimics the diagnostic
process by thoroughly analyzing both normal and pathological features of fundus
structures, leaving the ultimate decision-making authority in the hands of
healthcare professionals. Our initiative addresses the need for objective
clinical analysis and seeks to automate and enhance the clinical workflow of
fundus image examination. The system, from its overarching architecture to the
modular analysis design powered by artificial intelligence (AI) models, aligns
seamlessly with ophthalmological practices. Our unique approach utilizes a
combination of state-of-the-art deep learning methods and traditional computer
vision algorithms to provide a comprehensive and nuanced analysis of fundus
structures. We present a distinctive methodology for designing medical
applications, using our system as an illustrative example. Comprehensive
verification and validation results demonstrate the efficacy of our approach in
revolutionizing fundus image analysis, with potential applications across
various medical domains.
|
2501.14693
|
Rethinking Table Instruction Tuning
|
cs.CL cs.AI
|
Recent advances in table understanding have focused on instruction-tuning
large language models (LLMs) for table-related tasks. However, existing
research has overlooked the impact of hyperparameter choices and lacks a
comprehensive evaluation of the out-of-domain table understanding ability and
the general capabilities of these table LLMs. In this paper, we evaluate these
abilities in existing table LLMs, and reveal significant declines in both
out-of-domain table understanding and general capabilities compared to their
base models. Through systematic analysis, we show that hyperparameters, such as
learning rate, can significantly influence both table-specific and general
capabilities. Contrary to existing table instruction-tuning work, we
demonstrate that smaller learning rates and fewer training instances can
enhance table understanding while preserving general capabilities. Based on our
findings, we introduce TAMA, a TAble LLM instruction-tuned from LLaMA 3.1 8B
Instruct, which achieves performance on par with, or surpassing GPT-3.5 and
GPT-4 on table tasks, while maintaining strong out-of-domain generalization and
general capabilities. Our findings highlight the potential for reduced data
annotation costs and more efficient model development through careful
hyperparameter selection.
|
2501.14694
|
Towards Automated Self-Supervised Learning for Truly Unsupervised Graph
Anomaly Detection
|
cs.LG cs.AI
|
Self-supervised learning (SSL) is an emerging paradigm that exploits
supervisory signals generated from the data itself, and many recent studies
have leveraged SSL to conduct graph anomaly detection. However, we empirically
found that three important factors can substantially impact detection
performance across datasets: 1) the specific SSL strategy employed; 2) the
tuning of the strategy's hyperparameters; and 3) the allocation of combination
weights when using multiple strategies. Most SSL-based graph anomaly detection
methods circumvent these issues by arbitrarily or selectively (i.e., guided by
label information) choosing SSL strategies, hyperparameter settings, and
combination weights. While an arbitrary choice may lead to subpar performance,
using label information in an unsupervised setting is label information leakage
and leads to severe overestimation of a method's performance. Leakage has been
criticized as "one of the top ten data mining mistakes", yet many recent
studies on SSL-based graph anomaly detection have been using label information
to select hyperparameters. To mitigate this issue, we propose to use an
internal evaluation strategy (with theoretical analysis) to select
hyperparameters in SSL for unsupervised anomaly detection. We perform extensive
experiments using 10 recent SSL-based graph anomaly detection algorithms on
various benchmark datasets, demonstrating both the prior issues with
hyperparameter selection and the effectiveness of our proposed strategy.
|
2501.14696
|
Predictor-Feedback Stabilization of Globally Lipschitz Nonlinear Systems
with State and Input Quantization
|
math.OC cs.SY eess.SY
|
We develop a switched nonlinear predictor-feedback control law to achieve
global asymptotic stabilization for nonlinear systems with arbitrarily long
input delay, under state quantization. The proposed design generalizes the
nonlinear predictor-feedback framework by incorporating quantized measurements
of both the plant and actuator states into the predictor state formulation. Due
to the mismatch between the (inapplicable) exact predictor state and the
predictor state constructed in the presence of state quantization, a global
stabilization result is possible under a global Lipschitzness assumption on the
vector field, as well as under the assumption of existence of a globally
Lipschitz, nominal feedback law that achieves global exponential stability of
the delay and quantization-free system. To address the constraints imposed by
quantization, a dynamic switching strategy is constructed, adjusting the
quantizer's tunable parameter in a piecewise constant manner: initially
increasing the quantization range to capture potentially large system states,
and subsequently refining the precision to reduce quantization error. The
global asymptotic stability of the closed-loop system is established through
solution estimates derived using backstepping transformations, combined with
small-gain and input-to-state stability arguments. We also extend our approach
to the case of input quantization.
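The zoom-out/zoom-in behavior of the switching strategy can be illustrated with a uniform quantizer whose range parameter M is adjusted piecewise constantly. The number of levels and the growth/shrink schedule below are illustrative, not the paper's switching law:

```python
def quantize(x, M, N=64):
    """Uniform quantizer with tunable range M.

    Maps x in [-M, M] onto 2*N + 1 levels, so the in-range error is at
    most M / (2 * N); values outside the range saturate. N is an
    illustrative choice.
    """
    x = max(-M, min(M, x))               # saturate out-of-range states
    step = M / N
    return round(x / step) * step

def switched_range(t, t_zoom=5, grow=2.0, shrink=0.5, M0=1.0):
    """Toy zoom-out/zoom-in schedule for the quantizer range M(t):
    first enlarge M to capture possibly large states, then refine it."""
    if t < t_zoom:
        return M0 * grow ** t                              # zoom out
    return M0 * grow ** t_zoom * shrink ** (t - t_zoom)    # zoom in
```

Growing M avoids saturation while the state may still be large; shrinking it afterwards tightens the error bound M / (2N), mirroring the capture-then-refine logic in the abstract.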
|
2501.14700
|
An Attentive Graph Agent for Topology-Adaptive Cyber Defence
|
cs.LG cs.AI cs.CR cs.NI
|
As cyber threats grow increasingly sophisticated, reinforcement learning (RL)
is emerging as a promising technique to create intelligent and adaptive cyber
defense systems. However, most existing autonomous defensive agents have
overlooked the inherent graph structure of computer networks subject to cyber
attacks, potentially missing critical information and constraining their
adaptability. To overcome these limitations, we developed a custom version of
the Cyber Operations Research Gym (CybORG) environment, encoding network state
as a directed graph with realistic low-level features. We employ a Graph
Attention Network (GAT) architecture to process node, edge, and global
features, and adapt its output to be compatible with policy gradient methods in
RL. Our GAT-based approach offers key advantages over flattened alternatives:
policies that demonstrate resilience to certain types of unexpected dynamic
network topology changes, reasonable generalisation to networks of varying
sizes within the same structural distribution, and interpretable defensive
actions grounded in tangible network properties. We demonstrate that GAT
defensive policies can be trained using our low-level directed graph
observations, even when unexpected connections arise during simulation.
Evaluations across networks of different sizes, but consistent subnetwork
structure, show our policies achieve comparable performance to policies trained
specifically for each network configuration. Our study contributes to the
development of robust cyber defence systems that can better adapt to real-world
network security challenges.
|
2501.14701
|
NLP-based assessment of prescription appropriateness from Italian
referrals
|
cs.CL cs.LG
|
Objective: This study proposes a Natural Language Processing pipeline to
evaluate prescription appropriateness in Italian referrals, where reasons for
prescriptions are recorded only as free text, complicating automated
comparisons with guidelines. The pipeline aims to derive, for the first time, a
comprehensive summary of the reasons behind these referrals and a
quantification of their appropriateness. While demonstrated in a specific case
study, the approach is designed to generalize to other types of examinations.
Methods: Leveraging embeddings from a transformer-based model, the proposed
approach clusters referral texts, maps clusters to labels, and aligns these
labels with existing guidelines. We present a case study on a dataset of
496,971 referrals, consisting of all referrals for venous echocolordopplers of
the lower limbs between 2019 and 2021 in the Lombardy Region. A sample of 1,000
referrals was manually annotated to validate the results.
Results: The pipeline exhibited high performance for referrals' reasons
(Prec=92.43%, Rec=83.28%) and excellent results for referrals' appropriateness
(Prec=93.58%, Rec=91.52%) on the annotated subset. Analysis of the entire
dataset identified clusters matching guideline-defined reasons - both
appropriate and inappropriate - as well as clusters not addressed in the
guidelines. Overall, 34.32% of referrals were marked as appropriate, 34.07%
inappropriate, 14.37% likely inappropriate, and 17.24% could not be mapped to
guidelines.
Conclusions: The proposed pipeline effectively assessed prescription
appropriateness across a large dataset, serving as a valuable tool for health
authorities. Findings have informed the Lombardy Region's efforts to strengthen
recommendations and reduce the burden of inappropriate referrals.
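As a minimal sketch of the appropriateness-quantification step, the snippet below maps cluster labels to guideline classes and computes percentage shares; the label names and classes are invented for illustration, not taken from the Lombardy guidelines.

```python
from collections import Counter

# Hypothetical cluster labels and their guideline-based appropriateness
# classes; invented for illustration, not from the actual guidelines.
GUIDELINES = {
    "suspected deep vein thrombosis": "appropriate",
    "varicose veins follow-up": "inappropriate",
    "leg pain, unspecified": "likely inappropriate",
}

def assess(referral_labels):
    """Map each referral's cluster label to an appropriateness class and
    return the percentage share of each class."""
    counts = Counter(GUIDELINES.get(lab, "not mapped") for lab in referral_labels)
    total = len(referral_labels)
    return {cls: round(100 * n / total, 2) for cls, n in counts.items()}

shares = assess(["suspected deep vein thrombosis",
                 "suspected deep vein thrombosis",
                 "varicose veins follow-up",
                 "reason not in guidelines"])
# shares["appropriate"] == 50.0
```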
|
2501.14704
|
Stroke classification using Virtual Hybrid Edge Detection from in silico
electrical impedance tomography data
|
math.AP cs.CV cs.NA math.NA
|
Electrical impedance tomography (EIT) is a non-invasive imaging method for
recovering the internal conductivity of a physical body from electric boundary
measurements. EIT combined with machine learning has shown promise for the
classification of strokes. However, most previous works have used raw EIT
voltage data as network inputs. We build upon a recent development which
suggested the use of special noise-robust Virtual Hybrid Edge Detection (VHED)
functions as network inputs, although that work used only highly simplified and
mathematically ideal models. In this work we strengthen the case for the use of
EIT, and VHED functions especially, for stroke classification. We design models
with high detail and mathematical realism to test the use of VHED functions as
inputs. Virtual patients are created using a physically detailed 2D head model
which includes features known to create challenges in real-world imaging
scenarios. Conductivity values are drawn from statistically realistic
distributions, and phantoms are afflicted with either hemorrhagic or ischemic
strokes of various shapes and sizes. Simulated noisy EIT electrode data,
generated using the realistic Complete Electrode Model (CEM) as opposed to the
mathematically ideal continuum model, is processed to obtain VHED functions. We
compare the use of VHED functions as inputs against the alternative paradigm of
using raw EIT voltages. Our results show that (i) stroke classification can be
performed with high accuracy using 2D EIT data from physically detailed and
mathematically realistic models, and (ii) in the presence of noise, VHED
functions outperform raw data as network inputs.
|
2501.14705
|
The Karp Dataset
|
cs.LG cs.CL
|
Understanding the mathematical reasoning capabilities of Large Language
Models (LLMs) is a central topic in the study of artificial intelligence. This
new domain necessitates the creation of datasets of reasoning tasks for both
training and benchmarking the performance of LLMs. To this end, we introduce
the Karp dataset: The first dataset composed of detailed proofs of
NP-completeness reductions. The reductions vary in difficulty, ranging from
simple exercises from undergraduate courses to more challenging reductions from
academic papers. We compare the performance of state-of-the-art models on this
task and demonstrate the effect of fine-tuning with the Karp dataset on
reasoning capacity.
|
2501.14708
|
Decision-Focused Learning for Complex System Identification: HVAC
Management System Application
|
eess.SY cs.LG cs.SY
|
As opposed to conventional training methods tailored to minimize a given
statistical metric or task-agnostic loss (e.g., mean squared error),
Decision-Focused Learning (DFL) trains machine learning models for optimal
performance in downstream decision-making tools. We argue that DFL can be
leveraged to learn the parameters of system dynamics, expressed as constraints
of the convex optimization control policy, while the system control signal is
being optimized, thus creating an end-to-end learning framework. This is
particularly relevant for systems in which behavior changes once the control
policy is applied, hence rendering historical data less applicable. The
proposed approach can perform system identification - i.e., determine
appropriate parameters for the system analytical model - and control
simultaneously to ensure that the model's accuracy is focused on areas most
relevant to control. Furthermore, because black-box systems are
non-differentiable, we design a loss function that requires only measurements of
the system response. We propose pre-training on historical data and constraint
relaxation to stabilize the DFL and deal with potential infeasibilities in
learning. We demonstrate the usefulness of the method on a building Heating,
Ventilation, and Air Conditioning day-ahead management system for a realistic
15-zone building located in Denver, US. The results show that the conventional
RC building model, with the parameters obtained from historical data using
supervised learning, underestimates HVAC electrical power consumption. For our
case study, the ex-post cost is on average six times higher than the expected
one. Meanwhile, the same RC model with parameters obtained via DFL
underestimates the ex-post cost only by 3%.
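The loss described above requires only measurements of the (non-differentiable) system response. A toy sketch of that idea, using a finite-difference gradient on an invented one-parameter cost with its optimum at 2.0, not the paper's HVAC model:

```python
def expost_cost(theta):
    """Black-box system response: measurable only, not differentiable.
    An invented one-parameter stand-in for the HVAC plant (optimum at 2.0)."""
    return (theta - 2.0) ** 2 + 1.0

def dfl_step(theta, lr=0.1, eps=1e-3):
    """One decision-focused update; the loss needs only measured responses,
    so the gradient is estimated by finite differences."""
    grad = (expost_cost(theta + eps) - expost_cost(theta - eps)) / (2 * eps)
    return theta - lr * grad

theta = 0.0  # e.g. obtained by pre-training on historical data
for _ in range(200):
    theta = dfl_step(theta)
# theta has moved close to the cost-optimal value 2.0
```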
|
2501.14709
|
Enhanced Confocal Laser Scanning Microscopy with Adaptive Physics
Informed Deep Autoencoders
|
cond-mat.mtrl-sci cs.CV eess.IV
|
We present a physics-informed deep learning framework to address common
limitations in Confocal Laser Scanning Microscopy (CLSM), such as diffraction
limited resolution, noise, and undersampling due to low laser power conditions.
The optical system's point spread function (PSF) and common CLSM image
degradation mechanisms, namely photon shot noise, dark current noise, motion
blur, speckle noise, and undersampling, were modeled and directly incorporated
into the model architecture. The model reconstructs high-fidelity images from
heavily noisy inputs by using convolutional and transposed convolutional
layers. Following the advances in compressed sensing, our approach
significantly reduces data acquisition requirements without compromising image
resolution. The proposed method was extensively evaluated on simulated CLSM
images of diverse structures, including lipid droplets, neuronal networks, and
fibrillar systems. Comparisons with traditional deconvolution algorithms such
as Richardson-Lucy (RL), non-negative least squares (NNLS), and other methods
like Total Variation (TV) regularization, Wiener filtering, and Wavelet
denoising demonstrate the superiority of the network in restoring fine
structural details with high fidelity. Assessment metrics such as the Structural
Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) underline that
the AdaptivePhysicsAutoencoder achieved robust image enhancement across diverse
CLSM conditions, enabling faster acquisition, reduced photodamage, and reliable
performance in low-light and sparse-sampling scenarios, holding promise for
applications in live cell imaging, dynamic biological studies, and high
throughput material characterization.
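A toy forward model in the spirit of the degradation mechanisms described above, reduced to 1D and covering only PSF blur plus signal-dependent (shot-like) noise; the PSF and noise scale are invented:

```python
import random

def degrade(signal, psf, shot_scale=0.05, seed=0):
    """Forward model of CLSM degradation, reduced to 1D: convolve the signal
    with the PSF, then add signal-dependent (shot-like) Gaussian noise.
    PSF and noise scale are invented; the paper additionally models dark
    current, motion blur, speckle, and undersampling, in 2D."""
    rng = random.Random(seed)
    half = len(psf) // 2
    blurred = []
    for i in range(len(signal)):
        acc = sum(psf[k] * signal[i + k - half]
                  for k in range(len(psf))
                  if 0 <= i + k - half < len(signal))
        blurred.append(acc)
    return [v + rng.gauss(0, shot_scale * abs(v) ** 0.5) for v in blurred]

noisy = degrade([0, 0, 1, 0, 0], psf=[0.25, 0.5, 0.25])
```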
|
2501.14710
|
Overcoming Fairness Trade-offs via Pre-processing: A Causal Perspective
|
stat.ML cs.LG
|
Training machine learning models for fair decisions faces two key challenges:
the \emph{fairness-accuracy trade-off} arises because enforcing fairness weakens
a model's predictive performance relative to an unconstrained model. The
incompatibility of different fairness metrics poses another trade-off -- also
known as the \emph{impossibility theorem}. Recent work identifies the bias
within the observed data as a possible root cause and shows that fairness and
predictive performance are in fact in accord when predictive performance is
measured on unbiased data. We offer a causal explanation for these findings
using the framework of the FiND (fictitious and normatively desired) world, a
"fair" world, where protected attributes have no causal effects on the target
variable. We show theoretically that (i) classical fairness metrics deemed to
be incompatible are naturally satisfied in the FiND world, while (ii) fairness
aligns with high predictive performance. We extend our analysis by suggesting
how one can benefit from these theoretical insights in practice, using causal
pre-processing methods that approximate the FiND world. Additionally, we
propose a method for evaluating the approximation of the FiND world via
pre-processing in practical use cases where we do not have access to the FiND
world. In simulations and empirical studies, we demonstrate that these
pre-processing methods are successful in approximating the FiND world and
resolve both trade-offs. Our results provide actionable solutions for
practitioners to achieve fairness and high predictive performance
simultaneously.
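As an illustrative stand-in for such causal pre-processing (not the paper's method), one can equalize a feature's mean across protected groups, crudely approximating a world where the protected attribute has no effect on it:

```python
from statistics import mean

def find_preprocess(records, protected, feature):
    """Shift `feature` so its mean is identical across protected groups,
    a crude proxy for removing the protected attribute's causal effect;
    an illustrative stand-in, not the paper's pre-processing method."""
    overall = mean(r[feature] for r in records)
    groups = {r[protected] for r in records}
    offsets = {g: overall - mean(r[feature] for r in records if r[protected] == g)
               for g in groups}
    return [{**r, feature: r[feature] + offsets[r[protected]]} for r in records]

data = [{"group": "a", "score": 10}, {"group": "a", "score": 12},
        {"group": "b", "score": 4}, {"group": "b", "score": 6}]
fair = find_preprocess(data, "group", "score")
# both groups now have mean score 8.0
```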
|
2501.14713
|
FlexiGPT: Pruning and Extending Large Language Models with Low-Rank
Weight Sharing
|
cs.CL cs.LG
|
The rapid proliferation of large language models (LLMs) in natural language
processing (NLP) has created a critical need for techniques that enable
efficient deployment on memory-constrained devices without compromising
performance. We present a method to prune LLMs that selectively prunes model
blocks based on an importance score and replaces them with a low-parameter
replacement strategy. Specifically, we propose a principled metric to replace
each pruned block using a weight-sharing mechanism that leverages unpruned
counterparts from the model and block-specific low-rank adapters. Furthermore,
we facilitate the learning of these replacement blocks with output feature
normalization and an adapter initialization scheme built on low-rank SVD
reconstructions. Empirical evaluations demonstrate substantial performance
gains over existing methods, achieving state-of-the-art performance on 5/6
benchmarks for a compression rate of 30% and 6/6 benchmarks for a compression
rate of 40%. We also demonstrate that our approach can extend smaller models,
boosting performance on 6/6 benchmarks using only ~0.3% tokens of extended
training with minimal additional parameter costs.
|
2501.14717
|
Towards Better Understanding Table Instruction Tuning: Decoupling the
Effects from Data versus Models
|
cs.CL
|
Recent advances in natural language processing have leveraged instruction
tuning to enhance Large Language Models (LLMs) for table-related tasks.
However, previous works train different base models on different training
data, precluding an apples-to-apples comparison across the resulting table LLMs. To
address this, we fine-tune base models from the Mistral, OLMo, and Phi families
on existing public training datasets. Our replication achieves performance on
par with or surpassing existing table LLMs, establishing new state-of-the-art
performance on Hitab, a table question-answering dataset. More importantly,
through systematic out-of-domain evaluation, we decouple the contributions of
training data and the base model, providing insight into their individual
impacts. In addition, we assess the effects of table-specific instruction
tuning on general-purpose benchmarks, revealing trade-offs between
specialization and generalization.
|
2501.14719
|
Do LLMs Provide Consistent Answers to Health-Related Questions across
Languages?
|
cs.CL cs.AI cs.HC cs.IR
|
Equitable access to reliable health information is vital for public health,
but the quality of online health resources varies by language, raising concerns
about inconsistencies in Large Language Models (LLMs) for healthcare. In this
study, we examine the consistency of responses provided by LLMs to
health-related questions across English, German, Turkish, and Chinese. We
largely expand the HealthFC dataset by categorizing health-related questions by
disease type and broadening its multilingual scope with Turkish and Chinese
translations. We reveal significant inconsistencies in responses that could
spread healthcare misinformation. Our main contributions are 1) a multilingual
health-related inquiry dataset with meta-information on disease categories, and
2) a novel prompt-based evaluation workflow that enables sub-dimensional
comparisons between two languages through parsing. Our findings highlight key
challenges in deploying LLM-based tools in multilingual contexts and emphasize
the need for improved cross-lingual alignment to ensure accurate and equitable
healthcare information.
|
2501.14720
|
Communication-Based Distributed Control of Large-Scale District Heating
Networks
|
eess.SY cs.SY
|
This paper presents a non-cooperative distributed model predictive controller
for the control of large-scale District Heating Networks. To enable the design
of this controller a novel information passing scheme and feasibility
restoration method are created, allowing the local controllers to achieve a
global consensus while minimizing a local cost function. The effectiveness of
this controller is demonstrated on an 18-user District Heating Network
decomposed into six subsystems. The results show that the developed control
scheme effectively uses flexibility to manage the buildings' heat demands,
reducing the total losses by 14% and the return temperature by 37%.
|
2501.14721
|
Comparable Corpora: Opportunities for New Research Directions
|
cs.CL
|
Most conference papers present new results, but this paper will focus more on
opportunities for the audience to make their own contributions. This paper is
intended to challenge the community to think more broadly about what we can do
with comparable corpora. We will start with a review of the history, and then
suggest new directions for future research. This was a keynote at BUCC-2025, a
workshop associated with Coling-2025.
|
2501.14723
|
CodeMonkeys: Scaling Test-Time Compute for Software Engineering
|
cs.LG
|
Scaling test-time compute is a promising axis for improving LLM capabilities.
However, test-time compute can be scaled in a variety of ways, and effectively
combining different approaches remains an active area of research. Here, we
explore this problem in the context of solving real-world GitHub issues from
the SWE-bench dataset. Our system, named CodeMonkeys, allows models to
iteratively edit a codebase by jointly generating and running a testing script
alongside their draft edit. We sample many of these multi-turn trajectories for
every issue to generate a collection of candidate edits. This approach lets us
scale "serial" test-time compute by increasing the number of iterations per
trajectory and "parallel" test-time compute by increasing the number of
trajectories per problem. With parallel scaling, we can amortize up-front costs
across multiple downstream samples, allowing us to identify relevant codebase
context using the simple method of letting an LLM read every file. In order to
select between candidate edits, we combine voting using model-generated tests
with a final multi-turn trajectory dedicated to selection. Overall, CodeMonkeys
resolves 57.4% of issues from SWE-bench Verified using a budget of
approximately 2300 USD. Our selection method can also be used to combine
candidates from different sources. Selecting over an ensemble of edits from
existing top SWE-bench Verified submissions obtains a score of 66.2% and
outperforms the best member of the ensemble on its own. We fully release our
code and data at https://scalingintelligence.stanford.edu/pubs/codemonkeys.
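A minimal sketch of selecting between candidate edits by voting with model-generated tests; the final selection trajectory used for ties is elided, and all names are invented:

```python
def select_edit(test_results):
    """Score each candidate edit by the number of model-generated tests it
    passes and return the top scorer; ties would go to a dedicated selection
    trajectory, which is elided here."""
    scores = {cand: sum(passed.values()) for cand, passed in test_results.items()}
    return max(scores, key=scores.get)

# candidate edit name -> {generated test name: did the edit pass it?}
results = {
    "edit_a": {"t1": True, "t2": True, "t3": False},
    "edit_b": {"t1": True, "t2": False, "t3": False},
}
best = select_edit(results)  # "edit_a"
```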
|
2501.14724
|
MLPs at the EOC: Concentration of the NTK
|
cs.LG stat.ML
|
We study the concentration of the Neural Tangent Kernel (NTK) $K_\theta :
\mathbb{R}^{m_0} \times \mathbb{R}^{m_0} \to \mathbb{R}^{m_l \times m_l}$ of
$l$-layer Multilayer Perceptrons (MLPs) $N : \mathbb{R}^{m_0} \times \Theta \to
\mathbb{R}^{m_l}$ equipped with activation functions $\phi(s) = a s + b \vert s
\vert$ for some $a,b \in \mathbb{R}$ with the parameter $\theta \in \Theta$
being initialized at the Edge Of Chaos (EOC). Without relying on the gradient
independence assumption that has only been shown to hold asymptotically in the
infinitely wide limit, we prove that an approximate version of gradient
independence holds at finite width. Showing that the NTK entries
$K_\theta(x_{i_1},x_{i_2})$ for $i_1,i_2 \in [1:n]$ over a dataset
$\{x_1,\cdots,x_n\} \subset \mathbb{R}^{m_0}$ concentrate simultaneously via
maximal inequalities, we prove that the NTK matrix $K(\theta) = [\frac{1}{n}
K_\theta(x_{i_1},x_{i_2}) : i_1,i_2 \in [1:n]] \in \mathbb{R}^{nm_l \times
nm_l}$ concentrates around its infinitely wide limit
$\overset{\scriptscriptstyle\infty}{K} \in \mathbb{R}^{nm_l \times nm_l}$
without the need for linear overparameterization. Our results imply that in
order to accurately approximate the limit, hidden layer widths have to grow
quadratically as $m_k = k^2 m$ for some $m \in \mathbb{N}+1$ for sufficient
concentration. For such MLPs, we obtain the concentration bound $\mathbb{P}(
\Vert K(\theta) - \overset{\scriptscriptstyle\infty}{K} \Vert \leq
O((\Delta_\phi^{-2} + m_l^{\frac{1}{2}} l) \kappa_\phi^2 m^{-\frac{1}{2}}))
\geq 1-O(m^{-1})$ modulo logarithmic terms, where we denoted $\Delta_\phi =
\frac{b^2}{a^2+b^2}$ and $\kappa_\phi = \frac{\vert a \vert + \vert b
\vert}{\sqrt{a^2 + b^2}}$. This reveals in particular that the absolute value
($\Delta_\phi=1$, $\kappa_\phi=1$) beats the ReLU ($\Delta_\phi=\frac{1}{2}$,
$\kappa_\phi=\sqrt{2}$) in terms of the concentration of the NTK.
|
2501.14726
|
Relightable Full-Body Gaussian Codec Avatars
|
cs.CV cs.GR
|
We propose Relightable Full-Body Gaussian Codec Avatars, a new approach for
modeling relightable full-body avatars with fine-grained details including face
and hands. The unique challenge for relighting full-body avatars lies in the
large deformations caused by body articulation and the resulting impact on
appearance caused by light transport. Changes in body pose can dramatically
change the orientation of body surfaces with respect to lights, resulting in
both local appearance changes due to changes in local light transport
functions, as well as non-local changes due to occlusion between body parts. To
address this, we decompose the light transport into local and non-local
effects. Local appearance changes are modeled using learnable zonal harmonics
for diffuse radiance transfer. Unlike spherical harmonics, zonal harmonics are
highly efficient to rotate under articulation. This allows us to learn diffuse
radiance transfer in a local coordinate frame, which disentangles the local
radiance transfer from the articulation of the body. To account for non-local
appearance changes, we introduce a shadow network that predicts shadows given
precomputed incoming irradiance on a base mesh. This facilitates the learning
of non-local shadowing between the body parts. Finally, we use a deferred
shading approach to model specular radiance transfer and better capture
reflections and highlights such as eye glints. We demonstrate that our approach
successfully models both the local and non-local light transport required for
relightable full-body avatars, with a superior generalization ability under
novel illumination conditions and unseen poses.
|
2501.14728
|
Mitigating GenAI-powered Evidence Pollution for Out-of-Context
Multimodal Misinformation Detection
|
cs.MM cs.CL cs.CV cs.CY
|
While large generative artificial intelligence (GenAI) models have achieved
significant success, they also raise growing concerns about online information
security due to their potential misuse for generating deceptive content.
Out-of-context (OOC) multimodal misinformation detection, which often retrieves
Web evidence to identify the repurposing of images in false contexts, faces the
issue of reasoning over GenAI-polluted evidence to derive accurate predictions.
Existing works simulate GenAI-powered pollution at the claim level with
stylistic rewriting to conceal linguistic cues, and ignore evidence-level
pollution for such information-seeking applications. In this work, we
investigate how polluted evidence affects the performance of existing OOC
detectors, revealing a performance degradation of more than 9 percentage
points. We propose two strategies, cross-modal evidence reranking and
cross-modal claim-evidence reasoning, to address the challenges posed by
polluted evidence. Extensive experiments on two benchmark datasets show that
these strategies can effectively enhance the robustness of existing
out-of-context detectors amidst polluted evidence.
|
2501.14729
|
HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene
Understanding and Generation
|
cs.CV
|
Driving World Models (DWMs) have become essential for autonomous driving by
enabling future scene prediction. However, existing DWMs are limited to scene
generation and fail to incorporate scene understanding, which involves
interpreting and reasoning about the driving environment. In this paper, we
present a unified Driving World Model named HERMES. We seamlessly integrate 3D
scene understanding and future scene evolution (generation) through a unified
framework in driving scenarios. Specifically, HERMES leverages a Bird's-Eye
View (BEV) representation to consolidate multi-view spatial information while
preserving geometric relationships and interactions. We also introduce world
queries, which incorporate world knowledge into BEV features via causal
attention in the Large Language Model (LLM), enabling contextual enrichment for
understanding and generation tasks. We conduct comprehensive studies on
nuScenes and OmniDrive-nuScenes datasets to validate the effectiveness of our
method. HERMES achieves state-of-the-art performance, reducing generation error
by 32.4% and improving understanding metrics such as CIDEr by 8.0%. The model
and code will be publicly released at https://github.com/LMD0311/HERMES.
|
2501.14731
|
From Critique to Clarity: A Pathway to Faithful and Personalized Code
Explanations with Large Language Models
|
cs.SE cs.AI cs.CL
|
In the realm of software development, providing accurate and personalized
code explanations is crucial for both technical professionals and business
stakeholders. Technical professionals benefit from enhanced understanding and
improved problem-solving skills, while business stakeholders gain insights into
project alignments and transparency. Despite the potential, generating such
explanations is often time-consuming and challenging. This paper presents an
innovative approach that leverages the advanced capabilities of large language
models (LLMs) to generate faithful and personalized code explanations. Our
methodology integrates prompt enhancement, self-correction mechanisms,
personalized content customization, and interaction with external tools,
facilitated by collaboration among multiple LLM agents. We evaluate our
approach using both automatic and human assessments, demonstrating that our
method not only produces accurate explanations but also tailors them to
individual user preferences. Our findings suggest that this approach
significantly improves the quality and relevance of code explanations, offering
a valuable tool for developers and stakeholders alike.
|
2501.14733
|
LLM as HPC Expert: Extending RAG Architecture for HPC Data
|
cs.DC cs.AI
|
High-Performance Computing (HPC) is crucial for performing advanced
computational tasks, yet its complexity often challenges users, particularly
those unfamiliar with HPC-specific commands and workflows. This paper
introduces Hypothetical Command Embeddings (HyCE), a novel method that extends
Retrieval-Augmented Generation (RAG) by integrating real-time, user-specific
HPC data, enhancing accessibility to these systems. HyCE enriches large
language models (LLM) with real-time, user-specific HPC information, addressing
the limitations of fine-tuned models on such data. We evaluate HyCE using an
automated RAG evaluation framework, where the LLM itself creates synthetic
questions from the HPC data and serves as a judge, assessing the efficacy of
the extended RAG with the evaluation metrics relevant for HPC tasks.
Additionally, we tackle essential security concerns, including data privacy and
command execution risks, associated with deploying LLMs in HPC environments.
This solution provides a scalable and adaptable approach for HPC clusters to
leverage LLMs as an HPC expert, bridging the gap between users and the complex
systems of HPC.
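A rough sketch of the HyCE idea with token overlap standing in for learned embeddings: each HPC command is indexed by hypothetical user questions, and retrieval matches the query against those questions. The commands and questions are invented examples:

```python
def tokens(s):
    return set(s.lower().split())

# Each HPC command is indexed by hypothetical user questions it answers;
# commands and questions are invented examples, not from the paper.
index = {
    "squeue -u $USER": ["how do I see my running jobs",
                        "list my jobs in the queue"],
    "sbatch job.sh": ["how do I submit a batch job",
                      "run my script on the cluster"],
}

def retrieve(query):
    """Return the command whose hypothetical questions best match the query,
    with token overlap standing in for embedding similarity."""
    def score(cmd):
        return max(len(tokens(query) & tokens(q)) for q in index[cmd])
    return max(index, key=score)

cmd = retrieve("how can I list my queued jobs")  # "squeue -u $USER"
```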
|
2501.14734
|
Research on the Application of Spark Streaming Real-Time Data Analysis
System and large language model Intelligent Agents
|
cs.DC cs.AI
|
This study explores the integration of Agent AI with LangGraph to enhance
real-time data analysis systems in big data environments. The proposed
framework overcomes limitations of static workflows, inefficient stateful
computations, and lack of human intervention by leveraging LangGraph's
graph-based workflow construction and dynamic decision-making capabilities.
LangGraph allows large language models (LLMs) to dynamically determine control
flows, invoke tools, and assess the necessity of further actions, improving
flexibility and efficiency.
The system architecture incorporates Apache Spark Streaming, Kafka, and
LangGraph to create a high-performance sentiment analysis system. LangGraph's
capabilities include precise state management, dynamic workflow construction,
and robust memory checkpointing, enabling seamless multi-turn interactions and
context retention. Human-in-the-loop mechanisms are integrated to refine
sentiment analysis, particularly in ambiguous or high-stakes scenarios,
ensuring greater reliability and contextual relevance.
Key features such as real-time state streaming, debugging via LangGraph
Studio, and efficient handling of large-scale data streams make this framework
ideal for adaptive decision-making. Experimental results confirm the system's
ability to classify inquiries, detect sentiment trends, and escalate complex
issues for manual review, demonstrating a synergistic blend of LLM capabilities
and human oversight.
This work presents a scalable, adaptable, and reliable solution for real-time
sentiment analysis and decision-making, advancing the use of Agent AI and
LangGraph in big data applications.
|
2501.14735
|
ARCEAK: An Automated Rule Checking Framework Enhanced with Architectural
Knowledge
|
cs.SE cs.AI
|
Automated Rule Checking (ARC) plays a crucial role in advancing the
construction industry by addressing the laborious, inconsistent, and
error-prone nature of traditional model review conducted by industry
professionals. Manual assessment against intricate sets of rules often leads to
significant project delays and expenses. In response to these challenges, ARC
offers a promising solution to improve efficiency and compliance in design
within the construction sector. However, the main challenge of ARC lies in
translating regulatory text into a format suitable for computer processing.
Current methods for rule interpretation require extensive manual labor, thereby
limiting their practicality. To address this issue, our study introduces a
novel approach that decomposes ARC into two distinct tasks: rule information
extraction and verification code generation. Leveraging generative pre-trained
transformers, our method aims to streamline the interpretation of regulatory
texts and simplify the process of generating model compliance checking code.
Through empirical evaluation and case studies, we showcase the effectiveness
and potential of our approach in automating code compliance checking, enhancing
the efficiency and reliability of construction projects.
|
2501.14736
|
NEAT Algorithm-based Stock Trading Strategy with Multiple Technical
Indicators Resonance
|
cs.NE cs.LG q-fin.PM
|
In this study, we applied the NEAT (NeuroEvolution of Augmenting Topologies)
algorithm to stock trading using multiple technical indicators. Our approach
focused on maximizing earnings, avoiding risk, and outperforming the Buy & Hold
strategy. We used progressive training data and a multi-objective fitness
function to guide the evolution of the population towards these objectives. The
results of our study showed that the NEAT model achieved similar returns to the
Buy & Hold strategy, but with lower risk exposure and greater stability. We
also identified some challenges in the training process, including the presence
of a large number of unused nodes and connections in the model architecture. In
future work, it may be worthwhile to explore ways to improve the NEAT algorithm
and apply it to shorter interval data in order to assess the potential impact
on performance.
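A sketch of a multi-objective fitness in the spirit described above, combining return, maximum drawdown as the risk term, and outperformance of Buy & Hold; the weights are illustrative, not the paper's:

```python
def fitness(equity_curve, buy_hold_return):
    """Multi-objective fitness combining earnings, risk (maximum drawdown),
    and outperformance of Buy & Hold; weights are illustrative."""
    ret = equity_curve[-1] / equity_curve[0] - 1.0
    peak, max_dd = equity_curve[0], 0.0
    for v in equity_curve:
        peak = max(peak, v)
        max_dd = max(max_dd, (peak - v) / peak)
    return ret - 0.5 * max_dd + 0.5 * (ret - buy_hold_return)

f = fitness([100, 110, 105, 120], buy_hold_return=0.10)
```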
|
2501.14737
|
EvalSVA: Multi-Agent Evaluators for Next-Gen Software Vulnerability
Assessment
|
cs.SE cs.AI
|
Software Vulnerability (SV) assessment is a crucial process of determining
different aspects of SVs (e.g., attack vectors and scope) for developers to
effectively prioritize efforts in vulnerability mitigation. It presents a
challenging and laborious process due to the complexity of SVs and the scarcity
of labeled data. To mitigate the above challenges, we introduce EvalSVA, a
team of multi-agent evaluators that autonomously deliberates on and evaluates
various aspects of SV assessment. Specifically, we propose a multi-agent-based
framework to simulate vulnerability assessment strategies in real-world
scenarios, which combines multiple Large Language Models (LLMs) into an
integrated group to enhance the effectiveness of SV assessment under limited
data. We also design diverse communication strategies to autonomously discuss
and assess different aspects of SV. Furthermore, we construct a multi-lingual
SV assessment dataset based on the new standard of CVSS, comprising 699, 888,
and 1,310 vulnerability-related commits in C++, Python, and Java, respectively.
Our experimental results demonstrate that EvalSVA outperforms previous methods
by an average of 44.12\% in accuracy and 43.29\% in F1 for SV assessment. This
shows that EvalSVA offers a human-like process and generates both
reason and answer for SV assessment. EvalSVA can also aid human experts in SV
assessment, which provides more explanation and details for SV assessment.
|
2501.14738
|
On strict ranking by pairwise comparisons
|
cs.IT math.IT
|
We attack the problem of obtaining a strict ranking (i.e. a ranking without
equally ranked items) of $n$ items from a pairwise comparisons matrix. Basic
structures are described, and a first heuristic approach based on a condition,
the $\mathcal{R}$-condition, is proposed. Analyzing the limits of this ranking
procedure, we finish with a minimization problem which can be applied to a
wider class of pairwise comparisons matrices. If solved, it produces consistent
pairwise comparisons that produce a strict ranking.
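For intuition, a standard geometric-mean heuristic (not the paper's R-condition procedure) that yields a strict ranking when the pairwise comparisons matrix is consistent:

```python
import math

def strict_ranking(A):
    """Rank n items from a pairwise comparisons matrix A, where A[i][j]
    states how strongly item i is preferred to item j and A[j][i] equals
    1 / A[i][j]. Scores are row geometric means; ties would still need
    breaking to guarantee strictness."""
    n = len(A)
    scores = [math.prod(A[i]) ** (1.0 / n) for i in range(n)]
    return sorted(range(n), key=lambda i: -scores[i])

# A consistent 3x3 matrix encoding item 1 > item 0 > item 2.
A = [[1, 1 / 2, 2],
     [2, 1, 4],
     [1 / 2, 1 / 4, 1]]
order = strict_ranking(A)  # [1, 0, 2]
```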
|
2501.14739
|
Reproduction Research of FSA-Benchmark
|
cs.DC cs.LG
|
In the current landscape of big data, the reliability and performance of
storage systems are essential to the success of various applications and
services. As data volumes continue to grow exponentially, the complexity and
scale of the storage infrastructures needed to manage this data also increase.
A significant challenge faced by data centers and storage systems is the
detection and management of fail-slow disks, which experience a gradual decline
in performance before ultimately failing. Unlike outright disk failures,
fail-slow conditions can go undetected for prolonged periods, leading to
considerable impacts on system performance and user experience.
|
2501.14741
|
On Design Choices in Similarity-Preserving Sparse Randomized Embeddings
|
cs.NE cs.LG q-bio.NC
|
Expand & Sparsify is a principle that is observed in anatomically similar
neural circuits found in the mushroom body (insects) and the cerebellum
(mammals). Sensory data are projected randomly to a much higher dimensionality
(expand part), where only a few of the most strongly excited neurons are
activated (sparsify part). This principle has been leveraged to design the FlyHash
algorithm that forms similarity-preserving sparse embeddings, which have been
found useful for such tasks as novelty detection, pattern recognition, and
similarity search. Despite its simplicity, FlyHash has a number of design
choices to be set, such as preprocessing of the input data, choice of
sparsifying activation function, and formation of the random projection matrix.
In this paper, we explore the effect of these choices on the performance of
similarity search with FlyHash embeddings. We find that the right combination
of design choices can lead to a drastic difference in search performance.
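A minimal FlyHash-style sketch of the Expand & Sparsify principle: a sparse random binary projection followed by winner-take-all top-k sparsification. All parameter values below are illustrative:

```python
import random

def flyhash(x, expand_dim=64, k=4, density=0.1, seed=0):
    """Expand & Sparsify: project x with a sparse random binary matrix into
    a much higher dimension, then keep only the top-k most strongly excited
    units (winner-take-all). The fixed seed makes every call share one
    projection matrix."""
    rng = random.Random(seed)
    proj = [[1 if rng.random() < density else 0 for _ in range(len(x))]
            for _ in range(expand_dim)]
    activations = [sum(w * xi for w, xi in zip(row, x)) for row in proj]
    top = sorted(range(expand_dim), key=lambda i: -activations[i])[:k]
    return set(top)  # sparse binary embedding, as a set of active indices

a = flyhash([1.0, 0.9, 0.0, 0.1])
b = flyhash([0.9, 1.0, 0.1, 0.0])  # a similar input; with a shared projection,
                                   # similar inputs tend to overlap in active units
```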
|
2501.14742
|
Evaluating the effectiveness, reliability and efficiency of a
multi-objective sequential optimization approach for building performance
design
|
cs.NE math.OC
|
The complexity of performance-based building design stems from the evaluation
of numerous candidate design options, driven by the plethora of variables,
objectives, and constraints inherent in multi-disciplinary projects. This
necessitates optimization approaches to support the identification of well
performing designs while reducing the computational time of performance
evaluation. In response, this paper proposes and evaluates a sequential
approach for multi-objective design optimization of building geometry, fabric,
HVAC system and controls for building performance. This approach involves
sequential optimizations with optimal solutions from previous stages passed to
the next. The performance of the sequential approach is benchmarked against a
full factorial search, assessing its effectiveness in finding global optima,
solution quality, reliability to scale and variations of problem formulations,
and computational efficiency compared to the NSGA-II algorithm. Twenty-four
configurations of the sequential approach are tested on a multi-scale case
study, simulating 874 to 4,147,200 design options for an office building,
aiming to minimize energy demand while maintaining thermal comfort. A two-stage
sequential process, (building geometry + fabric) followed by (HVAC system +
controls), identified the same Pareto-optimal solutions as the full factorial
search
across all four scales and variations of problem formulations, demonstrating
100% effectiveness and reliability. This approach required 100,700 function
evaluations, representing a 91.2% reduction in computational effort compared to
the full factorial search. In contrast, NSGA-II achieved only 73.5% of the
global optima with the same number of function evaluations. This research
indicates that a sequential optimization approach is a highly efficient and
robust alternative to the standard NSGA-II algorithm.
|
2501.14743
|
KVDirect: Distributed Disaggregated LLM Inference
|
cs.DC cs.LG cs.PF
|
Large Language Models (LLMs) have become the new foundation for many
applications, reshaping human society like a storm. Disaggregated inference,
which separates prefill and decode stages, is a promising approach to improving
hardware utilization and service quality. However, due to inefficient
inter-node communication, existing systems restrict disaggregated inference to
a single node, limiting resource allocation flexibility and reducing service
capacity. This paper introduces KVDirect, which optimizes KV cache transfer to
enable a distributed disaggregated LLM inference. KVDirect achieves this
through the following contributions. First, we propose a novel tensor-centric
communication mechanism that reduces the synchronization overhead in
traditional distributed GPU systems. Second, we design a custom communication
library to support dynamic GPU resource scheduling and efficient KV cache
transfer. Third, we introduce a pull-based KV cache transfer strategy that
reduces GPU resource idling and improves latency. Finally, we implement
KVDirect as an open-source LLM inference framework. Our evaluation demonstrates
that KVDirect reduces per-request latency by 55% compared to the baseline
across diverse workloads under the same resource constraints.
|
2501.14744
|
FSTA-SNN:Frequency-based Spatial-Temporal Attention Module for Spiking
Neural Networks
|
cs.NE cs.CV cs.LG
|
Spiking Neural Networks (SNNs) are emerging as a promising alternative to
Artificial Neural Networks (ANNs) due to their inherent energy efficiency.
Owing to the inherent sparsity in spike generation within SNNs, the in-depth
analysis and optimization of intermediate output spikes are often neglected.
This oversight significantly restricts the inherent energy efficiency of SNNs
and diminishes their advantages in spatiotemporal feature extraction, resulting
in a lack of accuracy and unnecessary energy expenditure. In this work, we
analyze the inherent spiking characteristics of SNNs from both temporal and
spatial perspectives. In terms of spatial analysis, we find that shallow layers
tend to focus on learning vertical variations, while deeper layers gradually
learn horizontal variations of features. Regarding temporal analysis, we
observe that there is not a significant difference in feature learning across
different time steps. This suggests that increasing the time steps has limited
effect on feature learning. Based on the insights derived from these analyses,
we propose a Frequency-based Spatial-Temporal Attention (FSTA) module to
enhance feature learning in SNNs. This module aims to improve the feature
learning capabilities by suppressing redundant spike features. The experimental
results indicate that the introduction of the FSTA module significantly reduces
the spike firing rate of SNNs, demonstrating superior performance compared to
state-of-the-art baselines across multiple datasets.
|
2501.14745
|
AI-Driven Health Monitoring of Distributed Computing Architecture:
Insights from XGBoost and SHAP
|
cs.DC cs.LG
|
With the rapid development of artificial intelligence technology, its
application in the optimization of complex computer systems is becoming more
and more extensive. Edge computing is an efficient distributed computing
architecture, and the health status of its nodes directly affects the
performance and reliability of the entire system. In view of the lack of
accuracy and interpretability of traditional methods in node health status
judgment, this paper proposes a health status judgment method based on XGBoost
and combines the SHAP method to analyze the interpretability of the model.
Through experiments, it is verified that XGBoost has superior performance in
processing complex features and nonlinear data of edge computing nodes,
especially in capturing the impact of key features (such as response time and
power consumption) on node status. SHAP value analysis further reveals the
global and local importance of features, so that the model not only has high
precision discrimination ability but also can provide intuitive explanations,
providing data support for system optimization. Research shows that the
combination of AI technology and computer system optimization can not only
realize the intelligent monitoring of the health status of edge computing nodes
but also provide a scientific basis for dynamic optimization scheduling,
resource management and anomaly detection. In the future, with the in-depth
development of AI technology, model dynamics, cross-node collaborative
optimization and multimodal data fusion will become the focus of research,
providing important support for the intelligent evolution of edge computing
systems.
|
2501.14746
|
Neuromorphic Spiking Neural Network Based Classification of COVID-19
Spike Sequences
|
cs.NE cs.LG
|
The availability of SARS-CoV-2 (severe acute respiratory syndrome coronavirus
2) virus data post-COVID has grown exponentially to an enormous magnitude,
opening research doors to analyze its behavior. Various studies are conducted
by researchers to gain a deeper understanding of the virus, such as through
genomic surveillance, so that efficient prevention mechanisms can be developed.
However, the unstable nature of the virus (rapid mutations, multiple hosts,
etc.) creates challenges in designing analytical systems for it. Therefore, we
propose a neural network-based (NN) mechanism to perform an efficient analysis
of the SARS-CoV-2 data, as NN portrays generalized behavior upon training.
Moreover, rather than using the full-length genome of the virus, we apply our
method to its spike region, as this region is known to have predominant
mutations and is used to attach to the host cell membrane. In this paper, we
introduce a pipeline that first converts the spike protein sequences into a
fixed-length numerical representation and then uses Neuromorphic Spiking Neural
Network to classify those sequences. We compare the performance of our method
with various baselines using real-world SARS-CoV-2 spike sequence data and show
that our method is able to achieve higher predictive accuracy compared to the
recent baselines.
|
2501.14747
|
Enhancing Green Economy with Artificial Intelligence: Role of Energy Use
and FDI in the United States
|
econ.GN cs.AI q-fin.EC
|
The escalating challenge of climate change necessitates an urgent exploration
of factors influencing carbon emissions. This study contributes to the
discourse by examining the interplay of technological, economic, and
demographic factors on environmental sustainability. This study investigates
the impact of artificial intelligence (AI) innovation, economic growth, foreign
direct investment (FDI), energy consumption, and urbanization on CO2 emissions
in the United States from 1990 to 2022. Employing the ARDL framework integrated
with the STIRPAT model, the findings reveal a dual narrative: while AI
innovation mitigates environmental stress, economic growth, energy use, FDI,
and urbanization exacerbate environmental degradation. Unit root tests (ADF,
PP, and DF-GLS) confirm mixed integration levels among variables, and the ARDL
bounds test establishes long-term co-integration. The analysis highlights that
AI innovation positively correlates with CO2 reduction when environmental
safeguards are in place, whereas GDP growth, energy consumption, FDI, and
urbanization intensify CO2 emissions. Robustness checks using FMOLS, DOLS, and
CCR validate the ARDL findings. Additionally, Pairwise Granger causality tests
reveal significant one-way causal links between CO2 emissions and economic
growth, AI innovation, energy use, FDI, and urbanization. These relationships
emphasize the critical role of AI-driven technological advancements,
sustainable investments, and green energy in fostering ecological
sustainability. The study suggests policy measures such as encouraging green
FDI, advancing AI technologies, adopting sustainable energy practices, and
implementing eco-friendly urban development to promote sustainable growth in
the USA.
|
2501.14750
|
Engineering Carbon Credits Towards A Responsible FinTech Era: The
Practices, Implications, and Future
|
cs.CY cs.LG
|
Carbon emissions significantly contribute to climate change, and carbon
credits have emerged as a key tool for mitigating environmental damage and
helping organizations manage their carbon footprint. Despite their growing
importance across sectors, fully leveraging carbon credits remains challenging.
This study explores engineering practices and fintech solutions to enhance
carbon emission management. We first review the negative impacts of carbon
emission non-disclosure, revealing its adverse effects on financial stability
and market value. Organizations are encouraged to actively manage emissions and
disclose relevant data to mitigate risks. Next, we analyze factors influencing
carbon prices and review advanced prediction algorithms that optimize carbon
credit purchasing strategies, reducing costs and improving efficiency.
Additionally, we examine corporate carbon emission prediction models, which
offer accurate performance assessments and aid in planning future carbon credit
needs. By integrating carbon price and emission predictions, we propose
research directions, including corporate carbon management cost forecasting.
This study provides a foundation for future quantitative research on the
financial and market impacts of carbon management practices and is the first
systematic review focusing on computing solutions and engineering practices for
carbon credits.
|
2501.14751
|
Optimizing LPB Algorithms using Simulated Annealing
|
cs.NE
|
Learner Performance-based Behavior using Simulated Annealing (LPBSA) is an
improvement of the Learner Performance-based Behavior (LPB) algorithm. LPBSA,
like LPB, has been proven to deal with single and complex problems. Simulated
Annealing (SA) has been utilized as a powerful technique to optimize LPB. LPBSA
has provided results that outperformed popular algorithms, like the Genetic
Algorithm (GA), Particle Swarm Optimization (PSO), and even LPB. This study
outlines the improved algorithm's working procedure by providing a main
population and dividing it into Good and Bad populations and then applying
crossover and mutation operators. When some individuals are born in the
crossover stage, they have to go through the mutation process. Between these
two steps, we have applied SA using the Metropolis Acceptance Criterion (MAC)
to accept only the best and most useful individuals to be used in the next
iteration. Finally, the outcomes demonstrate that the population is enhanced,
leading to improved efficiency and validating the performance of LPBSA.
|
2501.14753
|
ABACUS: A FinOps Service for Cloud Cost Optimization
|
cs.DC cs.AI cs.NI cs.SE
|
In recent years, as more enterprises have moved their infrastructure to the
cloud, significant challenges have emerged in achieving holistic cloud spend
visibility and cost optimization. FinOps practices provide a way for
enterprises to achieve these business goals by optimizing cloud costs and
bringing accountability to cloud spend. This paper presents ABACUS - Automated
Budget Analysis and Cloud Usage Surveillance, a FinOps solution for optimizing
cloud costs by setting budgets, enforcing those budgets through blocking new
deployments, and alerting appropriate teams if spending breaches a budget
threshold. ABACUS also leverages best practices like Infrastructure-as-Code to
alert engineering teams of the expected cost of deployment before resources are
deployed in the cloud. Finally, future research directions are proposed to
advance the state of the art in this important field.
|
2501.14755
|
Data-Juicer 2.0: Cloud-Scale Adaptive Data Processing for Foundation
Models
|
cs.DC cs.AI
|
The burgeoning field of foundation models necessitates advanced data
processing mechanisms capable of harnessing vast valuable data with varied
types utilized by these models. Nevertheless, the current landscape presents
unique challenges that traditional data processing frameworks cannot handle
effectively, especially with multimodal intricacies. In response, we present
Data-Juicer 2.0, a new system offering fruitful data processing capabilities
backed by over a hundred operators spanning various modalities like text,
image, audio, and video. With seamless compatibility and dedicated optimization
to popular dataset hubs like Hugging Face and computing engines like Ray,
Data-Juicer 2.0 enhances its predecessor in usability, efficiency, and
programmability. It features an easily accessible user interface layer that
supports decoupled Python interactions, RESTful APIs, and conversational
commands. Alongside this, it contains a core runtime layer optimized for
adaptive execution and management across different dataset scales, processing
demands, and computational environments, while shielding unnecessary system
details. Extensive empirical evaluations demonstrate Data-Juicer 2.0's
remarkable performance and scalability, highlighting its capability to
efficiently process tens of billions of data samples with tens of thousands of
CPU cores. The system is publicly available, actively maintained, and broadly
adopted in diverse research endeavors, practical applications, and real-world
products such as Alibaba Cloud PAI.
|
2501.14756
|
Towards An Automated AI Act FRIA Tool That Can Reuse GDPR's DPIA
|
cs.CY cs.AI
|
The AI Act introduces the obligation to conduct a Fundamental Rights Impact
Assessment (FRIA), with the possibility to reuse a Data Protection Impact
Assessment (DPIA), and requires the EU Commission to create an automated
tool to support the FRIA process. In this article, we provide our novel
exploration of the DPIA and FRIA as information processes to enable the
creation of automated tools. We first investigate the information involved in
DPIA and FRIA, and then use this to align the two to state where a DPIA can be
reused in a FRIA. We then present the FRIA as a 5-step process and discuss the
role of an automated tool for each step. Our work provides the necessary
foundation for creating and managing information for FRIA and supporting it
through an automated tool as required by the AI Act.
|
2501.14759
|
LPBSA: Enhancing Optimization Efficiency through Learner
Performance-based Behavior and Simulated Annealing
|
cs.NE math.OC
|
This study introduces the LPBSA, an advanced optimization algorithm that
combines Learner Performance-based Behavior (LPB) and Simulated Annealing (SA)
in a hybrid approach. Emphasizing metaheuristics, the LPBSA addresses and
mitigates the challenges associated with traditional LPB methodologies,
enhancing convergence, robustness, and adaptability in solving complex
optimization problems. Through extensive evaluations using benchmark test
functions, the LPBSA demonstrates superior performance compared to LPB and
competes favorably with established algorithms such as PSO, FDO, LEO, and GA.
Real-world applications underscore the algorithm's promise, with LPBSA
outperforming the LEO algorithm in two tested scenarios. Results on many
benchmark test functions, such as TF5 (with a recorded value of 4.76762333)
and others provided in the results section, show that LPBSA outperforms
popular algorithms. This research highlights the efficacy of a
hybrid approach in the ongoing evolution of optimization algorithms, showcasing
the LPBSA's capacity to navigate diverse optimization landscapes and contribute
significantly to addressing intricate optimization challenges.
|
2501.14762
|
Linked Data on Geo-annotated Events and Use Cases for the Resilience of
Ukraine
|
cs.CY cs.SI
|
The mission of resilience of Ukrainian cities calls for international
collaboration with the scientific community to increase the quality of
information by identifying and integrating information from various news and
social media sources. Linked Data technology can be used to unify, enrich, and
integrate data from multiple sources. In our work, we focus on datasets about
damaging events in Ukraine due to Russia's invasion between February 2022 and
the end of April 2023. We convert two selected datasets to Linked Data and
enrich them with additional geospatial information. Following that, we present
an algorithm for the detection of identical events from different datasets. Our
pipeline makes it easy to convert and enrich datasets to integrated Linked
Data. The resulting dataset consists of 10K reported events covering damage to
hospitals, schools, roads, residential buildings, etc. Finally, we demonstrate
in use cases how our dataset can be applied to different scenarios for
resilience purposes.
|
2501.14765
|
Hybrid Cooperative Co-Evolution Algorithm for Deadlock-prone Distributed
Assembly Flowshop Scheduling with Limited buffers Using Petri nets
|
cs.DC cs.SY eess.SY
|
The distributed assembly flowshop scheduling problem (DAFSP) can be applied
to immense manufacturing environments. In DAFSP, jobs are first processed in
distributed flowshops, and then assembled into final products by an assembly
machine, which usually has limited buffers in practical application. This
limited capacity can lead to deadlocks, halting job completion and blocking the
entire manufacturing process. However, existing scheduling methods fail to
address these deadlocks in DAFSP effectively. As such, we develop a hybrid
cooperative co-evolution (HCCE) algorithm for solving the deadlock-prone DAFSP
by minimizing the makespan. For the first time, we use Petri nets to analyze
the deadlocks in DAFSP and propose a Petri net-based deadlock amending method
(IDAM), which is further integrated into HCCE to ensure the feasibility (i.e.,
deadlock-freeness) of solutions. Importantly, HCCE contains an elite archive
(EAR) and two subpopulations. It uses problem-specific operators for heuristic
initialization and global search. To enhance the quality and diversity of
solutions, an information transfer mechanism (ITM) is developed between the
subpopulations and the EAR, and four local-search operators are performed
sequentially on each individual in the EAR. Finally, comprehensive experiments
demonstrate the effectiveness and superiority of the proposed HCCE algorithm.
|
2501.14766
|
Artificial Intelligence for Sustainable Urban Biodiversity: A Framework
for Monitoring and Conservation
|
cs.CY cs.AI
|
The rapid expansion of urban areas challenges biodiversity conservation,
requiring innovative ecosystem management. This study explores the role of
Artificial Intelligence (AI) in urban biodiversity conservation, its
applications, and a framework for implementation. Key findings show that: (a)
AI enhances species detection and monitoring, achieving over 90% accuracy in
urban wildlife tracking and invasive species management; (b) integrating data
from remote sensing, acoustic monitoring, and citizen science enables
large-scale ecosystem analysis; and (c) AI decision tools improve conservation
planning and resource allocation, increasing prediction accuracy by up to 18.5%
compared to traditional methods. The research presents an AI-Driven Framework
for Urban Biodiversity Management, highlighting AI's impact on monitoring,
conservation strategies, and ecological outcomes. Implementation strategies
include: (a) standardizing data collection and model validation, (b) ensuring
equitable AI access across urban contexts, and (c) developing ethical
guidelines for biodiversity monitoring. The study concludes that integrating AI
in urban biodiversity conservation requires balancing innovation with
ecological wisdom and addressing data quality, socioeconomic disparities, and
ethical concerns.
|
2501.14767
|
Leveraging Social Media Data and Artificial Intelligence for Improving
Earthquake Response Efforts
|
cs.CY cs.AI cs.CL cs.IR cs.SI
|
The integration of social media and artificial intelligence (AI) into
disaster management, particularly for earthquake response, represents a
profound evolution in emergency management practices. In the digital age,
real-time information sharing has reached unprecedented levels, with social
media platforms emerging as crucial communication channels during crises. This
shift has transformed traditional, centralized emergency services into more
decentralized, participatory models of disaster situational awareness. Our
study includes an experimental analysis of 8,900 social media interactions,
including 2,920 posts and 5,980 replies on X (formerly Twitter), following a
magnitude 5.1 earthquake in Oklahoma on February 2, 2024. The analysis covers
data from the immediate aftermath and extends over the following seven days,
illustrating the critical role of digital platforms in modern disaster
response. The results demonstrate that social media platforms can be
effectively used as real-time situational awareness tools, delivering critical
information to society and authorities during emergencies.
|
2501.14768
|
Equation discovery framework EPDE: Towards a better equation discovery
|
cs.NE cs.AI cs.LG
|
Equation discovery methods hold promise for extracting knowledge from
physics-related data. However, existing approaches often require substantial
prior information that significantly reduces the amount of knowledge extracted.
In this paper, we enhance the EPDE algorithm -- an evolutionary
optimization-based discovery framework. In contrast to methods like SINDy,
which rely on pre-defined libraries of terms and linearities, our approach
generates terms using fundamental building blocks such as elementary functions
and individual differentials. Within evolutionary optimization, we may improve
the computation of the fitness function as is done in gradient methods and
enhance the optimization algorithm itself. By incorporating multi-objective
optimization, we effectively explore the search space, yielding more robust
equation extraction, even when dealing with complex experimental data. We
validate our algorithm's noise resilience and overall performance by comparing
its results with those from the state-of-the-art equation discovery framework
SINDy.
|
2501.14769
|
A survey on pioneering metaheuristic algorithms between 2019 and 2024
|
cs.NE cs.AI
|
This review examines over 150 new metaheuristics of the last six years
(between 2019 and 2024), underscoring their profound influence and performance.
Over the past three decades, more than 500 new metaheuristic algorithms have
been proposed, with no slowdown in sight. This overwhelming abundance
complicates the process of selecting and assessing the most effective solutions
for complex optimization challenges. Our evaluation centers on pivotal
criteria, including annual citation metrics, the breadth of the addressed
problem types, source code availability, user-friendly parameter
configurations, innovative mechanisms and operators, and approaches designed to
mitigate traditional metaheuristic issues such as stagnation and premature
convergence. We further explore recent high-impact applications of the 23 most
influential metaheuristic algorithms of the past six years, shedding light on
their
advantages and limitations, while identifying challenges and potential avenues
for future research.
|
2501.14770
|
Optimizing SSD Caches for Cloud Block Storage Systems Using Machine
Learning Approaches
|
cs.DC cs.LG cs.OS
|
The growing demand for efficient cloud storage solutions has led to the
widespread adoption of Solid-State Drives (SSDs) for caching in cloud block
storage systems. The management of data writes to SSD caches plays a crucial
role in improving overall system performance, reducing latency, and extending
the lifespan of storage devices. A critical challenge arises from the large
volume of write-only data, which significantly impacts the performance of SSD
caches when handled inefficiently. Specifically, writes that have not been read
for a certain period may introduce unnecessary write traffic to the SSD cache
without offering substantial benefits for cache performance. This paper
proposes a novel approach to mitigate this issue by leveraging machine learning
techniques to dynamically optimize the write policy in cloud-based storage
systems. The proposed method identifies write-only data and selectively filters
it out in real-time, thereby minimizing the number of unnecessary write
operations and improving the overall performance of the cache system.
Experimental results demonstrate that the proposed machine learning-based
policy significantly outperforms traditional approaches by reducing the number
of harmful writes and optimizing cache utilization. This solution is
particularly suitable for cloud environments with varying and unpredictable
workloads, where traditional cache management strategies often fall short.
|
2501.14771
|
Dynamic Adaptation in Data Storage: Real-Time Machine Learning for
Enhanced Prefetching
|
cs.DC cs.LG cs.OS
|
The exponential growth of data storage demands has necessitated the evolution
of hierarchical storage management strategies [1]. This study explores the
application of streaming machine learning [3] to revolutionize data prefetching
within multi-tiered storage systems. Unlike traditional batch-trained models,
streaming machine learning [5] offers adaptability, real-time insights, and
computational efficiency, responding dynamically to workload variations. This
work designs and validates an innovative framework that integrates streaming
classification models for predicting file access patterns, specifically the
next file offset. Leveraging comprehensive feature engineering and real-time
evaluation over extensive production traces, the proposed methodology achieves
substantial improvements in prediction accuracy, memory efficiency, and system
adaptability. The results underscore the potential of streaming models in
real-time storage management, setting a precedent for advanced caching and
tiering strategies.
|
2501.14772
|
DropMicroFluidAgents (DMFAs): Autonomous Droplet Microfluidic Research
Framework Through Large Language Model Agents
|
cs.CY cs.AI
|
Applying Large language models (LLMs) within specific domains requires
substantial adaptation to account for the unique terminologies, nuances, and
context-specific challenges inherent to those areas. Here, we introduce
DropMicroFluidAgents (DMFAs), an advanced language-driven framework leveraging
state-of-the-art pre-trained LLMs. DMFAs employs LLM agents to perform two key
functions: (1) delivering focused guidance, answers, and suggestions specific
to droplet microfluidics and (2) generating machine learning models to optimise
and automate the design of droplet microfluidic devices, including the creation
of code-based computer-aided design (CAD) scripts to enable rapid and precise
design execution. Experimental evaluations demonstrated that the integration of
DMFAs with the LLAMA3.1 model yielded the highest accuracy of 76.15%,
underscoring the significant performance enhancement provided by agent
integration. This effect was particularly pronounced when DMFAs were paired
with the GEMMA2 model, resulting in a 34.47% improvement in accuracy compared
to the standalone GEMMA2 configuration. This study demonstrates the effective
use of LLM agents in droplet microfluidics research as powerful tools for
automating workflows, synthesising knowledge, optimising designs, and
interacting with external systems. These capabilities enable their application
across education and industrial support, driving greater efficiency in
scientific discovery and innovation.
|
2501.14775
|
Hybrid Firefly-Genetic Algorithm for Single and Multi-dimensional 0-1
Knapsack Problems
|
cs.NE cs.AI
|
This paper addresses the challenges faced by algorithms, such as the Firefly
Algorithm (FA) and the Genetic Algorithm (GA), in constrained optimization
problems. While both algorithms perform well for unconstrained problems, their
effectiveness diminishes when constraints are introduced due to limitations in
exploration, exploitation, and constraint handling. To overcome these
challenges, a hybrid FAGA algorithm is proposed, combining the strengths of
both algorithms. The hybrid algorithm is validated by solving unconstrained
benchmark functions and constrained optimization problems, including design
engineering problems and combinatorial problems such as the 0-1 Knapsack
Problem. The proposed algorithm delivers improved solution accuracy and
computational efficiency compared to conventional optimization algorithms. This
paper outlines the development and structure of the hybrid algorithm and
demonstrates its effectiveness in handling complex optimization problems.
|
2501.14776
|
Green AI: Which Programming Language Consumes the Most?
|
cs.CY cs.AI cs.PL
|
AI is demanding an ever-growing portion of environmental resources. Despite
their potential impact on AI environmental sustainability, the role that
programming languages play in AI (in)efficiency is to date still unknown. With
this study, we aim to understand the impact that programming languages can have
on AI environmental sustainability. To achieve our goal, we conduct a
controlled empirical experiment by considering five programming languages (C++,
Java, Python, MATLAB, and R), seven AI algorithms (KNN, SVC, AdaBoost, decision
tree, logistic regression, naive Bayes, and random forest), three popular
datasets, and the training and inference phases. The collected results show
that programming languages have a considerable impact on AI environmental
sustainability. Compiled and semi-compiled languages (C++, Java) consistently
consume less than interpreted languages (Python, MATLAB, R), which require up
to 54x more energy. Some languages are cumulatively more efficient in training,
while others are more efficient in inference. Which programming language
consumes the most depends heavily on the algorithm considered. Ultimately,
algorithm implementation might be the most determining factor in Green AI,
regardless of the language used. In conclusion, while making AI more
environmentally sustainable is paramount, a
trade-off between energy efficiency and implementation ease should always be
considered. Green AI can be achieved without the need of completely disrupting
the development practices and technologies currently in place.
|
2501.14777
|
Enhancing Supply Chain Resilience with Metaverse and ChatGPT
Technologies
|
cs.CY cs.AI
|
Global supply lines have been severely disrupted by the COVID-19 pandemic and
the conflict between Russia and Ukraine, which has sharply increased the price
of commodities and generated inflation. These incidents highlight how critical
it is to improve supply chain resilience (SCRES) in order to fend off
unforeseen setbacks. Controlling both internal and external interruptions, such
as transportation problems brought on by natural catastrophes and wars, is the
responsibility of SCRES. Enhancing resilience in supply chains requires
accurate and timely information transfer. Promising answers to these problems
can be found in the Metaverse and ChatGPT, two new digital technologies. The
Metaverse may imitate real-world situations and offer dynamic, real-time 3D
representations of supply chain data by integrating blockchain, IoT, network
connectivity, and computing power. ChatGPT, a large-scale natural language
processing model, improves the accuracy and speed of communication and data
translation. To manage risk and facilitate decision making in supply chain
management, firms should improve the speed and quality of information
transmission. This study aims to show the importance of ChatGPT and Metaverse
technologies for improving SCRES, with an emphasis on the most important
criteria for SCRES and the maturity factors that can directly influence supply
chain development.
|
2501.14778
|
Advancing Trustworthy AI for Sustainable Development: Recommendations
for Standardising AI Incident Reporting
|
cs.CY cs.AI cs.HC
|
The increasing use of AI technologies has led to a growing number of AI
incidents,
posing risks and causing harm to individuals, organizations, and society. This
study recognizes and addresses the lack of standardized protocols for reliably
and comprehensively gathering such incident data crucial for preventing future
incidents and developing mitigation strategies. Specifically, this study
analyses existing open-access AI-incident databases through a systematic
methodology and identifies nine gaps in current AI incident reporting
practices. Further, it proposes nine actionable recommendations to enhance
standardization efforts to address these gaps. Ensuring the trustworthiness of
enabling technologies such as AI is necessary for sustainable digital
transformation. Our research promotes the development of standards to prevent
future AI incidents and promote trustworthy AI, thus facilitating achieving the
UN sustainable development goals. Through international cooperation,
stakeholders can unlock the transformative potential of AI, enabling a
sustainable and inclusive future for all.
|
2501.14779
|
The Use of Generative Artificial Intelligence for Upper Secondary
Mathematics Education Through the Lens of Technology Acceptance
|
cs.CY cs.AI cs.HC
|
This study investigated students' perceptions of using Generative
Artificial Intelligence (GenAI) in upper-secondary mathematics education. Data
was collected from Finnish high school students to represent how key constructs
of the Technology Acceptance Model (Perceived Usefulness, Perceived Ease of
Use, Perceived Enjoyment, and Intention to Use) influence the adoption of AI
tools. First, a structural equation model for a comparative study with a prior
study was constructed and analyzed. Then, an extended model with the additional
construct of Compatibility, which represents the alignment of AI tools with
students' educational experiences and needs, was proposed and analyzed. The
results demonstrated a strong influence of perceived usefulness on the
intention to use GenAI, emphasizing the statistically significant role of
perceived enjoyment in determining perceived usefulness and ease of use. The
inclusion of compatibility improved the model's explanatory power, particularly
in predicting perceived usefulness. This study contributes to a deeper
understanding of how AI tools can be integrated into mathematics education and
highlights key differences between the Finnish educational context and previous
studies based on structural equation modeling.
|
2501.14780
|
Perspective Chapter: MOOCs in India: Evolution, Innovation, Impact, and
Roadmap
|
cs.CY cs.AI cs.DL
|
With the largest population of the world and one of the highest enrolments in
higher education, India needs efficient and effective means to educate its
learners. India started focusing on open and digital education in the 1980s, and
its efforts were escalated in 2009 through the NMEICT program of the Government
of India. A study by the Government and FICCI in 2014 noted that India cannot
meet its educational needs just by capacity building in brick-and-mortar
institutions. It was decided that ongoing MOOCs projects under the umbrella of
NMEICT will be further strengthened over its second (2017-21) and third
(2021-26) phases. NMEICT now steers NPTEL or SWAYAM (India's MOOCs) and several
digital learning projects including Virtual Labs, e-Yantra, Spoken Tutorial,
FOSSEE, and the National Digital Library of India, the largest digital education
library in the world. Further, India embraced its new National Education Policy
in 2020 to strongly foster online education. In this chapter, we take a deep
look into the evolution of MOOCs in India, its innovations, its current status
and impact, and the roadmap for the next decade to address its challenges and
grow. AI-powered MOOCs are an emerging opportunity for India to lead MOOCs
worldwide.
|
2501.14784
|
DeServe: Towards Affordable Offline LLM Inference via Decentralization
|
cs.DC cs.AI
|
The rapid growth of generative AI and its integration into everyday workflows
have significantly increased the demand for large language model (LLM)
inference services. While proprietary models remain popular, recent
advancements in open-source LLMs have positioned them as strong contenders.
However, deploying these models is often constrained by the high costs and
limited availability of GPU resources. In response, this paper presents the
design of a decentralized offline serving system for LLM inference. Utilizing
idle GPU resources, our proposed system, DeServe, decentralizes access to LLMs
at a lower cost. DeServe specifically addresses key challenges in optimizing
serving throughput in high-latency network environments. Experiments
demonstrate that DeServe achieves a 6.7x-12.6x improvement in throughput over
existing serving system baselines in such conditions.
|
2501.14785
|
ED-Filter: Dynamic Feature Filtering for Eating Disorder Classification
|
stat.ML cs.AI cs.LG cs.SI
|
Eating disorders (ED) are critical psychiatric problems that have alarmed the
mental health community. Mental health professionals are increasingly
recognizing the utility of data derived from social media platforms such as
Twitter. However, high dimensionality and extensive feature sets of Twitter
data present remarkable challenges for ED classification. To overcome these
hurdles, we introduce a novel method, an informed branch and bound search
technique known as ED-Filter. This strategy significantly mitigates the
drawbacks of conventional feature selection algorithms such as filters and
wrappers. ED-Filter iteratively identifies an optimal set of promising features
that maximize the eating disorder classification accuracy. In order to adapt to
the dynamic nature of Twitter ED data, we enhance the ED-Filter with a hybrid
greedy-based deep learning algorithm. This algorithm swiftly identifies
sub-optimal features to accommodate the ever-evolving data landscape.
Experimental results on Twitter eating disorder data affirm the effectiveness
and efficiency of ED-Filter. The method demonstrates significant improvements
in classification accuracy and proves its value in eating disorder detection on
social media platforms.
|
2501.14786
|
Punch Out Model Synthesis: A Stochastic Algorithm for Constraint Based
Tiling Generation
|
cs.DC cs.LG
|
As an artistic aid in tiled level design, Constraint Based Tiling Generation
(CBTG) algorithms can help to automatically create level realizations from a
set of tiles and placement constraints. Merrell's Modify in Blocks Model
Synthesis (MMS) and Gumin's Wave Function Collapse (WFC) have been proposed as
CBTG algorithms that work well for many scenarios but have limitations in
problem size, problem setup, and solution biasing. We present Punch Out Model
Synthesis (POMS), a CBTG algorithm that can handle large problem sizes, requires
minimal
assumptions for setup and can help mitigate solution biasing. POMS attempts to
resolve indeterminate grid regions by trying to progressively realize
sub-blocks, performing a stochastic boundary erosion on previously resolved
regions should sub-block resolution fail. We highlight the results of running a
reference implementation on different tile sets and discuss a tile correlation
length, implied by the tile constraints, and its role in choosing an
appropriate block size to aid POMS in successfully finding grid realizations.
|
2501.14787
|
Matrix Calculus (for Machine Learning and Beyond)
|
math.HO cs.LG cs.NA math.NA stat.ML
|
This course, intended for undergraduates familiar with elementary calculus
and linear algebra, introduces the extension of differential calculus to
functions on more general vector spaces, such as functions that take as input a
matrix and return a matrix inverse or factorization, derivatives of ODE
solutions, and even stochastic derivatives of random functions. It emphasizes
practical computational applications, such as large-scale optimization and
machine learning, where derivatives must be re-imagined in order to be
propagated through complicated calculations. The class also discusses
efficiency concerns leading to "adjoint" or "reverse-mode" differentiation
(a.k.a. "backpropagation"), and gives a gentle introduction to modern automatic
differentiation (AD) techniques.
|
2501.14788
|
Methods to Increase the Amount of Data for Speech Recognition for Low
Resource Languages
|
cs.SD cs.CL eess.AS
|
This study explores methods to increase data volume for low-resource
languages using techniques such as crowdsourcing, pseudo-labeling, advanced
data preprocessing and various permissive data sources such as audiobooks,
Common Voice, and YouTube. While these methods are well-explored for
high-resource
languages, their application for low-resource languages remains underexplored.
Using Armenian and Georgian as case studies, we demonstrate how linguistic and
resource-specific characteristics influence the success of these methods. This
work provides practical guidance for researchers to choose cost-effective and
quality-driven dataset extension strategies for low-resource languages. The key
takeaway from various data extension approaches is that paid crowd-sourcing
offers the best balance between cost and quality, outperforming volunteer
crowd-sourcing, open-source audiobooks, and unlabeled data usage. An ablation
study shows that models trained on the expanded datasets outperform existing
baselines, achieving word error rates of 5.73% for Georgian and 9.9% for
Armenian ASR using a relatively small FastConformer architecture. We
open-sourced both
the Armenian and Georgian models to allow further research and practical
applications.
|
2501.14790
|
Towards Dynamic Neural Communication and Speech Neuroprosthesis Based on
Viseme Decoding
|
q-bio.NC cs.AI cs.SD eess.AS
|
Decoding text, speech, or images from human neural signals holds promising
potential both as neuroprosthesis for patients and as innovative communication
tools for general users. Although neural signals contain various information on
speech intentions, movements, and phonetic details, generating informative
outputs from them remains challenging, with mostly focusing on decoding short
intentions or producing fragmented outputs. In this study, we developed a
diffusion model-based framework to decode visual speech intentions from
speech-related non-invasive brain signals, to facilitate face-to-face neural
communication. We designed an experiment that consolidates various phonemes to
train the viseme of each phoneme, aiming to learn representations of the
corresponding lip formations from neural signals. By decoding visemes from both
isolated trials and continuous sentences, we successfully reconstructed
coherent lip movements, effectively bridging the gap between brain signals and
dynamic visual interfaces. The results highlight the potential of viseme
decoding and talking face reconstruction from human neural signals, marking a
significant step toward dynamic neural communication systems and speech
neuroprosthesis for patients.
|
2501.14794
|
HeteroLLM: Accelerating Large Language Model Inference on Mobile SoCs
platform with Heterogeneous AI Accelerators
|
cs.DC cs.AI cs.LG
|
With the rapid advancement of artificial intelligence technologies such as
ChatGPT, AI agents, and video generation, contemporary mobile systems have begun
integrating these AI capabilities on local devices to enhance privacy and
reduce response latency. To meet the computational demands of AI tasks, current
mobile SoCs are equipped with diverse AI accelerators, including GPUs and
Neural Processing Units (NPUs). However, there has not been a comprehensive
characterization of these heterogeneous processors, and existing designs
typically only leverage a single AI accelerator for LLM inference, leading to
suboptimal use of computational resources and memory bandwidth. In this paper,
we first summarize key performance characteristics of mobile SoCs, including
heterogeneous processors, unified memory, synchronization, etc. Drawing on
these observations, we propose different tensor partition strategies to fulfill
the distinct requirements of the prefill and decoding phases. We further design
a fast synchronization mechanism that leverages the unified memory address
provided by mobile SoCs. By employing these techniques, we present HeteroLLM,
the fastest LLM inference engine on mobile devices, which supports both
layer-level and tensor-level heterogeneous execution. Evaluation results show
that HeteroLLM achieves 9.99x and 4.36x performance improvements over other
mobile-side LLM inference engines, MLC and MNN, respectively.
|
2501.14802
|
DNN-Powered MLOps Pipeline Optimization for Large Language Models: A
Framework for Automated Deployment and Resource Management
|
cs.DC cs.LG
|
The exponential growth in the size and complexity of Large Language Models
(LLMs) has introduced unprecedented challenges in their deployment and
operational management. Traditional MLOps approaches often fail to efficiently
handle the scale, resource requirements, and dynamic nature of these models.
This research presents a novel framework that leverages Deep Neural Networks
(DNNs) to optimize MLOps pipelines specifically for LLMs. Our approach
introduces an intelligent system that automates deployment decisions, resource
allocation, and pipeline optimization while maintaining optimal performance and
cost efficiency. Through extensive experimentation across multiple cloud
environments and deployment scenarios, we demonstrate significant improvements:
40% enhancement in resource utilization, 35% reduction in deployment latency,
and 30% decrease in operational costs compared to traditional MLOps approaches.
The framework's ability to adapt to varying workloads and automatically
optimize deployment strategies represents a significant advancement in
automated MLOps management for large-scale language models. Our framework
introduces several novel components including a multi-stream neural
architecture for processing heterogeneous operational metrics, an adaptive
resource allocation system that continuously learns from deployment patterns,
and a sophisticated deployment orchestration mechanism that automatically
selects optimal strategies based on model characteristics and environmental
conditions. The system demonstrates robust performance across various
deployment scenarios, including multi-cloud environments, high-throughput
production systems, and cost-sensitive deployments. Through rigorous evaluation
using production workloads from multiple organizations, we validate our
approach's effectiveness in reducing operational complexity while improving
system reliability and cost efficiency.
|
2501.14808
|
HyGen: Efficient LLM Serving via Elastic Online-Offline Request
Co-location
|
cs.DC cs.LG
|
Large language models (LLMs) have facilitated a wide range of applications
with distinct service-level objectives (SLOs), from latency-sensitive online
tasks like interactive chatbots to throughput-oriented offline workloads like
document summarization. The existing deployment model, which dedicates machines
to each workload, simplifies SLO management but often leads to poor resource
utilization. This paper introduces HyGen, an interference-aware LLM serving
system that enables efficient co-location of online and offline workloads while
preserving latency requirements. HyGen incorporates two key innovations: (1)
performance control mechanisms, including a latency predictor to estimate batch
execution time and an SLO-aware profiler to quantify latency interference, and
(2) SLO-aware offline scheduling policies that maximize serving throughput and
prevent starvation, without compromising online serving latency. Our evaluation
on production workloads shows that HyGen achieves up to 3.87x overall
throughput and 5.84x offline throughput gains over online and hybrid serving
baselines, respectively, while strictly satisfying latency SLOs.
|
2501.14809
|
Towards Foundation Models: Evaluation of Geoscience Artificial
Intelligence with Uncertainty
|
cs.LG cs.AI physics.geo-ph
|
Artificial intelligence (AI) has transformed the geoscience community with
deep learning models (DLMs) that are trained to complete specific tasks within
workflows. This success has led to the development of geoscience foundation
models (FMs), which promise to accomplish multiple tasks within a workflow or
replace the workflow altogether. However, lack of robust evaluation frameworks,
even for traditional DLMs, leaves the geoscience community ill-prepared for the
inevitable adoption of FMs. We address this gap by designing an evaluation
framework that jointly incorporates three aspects crucial to current DLMs and
future FMs: performance uncertainty, learning efficiency, and overlapping
training-test data splits. To target the three aspects, we meticulously
construct the training, validation, and test splits using clustering methods
tailored to geoscience data and enact an expansive training design to segregate
performance uncertainty arising from stochastic training processes and random
data sampling. The framework's ability to guard against misleading declarations
of model superiority is demonstrated through evaluation of PhaseNet, a popular
seismic phase picking DLM, under three training approaches. Furthermore, we show
how the performance gains due to overlapping training-test data can lead to
biased FM evaluation. Our framework helps practitioners choose the best model
for their problem and set performance expectations by explicitly analyzing
model performance at varying budgets of training data.
|
2501.14813
|
Dissertation Machine Learning in Materials Science -- A case study in
Carbon Nanotube field effect transistors
|
physics.app-ph cond-mat.mes-hall cs.LG physics.data-an
|
In this thesis, I explored the use of several machine learning techniques,
including neural networks, simulation-based inference, and generative flow
networks, for predicting CNTFET performance, probing the conductivity
properties of CNT networks, and generating CNTFET processing information for
target performance.
|
2501.14815
|
A VM-HDL Co-Simulation Framework for Systems with PCIe-Connected FPGAs
|
cs.DC cs.AI cs.AR cs.NI
|
PCIe-connected FPGAs are gaining popularity as an accelerator technology in
data centers. However, it is challenging to jointly develop and debug host
software and FPGA hardware. Changes to the hardware design require a
time-consuming FPGA synthesis process, and modification to the software,
especially the operating system and device drivers, can frequently cause the
system to hang, without providing enough information for debugging. The
combination of these problems results in long debug iterations and a slow
development process. To overcome these problems, we designed a VM-HDL
co-simulation framework, which is capable of running the same software,
operating system, and hardware designs as the target physical system, while
providing full visibility and significantly shorter debug iterations.
|
2501.14816
|
Jump Point Search Pathfinding in 4-connected Grids
|
cs.RO
|
This work introduces JPS4, a novel pathfinding algorithm for 4-connected grid
maps. JPS4 builds upon the Jump Point Search (JPS8) algorithm, originally
designed for 8-connected environments. To achieve efficient pathfinding on
4-connected grids, JPS4 employs a canonical ordering and a successor function
that enable online graph pruning. This reduces the search space by minimizing
unnecessary node expansions.
The core concept of both JPS4 and JPS8 lies in the use of jump
points. Strategically placed at obstacle corners, jump points prevent the
search from overlooking crucial sections of the state space. They essentially
reinitialize the canonical ordering, allowing exploration beyond obstacles.
This mechanism ensures JPS4 finds optimal paths even in complex environments.
The paper further explores the optimality of JPS4 and compares its
performance against the established A* algorithm on various grid maps.
Benchmarking results demonstrate that JPS4 significantly outperforms A* in
scenarios with high obstacle density. However, A* remains more efficient on
open maps. Overall, JPS4 presents itself as a promising alternative to A* for
pathfinding on 4-connected grids, particularly applicable in video game
development.
|
2501.14817
|
A Cutting Mechanics-based Machine Learning Modeling Method to Discover
Governing Equations of Machining Dynamics
|
cs.LG cs.CE
|
This paper proposes a cutting mechanics-based machine learning (CMML)
modeling method to discover governing equations of machining dynamics. The main
idea of CMML design is to integrate existing physics in cutting mechanics and
unknown physics in data to achieve automated model discovery, with the
potential to advance machining modeling. Based on existing physics in cutting
mechanics, CMML first establishes a general modeling structure governing
machining dynamics, which is represented by a set of unknown differential
algebraic equations. CMML can therefore achieve data-driven discovery of these
unknown equations through effective cutting mechanics-based nonlinear learning
function space design and discrete optimization-based learning algorithm.
Experimentally verified time domain simulation of milling is used to validate
the proposed modeling method. Numerical results show CMML can discover the
exact milling dynamics models with process damping and edge force from noisy
data. This indicates that CMML has the potential to be used for advancing
machining modeling in practice with the development of effective metrology
systems.
|
2501.14818
|
Eagle 2: Building Post-Training Data Strategies from Scratch for
Frontier Vision-Language Models
|
cs.CV cs.AI cs.LG
|
Recently, promising progress has been made by open-source vision-language
models (VLMs) in bringing their capabilities closer to those of proprietary
frontier models. However, most open-source models only publish their final
model weights, leaving the critical details of data strategies and
implementation largely opaque. In this work, we address VLM post-training from
a data-centric perspective, showing the key role of data strategy in developing
frontier VLMs. By studying and building our post-training data strategy from
scratch, we share detailed insights into the development processes, aiming to
benefit the development of competitive models for the open-source community.
Our introduced data strategy, together with training recipes and model design,
leads to a family of performant VLMs named Eagle2. Specifically, Eagle2-9B
achieves state-of-the-art results across various multimodal benchmarks,
matching certain competitive models with up to 70B parameters.
|
2501.14819
|
A Comprehensive Mathematical and System-Level Analysis of Autonomous
Vehicle Timelines
|
cs.MA cs.RO
|
Fully autonomous vehicles (AVs) continue to spark immense global interest,
yet predictions on when they will operate safely and broadly remain heavily
debated. This paper synthesizes two distinct research traditions: computational
complexity and algorithmic constraints versus reliability growth modeling and
real-world testing to form an integrated, quantitative timeline for future AV
deployment. We propose a mathematical framework that unifies NP-hard
multi-agent path planning analyses, high-performance computing (HPC)
projections, and extensive Crow-AMSAA reliability growth calculations,
factoring in operational design domain (ODD) variations, severity, and partial
vs. full domain restrictions. Through category-specific case studies (e.g.,
consumer automotive, robo-taxis, highway trucking, industrial and defense
applications), we show how combining HPC limitations, safety demonstration
requirements, production/regulatory hurdles, and parallel/serial test
strategies can push out the horizon for universal Level 5 deployment by up to
several decades. Conversely, more constrained ODDs, such as fenced industrial
sites or specialized defense operations, may see autonomy reach commercial
viability in the near-to-medium term. Our findings illustrate that while
targeted domains can achieve automated service sooner, widespread driverless
vehicles handling every environment remain far from realized. This paper thus
offers a unique and rigorous perspective on why AV timelines extend well beyond
short-term optimism, underscoring how each dimension of complexity and
reliability imposes its own multi-year delays. By quantifying these constraints
and exploring potential accelerators (e.g., advanced AI hardware,
infrastructure upgrades), we provide a structured baseline for researchers,
policymakers, and industry stakeholders to more accurately map their
expectations and investments in AV technology.
|
2501.14822
|
Controlling Ensemble Variance in Diffusion Models: An Application for
Reanalyses Downscaling
|
stat.AP cs.AI cs.LG
|
In recent years, diffusion models have emerged as powerful tools for
generating ensemble members in meteorology. In this work, we demonstrate that a
Denoising Diffusion Implicit Model (DDIM) can effectively control ensemble
variance by varying the number of diffusion steps. Introducing a theoretical
framework, we relate diffusion steps to the variance expressed by the reverse
diffusion process. Focusing on reanalysis downscaling, we propose an ensemble
diffusion model for the full ERA5-to-CERRA domain, generating
variance-calibrated ensemble members for wind speed at full spatial and
temporal resolution. Our method aligns global mean variance with a reference
ensemble dataset and ensures spatial variance is distributed in accordance with
observed meteorological variability. Additionally, we address the lack of
ensemble information in the CARRA dataset, showcasing the utility of our
approach for efficient, high-resolution ensemble generation.
|
2501.14823
|
Quantifying Energy and Cost Benefits of Hybrid Edge Cloud: Analysis of
Traditional and Agentic Workloads
|
cs.DC cs.AI
|
This paper examines the workload distribution challenges in centralized cloud
systems and demonstrates how Hybrid Edge Cloud (HEC) [1] mitigates these
inefficiencies. Workloads in cloud environments often follow a Pareto
distribution, where a small percentage of tasks consume most resources, leading
to bottlenecks and energy inefficiencies. By analyzing both traditional
workloads reflective of typical IoT and smart device usage and agentic
workloads, such as those generated by AI agents, robotics, and autonomous
systems, this study quantifies the energy and cost savings enabled by HEC. Our
findings reveal that HEC achieves energy savings of up to 75% and cost
reductions exceeding 80%, even in resource-intensive agentic scenarios. These
results highlight the critical role of HEC in enabling scalable,
cost-effective, and sustainable computing for the next generation of
intelligent systems.
|
2501.14824
|
A causal learning approach to in-orbit inertial parameter estimation for
multi-payload deployers
|
eess.SY astro-ph.IM cs.LG cs.RO cs.SY
|
This paper discusses an approach to inertial parameter estimation for
cargo-carrying spacecraft that is based on causal learning, i.e. learning from
the responses of the spacecraft under actuation. Different
spacecraft configurations (inertial parameter sets) are simulated under
different actuation profiles, in order to produce an optimised time-series
clustering classifier that can be used to distinguish between them. The
actuation is comprised of finite sequences of constant inputs that are applied
in order, based on typical actuators available. By learning from the system's
responses across multiple input sequences, and then applying measures of
time-series similarity and F1-score, an optimal actuation sequence can be
chosen either for one specific system configuration or for the overall set of
possible configurations. This allows for both estimation of the inertial
parameter set without any prior knowledge of state, as well as validation of
transitions between different configurations after a deployment event. The
optimisation of the actuation sequence is handled by a reinforcement learning
model that uses the proximal policy optimisation (PPO) algorithm, by repeatedly
trying different sequences and evaluating the impact on classifier performance
according to a multi-objective metric.
|
2501.14826
|
Multi-Modality Transformer for E-Commerce: Inferring User Purchase
Intention to Bridge the Query-Product Gap
|
cs.IR cs.AI cs.LG
|
E-commerce click-stream data and product catalogs offer critical user
behavior insights and product knowledge. This paper proposes a multi-modal
transformer, termed PINCER, that leverages the above data sources to
transform initial user queries into pseudo-product representations. By tapping
into these external data sources, our model can infer users' potential purchase
intent from their limited queries and capture query-relevant product features.
We demonstrate our model's superior performance over state-of-the-art
alternatives on e-commerce online retrieval in both controlled and real-world
experiments. Our ablation studies confirm that the proposed transformer
architecture and integrated learning strategies enable the mining of key data
sources to infer purchase intent, extract product features, and enhance the
transformation pipeline from queries to more accurate pseudo-product
representations.
|
2501.14828
|
An Ensemble Model with Attention Based Mechanism for Image Captioning
|
cs.CV cs.AI
|
Image captioning generates informative text from an input image by establishing
a relationship between the words and the actual content of the image. Recently,
deep learning models that utilize transformers have been the most successful in
automatically generating image captions. The capabilities of transformer
networks have led to notable progress in several activities related to vision.
In this paper, we thoroughly examine transformer models, emphasizing the
critical role that attention mechanisms play. The proposed model uses a
transformer encoder-decoder architecture to create textual captions and a deep
learning convolutional neural network to extract features from the images. To
create the captions, we present a novel ensemble learning framework that
improves the richness of the generated captions by utilizing several deep
neural network architectures based on a voting mechanism that chooses the
caption with the highest bilingual evaluation understudy (BLEU) score. The
proposed model was evaluated using publicly available datasets. Using the
Flickr8K dataset, the proposed model achieved the highest BLEU-[1-3] scores
with rates of 0.728, 0.495, and 0.323, respectively. The suggested model
outperformed the latest methods on the Flickr30k dataset, as determined by
BLEU-[1-4] scores of 0.798, 0.561, 0.387, and 0.269, respectively. The model's
efficacy was also assessed with the Semantic Propositional Image Caption
Evaluation (SPICE) metric, yielding scores of 0.164 on the Flickr8k dataset
and 0.387 on Flickr30k. Finally, ensemble learning significantly
advances the process of image captioning and, hence, can be leveraged in
various applications across different domains.
|
2501.14830
|
Sharp exact recovery threshold for two-community Euclidean random graphs
|
cs.SI math.PR
|
This paper considers the problem of label recovery in random graphs and
matrices. Motivated by transitive behavior in real-world networks (i.e., ``the
friend of my friend is my friend''), a recent line of work considers
spatially-embedded networks, which exhibit transitive behavior. In particular,
the Geometric Hidden Community Model (GHCM), introduced by Gaudio, Guan, Niu,
and Wei, models a network as a labeled Poisson point process where every pair
of vertices is associated with a pairwise observation whose distribution
depends on the labels and positions of the vertices. The GHCM is in turn a
generalization of the Geometric SBM (proposed by Baccelli and Sankararaman).
Gaudio et al. provided a threshold below which exact recovery is
information-theoretically impossible. Above the threshold, they provided a
linear-time algorithm that succeeds in exact recovery under a certain
``distinctness-of-distributions'' assumption, which they conjectured to be
unnecessary. In this paper, we partially resolve the conjecture by showing that
the threshold is indeed tight for the two-community GHCM. We provide a
two-phase, linear-time algorithm that explores the spatial graph in a
data-driven manner in Phase I to yield an almost exact labeling, which is
refined to achieve exact recovery in Phase II. Our results extend achievability
to geometric formulations of well-known inference problems, such as the planted
dense subgraph problem and submatrix localization, in which the
distinctness-of-distributions assumption does not hold.
|
2501.14836
|
Symbolic Knowledge Extraction and Injection with Sub-symbolic
Predictors: A Systematic Literature Review
|
cs.AI cs.LG cs.LO
|
In this paper we focus on the opacity issue of sub-symbolic machine learning
predictors by promoting two complementary activities, namely, symbolic
knowledge extraction (SKE) and injection (SKI) from and into sub-symbolic
predictors. We consider symbolic any language that is intelligible and
interpretable for both humans and computers. Accordingly, we propose general
meta-models for both SKE and SKI, along with two taxonomies for the
classification of SKE and SKI methods. By adopting an explainable artificial
intelligence (XAI) perspective, we highlight how such methods can be exploited
to mitigate the aforementioned opacity issue. Our taxonomies are attained by
surveying and classifying existing methods from the literature, following a
systematic approach, and by generalising the results of previous surveys
targeting specific sub-topics of either SKE or SKI alone. More precisely, we
analyse 132 methods for SKE and 117 methods for SKI, and we categorise them
according to their purpose, operation, expected input/output data and predictor
types. For each method, we also indicate the presence/lack of runnable software
implementations. Our work may be of interest to data scientists aiming to
select the most adequate SKE/SKI method for their needs, may serve as a guide
for researchers interested in filling gaps in the current state of the art,
and may help developers willing to implement SKE/SKI-based technologies.
|
2501.14837
|
A Semiparametric Bayesian Method for Instrumental Variable Analysis with
Partly Interval-Censored Time-to-Event Outcome
|
stat.ME cs.LG stat.AP stat.CO stat.ML
|
This paper develops a semiparametric Bayesian instrumental variable analysis
method for estimating the causal effect of an endogenous variable when dealing
with unobserved confounders and measurement errors with partly
interval-censored time-to-event data, where event times are observed exactly
for some subjects but left-censored, right-censored, or interval-censored for
others. Our method is based on a two-stage Dirichlet process mixture
instrumental variable (DPMIV) model which simultaneously models the first-stage
random error term for the exposure variable and the second-stage random error
term for the time-to-event outcome using a bivariate Gaussian mixture of the
Dirichlet process (DPM) model. The DPM model can be broadly understood as a
mixture model with an unspecified number of Gaussian components, which relaxes
the normal error assumptions and allows the number of mixture components to be
determined by the data. We develop an MCMC algorithm for the DPMIV model
tailored for partly interval-censored data and conduct extensive simulations to
assess the performance of our DPMIV method in comparison with some competing
methods. Our simulations revealed that our proposed method is robust under
different error distributions and can have superior performance over its
parametric counterpart under various scenarios. We further demonstrate the
effectiveness of our approach on UK Biobank data to investigate the causal
effect of systolic blood pressure on time-to-development of cardiovascular
disease from the onset of diabetes mellitus.
|
2501.14844
|
Unmasking Conversational Bias in AI Multiagent Systems
|
cs.CL cs.AI cs.MA
|
Detecting biases in the outputs produced by generative models is essential to
reduce the potential risks associated with their application in critical
settings. However, the majority of existing methodologies for identifying
biases in generated text consider the models in isolation and neglect their
contextual applications. Specifically, the biases that may arise in multi-agent
systems involving generative models remain under-researched. To address this
gap, we present a framework designed to quantify biases within multi-agent
systems of conversational Large Language Models (LLMs). Our approach involves
simulating small echo chambers, where pairs of LLMs, initialized with aligned
perspectives on a polarizing topic, engage in discussions. Contrary to
expectations, we observe significant shifts in the stance expressed in the
generated messages, particularly within echo chambers where all agents
initially express conservative viewpoints, in line with the well-documented
political bias of many LLMs toward liberal positions. Crucially, the bias
observed in the echo-chamber experiment remains undetected by current
state-of-the-art bias detection methods that rely on questionnaires. This
highlights a critical need for the development of a more sophisticated toolkit
for bias detection and mitigation for AI multi-agent systems. The code to
perform the experiments is publicly available at
https://anonymous.4open.science/r/LLMsConversationalBias-7725.
|
2501.14846
|
Wormhole Memory: A Rubik's Cube for Cross-Dialogue Retrieval
|
cs.LG cs.AI cs.CL
|
To address the current inability of large language models to share memory
across dialogues, this research proposes a wormhole memory module (WMM) that
treats memory as a Rubik's cube whose contents can be arbitrarily retrieved
across different dialogues. Through simulation experiments, the researchers
built an experimental framework in a Python environment and used memory
barriers to simulate the current situation in which memories are difficult to
share across LLM dialogues. The CoQA development set was imported into the
experiment, the feasibility of WMM's cross-dialogue memory retrieval via
nonlinear indexing and dynamic retrieval was verified, and a comparative
analysis was conducted against the Titans and MemGPT memory modules.
Experimental results show that WMM can retrieve memory across dialogues and
that its quantitative indicators remained stable across eight experiments.
This work contributes new technical approaches to optimizing LLM memory
management and offers groundwork for future practical applications.
|