| id | title | categories | abstract |
|---|---|---|---|
2501.06948
|
The Einstein Test: Towards a Practical Test of a Machine's Ability to
Exhibit Superintelligence
|
cs.AI
|
Creative and disruptive insights (CDIs), such as the development of the
theory of relativity, have punctuated human history, marking pivotal shifts in
our intellectual trajectory. Recent advancements in artificial intelligence
(AI) have sparked debates over whether state-of-the-art models possess the
capacity to generate CDIs. We argue that the ability to create CDIs should be
regarded as a significant feature of machine superintelligence (SI). To this
end, we propose a practical test to evaluate whether an approach to AI
targeting SI can yield novel insights of this kind: the Einstein test. Given
the data available prior to the emergence of a known CDI, can an AI
independently reproduce that insight (or one that is formally equivalent)? By
achieving such a milestone, a machine can be considered to at least match
humanity's past top intellectual achievements, and therefore to have the
potential to surpass them.
|
2501.06954
|
A Hessian-informed hyperparameter optimization for differential learning
rate
|
cs.LG
|
Differential learning rate (DLR), a technique that applies different learning
rates to different model parameters, has been widely used in deep learning and
achieved empirical success via its various forms. For example,
parameter-efficient fine-tuning (PEFT) applies zero learning rates to most
parameters, thereby significantly reducing the computational cost.
At the core, DLR leverages the observation that different parameters can have
different loss curvature, which is hard to characterize in general. We propose
the Hessian-informed differential learning rate (Hi-DLR), an efficient approach
that solves the hyperparameter optimization (HPO) of learning rates and
captures the loss curvature for any model and optimizer adaptively. Given a
proper grouping of parameters, we empirically demonstrate that Hi-DLR can
improve convergence by dynamically determining learning rates during
training. Furthermore, we can quantify the influence of different
parameters and freeze the less-contributing parameters, which leads to a new
PEFT that automatically adapts to various tasks and models. Additionally,
Hi-DLR also exhibits comparable performance on various full model training
tasks.
|
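The curvature-based intuition behind DLR can be pictured on a toy quadratic loss. The Hi-DLR algorithm itself is not specified in the abstract, so the inverse-curvature learning rates below are only an illustrative stand-in for the general idea that per-parameter rates should reflect local curvature:

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * sum(h_i * w_i^2) whose coordinates have
# very different curvatures -- the situation DLR targets.
h = np.array([100.0, 1.0])   # Hessian diagonal (per-parameter curvature)
w = np.array([1.0, 1.0])     # initial parameters

# Curvature-informed differential learning rates: lr_i = 1 / h_i.
# (Illustrative only; Hi-DLR estimates curvature adaptively per group.)
lr = 1.0 / h

for _ in range(5):
    grad = h * w             # gradient of the quadratic loss
    w = w - lr * grad        # per-parameter update

print(np.allclose(w, 0.0))   # True: each coordinate converges immediately
```

With a single global rate large enough for the flat coordinate (lr = 1), the sharp coordinate (h = 100) would diverge; avoiding that trade-off is the point of differential rates.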
2501.06956
|
Patent Novelty Assessment Accelerating Innovation and Patent Prosecution
|
cs.DL cs.AI cs.IR
|
In the rapidly evolving landscape of technological innovation, safeguarding
intellectual property rights through patents is crucial for fostering progress
and stimulating research and development investments. This report introduces a
Patent Novelty Assessment and Claim Generation System designed to dissect the
inventive aspects of intellectual property and simplify access to extensive
patent claim data. Addressing a crucial gap in
academic institutions, our system provides college students and researchers
with an intuitive platform to navigate and grasp the intricacies of patent
claims, particularly tailored for the nuances of Chinese patents. Unlike
conventional analysis systems, our initiative harnesses a proprietary Chinese
API to ensure unparalleled precision and relevance. The primary challenge lies
in the complexity of accessing and comprehending diverse patent claims,
inhibiting effective innovation upon existing ideas. Our solution aims to
overcome these barriers by offering a bespoke approach that seamlessly
retrieves comprehensive claim information, finely tuned to the specifics of the
Chinese patent landscape. By equipping users with efficient access to
comprehensive patent claim information, our transformative platform seeks to
ignite informed exploration and innovation in the ever-evolving domain of
intellectual property. Its envisioned impact transcends individual colleges,
nurturing an environment conducive to research and development while deepening
the understanding of patented concepts within the academic community.
|
2501.06959
|
Sanidha: A Studio Quality Multi-Modal Dataset for Carnatic Music
|
cs.SD cs.DL cs.LG eess.AS
|
Music source separation demixes a piece of music into its individual sound
sources (vocals, percussion, melodic instruments, etc.), a task with no simple
mathematical solution. It requires deep learning methods involving training on
large datasets of isolated music stems. The most commonly available datasets
are made from commercial Western music, limiting the models' applications to
non-Western genres like Carnatic music. Carnatic music is a live tradition,
with the available multi-track recordings containing overlapping sounds and
bleeds between the sources. This poses a challenge to commercially available
source separation models like Spleeter and Hybrid Demucs. In this work, we
introduce 'Sanidha', the first open-source dataset for Carnatic music,
offering studio-quality, multi-track recordings with minimal to no overlap or
bleed. Along with the audio files, we provide high-definition videos of the
artists' performances. Additionally, we fine-tuned Spleeter, one of the most
commonly used source separation models, on our dataset and observed improved
SDR performance compared to fine-tuning on a pre-existing Carnatic multi-track
dataset. The outputs of the fine-tuned model with 'Sanidha' are evaluated
through a listening study.
|
2501.06962
|
Compact Bayesian Neural Networks via pruned MCMC sampling
|
cs.LG cs.AI
|
Bayesian Neural Networks (BNNs) offer robust uncertainty quantification in
model predictions, but training them presents a significant computational
challenge. This is mainly due to the problem of sampling multimodal posterior
distributions using Markov Chain Monte Carlo (MCMC) sampling and variational
inference algorithms. Moreover, the number of model parameters scales
exponentially with additional hidden layers, neurons, and features in the
dataset. Typically, a significant portion of these densely connected parameters
are redundant and pruning a neural network not only improves portability but
also has the potential for better generalisation capabilities. In this study,
we address some of the challenges by leveraging MCMC sampling with network
pruning to obtain compact probabilistic models with redundant parameters
removed. We sample the posterior distribution of model parameters (weights
and biases) and prune weights with low importance, resulting in a compact
model. We ensure that the compact BNN retains its ability to estimate
uncertainty via the posterior distribution while retaining training and
generalisation accuracy through post-pruning resampling. We
evaluate the effectiveness of our MCMC pruning strategy on selected benchmark
datasets for regression and classification problems through empirical result
analysis. We also consider two coral reef drill-core lithology classification
datasets to test the robustness of the pruning model in complex real-world
datasets. We further investigate whether refining the compact BNN can recover
any loss of performance. Our results demonstrate the feasibility of training and pruning
BNNs using MCMC whilst retaining generalisation performance with over 75%
reduction in network size. This paves the way for developing compact BNN models
that provide uncertainty estimates for real-world applications.
|
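The abstract does not state the pruning criterion, so the sketch below uses one common choice for posterior samples, the signal-to-noise ratio |mean| / std, on synthetic draws standing in for MCMC output (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for MCMC posterior samples of 8 weights (500 draws each).
# In the paper these would come from sampling a BNN's posterior.
true_means = [2.0, 0.0, -1.5, 0.1, 3.0, 0.0, 0.05, -2.0]
samples = rng.normal(loc=true_means, scale=0.3, size=(500, 8))

# Importance score: posterior signal-to-noise ratio |mean| / std.
snr = np.abs(samples.mean(axis=0)) / samples.std(axis=0)

# Prune the low-importance weights; the survivors form the compact model.
keep = snr > 1.0
compact = np.where(keep, samples.mean(axis=0), 0.0)

print(int(keep.sum()), "of", keep.size, "weights kept")
```

Weights whose posterior mass sits near zero relative to its spread are zeroed out, which is the "compact model" idea; the paper's post-pruning resampling step is not shown here.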
2501.06963
|
Generative Artificial Intelligence-Supported Pentesting: A Comparison
between Claude Opus, GPT-4, and Copilot
|
cs.CR cs.AI
|
The advent of Generative Artificial Intelligence (GenAI) has brought a
significant change to our society. GenAI can be applied across numerous fields,
with particular relevance in cybersecurity. Among the various areas of
application, its use in penetration testing (pentesting) or ethical hacking
processes is of special interest. In this paper, we analyze the potential
of leading general-purpose GenAI tools (Claude Opus, GPT-4 from ChatGPT, and
Copilot) in augmenting the penetration testing process as defined by the
Penetration Testing Execution Standard (PTES). Our analysis involved evaluating
each tool across all PTES phases within a controlled virtualized environment.
The findings reveal that, while these tools cannot fully automate the
pentesting process, they provide substantial support by enhancing efficiency
and effectiveness in specific tasks. Notably, all tools demonstrated utility;
however, Claude Opus consistently outperformed the others in our experimental
scenarios.
|
2501.06964
|
Enhancing Patient-Centric Communication: Leveraging LLMs to Simulate
Patient Perspectives
|
cs.AI cs.HC
|
Large Language Models (LLMs) have demonstrated impressive capabilities in
role-playing scenarios, particularly in simulating domain-specific experts
using tailored prompts. This ability enables LLMs to adopt the persona of
individuals with specific backgrounds, offering a cost-effective and efficient
alternative to traditional, resource-intensive user studies. By mimicking human
behavior, LLMs can anticipate responses based on concrete demographic or
professional profiles. In this paper, we evaluate the effectiveness of LLMs in
simulating individuals with diverse backgrounds and analyze the consistency of
these simulated behaviors compared to real-world outcomes. In particular, we
explore the potential of LLMs to interpret and respond to discharge summaries
provided to patients leaving the Intensive Care Unit (ICU). We evaluate the
comprehensibility of discharge summaries among individuals with varying
educational backgrounds, compare the results with human responses, and use this analysis to assess
the strengths and limitations of LLM-driven simulations. Notably, when LLMs are
primed with educational background information, they deliver accurate and
actionable medical guidance 88% of the time. However, when other information is
provided, performance significantly drops, falling below random chance levels.
This preliminary study shows the potential benefits and pitfalls of
automatically generating patient-specific health information from diverse
populations. While LLMs show promise in simulating health personas, our results
highlight critical gaps that must be addressed before they can be reliably used
in clinical settings. Our findings suggest that a straightforward
query-response model could outperform a more tailored approach in delivering
health information. This is a crucial first step in understanding how LLMs can
be optimized for personalized health communication while maintaining accuracy.
|
2501.06965
|
Kolmogorov-Arnold Recurrent Network for Short Term Load Forecasting
Across Diverse Consumers
|
cs.LG cs.AI eess.SP
|
Load forecasting plays a crucial role in energy management, directly
impacting grid stability, operational efficiency, cost reduction, and
environmental sustainability. Traditional Vanilla Recurrent Neural Networks
(RNNs) face issues such as vanishing and exploding gradients, whereas
sophisticated RNNs such as LSTMs have shown considerable success in this
domain. However, these models often struggle to accurately capture complex and
sudden variations in energy consumption, and their applicability is typically
limited to specific consumer types, such as offices or schools. To address
these challenges, this paper proposes the Kolmogorov-Arnold Recurrent Network
(KARN), a novel load forecasting approach that combines the flexibility of
Kolmogorov-Arnold Networks with RNN's temporal modeling capabilities. KARN
utilizes learnable temporal spline functions and edge-based activations to
better model non-linear relationships in load data, making it adaptable across
a diverse range of consumer types. The proposed KARN model was rigorously
evaluated on a variety of real-world datasets, including student residences,
detached homes, a home with electric vehicle charging, a townhouse, and
industrial buildings. Across all these consumer categories, KARN consistently
outperformed traditional Vanilla RNNs, and it surpassed LSTMs and Gated
Recurrent Units (GRUs) in six buildings. The results demonstrate KARN's
superior accuracy and applicability, making it a promising tool for enhancing
load forecasting in diverse energy management scenarios.
|
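The "learnable temporal spline functions" on KARN's edges can be pictured with a minimal stand-in: a piecewise-linear function on a fixed grid whose coefficients are fit by least squares. The actual KARN parameterization is not given in the abstract, so this only illustrates the idea of an edge carrying a learnable 1D function:

```python
import numpy as np

# A hat (triangular) basis on a fixed grid: a piecewise-linear stand-in
# for the learnable spline activation carried by a KAN-style edge.
grid = np.linspace(-1, 1, 9)

def hat_basis(x):
    """Evaluate each triangular basis function at the points x."""
    d = 1.0 - np.abs(x[:, None] - grid[None, :]) / (grid[1] - grid[0])
    return np.clip(d, 0.0, None)

x = np.linspace(-1, 1, 200)
target = np.sin(3 * x)           # nonlinearity this edge should learn

B = hat_basis(x)
coef, *_ = np.linalg.lstsq(B, target, rcond=None)  # "training" the edge

err = np.max(np.abs(B @ coef - target))
print(err < 0.15)                # the piecewise-linear fit tracks the target
```

Because the shape of the function itself is learned, rather than a single weight on a fixed nonlinearity, such edges can adapt to sharp or sudden variations of the kind the abstract highlights in load data.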
2501.06970
|
Next-Gen Space-Based Surveillance: Blockchain for Trusted and Efficient
Debris Tracking
|
cs.IT math.IT
|
This paper presents a novel blockchain-enabled architecture for efficient
decentralized space surveillance. Our simulation results indicate that a
network of under 30 nodes achieves optimal throughput and response time. We also
compare our architecture with a fully participatory consensus model, where all
nodes perform both verification and approval tasks. Across all scenarios, our
approach demonstrates a 9x improvement in both throughput and response time
compared to the full participatory consensus, highlighting the efficiency gains
achieved by assigning dedicated roles for verification and approval. Future
work will explore the impact of faulty nodes and potential security threats on
network performance.
|
2501.06974
|
Downlink OFDM-FAMA in 5G-NR Systems
|
cs.IT eess.SP math.IT
|
Fluid antenna multiple access (FAMA), enabled by the fluid antenna system
(FAS), offers a new and straightforward solution to massive connectivity.
Previous results on FAMA were primarily based on narrowband channels. This
paper studies the adoption of FAMA within the fifth-generation (5G) orthogonal
frequency division multiplexing (OFDM) framework, referred to as OFDM-FAMA, and
evaluates its performance in broadband multipath channels. We first design the
OFDM-FAMA system, taking into account 5G channel coding and OFDM modulation.
Then the system's achievable rate is analyzed, and an algorithm to approximate
the FAS configuration at each user is proposed based on the rate. Extensive
link-level simulation results reveal that OFDM-FAMA can significantly improve
the multiplexing gain over the OFDM system with fixed-position antenna (FPA)
users, especially when robust channel coding is applied and the number of
radio-frequency (RF) chains at each user is small.
|
2501.06976
|
TensorConvolutionPlus: A python package for distribution system
flexibility area estimation
|
cs.SE cs.SY eess.SY
|
Power system operators need new, efficient operational tools to use the
flexibility of distributed resources and deal with the challenges of highly
uncertain and variable power systems. Transmission system operators can
consider the available flexibility in distribution systems (DSs) without
breaching the DS constraints through flexibility areas. However, there is an
absence of open-source packages for flexibility area estimation. This paper
introduces TensorConvolutionPlus, a user-friendly Python-based package for
flexibility area estimation. The main features of TensorConvolutionPlus include
estimating flexibility areas using the TensorConvolution+ algorithm, a power
flow (PF)-based algorithm, an exhaustive PF-based algorithm, and an optimal power
flow-based algorithm. Additional features include adapting flexibility area
estimations from different operating conditions and including flexibility
service providers offering discrete setpoints of flexibility. The
TensorConvolutionPlus package facilitates broader adoption of flexibility
estimation algorithms by system operators and power system researchers.
|
2501.06978
|
Towards a visually interpretable analysis of Two-Phase Locking
membership
|
cs.DB
|
Two-phase locking (2PL) is a well-established policy commonly adopted by Database
Management Systems to enforce serializability of a schedule. While the policy
is well understood, both in its standard and in the strict version,
automatically deriving a suitable tabular/graphical analysis of schedules with
respect to 2PL is far from trivial, and requires several technicalities that do
not straightforwardly translate to visual cues. In this paper, we delve into
the details of the development of a tool for 2PL analysis.
|
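One ingredient of such an analysis can be sketched directly: per transaction, every lock acquisition must precede the first unlock (a growing phase followed by a shrinking phase). The checker below assumes explicit lock/unlock operations; a full membership tool, as described above, additionally has to derive suitable lock points from a plain read/write schedule:

```python
def is_two_phase(ops):
    """ops: list of (txn, action, item) with action in {'lock', 'unlock'}."""
    shrinking = set()            # transactions that already released a lock
    for txn, action, _item in ops:
        if action == "lock" and txn in shrinking:
            return False         # lock after an unlock: violates 2PL
        if action == "unlock":
            shrinking.add(txn)
    return True

ok = [("T1", "lock", "x"), ("T1", "lock", "y"),
      ("T1", "unlock", "x"), ("T1", "unlock", "y")]
bad = [("T1", "lock", "x"), ("T1", "unlock", "x"), ("T1", "lock", "y")]
print(is_two_phase(ok), is_two_phase(bad))   # True False
```

The gap between this ten-line rule and a helpful visual analysis (where lock points must be constructed and conflicts displayed) is exactly the non-triviality the abstract points at.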
2501.06980
|
Combining LLM decision and RL action selection to improve RL policy for
adaptive interventions
|
cs.LG cs.AI
|
Reinforcement learning (RL) is increasingly being used in the healthcare
domain, particularly for the development of personalized health adaptive
interventions. Inspired by the success of Large Language Models (LLMs), we are
interested in using LLMs to update the RL policy in real time, with the goal of
accelerating personalization. We use text-based user preferences to influence
action selection on the fly, so that they are incorporated immediately. We use
the term "user preference" broadly to refer to a user's personal preference,
constraint, health status, or a statement expressing like or dislike. Our
novel approach is a hybrid method that
combines the LLM response and the RL action selection to improve the RL policy.
Given an LLM prompt that incorporates the user preference, the LLM acts as a
filter in the typical RL action selection. We investigate different prompting
strategies and action selection strategies. To evaluate our approach, we
implement a simulation environment that generates the text-based user
preferences and models the constraints that impact behavioral dynamics. We show
that our approach is able to take into account the text-based user preferences,
while improving the RL policy, thus improving personalization in adaptive
intervention.
|
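The hybrid scheme described above (the LLM as a filter on the RL action selection) can be sketched as follows. The action names and Q-values are invented, and a keyword rule stands in for an actual LLM call, which the paper would make with a preference-bearing prompt:

```python
# RL side: a policy's estimated values for candidate actions (illustrative).
q_values = {"send_morning_reminder": 0.9,
            "send_evening_reminder": 0.7,
            "no_message": 0.2}

def llm_allows(action, preference):
    # Stand-in for prompting an LLM with the user preference; a real system
    # would parse a model response here rather than match keywords.
    return not ("morning" in preference and "morning" in action)

preference = "please, no messages in the morning"

# Hybrid selection: the LLM vetoes conflicting actions, then the RL policy
# picks the highest-value action among those that remain.
allowed = [a for a in q_values if llm_allows(a, preference)]
best = max(allowed, key=q_values.get)
print(best)   # send_evening_reminder
```

The RL policy still drives learning; the filter only reshapes the feasible action set so a stated preference takes effect immediately instead of after many updates.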
2501.06981
|
Data Enrichment Work and AI Labor in Latin America and the Caribbean
|
cs.CY cs.AI cs.HC
|
The global AI surge demands crowdworkers from diverse languages and cultures.
They are pivotal in labeling data for enabling global AI systems. Despite
global significance, research has primarily focused on understanding the
perspectives and experiences of US and Indian crowdworkers, leaving a notable
gap. To bridge this, we conducted a survey with 100 crowdworkers across 16
Latin American and Caribbean countries. We discovered that these workers
exhibited pride and respect for their digital labor, with strong support and
admiration from their families. Notably, crowd work was also seen as a stepping
stone to financial and professional independence. Surprisingly, despite wanting
more connection, these workers also felt isolated from peers and doubtful of
others' labor quality. They resisted collaboration and gender-based tools,
valuing gender-neutrality. Our work advances HCI understanding of Latin
American and Caribbean crowdwork, offering insights for digital resistance
tools for the region.
|
2501.06985
|
Graph Contrastive Learning on Multi-label Classification for
Recommendations
|
cs.IR cs.AI
|
In business analysis, providing effective recommendations is essential for
enhancing company profits. The utilization of graph-based structures, such as
bipartite graphs, has gained popularity for their ability to analyze complex
data relationships. Link prediction is crucial for recommending specific items
to users. Traditional methods in this area often involve identifying patterns
in the graph structure or using representational techniques like graph neural
networks (GNNs). However, these approaches encounter difficulties as the volume
of data increases. To address these challenges, we propose a model called Graph
Contrastive Learning for Multi-label Classification (MCGCL). MCGCL leverages
contrastive learning to enhance recommendation effectiveness. The model
incorporates two training stages: a main task and a subtask. The main task is
holistic user-item graph learning to capture user-item relationships. The
homogeneous user-user (item-item) subgraph is constructed to capture user-user
and item-item relationships in the subtask. We assessed the performance using
real-world datasets from Amazon Reviews in multi-label classification tasks.
Comparative experiments with state-of-the-art methods confirm the effectiveness
of MCGCL, highlighting its potential for improving recommendation systems.
|
2501.06986
|
LEO: Boosting Mixture of Vision Encoders for Multimodal Large Language
Models
|
cs.CV cs.CL
|
Enhanced visual understanding serves as a cornerstone for multimodal large
language models (MLLMs). Recent hybrid MLLMs incorporate a mixture of vision
experts to address the limitations of using a single vision encoder and
excessively long visual tokens. Despite the progress of these MLLMs, a research
gap remains in effectively integrating diverse vision encoders. This work
explores fusion strategies of visual tokens for hybrid MLLMs, leading to the
design of LEO, a novel MLLM with a dual-branch vision encoder framework that
incorporates a post-adaptation fusion strategy and adaptive tiling: for each
segmented tile of the input images, LEO sequentially interleaves the visual
tokens from its two vision encoders. Extensive evaluation across 13
vision-language benchmarks reveals that LEO outperforms state-of-the-art
open-source MLLMs and hybrid MLLMs on the majority of tasks. Furthermore, we
show that LEO can be adapted to the specialized domain of autonomous driving
without altering the model architecture or training recipe, achieving
competitive performance compared to existing baselines. The code and model will
be publicly available.
|
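The abstract says LEO "sequentially interleaves the visual tokens from its two vision encoders" per tile but does not fix the granularity, so the token-level alternation below is just one reading, with placeholder strings standing in for encoder outputs:

```python
def interleave(tokens_a, tokens_b):
    """Alternate tokens from the two encoders for one image tile."""
    out = []
    for a, b in zip(tokens_a, tokens_b):
        out.extend([a, b])
    return out

# Placeholder tokens for one segmented tile from each vision encoder.
tile_tokens_enc1 = ["a0", "a1", "a2"]
tile_tokens_enc2 = ["b0", "b1", "b2"]
print(interleave(tile_tokens_enc1, tile_tokens_enc2))
# ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
```

Whatever the exact granularity, the design goal is the same: both encoders' views of each tile arrive adjacently in the fused sequence the language model consumes.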
2501.06987
|
Hand-Object Contact Detection using Grasp Quality Metrics
|
cs.RO
|
We propose a novel hand-object contact detection system based on grasp
quality metrics extracted from object and hand poses, and evaluate its
performance using the DexYCB dataset. Our evaluation demonstrates the system's
high accuracy (approaching 90%). Future work will focus on a real-time
implementation using vision-based estimation and on integration into a
robot-to-human handover system.
|
2501.06988
|
Fully Differentiable Boundary Element Solver for Hydrodynamic
Sensitivity Analysis of Wave-Structure Interactions
|
cs.CE
|
Accurately predicting wave-structure interactions is critical for the
effective design and analysis of marine structures. This is typically achieved
using solvers that employ the boundary element method (BEM), which relies on
linear potential flow theory. Precise estimation of the sensitivity of these
interactions is equally important for system-level applications such as design
optimization. Current BEM solvers are unable to provide these sensitivities as
they are not differentiable. To address these challenges, we have developed a
fully-differentiable BEM solver for marine hydrodynamics, capable of
calculating diffraction and radiation coefficients, and their derivatives with
high accuracy. This new solver implements both direct and indirect BEM
formulations and incorporates two Green's function expressions, offering a
trade-off between accuracy and computational speed. Gradients are computed
using reverse-mode automatic differentiation (AD) within the Julia programming
language. As a first case study, we analyze two identical floating spheres,
evaluating gradients with respect to physical dimensions, inter-sphere
distance, and wave frequency. Validation studies demonstrate excellent
agreement between AD-computed gradients and finite-difference results. In a
second case study, we leverage AD-computed gradients to optimize the mechanical
power production of a pair of wave energy converters (WECs). This represents
the first application of gradients in WEC power optimization, offering valuable
insights into hydrodynamic interactions and advancing the understanding of
layout optimization for maximum efficiency. Beyond power optimization, the
differentiable BEM solver highlights the potential of AD for offshore design
studies.
|
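The paper's validation step, checking AD-computed gradients against finite differences, can be mimicked in miniature. The paper uses reverse-mode AD in Julia on hydrodynamic coefficients; this sketch uses a tiny forward-mode dual-number class on an arbitrary scalar function, which is enough to show the agreement being tested:

```python
# Minimal forward-mode AD via dual numbers (value, derivative).
class Dual:
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__

def f(x):                         # stand-in for a hydrodynamic coefficient
    return 3 * x * x + 2 * x

x0 = 1.5
ad = f(Dual(x0, 1.0)).d           # AD derivative: 6*x0 + 2 = 11.0

h = 1e-6                          # central finite difference for comparison
fd = (f(Dual(x0 + h)).v - f(Dual(x0 - h)).v) / (2 * h)
print(abs(ad - fd) < 1e-4)        # AD matches finite differences
```

AD gives machine-precision derivatives in one pass, where finite differences require a step-size choice and repeated solves; that is what makes a differentiable BEM solver attractive for design optimization.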
2501.06994
|
Motion Tracks: A Unified Representation for Human-Robot Transfer in
Few-Shot Imitation Learning
|
cs.RO cs.AI cs.LG
|
Teaching robots to autonomously complete everyday tasks remains a challenge.
Imitation Learning (IL) is a powerful approach that imbues robots with skills
via demonstrations, but is limited by the labor-intensive process of collecting
teleoperated robot data. Human videos offer a scalable alternative, but it
remains difficult to directly train IL policies from them due to the lack of
robot action labels. To address this, we propose to represent actions as
short-horizon 2D trajectories on an image. These actions, or motion tracks,
capture the predicted direction of motion for either human hands or robot
end-effectors. We instantiate an IL policy called Motion Track Policy (MT-pi)
which receives image observations and outputs motion tracks as actions. By
leveraging this unified, cross-embodiment action space, MT-pi completes tasks
with high success given just minutes of human video and limited additional
robot demonstrations. At test time, we predict motion tracks from two camera
views, recovering 6DoF trajectories via multi-view synthesis. MT-pi achieves an
average success rate of 86.5% across 4 real-world tasks, outperforming
state-of-the-art IL baselines which do not leverage human data or our action
space by 40%, and generalizes to scenarios seen only in human videos. Code and
videos are available on our website
https://portal-cornell.github.io/motion_track_policy/.
|
2501.06999
|
Likelihood Training of Cascaded Diffusion Models via Hierarchical
Volume-preserving Maps
|
cs.LG cs.AI
|
Cascaded models are multi-scale generative models with a marked capacity for
producing perceptually impressive samples at high resolutions. In this work, we
show that they can also be excellent likelihood models, so long as we overcome
a fundamental difficulty with probabilistic multi-scale models: the
intractability of the likelihood function. Chiefly, in cascaded models each
intermediate scale introduces extraneous variables that cannot be tractably
marginalized out for likelihood evaluation. This issue vanishes by modeling the
diffusion process on latent spaces induced by a class of transformations we
call hierarchical volume-preserving maps, which decompose spatially structured
data in a hierarchical fashion without introducing local distortions in the
latent space. We demonstrate that two such maps are well-known in the
literature for multiscale modeling: Laplacian pyramids and wavelet transforms.
Such reparameterizations not only allow the likelihood function to be
directly expressed as a joint likelihood over the scales; we also show that the
Laplacian pyramid and wavelet transform produce significant improvements
to the state of the art on a selection of benchmarks in likelihood modeling,
including density estimation, lossless compression, and out-of-distribution
detection. Investigating the theoretical basis of our empirical gains we
uncover deep connections to score matching under the Earth Mover's Distance
(EMD), which is a well-known surrogate for perceptual similarity. Code can be
found at \href{https://github.com/lihenryhfl/pcdm}{this https url}.
|
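The key property the paper relies on, exact invertibility of the multi-scale map, can be shown with a one-level average/residual "pyramid" on a 1D signal. Real Laplacian pyramids and wavelet transforms use proper filters, so this is only a structural sketch:

```python
import numpy as np

# One level of a Laplacian-pyramid-style decomposition: a coarse band plus
# a detail (residual) band, with exact reconstruction. Exactness is what
# lets the likelihood factorize cleanly over scales.

def down(x):                  # coarse band: average adjacent pairs
    return x.reshape(-1, 2).mean(axis=1)

def up(c):                    # naive upsample: repeat each coarse sample
    return np.repeat(c, 2)

x = np.array([1.0, 3.0, 2.0, 6.0])
coarse = down(x)              # [2., 4.]
detail = x - up(coarse)       # residual carries what the coarse band lost

x_rec = up(coarse) + detail   # exact inverse: add the residual back
print(np.allclose(x_rec, x))  # True
```

Because the map (coarse, detail) loses no information, modeling the diffusion on these bands leaves the data likelihood well defined, in contrast to the extraneous per-scale variables of a plain cascade.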
2501.07000
|
Multiple-gain Estimation for Running Time of Evolutionary Combinatorial
Optimization
|
cs.NE
|
The running-time analysis of evolutionary combinatorial optimization is a
fundamental topic in evolutionary computation. Its current research mainly
focuses on specific algorithms for simplified problems due to the challenge
posed by fluctuating fitness values. This paper proposes a multiple-gain model
to estimate the fitness trend of the population during iterations. The proposed
model is an improved version of the average gain model, an approach for
estimating the running time of evolutionary algorithms for numerical
optimization. The improvement yields novel results for evolutionary
combinatorial optimization, including a more concise proof of the time complexity
upper bound for the (1+1) EA on the OneMax problem, two tighter time
complexity upper bounds than the known results in the case of (1+$\lambda$) EA
for the knapsack problem with favorably correlated weights and a closed-form
expression of time complexity upper bound in the case of (1+$\lambda$) EA for
general $k$-MAX-SAT problems. The results indicate that the practical running
time aligns with the theoretical results, verifying that the multiple-gain
model is more general for running-time analysis of evolutionary combinatorial
optimization than state-of-the-art methods.
|
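The simplest setting analyzed above, the (1+1) EA on OneMax, is easy to run directly: flip each bit independently with probability 1/n and accept the offspring if it is not worse. The sketch below only demonstrates the process whose expected hitting time the gain models bound; it proves nothing about the bounds themselves:

```python
import random

# (1+1) EA on OneMax: maximize the number of ones in a length-n bit string.
random.seed(1)
n = 30
x = [0] * n

steps = 0
while sum(x) < n:
    # Standard bit mutation: flip each bit independently with prob. 1/n.
    y = [b ^ (random.random() < 1 / n) for b in x]
    if sum(y) >= sum(x):      # elitist acceptance: keep if not worse
        x = y
    steps += 1

print(sum(x) == n)            # True: the optimum is reached
```

The classical upper bound for this process is O(n log n) expected iterations; the multiple-gain model described above gives, among other results, a shorter route to that kind of bound.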
2501.07005
|
Global Search for Optimal Low Thrust Spacecraft Trajectories using
Diffusion Models and the Indirect Method
|
eess.SY cs.LG cs.SY math.OC
|
Global search for long-duration, low-thrust, nonlinear optimal spacecraft
trajectories is a computationally expensive problem characterized by
clustering patterns in locally optimal solutions. During preliminary mission
design, mission parameters are subject to frequent changes, necessitating that
trajectory designers efficiently generate high-quality control solutions for
these new scenarios. Generative machine learning models can be trained to learn
how the solution structure varies with respect to a conditional parameter,
thereby accelerating the global search for missions with updated parameters. In
this work, state-of-the-art diffusion models are integrated with the indirect
approach for trajectory optimization within a global search framework. This
framework is tested on two low-thrust transfers of different complexity in the
circular restricted three-body problem. By generating and analyzing a training
data set, we develop mathematical relations and techniques to understand the
complex structures in the costate domain of locally optimal solutions for these
problems. A diffusion model is trained on this data and successfully
accelerates the global search for both problems. The model predicts how the
costate solution structure changes, based on the maximum spacecraft thrust
magnitude. Warm-starting a numerical solver with diffusion model samples for
the costates at the initial time increases the number of solutions generated
per minute for problems with unseen thrust magnitudes by one to two orders of
magnitude in comparison to samples from a uniform distribution and from an
adjoint control transformation.
|
2501.07013
|
Sthymuli: a Static Educational Robot. Leveraging the Thymio II Platform
|
cs.RO
|
The use of robots in education poses a challenge for teachers and often
reflects a fixed vision of what robots can do for students. This paper presents the
development of Sthymuli, a static educational robot designed to explore new
classroom interactions between robots, students and teachers. We propose the
use of the Thymio II educational platform as a base, ensuring a robust
benchmark for a fair comparison between commonly available wheeled robots and
our exploratory approach with Sthymuli. This paper outlines the constraints and
requirements for developing such a robot, the current state of development and
future work.
|
2501.07014
|
AlgoRxplorers | Precision in Mutation: Enhancing Drug Design with
Advanced Protein Stability Prediction Tools
|
cs.LG cs.AI
|
Predicting the impact of single-point amino acid mutations on protein
stability is essential for understanding disease mechanisms and advancing drug
development. Protein stability, quantified by changes in Gibbs free energy
($\Delta\Delta G$), is influenced by these mutations. However, the scarcity of
data and the complexity of model interpretation pose challenges in accurately
predicting stability changes. This study proposes the application of deep
neural networks, leveraging transfer learning and fusing complementary
information from different models, to create a feature-rich representation of
the protein stability landscape. We developed four models, with our third
model, ThermoMPNN+, demonstrating the best performance in predicting
$\Delta\Delta G$ values. This approach, which integrates diverse feature sets
and embeddings through latent transfusion techniques, aims to refine
$\Delta\Delta G$ predictions and contribute to a deeper understanding of
protein dynamics, potentially leading to advancements in disease research and
drug discovery.
|
2501.07015
|
SplatMAP: Online Dense Monocular SLAM with 3D Gaussian Splatting
|
cs.CV
|
Achieving high-fidelity 3D reconstruction from monocular video remains
challenging due to the inherent limitations of traditional methods like
Structure-from-Motion (SfM) and monocular SLAM in accurately capturing scene
details. While differentiable rendering techniques such as Neural Radiance
Fields (NeRF) address some of these challenges, their high computational costs
make them unsuitable for real-time applications. Additionally, existing 3D
Gaussian Splatting (3DGS) methods often focus on photometric consistency,
neglecting geometric accuracy and failing to exploit SLAM's dynamic depth and
pose updates for scene refinement. We propose a framework integrating dense
SLAM with 3DGS for real-time, high-fidelity dense reconstruction. Our approach
introduces SLAM-Informed Adaptive Densification, which dynamically updates and
densifies the Gaussian model by leveraging dense point clouds from SLAM.
Additionally, we incorporate Geometry-Guided Optimization, which combines
edge-aware geometric constraints and photometric consistency to jointly
optimize the appearance and geometry of the 3DGS scene representation, enabling
detailed and accurate SLAM mapping reconstruction. Experiments on the Replica
and TUM-RGBD datasets demonstrate the effectiveness of our approach, achieving
state-of-the-art results among monocular systems. Specifically, our method
achieves a PSNR of 36.864, SSIM of 0.985, and LPIPS of 0.040 on Replica,
representing improvements of 10.7%, 6.4%, and 49.4%, respectively, over the
previous SOTA. On TUM-RGBD, our method outperforms the closest baseline by
10.2%, 6.6%, and 34.7% in the same metrics. These results highlight the
potential of our framework in bridging the gap between photometric and
geometric dense 3D scene representations, paving the way for practical and
efficient monocular dense reconstruction.
|
2501.07016
|
A Multi-Modal Deep Learning Framework for Pan-Cancer Prognosis
|
eess.IV cs.AI cs.CV
|
The prognostic task is of great importance, as it is closely related to the survival
analysis of patients, the optimization of treatment plans, and the allocation of
resources. Existing prognostic models have shown promising results on
specific datasets, but there are limitations in two aspects. On the one hand,
they explore only certain types of modal data, such as patient histopathology
WSIs and gene expression analysis. On the other hand, they adopt the
per-cancer-per-model paradigm, which means the trained models can only predict
the prognostic effect of a single type of cancer, resulting in weak
generalization ability. In this paper, a deep-learning based model, named
UMPSNet, is proposed. Specifically, to comprehensively understand the condition
of patients, in addition to constructing encoders for histopathology images and
genomic expression profiles respectively, UMPSNet further integrates four types
of important meta data (demographic information, cancer type information,
treatment protocols, and diagnosis results) into text templates, and then
introduces a text encoder to extract textual features. In addition, an optimal
transport (OT)-based attention mechanism is utilized to align and fuse features
of different modalities. Furthermore, a guided soft mixture of experts (GMoE)
mechanism is introduced to effectively address the issue of distribution
differences among multiple cancer datasets. By incorporating the multi-modality
of patient data and joint training, UMPSNet outperforms all SOTA approaches,
and moreover, it demonstrates the effectiveness and generalization ability of
the proposed learning paradigm of a single model for multiple cancer types. The
code of UMPSNet is available at https://github.com/binging512/UMPSNet.
|
2501.07017
|
UNetVL: Enhancing 3D Medical Image Segmentation with Chebyshev KAN
Powered Vision-LSTM
|
cs.CV cs.AI
|
3D medical image segmentation has progressed considerably due to
Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), yet these
methods struggle to balance long-range dependency acquisition with
computational efficiency. To address this challenge, we propose UNETVL (U-Net
Vision-LSTM), a novel architecture that leverages recent advancements in
temporal information processing. UNETVL incorporates Vision-LSTM (ViL) for
improved scalability and memory functions, alongside an efficient Chebyshev
Kolmogorov-Arnold Networks (KAN) to handle complex and long-range dependency
patterns more effectively. We validated our method on the ACDC and AMOS2022
(post challenge Task 2) benchmark datasets, showing a significant improvement
in mean Dice score compared to recent state-of-the-art approaches, especially
over its predecessor, UNETR, with increases of 7.3% on ACDC and 15.6% on AMOS,
respectively. Extensive ablation studies were conducted to demonstrate the
impact of each component in UNETVL, providing a comprehensive understanding of
its architecture. Our code is available at https://github.com/tgrex6/UNETVL,
facilitating further research and applications in this domain.
|
2501.07020
|
ViSoLex: An Open-Source Repository for Vietnamese Social Media Lexical
Normalization
|
cs.CL cs.AI
|
ViSoLex is an open-source system designed to address the unique challenges of
lexical normalization for Vietnamese social media text. The platform provides
two core services: Non-Standard Word (NSW) Lookup and Lexical Normalization,
enabling users to retrieve standard forms of informal language and standardize
text containing NSWs. ViSoLex's architecture integrates pre-trained language
models and weakly supervised learning techniques to ensure accurate and
efficient normalization, overcoming the scarcity of labeled data in Vietnamese.
This paper details the system's design, functionality, and its applications for
researchers and non-technical users. Additionally, ViSoLex offers a flexible,
customizable framework that can be adapted to various datasets and research
requirements. By publishing the source code, ViSoLex aims to contribute to the
development of more robust Vietnamese natural language processing tools and
encourage further research in lexical normalization. Future directions include
expanding the system's capabilities for additional languages and improving the
handling of more complex non-standard linguistic patterns.
|
2501.07021
|
Neural Probabilistic Circuits: Enabling Compositional and Interpretable
Predictions through Logical Reasoning
|
cs.LG cs.AI
|
End-to-end deep neural networks have achieved remarkable success across
various domains but are often criticized for their lack of interpretability.
While post hoc explanation methods attempt to address this issue, they often
fail to accurately represent these black-box models, resulting in misleading or
incomplete explanations. To overcome these challenges, we propose an inherently
transparent model architecture called Neural Probabilistic Circuits (NPCs),
which enable compositional and interpretable predictions through logical
reasoning. In particular, an NPC consists of two modules: an attribute
recognition model, which predicts probabilities for various attributes, and a
task predictor built on a probabilistic circuit, which enables logical
reasoning over recognized attributes to make class predictions. To train NPCs,
we introduce a three-stage training algorithm comprising attribute recognition,
circuit construction, and joint optimization. Moreover, we theoretically
demonstrate that an NPC's error is upper-bounded by a linear combination of the
errors from its modules. To further demonstrate the interpretability of NPC, we
provide both the most probable explanations and the counterfactual
explanations. Empirical results on four benchmark datasets show that NPCs
strike a balance between interpretability and performance, achieving results
competitive even with those of end-to-end black-box models while providing
enhanced interpretability.
|
2501.07022
|
Improved Regret Bounds for Online Fair Division with Bandit Learning
|
cs.GT cs.LG
|
We study online fair division when there are a finite number of item types
and the player values for the items are drawn randomly from distributions with
unknown means. In this setting, a sequence of indivisible items arrives
according to a random online process, and each item must be allocated to a
single player. The goal is to maximize expected social welfare while
maintaining that the allocation satisfies proportionality in expectation. When
player values are normalized, we show that it is possible to guarantee, with
high probability, that the proportionality constraint is satisfied while
achieving $\tilde{O}(\sqrt{T})$ regret. To obtain this result, we present an upper
confidence bound (UCB) algorithm that uses two rounds of linear optimization.
This algorithm highlights fundamental aspects of proportionality constraints
that allow for a UCB algorithm despite the presence of many (potentially tight)
constraints. This result improves upon the previous best regret rate of
$\tilde{O}(T^{2/3})$.
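A minimal sketch of the UCB-style value index such an algorithm might maintain per (player, item-type) pair; the paper's two rounds of linear optimization over these indices are not shown, and the exploration constant `c` is an assumption:

```python
import math

def ucb_index(mean_est, n_pulls, t, c=2.0):
    """Upper confidence bound on an unknown mean player value.

    mean_est: empirical mean of observed values for this pair
    n_pulls:  number of observations so far
    t:        current round (used in the confidence radius)
    """
    if n_pulls == 0:
        return float("inf")  # unobserved pairs are explored first
    return mean_est + math.sqrt(c * math.log(t) / n_pulls)
```

In a full algorithm these optimistic indices would feed the welfare objective of the allocation LPs; the sketch only shows how optimism shrinks as observations accumulate.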
|
2501.07024
|
A Proposed Large Language Model-Based Smart Search for Archive System
|
cs.AI cs.IR
|
This study presents a novel framework for smart search in digital archival
systems, leveraging the capabilities of Large Language Models (LLMs) to enhance
information retrieval. By employing a Retrieval-Augmented Generation (RAG)
approach, the framework enables the processing of natural language queries and
transforming non-textual data into meaningful textual representations. The
system integrates advanced metadata generation techniques, a hybrid retrieval
mechanism, a router query engine, and robust response synthesis, which together
improve search precision and relevance. We present the architecture and
implementation of the system and evaluate its performance in four experiments
concerning LLM efficiency, hybrid retrieval optimizations, multilingual query
handling, and the impacts of individual components. Obtained results show
significant improvements over conventional approaches and have demonstrated the
potential of AI-powered systems to transform modern archival practices.
|
2501.07025
|
A Weighted Similarity Metric for Community Detection in Sparse Data
|
stat.ME cs.SI
|
Many Natural Language Processing (NLP) related applications involve topics
and sentiments derived from short documents such as consumer reviews and social
media posts. Topics and sentiments of short documents are highly sparse because
a short document generally covers a few topics among hundreds of candidates.
Imputation of missing data is sometimes hard to justify and is often
impractical for highly sparse data. We developed a method for calculating a
weighted similarity for highly sparse data without imputation. This weighted
similarity consists of three components that capture similarities based on the
existence and absence of common properties and on the pattern of missing values. As a
case study, we used a community detection algorithm and this weighted
similarity to group different shampoo brands based on sparse topic sentiments
derived from short consumer reviews. Compared with traditional imputation and
similarity measures, the weighted similarity shows better performance in both
general community structures and average community qualities. The performance
is consistent and robust across metrics and community complexities.
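The abstract names three components (shared properties, shared absences, and the missingness pattern) without giving formulas; the sketch below is a hypothetical instantiation with assumed component weights, not the paper's metric:

```python
def weighted_similarity(a, b, w=(0.5, 0.3, 0.2)):
    """Illustrative three-component similarity for sparse records,
    where None marks a missing value:
      (1) agreement on keys observed in both records,
      (2) fraction of keys missing in both,
      (3) fraction of keys with a matching missingness pattern.
    The component definitions and weights w are assumptions."""
    keys = set(a) | set(b)
    obs = [k for k in keys if a.get(k) is not None and b.get(k) is not None]
    agree = sum(a[k] == b[k] for k in obs) / len(obs) if obs else 0.0
    both_missing = sum(a.get(k) is None and b.get(k) is None
                       for k in keys) / len(keys)
    same_mask = sum((a.get(k) is None) == (b.get(k) is None)
                    for k in keys) / len(keys)
    return w[0] * agree + w[1] * both_missing + w[2] * same_mask
```

The point of the construction is that no missing value is ever imputed: each component is computed only from what is (or is not) observed.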
|
2501.07026
|
IEEE_TIE25: Analysis and Synthesis of DOb-based Robust Motion
Controllers
|
eess.SY cs.SY
|
By employing a unified state-space design framework, this paper proposes a
novel systematic analysis and synthesis method that facilitates the
implementation of both conventional zero-order (ZO) and high-order (HO) DObs.
Furthermore, this design method supports the development of advanced DObs
(e.g., the proposed High-Performance (HP) DOb in this paper), enabling more
accurate disturbance estimation and, consequently, enhancing the robust
stability and performance of motion control systems. The Lyapunov direct method is
employed in the discrete-time domain to analyse the stability of the proposed
digital robust motion controllers. The analysis demonstrates that the proposed
DObs are stable in the sense that the estimation error is uniformly ultimately
bounded when subjected to bounded disturbances. Additionally, they are proven
to be asymptotically stable under specific disturbance conditions, such as
constant disturbances for the ZO and HP DObs. Stability constraints on the
design parameters of the DObs are analytically derived, providing effective
synthesis tools for the implementation of the digital robust motion
controllers. The discrete-time analysis facilitates the derivation of more
practical design constraints. The proposed analysis and synthesis methods have
been rigorously validated through experimental evaluations, confirming their
effectiveness.
|
2501.07027
|
Necessary and sufficient condition for constructing a single qudit
insertion/deletion code and its decoding algorithm
|
quant-ph cs.IT math.IT
|
This paper shows that the Knill-Laflamme condition, known as a necessary and
sufficient condition for quantum error correction, can be applied to quantum
errors where the number of particles changes before and after the error. This
fact shows that the correctability of single deletion errors and that of single
insertion errors are equivalent. By applying the Knill-Laflamme condition, we generalize the
previously known correction conditions for single insertion and deletion errors
to necessary and sufficient level. By giving an example that satisfies this
condition, we construct a new single qudit insertion/deletion code and explain
its decoding algorithm.
|
2501.07030
|
Erasing Noise in Signal Detection with Diffusion Model: From Theory to
Application
|
eess.SY cs.LG cs.SY eess.SP
|
In this paper, a signal detection method based on the denoising diffusion model
(DM) is proposed, which outperforms the maximum likelihood (ML) estimation
method that has long been regarded as the optimal signal detection technique.
Theoretically, a novel mathematical theory for intelligent signal detection
based on stochastic differential equations (SDEs) is established in this paper,
demonstrating the effectiveness of DM in reducing the additive white Gaussian
noise in received signals. Moreover, a mathematical relationship between the
signal-to-noise ratio (SNR) and the timestep in DM is established, revealing
that for any given SNR, a corresponding optimal timestep can be identified.
Furthermore, to address potential issues with out-of-distribution inputs in the
DM, we employ a mathematical scaling technique that allows the trained DM to
handle signal detection across a wide range of SNRs without any fine-tuning.
Building on the above theoretical foundation, we propose a DM-based signal
detection method, with the diffusion transformer (DiT) serving as the backbone
neural network; the computational complexity of this method is
$\mathcal{O}(n^2)$. Simulation results demonstrate that, for BPSK and QAM
modulation schemes, the DM-based method achieves a significantly lower symbol
error rate (SER) compared to ML estimation, while maintaining a much lower
computational complexity.
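As an illustration of the SNR-timestep matching the abstract describes, the sketch below uses the standard DDPM relation SNR(t) = alpha_bar_t / (1 - alpha_bar_t) with a linear beta schedule; the paper's exact schedule and mapping may differ:

```python
def alpha_bars(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative products of (1 - beta_t) for a linear DDPM noise schedule."""
    abar, out = 1.0, []
    for t in range(T):
        beta = beta_start + (beta_end - beta_start) * t / (T - 1)
        abar *= 1.0 - beta
        out.append(abar)
    return out

def timestep_for_snr(snr_db, abars):
    """Pick the diffusion timestep whose schedule SNR, alpha_bar/(1-alpha_bar),
    is closest to the channel SNR. Higher channel SNR (less noise) maps to an
    earlier, less-noised timestep."""
    snr = 10 ** (snr_db / 10)
    return min(range(len(abars)),
               key=lambda t: abs(abars[t] / (1 - abars[t]) - snr))
```

Because the schedule SNR is monotone in t, each channel SNR has a unique nearest timestep, which matches the abstract's claim that an optimal timestep exists for any given SNR.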
|
2501.07032
|
PRKAN: Parameter-Reduced Kolmogorov-Arnold Networks
|
cs.LG
|
Kolmogorov-Arnold Networks (KANs) represent an innovation in neural network
architectures, offering a compelling alternative to Multi-Layer Perceptrons
(MLPs) in models such as Convolutional Neural Networks (CNNs), Recurrent Neural
Networks (RNNs), and Transformers. By advancing network design, KANs drive
groundbreaking research and enable transformative applications across various
scientific domains involving neural networks. However, existing KANs often
require significantly more parameters in their network layers than MLPs. To
address this limitation, this paper introduces PRKANs (Parameter-Reduced
Kolmogorov-Arnold Networks), which employ several methods to reduce the
parameter count in KAN layers, making them comparable to MLP layers.
Experimental results on the MNIST and Fashion-MNIST datasets demonstrate that
PRKANs outperform several existing KANs, and their variant with attention
mechanisms rivals the performance of MLPs, albeit with slightly longer training
times. Furthermore, the study highlights the advantages of Gaussian Radial
Basis Functions (GRBFs) and layer normalization in KAN designs. The repository
for this work is available at: https://github.com/hoangthangta/All-KAN.
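A minimal sketch of the Gaussian radial basis functions (GRBFs) the abstract highlights, applied as the per-edge basis of a KAN layer; the centers, width, and the PRKAN parameter-reduction methods themselves are not specified in the abstract and are assumptions here:

```python
import math

def grbf_features(x, centers, sigma=1.0):
    """Expand a scalar input into Gaussian radial basis activations."""
    return [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]

def kan_edge(x, weights, centers, sigma=1.0):
    """One KAN edge as a learnable combination of GRBF bases
    (illustrative; the parameter-reduction tricks of PRKANs are not shown)."""
    return sum(w * phi
               for w, phi in zip(weights, grbf_features(x, centers, sigma)))
```

A GRBF basis replaces the spline basis of the original KAN formulation with smooth bumps whose smoothness and locality are controlled by a single width parameter.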
|
2501.07033
|
Detection of AI Deepfake and Fraud in Online Payments Using GAN-Based
Models
|
cs.LG cs.CR cs.CV
|
This study explores the use of Generative Adversarial Networks (GANs) to
detect AI deepfakes and fraudulent activities in online payment systems. With
the growing prevalence of deepfake technology, which can manipulate facial
features in images and videos, the potential for fraud in online transactions
has escalated. Traditional security systems struggle to identify these
sophisticated forms of fraud. This research proposes a novel GAN-based model
that enhances online payment security by identifying subtle manipulations in
payment images. The model is trained on a dataset consisting of real-world
online payment images and deepfake images generated using advanced GAN
architectures, such as StyleGAN and DeepFake. The results demonstrate that the
proposed model can accurately distinguish between legitimate transactions and
deepfakes, achieving a high detection rate above 95%. This approach
significantly improves the robustness of payment systems against AI-driven
fraud. The paper contributes to the growing field of digital security, offering
insights into the application of GANs for fraud detection in financial
services. Keywords: Payment Security, Image Recognition, Generative Adversarial
Networks, AI Deepfake, Fraudulent Activities
|
2501.07034
|
Explore the Use of Time Series Foundation Model for Car-Following
Behavior Analysis
|
cs.LG
|
Modeling car-following behavior is essential for traffic simulation,
analyzing driving patterns, and understanding complex traffic flows with
varying levels of autonomous vehicles. Traditional models like the Safe
Distance Model and Intelligent Driver Model (IDM) require precise parameter
calibration and often lack generality due to simplified assumptions about
driver behavior. While machine learning and deep learning methods capture
complex patterns, they require large labeled datasets. Foundation models
provide a more efficient alternative. Pre-trained on vast, diverse time series
datasets, they can be applied directly to various tasks without the need for
extensive re-training. These models generalize well across domains, and with
minimal fine-tuning, they can be adapted to specific tasks like car-following
behavior prediction. In this paper, we apply Chronos, a state-of-the-art public
time series foundation model, to analyze car-following behavior using the Open
ACC dataset. Without fine-tuning, Chronos outperforms traditional models like
IDM and Exponential smoothing with trend and seasonality (ETS), and achieves
similar results to deep learning models such as DeepAR and TFT, with an RMSE of
0.60. After fine-tuning, Chronos reduces the error to an RMSE of 0.53,
representing a 33.75% improvement over IDM and a 12-37% reduction compared to
machine learning models like ETS and deep learning models including DeepAR,
WaveNet, and TFT. This demonstrates the potential of foundation models to
significantly advance transportation research, offering a scalable, adaptable,
and highly accurate approach to predicting and simulating car-following
behaviors.
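For reference, the Intelligent Driver Model used as a baseline here is a standard closed-form car-following law; a minimal sketch with commonly used (not calibrated) parameter values:

```python
import math

def idm_acceleration(v, v_lead, gap,
                     v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model (IDM) acceleration of a following vehicle.

    v: own speed (m/s), v_lead: leader speed (m/s), gap: distance to leader (m).
    v0: desired speed, T: desired time headway, a: max acceleration,
    b: comfortable deceleration, s0: minimum gap. Values are typical
    textbook choices, not calibrated to any dataset."""
    dv = v - v_lead                                    # approach rate
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)
```

The parameter-calibration burden visible in this formula (five parameters per driver regime) is exactly what a pre-trained foundation model like Chronos sidesteps.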
|
2501.07039
|
IoT-Based Real-Time Medical-Related Human Activity Recognition Using
Skeletons and Multi-Stage Deep Learning for Healthcare
|
cs.CV
|
The Internet of Things (IoT) and mobile technology have significantly
transformed healthcare by enabling real-time monitoring and diagnosis of
patients. Recognizing medical-related human activities (MRHA) is pivotal for
healthcare systems, particularly for identifying actions that are critical to
patient well-being. However, challenges such as high computational demands, low
accuracy, and limited adaptability persist in Human Motion Recognition (HMR).
While some studies have integrated HMR with IoT for real-time healthcare
applications, limited research has focused on recognizing MRHA as essential for
effective patient monitoring. This study proposes a novel HMR method for MRHA
detection, leveraging multi-stage deep learning techniques integrated with IoT.
The approach employs EfficientNet to extract optimized spatial features from
skeleton frame sequences using seven Mobile Inverted Bottleneck Convolutions
(MBConv) blocks, followed by ConvLSTM to capture spatio-temporal patterns. A
classification module with global average pooling, a fully connected layer, and
a dropout layer generates the final predictions. The model is evaluated on the
NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA, such as sneezing, falling,
walking, sitting, etc. It achieves 94.85% accuracy for cross-subject
evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with
89.00% accuracy on HMDB51. Additionally, the system integrates IoT capabilities
using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's
SMS service to caregivers and patients. This scalable and efficient solution
bridges the gap between HMR and IoT, advancing patient monitoring, improving
healthcare outcomes, and reducing costs.
|
2501.07040
|
Rethinking Knowledge in Distillation: An In-context Sample Retrieval
Perspective
|
cs.CV
|
Conventional knowledge distillation (KD) approaches are designed for the
student model to predict outputs similar to the teacher model's for each sample.
Unfortunately, the relationships across samples of the same class are often
neglected. In this paper, we redefine the knowledge in distillation,
capturing the relationship between each sample and its corresponding in-context
samples (a group of similar samples with the same or different classes), and
perform KD from an in-context sample retrieval perspective. As KD is a type of
learned label smoothing regularization (LSR), we first conduct a theoretical
analysis showing that the teacher's knowledge from the in-context samples is a
crucial contributor to regularize the student training with the corresponding
samples. Buttressed by the analysis, we propose a novel in-context knowledge
distillation (IC-KD) framework that shows its superiority across diverse KD
paradigms (offline, online, and teacher-free KD). Firstly, we construct a
feature memory bank from the teacher model and retrieve in-context samples for
each corresponding sample through retrieval-based learning. We then introduce
Positive In-Context Distillation (PICD) to reduce the discrepancy between a
sample from the student and the aggregated in-context samples with the same
class from the teacher in the logit space. Moreover, Negative In-Context
Distillation (NICD) is introduced to separate a sample from the student and the
in-context samples with different classes from the teacher in the logit space.
Extensive experiments demonstrate that IC-KD is effective across various types
of KD, and consistently achieves state-of-the-art performance on CIFAR-100 and
ImageNet datasets.
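A rough sketch of the positive in-context distillation idea: pull the student's logit-space distribution toward the teacher's logits aggregated over retrieved same-class in-context samples. The aggregation (a plain mean) and the KL direction are assumptions, not the paper's exact loss:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q):
    """KL(p || q) over discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def picd_loss(student_logits, teacher_context_logits):
    """Positive in-context distillation sketch: match the student's
    distribution to the teacher's logits averaged over the retrieved
    same-class in-context samples."""
    n = len(teacher_context_logits)
    k = len(student_logits)
    agg = [sum(t[j] for t in teacher_context_logits) / n for j in range(k)]
    return kl_div(softmax(agg), softmax(student_logits))
```

A negative counterpart (NICD) would instead push the student away from aggregated different-class in-context logits, e.g. by maximizing the same divergence.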
|
2501.07041
|
Beam Structured Turbo Receiver for HF Skywave Massive MIMO
|
cs.IT eess.SP math.IT
|
In this paper, we investigate receiver design for high frequency (HF) skywave
massive multiple-input multiple-output (MIMO) communications. We first
establish a modified beam based channel model (BBCM) by performing uniform
sampling for directional cosine with deterministic sampling interval, where the
beam matrix is constructed using a phase-shifted discrete Fourier transform
(DFT) matrix. Based on the modified BBCM, we propose a beam structured turbo
receiver (BSTR) involving low-dimensional beam domain signal detection for
grouped user terminals (UTs), which is proved to be asymptotically optimal in
terms of minimizing mean-squared error (MSE). Moreover, we extend it to
windowed BSTR by introducing a windowing approach for interference suppression
and complexity reduction, and propose a well-designed energy-focusing window.
We also present an efficient implementation of the windowed BSTR by exploiting
the structural properties of the beam matrix and the beam domain channel
sparsity. Simulation results validate the superior performance of the proposed
receivers, achieved with remarkably low complexity.
|
2501.07044
|
Protego: Detecting Adversarial Examples for Vision Transformers via
Intrinsic Capabilities
|
cs.CV cs.LG
|
Transformer models have excelled in natural language tasks, prompting the
vision community to explore their implementation in computer vision problems.
However, these models are still influenced by adversarial examples. In this
paper, we investigate the attack capabilities of six common adversarial attacks
on three pretrained ViT models to reveal the vulnerability of ViT models. To
understand and analyse the bias in neural network decisions when the input is
adversarial, we use two visualisation techniques: attention rollout and
grad attention rollout. To protect ViT models from adversarial attacks, we
propose Protego, a detection framework that leverages the transformer's intrinsic
capabilities to detect adversarial examples in ViT models. Nonetheless, this
is challenging due to a diversity of attack strategies that may be adopted by
adversaries. Inspired by the attention mechanism, we know that the prediction
token contains all the information from the input sample. Additionally,
the attention region for adversarial examples differs from that of normal
examples. Given these points, we can train a detector that outperforms existing
detection methods at identifying adversarial examples.
Our experiments have demonstrated the high effectiveness of our detection
method. For these six adversarial attack methods, our detector's AUC scores all
exceed 0.95. Protego may advance investigations in metaverse security.
|
2501.07045
|
ACCon: Angle-Compensated Contrastive Regularizer for Deep Regression
|
cs.LG cs.AI
|
In deep regression, capturing the relationship among continuous labels in
feature space is a fundamental challenge that has attracted increasing
interest. Addressing this issue can prevent models from converging to
suboptimal solutions across various regression tasks, leading to improved
performance, especially for imbalanced regression and under limited sample
sizes. However, existing approaches often rely on order-aware representation
learning or distance-based weighting. In this paper, we hypothesize a linear
negative correlation between label distances and representation similarities in
regression tasks. To implement this, we propose an angle-compensated
contrastive regularizer for deep regression, which adjusts the cosine distance
between anchor and negative samples within the contrastive learning framework.
Our method offers a plug-and-play compatible solution that extends most
existing contrastive learning methods for regression tasks. Extensive
experiments and theoretical analysis demonstrate that our proposed
angle-compensated contrastive regularizer not only achieves competitive
regression performance but also excels in data efficiency and effectiveness on
imbalanced datasets.
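One way to read the hypothesized linear negative correlation is as a label-distance-implied target cosine similarity between representations; the sketch below is illustrative and not the paper's regularizer (the target mapping and the squared-error surrogate are assumptions):

```python
import math

def target_cosine(label_dist, max_dist):
    """Hypothesized linear negative correlation between label distance and
    representation similarity: distance 0 -> cos 1, max distance -> cos -1."""
    return 1.0 - 2.0 * (label_dist / max_dist)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def accon_style_penalty(anchor, negative, label_dist, max_dist):
    """Penalize deviation of the anchor-negative cosine from the
    label-distance-implied target (squared-error surrogate)."""
    return (cosine(anchor, negative)
            - target_cosine(label_dist, max_dist)) ** 2
```

The "angle compensation" in the paper adjusts the anchor-negative cosine inside a contrastive objective; this sketch only conveys how a continuous label distance can prescribe a target angle.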
|
2501.07046
|
Differentially Private Kernelized Contextual Bandits
|
stat.ML cs.LG
|
We consider the problem of contextual kernel bandits with stochastic
contexts, where the underlying reward function belongs to a known Reproducing
Kernel Hilbert Space (RKHS). We study this problem under the additional
constraint of joint differential privacy, where the agent needs to ensure that
the sequence of query points is differentially private with respect to both the
sequence of contexts and rewards. We propose a novel algorithm that improves
upon the state of the art and achieves an error rate of
$\mathcal{O}\left(\sqrt{\frac{\gamma_T}{T}} + \frac{\gamma_T}{T
\varepsilon}\right)$ after $T$ queries for a large class of kernel families,
where $\gamma_T$ represents the effective dimensionality of the kernel and
$\varepsilon > 0$ is the privacy parameter. Our results are based on a novel
estimator for the reward function that simultaneously enjoys high utility along
with a low-sensitivity to observed rewards and contexts, which is crucial to
obtain an order optimal learning performance with improved dependence on the
privacy parameter.
|
2501.07047
|
Leveraging ASIC AI Chips for Homomorphic Encryption
|
cs.CR cs.AR cs.CL cs.PL
|
Cloud-based services are making the outsourcing of sensitive client data
increasingly common. Although homomorphic encryption (HE) offers strong privacy
guarantees, it requires substantially more resources than computing on
plaintext, often leading to unacceptably large latencies in getting the
results. HE accelerators have emerged to mitigate this latency issue, but with
the high cost of ASICs. In this paper we show that HE primitives can be
converted to AI operators and accelerated on existing ASIC AI accelerators,
like TPUs, which are already widely deployed in the cloud. Adapting such
accelerators for HE requires (1) supporting modular multiplication, (2)
high-precision arithmetic in software, and (3) efficient mapping on matrix
engines. We introduce the CROSS compiler, which (1) adopts Barrett reduction to
provide modular reduction support using multipliers and adders, (2) applies
Basis Aligned Transformation (BAT) to convert high-precision multiplication
into low-precision matrix-vector multiplication, and (3) applies Matrix Aligned
Transformation (MAT) to convert vectorized modular operations with reduction
into matrix multiplications that can be efficiently processed on a 2D spatial
matrix engine. Our evaluation of CROSS
on a Google TPUv4 demonstrates significant performance improvements, with up to
161x and 5x speedup compared to the previous work on many-core CPUs and V100.
The kernel-level codes are open-sourced at
https://github.com/google/jaxite.git.
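Barrett reduction, which CROSS adopts for modular reduction, replaces division by a multiply-and-shift with a precomputed reciprocal; a minimal integer sketch (the compiler's low-precision matrix mapping is not shown):

```python
def barrett_reduce(x, q, k=None):
    """Compute x mod q via Barrett reduction: only multiplications, shifts,
    and correction subtractions (no hardware division in the hot path).
    Valid for 0 <= x < q**2 with the default k = 2 * q.bit_length()."""
    if k is None:
        k = 2 * q.bit_length()
    m = (1 << k) // q        # precomputed approximation of 2**k / q
    t = (x * m) >> k         # underestimate of x // q
    r = x - t * q
    while r >= q:            # at most a couple of correction steps
        r -= q
    return r
```

The typical use is reducing a product of two residues, so the input stays below q squared and the shift/multiply pipeline maps naturally onto multiplier-adder hardware.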
|
2501.07048
|
Unveiling the Potential of Text in High-Dimensional Time Series
Forecasting
|
cs.AI
|
Time series forecasting has traditionally focused on univariate and
multivariate numerical data, often overlooking the benefits of incorporating
multimodal information, particularly textual data. In this paper, we propose a
novel framework that integrates time series models with Large Language Models
to improve high-dimensional time series forecasting. Inspired by multimodal
models, our method combines time series and textual data in the dual-tower
structure. This fusion of information creates a comprehensive representation,
which is then processed through a linear layer to generate the final forecast.
Extensive experiments demonstrate that incorporating text enhances
high-dimensional time series forecasting performance. This work paves the way
for further research in multimodal time series forecasting.
|
2501.07051
|
ROSAnnotator: A Web Application for ROSBag Data Analysis in Human-Robot
Interaction
|
cs.RO cs.HC
|
Human-robot interaction (HRI) is an interdisciplinary field that utilises
both quantitative and qualitative methods. While ROSBags, a file format within
the Robot Operating System (ROS), offer an efficient means of collecting
temporally synched multimodal data in empirical studies with real robots, there
is a lack of tools specifically designed to integrate qualitative coding and
analysis functions with ROSBags. To address this gap, we developed
ROSAnnotator, a web-based application that incorporates a multimodal Large
Language Model (LLM) to support both manual and automated annotation of ROSBag
data. ROSAnnotator currently facilitates video, audio, and transcription
annotations and provides an open interface for custom ROS messages and tools.
By using ROSAnnotator, researchers can streamline the qualitative analysis
process, create a more cohesive analysis pipeline, and quickly access
statistical summaries of annotations, thereby enhancing the overall efficiency
of HRI data analysis. https://github.com/CHRI-Lab/ROSAnnotator
|
2501.07054
|
PoAct: Policy and Action Dual-Control Agent for Generalized Applications
|
cs.AI
|
Owing to their superior comprehension and reasoning capabilities, Large
Language Model (LLM)-driven agent frameworks have achieved significant success
in numerous complex reasoning tasks. ReAct-like agents can solve various
intricate problems step-by-step through progressive planning and tool calls,
iteratively optimizing new steps based on environmental feedback. However, as
the planning capabilities of LLMs improve, the actions invoked by tool calls in
ReAct-like frameworks often misalign with complex planning and challenging data
organization. Code Action addresses these issues while also introducing the
challenges of a more complex action space and more difficult action
organization. To leverage Code Action and tackle the challenges of its
complexity, this paper proposes Policy and Action Dual-Control Agent (PoAct)
for generalized applications. The aim is to achieve higher-quality code actions
and more accurate reasoning paths by dynamically switching reasoning policies
and modifying the action space. Experimental results on the Agent Benchmark for
both legal and generic scenarios demonstrate the superior reasoning
capabilities and reduced token consumption of our approach in complex tasks. On
the LegalAgentBench, our method shows a 20 percent improvement over the
baseline while requiring fewer tokens. We conducted experiments and analyses on
the GPT-4o and GLM-4 series models, demonstrating the significant potential and
scalability of our approach to solve complex problems.
|
2501.07055
|
SFC-GAN: A Generative Adversarial Network for Brain Functional and
Structural Connectome Translation
|
cs.CV cs.LG
|
Modern brain imaging technologies have enabled the detailed reconstruction of
human brain connectomes, capturing structural connectivity (SC) from diffusion
MRI and functional connectivity (FC) from functional MRI. Understanding the
intricate relationships between SC and FC is vital for gaining deeper insights
into the brain's functional and organizational mechanisms. However, obtaining
both SC and FC modalities simultaneously remains challenging, hindering
comprehensive analyses. Existing deep generative models typically focus on
synthesizing a single modality or unidirectional translation between FC and SC,
thereby missing the potential benefits of bi-directional translation,
especially in scenarios where only one connectome is available. Therefore, we
propose Structural-Functional Connectivity GAN (SFC-GAN), a novel framework for
bidirectional translation between SC and FC. This approach leverages the
CycleGAN architecture, incorporating convolutional layers to effectively
capture the spatial structures of brain connectomes. To preserve the
topological integrity of these connectomes, we employ a structure-preserving
loss that guides the model in capturing both global and local connectome
patterns while maintaining symmetry. Our framework demonstrates superior
performance in translating between SC and FC, outperforming baseline models in
similarity and graph property evaluations against ground truth data; each
translated modality can be effectively utilized for downstream classification.
|
2501.07057
|
Optimization with Multi-sourced Reference Information and Unknown Trust:
A Distributionally Robust Approach
|
math.OC cs.SY eess.SY
|
In problems that involve input parameter information gathered from multiple
data sources with varying reliability, incorporating users' trust about
different sources in decision-optimization models can potentially improve
solution performance and reliability. In this work, we propose a novel
multi-reference distributionally robust optimization (MR-DRO) framework, where
the model inputs are uncertain and their probability distributions can be
statistically inferred from multiple data sources. Via nonparametric data
fusion, we construct a Wasserstein ambiguity set to minimize the worst-case
expected value of a stochastic objective function, accounting for both
uncertainty and unknown reliability of information sources. We reformulate the
MR-DRO model as a linear program given linear objective and constraints in the
original problem. We also incorporate a dynamic trust update mechanism that
adjusts the trust for each source based on its performance over time. In
addition, we introduce the concept of probability dominance to identify sources
with dominant trust. Via solving instances of resource allocation and portfolio
optimization, we demonstrate the effectiveness of the trust-informed MR-DRO
approach compared to traditional optimization frameworks relying on a single
data source. Our results highlight the significance of integrating (dynamic)
user trust in decision making under uncertainty, particularly when given
diverse and potentially conflicting input data.
|
2501.07058
|
Logic Meets Magic: LLMs Cracking Smart Contract Vulnerabilities
|
cs.CR cs.AI
|
Smart contract vulnerabilities have caused significant economic losses in
blockchain applications. Large Language Models (LLMs) provide new possibilities
for addressing this time-consuming task. However, state-of-the-art LLM-based
detection solutions are often plagued by high false-positive rates.
In this paper, we push the boundaries of existing research in two key ways.
First, our evaluation is based on Solidity v0.8, offering the most up-to-date
insights compared to prior studies that focus on older versions (v0.4). Second,
we leverage the latest five LLM models (across companies), ensuring
comprehensive coverage across the most advanced capabilities in the field.
We conducted a series of rigorous evaluations. Our experiments demonstrate
that a well-designed prompt can reduce the false-positive rate by over 60%.
Surprisingly, we also discovered that the recall rate for detecting some
specific vulnerabilities in Solidity v0.8 has dropped to just 13% compared to
earlier versions (i.e., v0.4). Further analysis reveals the root cause of this
decline: the reliance of LLMs on identifying changes in newly introduced
libraries and frameworks during detection.
|
2501.07063
|
Research on the Online Update Method for Retrieval-Augmented Generation
(RAG) Model with Incremental Learning
|
cs.IR cs.CL
|
In the contemporary context of rapid advancements in information technology
and the exponential growth of data volume, language models are confronted with
significant challenges in effectively navigating the dynamic and ever-evolving
information landscape to update and adapt to novel knowledge in real time. In
this work, an online update method is proposed, which is based on the existing
Retrieval-Augmented Generation (RAG) model with multiple innovation mechanisms.
Firstly, a dynamic memory is used to capture emerging data samples, which are
then gradually integrated into the core model through a tunable knowledge
distillation strategy. At the same time, hierarchical indexing and multi-layer
gating mechanism are introduced into the retrieval module to ensure that the
retrieved content is more targeted and accurate. Finally, a multi-stage network
structure is established for different types of inputs in the generation stage,
and cross-attention matching and screening are carried out on the intermediate
representations of each stage to ensure the effective integration and iterative
update of new and old knowledge. Experimental results show that the proposed
method is better than the existing mainstream comparison models in terms of
knowledge retention and inference accuracy.
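The tunable knowledge distillation step mentioned above is commonly instantiated as a temperature-scaled KL divergence between teacher and student outputs; a minimal sketch under that assumption (not necessarily the paper's exact loss):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def distill_kl(teacher_logits, student_logits, T=2.0):
    """Temperature-scaled distillation loss: KL(p_teacher || p_student)
    over softened output distributions (a standard instantiation)."""
    p = softmax(np.asarray(teacher_logits, float), T)
    q = softmax(np.asarray(student_logits, float), T)
    return float((p * np.log(p / q)).sum())

# Identical logits -> zero distillation loss; mismatched logits -> positive loss.
print(distill_kl([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))      # 0.0
print(distill_kl([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0]) > 0)  # True
```

"Tunable" here would correspond to adjusting the temperature T or the loss weight as new samples are integrated.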
|
2501.07069
|
Hierarchical Superpixel Segmentation via Structural Information Theory
|
cs.CV
|
Superpixel segmentation is a foundation for many higher-level computer vision
tasks, such as image segmentation, object recognition, and scene understanding.
Existing graph-based superpixel segmentation methods typically concentrate on
the relationships between a given pixel and its directly adjacent pixels while
overlooking the influence of non-adjacent pixels. These approaches do not fully
leverage the global information in the graph, leading to suboptimal
segmentation quality. To address this limitation, we present SIT-HSS, a
hierarchical superpixel segmentation method based on structural information
theory. Specifically, we first design a novel graph construction strategy that
incrementally explores the pixel neighborhood to add edges based on
1-dimensional structural entropy (1D SE). This strategy maximizes the retention
of graph information while avoiding an overly complex graph structure. Then, we
design a new 2D SE-guided hierarchical graph partitioning method, which
iteratively merges pixel clusters layer by layer to reduce the graph's 2D SE
until a predefined segmentation scale is achieved. Experimental results on
three benchmark datasets demonstrate that the SIT-HSS performs better than
state-of-the-art unsupervised superpixel segmentation algorithms. The source
code is available at \url{https://github.com/SELGroup/SIT-HSS}.
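The 1-dimensional structural entropy (1D SE) driving the edge-addition strategy has a simple closed form: the Shannon entropy of the degree distribution d_i/2m. A minimal sketch, assuming an undirected weighted adjacency matrix:

```python
import numpy as np

def one_dim_se(adj):
    """1D structural entropy of an undirected weighted graph:
    H1 = -sum_i (d_i / 2m) * log2(d_i / 2m), where d_i is the (weighted)
    degree of node i and 2m is the total volume of the graph."""
    deg = adj.sum(axis=1)
    vol = deg.sum()              # equals 2m for an undirected graph
    p = deg[deg > 0] / vol
    return float(-(p * np.log2(p)).sum())

# A 4-cycle: every node has degree 2, so p_i = 1/4 and H1 = log2(4) = 2.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
print(one_dim_se(cycle))  # 2.0
```

SIT-HSS adds candidate edges only while doing so increases the retained graph information under this quantity; the 2D SE used for partitioning additionally accounts for the cluster hierarchy.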
|
2501.07070
|
Enhancing Image Generation Fidelity via Progressive Prompts
|
cs.CV
|
The diffusion transformer (DiT) architecture has attracted significant
attention in image generation, achieving better fidelity, performance, and
diversity. However, most existing DiT-based image generation methods focus on
global-aware synthesis, and regional prompt control has been less explored.
In this paper, we propose a coarse-to-fine generation pipeline for regional
prompt-following generation. Specifically, we first utilize the powerful
large language model (LLM) to generate both high-level descriptions of the
image (such as content, topic, and objects) and low-level descriptions (such
as details and style). Then, we explore the influence of cross-attention
layers at different depths. We find that deeper layers are always responsible
for high-level content control, while shallow layers handle low-level
content control. Various prompts are injected into the proposed regional
cross-attention control for coarse-to-fine generation. By using the proposed
pipeline, we enhance the controllability of DiT-based image generation.
Extensive quantitative and qualitative results show that our pipeline can
improve the performance of the generated images.
|
2501.07071
|
Value Compass Leaderboard: A Platform for Fundamental and Validated
Evaluation of LLMs Values
|
cs.AI
|
As Large Language Models (LLMs) achieve remarkable breakthroughs, aligning
their values with humans has become imperative for their responsible
development and customized applications. However, existing evaluations of
LLMs' values fail to fulfill three desirable goals. (1) Value Clarification: We
expect to clarify the underlying values of LLMs precisely and comprehensively,
while current evaluations focus narrowly on safety risks such as bias and
toxicity. (2) Evaluation Validity: Existing static, open-source benchmarks are
prone to data contamination and quickly become obsolete as LLMs evolve.
Additionally, these discriminative evaluations uncover LLMs' knowledge about
values rather than providing valid assessments of LLMs' behavioral conformity
to values.
(3) Value Pluralism: The pluralistic nature of human values across individuals
and cultures is largely ignored in measuring LLMs value alignment. To address
these challenges, we present the Value Compass Leaderboard, with three
correspondingly designed modules. It (i) grounds the evaluation on
motivationally distinct \textit{basic values} to clarify LLMs' underlying
values from a holistic view; (ii) applies a \textit{generative evolving
evaluation framework} with adaptive test items for evolving LLMs and direct
value recognition from behaviors in realistic scenarios; (iii) proposes a
metric that quantifies LLMs' alignment with a specific value as a weighted sum
over multiple dimensions, with weights determined by pluralistic values.
|
2501.07072
|
Label Calibration in Source Free Domain Adaptation
|
cs.CV
|
Source-free domain adaptation (SFDA) utilizes a pre-trained source model with
unlabeled target data. Self-supervised SFDA techniques generate pseudolabels
from the pre-trained source model, but these pseudolabels often contain noise
due to domain discrepancies between the source and target domains. Traditional
self-supervised SFDA techniques rely on deterministic model predictions using
the softmax function, leading to unreliable pseudolabels. In this work, we
propose to introduce predictive uncertainty and softmax calibration for
pseudolabel refinement using evidential deep learning. The Dirichlet prior is
placed over the output of the target network to capture uncertainty using
evidence with a single forward pass. Furthermore, softmax calibration solves
the translation invariance problem to assist in learning with noisy labels. We
incorporate a combination of evidential deep learning loss and information
maximization loss with calibrated softmax in both prior and non-prior target
knowledge SFDA settings. Extensive experimental analysis shows that our method
outperforms other state-of-the-art methods on benchmark datasets.
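The Dirichlet-based uncertainty from a single forward pass can be illustrated with the standard evidential construction (alpha_k = evidence_k + 1, uncertainty u = K/S); a toy sketch of that construction, not the paper's network:

```python
import numpy as np

def dirichlet_uncertainty(evidence):
    """Subjective-logic quantities from non-negative per-class evidence:
    alpha_k = e_k + 1, S = sum(alpha); expected probabilities alpha/S,
    and a scalar uncertainty mass u = K/S (K = number of classes)."""
    alpha = np.asarray(evidence, dtype=float) + 1.0
    S = alpha.sum()
    K = alpha.size
    return alpha / S, K / S

# No evidence at all -> uniform prediction with maximal uncertainty (u = 1),
# so this sample's pseudolabel would be rejected or down-weighted.
p, u = dirichlet_uncertainty([0.0, 0.0, 0.0])
print(u)  # 1.0

# Strong evidence for class 0 -> confident pseudolabel, low uncertainty.
p2, u2 = dirichlet_uncertainty([50.0, 1.0, 1.0])
print(u2 < 0.1)  # True
```

In SFDA-style pseudolabel refinement, target samples with high u would contribute less (or not at all) to self-training.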
|
2501.07076
|
Representation Learning of Point Cloud Upsampling in Global and Local
Inputs
|
cs.CV cs.AI
|
In recent years, point cloud upsampling has been widely applied in fields
such as 3D reconstruction. Our study investigates the factors influencing point
cloud upsampling on both global and local levels through representation
learning. Specifically, the paper inputs global and local information of the
same point cloud model object into two encoders to extract these features,
fuses them, and then feeds the combined features into an upsampling decoder.
The goal is to address issues of sparsity and noise in point clouds by
leveraging prior knowledge from both global and local inputs. The proposed
framework can be applied to any state-of-the-art point cloud upsampling neural
network. Experiments were conducted on a series of autoencoder-based deep
learning models, yielding interpretability for both global and local inputs,
and the results show that our framework further improves the upsampling
quality of previous SOTA works. At the same time,
the Saliency Map reflects the differences between global and local feature
inputs, as well as the effectiveness of training with both inputs in parallel.
|
2501.07077
|
D3MES: Diffusion Transformer with multihead equivariant self-attention
for 3D molecule generation
|
cs.LG physics.chem-ph
|
Understanding and predicting the diverse conformational states of molecules
is crucial for advancing fields such as chemistry, material science, and drug
development. Despite significant progress in generative models, accurately
generating complex and biologically or material-relevant molecular structures
remains a major challenge. In this work, we introduce a diffusion model for
three-dimensional (3D) molecule generation that combines a classifiable
diffusion model, Diffusion Transformer, with multihead equivariant
self-attention. This method addresses two key challenges: correctly attaching
hydrogen atoms in generated molecules through learning representations of
molecules after hydrogen atoms are removed; and overcoming the limitations of
existing models that cannot generate molecules across multiple classes
simultaneously. The experimental results demonstrate that our model not only
achieves state-of-the-art performance across several key metrics but also
exhibits robustness and versatility, making it highly suitable for early-stage
large-scale generation processes in molecular design, followed by validation
and further screening to obtain molecules with specific properties.
|
2501.07078
|
ADKGD: Anomaly Detection in Knowledge Graphs with Dual-Channel Training
|
cs.AI cs.DB
|
In the current development of large language models (LLMs), it is important
to ensure the accuracy and reliability of the underlying data sources. LLMs are
critical for various applications, but they often suffer from hallucinations
and inaccuracies due to knowledge gaps in the training data. Knowledge graphs
(KGs), as a powerful structural tool, could serve as a vital external
information source to mitigate the aforementioned issues. By providing a
structured and comprehensive understanding of real-world data, KGs enhance the
performance and reliability of LLMs. However, it is common that errors exist in
KGs while extracting triplets from unstructured data to construct KGs. This
could lead to degraded performance in downstream tasks such as
question-answering and recommender systems. Therefore, anomaly detection in KGs
is essential to identify and correct these errors. This paper presents an
anomaly detection algorithm in knowledge graphs with dual-channel learning
(ADKGD). ADKGD leverages a dual-channel learning approach to enhance
representation learning from both the entity-view and triplet-view
perspectives. Furthermore, using a cross-layer approach, our framework
integrates internal information aggregation and context information
aggregation. We introduce a Kullback-Leibler (KL) loss component to improve the
accuracy of the scoring function between the dual channels. To evaluate ADKGD's
performance, we conduct empirical studies on three real-world KGs: WN18RR,
FB15K, and NELL-995. Experimental results demonstrate that ADKGD outperforms
the state-of-the-art anomaly detection algorithms. The source code and datasets
are publicly available at https://github.com/csjywu1/ADKGD.
|
2501.07086
|
Boosting Text-To-Image Generation via Multilingual Prompting in Large
Multimodal Models
|
cs.CL
|
Previous work on augmenting large multimodal models (LMMs) for text-to-image
(T2I) generation has focused on enriching the input space of in-context
learning (ICL). This includes providing a few demonstrations and optimizing
image descriptions to be more detailed and logical. However, as demand for more
complex and flexible image descriptions grows, enhancing comprehension of input
text within the ICL paradigm remains a critical yet underexplored area. In this
work, we extend this line of research by constructing parallel multilingual
prompts aimed at harnessing the multilingual capabilities of LMMs. More
specifically, we translate the input text into several languages and provide
the models with both the original text and the translations. Experiments on two
LMMs across 3 benchmarks show that our method, PMT2I, achieves superior
performance in general, compositional, and fine-grained assessments, especially
in human preference alignment. Additionally, with its advantage of generating
more diverse images, PMT2I significantly outperforms baseline prompts when
incorporated with reranking methods. Our code and parallel multilingual data
can be found at https://github.com/takagi97/PMT2I.
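Constructing a parallel multilingual prompt amounts to concatenating the original description with its translations; a minimal sketch in which the tag format and the example translations are illustrative assumptions (the paper's exact template may differ):

```python
def build_parallel_prompt(original, translations):
    """Assemble a parallel multilingual prompt: the original (English)
    description plus its translations, each tagged with a language code.
    The tag format here is hypothetical."""
    lines = [f"[en] {original}"]
    lines += [f"[{lang}] {text}" for lang, text in translations.items()]
    return "\n".join(lines)

# In practice the translations would come from an MT system or the LMM itself.
prompt = build_parallel_prompt(
    "a red bicycle leaning against a brick wall",
    {"de": "ein rotes Fahrrad an einer Ziegelmauer",
     "fr": "un vélo rouge appuyé contre un mur de briques"},
)
print(prompt)
```

The assembled prompt is then fed to the LMM in place of the monolingual description.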
|
2501.07087
|
Video Quality Assessment for Online Processing: From Spatial to Temporal
Sampling
|
cs.CV cs.AI
|
With the rapid development of multimedia processing and deep learning
technologies, especially in the field of video understanding, video quality
assessment (VQA) has achieved significant progress. Although researchers have
moved from designing efficient video quality mapping models to various research
directions, in-depth exploration of the effectiveness-efficiency trade-offs of
spatio-temporal modeling in VQA models is still less sufficient. Considering
the fact that videos have highly redundant information, this paper investigates
this problem from the perspective of joint spatial and temporal sampling,
aiming to determine how little information we can keep when feeding videos
into VQA models while incurring only an acceptable performance
sacrifice. To this end, we drastically sample the video's information from both
spatial and temporal dimensions, and the heavily squeezed video is then fed
into a stable VQA model. Comprehensive experiments regarding joint spatial and
temporal sampling are conducted on six public video quality databases, and the
results demonstrate the acceptable performance of the VQA model when throwing
away most of the video information. Furthermore, with the proposed joint
spatial and temporal sampling strategy, we make an initial attempt to design an
online VQA model, which is instantiated by as simple as possible a spatial
feature extractor, a temporal feature fusion module, and a global quality
regression module. Through quantitative and qualitative experiments, we verify
the feasibility of the online VQA model by simplifying the model and reducing its input.
|
2501.07088
|
MathReader : Text-to-Speech for Mathematical Documents
|
cs.AI cs.SD eess.AS
|
TTS (Text-to-Speech) document readers from Microsoft, Adobe, Apple, and OpenAI
are in service worldwide. They provide relatively good TTS results for
general plain text, but sometimes skip contents or provide unsatisfactory
results for mathematical expressions. This is because most modern academic
papers are written in LaTeX, and when LaTeX formulas are compiled, they are
rendered as distinctive text forms within the document. However, traditional
TTS document readers output only the text as it is recognized, without
considering the mathematical meaning of the formulas. To address this issue, we
propose MathReader, which effectively integrates OCR, a fine-tuned T5 model,
and TTS. MathReader demonstrated a lower Word Error Rate (WER) than existing
TTS document readers, such as Microsoft Edge and Adobe Acrobat, when processing
documents containing mathematical formulas. MathReader reduced the WER from
0.510 to 0.281 compared to Microsoft Edge, and from 0.617 to 0.281 compared to
Adobe Acrobat. This will significantly contribute to alleviating the
inconvenience faced by users who want to listen to documents, especially those
who are visually impaired. The code is available at
https://github.com/hyeonsieun/MathReader.
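The Word Error Rate figures above are word-level edit distance divided by reference length; a minimal sketch of the metric itself (not of MathReader's OCR/T5/TTS pipeline):

```python
def wer(reference, hypothesis):
    """Word Error Rate: minimum number of word-level substitutions,
    insertions, and deletions, divided by the reference word count."""
    r, h = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

# One substitution out of five reference words -> WER 0.2, the kind of error a
# TTS reader makes when it verbalizes a formula token incorrectly.
print(wer("e equals m c squared", "e equals m see squared"))  # 0.2
```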
|
2501.07096
|
Intent-Interest Disentanglement and Item-Aware Intent Contrastive
Learning for Sequential Recommendation
|
cs.IR
|
Recommender systems aim to provide personalized item recommendations by
capturing user behaviors derived from their interaction history. Considering
that user interactions naturally occur sequentially based on users' intents in
mind, user behaviors can be interpreted as user intents. Therefore,
intent-based sequential recommendation has been actively studied recently to model
user intents from historical interactions for a more precise user understanding
beyond traditional studies that often overlook the underlying semantics behind
user interactions. However, existing studies face three challenges: 1) the
limited understanding of user behaviors by focusing solely on intents, 2) the
lack of robustness in categorizing intents due to arbitrary fixed numbers of
intent categories, and 3) the neglect of interacted items in modeling of user
intents. To address these challenges, we propose Intent-Interest
Disentanglement and Item-Aware Intent Contrastive Learning for Sequential
Recommendation (IDCLRec). IDCLRec disentangles user behaviors into intents
which are dynamic motivations and interests which are stable tastes of users
for a comprehensive understanding of user behaviors. A causal cross-attention
mechanism is used to identify consistent interests across interactions, while
residual behaviors are modeled as intents by modeling their temporal dynamics
through a similarity adjustment loss. In addition, without predefining the
number of intent categories, an importance-weighted attention mechanism
captures user-specific categorical intent considering the importance of intent
for each interaction. Furthermore, we introduce item-aware contrastive
learning, which aligns intents that occurred within the same interaction and
aligns each intent with the item combinations induced by that intent.
Extensive experiments
conducted on real-world datasets demonstrate the effectiveness of IDCLRec.
|
2501.07100
|
Collaborative Learning for 3D Hand-Object Reconstruction and
Compositional Action Recognition from Egocentric RGB Videos Using
Superquadrics
|
cs.CV cs.AI
|
With the availability of egocentric 3D hand-object interaction datasets,
there is increasing interest in developing unified models for hand-object pose
estimation and action recognition. However, existing methods still struggle to
recognise seen actions on unseen objects due to the limitations in representing
object shape and movement using 3D bounding boxes. Additionally, the reliance
on object templates at test time limits their generalisability to unseen
objects. To address these challenges, we propose to leverage superquadrics as
an alternative 3D object representation to bounding boxes and demonstrate their
effectiveness on both template-free object reconstruction and action
recognition tasks. Moreover, as we find that pure appearance-based methods can
outperform the unified methods, the potential benefits from 3D geometric
information remain unclear. Therefore, we study the compositionality of actions
by considering a more challenging task where the training combinations of verbs
and nouns do not overlap with the testing split. We extend H2O and FPHA
datasets with compositional splits and design a novel collaborative learning
framework that can explicitly reason about the geometric relations between
hands and the manipulated object. Through extensive quantitative and
qualitative evaluations, we demonstrate significant improvements over the
state-of-the-arts in (compositional) action recognition.
|
2501.07101
|
Dual Scale-aware Adaptive Masked Knowledge Distillation for Object
Detection
|
cs.CV
|
Recent feature masking knowledge distillation methods make use of attention
mechanisms to identify either important spatial regions or channel clues for
discriminative feature reconstruction. However, most existing strategies
perform global attention-guided feature masking distillation without delving
into fine-grained visual clues in feature maps. In particular, uncovering
locality-aware clues across different scales is conducive to reconstructing
region-aware features, thereby significantly benefiting distillation
performance. In this study, we propose a fine-grained adaptive feature masking
distillation framework for accurate object detection. Different from previous
methods in which global masking is performed on single-scale feature maps, we
explore the scale-aware feature masking by performing feature distillation
across various scales, such that the object-aware locality is encoded for
improved feature reconstruction. In addition, our fine-grained feature
distillation strategy is combined with a masking logits distillation scheme in
which logits difference between teacher and student networks is utilized to
guide the distillation process. Thus, it can help the student model to better
learn from the teacher counterpart with improved knowledge transfer. Extensive
experiments for detection task demonstrate the superiority of our method. For
example, when RetinaNet, RepPoints and Cascade Mask RCNN are used as teacher
detectors, the student network achieves mAP scores of 41.5\%, 42.9\%, and
42.6\%, respectively, outperforming state-of-the-art methods such as DMKD and
FreeKD.
|
2501.07102
|
AdaCS: Adaptive Normalization for Enhanced Code-Switching ASR
|
cs.CL cs.AI cs.SD eess.AS
|
Intra-sentential code-switching (CS) refers to the alternation between
languages that happens within a single utterance and is a significant challenge
for Automatic Speech Recognition (ASR) systems. For example, when a Vietnamese
speaker uses foreign proper names or specialized terms within their speech. ASR
systems often struggle to accurately transcribe intra-sentential CS due to
their training on monolingual data and the unpredictable nature of CS. This
issue is even more pronounced for low-resource languages, where limited data
availability hinders the development of robust models. In this study, we
propose AdaCS, a normalization model that integrates an adaptive bias attention
module (BAM) into an encoder-decoder network. This novel approach provides a
robust solution to CS ASR in unseen domains, thereby significantly enhancing
our contribution to the field. By utilizing BAM to both identify and normalize
CS phrases, AdaCS enhances its adaptive capabilities with a biased list of
words provided during inference. Our method demonstrates impressive performance
and the ability to handle unseen CS phrases across various domains. Experiments
show that AdaCS outperforms the previous state-of-the-art method on Vietnamese
CS ASR normalization with considerable WER reductions of 56.2% and 36.8% on the
two
proposed test sets.
|
2501.07104
|
RMAvatar: Photorealistic Human Avatar Reconstruction from Monocular
Video Based on Rectified Mesh-embedded Gaussians
|
cs.CV
|
We introduce RMAvatar, a novel human avatar representation with Gaussian
splatting embedded on mesh to learn clothed avatar from a monocular video. We
utilize the explicit mesh geometry to represent motion and shape of a virtual
human and implicit appearance rendering with Gaussian Splatting. Our method
consists of two main modules: Gaussian initialization module and Gaussian
rectification module. We embed Gaussians into triangular faces and control
their motion through the mesh, which ensures low-frequency motion and surface
deformation of the avatar. Due to the limitations of the LBS formulation, the
human skeleton struggles to model complex non-rigid transformations. We then
design a
pose-related Gaussian rectification module to learn fine-detailed non-rigid
deformations, further improving the realism and expressiveness of the avatar.
We conduct extensive experiments on public datasets, and RMAvatar shows
state-of-the-art performance on both rendering quality and quantitative
evaluations. Please see our project page at https://rm-avatar.github.io.
|
2501.07106
|
Efficient Multiple Temporal Network Kernel Density Estimation
|
cs.DB
|
Kernel density estimation (KDE) has become a popular method for visual
analysis in various fields, such as financial risk forecasting, crime
clustering, and traffic monitoring. KDE can identify high-density areas from
discrete datasets. However, most existing works only consider planar distance
and spatial data. In this paper, we introduce a new model, called TN-KDE, that
applies KDE-based techniques to road networks with temporal data. Specifically,
we introduce a novel solution, Range Forest Solution (RFS), which can
efficiently compute KDE values on spatiotemporal road networks. To support the
insertion operation, we present a dynamic version, called Dynamic Range Forest
Solution (DRFS). We also propose an optimization called Lixel Sharing (LS) to
share similar KDE values between two adjacent lixels. Furthermore, our
solutions support many non-polynomial kernel functions and still report exact
values. Experimental results show that our solutions are up to 6 times
faster than the state-of-the-art method.
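For contrast with the network-based variant, plain kernel density estimation along one dimension (e.g., event timestamps) looks like this; the Gaussian kernel and toy data are illustrative, and this is not the RFS algorithm:

```python
import math

def kde(x, samples, bandwidth):
    """Gaussian kernel density estimate at x from 1-D samples; one could read
    the samples as event timestamps along a single road segment (toy reading)."""
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - s) / bandwidth) for s in samples) / (len(samples) * bandwidth)

# Three clustered events and one outlier: the estimate is denser near the cluster.
events = [1.0, 1.2, 1.1, 5.0]
print(kde(1.1, events, 0.5) > kde(5.0, events, 0.5))  # True
```

TN-KDE's contribution is doing this exactly and efficiently when distances are shortest paths on a road network with a temporal dimension, rather than planar differences as above.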
|
2501.07108
|
How GPT learns layer by layer
|
cs.AI
|
Large Language Models (LLMs) excel at tasks like language processing,
strategy games, and reasoning but struggle to build generalizable internal
representations essential for adaptive decision-making in agents. For agents to
effectively navigate complex environments, they must construct reliable world
models. While LLMs perform well on specific benchmarks, they often fail to
generalize, leading to brittle representations that limit their real-world
effectiveness. Understanding how LLMs build internal world models is key to
developing agents capable of consistent, adaptive behavior across tasks. We
analyze OthelloGPT, a GPT-based model trained on Othello gameplay, as a
controlled testbed for studying representation learning. Despite being trained
solely on next-token prediction with random valid moves, OthelloGPT shows
meaningful layer-wise progression in understanding board state and gameplay.
Early layers capture static attributes like board edges, while deeper layers
reflect dynamic tile changes. To interpret these representations, we compare
Sparse Autoencoders (SAEs) with linear probes, finding that SAEs offer more
robust, disentangled insights into compositional features, whereas linear
probes mainly detect features useful for classification. We use SAEs to decode
features related to tile color and tile stability, a previously unexamined
feature that reflects complex gameplay concepts like board control and
long-term planning. We track the layer-wise progression of tile-color
decoding accuracy using both SAEs and linear probes to compare their
effectiveness at capturing what the model is learning. Although we begin with a smaller language
model, OthelloGPT, this study establishes a framework for understanding the
internal representations learned by GPT models, transformers, and LLMs more
broadly. Our code is publicly available: https://github.com/ALT-JS/OthelloSAE.
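As a toy illustration of what a linear probe measures, the sketch below fits a one-dimensional least-squares probe that decodes a synthetic "tile colour" signal from a noisy stand-in for a hidden activation; the data, dimensionality, and coefficients are invented for illustration and are far simpler than OthelloGPT's real activations.

```python
import random

random.seed(0)

# Toy stand-in for activations: one hidden unit linearly encodes tile
# colour (+1 black, -1 white) plus Gaussian noise.
colours = [random.choice([1.0, -1.0]) for _ in range(200)]
acts = [0.8 * c + random.gauss(0.0, 0.3) for c in colours]

# Closed-form 1-D least-squares probe: w = cov(a, c) / var(a).
mean_a = sum(acts) / len(acts)
mean_c = sum(colours) / len(colours)
var_a = sum((a - mean_a) ** 2 for a in acts)
cov = sum((a - mean_a) * (c - mean_c) for a, c in zip(acts, colours))
w = cov / var_a
b = mean_c - w * mean_a

preds = [1.0 if w * a + b > 0 else -1.0 for a in acts]
accuracy = sum(p == c for p, c in zip(preds, colours)) / len(colours)
```

High probe accuracy on a feature indicates it is linearly decodable from the layer, which is the classification-oriented signal linear probes excel at; SAEs instead aim to disentangle such features into sparse components.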
|
2501.07109
|
The Quest for Visual Understanding: A Journey Through the Evolution of
Visual Question Answering
|
cs.CV
|
Visual Question Answering (VQA) is an interdisciplinary field that bridges
the gap between computer vision (CV) and natural language processing (NLP),
enabling Artificial Intelligence (AI) systems to answer questions about images.
Since its inception in 2015, VQA has rapidly evolved, driven by advances in
deep learning, attention mechanisms, and transformer-based models. This survey
traces the journey of VQA from its early days, through major breakthroughs,
such as attention mechanisms, compositional reasoning, and the rise of
vision-language pre-training methods. We highlight key models, datasets, and
techniques that shaped the development of VQA systems, emphasizing the pivotal
role of transformer architectures and multimodal pre-training in driving recent
progress. Additionally, we explore specialized applications of VQA in domains
like healthcare and discuss ongoing challenges, such as dataset bias, model
interpretability, and the need for common-sense reasoning. Lastly, we discuss
the emerging trends in large multimodal language models and the integration of
external knowledge, offering insights into the future directions of VQA. This
paper aims to provide a comprehensive overview of the evolution of VQA,
highlighting both its current state and potential advancements.
|
2501.07110
|
Dynamic Multimodal Fusion via Meta-Learning Towards Micro-Video
Recommendation
|
cs.CV cs.IR cs.MM
|
Multimodal information (e.g., visual, acoustic, and textual) has been widely
used to enhance representation learning for micro-video recommendation. For
integrating multimodal information into a joint representation of micro-video,
multimodal fusion plays a vital role in the existing micro-video recommendation
approaches. However, the static multimodal fusion used in previous studies is
insufficient to model the various relationships among multimodal information of
different micro-videos. In this paper, we develop a novel meta-learning-based
multimodal fusion framework called Meta Multimodal Fusion (MetaMMF), which
dynamically assigns parameters to the multimodal fusion function for each
micro-video during its representation learning. Specifically, MetaMMF regards
the multimodal fusion of each micro-video as an independent task. Based on the
meta information extracted from the multimodal features of the input task,
MetaMMF parameterizes a neural network as the item-specific fusion function via
a meta learner. We perform extensive experiments on three benchmark datasets,
demonstrating significant improvements over several state-of-the-art
multimodal recommendation models, like MMGCN, LATTICE, and InvRL. Furthermore,
we lighten our model by adopting canonical polyadic decomposition to improve
the training efficiency, and validate its effectiveness through experimental
results. Codes are available at https://github.com/hanliu95/MetaMMF.
|
2501.07111
|
ListConRanker: A Contrastive Text Reranker with Listwise Encoding
|
cs.CL cs.IR
|
Reranker models re-rank passages according to their semantic similarity to
a given query; they have recently received more attention due to the wide
application of Retrieval-Augmented Generation. Most previous methods apply
pointwise encoding, meaning each passage is encoded together with the query
but in isolation from the other passages. For a reranker, however, the
comparisons between passages for a given query matter even more; capturing
them requires listwise encoding. In addition, previous models are trained
with the cross-entropy loss, which leads to unsmooth gradient changes during
training and low training efficiency.
address these issues, we propose a novel Listwise-encoded Contrastive text
reRanker (ListConRanker). It can help the passage to be compared with other
passages during the encoding process, and enhance the contrastive information
between positive examples and between positive and negative examples. At the
same time, we use the circle loss to train the model to increase the
flexibility of gradients and solve the problem of training efficiency.
Experimental results show that ListConRanker achieves state-of-the-art
performance on the reranking benchmark of Chinese Massive Text Embedding
Benchmark, including the cMedQA1.0, cMedQA2.0, MMarcoReranking, and T2Reranking
datasets.
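For reference, the circle loss (Sun et al., 2020) over positive and negative similarity scores can be computed as below; the scores and hyperparameters are illustrative, not taken from the paper.

```python
import math

def circle_loss(sp, sn, m=0.25, gamma=32.0):
    """Circle loss over positive similarities `sp` and negatives `sn`
    (Sun et al., 2020): weights each score by its distance from an
    optimum, yielding smoother gradients than cross-entropy."""
    dp, dn = 1.0 - m, m  # per-class decision margins
    pos = sum(math.exp(-gamma * max(0.0, 1.0 + m - s) * (s - dp)) for s in sp)
    neg = sum(math.exp(gamma * max(0.0, s + m) * (s - dn)) for s in sn)
    return math.log(1.0 + neg * pos)

# Well-separated scores give a near-zero loss; overlapping scores do not.
easy = circle_loss([0.95], [0.05])
hard = circle_loss([0.40], [0.60])
```

The adaptive weighting terms `max(0, 1 + m - s)` and `max(0, s + m)` are what gives the gradient flexibility cited above: already-correct pairs contribute little, while hard pairs dominate.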
|
2501.07113
|
Matching Free Depth Recovery from Structured Light
|
cs.CV
|
We present a novel approach for depth estimation from images captured by
structured light systems. Unlike many previous methods that rely on an image
matching process, our approach uses a density voxel grid to represent scene
geometry, which is trained via self-supervised differentiable volume rendering.
Our method leverages color fields derived from projected patterns in structured
light systems during the rendering process, enabling the isolated optimization
of the geometry field. This contributes to faster convergence and high-quality
output. Additionally, we incorporate normalized device coordinates (NDC), a
distortion loss, and a novel surface-based color loss to enhance geometric
fidelity. Experimental results demonstrate that our method outperforms existing
matching-based techniques in geometric performance for few-shot scenarios,
achieving approximately a 60% reduction in average estimated depth errors on
synthetic scenes and about 30% on real-world captured scenes. Furthermore, our
approach delivers fast training, with a speed roughly three times faster than
previous matching-free methods that employ implicit representations.
|
2501.07114
|
Duplex: Dual Prototype Learning for Compositional Zero-Shot Learning
|
cs.CV
|
Compositional Zero-Shot Learning (CZSL) aims to enable models to recognize
novel compositions of visual states and objects that were absent during
training. Existing methods predominantly focus on learning semantic
representations of seen compositions but often fail to disentangle the
independent features of states and objects in images, thereby limiting their
ability to generalize to unseen compositions. To address this challenge, we
propose Duplex, a novel dual-prototype learning method that integrates semantic
and visual prototypes through a carefully designed dual-branch architecture,
enabling effective representation learning for compositional tasks. Duplex
utilizes a Graph Neural Network (GNN) to adaptively update visual prototypes,
capturing complex interactions between states and objects. Additionally, it
leverages the strong visual-semantic alignment of pre-trained Vision-Language
Models (VLMs) and employs a multi-path architecture combined with prompt
engineering to align image and text representations, ensuring robust
generalization. Extensive experiments on three benchmark datasets demonstrate
that Duplex outperforms state-of-the-art methods in both closed-world and
open-world settings.
|
2501.07120
|
MSV-Mamba: A Multiscale Vision Mamba Network for Echocardiography
Segmentation
|
eess.IV cs.CV
|
Ultrasound imaging frequently encounters challenges, such as those related to
elevated noise levels, diminished spatiotemporal resolution, and the complexity
of anatomical structures. These factors significantly hinder the model's
ability to accurately capture and analyze structural relationships and dynamic
patterns across various regions of the heart. Mamba, an emerging model, is a
cutting-edge approach that has been widely applied to diverse vision and
language tasks. To this end, this paper introduces a U-shaped deep learning
model incorporating a large-window Mamba scale (LMS) module and a hierarchical
feature fusion approach for echocardiographic segmentation. First, a cascaded
residual block serves as an encoder and is employed to incrementally extract
multiscale detailed features. Second, a large-window multiscale Mamba module is
integrated into the decoder to capture global dependencies across regions and
enhance the segmentation capability for complex anatomical structures.
Furthermore, our model introduces auxiliary losses at each decoder layer and
employs a dual attention mechanism to fuse multilayer features both spatially
and across channels. This approach enhances segmentation performance and
accuracy in delineating complex anatomical structures. Finally, the
experimental results using the EchoNet-Dynamic and CAMUS datasets demonstrate
that the model outperforms other methods in terms of both accuracy and
robustness. For the segmentation of the left ventricular endocardium
(${LV}_{endo}$), the model achieved values of 95.01 and 93.36 on the two
datasets, respectively, while for the left ventricular epicardium
(${LV}_{epi}$), values of 87.35 and 87.80, respectively, were achieved. This
represents an improvement of between 0.54 and 1.11 over the best-performing
comparison model.
|
2501.07121
|
The Value of Battery Energy Storage in the Continuous Intraday Market:
Forecast vs. Perfect Foresight Strategies
|
cs.CE
|
Grid-scale battery energy storage systems (BESSs) can provide flexibility to
the power system and capture short-term price volatility by shifting energy in
time through controlled charging and discharging. The highly volatile European
continuous intraday (CID) market allows trading until just a few minutes before
physical delivery, offering significant earning potential. However, its high
trading frequency poses substantial modeling challenges. Accurate modeling of
BESSs trading in the CID market is essential to estimate revenue potential and
optimize trading strategies. Additionally, comparing CID profits with other
spot markets helps determine whether participating in the CID is worthwhile
despite its complexity. We propose a forecast-driven model to optimize BESS
trading in the CID market. Our strategy employs a rolling window modeling
framework to capture market dynamics. Price forecasts for impending CID
products are generated at the beginning of each window and used to optimize
trading schedules for subsequent execution. We also benchmark our approach
across various spot markets, offering a broad cross-market profit comparison.
We evaluate our forecast-driven model across different BESS power-to-capacity
ratios, comparing it to a perfect-foresight scenario and key CID market
indices, such as ID1 and ID3. Using real 2023 German CID data, a 1 MW/1 MWh
system adopting our method earns EUR 146 237, only 11% below perfect foresight,
surpassing all other markets and indices. Our approach surpasses ID1 and ID3 by
over 4% and 32%, respectively, confirming ID1 as a reliable lower-bound
estimate for earnings potential in the CID market.
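A perfect-foresight benchmark of the kind compared against above can be sketched as a small backward dynamic program over a discretized state of charge. This is a toy model with our own simplifications: hourly granularity, one grid step of energy shifted per hour, a single round-trip efficiency, and no trading fees.

```python
def arbitrage_profit(prices, capacity=1.0, power=1.0, eta=0.9):
    """Perfect-foresight arbitrage profit of a battery against an hourly
    price series, via backward DP over a discretized state of charge.
    One grid step equals the energy the battery can shift in one hour."""
    step = min(power, capacity)              # MWh moved per hour
    levels = round(capacity / step) + 1      # SoC grid points
    value = [0.0] * levels                   # value-to-go after last hour
    for p in reversed(prices):
        value = [
            max(
                value[i],                                                  # idle
                value[i + 1] - p * step / eta if i + 1 < levels else float("-inf"),  # charge
                value[i - 1] + p * step * eta if i > 0 else float("-inf"),           # discharge
            )
            for i in range(levels)
        ]
    return value[0]                          # battery starts empty

# Buy low, sell high, twice: 2 * (50 * 0.9 - 10 / 0.9) EUR
profit = arbitrage_profit([10.0, 50.0, 10.0, 50.0])
```

Replacing the known `prices` with rolling-window forecasts, as the paper does, turns this perfect-foresight bound into an executable trading strategy.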
|
2501.07123
|
Inferring Interpretable Models of Fragmentation Functions using Symbolic
Regression
|
hep-ph cs.LG cs.SC hep-th
|
Machine learning is rapidly making its path into natural sciences, including
high-energy physics. We present the first study that infers, directly from
experimental data, a functional form of fragmentation functions. The latter
represent a key ingredient to describe physical observables measured in
high-energy physics processes that involve hadron production, and to predict
their values at different energies. Fragmentation functions cannot be
calculated from first principles and must instead be determined from data.
Traditional approaches rely on global fits of experimental data, assuming a
functional form inspired by phenomenological models and learning only its
parameters. Our novel approach instead uses an ML technique, symbolic
regression, to learn an analytical model from measured charged-hadron
multiplicities. The function learned by symbolic regression resembles the
Lund string function and describes the data well, making it a potential
candidate for use in global FF fits. This study illustrates an approach that
can be followed in such QCD-related phenomenology studies and, more
generally, across the sciences.
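To give the flavour of fitting a Lund-like functional form to data, the toy below grid-searches the shape parameters of an assumed form and solves the normalisation in closed form. This is only a stand-in: genuine symbolic regression, as used in the paper, searches over expression structures rather than the parameters of a fixed form, and all values here are synthetic.

```python
import math

def lund(z, n, a, b):
    """Lund-string-like form f(z) = N (1-z)^a / z * exp(-b/z)."""
    return n * (1.0 - z) ** a / z * math.exp(-b / z)

# Synthetic "measurements" generated from a known parameter set.
zs = [0.1 * k for k in range(1, 10)]
ys = [lund(z, 2.0, 0.68, 0.5) for z in zs]

# Grid-search shape parameters (a, b); the normalisation N enters
# linearly, so it has a closed-form least-squares solution per candidate.
best = None
for a_c in [0.5 + 0.02 * i for i in range(20)]:
    for b_c in [0.3 + 0.02 * j for j in range(20)]:
        basis = [(1.0 - z) ** a_c / z * math.exp(-b_c / z) for z in zs]
        n_c = sum(f * y for f, y in zip(basis, ys)) / sum(f * f for f in basis)
        e = sum((n_c * f - y) ** 2 for f, y in zip(basis, ys))
        if best is None or e < best[0]:
            best = (e, n_c, a_c, b_c)

err, n, a, b = best
```

The search recovers the generating parameters because the true values lie on the grid; symbolic regression faces the much harder problem of discovering the form itself.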
|
2501.07124
|
LLM360 K2: Building a 65B 360-Open-Source Large Language Model from
Scratch
|
cs.LG
|
We detail the training of the LLM360 K2-65B model, scaling up our 360-degree
OPEN SOURCE approach to the largest and most powerful models under project
LLM360. While open-source LLMs continue to advance, the answer to "How are the
largest LLMs trained?" remains unclear within the community. The implementation
details for such high-capacity models are often protected due to business
considerations associated with their high cost. This lack of transparency
prevents LLM researchers from leveraging valuable insights from prior
experience, e.g., "What are the best practices for addressing loss spikes?" The
LLM360 K2 project addresses this gap by providing full transparency and access
to resources accumulated during the training of LLMs at the largest scale. This
report highlights key elements of the K2 project, including our first model, K2
DIAMOND, a 65 billion-parameter LLM that surpasses LLaMA-65B and rivals
LLaMA2-70B, while requiring fewer FLOPs and tokens. We detail the
implementation steps and present a longitudinal analysis of K2 DIAMOND's
capabilities throughout its training process. We also outline ongoing projects
such as TXT360, setting the stage for future models in the series. By offering
previously unavailable resources, the K2 project also resonates with the
360-degree OPEN SOURCE principles of transparency, reproducibility, and
accessibility, which we believe are vital in the era of resource-intensive AI
research.
|
2501.07126
|
A Federated Deep Learning Framework for Cell-Free RSMA Networks
|
eess.SY cs.SY
|
Next-generation wireless networks are poised to benefit significantly from
the integration of three key technologies (KTs): Rate-Splitting Multiple Access
(RSMA), cell-free architectures, and federated learning. Each of these
technologies offers distinct advantages in terms of security, robustness, and
distributed structure. In this paper, we propose a novel cell-free network
architecture that incorporates RSMA and employs machine learning techniques
within a federated framework. This combination leverages the strengths of each
KT, creating a synergistic effect that maximizes the benefits of security,
robustness, and distributed structure. We formally formulate the access point
(AP) selection and precoder design for max-min rate optimization in a cell-free
MIMO RSMA network. Our proposed solution scheme involves a three-block
procedure. The first block trains deep reinforcement learning (DRL) neural
networks to obtain RSMA precoders, assuming full connectivity between APs and
user equipments (UEs). The second block uses these precoders and principal
component analysis (PCA) to assign APs to UEs by removing a subset of AP-UE
connections. The final block fine-tunes the RSMA precoders by incorporating the
associated APs into a second DRL network. To leverage the distributed nature of
the cell-free network, this process is implemented in a Federated Deep
Reinforcement Learning (FDRL) structure operating through the cooperation of
APs and a central processing unit (CPU). Simulation results demonstrate that
the proposed FDRL approach performs comparably to a benchmark centralized DRL
scheme. Our FDRL approach provides a balanced trade-off, maintaining high
performance with enhanced security and reduced processing demands.
|
2501.07133
|
Robust Single Object Tracking in LiDAR Point Clouds under Adverse
Weather Conditions
|
cs.CV
|
3D single object tracking (3DSOT) in LiDAR point clouds is a critical task
for outdoor perception, enabling real-time perception of object location,
orientation, and motion. Despite the impressive performance of current 3DSOT
methods, evaluating them on clean datasets inadequately reflects their
comprehensive performance, as the adverse weather conditions of real-world
surroundings have not been considered. One of the main obstacles is the lack of
adverse weather benchmarks for the evaluation of 3DSOT. To this end, this work
proposes a challenging benchmark for LiDAR-based 3DSOT in adverse weather,
which comprises two synthetic datasets (KITTI-A and nuScenes-A) and one
real-world dataset (CADC-SOT) spanning three weather types: rain, fog, and
snow. Based on this benchmark, we evaluate the robustness of five
representative 3D trackers from different tracking frameworks, observing
significant performance degradation. This prompts the question: What are the factors that
cause current advanced methods to fail on such adverse weather samples?
Consequently, we explore the impacts of adverse weather and answer the above
question from three perspectives: 1) target distance; 2) template shape
corruption; and 3) target shape corruption. Finally, based on domain
randomization and contrastive learning, we design a dual-branch tracking
framework for adverse weather, named DRCT, which achieves excellent
performance on the proposed benchmarks.
|
2501.07139
|
FlexQuant: Elastic Quantization Framework for Locally Hosted LLM on Edge
Devices
|
cs.AI cs.PF
|
Deploying LLMs on edge devices presents serious technical challenges. Memory
elasticity is crucial for edge devices with unified memory, where memory is
shared and fluctuates dynamically. Existing solutions suffer from either poor
transition granularity or high storage costs. We propose FlexQuant, a novel
elasticity framework that generates an ensemble of quantized models, providing
an elastic hosting solution with 15x granularity improvement and 10x storage
reduction compared to SoTA methods. FlexQuant works with most quantization
methods and creates a family of trade-off options under various storage limits
through our pruning method. It brings great performance and flexibility to the
edge deployment of LLMs.
|
2501.07145
|
A User's Guide to $\texttt{KSig}$: GPU-Accelerated Computation of the
Signature Kernel
|
stat.ML cs.LG
|
The signature kernel is a positive definite kernel for sequential and
temporal data that has become increasingly popular in machine learning
applications due to its powerful theoretical guarantees, strong empirical
performance, and recently introduced scalable variations. In this
chapter, we give a short introduction to $\texttt{KSig}$, a
$\texttt{Scikit-Learn}$ compatible Python package that implements various
GPU-accelerated algorithms for computing signature kernels, and performing
downstream learning tasks. We also introduce a new algorithm based on tensor
sketches which gives strong performance compared to existing algorithms. The
package is available at https://github.com/tgcsaba/ksig.
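The core of PDE-based signature kernel computation can be sketched as a first-order finite-difference solver for the Goursat PDE that the kernel satisfies. This is a minimal, unvectorized sketch of the general idea; the package's GPU-accelerated solvers and its tensor-sketch algorithm are far more elaborate.

```python
def sig_kernel(x, y):
    """First-order finite-difference solver for the signature kernel PDE
    d^2 k / ds dt = <dx/ds, dy/dt> * k, with k = 1 on the boundary.
    `x` and `y` are sequences of points (lists of coordinates)."""
    dx = [[b - a for a, b in zip(p, q)] for p, q in zip(x, x[1:])]
    dy = [[b - a for a, b in zip(p, q)] for p, q in zip(y, y[1:])]
    k = [[1.0] * (len(dy) + 1) for _ in range(len(dx) + 1)]
    for i, dxi in enumerate(dx):
        for j, dyj in enumerate(dy):
            inc = sum(a * b for a, b in zip(dxi, dyj))  # <dx_i, dy_j>
            k[i + 1][j + 1] = k[i + 1][j] + k[i][j + 1] + k[i][j] * (inc - 1.0)
    return k[-1][-1]

# For one-step paths this scheme gives k = 1 + <increment_x, increment_y>.
same = sig_kernel([[0.0], [1.0]], [[0.0], [1.0]])
```

The quadratic cost in sequence length of this recursion is what motivates the scalable variants mentioned above.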
|
2501.07146
|
TIMRL: A Novel Meta-Reinforcement Learning Framework for Non-Stationary
and Multi-Task Environments
|
cs.LG cs.AI
|
In recent years, meta-reinforcement learning (meta-RL) algorithms have been
proposed to improve sample efficiency in the field of decision-making and
control, enabling agents to learn new knowledge from a small number of samples.
However, most research uses a Gaussian distribution to extract the task
representation, which adapts poorly to tasks that change in non-stationary
environments. To address this problem, we propose a novel meta-RL method
that leverages a Gaussian mixture model and a transformer network to
construct the task inference model. The Gaussian mixture model extends the
task representation and performs explicit encoding of tasks. Specifically,
the classification of a task is encoded through the transformer network to
determine the Gaussian component corresponding to that task. By leveraging
task labels, the transformer network is trained with supervised learning. We
validate our method on MuJoCo benchmarks with non-stationary and multi-task
environments. Experimental results demonstrate that the proposed method
dramatically improves sample efficiency and accurately recognizes the
classification of tasks, while performing strongly in these environments.
|
2501.07148
|
Implementing LoRa MIMO System for Internet of Things
|
cs.CY cs.AR cs.NI cs.SY eess.SY
|
Bandwidth constraints limit LoRa implementations. Contemporary IoT
applications require higher throughput than that provided by LoRa. This work
introduces a LoRa Multiple Input Multiple Output (MIMO) system and a spatial
multiplexing algorithm to address LoRa's bandwidth limitation. The transceivers
in the proposed approach modulate the signals on distinct frequencies of the
same LoRa band. A Frequency Division Multiplexing (FDM) method is used at the
transmitters to provide a wider MIMO channel. Unlike conventional Orthogonal
Frequency Division Multiplexing (OFDM) techniques, this work exploits the
orthogonality of LoRa signals, facilitated by LoRa's proprietary Chirp Spread
Spectrum (CSS) modulation, to perform OFDM in the proposed LoRa MIMO system.
By varying the Spreading Factor (SF) and bandwidth of LoRa signals, orthogonal
signals can transmit on the same frequency irrespective of the FDM. Even though
the channel correlation is minimal for different spreading factors and
bandwidths, different Carrier Frequencies (CF) ensure the signals do not
overlap and provide additional degrees of freedom. This work assesses the
proposed model's performance and conducts an extensive analysis to provide an
overview of resources consumed by the proposed system. Finally, this work
provides the detailed results of a thorough evaluation of the model on test
hardware.
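The near-orthogonality of chirps with different chirp rates, which the spatial multiplexing above exploits, can be illustrated numerically with a toy baseband model. This is our own simplification, not LoRa's exact CSS waveform: two chirps of the same length whose phase advances quadratically at different rates have a small normalised cross-correlation.

```python
import cmath

def chirp(n_samples, rate):
    """Discrete baseband up-chirp whose phase grows quadratically at `rate`
    (a stand-in for chirps with different spreading factors)."""
    return [cmath.exp(1j * cmath.pi * rate * n * n / n_samples)
            for n in range(n_samples)]

def norm_corr(a, b):
    """Magnitude of the normalised inner product of two complex signals."""
    inner = sum(x * y.conjugate() for x, y in zip(a, b))
    na = sum(abs(x) ** 2 for x in a) ** 0.5
    nb = sum(abs(y) ** 2 for y in b) ** 0.5
    return abs(inner) / (na * nb)

n = 256
same = norm_corr(chirp(n, 1.0), chirp(n, 1.0))   # identical chirps -> 1.0
cross = norm_corr(chirp(n, 1.0), chirp(n, 2.0))  # different rates -> small
```

Low cross-correlation between such signals is what lets them share the same band with minimal mutual interference.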
|
2501.07154
|
Privacy-Preserving Data Quality Assessment for Time-Series IoT Sensors
|
cs.IT math.IT
|
Data from Internet of Things (IoT) sensors has emerged as a key contributor
to decision-making processes in various domains. However, the quality of the
data is crucial to the effectiveness of applications built on it, and
assessment of the data quality is heavily context-dependent. Further,
preserving the privacy of the data during quality assessment is critical in
domains where sensitive data is prevalent. This paper proposes a novel
framework for automated, objective, and privacy-preserving data quality
assessment of time-series data from IoT sensors deployed in smart cities. We
leverage custom, autonomously computable metrics that parameterise the temporal
performance and adherence to a declarative schema document to achieve
objectivity. Additionally, we utilise a trusted execution environment to create
a "data-blind" model that ensures individual privacy, eliminates assessee bias,
and enhances adaptability across data types. This paper describes this data
quality assessment methodology for IoT sensors, emphasising its relevance
within the smart-city context while addressing the growing need for privacy in
the face of extensive data collection practices.
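Two autonomously computable metrics of the kind described above, completeness and regularity against a declared sampling period, might look like the sketch below. The metric definitions, names, and tolerance are our own illustration, not the paper's exact parameterisation or schema format.

```python
def temporal_quality(timestamps, declared_period, tolerance=0.1):
    """Schema-adherence metrics for an IoT time series: completeness
    (observed vs. expected samples over the span) and regularity
    (fraction of inter-arrival gaps within `tolerance` of the declared
    sampling period)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    expected = (timestamps[-1] - timestamps[0]) / declared_period
    completeness = min(1.0, len(gaps) / expected) if expected else 1.0
    ok = sum(abs(g - declared_period) <= tolerance * declared_period
             for g in gaps)
    regularity = ok / len(gaps) if gaps else 1.0
    return completeness, regularity

# A sensor declared at a 10 s period that drops one sample (gap of 20 s).
comp, reg = temporal_quality([0, 10, 20, 40, 50], declared_period=10)
```

Because such metrics depend only on timestamps and the declarative schema, they can be evaluated inside a trusted execution environment without exposing the sensor payloads themselves.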
|
2501.07155
|
AlphaNet: Scaling Up Local Frame-based Atomistic Foundation Model
|
cs.LG
|
We present AlphaNet, a local frame-based equivariant model designed to
achieve both accurate and efficient simulations for atomistic systems.
Recently, machine learning force fields (MLFFs) have gained prominence in
molecular dynamics simulations due to their advantageous efficiency-accuracy
balance compared to classical force fields and quantum mechanical calculations,
alongside their transferability across various systems. Despite the
advancements in improving model accuracy, the efficiency and scalability of
MLFFs remain significant obstacles in practical applications. AlphaNet enhances
computational efficiency and accuracy by leveraging the local geometric
structures of atomic environments through the construction of equivariant local
frames and learnable frame transitions. We substantiate the efficacy of
AlphaNet across diverse datasets, including defected graphene, formate
decomposition, zeolites, and surface reactions. AlphaNet consistently surpasses
well-established models, such as NequIP and DeepPot, in terms of both energy
and force prediction accuracy. Notably, AlphaNet offers one of the best
trade-offs between computational efficiency and accuracy among existing models.
Moreover, AlphaNet exhibits scalability across a broad spectrum of system and
dataset sizes, affirming its versatility.
|
2501.07157
|
CureGraph: Contrastive Multi-Modal Graph Representation Learning for
Urban Living Circle Health Profiling and Prediction
|
cs.AI
|
The early detection and prediction of health status decline among the elderly
at the neighborhood level are of great significance for urban planning and
public health policymaking. While existing studies affirm the connection
between living environments and health outcomes, most rely on single data
modalities or simplistic feature concatenation of multi-modal information,
limiting their ability to comprehensively profile the health-oriented urban
environments. To fill this gap, we propose CureGraph, a contrastive multi-modal
representation learning framework for urban health prediction that employs
graph-based techniques to infer the prevalence of common chronic diseases among
the elderly within the urban living circles of each neighborhood. CureGraph
leverages rich multi-modal information, including photos and textual reviews of
residential areas and their surrounding points of interest, to generate urban
neighborhood embeddings. By integrating pre-trained visual and textual encoders
with graph modeling techniques, CureGraph captures cross-modal spatial
dependencies, offering a comprehensive understanding of urban environments
tailored to elderly health considerations. Extensive experiments on real-world
datasets demonstrate that CureGraph improves the best baseline by $28\%$ on
average in terms of $R^2$ across elderly disease risk prediction tasks.
Moreover, the model enables the identification of stage-wise chronic disease
progression and supports comparative public health analysis across
neighborhoods, offering actionable insights for sustainable urban development
and enhanced quality of life. The code is publicly available at
https://github.com/jinlin2021/CureGraph.
|
2501.07158
|
Eye Sclera for Fair Face Image Quality Assessment
|
cs.CV cs.AI
|
Fair operational systems are crucial in gaining and maintaining society's
trust in face recognition systems (FRS). FRS start with capturing an image and
assessing its quality before using it further for enrollment or verification.
Fair Face Image Quality Assessment (FIQA) schemes therefore become equally
important in the context of fair FRS. This work examines the sclera as a
quality assessment region for obtaining a fair FIQA. The sclera region is
agnostic to demographic variations and skin colour for assessing the quality of
a face image. We analyze three skin tone related ISO/IEC face image quality
assessment measures and assess the sclera region as an alternative area for
assessing FIQ. Our analysis of a face dataset of individuals from different
demographic groups, representing different skin tones, indicates that the
sclera region alone can be used to measure the dynamic range and the over-
and under-exposure of a face. Being agnostic to skin tone, i.e., demographic
factors, the sclera region provides equal utility as a fair FIQA, as shown by
our Error-vs-Discard Characteristic (EDC) curve analysis.
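A sketch of how region-restricted exposure measures could be computed on the pixels under a sclera mask is shown below; the saturation thresholds and function name are illustrative choices of ours, not the ISO/IEC-specified values.

```python
def region_exposure(pixels):
    """Quality measures on a pixel region (e.g. under a sclera mask),
    with intensities in 0..255: dynamic range plus over- and
    under-exposure fractions (thresholds are illustrative)."""
    lo, hi = min(pixels), max(pixels)
    over = sum(p >= 247 for p in pixels) / len(pixels)
    under = sum(p <= 8 for p in pixels) / len(pixels)
    return hi - lo, over, under

# A washed-out region: narrow usable range, many saturated pixels.
rng, over, under = region_exposure([250, 252, 255, 255, 200])
```

Computed on the sclera rather than the whole face, such measures are largely decoupled from skin tone, which is the fairness property argued for above.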
|
2501.07161
|
QuantuneV2: Compiler-Based Local Metric-Driven Mixed Precision
Quantization for Practical Embedded AI Applications
|
cs.AI
|
Mixed-precision quantization methods have been proposed to reduce model size
while minimizing accuracy degradation. However, existing studies require
retraining and do not consider the computational overhead and intermediate
representations (IR) generated during the compilation process, limiting their
application at the compiler level. This computational overhead refers to the
runtime latency caused by frequent quantization and dequantization operations
during inference. Performing these operations at the individual operator level
causes significant runtime delays. To address these issues, we propose
QuantuneV2, a compiler-based mixed-precision quantization method designed for
practical embedded AI applications. QuantuneV2 performs inference only twice,
once before quantization and once after quantization, and operates with a
computational complexity of O(n) that increases linearly with the number of
model parameters. We also stabilize the sensitivity analysis by using local
metrics such as weights, activation values, the Signal-to-Quantization-Noise
Ratio, and the Mean Squared Error, and we reduce computational overhead by
choosing the best IR and applying operator fusion. Experimental results show
that QuantuneV2 achieved up to a 10.28 percent improvement in accuracy and a
12.52 percent increase in speed compared to existing methods across five
models: ResNet18v1, ResNet50v1, SqueezeNetv1, VGGNet, and MobileNetv2. This
demonstrates that QuantuneV2 enhances model performance while maintaining
computational efficiency, making it suitable for deployment in embedded AI
environments.
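The SQNR local metric mentioned above can be computed for symmetric uniform quantization as in the generic sketch below; this illustrates the metric itself, not QuantuneV2's implementation or its per-operator sensitivity procedure.

```python
import math

def sqnr_db(weights, bits=8):
    """Signal-to-quantization-noise ratio (dB) of symmetric uniform
    round-to-nearest quantization of a weight list."""
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    deq = [round(w / scale) * scale for w in weights]  # quantize + dequantize
    signal = sum(w * w for w in weights)
    noise = sum((w - q) ** 2 for w, q in zip(weights, deq))
    return 10.0 * math.log10(signal / noise) if noise else float("inf")

ws = [0.5, -1.0, 0.25, 0.75, -0.3]
low_precision = sqnr_db(ws, bits=4)   # coarser grid -> lower SQNR
high_precision = sqnr_db(ws, bits=8)
```

Layers whose SQNR drops sharply at low bit widths are the natural candidates to keep at higher precision in a mixed-precision assignment.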
|
2501.07163
|
Adaptive Noise-Tolerant Network for Image Segmentation
|
cs.CV
|
Unlike image classification and annotation, for which deep network models
have achieved dominating superior performances compared to traditional computer
vision algorithms, deep learning for automatic image segmentation still faces
critical challenges. One of such hurdles is to obtain ground-truth
segmentations as the training labels for deep network training. Especially when
we study biomedical images, such as histopathological images (histo-images), it
is unrealistic to ask for manual segmentation labels as the ground truth for
training due to the fine image resolution as well as the large image size and
complexity. In this paper, instead of relying on clean segmentation labels, we
study whether and how integrating imperfect or noisy segmentation results from
off-the-shelf segmentation algorithms may help achieve better segmentation
results through a new Adaptive Noise-Tolerant Network (ANTN) model. We extend
the noisy label deep learning to image segmentation with two novel aspects: (1)
multiple noisy labels can be integrated into one deep learning model; (2) noisy
segmentation modeling, including probabilistic parameters, is adaptive,
depending on the given testing image appearance. Implementation of the new ANTN
model on both the synthetic data and real-world histo-images demonstrates its
effectiveness and superiority over off-the-shelf and other existing
deep-learning-based image segmentation algorithms.
|
2501.07166
|
Natural Language-Assisted Multi-modal Medication Recommendation
|
cs.AI
|
Combinatorial medication recommendation (CMR) is a fundamental task of
healthcare, which offers opportunities for clinical physicians to provide more
precise prescriptions for patients with intricate health conditions,
particularly in the scenarios of long-term medical care. Previous research
efforts have sought to extract meaningful information from electronic health
records (EHRs) to facilitate combinatorial medication recommendations. Existing
learning-based approaches further consider the chemical structures of
medications, but ignore the textual medication descriptions in which the
functionalities are clearly described. Furthermore, the textual knowledge
derived from the EHRs of patients remains largely underutilized. To address
these issues, we introduce the Natural Language-Assisted Multi-modal Medication
Recommendation (NLA-MMR), a multi-modal alignment framework designed to learn
knowledge from the patient view and medication view jointly. Specifically,
NLA-MMR formulates CMR as an alignment problem from patient and medication
modalities. In this vein, we employ pretrained language models (PLMs) to extract
in-domain knowledge regarding patients and medications, serving as the
foundational representation for both modalities. In the medication modality, we
exploit both chemical structures and textual descriptions to create medication
representations. In the patient modality, we generate the patient
representations based on textual descriptions of diagnosis, procedure, and
symptom. Extensive experiments conducted on three publicly accessible datasets
demonstrate that NLA-MMR achieves new state-of-the-art performance, with a
notable average improvement of 4.72% in Jaccard score. Our source code is
publicly available on https://github.com/jtan1102/NLA-MMR_CIKM_2024.
|
2501.07171
|
BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and
Vision-Language Models Derived from Scientific Literature
|
cs.CV cs.CL
|
The development of vision-language models (VLMs) is driven by large-scale and
diverse multimodal datasets. However, progress toward generalist biomedical
VLMs is limited by the lack of annotated, publicly accessible datasets across
biology and medicine. Existing efforts are restricted to narrow domains,
missing the full diversity of biomedical knowledge encoded in scientific
literature. To address this gap, we introduce BIOMEDICA, a scalable,
open-source framework to extract, annotate, and serialize the entirety of the
PubMed Central Open Access subset into an easy-to-use, publicly accessible
dataset. Our framework produces a comprehensive archive with over 24 million
unique image-text pairs from over 6 million articles. Metadata and
expert-guided annotations are also provided. We demonstrate the utility and
accessibility of our resource by releasing BMCA-CLIP, a suite of CLIP-style
models continuously pre-trained on the BIOMEDICA dataset via streaming,
eliminating the need to download 27 TB of data locally. On average, our models
achieve state-of-the-art performance across 40 tasks - spanning pathology,
radiology, ophthalmology, dermatology, surgery, molecular biology,
parasitology, and cell biology - excelling in zero-shot classification with a
6.56% average improvement (as high as 29.8% and 17.5% in dermatology and
ophthalmology, respectively), and stronger image-text retrieval, all while
using 10x less compute. To foster reproducibility and collaboration, we release
our codebase and dataset for the broader research community.
|
2501.07172
|
Anomalous Agreement: How to find the Ideal Number of Anomaly Classes in
Correlated, Multivariate Time Series Data
|
cs.LG cs.AI stat.ML
|
Detecting and classifying abnormal system states is critical for condition
monitoring, but supervised methods often fall short due to the rarity of
anomalies and the lack of labeled data. Therefore, clustering is often used to
group similar abnormal behavior. However, evaluating cluster quality without
ground truth is challenging, as existing measures such as the Silhouette Score
(SSC) only evaluate the cohesion and separation of clusters and ignore possible
prior knowledge about the data. To address this challenge, we introduce the
Synchronized Anomaly Agreement Index (SAAI), which exploits the synchronicity
of anomalies across multivariate time series to assess cluster quality. We
demonstrate the effectiveness of SAAI by showing that maximizing SAAI improves
accuracy on the task of finding the true number of anomaly classes K in
correlated time series by 0.23 compared to SSC and by 0.32 compared to X-Means.
We also show that clusters obtained by maximizing SAAI are easier to interpret
compared to SSC.
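SAAI itself is this paper's contribution, but the Silhouette Score baseline it is compared against is standard. A minimal numpy sketch of the mean silhouette coefficient on toy 2-D data (illustrative only, not the authors' code):

```python
import numpy as np

def silhouette(X, labels):
    # Mean silhouette coefficient: s_i = (b_i - a_i) / max(a_i, b_i), where
    # a_i is the mean distance from point i to its own cluster and b_i the
    # smallest mean distance to any other cluster.
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    idx = np.arange(len(X))
    vals = []
    for i, li in enumerate(labels):
        a = D[i, (labels == li) & (idx != i)].mean()
        b = min(D[i, labels == lj].mean() for lj in set(labels.tolist()) if lj != li)
        vals.append((b - a) / max(a, b))
    return float(np.mean(vals))

# Two tight, well-separated clusters: the correct labeling scores near 1,
# while a shuffled labeling scores much lower.
X = np.array([[0, 0], [0, 0.1], [0.1, 0], [5, 5], [5, 5.1], [5.1, 5]])
good = np.array([0, 0, 0, 1, 1, 1])
bad = np.array([0, 1, 0, 1, 0, 1])
```

As the abstract notes, this criterion only measures cohesion and separation; it has no notion of anomaly synchronicity across channels, which is the gap SAAI targets.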
|
2501.07173
|
Knowledge Distillation and Enhanced Subdomain Adaptation Using Graph
Convolutional Network for Resource-Constrained Bearing Fault Diagnosis
|
cs.LG eess.SP
|
Bearing fault diagnosis under varying working conditions faces challenges,
including a lack of labeled data, distribution discrepancies, and resource
constraints. To address these issues, we propose a progressive knowledge
distillation framework that transfers knowledge from a complex teacher model,
utilizing a Graph Convolutional Network (GCN) with Autoregressive moving
average (ARMA) filters, to a compact and efficient student model. To mitigate
distribution discrepancies and labeling uncertainty, we introduce Enhanced
Local Maximum Mean Squared Discrepancy (ELMMSD), which leverages mean and
variance statistics in the Reproducing Kernel Hilbert Space (RKHS) and
incorporates a priori probability distributions between labels. This approach
increases the distance between clustering centers, bridges subdomain gaps, and
enhances subdomain alignment reliability. Experimental results on benchmark
datasets (CWRU and JNU) demonstrate that the proposed method achieves superior
diagnostic accuracy while significantly reducing computational costs.
Comprehensive ablation studies validate the effectiveness of each component,
highlighting the robustness and adaptability of the approach across diverse
working conditions.
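The ELMMSD criterion is specific to this paper, but the knowledge distillation it builds on conventionally uses the soft-target loss of Hinton et al. A generic numpy sketch (not the authors' GCN/ARMA pipeline; the temperature T and mixing weight alpha are illustrative defaults):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft part: KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2; hard part: cross-entropy against the true labels.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = (p_t * (np.log(p_t) - np.log(p_s))).sum(-1).mean() * T**2
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard

logits = np.array([[2.0, 0.5, -1.0], [0.0, 1.0, 2.0]])
labels = np.array([0, 2])
# When the student matches the teacher exactly, the soft term vanishes.
loss_matched = distillation_loss(logits, logits, labels, alpha=1.0)
```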
|
2501.07178
|
The Spoils of Algorithmic Collusion: Profit Allocation Among Asymmetric
Firms
|
econ.GN cs.AI q-fin.EC
|
We study the propensity of independent algorithms to collude in repeated
Cournot duopoly games. Specifically, we investigate the predictive power of
different oligopoly and bargaining solutions regarding the effect of asymmetry
between firms. We find that both consumers and firms can benefit from
asymmetry. Algorithms produce more competitive outcomes when firms are
symmetric, but less so when they are very asymmetric. Although the static Nash
equilibrium underestimates the effect on total quantity and overestimates the
effect on profits, it delivers surprisingly accurate predictions in terms of
total welfare. The best description of our results is provided by the equal
relative gains solution. In particular, we find algorithms to agree on profits
that are on or close to the Pareto frontier for all degrees of asymmetry. Our
results suggest that the common belief that symmetric industries are more prone
to collusion may no longer hold when algorithms increasingly drive managerial
decisions.
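The static Nash benchmark referenced above has a closed form in the textbook linear Cournot duopoly: with inverse demand P = a - b(q1 + q2) and constant marginal costs c_i, the interior equilibrium quantities are q_i* = (a - 2*c_i + c_j) / (3b). A minimal sketch (the linear specification is the standard textbook case, not necessarily the paper's exact setup):

```python
def cournot_nash(a, b, c1, c2):
    # Static Nash equilibrium of a linear Cournot duopoly with inverse
    # demand P = a - b*(q1 + q2) and marginal costs c1, c2 (interior case).
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    price = a - b * (q1 + q2)
    return q1, q2, price, (price - c1) * q1, (price - c2) * q2

# Symmetric benchmark: both firms produce 3 units at a price of 4.
q1, q2, price, pi1, pi2 = cournot_nash(a=10.0, b=1.0, c1=1.0, c2=1.0)
# Asymmetric case: the lower-cost firm produces and earns more.
aq1, aq2, aprice, api1, api2 = cournot_nash(a=10.0, b=1.0, c1=1.0, c2=2.0)
```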
|
2501.07179
|
Radial Distortion in Face Images: Detection and Impact
|
cs.CV
|
Acquiring face images of sufficiently high quality is important for online ID
and travel document issuance applications using face recognition systems (FRS).
Low-quality, manipulated (intentionally or unintentionally), or distorted
images degrade the FRS performance and facilitate documents' misuse. Securing
quality for enrolment images, especially in the unsupervised self-enrolment
scenario via a smartphone, becomes important to assure FRS performance. In this
work, we focus on the less studied area of radial distortion (a.k.a., the
fish-eye effect) in face images and its impact on FRS performance. We introduce
an effective radial distortion detection model that can detect and flag radial
distortion in the enrolment scenario. We formalize the detection model as a
face image quality assessment (FIQA) algorithm and provide a careful inspection
of the effect of radial distortion on FRS performance. Evaluation results show
excellent detection results for the proposed models, and the study on the
impact on FRS uncovers valuable insights into how to best use these models in
operational systems.
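The fish-eye effect studied here is commonly modeled by polynomial radial distortion of normalized image coordinates, x_d = x * (1 + k1 * r^2), with negative k1 giving barrel distortion. A minimal sketch of this generic camera model (not the paper's detection network):

```python
import numpy as np

def radial_distort(points, k1):
    # First-order polynomial radial distortion on normalized coordinates
    # (origin at the principal point): x_d = x * (1 + k1 * r^2).
    # k1 < 0 produces barrel (fish-eye-like) distortion, k1 > 0 pincushion.
    r2 = (points ** 2).sum(axis=-1, keepdims=True)
    return points * (1.0 + k1 * r2)

pts = np.array([[0.0, 0.0], [0.5, 0.5]])
barrel = radial_distort(pts, k1=-0.2)   # off-center points pulled inward
```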
|
2501.07180
|
Evaluating Robotic Approach Techniques for the Insertion of a Straight
Instrument into a Vitreoretinal Surgery Trocar
|
cs.RO cs.HC cs.SY eess.SY
|
Advances in vitreoretinal robotic surgery enable precise techniques for gene
therapies. This study evaluates three robotic approaches using the 7-DoF
robotic arm for docking a micro-precise tool to a trocar: fully co-manipulated,
hybrid co-manipulated/teleoperated, and hybrid with camera assistance. The
fully co-manipulated approach was the fastest but had a 42% success rate.
Hybrid methods showed higher success rates (91.6% and 100%) and completed tasks
within 2 minutes. NASA Task Load Index (TLX) assessments indicated lower
physical demand and effort for hybrid approaches.
|
2501.07182
|
Unveiling Voices: A Co-Hashtag Analysis of TikTok Discourse on the 2023
Israel-Palestine Crisis
|
cs.SI cs.HC
|
TikTok has gradually become one of the most pervasive social media platforms
in our daily lives. In this research article, I explore how users on TikTok
discussed the crisis in Palestine that worsened in 2023. Using network
analysis, I situate keywords representing the conflict and categorize them
thematically based on a coding schema derived from politically and
ideologically differentiable stances. I conclude that activism and
propaganda are contending with each other in the thriving space afforded by
TikTok today.
|
2501.07183
|
Kriging and Gaussian Process Interpolation for Georeferenced Data
Augmentation
|
cs.AI
|
Data augmentation is a crucial step in the development of robust supervised
learning models, especially when dealing with limited datasets. This study
explores interpolation techniques for the augmentation of geo-referenced data,
with the aim of predicting the presence of Commelina benghalensis L. in
sugarcane plots in La Réunion. Given the spatial nature of the data and the
high cost of data collection, we evaluated two interpolation approaches:
Gaussian processes (GPs) with different kernels and kriging with various
variograms. The objectives of this work are threefold: (i) to identify which
interpolation methods offer the best predictive performance for various
regression algorithms, (ii) to analyze the evolution of performance as a
function of the number of observations added, and (iii) to assess the spatial
consistency of augmented datasets. The results show that GP-based methods, in
particular with combined kernels (GP-COMB), significantly improve the
performance of regression algorithms while requiring less additional data.
Although kriging shows slightly lower performance, it is distinguished by a
more homogeneous spatial coverage, a potential advantage in certain contexts.
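The GP posterior mean used for this kind of interpolation coincides with simple kriging under a Gaussian covariance and known zero mean. A minimal numpy sketch with a single RBF kernel (the coordinates and presence scores are made up; the paper's combined-kernel GP-COMB is not reproduced here):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.7, variance=1.0):
    # Squared-exponential (Gaussian) covariance between 2-D coordinates.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_interpolate(X_obs, y_obs, X_new, noise=1e-6):
    # Zero-mean GP posterior mean at X_new (simple kriging with a
    # Gaussian covariance and a small nugget for numerical stability).
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    return rbf_kernel(X_new, X_obs) @ np.linalg.solve(K, y_obs)

# Hypothetical plot coordinates and observed weed-presence scores.
X_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y_obs = np.array([0.9, 0.1, 0.8, 0.2])
X_new = np.array([[0.5, 0.5], [0.0, 0.0]])   # one new site, one observed site

y_new = gp_interpolate(X_obs, y_obs, X_new)
```

With a near-zero nugget the predictor interpolates: at an observed site it returns (almost exactly) the observed value, and at a new site a covariance-weighted blend of its neighbors.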
|
2501.07185
|
Uncertainty Guarantees on Automated Precision Weeding using Conformal
Prediction
|
cs.CV cs.LG stat.AP stat.ML
|
Precision agriculture in general, and precision weeding in particular, have
greatly benefited from the major advancements in deep learning and computer
vision. A large variety of commercial robotic solutions are already available
and deployed. However, the adoption by farmers of such solutions is still low
for many reasons, an important one being the lack of trust in these systems.
This is in great part due to the opaqueness and complexity of deep neural
networks and the manufacturers' inability to provide valid guarantees on their
performance. Conformal prediction, a well-established methodology in the
machine learning community, is an efficient and reliable strategy for providing
trustworthy guarantees on the predictions of any black-box model under very
minimal constraints. Bridging the gap between the safe machine learning and
precision agriculture communities, this article showcases conformal prediction
in action on the task of precision weeding through deep learning-based image
classification. After a detailed presentation of the conformal prediction
methodology and the development of a precision spraying pipeline based on a
"conformalized" neural network and well-defined spraying decision rules, the
article evaluates this pipeline on two real-world scenarios: one under
in-distribution conditions, the other reflecting a near out-of-distribution
setting. The results show that we are able to provide formal, i.e. certifiable,
guarantees on spraying at least 90% of the weeds.
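The split conformal procedure underlying such guarantees is standard: score a held-out calibration set with a nonconformity measure, take a finite-sample-corrected quantile, and keep every class that scores below the threshold. A minimal sketch with synthetic softmax scores (illustrative only, not the paper's spraying pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic softmax outputs of a 3-class classifier; class 0 is the true
# class on every calibration image (a toy stand-in for "weed" images).
n_cal = 200
cal_probs = rng.dirichlet(alpha=[6, 1, 1], size=n_cal)
cal_labels = np.zeros(n_cal, dtype=int)

# Nonconformity score: 1 minus the probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Finite-sample-corrected quantile for 1 - alpha = 90% coverage.
alpha = 0.10
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

# Prediction set for each test image: all classes scoring below the threshold.
test_probs = rng.dirichlet(alpha=[6, 1, 1], size=50)
pred_sets = test_probs >= 1.0 - q          # boolean array, shape (50, 3)
coverage = pred_sets[:, 0].mean()          # how often the true class is kept
```

The coverage guarantee is marginal and distribution-free, which is what allows the article to certify spraying at least 90% of the weeds without assumptions on the underlying network.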
|