| id | title | categories | abstract |
|---|---|---|---|
2501.12668
|
NBDI: A Simple and Efficient Termination Condition for Skill Extraction
from Task-Agnostic Demonstrations
|
cs.LG cs.AI
|
Intelligent agents are able to make decisions based on different levels of
granularity and duration. Recent advances in skill learning have enabled
agents to solve complex, long-horizon tasks by effectively guiding them in
choosing appropriate skills. However, the practice of using fixed-length skills
can easily result in skipping valuable decision points, which ultimately limits
the potential for further exploration and faster policy learning. In this work,
we propose to learn a simple and effective termination condition that
identifies decision points through a state-action novelty module that leverages
agent experience data. Our approach, Novelty-based Decision Point
Identification (NBDI), outperforms previous baselines in complex, long-horizon
tasks, and remains effective even in the presence of significant variations in
the environment configurations of downstream tasks, highlighting the importance
of decision point identification in skill learning.
|
2501.12669
|
Information Design for Adaptive Organizations
|
econ.TH cs.GT cs.SI
|
This paper examines the optimal design of information sharing in
organizations. Organizational performance depends on agents adapting to
uncertain external environments while coordinating their actions, where
coordination incentives and synergies are modeled as graphs (networks). The
equilibrium strategies and the principal's objective function are summarized
using Laplacian matrices of these graphs. I formulate a Bayesian persuasion
problem to determine the optimal public signal and show that it comprises a set
of statistics on local states, necessarily including their average, which
serves as the organizational goal. When the principal benefits equally from the
coordination of any two agents, the choice of disclosed statistics is based on
the Laplacian eigenvectors and eigenvalues of the incentive graph. The
algebraic connectivity (the second smallest Laplacian eigenvalue) determines
the condition for full revelation, while the Laplacian spectral radius (the
largest Laplacian eigenvalue) establishes the condition for minimum
transparency, where only the average state is disclosed.
|
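The spectral quantities named in the abstract above can be computed directly from a graph's Laplacian. A minimal numpy sketch, where the 4-node path graph is a hypothetical stand-in for an incentive graph:

```python
import numpy as np

# Hypothetical 4-node path graph standing in for an incentive graph,
# given as a symmetric adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Graph Laplacian: L = D - A, with D the diagonal degree matrix.
L = np.diag(A.sum(axis=1)) - A

# Eigenvalues sorted ascending; eigvalsh applies since L is symmetric.
eigvals = np.sort(np.linalg.eigvalsh(L))

algebraic_connectivity = eigvals[1]  # second-smallest Laplacian eigenvalue
spectral_radius = eigvals[-1]        # largest Laplacian eigenvalue
print(algebraic_connectivity, spectral_radius)
```

For a connected graph the smallest Laplacian eigenvalue is exactly zero, so the algebraic connectivity (here 2 - sqrt(2) for the path P4) is strictly positive.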
2501.12670
|
Learning Versatile Optimizers on a Compute Diet
|
cs.LG
|
Learned optimization has emerged as a promising alternative to hand-crafted
optimizers, with the potential to discover stronger learned update rules that
enable faster, hyperparameter-free training of neural networks. A critical
element for practically useful learned optimizers, that can be used
off-the-shelf after meta-training, is strong meta-generalization: the ability
to apply the optimizers to new tasks. Recent state-of-the-art work in learned
optimizers, VeLO (Metz et al., 2022), requires a large number of highly diverse
meta-training tasks along with massive computational resources, 4000 TPU
months, to achieve meta-generalization. This makes further improvements to such
learned optimizers impractical. In this work, we identify several key elements
in learned optimizer architectures and meta-training procedures that can lead
to strong meta-generalization. We also propose evaluation metrics to reliably
assess quantitative performance of an optimizer at scale on a set of evaluation
tasks. Our proposed approach, Celo, makes a significant leap in improving the
meta-generalization performance of learned optimizers and also outperforms
tuned state-of-the-art optimizers on a diverse set of out-of-distribution
tasks, despite being meta-trained for just 24 GPU hours.
|
2501.12678
|
Manifold learning and optimization using tangent space proxies
|
cs.LG math.OC
|
We present a framework for efficiently approximating differential-geometric
primitives on arbitrary manifolds via construction of an atlas graph
representation, which leverages the canonical characterization of a manifold as
a finite collection, or atlas, of overlapping coordinate charts. We first show
the utility of this framework in a setting where the manifold is expressed in
closed form, specifically, a runtime advantage, compared with state-of-the-art
approaches, for first-order optimization over the Grassmann manifold. Moreover,
using point cloud data for which a complex manifold structure was previously
established, i.e., high-contrast image patches, we show that an atlas graph
with the correct geometry can be directly learned from the point cloud.
Finally, we demonstrate that learning an atlas graph enables downstream key
machine learning tasks. In particular, we implement a Riemannian generalization
of support vector machines that uses the learned atlas graph to approximate
complex differential-geometric primitives, including Riemannian logarithms and
vector transports. These settings suggest the potential of this framework for
even more complex settings, where ambient dimension and noise levels may be
much higher.
|
2501.12681
|
Can masking background and object reduce static bias for zero-shot
action recognition?
|
cs.CV
|
In this paper, we address the issue of static bias in zero-shot action
recognition. Action recognition models need to represent the action itself, not
the appearance. However, some fully-supervised works show that models often
rely on static appearances, such as the background and objects, rather than
human actions. This issue, known as static bias, has not been investigated
in the zero-shot setting. Although CLIP-based zero-shot models are now common,
it remains unclear if they sufficiently focus on human actions, as CLIP
primarily captures appearance features related to language. In this paper, we
investigate the
influence of static bias in zero-shot action recognition with CLIP-based
models. Our approach involves masking backgrounds, objects, and people
differently during training and validation. Experiments with masking background
show that models depend on background bias as their performance decreases for
Kinetics400. However, for Mimetics, which has a weak background bias, masking
the background leads to improved performance even if the background is masked
during validation. Furthermore, masking both the background and objects in
different colors improves performance for SSv2, which has a strong object bias.
These results suggest that masking the background or objects during training
prevents models from overly depending on static bias and makes them focus more
on human action.
|
2501.12689
|
EchoLM: Accelerating LLM Serving with Real-time Knowledge Distillation
|
cs.LG
|
Large language models (LLMs) have excelled in various applications, yet
serving them at scale is challenging due to their substantial resource demands
and high latency. Our real-world studies reveal that over 60% of user requests
to LLMs have semantically similar counterparts, suggesting the potential for
knowledge sharing among requests. However, naively caching and reusing past
responses leads to large quality degradation. In this paper, we introduce
EchoLM, an in-context caching system that leverages historical requests as
examples to guide response generation, enabling selective offloading of
requests to more efficient LLMs. However, enabling this real-time knowledge
transfer leads to intricate tradeoffs between response quality, latency, and
system throughput at scale. For a new request, EchoLM identifies similar,
high-utility examples and efficiently prepends them to the input for better
response. At scale, EchoLM adaptively routes requests to LLMs of varying
capabilities, accounting for response quality and serving loads. EchoLM employs
a cost-aware cache replay mechanism to improve example quality and coverage
offline, maximizing cache utility and runtime efficiency. Evaluations on
millions of open-source requests demonstrate that EchoLM achieves a
throughput improvement of 1.4-5.9x while reducing latency by 28-71%, without
hurting response quality on average.
|
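The in-context caching idea described above can be sketched as follows; the hashing bag-of-words `embed` and the `InContextCache` prompt template are hypothetical stand-ins for a real embedding model and serving stack, not EchoLM's implementation:

```python
import numpy as np
from collections import Counter

def embed(text, dim=64):
    """Toy hashing bag-of-words embedding; a stand-in for a real
    sentence-embedding model."""
    v = np.zeros(dim)
    for token, count in Counter(text.lower().split()).items():
        v[hash(token) % dim] += count
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

class InContextCache:
    """Store past (request, response) pairs; for a new request, prepend the
    most similar cached examples to the prompt as in-context guidance."""
    def __init__(self, min_sim=0.5):
        self.entries = []  # (embedding, request, response)
        self.min_sim = min_sim

    def add(self, request, response):
        self.entries.append((embed(request), request, response))

    def build_prompt(self, request, k=1):
        e = embed(request)
        scored = sorted(((float(emb @ e), req, resp)
                         for emb, req, resp in self.entries), reverse=True)
        examples = [(req, resp) for sim, req, resp in scored[:k]
                    if sim >= self.min_sim]
        prefix = "".join(f"Q: {req}\nA: {resp}\n\n" for req, resp in examples)
        return prefix + f"Q: {request}\nA:"

cache = InContextCache()
cache.add("what is the capital of france", "Paris")
prompt = cache.build_prompt("what is the capital of france")
print(prompt)
```

A production system would additionally weigh example utility and current serving load when deciding whether the augmented request can be routed to a smaller model.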
2501.12690
|
Growth strategies for arbitrary DAG neural architectures
|
cs.LG cs.AI
|
Deep learning has shown impressive results obtained at the cost of training
huge neural networks. However, the larger the architecture, the higher the
computational, financial, and environmental costs during training and
inference. We aim to reduce both training and inference durations. We focus
on Neural Architecture Growth, which can increase the size of a small model
when needed, directly during training using information from the
backpropagation. We expand existing work and freely grow neural networks in the
form of any Directed Acyclic Graph by reducing expressivity bottlenecks in the
architecture. We explore strategies to reduce excessive computations and steer
network growth toward more parameter-efficient architectures.
|
2501.12697
|
Combining Knowledge Graph and LLMs for Enhanced Zero-shot Visual
Question Answering
|
cs.CV
|
Zero-shot visual question answering (ZS-VQA), an emerging critical research
area, aims to answer visual questions without training samples.
Existing research in ZS-VQA has proposed leveraging knowledge graphs or large
language models (LLMs) as external information sources to help
VQA models comprehend images and questions. However, LLMs often struggle to
accurately interpret specific question meanings. Meanwhile, although
knowledge graphs contain rich entity relationships, it is challenging to
effectively connect entities to individual image content for visual question
answering. In this paper, we propose a novel design that combines knowledge
graphs and LLMs for zero-shot visual question answering. Our approach uses LLMs' powerful
understanding capabilities to accurately interpret image content through a
strategic question search mechanism. Meanwhile, the knowledge graph is used to
expand and connect users' queries to the image content for better visual
question answering. An optimization algorithm is further used to determine the
optimal weights for the loss functions derived from different information
sources, towards a globally optimal set of candidate answers. Experimental
results on two benchmark datasets demonstrate that our model achieves
state-of-the-art (SOTA) performance. Both source code and benchmark data will
be released for public access.
|
2501.12698
|
Training Dialogue Systems by AI Feedback for Improving Overall Dialogue
Impression
|
cs.CL
|
To improve user engagement during conversations with dialogue systems, we
must improve individual dialogue responses and dialogue impressions such as
consistency, personality, and empathy throughout the entire dialogue. While
such dialogue systems have been developing rapidly with the help of large
language models (LLMs), reinforcement learning from AI feedback (RLAIF) has
attracted attention to align LLM-based dialogue models for such dialogue
impressions. In RLAIF, a reward model based on another LLM is used to create a
training signal for an LLM-based dialogue model using zero-shot/few-shot
prompting techniques. However, evaluating an entire dialogue only by prompting
LLMs is challenging. In this study, we used supervised fine-tuning (SFT) of
LLMs to prepare reward models corresponding to 12 metrics related to the
impression of the entire dialogue for evaluating dialogue responses. We tuned our dialogue
models using the reward model signals as feedback to improve the impression of
the system. The results of automatic and human evaluations showed that tuning
the dialogue model using our reward model corresponding to dialogue impression
improved the evaluation of individual metrics and the naturalness of the
dialogue response.
|
2501.12703
|
HEPPO: Hardware-Efficient Proximal Policy Optimization -- A Universal
Pipelined Architecture for Generalized Advantage Estimation
|
cs.AR cs.AI cs.LG
|
This paper introduces HEPPO, an FPGA-based accelerator designed to optimize
the Generalized Advantage Estimation (GAE) stage in Proximal Policy
Optimization (PPO). Unlike previous approaches that focused on trajectory
collection and actor-critic updates, HEPPO addresses GAE's computational
demands with a parallel, pipelined architecture implemented on a single
System-on-Chip (SoC). This design allows for the adaptation of various hardware
accelerators tailored for different PPO phases. A key innovation is our
strategic standardization technique, which combines dynamic reward
standardization and block standardization for values, followed by 8-bit uniform
quantization. This method stabilizes learning, enhances performance, and
manages memory bottlenecks, achieving a 4x reduction in memory usage and a 1.5x
increase in cumulative rewards. We propose a solution on a single SoC device
with programmable logic and embedded processors, delivering throughput orders
of magnitude higher than traditional CPU-GPU systems. Our single-chip solution
minimizes communication latency and throughput bottlenecks, significantly
boosting PPO training efficiency. Experimental results show a 30% increase in
PPO speed and a substantial reduction in memory access time, underscoring
HEPPO's potential for broad applicability in hardware-efficient reinforcement
learning algorithms.
|
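The GAE stage that HEPPO accelerates follows a standard backward recurrence; here is a minimal software reference in numpy (not the paper's pipelined hardware design; the sample rewards and values are made up):

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation via the backward recurrence
    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t),
    A_t     = delta_t + gamma * lam * A_{t+1}."""
    values = np.append(values, last_value)
    advantages = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages

# Made-up three-step trajectory.
adv = gae(np.array([1.0, 1.0, 1.0]), np.array([0.5, 0.5, 0.5]), last_value=0.5)
print(adv)
```

The strictly sequential dependence of `A_t` on `A_{t+1}` is what makes this loop a bottleneck on CPUs/GPUs and a natural target for a pipelined FPGA implementation.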
2501.12705
|
The Marginal Importance of Distortions and Alignment in CASSI systems
|
eess.IV cs.LG physics.comp-ph
|
This paper introduces a differentiable ray-tracing based model that
incorporates aberrations and distortions to render realistic coded
hyperspectral acquisitions using Coded-Aperture Spectral Snapshot Imagers
(CASSI). CASSI systems can now be optimized to simultaneously fulfill
several optical design constraints as well as processing constraints. Four
comparable CASSI systems with varying degrees of optical aberrations have been
designed and modeled. The resulting rendered hyperspectral acquisitions from
each of these systems are combined with five state-of-the-art hyperspectral
cube reconstruction processes. These reconstruction processes encompass a
mapping function created from each system's propagation model to account for
distortions and aberrations during the reconstruction process. Our analyses
show that if properly modeled, the effects of geometric distortions of the
system and misalignments of the dispersive elements have a marginal impact on
the overall quality of the reconstructed hyperspectral data cubes. Therefore,
relaxing traditional constraints on measurement conformity and fidelity to the
scene enables the development of novel imaging instruments, guided by
performance metrics applied to the design or the processing of acquisitions. By
providing a complete framework for design, simulation and evaluation, this work
contributes to the optimization and exploration of new CASSI systems, and more
generally to the computational imaging community.
|
2501.12706
|
REX: Causal Discovery based on Machine Learning and Explainability
techniques
|
cs.LG
|
Explainability techniques hold significant potential for enhancing the causal
discovery process, which is crucial for understanding complex systems in areas
like healthcare, economics, and artificial intelligence. However, no causal
discovery methods currently incorporate explainability into their models to
derive causal graphs. Thus, in this paper we explore this innovative approach,
as it offers substantial potential and represents a promising new direction
worth investigating. Specifically, we introduce REX, a causal discovery method
that leverages machine learning (ML) models coupled with explainability
techniques, specifically Shapley values, to identify and interpret significant
causal relationships among variables.
Comparative evaluations on synthetic datasets comprising continuous tabular
data reveal that REX outperforms state-of-the-art causal discovery methods
across diverse data generation processes, including non-linear and additive
noise models. Moreover, REX was tested on the Sachs single-cell
protein-signaling dataset, achieving a precision of 0.952 and recovering key
causal relationships with no incorrect edges. Taken together, these results
showcase REX's effectiveness in accurately recovering true causal structures
while minimizing false positive predictions, its robustness across diverse
datasets, and its applicability to real-world problems. By combining ML and
explainability techniques with causal discovery, REX bridges the gap between
predictive modeling and causal inference, offering an effective tool for
understanding complex causal structures. REX is publicly available at
https://github.com/renero/causalgraph.
|
2501.12709
|
Practical quantum federated learning and its experimental demonstration
|
quant-ph cs.AI cs.CR cs.DC
|
Federated learning is essential for decentralized, privacy-preserving model
training in the data-driven era. Quantum-enhanced federated learning leverages
quantum resources to address privacy and scalability challenges, offering
security and efficiency advantages beyond classical methods. However, practical
and scalable frameworks addressing privacy concerns in the quantum computing
era remain undeveloped. Here, we propose a practical quantum federated learning
framework on quantum networks, utilizing distributed quantum secret keys to
protect local model updates and enable secure aggregation with
information-theoretic security. We experimentally validate our framework on a
4-client quantum network with a scalable structure. Extensive numerical
experiments on both quantum and classical datasets show that adding a quantum
client significantly enhances the trained global model's ability to classify
multipartite entangled and non-stabilizer quantum datasets. Simulations further
demonstrate scalability to 200 clients with classical models trained on the
MNIST dataset, reducing communication costs by $75\%$ through advanced model
compression techniques and achieving rapid training convergence. Our work
provides critical insights for building scalable, efficient, and quantum-secure
machine learning systems for the coming quantum internet era.
|
2501.12720
|
A systematic data characteristic understanding framework towards
physical-sensor big data challenges
|
cs.IR
|
Big data present new opportunities for modern society while posing challenges
for data scientists. Recent advancements in sensor networks and the widespread
adoption of IoT have led to the collection of physical-sensor data on an
enormous scale. However, significant challenges arise in high-quality big data
analytics. To uncover big data challenges and enhance data quality, it is
essential to quantitatively unveil data characteristics. Furthermore, the
existing studies lack analysis of the specific time-related characteristics.
Enhancing the efficiency and precision of data analytics through the big data
lifecycle requires a comprehensive understanding of data characteristics to
address the hidden big data challenges. To fill in the research gap, this paper
proposes a systematic data characteristic framework based on a 6Vs model. The
framework aims to unveil the data characteristics in terms of data volume,
variety, velocity, veracity, value, and variability through a set of
statistical indicators. This model improves the objectivity of data
characteristic understanding by relying solely on data-driven indicators. The
indicators related to time-related characteristics in physical-sensor data are
also included. Furthermore, the big data challenges are linked to each
dimension of the 6Vs model to gain a quantitative understanding of the data
challenges. Finally, a pipeline is developed to implement the proposed
framework, and two case studies are conducted to illustrate the process of
understanding the physical-sensor data characteristics and making
recommendations for data preprocessing to address the big data challenges. The
proposed framework is able to analyze the characteristics of all
physical-sensor data, thereby identifying potential challenges in subsequent
analytics and providing recommendations for data preprocessing.
|
2501.12723
|
Anomaly Detection in Double-entry Bookkeeping Data by Federated Learning
System with Non-model Sharing Approach
|
cs.LG
|
Anomaly detection is crucial in financial auditing and effective detection
often requires obtaining large volumes of data from multiple organizations.
However, confidentiality concerns hinder data sharing among audit firms.
Although the federated learning (FL)-based approach, FedAvg, has been proposed
to address this challenge, its use of multiple communication rounds increases
its overhead, limiting its practicality. In this study, we propose a novel
framework employing Data Collaboration (DC) analysis -- a non-model share-type
FL method -- to streamline model training into a single communication round.
Our method first encodes journal entry data via dimensionality reduction to
obtain secure intermediate representations, then transforms them into
collaboration representations for building an autoencoder that detects
anomalies. We evaluate our approach on a synthetic dataset and real journal
entry data from multiple organizations. The results show that our method not
only outperforms single-organization baselines but also exceeds FedAvg in
non-i.i.d. experiments on real journal entry data that closely mirror
real-world conditions. By preserving data confidentiality and reducing
iterative communication, this study addresses a key auditing challenge --
ensuring data confidentiality while integrating knowledge from multiple audit
firms. Our findings represent a significant advance in artificial
intelligence-driven auditing and underscore the potential of FL methods in
high-security domains.
|
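A minimal sketch of the reconstruction-error idea behind the anomaly-detection stage, using PCA as a linear stand-in for both the dimensionality-reduction encoding and the autoencoder (synthetic data; this is not the paper's Data Collaboration pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical journal-entry features: inliers lying on a 2-D subspace of R^5,
# plus a few large off-subspace anomalies (all data here is made up).
basis = rng.normal(size=(2, 5))
inliers = rng.normal(size=(200, 2)) @ basis
anomalies = rng.normal(size=(5, 5)) * 5.0
X = np.vstack([inliers, anomalies])

# Linear "autoencoder" via PCA: project onto the top-k principal directions
# and score each row by its reconstruction error.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
recon = Xc @ Vt[:k].T @ Vt[:k]
errors = np.linalg.norm(Xc - recon, axis=1)

# The rows with the largest reconstruction error are flagged as anomalies.
top5 = np.argsort(errors)[-5:]
print(sorted(top5.tolist()))
```

In the paper's setting, the encoding step would additionally serve as the privacy-preserving intermediate representation shared across organizations, with a trained (nonlinear) autoencoder replacing the PCA projection.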
2501.12728
|
A Call for Critically Rethinking and Reforming Data Analysis in
Empirical Software Engineering
|
cs.SE cs.AI cs.DL
|
Context: Empirical Software Engineering (ESE) drives innovation in SE through
qualitative and quantitative studies. However, concerns about the correct
application of empirical methodologies have existed since the 2006 Dagstuhl
seminar on SE. Objective: To analyze three decades of SE research, identify
mistakes in statistical methods, and evaluate experts' ability to detect and
address these issues. Methods: We conducted a literature survey of ~27,000
empirical studies, using LLMs to classify statistical methodologies as adequate
or inadequate. Additionally, we selected 30 primary studies and held a workshop
with 33 ESE experts to assess their ability to identify and resolve statistical
issues. Results: Significant statistical issues were found in the primary
studies, and experts showed limited ability to detect and correct these
methodological problems, raising concerns about the broader ESE community's
proficiency in this area. Conclusions: Despite our study's potential
limitations, its results shed light on recurring issues stemming from the
copy-and-pasting of information from past authors' works and the continued
publication of inadequate approaches that promote dubious results and
jeopardize the spread of correct statistical strategies among researchers.
Moreover, they justify further investigation into empirical rigor in software
engineering to expose these recurring issues and establish a framework for
reassessing our field's foundation of statistical methodology application.
Therefore, this work calls for critically rethinking and reforming data
analysis in empirical software engineering, paving the way for future work.
|
2501.12732
|
GRAMA: Adaptive Graph Autoregressive Moving Average Models
|
cs.LG
|
Graph State Space Models (SSMs) have recently been introduced to enhance
Graph Neural Networks (GNNs) in modeling long-range interactions. Despite their
success, existing methods either compromise on permutation equivariance or
limit their focus to pairwise interactions rather than sequences. Building on
the connection between Autoregressive Moving Average (ARMA) and SSM, in this
paper, we introduce GRAMA, a Graph Adaptive method based on a learnable
Autoregressive Moving Average (ARMA) framework that addresses these
limitations. By transforming from static to sequential graph data, GRAMA
leverages the strengths of the ARMA framework, while preserving permutation
equivariance. Moreover, GRAMA incorporates a selective attention mechanism for
dynamic learning of ARMA coefficients, enabling efficient and flexible
long-range information propagation. We also establish theoretical connections
between GRAMA and Selective SSMs, providing insights into its ability to
capture long-range dependencies. Extensive experiments on 14 synthetic and
real-world datasets demonstrate that GRAMA consistently outperforms backbone
models and performs competitively with state-of-the-art methods.
|
2501.12735
|
Online Preference Alignment for Language Models via Count-based
Exploration
|
cs.LG
|
Reinforcement Learning from Human Feedback (RLHF) has shown great potential
in fine-tuning Large Language Models (LLMs) to align with human preferences.
Existing methods perform preference alignment from a fixed dataset, which can
be limited in data coverage, and the resulting reward model is hard to
generalize in out-of-distribution responses. Thus, online RLHF is more
desirable to empower the LLM to explore outside the support of the initial
dataset by iteratively collecting the prompt-response pairs. In this paper, we
study the fundamental problem in online RLHF, i.e. \emph{how to explore} for
LLMs. We give a theoretical motivation under the linear reward assumption to
show that
an optimistic reward with an upper confidence bound (UCB) term leads to a
provably efficient RLHF policy. Then, we reformulate our objective to direct
preference optimization with an exploration term, where the UCB-term can be
converted to a count-based exploration bonus. We further propose a practical
algorithm, named \emph{Count-based Online Preference Optimization (COPO)},
which leverages a simple coin-flip counting module to estimate the pseudo-count
of a prompt-response pair in previously collected data. COPO encourages LLMs to
balance exploration and preference optimization in an iterative manner, which
enlarges the exploration space and the entire data coverage of iterative LLM
policies. We conduct online RLHF experiments on Zephyr and Llama-3 models. The
results on instruction-following and standard academic benchmarks show that
COPO significantly increases performance.
|
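The count-based bonus described above can be sketched as follows. Note the simplifications: a plain hash-table counter (the hypothetical `CountBonus` class) stands in for the paper's space-efficient coin-flip counting module, and `beta / sqrt(N)` is the standard count-based bonus form:

```python
import math
from collections import Counter

class CountBonus:
    """Count-based exploration bonus r_bonus = beta / sqrt(N(x)) for a
    (prompt, response) pair x. A plain hash-table counter stands in for
    the paper's space-efficient coin-flip counting module."""
    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = Counter()

    def update(self, prompt, response):
        self.counts[(prompt, response)] += 1

    def bonus(self, prompt, response):
        # Unseen pairs get the maximal bonus (treated as count 1 here).
        n = self.counts[(prompt, response)]
        return self.beta / math.sqrt(max(n, 1))

cb = CountBonus(beta=0.5)
cb.update("q1", "a")
cb.update("q1", "a")
cb.update("q1", "b")
print(cb.bonus("q1", "a"), cb.bonus("q1", "b"), cb.bonus("q2", "c"))
```

The bonus decays as a pair is revisited, so rarely seen prompt-response pairs receive a larger optimistic reward, which is what drives exploration beyond the initial data's support.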
2501.12736
|
Bad-PFL: Exploring Backdoor Attacks against Personalized Federated
Learning
|
cs.LG cs.CR cs.CV
|
Data heterogeneity and backdoor attacks rank among the most significant
challenges facing federated learning (FL). For data heterogeneity, personalized
federated learning (PFL) enables each client to maintain a private personalized
model to cater to client-specific knowledge. Meanwhile, vanilla FL has proven
vulnerable to backdoor attacks. However, recent advancements in the PFL
community have demonstrated a potential immunity against such attacks. This
paper explores this intersection further, revealing that existing federated
backdoor attacks fail in PFL because backdoors based on manually designed
triggers struggle to survive in personalized models. To tackle this, we design
Bad-PFL, which
employs features from natural data as our trigger. As long as the model is
trained on natural data, it inevitably embeds the backdoor associated with our
trigger, ensuring its longevity in personalized models. Moreover, our trigger
undergoes mutual reinforcement training with the model, further solidifying the
backdoor's durability and enhancing attack effectiveness. The large-scale
experiments across three benchmark datasets demonstrate the superior
performance of our attack against various PFL methods, even when equipped with
state-of-the-art defense mechanisms.
|
2501.12737
|
Stability and Generalization of Quantum Neural Networks
|
cs.LG stat.ML
|
Quantum neural networks (QNNs) play an important role as an emerging
technology in the rapidly growing field of quantum machine learning. While
their empirical success is evident, the theoretical explorations of QNNs,
particularly their generalization properties, are less developed and primarily
focus on the uniform convergence approach. In this paper, we exploit an
advanced tool in classical learning theory, i.e., algorithmic stability, to
study the generalization of QNNs. We first establish high-probability
generalization bounds for QNNs via uniform stability. Our bounds shed light on
the key factors influencing the generalization performance of QNNs and provide
practical insights into both the design and training processes. We next explore
the generalization of QNNs on near-term noisy intermediate-scale quantum (NISQ)
devices, highlighting the potential benefits of quantum noise. Moreover, we
argue that our previous analysis characterizes worst-case generalization
guarantees, and we establish a refined optimization-dependent generalization
bound for QNNs via on-average stability. Numerical experiments on various
real-world datasets support our theoretical findings.
|
2501.12739
|
Multiscale Training of Convolutional Neural Networks
|
cs.LG
|
Convolutional Neural Networks (CNNs) are the backbone of many deep learning
methods, but optimizing them remains computationally expensive. To address
this, we explore multiscale training frameworks and mathematically identify key
challenges, particularly when dealing with noisy inputs. Our analysis reveals
that in the presence of noise, the gradient of standard CNNs in multiscale
training may fail to converge as the mesh size approaches zero, undermining the
optimization process. This insight drives the development of Mesh-Free
Convolutions (MFCs), which are independent of input scale and avoid the
pitfalls of traditional convolution kernels. We demonstrate that MFCs, with
their robust gradient behavior, ensure convergence even with noisy inputs,
enabling more efficient neural network optimization in multiscale settings. To
validate the generality and effectiveness of our multiscale training approach,
we show that (i) MFCs can theoretically deliver substantial computational
speedups without sacrificing performance in practice, and (ii) standard
convolutions benefit from our multiscale training framework in practice.
|
2501.12746
|
EvidenceMap: Learning Evidence Analysis to Unleash the Power of Small
Language Models for Biomedical Question Answering
|
cs.CL cs.AI
|
When addressing professional questions in the biomedical domain, humans
typically acquire multiple pieces of information as evidence and engage in
multifaceted analysis to provide high-quality answers. Current LLM-based
question answering methods lack a detailed definition and learning process for
evidence analysis, leading to the risk of error propagation and hallucinations
while using evidence. Although increasing the parameter size of LLMs can
alleviate these issues, it also presents challenges in training and deployment
with limited resources. In this study, we propose EvidenceMap, which aims to
enable a tiny pre-trained language model to explicitly learn multiple aspects
of biomedical evidence, including supportive evaluation, logical correlation
and content summarization, thereby latently guiding a small generative model
(around 3B parameters) to provide textual responses. Experimental results
demonstrate that our method, learning evidence analysis by fine-tuning a model
with only 66M parameters, exceeds the RAG method with an 8B LLM by 19.9% and
5.7% in reference-based quality and accuracy, respectively.
|
2501.12747
|
Singular learning coefficients and efficiency in learning theory
|
stat.ML cs.LG math.AG math.ST stat.TH
|
Singular learning models with non-positive Fisher information matrices
include neural networks, reduced-rank regression, Boltzmann machines, normal
mixture models, and others. These models have been widely used in the
development of learning machines. However, theoretical analysis is still in its
early stages. In this paper, we examine learning coefficients, which indicate
the general learning efficiency of deep linear learning models and three-layer
neural network models with ReLU units. Finally, we extend the results to
include the case of the Softmax function.
|
2501.12749
|
Estimating the Conformal Prediction Threshold from Noisy Labels
|
cs.LG cs.AI stat.ML
|
Conformal Prediction (CP) is a method to control prediction uncertainty by
producing a small prediction set, ensuring a predetermined probability that the
true class lies within this set. This is commonly done by defining a score,
based on the model predictions, and setting a threshold on this score using a
validation set. In this study, we address the problem of CP calibration when we
only have access to a validation set with noisy labels. We show how we can
estimate the noise-free conformal threshold based on the noisy labeled data.
Our solution is flexible and can accommodate various modeling assumptions
regarding the label contamination process, without needing any information
about the underlying data distribution or the internal mechanisms of the
machine learning classifier. We develop a coverage guarantee for uniform noise
that is effective even in tasks with a large number of classes. We dub our
approach Noise-Aware Conformal Prediction (NACP) and show on several natural
and medical image classification datasets, including ImageNet, that it
significantly outperforms current noisy label methods and achieves results
comparable to those obtained with a clean validation set.
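For context, the standard split-conformal calibration that NACP extends can be sketched in a few lines. The function names and the 1 - p nonconformity score below are illustrative choices for the clean-label baseline, not the paper's noise-aware estimator:

```python
import numpy as np

def conformal_threshold(scores, alpha=0.1):
    # Split-conformal calibration: `scores` are nonconformity scores of the
    # true class on a (clean) validation set; the finite-sample corrected
    # quantile guarantees >= 1 - alpha marginal coverage.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

def prediction_set(probs, threshold):
    # Include every class whose nonconformity score (here 1 - prob) falls
    # below the calibrated threshold.
    return [k for k, p in enumerate(probs) if 1.0 - p <= threshold]
```

NACP's contribution is recovering this threshold when the validation labels are contaminated, which the sketch above does not attempt.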
|
2501.12751
|
Patent Figure Classification using Large Vision-language Models
|
cs.IR cs.CV cs.LG
|
Patent figure classification facilitates faceted search in patent retrieval
systems, enabling efficient prior art search. Existing approaches have explored
patent figure classification for only a single aspect and for aspects with a
limited number of concepts. In recent years, large vision-language models
(LVLMs) have shown tremendous performance across numerous computer vision
downstream tasks, however, they remain unexplored for patent figure
classification. Our work explores the efficacy of LVLMs in patent figure visual
question answering (VQA) and classification, focusing on zero-shot and few-shot
learning scenarios. For this purpose, we introduce new datasets, PatFigVQA and
PatFigCLS, for fine-tuning and evaluation regarding multiple aspects of patent
figures~(i.e., type, projection, patent class, and objects). For
computationally efficient handling of a large number of classes with LVLMs, we
propose a novel tournament-style classification strategy that leverages a
series of multiple-choice questions. Experimental results and comparisons of
multiple classification approaches based on LVLMs and Convolutional Neural
Networks (CNNs) in few-shot settings show the feasibility of the proposed
approaches.
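The tournament idea can be illustrated generically: `choose` below stands in for whatever multiple-choice oracle (in the paper's setting, an LVLM answering a short prompt) picks the best option from a small group. All names and the group size are hypothetical:

```python
def tournament_classify(classes, choose, group_size=4):
    # Reduce a large label set via a series of short multiple-choice
    # questions: `choose(options)` returns the preferred option in a small
    # group, and group winners advance until a single class remains.
    candidates = list(classes)
    while len(candidates) > 1:
        winners = []
        for i in range(0, len(candidates), group_size):
            group = candidates[i:i + group_size]
            winners.append(group[0] if len(group) == 1 else choose(group))
        candidates = winners
    return candidates[0]
```

With N classes and groups of size g, this asks roughly N/(g-1) short questions instead of one question over an intractably long option list.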
|
2501.12752
|
Indoor Channel Characterization with Extremely Large Reconfigurable
Intelligent Surfaces at $300$ GHz
|
cs.IT cs.ET math.IT
|
The technology of Reconfigurable Intelligent Surfaces (RISs) is lately being
considered as a boosting component for various indoor wireless applications,
enabling wave propagation control and coverage extension. However, the
incorporation of extremely large RISs, as recently being considered for
ultra-high capacity industrial environments at subTHz frequencies, imposes
certain challenges for indoor channel characterization. In particular, such
RISs contribute additional multipath components and their large sizes with
respect to the signal wavelength lead to near-field propagation. To this end,
ray tracing approaches become quite cumbersome and need to be rerun for
different RIS unit cell designs. In this paper, we present a novel approach for
the incorporation of RISs in indoor multipath environments towards their
efficient channel characterization. A $100\times100$ RIS design with $2$-bit
resolution unit cells realizing a fixed anomalous reflection at 300 GHz is
presented, whose radar cross section patterns are obtained via full-wave
simulations. It is showcased that the RIS behavior can be conveniently
approximated by a three-ray model, which can be efficiently incorporated within
available ray tracing tools, and that the far-field approximation is valid for
even very small distances from the RIS.
|
2501.12756
|
A topology optimisation framework to design test specimens for one-shot
identification or discovery of material models
|
cs.CE cond-mat.mtrl-sci
|
The increasing availability of full-field displacement data from imaging
techniques in experimental mechanics is determining a gradual shift in the
paradigm of material model calibration and discovery, from using several
simple-geometry tests towards a few, or even one single test with complicated
geometry. The feasibility of such a "one-shot" calibration or discovery heavily
relies upon the richness of the measured displacement data, i.e., their ability
to probe the space of the state variables and the stress space (whereby the
stresses depend on the constitutive law being sought) to an extent sufficient
for an accurate and robust calibration or discovery process. The richness of
the displacement data is in turn directly governed by the specimen geometry. In
this paper, we propose a density-based topology optimisation framework to
optimally design the geometry of the target specimen for calibration of an
anisotropic elastic material model. To this end, we perform automatic,
high-resolution specimen design by maximising the robustness of the solution of
the inverse problem, i.e., the identified material parameters, given noisy
displacement measurements from digital image correlation. We discuss the choice
of the cost function and the design of the topology optimisation framework, and
we analyse a range of optimised topologies generated for the identification of
isotropic and anisotropic elastic responses.
|
2501.12761
|
Modality Unified Attack for Omni-Modality Person Re-Identification
|
cs.CV cs.LG
|
Deep learning based person re-identification (re-id) models have been widely
employed in surveillance systems. Recent studies have demonstrated that
black-box single-modality and cross-modality re-id models are vulnerable to
adversarial examples (AEs), leaving the robustness of multi-modality re-id
models unexplored. Due to the lack of knowledge about the specific type of
model deployed in the target black-box surveillance system, we aim to generate
modality unified AEs for omni-modality (single-, cross- and multi-modality)
re-id models. Specifically, we propose a novel Modality Unified Attack method
to train modality-specific adversarial generators to generate AEs that
effectively attack different omni-modality models. A multi-modality model is
adopted as the surrogate model, wherein the features of each modality are
perturbed by a metric disruption loss before fusion. To collapse the common
features of omni-modality models, a Cross Modality Simulated Disruption approach
is introduced to mimic cross-modality feature embeddings by intentionally
feeding images to non-corresponding modality-specific subnetworks of the
surrogate model. Moreover, a Multi Modality Collaborative Disruption strategy is
devised to help the attacker comprehensively corrupt the informative content of
person images by leveraging a multi-modality feature collaborative metric
disruption loss. Extensive experiments show that our MUA method can
effectively attack the omni-modality re-id models, achieving 55.9%, 24.4%,
49.0% and 62.7% mean mAP Drop Rate, respectively.
|
2501.12764
|
Grid-based Submap Joining: An Efficient Algorithm for Simultaneously
Optimizing Global Occupancy Map and Local Submap Frames
|
cs.RO
|
Optimizing robot poses and the map simultaneously has been shown to provide
more accurate SLAM results. However, for non-feature based SLAM approaches,
directly optimizing all the robot poses and the whole map will greatly increase
the computational cost, making SLAM problems difficult to solve in large-scale
environments. To solve the 2D non-feature based SLAM problem in large-scale
environments more accurately and efficiently, we propose the grid-based submap
joining method. Specifically, we first formulate the 2D grid-based submap
joining problem as a non-linear least squares (NLLS) form to optimize the
global occupancy map and local submap frames simultaneously. We then prove that
in solving the NLLS problem using Gauss-Newton (GN) method, the increments of
the poses in each iteration are independent of the occupancy values of the
global occupancy map. Based on this property, we propose a pose-only GN
algorithm, equivalent to the full GN method, to solve the NLLS problem. The proposed
submap joining algorithm is very efficient due to the independent property and
the pose-only solution. Evaluations using simulations and publicly available
practical 2D laser datasets confirm the outperformance of our proposed method
compared to the state-of-the-art methods in terms of efficiency and accuracy,
as well as the ability to solve the grid-based SLAM problem in very large-scale
environments.
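The pose-only solver specializes the standard Gauss-Newton iteration for nonlinear least squares. A generic sketch of that iteration, demonstrated on a toy one-parameter curve-fitting problem rather than the paper's submap formulation, is:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=30):
    # Generic Gauss-Newton for min_x ||r(x)||^2: at each step solve the
    # normal equations (J^T J) dx = -J^T r and update x.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Toy 1-parameter problem: fit y = exp(a * t) to data generated with a = 0.5.
t = np.linspace(0.0, 2.0, 10)
y = np.exp(0.5 * t)
a_hat = gauss_newton(lambda x: np.exp(x[0] * t) - y,
                     lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1),
                     [0.1])
```

The paper's independence result means the analogous increment for the poses can be computed without touching the occupancy values, which is what makes the pose-only variant cheap.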
|
2501.12766
|
NExtLong: Toward Effective Long-Context Training without Long Documents
|
cs.CL cs.AI
|
Large language models (LLMs) with extended context windows have made
significant strides, yet training them remains challenging due to the scarcity
of long documents. Existing methods tend to synthesize long-context data but
lack a clear mechanism for reinforcing long-range dependency modeling. To address
this limitation, we propose NExtLong, a novel framework for synthesizing
long-context data through Negative document Extension. NExtLong decomposes a
document into multiple meta-chunks and extends the context by interleaving hard
negative distractors retrieved from pretraining corpora. This approach compels
the model to discriminate long-range dependent context from distracting
content, enhancing its ability to model long-range dependencies. Extensive
experiments demonstrate that NExtLong achieves significant performance
improvements on the HELMET and RULER benchmarks compared to existing
long-context synthesis approaches and leading models, which are trained on
non-synthetic long documents. These findings highlight NExtLong's ability to
reduce reliance on non-synthetic long documents, making it an effective
framework for developing advanced long-context LLMs.
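The negative-extension step described above amounts to interleaving a document's meta-chunks with retrieved distractors. A minimal sketch follows; the retrieval of hard negatives from a pretraining corpus is assumed to happen elsewhere, and the chunking granularity is a free choice:

```python
from itertools import islice

def extend_with_negatives(chunks, distractors, k=2):
    # Interleave each meta-chunk of the source document with k hard-negative
    # distractor chunks, forcing the model to connect dependent content
    # across distracting spans.
    extended, pool = [], iter(distractors)
    for chunk in chunks:
        extended.append(chunk)
        extended.extend(islice(pool, k))
    return extended
```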
|
2501.12769
|
Urban Priority Pass: Fair Signalized Intersection Management Accounting
For Passenger Needs Through Prioritization
|
eess.SY cs.SY
|
Over the past few decades, efforts of road traffic management and practice
have predominantly focused on maximizing system efficiency and mitigating
congestion from a system perspective. This efficiency-driven approach implies
the equal treatment of all vehicles, which often overlooks individual user
experiences, broader social impacts, and the fact that users are heterogeneous
in their urgency and experience different costs when being delayed. Existing
strategies to account for the differences in needs of users in traffic
management cover dedicated transit lanes, prioritization of emergency vehicles,
transit signal prioritization, and economic instruments. Even though
intersections are the major bottleneck for traffic in cities, no dedicated
instrument exists that enables the prioritization of individual drivers at
intersections. The Priority Pass is a reservation-based, economic controller
that expedites entitled vehicles at signalized intersections, without causing
arbitrary delays for non-entitled vehicles and without unduly affecting
transportation efficiency.
The prioritization of vulnerable road users, emergency vehicles, commercial
taxi and delivery drivers, or urgent individuals can enhance road safety, and
achieve social, environmental, and economic goals. A case study in Manhattan
demonstrates the feasibility of individual prioritization (up to 40\% delay
decrease) and quantifies the potential of the Priority Pass to generate social
welfare benefits. A market for prioritization could yield up to \$1 million in
daily revenue for Manhattan and equitably allocate delay reductions to those in
need, rather than to those with high incomes.
|
2501.12770
|
On Tradeoffs in Learning-Augmented Algorithms
|
cs.DS cs.AI cs.LG
|
The field of learning-augmented algorithms has gained significant attention
in recent years. These algorithms, using potentially inaccurate predictions,
must exhibit three key properties: consistency, robustness, and smoothness. In
scenarios where distributional information about predictions is available, a
strong expected performance is required. Typically, the design of these
algorithms involves a natural tradeoff between consistency and robustness, and
previous works aimed to achieve Pareto-optimal tradeoffs for specific problems.
However, in some settings, this comes at the expense of smoothness. This paper
demonstrates that certain problems involve multiple tradeoffs between
consistency, robustness, smoothness, and average performance.
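A canonical example of the consistency-robustness tradeoff the abstract refers to is learning-augmented ski rental (Purohit et al., NeurIPS 2018), where a parameter lambda in (0, 1] interpolates between trusting the prediction (consistency 1 + lambda) and worst-case safety (robustness 1 + 1/lambda). This sketch is of that classic scheme, not of any algorithm in the paper:

```python
import math

def ski_rental_buy_day(b, predicted_days, lam):
    # b = purchase cost (renting costs 1/day); lam trades off consistency
    # against robustness.
    if predicted_days >= b:
        return math.ceil(lam * b)   # trust "long trip": buy early
    return math.ceil(b / lam)       # trust "short trip": buy late

def total_cost(b, actual_days, buy_day):
    # Rent each day before buy_day; pay b on buy_day if still skiing.
    return actual_days if actual_days < buy_day else (buy_day - 1) + b
```

For b = 10, a correct long-trip prediction, and lam = 0.5, the algorithm buys on day 5 and pays 14 versus the optimal 10, within the 1 + lam = 1.5 consistency bound.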
|
2501.12771
|
Non-adaptive Learning of Random Hypergraphs with Queries
|
cs.IT cs.DM cs.DS cs.LG math.IT stat.ML
|
We study the problem of learning a hidden hypergraph $G=(V,E)$ by making a
single batch of queries (non-adaptively). We consider the hyperedge detection
model, in which every query must be of the form:
``Does this set $S\subseteq V$ contain at least one full hyperedge?''
In this model, it is known that no algorithm can non-adaptively learn
arbitrary hypergraphs using fewer than $\Omega(\min\{m^2\log n, n^2\})$
queries, even when the hypergraph is constrained to be $2$-uniform (i.e., the
hypergraph is simply a graph). Recently, Li et al.
overcame this lower bound in the setting in which $G$ is a graph by assuming
that the graph learned is sampled from an Erd\H{o}s-R\'enyi model. We
generalize the result of Li et al. to the setting of random $k$-uniform
hypergraphs. To achieve this result, we leverage a novel equivalence between
the problem of learning a single hyperedge and the standard group testing
problem. This latter result may also be of independent interest.
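The group-testing problem targeted by the reduction admits a simple non-adaptive decoder, COMP: rule out every item that appears in a pool testing negative, and declare everything else defective. A sketch, with the pool design left to the caller:

```python
def comp_decode(pools, outcomes, n):
    # COMP decoding over items 0..n-1: any item in a negative pool cannot be
    # defective; the survivors are declared defective (the members of the
    # hidden hyperedge, under the abstract's reduction).
    ruled_out = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            ruled_out.update(pool)
    return sorted(set(range(n)) - ruled_out)
```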
|
2501.12773
|
Low-Complexity Channel Estimation for RIS-Assisted Multi-User Wireless
Communications
|
cs.IT eess.SP math.IT
|
Reconfigurable intelligent surfaces (RISs) are eminently suitable for
improving the reliability of wireless communications by jointly designing the
active beamforming at the base station (BS) and the passive beamforming at the
RIS. Therefore, the accuracy of channel estimation is crucial for RIS-aided
systems. The challenge is that only the cascaded two-hop channel spanning from
the user equipments (UEs) to the RIS and from the RIS to the BS can be
estimated, due to the lack of active radio frequency (RF) chains at RIS
elements, which leads to high pilot overhead. In this paper, we propose a
low-overhead linear minimum mean square error (LMMSE) channel estimation method
by exploiting the spatial correlation of channel links, which strikes a
trade-off between the pilot overhead and the channel estimation accuracy.
Moreover, we calculate the theoretical normalized mean square error (MSE) for
our channel estimation method. Finally, we verify numerically that the proposed
LMMSE estimator has lower MSE than the state-of-the-art (SoA) grouping based
estimators.
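The LMMSE estimator family underlying the method has a compact generic form. This sketch shows only the textbook estimator; the paper's contribution, exploiting RIS spatial correlation to cut pilot overhead, determines how the covariance matrices are constructed:

```python
import numpy as np

def lmmse_estimate(y, C_hy, C_yy, mean_h=None, mean_y=None):
    # Textbook LMMSE: h_hat = m_h + C_hy C_yy^{-1} (y - m_y), where C_hy is
    # the cross-covariance of the channel h and the observation y.
    mean_h = np.zeros(C_hy.shape[0]) if mean_h is None else mean_h
    mean_y = np.zeros(C_yy.shape[0]) if mean_y is None else mean_y
    return mean_h + C_hy @ np.linalg.solve(C_yy, y - mean_y)
```

In the scalar case y = h + n with unit-variance zero-mean h and n, this reduces to h_hat = y/2, the familiar Wiener shrinkage.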
|
2501.12774
|
LLMs as Repositories of Factual Knowledge: Limitations and Solutions
|
cs.CL
|
LLMs' sources of knowledge are data snapshots containing factual information
about entities collected at different timestamps and from different media types
(e.g. wikis, social media, etc.). Such unstructured knowledge is subject to
change due to updates through time from past to present. Equally important are
the inconsistencies and inaccuracies occurring in different information
sources. Consequently, the model's knowledge about an entity may be perturbed
while training over the sequence of snapshots or at inference time, resulting
in inconsistent and inaccurate model performance. In this work, we study the
appropriateness of Large Language Models (LLMs) as repositories of factual
knowledge. We consider twenty-four state-of-the-art LLMs that are either
closed-, partially (weights), or fully (weight and training data) open-source.
We evaluate their reliability in responding to time-sensitive factual questions
in terms of accuracy and consistency when prompts are perturbed. We further
evaluate the effectiveness of state-of-the-art methods to improve LLMs'
accuracy and consistency. We then propose "ENtity-Aware Fine-tuning" (ENAF), a
soft neurosymbolic approach aimed at providing a structured representation of
entities during fine-tuning to improve the model's performance.
|
2501.12775
|
Regularization, Semi-supervision, and Supervision for a Plausible
Attention-Based Explanation
|
cs.CL
|
The attention mechanism contributes to the majority of recent advances in
machine learning for natural language processing. It also yields an attention
map that shows the proportional influence of each input on the model's
decision. Empirical studies postulate that attention maps can serve as an
explanation for model output. However, it remains questionable whether this
explanation helps regular people understand and accept the model output (the
plausibility of the explanation). Recent studies show that attention weights in
RNN encoders are hardly plausible because they are spread over the input
tokens. We thus propose three additional constraints on the learning objective
function to improve the plausibility of the attention map: regularization to
increase the attention weight sparsity, semi-supervision to supervise the map
by a heuristic and supervision by human annotation. Results show that all
techniques can improve the attention map plausibility at some level. We also
observe that specific instructions for human annotation might have a negative
effect on classification performance. Beyond the attention map, experiments on
text classification tasks also show that, regardless of which constraint brings
the gain, the contextualization layer plays a crucial role in providing the
right space for finding plausible tokens.
|
2501.12776
|
Data re-uploading in Quantum Machine Learning for time series:
application to traffic forecasting
|
quant-ph cs.AI cs.LG cs.NE
|
Accurate traffic forecasting plays a crucial role in modern Intelligent
Transportation Systems (ITS), as it enables real-time traffic flow management,
reduces congestion, and improves the overall efficiency of urban transportation
networks. With the rise of Quantum Machine Learning (QML), a new paradigm has
emerged with the potential to enhance predictive capabilities beyond what
classical machine learning models can achieve. In the present work we
pursue a heuristic approach to explore the potential of QML, and focus on a
specific transport issue. In particular, as a case study we investigate a
traffic forecast task for a major urban area in Athens (Greece), for which we
possess high-resolution data. In this endeavor we explore the application of
Quantum Neural Networks (QNN), and, notably, we present the first application
of quantum data re-uploading in the context of transport forecasting. This
technique allows quantum models to better capture complex patterns, such as
traffic dynamics, by repeatedly encoding classical data into a quantum state.
Aside from providing a prediction model, we devote considerable effort to
comparing the performance of our hybrid quantum-classical neural networks with
classical deep learning approaches. Our results show that hybrid models achieve
competitive accuracy with state-of-the-art classical methods, especially when
the number of qubits and re-uploading blocks is increased. While the classical
models demonstrate lower computational demands, we provide evidence that
increasing the complexity of the quantum model improves predictive accuracy.
These findings indicate that QML techniques, and specifically the data
re-uploading approach, hold promise for advancing traffic forecasting models
and could be instrumental in addressing challenges inherent in ITS
environments.
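Data re-uploading can be illustrated with a classically simulated single-qubit circuit in which the input is re-encoded in every layer, in the style of Perez-Salinas et al. This is a sketch of the encoding idea only, not the paper's hybrid QNN; the layer parameterization RY(w*x + b) is an illustrative choice:

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def reupload_output(x, weights, biases):
    # Re-encode the scalar input x in every layer as RY(w_l * x + b_l),
    # starting from |0>; return P(measure |0>), usable as a class score.
    state = np.array([1.0, 0.0])
    for w, b in zip(weights, biases):
        state = ry(w * x + b) @ state
    return float(np.abs(state[0]) ** 2)
```

Repeating the encoding across layers is what lets even a single qubit represent highly nonlinear functions of x, which is the property the abstract appeals to for traffic dynamics.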
|
2501.12785
|
On Generalization and Distributional Update for Mimicking Observations
with Adequate Exploration
|
stat.ML cs.LG
|
This paper tackles the efficiency and stability issues in learning from
observations (LfO). We commence by investigating how reward functions and
policies generalize in LfO. Subsequently, the built-in reinforcement learning
(RL) approach in generative adversarial imitation from observation (GAIfO) is
replaced with distributional soft actor-critic (DSAC). This change results in a
novel algorithm called Mimicking Observations through Distributional Update
Learning with adequate Exploration (MODULE), which combines soft actor-critic's
superior efficiency with distributional RL's robust stability.
|
2501.12789
|
Generating Diverse Q&A Benchmarks for RAG Evaluation with DataMorgana
|
cs.CL cs.IR
|
Evaluating Retrieval-Augmented Generation (RAG) systems, especially in
domain-specific contexts, requires benchmarks that address the distinctive
requirements of the applicative scenario. Since real data can be hard to
obtain, a common strategy is to use LLM-based methods to generate synthetic
data. Existing solutions are general purpose: given a document, they generate a
question to build a Q&A pair. However, although the generated questions can be
individually good, they are typically not diverse enough to reasonably cover
the different ways real end-users can interact with the RAG system. We
introduce here DataMorgana, a tool for generating highly customizable and
diverse synthetic Q&A benchmarks tailored to RAG applications. DataMorgana
enables detailed configurations of user and question categories and provides
control over their distribution within the benchmark. It uses a lightweight
two-stage process, ensuring efficiency and fast iterations, while generating
benchmarks that reflect the expected traffic. We conduct a thorough line of
experiments, showing quantitatively and qualitatively that DataMorgana
surpasses existing tools and approaches in producing lexically, syntactically,
and semantically diverse question sets across domain-specific and
general-knowledge corpora. DataMorgana will be made available to selected teams
in the research community, as first beta testers, in the context of the
upcoming SIGIR'2025 LiveRAG challenge to be announced in early February 2025.
|
2501.12793
|
Revisit Self-Debugging with Self-Generated Tests for Code Generation
|
cs.SE cs.AI
|
Large language models (LLMs) have shown significant advancements in code
generation, but still face challenges on tasks beyond their basic capabilities.
Recently, the notion of self-debugging has been proposed to boost the
performance of code generation by leveraging execution feedback from tests.
Despite its promise, the availability of high-quality tests in real-world
scenarios is limited. In this context, self-debugging with self-generated tests
is a promising solution but lacks a full exploration of its limitations and
practical potential. Therefore, we investigate its efficacy on diverse
programming problems. To deepen our understanding, we propose two distinct
paradigms for the process: post-execution and in-execution self-debugging.
Within the scope of self-contained Python programming tasks, we find that
post-execution self-debugging struggles on basic problems but shows potential
for improvement on competitive ones, due to the bias introduced by
self-generated tests. On the other hand, in-execution self-debugging enables
LLMs to mitigate the bias by solely leveraging intermediate states during
execution, thereby enhancing code generation.
|
2501.12794
|
Generation of Standardized E-Learning Contents from Digital Medical
Collections
|
cs.CL
|
In this paper, we describe an approach to transforming the huge amount of
medical knowledge available in existing online medical collections into
standardized learning packages ready to be integrated into the most popular
e-learning platforms. The core of our approach is a tool called Clavy, which
makes it possible to retrieve pieces of content in medical collections, to
transform this content into meaningful learning units, and to export it in the
form of standardized learning packages. In addition to describing the approach,
we demonstrate its feasibility by applying it to the generation of IMS content
packages from MedPix, a popular online database of medical cases in the domain
of radiology.
|
2501.12796
|
Hybrid Losses for Hierarchical Embedding Learning
|
cs.SD cs.IR cs.LG eess.AS
|
In traditional supervised learning, the cross-entropy loss treats all
incorrect predictions equally, ignoring the relevance or proximity of wrong
labels to the correct answer. By leveraging a tree hierarchy for fine-grained
labels, we investigate hybrid losses, such as generalised triplet and
cross-entropy losses, to enforce similarity between labels within a multi-task
learning framework. We propose metrics to evaluate the embedding space
structure and assess the model's ability to generalise to unseen classes, that
is, to infer similar classes for data belonging to unseen categories. Our
experiments on OrchideaSOL, a four-level hierarchical instrument sound dataset
with nearly 200 detailed categories, demonstrate that the proposed hybrid
losses outperform previous works in classification, retrieval, embedding space
structure, and generalisation.
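A flat (non-hierarchical) version of such a hybrid objective is simply a weighted sum of cross-entropy and triplet losses. The mixing weight `alpha` below is a hypothetical hyperparameter, and the paper's tree-hierarchy generalisation is not reproduced here:

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for one example.
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge on the gap between anchor-positive and anchor-negative distances.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return float(max(0.0, d_pos - d_neg + margin))

def hybrid_loss(logits, label, anchor, positive, negative, alpha=0.5):
    # Weighted sum of the classification and metric-learning objectives.
    return alpha * cross_entropy(logits, label) + \
        (1 - alpha) * triplet_loss(anchor, positive, negative)
```

The metric term shapes the embedding space so that nearby labels in the hierarchy land close together, while the cross-entropy term preserves fine-grained classification accuracy.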
|
2501.12799
|
Int2Planner: An Intention-based Multi-modal Motion Planner for
Integrated Prediction and Planning
|
cs.RO
|
Motion planning is a critical module in autonomous driving, with the primary
challenge of uncertainty caused by interactions with other participants. As
most previous methods treat prediction and planning as separate tasks, it is
difficult to model these interactions. Furthermore, since the route path
navigates ego vehicles to a predefined destination, it provides relatively
stable intentions for ego vehicles and helps constrain uncertainty. On this
basis, we construct Int2Planner, an \textbf{Int}ention-based
\textbf{Int}egrated motion \textbf{Planner} that achieves multi-modal planning and
prediction. Instead of static intention points, Int2Planner utilizes route
intention points for ego vehicles and generates corresponding planning
trajectories for each intention point to facilitate multi-modal planning. The
experiments on the private dataset and the public nuPlan benchmark show the
effectiveness of route intention points, and Int2Planner achieves
state-of-the-art performance. We also deploy it in real-world vehicles and have
conducted autonomous driving for hundreds of kilometers in urban areas. It
further verifies that Int2Planner can continuously interact with the traffic
environment. Code will be available at https://github.com/cxlz/Int2Planner.
|
2501.12810
|
Machine Learning Modeling for Multi-order Human Visual Motion Processing
|
cs.CV cs.AI cs.LG
|
Our research aims to develop machines that learn to perceive visual motion as
do humans. While recent advances in computer vision (CV) have enabled DNN-based
models to accurately estimate optical flow in naturalistic images, a
significant disparity remains between CV models and the biological visual
system in both architecture and behavior. This disparity includes humans'
ability to perceive the motion of higher-order image features (second-order
motion), which many CV models fail to capture because of their reliance on the
intensity conservation law. Our model architecture mimics the cortical V1-MT
motion processing pathway, utilizing a trainable motion energy sensor bank and
a recurrent graph network. Supervised learning employing diverse naturalistic
videos allows the model to replicate psychophysical and physiological findings
about first-order (luminance-based) motion perception. For second-order motion,
inspired by neuroscientific findings, the model includes an additional sensing
pathway with nonlinear preprocessing before motion energy sensing, implemented
using a simple multilayer 3D CNN block. When exploring how the brain acquired
the ability to perceive second-order motion in natural environments, in which
pure second-order signals are rare, we hypothesized that second-order
mechanisms were critical when estimating robust object motion amidst optical
fluctuations, such as highlights on glossy surfaces. We trained our
dual-pathway model on novel motion datasets with varying material properties of
moving objects. We found that training to estimate object motion from
non-Lambertian materials naturally endowed the model with the capacity to
perceive second-order motion, as can humans. The resulting model effectively
aligns with biological systems while generalizing to both first- and
second-order motion phenomena in natural scenes.
|
2501.12811
|
Unveiling Zero-Space Detection: A Novel Framework for Autonomous
Ransomware Identification in High-Velocity Environments
|
cs.CR cs.AI
|
Modern cybersecurity landscapes increasingly demand sophisticated detection
frameworks capable of identifying evolving threats with precision and
adaptability. The proposed Zero-Space Detection framework introduces a novel
approach that dynamically identifies latent behavioral patterns through
unsupervised clustering and advanced deep learning techniques. Designed to
address the limitations of signature-based and heuristic methods, it operates
effectively in high-velocity environments by integrating multi-phase filtering
and ensemble learning for refined decision-making. Experimental evaluation
reveals high detection rates across diverse ransomware families, including
LockBit, Conti, REvil, and BlackMatter, while maintaining low false positive
rates and scalable performance. Computational overhead remains minimal, with
average processing times ensuring compatibility with real-time systems even
under peak operational loads. The framework demonstrates resilience against
adversarial strategies such as obfuscation and encryption speed variability,
which frequently challenge conventional detection systems. Analysis across
multiple data sources highlights its versatility in handling diverse file types
and operational contexts. Comprehensive metrics, including detection
probability, latency, and resource efficiency, validate its efficacy under
real-world conditions. Through its modular architecture, the framework achieves
seamless integration with existing cybersecurity infrastructures without
significant reconfiguration. The results demonstrate its robustness and
scalability, offering a transformative paradigm for ransomware identification
in dynamic and resource-constrained environments.
|
2501.12812
|
PSGSL: A Probabilistic Framework Integrating Semantic Scene
Understanding and Gas Sensing for Gas Source Localization
|
cs.RO
|
Semantic scene understanding allows a robotic agent to reason about problems
in complex ways, using information from multiple and varied sensors to make
deductions about a particular matter. As a result, this form of intelligent
robotics is capable of performing more complex tasks and achieving more precise
results than simpler approaches based on single data sources. However, these
improved capabilities come at the cost of higher complexity, both computational
and in terms of design. Due to the increased design complexity, formal
approaches for exploiting semantic understanding become necessary.
We present here a probabilistic formulation for integrating semantic
knowledge into the process of gas source localization (GSL). The problem of GSL
poses many unsolved challenges, and proposed solutions need to contend with the
constraining limitations of sensing hardware. By exploiting semantic scene
understanding, we can leverage other sources of information, such as vision, to
improve the estimation of the source location. We show how our formulation can
be applied to pre-existing GSL algorithms and the effect that including
semantic data has on the produced estimations of the location of the source.
|
2501.12815
|
Certified Guidance for Planning with Deep Generative Models
|
cs.LG stat.ML
|
Deep generative models, such as generative adversarial networks and diffusion
models, have recently emerged as powerful tools for planning tasks and behavior
synthesis in autonomous systems. Various guidance strategies have been
introduced to steer the generative process toward outputs that are more likely
to satisfy the planning objectives. These strategies avoid the need for model
retraining but do not provide any guarantee that the generated outputs will
satisfy the desired planning objectives. To address this limitation, we
introduce certified guidance, an approach that modifies a generative model,
without retraining it, into a new model guaranteed to satisfy a given
specification with probability one. We focus on Signal Temporal Logic
specifications, which are rich enough to describe nontrivial planning tasks.
Our approach leverages neural network verification techniques to systematically
explore the latent spaces of the generative models, identifying latent regions
that are certifiably correct with respect to the STL property of interest. We
evaluate the effectiveness of our method on four planning benchmarks using GANs
and diffusion models. Our results confirm that certified guidance produces
generative models that are always correct, unlike existing guidance methods
that are not certified.
|
2501.12823
|
To Measure or Not: A Cost-Sensitive, Selective Measuring Environment for
Agricultural Management Decisions with Reinforcement Learning
|
cs.LG cs.AI
|
Farmers rely on in-field observations to make well-informed crop management
decisions to maximize profit and minimize adverse environmental impact.
However, obtaining real-world crop state measurements is labor-intensive,
time-consuming and expensive. In most cases, it is not feasible to gather crop
state measurements before every decision moment. Moreover, in previous research
pertaining to farm management optimization, these observations are often
assumed to be readily available without any cost, which is unrealistic. Hence,
enabling optimization without the need to have temporally complete crop state
observations is important. An approach to that problem is to include measuring
as part of decision making. As a solution, we apply reinforcement learning (RL)
to recommend opportune moments to simultaneously measure crop features and
apply nitrogen fertilizer. With realistic considerations, we design an RL
environment with explicit crop feature measuring costs. While balancing costs,
we find that an RL agent, trained with recurrent PPO, discovers adaptive
measuring policies that follow critical crop development stages, with results
aligned with what domain experts would consider a sensible approach. Our results
highlight the importance of measuring when crop feature measurements are not
readily available.
|
2501.12824
|
Enhancing Monocular Depth Estimation with Multi-Source Auxiliary Tasks
|
cs.CV
|
Monocular depth estimation (MDE) is a challenging task in computer vision,
often hindered by the cost and scarcity of high-quality labeled datasets. We
tackle this challenge using auxiliary datasets from related vision tasks for an
alternating training scheme with a shared decoder built on top of a pre-trained
vision foundation model, while giving a higher weight to MDE. Through extensive
experiments we demonstrate the benefits of incorporating various in-domain
auxiliary datasets and tasks to improve MDE quality on average by ~11%. Our
experimental analysis shows that auxiliary tasks have different impacts,
confirming the importance of task selection, highlighting that quality gains
are not achieved by merely adding data. Remarkably, our study reveals that
using semantic segmentation datasets as Multi-Label Dense Classification (MLDC)
often results in additional quality gains. Lastly, our method significantly
improves the data efficiency for the considered MDE datasets, enhancing their
quality while reducing their size by at least 80%. This paves the way for using
auxiliary data from related tasks to improve MDE quality despite limited
availability of high-quality labeled data. Code is available at
https://jugit.fz-juelich.de/ias-8/mdeaux.
|
2501.12826
|
Open or Closed LLM for Lesser-Resourced Languages? Lessons from Greek
|
cs.CL cs.AI cs.LG
|
Natural Language Processing (NLP) for lesser-resourced languages faces
persistent challenges, including limited datasets, inherited biases from
high-resource languages, and the need for domain-specific solutions. This study
addresses these gaps for Modern Greek through three key contributions. First,
we evaluate the performance of open-source (Llama-70b) and closed-source
(GPT-4o mini) large language models (LLMs) on seven core NLP tasks with dataset
availability, revealing task-specific strengths, weaknesses, and parity in
their performance. Second, we expand the scope of Greek NLP by reframing
Authorship Attribution as a tool to assess potential data usage by LLMs in
pre-training, with high zero-shot accuracy suggesting ethical implications for
data provenance. Third, we showcase a legal NLP case study, where a Summarize,
Translate, and Embed (STE) methodology outperforms the traditional TF-IDF
approach for clustering \emph{long} legal texts. Together, these contributions
provide a roadmap to advance NLP in lesser-resourced languages, bridging gaps
in model evaluation, task innovation, and real-world impact.
|
2501.12829
|
A transformer-based deep q learning approach for dynamic load balancing
in software-defined networks
|
cs.NI cs.AI cs.ET cs.LG cs.MA
|
This study proposes a novel approach for dynamic load balancing in
Software-Defined Networks (SDNs) using a Transformer-based Deep Q-Network
(DQN). Traditional load balancing mechanisms, such as Round Robin (RR) and
Weighted Round Robin (WRR), are static and often struggle to adapt to
fluctuating traffic conditions, leading to inefficiencies in network
performance. In contrast, SDNs offer centralized control and flexibility,
providing an ideal platform for implementing machine learning-driven
optimization strategies. The core of this research combines a Temporal Fusion
Transformer (TFT) for accurate traffic prediction with a DQN model to perform
real-time dynamic load balancing. The TFT model predicts future traffic loads,
which the DQN uses as input, allowing it to make intelligent routing decisions
that optimize throughput, minimize latency, and reduce packet loss. The
proposed model was tested against RR and WRR in simulated environments with
varying data rates, and the results demonstrate significant improvements in
network performance. For the 500MB data rate, the DQN model achieved an average
throughput of 0.275 compared to 0.202 and 0.205 for RR and WRR, respectively.
Additionally, the DQN recorded lower average latency and packet loss. In the
1000MB simulation, the DQN model outperformed the traditional methods in
throughput, latency, and packet loss, reinforcing its effectiveness in managing
network loads dynamically. This research presents an important step towards
enhancing network performance through the integration of machine learning
models within SDNs, potentially paving the way for more adaptive, intelligent
network management systems.
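The RL component described above rests on the standard Q-learning update. A toy tabular sketch (a generic illustration of learning routing decisions from load-based rewards, not the paper's Transformer-based DQN; all parameters are illustrative) might look like:

```python
import random

def train_load_balancer(n_servers=3, episodes=2000, alpha=0.1,
                        gamma=0.9, epsilon=0.1, seed=0):
    """Toy tabular Q-learning for load balancing: the state is the index
    of the currently busiest server, the action is the server chosen for
    the next request, and the reward penalizes routing to loaded servers."""
    rng = random.Random(seed)
    q = [[0.0] * n_servers for _ in range(n_servers)]  # Q[state][action]
    loads = [0.0] * n_servers
    state = 0
    for _ in range(episodes):
        if rng.random() < epsilon:                     # epsilon-greedy exploration
            action = rng.randrange(n_servers)
        else:
            action = max(range(n_servers), key=lambda a: q[state][a])
        loads[action] += 1.0
        for i in range(n_servers):                     # loads drain over time
            loads[i] *= 0.9
        reward = -loads[action]                        # discourage piling onto busy servers
        next_state = max(range(n_servers), key=lambda i: loads[i])
        best_next = max(q[next_state])
        q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
        state = next_state
    return q
```

Inspecting the learned Q-table shows which server the greedy policy prefers in each state; the paper replaces the table with a deep network and the synthetic load dynamics with TFT traffic predictions.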
|
2501.12830
|
Orbit-Attitude Predictive Control in the Vicinity of Asteroids with In
Situ Gravity Estimation
|
eess.SY cs.SY
|
This paper presents an integrated model-learning predictive control scheme
for spacecraft orbit-attitude station-keeping in the vicinity of asteroids. The
orbiting probe relies on optical and laser navigation while attitude
measurements are provided by star trackers and gyroscopes. The asteroid gravity
field inhomogeneities are assumed to be unknown a priori. The state and gravity
model parameters are estimated simultaneously using an unscented Kalman filter.
The proposed gravity model identification enables the application of a
learning-based predictive control methodology. The predictive control allows
for a high degree of accuracy because the predicted model is progressively
identified in situ. Consequently, the tracking errors decrease over time as the
model accuracy increases. Finally, a constellation mission concept is analyzed
in order to speed up the model identification process. Numerical results are
shown and discussed.
|
2501.12832
|
FDG-Diff: Frequency-Domain-Guided Diffusion Framework for Compressed
Hazy Image Restoration
|
eess.IV cs.CV
|
In this study, we reveal that the interaction between haze degradation and
JPEG compression introduces complex joint loss effects, which significantly
complicate image restoration. Existing dehazing models often neglect
compression effects, which limits their effectiveness in practical
applications. To address these challenges, we introduce three key
contributions. First, we design FDG-Diff, a novel frequency-domain-guided
dehazing framework that improves JPEG image restoration by leveraging
frequency-domain information. Second, we introduce the High-Frequency
Compensation Module (HFCM), which enhances spatial-domain detail restoration by
incorporating frequency-domain augmentation techniques into a diffusion-based
restoration framework. Lastly, the introduction of the Degradation-Aware
Denoising Timestep Predictor (DADTP) module further enhances restoration
quality by enabling adaptive region-specific restoration, effectively
addressing regional degradation inconsistencies in compressed hazy images.
Experimental results across multiple compressed dehazing datasets demonstrate
that our method consistently outperforms the latest state-of-the-art
approaches. Code is available at https://github.com/SYSUzrc/FDG-Diff.
|
2501.12833
|
A coupled FE-BE multi-scale method for the dynamics of jointed
structures
|
eess.SY cs.SY
|
The damping of built-up structures stems largely from the microscopic dry
frictional interactions in the contact interfaces. The accurate prediction of
friction damping has been an important scientific aim of the past several
decades. Recent research indicates that very good agreement with vibration
measurements is to be expected if the actual contact surface topography is
sufficiently well known and finely resolved, and frictional-unilateral
interactions are modeled in terms of the Coulomb-Signorini conditions.
Resolving all relevant length scales in one finite element model leads to
enormous or even prohibitive computation effort and regularization of the
set-valued contact laws might be needed to ensure numerical stability. In this
work, we propose a multi-scale approach: The stress and deformation field in
the contact region is modeled using elastic half-space theory, implemented on a
regular and fine grid of boundary elements (BE), so that the compliance matrix
can be expressed in closed form. The vibration behavior of the remaining region
is described using a relatively coarse finite element (FE) model, which is
further reduced via component mode synthesis. The two models are coupled by
enforcing compatibility and equilibrium conditions in the far field. The
set-valued Coulomb-Signorini conditions are enforced robustly and efficiently
using a projected over-relaxation scheme in conjunction with an appropriate
active-set strategy. For the S4 beam benchmark, very good agreement with regard
to the amplitude-dependent frequency and damping ratio of the first few modes
is achieved, while the computation effort is reduced by several orders of
magnitude compared to the full-FE reference. The proposed multi-scale method
permits a very fine resolution of the contact surface topography without
suffering from numerical instability.
|
2501.12834
|
The Optimization of Random Tree Codes for Limited Computational
Resources
|
cs.IT cs.CC math.IT
|
In this paper, we introduce an achievability bound on the frame error rate of
random tree code ensembles under a sequential decoding algorithm with a hard
computational limit and consider the optimization of the random tree code
ensembles over their branching structures/profiles and the decoding measure.
Through numerical examples, we show that the achievability bound for the
optimized random tree codes can approach the maximum likelihood (ML) decoding
performance of pure random codes.
|
2501.12835
|
Adaptive Retrieval Without Self-Knowledge? Bringing Uncertainty Back
Home
|
cs.CL cs.LG
|
Retrieval Augmented Generation (RAG) improves the correctness of Question
Answering (QA) and addresses hallucinations in Large Language Models (LLMs),
yet it greatly increases computational costs. Moreover, RAG is not always
needed, as it may introduce irrelevant information. Recent adaptive retrieval
methods integrate LLMs' intrinsic knowledge with external information by
appealing to LLM self-knowledge, but they often neglect efficiency evaluations
and comparisons
with uncertainty estimation techniques. We bridge this gap by conducting a
comprehensive analysis of 35 adaptive retrieval methods, including 8 recent
approaches and 27 uncertainty estimation techniques, across 6 datasets using 10
metrics for QA performance, self-knowledge, and efficiency. Our findings show
that uncertainty estimation techniques often outperform complex pipelines in
terms of efficiency and self-knowledge, while maintaining comparable QA
performance.
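The uncertainty-estimation idea evaluated above can be illustrated with a minimal, hypothetical sketch (function names and the threshold are illustrative, not from the paper): retrieval is triggered only when the model's average token entropy signals low self-confidence.

```python
import math

def mean_token_entropy(token_probs):
    """Average Shannon entropy (in nats) over per-token probability
    distributions; higher values indicate a less confident answer."""
    entropies = [-sum(p * math.log(p) for p in dist if p > 0)
                 for dist in token_probs]
    return sum(entropies) / len(entropies)

def should_retrieve(token_probs, threshold=0.5):
    """Gate external retrieval on the LLM's own uncertainty:
    retrieve only when mean entropy exceeds the threshold."""
    return mean_token_entropy(token_probs) > threshold

# Peaked distributions (a confident answer) skip retrieval;
# flat distributions (an uncertain answer) trigger it.
confident = [[0.97, 0.01, 0.01, 0.01], [0.95, 0.03, 0.01, 0.01]]
uncertain = [[0.25, 0.25, 0.25, 0.25], [0.4, 0.3, 0.2, 0.1]]
```

Such a gate adds essentially no overhead beyond the generation itself, which is why simple uncertainty estimators can beat heavier adaptive-retrieval pipelines on efficiency.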
|
2501.12840
|
AMM-Diff: Adaptive Multi-Modality Diffusion Network for Missing Modality
Imputation
|
cs.CV
|
In clinical practice, full imaging is not always feasible, often due to
complex acquisition protocols, stringent privacy regulations, or specific
clinical needs. However, missing MR modalities pose significant challenges for
tasks like brain tumor segmentation, especially in deep learning-based
segmentation, as each modality provides complementary information crucial for
improving accuracy. A promising solution is missing data imputation, where
absent modalities are generated from available ones. While generative models
have been widely used for this purpose, most state-of-the-art approaches are
limited to single or dual target translations, lacking the adaptability to
generate missing modalities based on varying input configurations. To address
this, we propose an Adaptive Multi-Modality Diffusion Network (AMM-Diff), a
novel diffusion-based generative model capable of handling any number of input
modalities and generating the missing ones. We designed an Image-Frequency
Fusion Network (IFFN) that learns a unified feature representation through a
self-supervised pretext task across the full input modalities and their
selected high-frequency Fourier components. The proposed diffusion model
leverages this representation, encapsulating prior knowledge of the complete
modalities, and combines it with an adaptive reconstruction strategy to achieve
missing modality completion. Experimental results on the BraTS 2021 dataset
demonstrate the effectiveness of our approach.
|
2501.12844
|
GAMED-Snake: Gradient-aware Adaptive Momentum Evolution Deep Snake Model
for Multi-organ Segmentation
|
cs.CV cs.AI
|
Multi-organ segmentation is a critical yet challenging task due to complex
anatomical backgrounds, blurred boundaries, and diverse morphologies. This
study introduces the Gradient-aware Adaptive Momentum Evolution Deep Snake
(GAMED-Snake) model, which establishes a novel paradigm for contour-based
segmentation by integrating gradient-based learning with adaptive momentum
evolution mechanisms. The GAMED-Snake model incorporates three major
innovations: First, the Distance Energy Map Prior (DEMP) generates a
pixel-level force field that effectively attracts contour points towards the
true boundaries, even in scenarios with complex backgrounds and blurred edges.
Second, the Differential Convolution Inception Module (DCIM) precisely extracts
comprehensive energy gradients, significantly enhancing segmentation accuracy.
Third, the Adaptive Momentum Evolution Mechanism (AMEM) employs cross-attention
to establish dynamic features across different iterations of evolution,
enabling precise boundary alignment for diverse morphologies. Experimental
results on four challenging multi-organ segmentation datasets demonstrate that
GAMED-Snake improves the mDice metric by approximately 2% compared to
state-of-the-art methods. Code will be available at
https://github.com/SYSUzrc/GAMED-Snake.
|
2501.12851
|
ACEBench: Who Wins the Match Point in Tool Usage?
|
cs.CL
|
Large Language Models (LLMs) have demonstrated significant potential in
decision-making and reasoning, particularly when integrated with various tools
to effectively solve complex problems. However, existing benchmarks for
evaluating LLMs' tool usage face several limitations: (1) limited evaluation
scenarios, often lacking assessments in real multi-turn dialogue contexts; (2)
narrow evaluation dimensions, with insufficient detailed assessments of how
LLMs use tools; and (3) reliance on LLMs or real API executions for evaluation,
which introduces significant overhead. To address these challenges, we
introduce ACEBench, a comprehensive benchmark for assessing tool usage in LLMs.
ACEBench categorizes data into three primary types based on evaluation
methodology: Normal, Special, and Agent. "Normal" evaluates tool usage in basic
scenarios; "Special" evaluates tool usage in situations with ambiguous or
incomplete instructions; "Agent" evaluates tool usage through multi-agent
interactions to simulate real-world, multi-turn dialogues. We conducted
extensive experiments using ACEBench, analyzing various LLMs in-depth and
providing a more granular examination of error causes across different data
types.
|
2501.12853
|
Data-and-Semantic Dual-Driven Spectrum Map Construction for 6G Spectrum
Management
|
cs.LG
|
Spectrum maps reflect the utilization and distribution of spectrum resources
in the electromagnetic environment, serving as an effective approach to support
spectrum management. However, the construction of spectrum maps in urban
environments is challenging because of high-density connection and complex
terrain. Moreover, the existing spectrum map construction methods are typically
applied to a fixed frequency, which cannot cover the entire frequency band. To
address the aforementioned challenges, a UNet-based data-and-semantic
dual-driven method is proposed by introducing the semantic knowledge of binary
city maps and binary sampling location maps to enhance the accuracy of spectrum
map construction in complex urban environments with dense communications.
Moreover, a joint frequency-space reasoning model is exploited to capture the
correlation of spectrum data in terms of space and frequency, enabling the
realization of complete spectrum map construction without sampling all
frequencies of spectrum data. The simulation results demonstrate that the
proposed method can infer the spectrum utilization status of missing
frequencies and improve the completeness of the spectrum map construction.
Furthermore, the accuracy of spectrum map construction achieved by the proposed
data-and-semantic dual-driven method outperforms the benchmark schemes,
especially in scenarios with low sampling density.
|
2501.12857
|
HierPromptLM: A Pure PLM-based Framework for Representation Learning on
Heterogeneous Text-rich Networks
|
cs.LG
|
Representation learning on heterogeneous text-rich networks (HTRNs), which
consist of multiple types of nodes and edges with each node associated with
textual information, is essential for various real-world applications. Given
the success of pretrained language models (PLMs) in processing text data,
recent efforts have focused on integrating PLMs into HTRN representation
learning. These methods typically handle textual and structural information
separately, using both PLMs and heterogeneous graph neural networks (HGNNs).
However, this separation fails to capture the critical interactions between
these two types of information within HTRNs. Additionally, it necessitates an
extra alignment step, which is challenging due to the fundamental differences
between the distinct embedding spaces generated by PLMs and HGNNs. To address
this,
we propose HierPromptLM, a novel pure PLM-based framework that seamlessly
models both text data and graph structures without the need for separate
processing. Firstly, we develop a Hierarchical Prompt module that employs
prompt learning to integrate text data and heterogeneous graph structures at
both the node and edge levels, within a unified textual space. Building upon
this foundation, we further introduce two innovative HTRN-tailored pretraining
tasks to fine-tune PLMs for representation learning by emphasizing the inherent
heterogeneity and interactions between textual and structural information
within HTRNs. Extensive experiments on two real-world HTRN datasets demonstrate
that HierPromptLM outperforms state-of-the-art methods, achieving significant
improvements of up to 6.08% for node classification and 10.84% for link
prediction.
|
2501.12859
|
Monte-Carlo based non-line-of-sight underwater wireless optical
communication channel modeling and system performance analysis under
turbulence
|
physics.ao-ph cs.IT math.IT
|
Compared with line-of-sight (LOS) communication, non-line-of-sight (NLOS)
underwater wireless optical communication (UWOC) systems have garnered
extensive attention because of their heightened suitability for the intricate
and dynamic underwater environment. In the NLOS channel, photons can reach the
receiver via sea surface reflection or particle scattering. However, existing
research lacks comprehensive channel models that incorporate both sea surface
reflection and
particle scattering. Moreover, the presence of ocean turbulence introduces
random fluctuations in the received optical signal based on the average light
intensity. Consequently, this paper adopts the Monte Carlo simulation method
(MCS) to solve the fading-free impulse response of the joint
reflection-scattering channel. Furthermore, a weighted double gamma function
(WDGF) is proposed to characterize the channel impulse response (CIR). Based on
the closed CIR model, the average bit error rate and the performance of the
interruption probability of the UWOC system under turbulence are analyzed. The
conclusions obtained are intended to assist in the design and performance
evaluation of NLOS UWOC systems.
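The Monte Carlo channel-simulation idea can be sketched generically (a simplified 2-D photon-transport toy model with illustrative parameters, not the paper's simulator):

```python
import math
import random

def simulate_photon_arrivals(n_photons=5000, c=0.3, receiver_x=5.0,
                             aperture=1.0, max_steps=50, seed=0):
    """Toy 2-D Monte Carlo photon transport: photons start at the origin
    heading +x, take exponentially distributed free paths (mean 1/c),
    scatter into a new direction at each step, and are tallied when they
    cross the receiver plane x = receiver_x within |y| <= aperture.
    Returns the total path lengths (proportional to arrival times) of
    received photons."""
    rng = random.Random(seed)
    arrivals = []
    for _ in range(n_photons):
        x, y, theta, path = 0.0, 0.0, 0.0, 0.0
        for _ in range(max_steps):
            step = rng.expovariate(c)        # free path between scattering events
            x += step * math.cos(theta)
            y += step * math.sin(theta)
            path += step
            if x >= receiver_x:
                if abs(y) <= aperture:
                    arrivals.append(path)
                break
            theta += rng.gauss(0.0, 0.5)     # small-angle scattering (toy model)
    return arrivals
```

A histogram of the returned path lengths approximates a fading-free impulse response; the paper fits a weighted double gamma function to such histograms to obtain a closed-form CIR.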
|
2501.12860
|
CrossDiff: Diffusion Probabilistic Model With Cross-conditional
Encoder-Decoder for Crack Segmentation
|
cs.CV
|
Crack Segmentation in industrial concrete surfaces is a challenging task
because cracks usually exhibit intricate morphology with slender appearances.
Traditional segmentation methods often struggle to accurately locate such
cracks, leading to inefficiencies in maintenance and repair processes. In this
paper, we propose a novel diffusion-based model with a cross-conditional
encoder-decoder, named CrossDiff, which is the first to introduce the diffusion
probabilistic model for the crack segmentation task. Specifically, CrossDiff
integrates a cross-encoder and a cross-decoder into the diffusion model to
constitute a cross-shaped diffusion model structure. The cross-encoder enhances
the ability to retain crack details and the cross-decoder helps extract the
semantic features of cracks. As a result, CrossDiff can better handle slender
cracks. Extensive experiments were conducted on five challenging crack datasets
including CFD, CrackTree200, DeepCrack, GAPs384, and Rissbilder. The results
demonstrate that the proposed CrossDiff model achieves impressive performance,
outperforming other state-of-the-art methods by 8.0% in terms of both Dice
score and IoU. The code will be open-source soon.
|
2501.12861
|
Hardware Distortion Modeling for Panel Selection in Large Intelligent
Surfaces
|
eess.SP cs.IT math.IT
|
Hardware distortion in large intelligent surfaces (LISs) may limit their
performance when scaling up such systems. It is of great importance to model
the non-ideal effects in their transceivers to study the hardware distortions
that can affect their performance. Therefore, we have focused on modeling and
studying the effects of nonlinear RX-chains in LISs. We first derive
expressions for SNDR of a LIS with a memory-less polynomial-based model at its
RX-chains. Then we propose a simplified double-parameter exponential model for
the distortion power and show that, compared to the polynomial-based model,
exponential model can improve the analytical tractability for SNDR optimization
problems. In particular, we consider a panel selection optimization problem in
a panel-based LIS scenario and show that the proposed model enables us to
derive two closed-form sub-optimal solutions for panel selection, and can be a
favorable alternative to high-order polynomial models in terms of computation
complexity, especially for theoretical works on hardware distortion in MIMO and
LIS systems. Numerical results show that the sub-optimal closed-form solutions
have a near-optimal performance in terms of SNDR compared to the global optimum
found by high-complexity heuristic search methods.
|
2501.12862
|
Mutation-Guided LLM-based Test Generation at Meta
|
cs.SE cs.AI cs.LG
|
This paper describes Meta's ACH system for mutation-guided LLM-based test
generation. ACH generates relatively few mutants (aka simulated faults),
compared to traditional mutation testing. Instead, it focuses on generating
currently undetected faults that are specific to an issue of concern. From
these currently uncaught faults, ACH generates tests that can catch them,
thereby `killing' the mutants and consequently hardening the platform against
regressions. We use privacy concerns to illustrate our approach, but ACH can
harden code against {\em any} type of regression. In total, ACH was applied to
10,795 Android Kotlin classes in 7 software platforms deployed by Meta, from
which it generated 9,095 mutants and 571 privacy-hardening test cases. ACH also
deploys an LLM-based equivalent mutant detection agent that achieves a
precision of 0.79 and a recall of 0.47 (rising to 0.95 and 0.96 with simple
pre-processing). ACH was used by Messenger and WhatsApp test-a-thons where
engineers accepted 73% of its tests, judging 36% to be privacy relevant. We
conclude that ACH hardens code against specific concerns and that, even when
its tests do not directly tackle the specific concern, engineers find them
useful for their other benefits.
|
2501.12868
|
As Confidence Aligns: Exploring the Effect of AI Confidence on Human
Self-confidence in Human-AI Decision Making
|
cs.HC cs.AI
|
Complementary collaboration between humans and AI is essential for human-AI
decision making. One feasible approach to achieving it involves accounting for
the calibrated confidence levels of both AI and users. However, this process
would likely be made more difficult by the fact that AI confidence may
influence users' self-confidence and its calibration. To explore these
dynamics, we conducted a randomized behavioral experiment. Our results indicate
that in human-AI decision-making, users' self-confidence aligns with AI
confidence and such alignment can persist even after AI ceases to be involved.
This alignment then affects users' self-confidence calibration. We also found
that the presence of real-time correctness feedback on decisions reduced the
degree of alignment. These findings suggest that users' self-confidence is not
independent of AI confidence, which practitioners aiming to achieve better
human-AI collaboration need to be aware of. We call for research focusing on
the alignment of human cognition and behavior with AI.
|
2501.12869
|
Drone Carrier: An Integrated Unmanned Surface Vehicle for Autonomous
Inspection and Intervention in GNSS-Denied Maritime Environment
|
cs.RO cs.AI
|
This paper introduces an innovative drone carrier concept that is applied in
maritime port security or offshore rescue. The system comprises a
heterogeneous team of multiple Unmanned Aerial Vehicles (UAVs) and
Unmanned Surface Vehicles (USVs) to perform inspection and intervention tasks
in GNSS-denied or interrupted environments. The carrier, an electric catamaran
measuring 4m by 7m, features a 4m by 6m deck supporting automated takeoff and
landing for four DJI M300 drones, along with a 10kg-payload manipulator
operable in up to level 3 sea conditions. Utilizing an offshore gimbal camera
for navigation, the carrier can autonomously navigate, approach and dock with
non-cooperative vessels, guided by an onboard camera, LiDAR, and Doppler
Velocity Log (DVL) over a 3 km$^2$ area. UAVs equipped with onboard
Ultra-Wideband (UWB) technology execute mapping, detection, and manipulation
tasks using a versatile gripper designed for wet, saline conditions.
Additionally, two UAVs can coordinate to transport large objects to the
manipulator or interact directly with them. These procedures are fully
automated and were successfully demonstrated at the Mohammed Bin Zayed
International Robotic Competition (MBZIRC2024), where the drone carrier,
equipped with four UAVs and one manipulator, automatically accomplished the
intervention tasks in level-3 sea conditions (wave height 1.25 m) based on
rough target information.
|
2501.12877
|
WisdomBot: Tuning Large Language Models with Artificial Intelligence
Knowledge
|
cs.CL
|
Large language models (LLMs) have emerged as powerful tools in natural
language processing (NLP), showing a promising future for artificial general
intelligence (AGI). Despite their notable performance in the general domain,
LLMs have remained suboptimal in the field of education, owing to the unique
challenges presented by this domain, such as the need for more specialized
knowledge, the requirement for personalized learning experiences, and the
necessity for concise explanations of complex concepts. To address these
issues, this paper presents a novel LLM for education named WisdomBot, which
combines the power of LLMs with educational theories, enabling their seamless
integration into educational contexts. To be specific, we harness
self-instructed knowledge concepts and instructions under the guidance of
Bloom's Taxonomy as training data. To further enhance the accuracy and
professionalism of the model's responses to factual questions, we introduce
two key enhancements during inference: local knowledge base retrieval
augmentation and search engine retrieval augmentation. We
substantiate the effectiveness of our approach by applying it to several
Chinese LLMs, thereby showcasing that the fine-tuned models can generate more
reliable and professional responses.
|
2501.12880
|
Advanced deep architecture pruning using single filter performance
|
cs.LG cs.CV
|
Pruning the parameters and structure of neural networks reduces the
computational complexity, energy consumption, and latency during inference.
Recently, a novel underlying mechanism for successful deep learning (DL) was
presented, based on a method that quantitatively measures single-filter
performance in each layer of a DL architecture. Herein, we demonstrate how
this understanding paves the path to highly dilute the convolutional layers of
deep architectures without affecting their overall accuracy using applied
filter cluster connections (AFCC). AFCC is exemplified on VGG-11 and
EfficientNet-B0 architectures trained on CIFAR-100, and at high pruning rates
it outperforms other techniques of the same pruning magnitude. Additionally,
this technique is extended to single-node performance and to heavy pruning of
fully connected layers, suggesting a possible implementation to considerably
reduce the complexity of over-parameterized AI tasks.
|
2501.12881
|
Reinforcement learning Based Automated Design of Differential Evolution
Algorithm for Black-box Optimization
|
cs.NE cs.AI
|
Differential evolution (DE) algorithm is recognized as one of the most
effective evolutionary algorithms, demonstrating remarkable efficacy in
black-box optimization due to its derivative-free nature. Numerous enhancements
to the fundamental DE have been proposed, incorporating innovative mutation
strategies and sophisticated parameter tuning techniques to improve
performance. However, no single variant has proven universally superior across
all problems. To address this challenge, we introduce a novel framework that
employs reinforcement learning (RL) to automatically design DE for black-box
optimization through meta-learning. RL acts as an advanced meta-optimizer,
generating a customized DE configuration that includes an optimal
initialization strategy, update rule, and hyperparameters tailored to a
specific black-box optimization problem. This process is informed by a detailed
analysis of the problem characteristics. In this proof-of-concept study, we
utilize a double deep Q-network for implementation, considering a subset of 40
possible strategy combinations and parameter optimizations simultaneously. The
framework's performance is evaluated against black-box optimization benchmarks
and compared with state-of-the-art algorithms. The experimental results
highlight the promising potential of our proposed framework.
|
2501.12884
|
Learning Graph Node Embeddings by Smooth Pair Sampling
|
cs.LG cs.AI
|
Random walk-based node embedding algorithms have attracted a lot of attention
due to their scalability and ease of implementation. Previous research has
focused on different walk strategies, optimization objectives, and embedding
learning models. Inspired by observations on real data, we take a different
approach and propose a new regularization technique. More precisely, the
frequencies of node pairs generated by the skip-gram model on random walk node
sequences follow a highly skewed distribution which causes learning to be
dominated by a fraction of the pairs. We address the issue by designing an
efficient sampling procedure that generates node pairs according to their {\em
smoothed frequency}. Theoretical and experimental results demonstrate the
advantages of our approach.
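As a rough illustration of frequency smoothing (the sampling routine and the exponent 0.75, borrowed from word2vec-style smoothing, are illustrative assumptions, not the paper's exact procedure):

```python
import random

def smoothed_pair_sampler(pair_counts, alpha=0.75, seed=0):
    # Sample node pairs proportionally to count**alpha; an exponent
    # alpha < 1 flattens the highly skewed raw frequency distribution
    # so learning is less dominated by a few very frequent pairs.
    # alpha=0.75 is the classic word2vec smoothing exponent, used here
    # only as an illustrative default.
    pairs = list(pair_counts)
    weights = [pair_counts[p] ** alpha for p in pairs]
    rng = random.Random(seed)

    def sample():
        return rng.choices(pairs, weights=weights, k=1)[0]

    return sample
```

With raw counts {('a','b'): 1000, ('c','d'): 1}, smoothing reduces the frequent pair's relative weight from 1000:1 to roughly 178:1, so rare pairs are drawn far more often during training.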
|
2501.12886
|
Multi-Platform Aggregated Dataset of Online Communities (MADOC)
|
cs.SI cs.CY physics.soc-ph
|
The Multi-platform Aggregated Dataset of Online Communities (MADOC) is a
comprehensive dataset that facilitates computational social science research by
providing FAIR-compliant standardized access to cross-platform analysis of
online social dynamics. MADOC aggregates and standardizes data from Bluesky,
Koo, Reddit, and Voat (2012-2024), containing 18.9 million posts, 236 million
comments, and 23.1 million unique users. The dataset enables comparative
studies of toxic behavior evolution across platforms through standardized
interaction records and sentiment analysis. By providing UUID-anonymized user
histories and temporal alignment of banned communities' activity patterns,
MADOC supports research on content moderation impacts and platform migration
trends. Distributed via Zenodo with persistent identifiers and Python/R
toolkits, the dataset adheres to FAIR principles while addressing post-API-era
research challenges through ethical aggregation of public social media
archives.
|
2501.12892
|
Closed-loop robust control of long-term diabetes progression via
physical activity management
|
eess.SY cs.SY
|
Large clinical evidence acknowledges the crucial role played by physical
activity in delaying the progression of type-2 diabetes. However, the
literature lacks control approaches that leverage exercise for type-2 diabetes
control and more in general lacks a quantitative assessment of medical
guidelines on the recommended amount of physical activity to be performed,
mainly due to the absence of mathematical models that suitably estimate its
benefits on diabetes progression. In this work, in order to provide a
control-theoretical formulation of the exercise, we design a feedback law in
terms of recommended physical activity, following a model predictive control
approach, based on a widespread compact diabetes progression model, suitably
modified to properly account for the long-term effect of the exercise.
Moreover, we illustrate that the proposed approach exhibits reliable robustness
properties with respect to initial conditions and parameter perturbations,
which may be used to reflect inter-patient variability. Results are encouraging
in view of the validation of the control law on comprehensive high-dimensional
models of diabetes progression, with the aim of translating the prediction of
the controller into reasonable recommendations and to quantitatively support
medical decision-making.
|
2501.12894
|
Designing and Evaluating an Educational Recommender System with
Different Levels of User Control
|
cs.IR cs.CY cs.HC
|
Educational recommender systems (ERSs) play a crucial role in personalizing
learning experiences and enhancing educational outcomes by providing
recommendations of personalized resources and activities to learners, tailored
to their individual learning needs. However, their effectiveness is often
diminished by insufficient user control and limited transparency. To address
these challenges, in this paper, we present the systematic design and
evaluation of an interactive ERS, in which we introduce different levels of
user control. Concretely, we introduce user control around the input (i.e.,
user profile), process (i.e., recommendation algorithm), and output (i.e.,
recommendations) of the ERS. To evaluate our system, we conducted an online
user study (N=30) to explore the impact of user control on users' perceptions
of the ERS in terms of several important user-centric aspects. Moreover, we
investigated the effects of user control on multiple recommendation goals,
namely transparency, trust, and satisfaction, as well as the interactions
between these goals. Our results demonstrate the positive impact of user
control on user perceived benefits of the ERS. Moreover, our study shows that
user control strongly correlates with transparency and moderately correlates
with trust and satisfaction. In terms of the interaction between these goals,
our results reveal that transparency correlates moderately, and trust
strongly, with satisfaction, whereas transparency and trust are less
correlated with each other.
|
2501.12895
|
Test-Time Preference Optimization: On-the-Fly Alignment via Iterative
Textual Feedback
|
cs.CL
|
Large language models (LLMs) demonstrate impressive performance but lack the
flexibility to adapt to human preferences quickly without retraining. In this
work, we introduce Test-time Preference Optimization (TPO), a framework that
aligns LLM outputs with human preferences during inference, removing the need
to update model parameters. Rather than relying on purely numerical rewards,
TPO translates reward signals into textual critiques and uses them as textual
rewards to iteratively refine its response. Evaluations on benchmarks covering
instruction following, preference alignment, safety, and mathematics reveal
that TPO progressively improves alignment with human preferences. Notably,
after only a few TPO steps, the initially unaligned Llama-3.1-70B-SFT model can
surpass the aligned counterpart, Llama-3.1-70B-Instruct. Furthermore, TPO
scales efficiently with both the search width and depth during inference.
Through case studies, we illustrate how TPO exploits the innate capacity of LLM
to interpret and act upon reward signals. Our findings establish TPO as a
practical, lightweight alternative for test-time preference optimization,
achieving alignment on the fly. Our code is publicly available at
https://github.com/yafuly/TPO.
|
2501.12896
|
Irrational Complex Rotations Empower Low-bit Optimizers
|
cs.LG
|
In this paper, we propose a novel optimizer state compression algorithm,
namely $\pi$-Quant, which leverages the properties of irrational numbers (e.g.,
$\pi$) for memory-efficient training. The core idea is based on our
mathematical findings, which show that a pair of parameters can be represented
by a single rotation angle using the complex rotation scheme. Building on this
insight, we map the parameters into a complex space and perform quantization
using the corresponding rotation angles. To efficiently integrate it into the
optimization process, we develop an efficient system of geometric equations
that computes the precise rotation angles with linear complexity. We evaluate
$\pi$-Quant on a wide range of tasks. Our experiments show that it can reduce
the bit-width of parameters to 3.32-bit, achieving a 75% reduction in parameter
scale and a 40% decrease in GPU memory usage, all while maintaining full
accuracy.
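As a toy sketch of the underlying idea only (it assumes the parameter pair is pre-normalized to unit magnitude, a simplifying assumption that is not the paper's actual quantization scheme): a pair on the unit circle is fully described by one angle, which can then be quantized to a small number of bits.

```python
import math

def pair_to_angle(x, y):
    # A unit-norm pair (x, y) = (cos t, sin t) is fully described by t.
    return math.atan2(y, x)

def quantize_angle(theta, bits):
    # Map the angle onto one of 2**bits evenly spaced levels in [-pi, pi).
    levels = 2 ** bits
    step = 2 * math.pi / levels
    return round((theta + math.pi) / step) % levels

def dequantize_angle(index, bits):
    levels = 2 ** bits
    step = 2 * math.pi / levels
    return index * step - math.pi

def angle_to_pair(theta):
    return math.cos(theta), math.sin(theta)
```

Round-tripping a unit-norm pair through an 8-bit angle index reconstructs it to within about 1e-2 while storing one small integer for two parameters.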
|
2501.12898
|
DocTTT: Test-Time Training for Handwritten Document Recognition Using
Meta-Auxiliary Learning
|
cs.CV
|
Despite recent significant advancements in Handwritten Document Recognition
(HDR), the efficient and accurate recognition of text against complex
backgrounds, diverse handwriting styles, and varying document layouts remains a
practical challenge. Moreover, this issue is seldom addressed in academic
research, particularly in scenarios with minimal annotated data available. In
this paper, we introduce the DocTTT framework to address these challenges. The
key innovation of our approach is that it uses test-time training to adapt the
model to each specific input during testing. We propose a novel Meta-Auxiliary
learning approach that combines Meta-learning and self-supervised Masked
Autoencoder~(MAE). During testing, we adapt the visual representation
parameters using a self-supervised MAE loss. During training, we learn the
model parameters using a meta-learning framework, so that the model parameters
are learned to adapt to a new input effectively. Experimental results show that
our proposed method significantly outperforms existing state-of-the-art
approaches on benchmark datasets.
|
2501.12900
|
Unified CNNs and transformers underlying learning mechanism reveals
multi-head attention modus vivendi
|
cs.LG cs.CV
|
Convolutional neural networks (CNNs) evaluate short-range correlations in
input images which progress along the layers, whereas vision transformer (ViT)
architectures evaluate long-range correlations, using repeated transformer
encoders composed of fully connected layers. Both are designed to solve complex
classification tasks but from different perspectives. This study demonstrates
that CNNs and ViT architectures stem from a unified underlying learning
mechanism, which quantitatively measures the single-nodal performance (SNP) of
each node in feedforward (FF) and multi-head attention (MHA) subblocks. Each
node identifies small clusters of possible output labels, with additional noise
represented as labels outside these clusters. These features are progressively
sharpened along the transformer encoders, enhancing the signal-to-noise ratio.
This unified underlying learning mechanism leads to two main findings. First,
it enables an efficient applied nodal diagonal connection (ANDC) pruning
technique without affecting the accuracy. Second, based on the SNP, spontaneous
symmetry breaking occurs among the MHA heads, such that each head focuses its
attention on a subset of labels through cooperation among its SNPs.
Consequently, each head becomes an expert in recognizing its designated labels,
representing a quantitative MHA modus vivendi mechanism. These results are
based on a compact convolutional transformer architecture trained on the
CIFAR-100 and Flowers-102 datasets and call for their extension to other
architectures and applications, such as natural language processing.
|
2501.12901
|
Architectural Fusion Through Contextual Partitioning in Large Language
Models: A Novel Approach to Parameterized Knowledge Integration
|
cs.CL cs.AI
|
Contextual Partitioning introduces an innovative approach to enhancing the
architectural design of large-scale computational models through the dynamic
segmentation of parameters into context-aware regions. This methodology
emphasizes the importance of task-specific specialization, achieved through
adaptive parameter allocation mechanisms that align with the linguistic
features of input data. Experimental evaluations demonstrated substantial
improvements in accuracy, perplexity, and contextual coherence across a variety
of linguistic tasks, highlighting the adaptability and scalability of the
proposed framework. By reducing redundancy and enhancing computational
efficiency, Contextual Partitioning not only streamlines model operations but
also expands the scope of applications for advanced language processing
systems. The approach operates autonomously, requiring no external fine-tuning,
thereby addressing a significant limitation in conventional parameter
optimization techniques. Empirical results demonstrate the effectiveness of
gradient-driven segmentation, enabling models to dynamically recalibrate and
specialize in response to task-specific demands. Furthermore, resource
utilization metrics reveal notable reductions in memory usage and training
times, confirming the efficiency of the approach. Observations from qualitative
analyses illustrate improved contextual coherence and logical flow in generated
outputs, reinforcing the practical value of this technique. The findings
collectively demonstrate the potential for Contextual Partitioning to redefine
the scalability and adaptability of computational language architectures in
diverse and complex domains.
|
2501.12902
|
Learning to Optimize Joint Chance-constrained Power Dispatch Problems
|
eess.SY cs.SY
|
The ever-increasing integration of stochastic renewable energy sources into
power systems operation is making the supply-demand balance more challenging.
While joint chance-constrained methods are equipped to model these complexities
and uncertainties, solving these models using the traditional iterative solvers
is time-consuming and can hinder real-time implementation. To overcome the
shortcomings of today's solvers, we propose a fast, scalable, and explainable
machine learning-based optimization proxy. Our solution, called Learning to
Optimize the Optimization of Joint Chance-Constrained Problems (LOOP-JCCP), is
iteration-free and solves the underlying problem in a single-shot. Our model
uses a polyhedral reformulation of the original problem to manage constraint
violations and ensure solution feasibility across various scenarios through
customizable probability settings. To this end, we build on our recent
deterministic solution (LOOP-LC 2.0) by incorporating a set aggregator module
to handle uncertain sample sets of varying sizes and complexities. Our results
verify the feasibility of our near-optimal solutions for joint
chance-constrained power dispatch scenarios. Additionally, our feasibility
guarantees increase the transparency and interpretability of our method, which
is essential for operators to trust the outcomes. We showcase the effectiveness
of our model in solving the stochastic energy management problem of Virtual
Power Plants (VPPs). Our numerical findings complement our theoretical
justifications and demonstrate great flexibility in parameter tuning,
adaptability to diverse datasets, and increased computational speed.
|
2501.12909
|
FilmAgent: A Multi-Agent Framework for End-to-End Film Automation in
Virtual 3D Spaces
|
cs.CL cs.GR cs.MA
|
Virtual film production requires intricate decision-making processes,
including scriptwriting, virtual cinematography, and precise actor positioning
and actions. Motivated by recent advances in automated decision-making with
language agent-based societies, this paper introduces FilmAgent, a novel
LLM-based multi-agent collaborative framework for end-to-end film automation in
our constructed 3D virtual spaces. FilmAgent simulates various crew roles,
including directors, screenwriters, actors, and cinematographers, and covers
key stages of a film production workflow: (1) idea development transforms
brainstormed ideas into structured story outlines; (2) scriptwriting elaborates
on dialogue and character actions for each scene; (3) cinematography determines
the camera setups for each shot. A team of agents collaborates through
iterative feedback and revisions, thereby verifying intermediate scripts and
reducing hallucinations. We evaluate the generated videos on 15 ideas and 4 key
aspects. Human evaluation shows that FilmAgent outperforms all baselines across
all aspects and scores 3.98 out of 5 on average, showing the feasibility of
multi-agent collaboration in filmmaking. Further analysis reveals that
FilmAgent, despite using the less advanced GPT-4o model, surpasses the
single-agent o1, showing the advantage of a well-coordinated multi-agent
system. Lastly, we discuss the complementary strengths and weaknesses of
OpenAI's text-to-video model Sora and our FilmAgent in filmmaking.
|
2501.12910
|
PreciseCam: Precise Camera Control for Text-to-Image Generation
|
cs.CV cs.AI cs.LG
|
Images as an artistic medium often rely on specific camera angles and lens
distortions to convey ideas or emotions; however, such precise control is
missing in current text-to-image models. We propose an efficient and general
solution that allows precise control over the camera when generating both
photographic and artistic images. Unlike prior methods that rely on predefined
shots, we rely solely on four simple extrinsic and intrinsic camera parameters,
removing the need for pre-existing geometry, reference 3D objects, and
multi-view data. We also present a novel dataset with more than 57,000 images,
along with their text prompts and ground-truth camera parameters. Our
evaluation shows precise camera control in text-to-image generation, surpassing
traditional prompt engineering approaches. Our data, model, and code are
publicly available at https://graphics.unizar.es/projects/PreciseCam2024.
|
2501.12911
|
A Selective Homomorphic Encryption Approach for Faster
Privacy-Preserving Federated Learning
|
cs.CR cs.DC cs.LG
|
Federated learning is a machine learning method that supports training models
on decentralized devices or servers, where each holds its local data, removing
the need for data exchange. This approach is especially useful in healthcare,
as it enables training on sensitive data without needing to share them. The
nature of federated learning necessitates robust security precautions due to
data leakage concerns during communication. To address this issue, we propose a
new approach that employs selective encryption, homomorphic encryption,
differential privacy, and bit-wise scrambling to minimize data leakage while
achieving good execution performance. Our technique, FAS (fast and secure
federated learning) is used to train deep learning models on medical imaging
data. We implemented our technique using the Flower framework and compared it
a state-of-the-art federated learning approach that also uses selective
homomorphic encryption. Our experiments were run in a cluster of eleven
physical machines to create a real-world federated learning scenario on
different datasets. We observed that our approach is up to 90\% faster than
applying fully homomorphic encryption on the model weights. In addition, we can
avoid the pretraining step that is required by our competitor and can save up
to 20\% in terms of total execution time. While our approach was faster, it
obtained similar security results as the competitor.
|
2501.12913
|
Set-point control and local stability for flat nonlinear systems using
model-following control
|
eess.SY cs.SY math.OC
|
We consider the set-point control problem for nonlinear systems with flat
output that are subject to perturbations. The nonlinear dynamics as well as the
perturbations are locally Lipschitz. We apply the model-following control (MFC)
approach which consists of a model control loop (MCL) for a feedforward
generation and a process control loop (PCL) that compensates the perturbations
using high-gain feedback. We analyse the resulting closed-loop system and
discuss its relation to a standard flatness-based high-gain approach. In
particular we analyse the estimated region of attraction provided by a
quadratic Lyapunov function. A case study illustrates the approach and
quantifies the region of attraction obtained for each control approach. Using
the initial condition of the model control loop as a tuning parameter for the
MFC design allows a significantly larger region of attraction to be guaranteed
compared to a conventional single-loop high-gain design.
|
2501.12914
|
A control system framework for counterfactuals: an optimization based
approach
|
eess.SY cs.SY
|
Counterfactuals are a concept inherited from the field of logic and in
general pertain to the existence of causal relations between sentences or
events. In particular, this concept has been introduced also in the context of
interpretability in artificial intelligence, where counterfactuals refer to the
minimum change to the feature values that changes the prediction of a
classification model. The artificial intelligence framework of counterfactuals
is mostly focused on machine learning approaches, typically neglecting the
physics of the variables that determine a change in class. However, a
theoretical formulation of counterfactuals in a control system framework -
i.e., able to account for the mechanisms underlying a change in class - is
lacking. To fill this gap, in this work we propose an original control system,
physics-informed, theoretical foundation for counterfactuals, by means of the
formulation of an optimal control problem. We apply the proposed methodology to
a general glucose-insulin regulation model and results appear promising and
pave the way to the possible integration with artificial intelligence
techniques, with the aim of feeding machine learning models with the physics
knowledge acquired through the system framework.
|
2501.12916
|
Trajectory tracking model-following control using Lyapunov redesign with
output time-derivatives to compensate unmatched uncertainties
|
eess.SY cs.SY math.OC
|
We study trajectory tracking for flat nonlinear systems with unmatched
uncertainties using the model-following control (MFC) architecture. We apply
state feedback linearisation control for the process and propose a simplified
implementation of the model control loop which results in a simple model in
Brunovsky-form that represents the nominal feedback linearised dynamics of the
nonlinear process. To compensate possibly unmatched model uncertainties, we
employ Lyapunov redesign with numeric derivatives of the output. It turns out
that for a special initialisation of the model, the MFC reduces to a
single-loop control design. We illustrate our results by a numerical example.
|
2501.12919
|
Contrastive Language-Structure Pre-training Driven by Materials Science
Literature
|
cs.LG cond-mat.mtrl-sci
|
Understanding structure-property relationships is an essential yet
challenging aspect of materials discovery and development. To facilitate this
process, recent studies in materials informatics have sought latent embedding
spaces of crystal structures to capture their similarities based on properties
and functionalities. However, abstract feature-based embedding spaces are
human-unfriendly and prevent intuitive and efficient exploration of the vast
materials space. Here we introduce Contrastive Language--Structure Pre-training
(CLaSP), a learning paradigm for constructing crossmodal embedding spaces
between crystal structures and texts. CLaSP aims to achieve material embeddings
that 1) capture property- and functionality-related similarities between
crystal structures and 2) allow intuitive retrieval of materials via
user-provided description texts as queries. To compensate for the lack of
sufficient datasets linking crystal structures with textual descriptions, CLaSP
leverages a dataset of over 400,000 published crystal structures and
corresponding publication records, including paper titles and abstracts, for
training. We demonstrate the effectiveness of CLaSP through text-based crystal
structure screening and embedding space visualization.
|
2501.12921
|
Generalized Orthogonal de Bruijn Sequences
|
cs.IT math.CO math.IT
|
A de Bruijn sequence of order $k$ over a finite alphabet is a cyclic sequence
with the property that it contains every possible $k$-sequence as a substring
exactly once. Orthogonal de Bruijn sequences are collections of de Bruijn
sequences of the same order, $k$, satisfying the joint constraint that every
$(k+1)$-sequence appears as a substring in at most one of the sequences in the
collection. Both de Bruijn and orthogonal de Bruijn sequences have found
numerous applications in synthetic biology, although the latter topic remains
largely unexplored in the coding theory literature. Here we study three
relevant practical generalizations of orthogonal de Bruijn sequences where we
relax either the constraint that every $(k+1)$-sequence appears exactly once,
or that the sequences themselves are de Bruijn rather than balanced de Bruijn
sequences. We also provide lower and upper bounds on the number of fixed-weight
orthogonal de Bruijn sequences.
|
2501.12927
|
Longitudinal Missing Data Imputation for Predicting Disability Stage of
Patients with Multiple Sclerosis
|
cs.LG
|
Multiple Sclerosis (MS) is a chronic disease characterized by progressive or
alternate impairment of neurological functions (motor, sensory, visual, and
cognitive). Predicting disease progression with a probabilistic and
time-dependent approach might help in suggesting interventions that can delay
the progression of the disease. However, extracting informative knowledge from
irregularly collected longitudinal data is difficult, and missing data pose
significant challenges. MS progression is measured through the Expanded
Disability Status Scale (EDSS), which quantifies and monitors disability in MS
over time. EDSS assesses impairment in eight functional systems (FS).
Frequently, only the EDSS score assigned by clinicians is reported, while FS
sub-scores are missing. Imputing these scores might be useful, especially to
stratify patients according to their phenotype assessed over the disease
progression. This study aimed at i) exploring different methodologies for
imputing missing FS sub-scores, and ii) predicting the EDSS score using
complete clinical data. Results show that Exponential Weighted Moving Average
achieved the lowest error rate in the missing data imputation task;
furthermore, the combination of Classification and Regression Trees for the
imputation and SVM for the prediction task obtained the best accuracy.
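A minimal sketch of exponentially weighted moving average imputation on a longitudinal series (the smoothing factor and the convention of carrying the running average into missing slots are illustrative assumptions, not the study's exact configuration):

```python
def ewma_impute(series, alpha=0.5):
    # Replace missing entries (None) with the exponentially weighted
    # moving average of the observations seen so far; alpha weights
    # the most recent observation.
    out, ewma = [], None
    for value in series:
        if value is None:
            out.append(ewma)  # stays None if nothing observed yet
        else:
            ewma = value if ewma is None else alpha * value + (1 - alpha) * ewma
            out.append(value)
    return out
```

For example, `ewma_impute([2.0, 4.0, None, 8.0])` fills the gap with 3.0, the running average of the first two scores.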
|
2501.12931
|
DynamicEarth: How Far are We from Open-Vocabulary Change Detection?
|
cs.CV
|
Monitoring Earth's evolving land covers requires methods capable of detecting
changes across a wide range of categories and contexts. Existing change
detection methods are hindered by their dependency on predefined classes,
reducing their effectiveness in open-world applications. To address this issue,
we introduce open-vocabulary change detection (OVCD), a novel task that bridges
vision and language to detect changes across any category. Considering the lack
of high-quality data and annotation, we propose two training-free frameworks,
M-C-I and I-M-C, which leverage and integrate off-the-shelf foundation models
for the OVCD task. The insight behind the M-C-I framework is to discover all
potential changes and then classify these changes, while the insight of I-M-C
framework is to identify all targets of interest and then determine whether
their states have changed. Based on these two frameworks, we instantiate
several methods, e.g., SAM-DINOv2-SegEarth-OV, Grounding-DINO-SAM2-DINO,
etc. Extensive evaluations on 5 benchmark datasets demonstrate the superior
generalization and robustness of our OVCD methods over existing supervised and
unsupervised methods. To support continued exploration, we release
DynamicEarth, a dedicated codebase designed to advance research and application
of OVCD. https://likyoo.github.io/DynamicEarth
|
2501.12934
|
Correctness Assessment of Code Generated by Large Language Models Using
Internal Representations
|
cs.SE cs.LG
|
Ensuring the correctness of code generated by Large Language Models (LLMs)
presents a significant challenge in AI-driven software development. Existing
approaches predominantly rely on black-box (closed-box) approaches that
evaluate correctness post-generation, failing to utilize the rich insights
embedded in the LLMs' internal states during code generation. In this paper, we
introduce OPENIA, a novel white-box (open-box) framework that leverages these
internal representations to assess the correctness of LLM-generated code.
OPENIA systematically analyzes the intermediate states of representative
open-source LLMs specialized for code, including DeepSeek-Coder, CodeLlama, and
MagicCoder, across diverse code generation benchmarks. Our empirical analysis
reveals that these internal representations encode latent information, which
strongly correlates with the correctness of the generated code. Building on
these insights, OPENIA uses a white-box/open-box approach to make informed
predictions about code correctness, offering significant advantages in
adaptability and robustness over traditional classification-based methods and
zero-shot approaches. Experimental results demonstrate that OPENIA consistently
outperforms baseline models, achieving higher accuracy, precision, recall, and
F1-Scores with up to a 2X improvement in standalone code generation and a 46%
enhancement in repository-specific scenarios. By unlocking the potential of
in-process signals, OPENIA paves the way for more proactive and efficient
quality assurance mechanisms in LLM-assisted code generation.
|
2501.12935
|
3D Object Manipulation in a Single Image using Generative Models
|
cs.CV
|
Object manipulation in images aims to not only edit the object's presentation
but also gift objects with motion. Previous methods encountered challenges in
concurrently handling static editing and dynamic generation, while also
struggling to achieve fidelity in object appearance and scene lighting. In this
work, we introduce \textbf{OMG3D}, a novel framework that integrates the
precise geometric control with the generative power of diffusion models, thus
achieving significant enhancements in visual performance. Our framework first
converts 2D objects into 3D, enabling user-directed modifications and lifelike
motions at the geometric level. To address texture realism, we propose
CustomRefiner, a texture refinement module that pre-trains a customized
diffusion model to align the details and style of coarse renderings of the
rough 3D model with the original image, further refining the texture.
Additionally,
we introduce IllumiCombiner, a lighting processing module that estimates and
corrects background lighting to match human visual perception, resulting in
more realistic shadow effects. Extensive experiments demonstrate the
outstanding visual performance of our approach in both static and dynamic
scenarios. Remarkably, all these steps can be done using one NVIDIA 3090.
Project page is at https://whalesong-zrs.github.io/OMG3D-projectpage/
|
2501.12938
|
Robust Hypothesis Testing with Abstention
|
cs.IT math.IT
|
We study the binary hypothesis testing problem where an adversary may
potentially corrupt a fraction of the samples. The detector is, however,
permitted to abstain from making a decision if (and only if) the adversary is
present. We consider a few natural "contamination models" and characterize for
them the trade-off between the error exponents of the four types of errors --
errors of deciding in favour of the incorrect hypothesis when the adversary is
present and errors of abstaining or deciding in favour of the wrong hypothesis
when the adversary is absent, under the two hypotheses.
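For background (a standard large-deviations definition, not this paper's specific characterization), an error exponent measures the exponential decay rate of an error probability in the sample size $n$:

```latex
% Error exponent of an error event under n i.i.d. samples:
E \;=\; -\lim_{n\to\infty} \frac{1}{n}\,\log P_n(\text{error}).
% Four such exponents are traded off here: one per combination of
% true hypothesis (H_0 or H_1) and adversary status (present/absent),
% where "error" means deciding for the wrong hypothesis when the
% adversary is present, and abstaining or deciding wrongly when absent.
```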
|
2501.12942
|
Offline Critic-Guided Diffusion Policy for Multi-User Delay-Constrained
Scheduling
|
cs.AI
|
Effective multi-user delay-constrained scheduling is crucial in various
real-world applications, such as instant messaging, live streaming, and data
center management. In these scenarios, schedulers must make real-time decisions
to satisfy both delay and resource constraints without prior knowledge of
system dynamics, which are often time-varying and challenging to estimate.
Current learning-based methods typically require interactions with actual
systems during the training stage, which can be difficult or impractical, as
they can significantly degrade system performance and incur substantial
service costs. To address these challenges, we propose a novel
offline reinforcement learning-based algorithm, named \underline{S}cheduling By
\underline{O}ffline Learning with \underline{C}ritic Guidance and
\underline{D}iffusion Generation (SOCD), to learn efficient scheduling policies
purely from pre-collected \emph{offline data}. SOCD innovatively employs a
diffusion-based policy network, complemented by a sampling-free critic network
for policy guidance. By integrating Lagrangian multiplier optimization into
offline reinforcement learning, SOCD effectively trains high-quality
constraint-aware policies exclusively from available datasets, eliminating the
need for online interactions with the system. Experimental results demonstrate
that SOCD is resilient to various system dynamics, including partially
observable and large-scale environments, and delivers superior performance
compared to existing methods.
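As a heavily simplified, illustrative sketch (not the actual SOCD architecture), critic-guided selection from a generative policy can be mimicked as follows; the toy "denoiser" and critic below are stand-ins that a real system would learn from offline data:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_diffusion_sample(state, n_candidates, n_steps=10):
    """Stand-in for a diffusion-based policy network: start from pure
    noise and iteratively 'denoise' toward a state-dependent target.
    (A real diffusion policy learns this denoiser from offline data.)"""
    target = np.tanh(state).sum()           # toy state-conditioned mean
    a = rng.normal(size=(n_candidates, 2))  # start from Gaussian noise
    for _ in range(n_steps):
        a = a + 0.1 * (target - a)          # toy deterministic denoising step
    return a

def toy_critic(state, actions):
    """Stand-in sampling-free critic: higher score = better action."""
    target = np.tanh(state).sum()
    return -np.sum((actions - target) ** 2, axis=1)

def select_action(state, n_candidates=16):
    """Critic-guided selection: sample candidate actions from the
    generative policy, keep the one the critic scores highest."""
    candidates = toy_diffusion_sample(state, n_candidates)
    return candidates[np.argmax(toy_critic(state, candidates))]

action = select_action(np.array([0.5, -0.2]))
print(action.shape)  # (2,)
```

The design point this illustrates is that the critic steers action choice at decision time without requiring any environment interaction.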
|
2501.12943
|
Ontology-Enhanced Educational Annotation Activities
|
cs.CL cs.DL
|
Information and communications technology and technology-enhanced learning
have unquestionably transformed traditional teaching-learning processes and are
positioned as key factors to promote quality education, one of the basic
sustainable development goals of the 2030 agenda. Document annotation, which
was traditionally carried out with pencil and paper and currently benefits from
digital document annotation tools, is a representative example of this
transformation. Using document annotation tools, students can enrich the
documents with annotations that highlight the most relevant aspects of these
documents. As the conceptual complexity of the learning domain increases, the
annotation of the documents may require comprehensive domain knowledge and an
expert analysis capability that students usually lack. Consequently, a
proliferation of irrelevant, incorrect, and/or decontextualized
annotations may appear, while other relevant aspects are completely ignored by
the students. The main hypothesis of this paper is that the use of a guiding
annotation ontology in annotation activities is key to alleviating these
shortcomings. Consequently, comprehension is improved,
exhaustive content analysis is promoted, and meta-reflective thinking is
developed. To test this hypothesis, we describe our own annotation tool,
\@note, which fully implements this ontology-enhanced annotation paradigm, and
we provide experimental evidence about how \@note can improve academic
performance via a pilot study concerning critical literary annotation.
|
2501.12946
|
Less is More: Simple yet Effective Heuristic Community Detection with
Graph Convolution Network
|
cs.SI
|
Community detection is crucial in data mining. Traditional methods primarily
focus on graph structure, often neglecting the significance of attribute
features. In contrast, deep learning-based approaches incorporate attribute
features and local structural information through contrastive learning,
improving detection performance. However, existing algorithms' complex design
and joint optimization make them difficult to train and reduce detection
efficiency. Additionally, these methods require the number of communities to be
predefined, making the results susceptible to artificial interference. To
address these challenges, we propose a simple yet effective community detection
algorithm that can adaptively detect communities without relying on data
augmentation and contrastive optimization. The proposed algorithm first
performs community pre-detection to extract global structural information
adaptively. It then utilizes a graph convolutional network (GCN) to integrate
local structural information and attribute features. Subsequently, it combines
global structure, local structure, and attribute features in the feature space
to discover community affiliations. Finally, a
modularity maximization method is employed to optimize the communities based on
these three types of information, thereby uncovering the community affiliation
of each node. We conduct experimental comparisons across various graph
datasets, evaluating the proposed algorithm against traditional methods and
state-of-the-art community detection algorithms. The experimental results
demonstrate that our algorithm achieves greater efficiency and accuracy in
terms of both detection speed and effectiveness. The code is available at
https://github.com/wuanghoong/Less-is-More.git.
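As background for the modularity-maximization step, here is a minimal sketch of Newman's modularity on a toy graph (this is only the objective being maximized, not the paper's full algorithm, which also fuses GCN-derived features):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of partition `labels` on adjacency matrix A:
    Q = (1/2m) * sum_ij [A_ij - k_i k_j / (2m)] * delta(c_i, c_j)."""
    m = A.sum() / 2.0                       # number of undirected edges
    k = A.sum(axis=1)                       # node degrees
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / (2 * m)) * same).sum() / (2 * m)

# two triangles joined by a single bridge edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

good = np.array([0, 0, 0, 1, 1, 1])  # one community per triangle
bad = np.array([0, 1, 0, 1, 0, 1])   # communities cut across triangles
print(modularity(A, good) > modularity(A, bad))  # True
```

A modularity-maximizing method searches over such partitions for the one with the highest Q.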
|
2501.12948
|
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via
Reinforcement Learning
|
cs.CL cs.AI cs.LG
|
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and
DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement
learning (RL) without supervised fine-tuning (SFT) as a preliminary step,
demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero
naturally develops numerous powerful and intriguing reasoning behaviors.
However, it encounters challenges such as poor readability and language
mixing. To address these issues and further enhance reasoning performance, we
introduce DeepSeek-R1, which incorporates multi-stage training and cold-start
data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217
on reasoning tasks. To support the research community, we open-source
DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B,
70B) distilled from DeepSeek-R1 based on Qwen and Llama.
|
2501.12954
|
Punctuation patterns in "Finnegans Wake" by James Joyce are largely
translation-invariant
|
cs.CL
|
The complexity characteristics of texts written in natural languages are
significantly related to the rules of punctuation. In particular, the distances
between punctuation marks measured by the number of words quite universally
follow the family of Weibull distributions known from survival analyses.
However, the values of two parameters marking specific forms of these
distributions distinguish specific languages. This is such a strong constraint
that the punctuation distributions of texts translated from the original
language into another adopt quantitative characteristics of the target
language. All these changes take place within Weibull distributions such that
the corresponding hazard functions are always increasing. Recent research
shows that James Joyce's famous "Finnegans Wake" follows such an extreme
distribution from the Weibull family that the corresponding hazard function is
clearly decreasing. At the same time, the distances between sentence-ending
punctuation marks, which determine the variability of sentence length, have an
almost perfect multifractal organization, to an extent so far found nowhere
else in the literature. In the present contribution, based on several
available translations (Dutch, French, German, Polish, Russian) of "Finnegans
Wake", it is shown that the punctuation characteristics of this work remain
largely translation-invariant, contrary to the typical case. These observations
may constitute further evidence that "Finnegans Wake" is a translinguistic work
in this respect as well, in line with Joyce's original intention.
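The quantities involved can be illustrated with a small toy sketch (an assumption-laden illustration, not the authors' pipeline): word counts between consecutive punctuation marks, and the discrete empirical hazard of those distances, whose monotonicity distinguishes the Weibull regimes discussed above:

```python
import re
import numpy as np

def punctuation_distances(text, marks=".,;:!?"):
    """Word counts between consecutive punctuation marks."""
    tokens = re.findall(r"\w+|[" + re.escape(marks) + r"]", text)
    dists, count = [], 0
    for tok in tokens:
        if tok in marks:
            if count > 0:
                dists.append(count)
            count = 0
        else:
            count += 1
    return np.array(dists)

def empirical_hazard(dists):
    """Discrete hazard h(k) = P(distance = k | distance >= k); for
    Weibull-distributed distances this is monotone (up or down)."""
    ks = np.arange(1, dists.max() + 1)
    at_risk = np.array([(dists >= k).sum() for k in ks])
    events = np.array([(dists == k).sum() for k in ks])
    return ks, events / at_risk

text = "riverrun, past Eve and Adam's, from swerve of shore to bend of bay, brings us."
d = punctuation_distances(text)
print(d)  # [1 5 8 2]
```

Note the crude tokenizer splits "Adam's" into two word tokens; a real analysis would use a proper tokenizer and then fit a Weibull to the pooled distances.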
|
2501.12955
|
Multifractal hopscotch in "Hopscotch" by Julio Cortazar
|
cs.CL
|
Punctuation is the main factor introducing correlations in natural language
written texts and it crucially impacts their overall effectiveness,
expressiveness, and readability. Punctuation marks at the end of sentences are
of particular importance as their distribution can determine various complexity
features of written natural language. Here, the sentence length variability
(SLV) time series representing "Hopscotch" by Julio Cortazar are subjected to
quantitative analysis with an attempt to identify their distribution type,
long-memory effects, and potential multiscale patterns. The analyzed novel is
an important and innovative piece of literature whose essential property is
freedom of movement between its building blocks given to a reader by the
author. The statistical consequences of this freedom are closely investigated
in both the original, Spanish version of the novel, and its translations into
English and Polish. Clear evidence of rich multifractality, with a left-sided
asymmetry, is observed in the SLV dynamics of all three language versions, as
well as in the versions with differently ordered chapters.
|
2501.12956
|
GANQ: GPU-Adaptive Non-Uniform Quantization for Large Language Models
|
cs.LG cs.AI math.OC
|
Large Language Models (LLMs) face significant deployment challenges due to
their substantial resource requirements. While low-bit quantized weights can
reduce memory usage and improve inference efficiency, current hardware lacks
native support for mixed-precision General Matrix Multiplication (mpGEMM),
resulting in inefficient dequantization-based implementations. Moreover,
uniform quantization methods often fail to capture weight distributions
adequately, leading to performance degradation. We propose GANQ (GPU-Adaptive
Non-Uniform Quantization), a layer-wise post-training non-uniform quantization
framework optimized for hardware-efficient lookup table-based mpGEMM. GANQ
achieves superior quantization performance by utilizing a training-free,
GPU-adaptive optimization algorithm to efficiently reduce layer-wise
quantization errors. Extensive experiments demonstrate GANQ's ability to reduce
the perplexity gap from the FP16 baseline compared to state-of-the-art methods
for both 3-bit and 4-bit quantization. Furthermore, when deployed on a single
NVIDIA RTX 4090 GPU, GANQ's quantized models achieve up to 2.57$\times$ speedup
over the baseline, advancing memory and inference efficiency in LLM deployment.
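For intuition only, the gain of non-uniform over uniform quantization can be sketched with generic per-row 1-D k-means (Lloyd) codebook learning; this is not GANQ's GPU-adaptive optimizer, just the simplest lookup-table quantizer of the same family:

```python
import numpy as np

def nonuniform_quantize_row(w, bits=3, iters=25):
    """Per-row non-uniform quantization via 1-D Lloyd (k-means):
    learn a 2**bits-entry codebook and index each weight into it."""
    levels = 2 ** bits
    # quantile initialisation lets the codebook track the (typically
    # bell-shaped) weight distribution instead of a uniform grid
    codebook = np.quantile(w, np.linspace(0.0, 1.0, levels))
    for _ in range(iters):
        idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
        for c in range(levels):
            if np.any(idx == c):          # guard against empty clusters
                codebook[c] = w[idx == c].mean()
    idx = np.abs(w[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, idx.astype(np.uint8)

def dequantize(codebook, idx):
    """Lookup-table dequantization: the operation a LUT-based
    mpGEMM kernel performs implicitly during matrix multiply."""
    return codebook[idx]

rng = np.random.default_rng(0)
w = rng.normal(size=256)                  # stand-in weight row
cb, idx = nonuniform_quantize_row(w, bits=3)
err_nonuniform = np.mean((w - dequantize(cb, idx)) ** 2)

# uniform min-max grid with the same number of levels, for comparison
grid = np.linspace(w.min(), w.max(), 2 ** 3)
uidx = np.abs(w[:, None] - grid[None, :]).argmin(axis=1)
err_uniform = np.mean((w - grid[uidx]) ** 2)
print(err_nonuniform < err_uniform)
```

On bell-shaped weights the learned codebook places more levels where the mass is, which is the distributional mismatch that uniform quantizers suffer from.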
|