| id | title | categories | abstract |
|---|---|---|---|
2501.00267 | A low order, torsion deformable spatial beam element based on the
absolute nodal coordinate formulation and Bishop frame | cs.CE | Heretofore, the Serret-Frenet frame has been the ubiquitous choice for
analyzing the elastic deformations of beam elements. It is well known that this
frame is undefined at the inflection points and straight segments of the beam
where its curvature is zero, leading to singularities and errors in their
numerical analysis. In contrast, a lesser-known alternative, the Bishop frame,
avoids these drawbacks of the Serret-Frenet frame and is well-defined
everywhere along the beam centerline. Leveraging the Bishop frame,
in this paper, we propose a new spatial, singularity-free low-order beam
element based on the absolute nodal coordinate formulation for both small and
large deformation applications. This element, named ANCF14, has a constant mass
matrix and can capture longitudinal, transverse (bending) and torsional
deformations. It is a two-noded element with 7 degrees of freedom per node,
which are global nodal coordinates, nodal slopes and their cross-sectional
rotation about the centerline. The newly developed element is tested through
four complex benchmarks. Comparing the ANCF14 results with theoretical and
numerical results provided in other studies confirms the efficiency and
accuracy of the proposed element.
|
2501.00269 | A review of faithfulness metrics for hallucination assessment in Large
Language Models | cs.CL | This review examines the means by which faithfulness has been evaluated
across open-ended summarization, question-answering, and machine translation
tasks. We find that using LLMs as a faithfulness evaluator is commonly the
metric most highly correlated with human judgement. The means by which other
studies have mitigated hallucinations are discussed, with both
retrieval augmented generation (RAG) and prompting framework approaches having
been linked with superior faithfulness, whilst other recommendations for
mitigation are provided. Research into faithfulness is integral to the
continued widespread use of LLMs, as unfaithful responses can pose major risks
to many areas in which LLMs would otherwise be suitable. Furthermore, evaluating
open-ended generation provides a more comprehensive measure of LLM performance
than commonly used multiple-choice benchmarking, which can help in advancing
the trust that can be placed within LLMs.
|
2501.00273 | Echoes in AI: Quantifying Lack of Plot Diversity in LLM Outputs | cs.CL | With rapid advances in large language models (LLMs), there has been an
increasing application of LLMs in creative content ideation and generation. A
critical question emerges: can current LLMs provide ideas that are diverse
enough to truly bolster the collective creativity? We examine two
state-of-the-art LLMs, GPT-4 and LLaMA-3, on story generation and discover that
LLM-generated stories often consist of plot elements that are echoed across a
number of generations. To quantify this phenomenon, we introduce the Sui
Generis score, which estimates how unlikely a plot element is to appear in
alternative storylines generated by the same LLM. Evaluating on 100 short
stories, we find that LLM-generated stories often contain combinations of
idiosyncratic plot elements echoed frequently across generations, while the
original human-written stories are rarely recreated or even echoed in pieces.
Moreover, our human evaluation shows that the ranking of Sui Generis scores
among story segments correlates moderately with human judgment of surprise
level, even though score computation is completely automatic without relying on
human judgment.
|
2501.00274 | LLM-Rubric: A Multidimensional, Calibrated Approach to Automated
Evaluation of Natural Language Texts | cs.CL | This paper introduces a framework for the automated evaluation of natural
language texts. A manually constructed rubric describes how to assess multiple
dimensions of interest. To evaluate a text, a large language model (LLM) is
prompted with each rubric question and produces a distribution over potential
responses. The LLM predictions often fail to agree well with human judges --
indeed, the humans do not fully agree with one another. However, the multiple
LLM distributions can be $\textit{combined}$ to $\textit{predict}$ each human
judge's annotations on all questions, including a summary question that
assesses overall quality or relevance. LLM-Rubric accomplishes this by training
a small feed-forward neural network that includes both judge-specific and
judge-independent parameters. When evaluating dialogue systems in a human-AI
information-seeking task, we find that LLM-Rubric with 9 questions (assessing
dimensions such as naturalness, conciseness, and citation quality) predicts
human judges' assessment of overall user satisfaction, on a scale of 1--4, with
RMS error $< 0.5$, a $2\times$ improvement over the uncalibrated baseline.
|
2501.00277 | Efficient Human-in-the-Loop Active Learning: A Novel Framework for Data
Labeling in AI Systems | stat.ML cs.AI cs.HC cs.LG | Modern AI algorithms require labeled data. In the real world, however, most
data are unlabeled, and labeling them is costly. This is particularly true in
areas requiring special skills, such as the reading of radiology images by
physicians. To use experts' time for data labeling most efficiently, one
promising approach is the human-in-the-loop active learning algorithm. In this
work, we
propose a novel active learning framework with significant potential for
application in modern AI systems. Unlike the traditional active learning
methods, which only focus on determining which data point should be labeled,
our framework also introduces an innovative perspective on incorporating
different query schemes. We propose a model to integrate the information from
different types of queries. Based on this model, our active learning framework
can automatically determine how the next question is queried. We further
develop a data-driven exploration-and-exploitation strategy within our active
learning method, which can be embedded in numerous active learning algorithms.
Through simulations on five real-world datasets, including a highly complex
real image task, our proposed active learning framework exhibits higher
accuracy and lower loss compared to other methods.
|
2501.00281 | String commitment from unstructured noisy channels | cs.IT cs.CR math.IT | Noisy channels are valuable resources for cryptography, enabling
information-theoretically secure protocols for cryptographic primitives like
bit commitment and oblivious transfer. While existing work has primarily
considered memoryless channels, we consider more flexible channel resources
that a dishonest player can configure arbitrarily within some constraints on
their min-entropy. We present a protocol for string commitment over such
channels that is complete, hiding, and binding, and derive its achievable
commitment rate, demonstrating the possibility of string commitment in noisy
channels with a stronger adversarial model. The asymptotic commitment rate
coincides with previous results when the adversarial channels are the same
binary symmetric channel as in the honest case.
|
2501.00282 | ReFormer: Generating Radio Fakes for Data Augmentation | cs.LG eess.SP | We present ReFormer, a generative AI (GAI) model that can efficiently
generate synthetic radio-frequency (RF) data, or RF fakes, statistically
similar to the data it was trained on, or with modified statistics, in order to
augment datasets collected in real-world experiments. For applications like
this, adaptability and scalability are important issues. This is why ReFormer
leverages transformer-based autoregressive generation, trained on learned
discrete representations of RF signals. By using prompts, such a GAI model can
be made to generate data that complies with specific constraints or conditions,
which is particularly useful for training channel estimation and modeling. It may also
leverage the data from a source system to generate training data for a target
system. We show how different transformer architectures and other design
choices affect the quality of generated RF fakes, evaluated using metrics such
as precision and recall, classification accuracy and signal constellation
diagrams.
|
2501.00288 | Solving Partial Differential Equations with Random Feature Models | math.NA cs.LG cs.NA | Machine learning based partial differential equations (PDEs) solvers have
received great attention in recent years. Most progress in this area has been
driven by deep neural networks such as physics-informed neural networks (PINNs)
and kernel methods. In this paper, we introduce a random-feature-based
framework for efficiently solving PDEs. The random feature method was originally proposed
to approximate large-scale kernel machines and can be viewed as a shallow
neural network as well. We provide an error analysis for our proposed method
along with comprehensive numerical results on several PDE benchmarks. In
contrast to the state-of-the-art solvers that face challenges with a large
number of collocation points, our proposed method reduces the computational
complexity. Moreover, the implementation of our method is simple and does not
require additional computational resources. Owing to its theoretical
guarantees and computational advantages, our approach proves efficient for
solving PDEs.
|
2501.00289 | Dual Diffusion for Unified Image Generation and Understanding | cs.CV cs.AI cs.LG | Diffusion models have gained tremendous success in text-to-image generation,
yet still lag behind on visual understanding tasks, an area dominated by
autoregressive vision-language models. We propose a large-scale and fully
end-to-end diffusion model for multi-modal understanding and generation that
significantly improves on existing diffusion-based multimodal models, and is
the first of its kind to support the full suite of vision-language modeling
capabilities. Inspired by the multimodal diffusion transformer (MM-DiT) and
recent advances in discrete diffusion language modeling, we leverage a
cross-modal maximum likelihood estimation framework that simultaneously trains
the conditional likelihoods of both images and text jointly under a single loss
function, which is back-propagated through both branches of the diffusion
transformer. The resulting model is highly flexible and capable of a wide range
of tasks including image generation, captioning, and visual question answering.
Our model attained competitive performance compared to recent unified image
understanding and generation models, demonstrating the potential of multimodal
diffusion modeling as a promising alternative to autoregressive next-token
prediction models.
|
2501.00296 | Predicate Invention from Pixels via Pretrained Vision-Language Models | cs.RO cs.AI cs.CV cs.LG | Our aim is to learn to solve long-horizon decision-making problems in
highly-variable, combinatorially-complex robotics domains given raw sensor
input in the form of images. Previous work has shown that one way to achieve
this aim is to learn a structured abstract transition model in the form of
symbolic predicates and operators, and then plan within this model to solve
novel tasks at test time. However, these learned models do not ground directly
into pixels from just a handful of demonstrations. In this work, we propose to
invent predicates that operate directly over input images by leveraging the
capabilities of pretrained vision-language models (VLMs). Our key idea is that,
given a set of demonstrations, a VLM can be used to propose a set of predicates
that are potentially relevant for decision-making and then to determine the
truth values of these predicates in both the given demonstrations and new image
inputs. We build upon an existing framework for predicate invention, which
generates feature-based predicates operating on object-centric states, to also
generate visual predicates that operate on images. Experimentally, we show that
our approach -- pix2pred -- is able to invent semantically meaningful
predicates that enable generalization to novel, complex, and long-horizon tasks
across two simulated robotic environments.
|
2501.00298 | Enhancing Deployment-Time Predictive Model Robustness for Code Analysis
and Optimization | cs.SE cs.AI | Supervised machine learning techniques have shown promising results in code
analysis and optimization problems. However, a learning-based solution can be
brittle because minor changes in hardware or application workloads -- such as
facing a new CPU architecture or code pattern -- may jeopardize decision
accuracy, ultimately undermining model robustness. We introduce Prom, an
open-source library to enhance the robustness and performance of predictive
models against such changes during deployment. Prom achieves this by using
statistical assessments to identify test samples prone to mispredictions and
using feedback on these samples to improve a deployed model. We showcase Prom
by applying it to 13 representative machine learning models across 5 code
analysis and optimization tasks. Our extensive evaluation demonstrates that
Prom can successfully identify an average of 96% (up to 100%) of
mispredictions. By relabeling up to 5% of the Prom-identified samples through
incremental learning, Prom can help a deployed model achieve a performance
comparable to that attained during its model training phase.
|
2501.00300 | Research on vehicle detection based on improved YOLOv8 network | cs.CV cs.LG | Safe obstacle avoidance in autonomous driving systems hinges on highly
accurate vehicle recognition techniques. However, the variability of real road
environments and the diverse characteristics of vehicles and pedestrians
together pose a serious challenge to improving detection accuracy. To address
these issues, this paper proposes an
improved YOLOv8 vehicle detection method. Specifically, taking the YOLOv8n-seg
model as the base model, firstly, the FasterNet network is used to replace the
backbone network to achieve the purpose of reducing the computational
complexity and memory while improving the detection accuracy and speed;
secondly, the feature enhancement is achieved by adding the attention mechanism
CBAM to the Neck; and lastly, the loss function CIoU is modified to WIoU, which
optimizes the detection box localization while improving the segmentation
accuracy. The results show that the improved model achieves 98.3%, 89.1% and
88.4% detection accuracy for the car, person, and motorcycle classes, and is
compared against the pre-improvement model and YOLOv9 on six metrics,
including precision.
|
2501.00303 | SAM-Aware Graph Prompt Reasoning Network for Cross-Domain Few-Shot
Segmentation | cs.CV cs.LG | The primary challenge of cross-domain few-shot segmentation (CD-FSS) is the
domain disparity between the training and inference phases, which can exist in
either the input data or the target classes. Previous models struggle to learn
feature representations that generalize to various unknown domains from limited
training domain samples. In contrast, the large-scale visual model SAM,
pre-trained on tens of millions of images from various domains and classes,
possesses excellent generalizability. In this work, we propose a SAM-aware
graph prompt reasoning network (GPRN) that fully leverages SAM to guide CD-FSS
feature representation learning and improve prediction accuracy. Specifically,
we propose a SAM-aware prompt initialization module (SPI) to transform the
masks generated by SAM into visual prompts enriched with high-level semantic
information. Since SAM tends to divide an object into many sub-regions, this
may lead to visual prompts representing the same semantic object having
inconsistent or fragmented features. We further propose a graph prompt
reasoning (GPR) module that constructs a graph among visual prompts to reason
about their interrelationships and enable each visual prompt to aggregate
information from similar prompts, thus achieving global semantic consistency.
Subsequently, each visual prompt embeds its semantic information into the
corresponding mask region to assist in feature representation learning. To
refine the segmentation mask during testing, we also design a non-parameter
adaptive point selection module (APS) to select representative point prompts
from query predictions and feed them back to SAM to refine inaccurate
segmentation results. Experiments on four standard CD-FSS datasets demonstrate
that our method establishes new state-of-the-art results. Code:
https://github.com/CVL-hub/GPRN.
|
2501.00305 | diffIRM: A Diffusion-Augmented Invariant Risk Minimization Framework for
Spatiotemporal Prediction over Graphs | cs.LG | Spatiotemporal prediction over graphs (STPG) is challenging, because
real-world data suffers from the Out-of-Distribution (OOD) generalization
problem, where test data follow different distributions from training ones. To
address this issue, Invariant Risk Minimization (IRM) has emerged as a
promising approach for learning invariant representations across different
environments. However, IRM and its variants are originally designed for
Euclidean data such as images, and may not generalize well to graph-structured
data such as spatiotemporal graphs due to spatial correlations in graphs. To
overcome the challenge posed by graph-structured data, existing graph OOD
methods adhere to the principles of invariance existence, or environment
diversity. However, there is little research that combines both principles in
the STPG problem. A combination of the two is crucial for efficiently
distinguishing between invariant features and spurious ones. In this study, we
fill in this research gap and propose a diffusion-augmented invariant risk
minimization (diffIRM) framework that combines these two principles for the
STPG problem. Our diffIRM contains two processes: i) data augmentation and ii)
invariant learning. In the data augmentation process, a causal mask generator
identifies causal features and a graph-based diffusion model acts as an
environment augmentor to generate augmented spatiotemporal graph data. In the
invariant learning process, an invariance penalty is designed using the
augmented data, and then serves as a regularizer for training the
spatiotemporal prediction model. Experiments on three real-world human
mobility datasets, i.e., SafeGraph, PeMS04, and PeMS08, show that our proposed
diffIRM outperforms baselines.
|
2501.00307 | Fast and Interpretable Mixed-Integer Linear Program Solving by Learning
Model Reduction | cs.LG cs.AI | By exploiting the correlation between the structure and the solution of
Mixed-Integer Linear Programming (MILP), Machine Learning (ML) has become a
promising method for solving large-scale MILP problems. Existing ML-based MILP
solvers mainly focus on end-to-end solution learning, which suffers from the
scalability issue due to the high dimensionality of the solution space. Instead
of directly learning the optimal solution, this paper aims to learn a reduced
and equivalent model of the original MILP as an intermediate step. The reduced
model often corresponds to interpretable operations and is much simpler,
enabling us to solve large-scale MILP problems much faster than existing
commercial solvers. However, current approaches rely only on the optimal
reduced model, overlooking the significant preference information of all
reduced models. To address this issue, this paper proposes a preference-based
model reduction learning method, which considers the relative performance
(i.e., objective cost and constraint feasibility) of all reduced models on each
MILP instance as preferences. We also introduce an attention mechanism to
capture and represent preference information, which helps improve the
performance of model reduction learning tasks. Moreover, we propose a SetCover
based pruning method to control the number of reduced models (i.e., labels),
thereby simplifying the learning process. Evaluation on real-world MILP
problems shows that 1) compared to the state-of-the-art model reduction ML
methods, our method obtains a nearly 20% improvement in solution accuracy, and 2)
compared to the commercial solver Gurobi, two to four orders of magnitude
speedups are achieved.
|
2501.00309 | Retrieval-Augmented Generation with Graphs (GraphRAG) | cs.IR cs.CL cs.LG | Retrieval-augmented generation (RAG) is a powerful technique that enhances
downstream task execution by retrieving additional information, such as
knowledge, skills, and tools from external sources. Graph, by its intrinsic
"nodes connected by edges" nature, encodes massive heterogeneous and relational
information, making it a golden resource for RAG in tremendous real-world
applications. As a result, we have recently witnessed increasing attention on
equipping RAG with Graph, i.e., GraphRAG. However, unlike conventional RAG,
where the retriever, generator, and external data sources can be uniformly
designed in the neural-embedding space, the uniqueness of graph-structured
data, such as diverse-formatted and domain-specific relational knowledge, poses
unique and significant challenges when designing GraphRAG for different
domains. Given the broad applicability, the associated design challenges, and
the recent surge in GraphRAG, a systematic and up-to-date survey of its key
concepts and techniques is urgently desired. Following this motivation, we
present a comprehensive and up-to-date survey on GraphRAG. Our survey first
proposes a holistic GraphRAG framework by defining its key components,
including query processor, retriever, organizer, generator, and data source.
Furthermore, recognizing that graphs in different domains exhibit distinct
relational patterns and require dedicated designs, we review GraphRAG
techniques uniquely tailored to each domain. Finally, we discuss research
challenges and brainstorm directions to inspire cross-disciplinary
opportunities. Our survey repository is publicly maintained at
https://github.com/Graph-RAG/GraphRAG/.
|
2501.00310 | Conditional Uncertainty Quantification of Stochastic Dynamical
Structures Considering Measurement Conditions | cs.CE | How to accurately quantify the uncertainty of stochastic dynamical responses
affected by uncertain loads and structural parameters is an important issue in
structural safety and reliability analysis. In this paper, the conditional
uncertainty quantification analysis for the dynamical response of stochastic
structures considering the measurement data with random error is studied in
depth. A method is proposed for extracting from the measurement data the key
measurement condition, which holds the most reference value for the
uncertainty quantification of the response. Considering the key measurement condition and
employing the principle of probability conservation and conditional probability
theory, the quotient-form expressions for the conditional mean, conditional
variance, and conditional probability density function of the stochastic
structural dynamical response are derived and are referred to as the key
conditional quotients (KCQ). A numerical method combining the non-equal
weighted generalized Monte Carlo method, Dirac function smoothing technique,
and online-offline coupled computational strategy is developed for calculating
KCQs. Three linear/nonlinear stochastic dynamical examples are used to verify
that the proposed KCQ method can efficiently and accurately quantify the
uncertainty of the structural response considering measurement conditions. The
examples also compare the traditional non-conditional uncertainty
quantification results with the conditional uncertainty quantification results
given by KCQs, indicating that considering measurement conditions can
significantly reduce the uncertainty of the stochastic dynamical responses,
providing a more refined statistical basis for structural safety and
reliability analysis.
|
2501.00312 | M2I2: Learning Efficient Multi-Agent Communication via Masked State
Modeling and Intention Inference | cs.MA cs.AI | Communication is essential in coordinating the behaviors of multiple agents.
However, existing methods primarily emphasize content, timing, and partners for
information sharing, often neglecting the critical aspect of integrating shared
information. This gap can significantly impact agents' ability to understand
and respond to complex, uncertain interactions, thus affecting overall
communication efficiency. To address this issue, we introduce M2I2, a novel
framework designed to enhance the agents' capabilities to assimilate and
utilize received information effectively. M2I2 equips agents with advanced
capabilities for masked state modeling and joint-action prediction, enriching
their perception of environmental uncertainties and facilitating the
anticipation of teammates' intentions. This approach ensures that agents are
furnished with both comprehensive and relevant information, bolstering more
informed and synergistic behaviors. Moreover, we propose a Dimensional Rational
Network, innovatively trained via a meta-learning paradigm, to identify the
importance of dimensional pieces of information, evaluating their contributions
to decision-making and auxiliary tasks. Then, we implement an importance-based
heuristic for selective information masking and sharing. This strategy
optimizes the efficiency of masked state modeling and the rationale behind
information sharing. We evaluate M2I2 across diverse multi-agent tasks; the
results demonstrate its superior performance, efficiency, and generalization
capabilities over existing state-of-the-art methods in various complex
scenarios.
|
2501.00315 | Temporal Dynamics Decoupling with Inverse Processing for Enhancing Human
Motion Prediction | cs.CV | Exploring the bridge between historical and future motion behaviors remains a
central challenge in human motion prediction. While most existing methods
incorporate a reconstruction task as an auxiliary task into the decoder,
thereby improving the modeling of spatio-temporal dependencies, they overlook
the potential conflicts between reconstruction and prediction tasks. In this
paper, we propose a novel approach: Temporal Decoupling Decoding with Inverse
Processing (\textbf{$TD^2IP$}). Our method strategically separates
reconstruction and prediction decoding processes, employing distinct decoders
to decode the shared motion features into historical or future sequences.
Additionally, inverse processing reverses motion information in the temporal
dimension and reintroduces it into the model, leveraging the bidirectional
temporal correlation of human motion behaviors. By alleviating the conflicts
between reconstruction and prediction tasks and enhancing the association of
historical and future information, \textbf{$TD^2IP$} fosters a deeper
understanding of motion patterns. Extensive experiments demonstrate the
adaptability of our method within existing methods.
|
2501.00316 | MapEval: A Map-Based Evaluation of Geo-Spatial Reasoning in Foundation
Models | cs.CL | Recent advancements in foundation models have enhanced AI systems'
capabilities in autonomous tool usage and reasoning. However, their ability in
location or map-based reasoning - which improves daily life by optimizing
navigation, facilitating resource discovery, and streamlining logistics - has
not been systematically studied. To bridge this gap, we introduce MapEval, a
benchmark designed to assess diverse and complex map-based user queries with
geo-spatial reasoning. MapEval features three task types (textual, API-based,
and visual) that require collecting world information via map tools, processing
heterogeneous geo-spatial contexts (e.g., named entities, travel distances,
user reviews or ratings, images), and compositional reasoning, which all
state-of-the-art foundation models find challenging. Comprising 700 unique
multiple-choice questions about locations across 180 cities and 54 countries,
MapEval evaluates foundation models' ability to handle spatial relationships,
map infographics, travel planning, and navigation challenges. Using MapEval, we
conducted a comprehensive evaluation of 28 prominent foundation models. While
no single model excelled across all tasks, Claude-3.5-Sonnet, GPT-4o, and
Gemini-1.5-Pro achieved competitive performance overall. However, substantial
performance gaps emerged, particularly in MapEval, where agents with
Claude-3.5-Sonnet outperformed GPT-4o and Gemini-1.5-Pro by 16% and 21%,
respectively, and the gaps became even more amplified when compared to
open-source LLMs. Our detailed analyses provide insights into the strengths and
weaknesses of current models, though all models still fall short of human
performance by more than 20% on average, struggling with complex map images and
rigorous geo-spatial reasoning. This gap highlights MapEval's critical role in
advancing general-purpose foundation models with stronger geo-spatial
understanding.
|
2501.00317 | Spatio-Temporal Multi-Subgraph GCN for 3D Human Motion Prediction | cs.CV cs.LG | Human motion prediction (HMP) involves forecasting future human motion based
on historical data. Graph Convolutional Networks (GCNs) have garnered
widespread attention in this field for their proficiency in capturing
relationships among joints in human motion. However, existing GCN-based methods
tend to focus on either temporal-domain or spatial-domain features, or they
combine spatio-temporal features without fully leveraging the complementarity
and cross-dependency of these two features. In this paper, we propose the
Spatial-Temporal Multi-Subgraph Graph Convolutional Network (STMS-GCN) to
capture complex spatio-temporal dependencies in human motion. Specifically, we
decouple the modeling of temporal and spatial dependencies, enabling
cross-domain knowledge transfer at multiple scales through a spatio-temporal
information consistency constraint mechanism. Besides, we utilize multiple
subgraphs to extract richer motion information and enhance the learning
associations of diverse subgraphs through a homogeneous information constraint
mechanism. Extensive experiments on the standard HMP benchmarks demonstrate the
superiority of our method.
|
2501.00318 | Improving Text-based Person Search via Part-level Cross-modal
Correspondence | cs.CV cs.LG | Text-based person search is the task of finding person images that are the
most relevant to the natural language text description given as a query. The main
challenge of this task is a large gap between the target images and text
queries, which makes it difficult to establish correspondence and distinguish
subtle differences across people. To address this challenge, we introduce an
efficient encoder-decoder model that extracts coarse-to-fine embedding vectors
which are semantically aligned across the two modalities without supervision
for the alignment. There is another challenge of learning to capture
fine-grained information with only person IDs as supervision, where similar
body parts of different individuals are considered different due to the lack of
part-level supervision. To tackle this, we propose a novel ranking loss, dubbed
commonality-based margin ranking loss, which quantifies the degree of
commonality of each body part and reflects it during the learning of
fine-grained body part details. As a consequence, it enables our method to
achieve the best records on three public benchmarks.
|
2501.00320 | Autonomous Alignment with Human Value on Altruism through Considerate
Self-imagination and Theory of Mind | cs.AI | With the widespread application of Artificial Intelligence (AI) in human
society, enabling AI to autonomously align with human values has become a
pressing issue to ensure its sustainable development and benefit to humanity.
One of the most important aspects of aligning with human values is the
necessity for agents to autonomously make altruistic, safe, and ethical
decisions, considering and caring for human well-being. Current AI systems
single-mindedly pursue superiority in specific tasks while remaining
indifferent to the surrounding environment and other agents, which has led to
numerous safety risks. In human society, altruistic behavior originates from
the human capacity for empathizing with others, known as Theory of Mind (ToM),
combined with predictive imaginative interaction before acting, which together
produce thoughtful and altruistic behaviors. Inspired by this, we aim to endow
agents with considerate self-imagination and ToM capabilities, driving them
through implicit intrinsic motivations to autonomously align with human altruistic
values. By integrating ToM within the imaginative space, agents keep an eye on
the well-being of other agents in real time, proactively anticipate potential
risks to themselves and others, and make thoughtful altruistic decisions while
limiting negative effects on the environment. The ancient Chinese story of Sima
Guang Smashes the Vat, in which the young Sima Guang smashed a vat to save a
child who had accidentally fallen into it, provides an excellent reference
scenario for this paper. We design an experimental scenario
similar to Sima Guang Smashes the Vat and its variants with different
complexities, which reflects the trade-offs and comprehensive considerations
between self-goals, altruistic rescue, and avoiding negative side effects.
|
2501.00321 | OCRBench v2: An Improved Benchmark for Evaluating Large Multimodal
Models on Visual Text Localization and Reasoning | cs.CV cs.AI | Scoring the Optical Character Recognition (OCR) capabilities of Large
Multimodal Models (LMMs) has witnessed growing interest recently. Existing
benchmarks have highlighted the impressive performance of LMMs in text
recognition; however, their abilities on certain challenging tasks, such as
text localization, handwritten content extraction, and logical reasoning,
remain underexplored. To bridge this gap, we introduce OCRBench v2, a
large-scale bilingual text-centric benchmark with currently the most
comprehensive set of tasks (4x more tasks than the previous multi-scene
benchmark OCRBench), the widest coverage of scenarios (31 diverse scenarios
including street scene, receipt, formula, diagram, and so on), and thorough
evaluation metrics, with a total of 10,000 human-verified question-answering
pairs and a high proportion of difficult samples. After carefully benchmarking
state-of-the-art LMMs on OCRBench v2, we find that 20 out of 22 LMMs score
below 50 (out of a total of 100) and suffer from five types of limitations:
recognition of less frequently encountered text, fine-grained perception,
layout perception, complex element parsing, and logical reasoning. The benchmark and
evaluation scripts are available at
https://github.com/Yuliang-liu/MultimodalOCR.
|
2501.00326 | OVGaussian: Generalizable 3D Gaussian Segmentation with Open
Vocabularies | cs.CV cs.LG | Open-vocabulary scene understanding using 3D Gaussian (3DGS) representations
has garnered considerable attention. However, existing methods mostly lift
knowledge from large 2D vision models into 3DGS on a scene-by-scene basis,
restricting open-vocabulary querying to their training scenes and leaving them
without generalizability to novel scenes. In this work, we
propose \textbf{OVGaussian}, a generalizable \textbf{O}pen-\textbf{V}ocabulary
3D semantic segmentation framework based on the 3D \textbf{Gaussian}
representation. We first construct a large-scale 3D scene dataset based on
3DGS, dubbed \textbf{SegGaussian}, which provides detailed semantic and
instance annotations for both Gaussian points and multi-view images. To promote
semantic generalization across scenes, we introduce Generalizable Semantic
Rasterization (GSR), which leverages a 3D neural network to learn and predict
the semantic property for each 3D Gaussian point, where the semantic property
can be rendered as multi-view consistent 2D semantic maps. Next, we
propose a Cross-modal Consistency Learning (CCL) framework that utilizes
open-vocabulary annotations of 2D images and 3D Gaussians within SegGaussian to
train the 3D neural network capable of open-vocabulary semantic segmentation
across Gaussian-based 3D scenes. Experimental results demonstrate that
OVGaussian significantly outperforms baseline methods, exhibiting robust
cross-scene, cross-domain, and novel-view generalization capabilities. Code and
the SegGaussian dataset will be released.
(https://github.com/runnanchen/OVGaussian).
|
2501.00328 | VoxVietnam: a Large-Scale Multi-Genre Dataset for Vietnamese Speaker
Recognition | cs.SD cs.CL eess.AS | Recent research in speaker recognition aims to address vulnerabilities due to
variations between enrolment and test utterances, particularly in the
multi-genre phenomenon where the utterances are in different speech genres.
Previous resources for Vietnamese speaker recognition are either limited in
size or do not focus on genre diversity, leaving studies in multi-genre effects
unexplored. This paper introduces VoxVietnam, the first multi-genre dataset for
Vietnamese speaker recognition with over 187,000 utterances from 1,406 speakers
and an automated pipeline to construct a dataset on a large scale from public
sources. Our experiments study the challenges posed by the multi-genre
phenomenon to models trained on a single-genre dataset, and demonstrate a
significant increase in performance when VoxVietnam is incorporated into
multi-genre training.
|
2501.00330 | Exploring the Implicit Semantic Ability of Multimodal Large Language
Models: A Pilot Study on Entity Set Expansion | cs.CL cs.AI cs.IR | The rapid development of multimodal large language models (MLLMs) has brought
significant improvements to a wide range of tasks in real-world applications.
However, LLMs still exhibit certain limitations in extracting implicit semantic
information. In this paper, we apply MLLMs to the Multi-modal Entity Set
Expansion (MESE) task, which aims to expand a handful of seed entities with new
entities belonging to the same semantic class, and multi-modal information is
provided with each entity. We explore the capabilities of MLLMs to understand
implicit semantic information at the entity-level granularity through the MESE
task, introducing a listwise ranking method LUSAR that maps local scores to
global rankings. Our LUSAR demonstrates significant improvements in MLLM's
performance on the MESE task, marking the first use of generative MLLM for ESE
tasks and extending the applicability of listwise ranking.
|
2501.00332 | MAIN-RAG: Multi-Agent Filtering Retrieval-Augmented Generation | cs.CL cs.IR | Large Language Models (LLMs) are becoming essential tools for various natural
language processing tasks but often suffer from generating outdated or
incorrect information. Retrieval-Augmented Generation (RAG) addresses this
issue by incorporating external, real-time information retrieval to ground LLM
responses. However, existing RAG systems frequently struggle with the
quality of retrieved documents, as irrelevant or noisy documents degrade
performance, increase computational overhead, and undermine response
reliability. To tackle this problem, we propose Multi-Agent Filtering
Retrieval-Augmented Generation (MAIN-RAG), a training-free RAG framework that
leverages multiple LLM agents to collaboratively filter and score retrieved
documents. Specifically, MAIN-RAG introduces an adaptive filtering mechanism
that dynamically adjusts the relevance filtering threshold based on score
distributions, effectively minimizing noise while maintaining high recall of
relevant documents. The proposed approach leverages inter-agent consensus to
ensure robust document selection without requiring additional training data or
fine-tuning. Experimental results across four QA benchmarks demonstrate that
MAIN-RAG consistently outperforms traditional RAG approaches, achieving a 2-11%
improvement in answer accuracy while reducing the number of irrelevant
retrieved documents. Quantitative analysis further reveals that our approach
achieves superior response consistency and answer accuracy over baseline
methods, offering a competitive and practical alternative to training-based
solutions.
|
2501.00334 | Loss-Aware Curriculum Learning for Chinese Grammatical Error Correction | cs.CL cs.AI | Chinese grammatical error correction (CGEC) aims to detect and correct errors
in the input Chinese sentences. Recently, Pre-trained Language Models (PLMs)
have been employed to improve performance. However, current approaches
ignore that correction difficulty varies across instances and treat
all samples equally, which increases the difficulty of model learning. To address
this problem, we propose a multi-granularity Curriculum Learning (CL)
framework. Specifically, we first calculate the correction difficulty of these
samples and feed them into the model from easy to hard batch by batch. Then
Instance-Level CL is employed to help the model optimize in the appropriate
direction automatically by regulating the loss function. Extensive experimental
results and comprehensive analyses of various datasets prove the effectiveness
of our method.
|
2501.00339 | Rethinking Layer Removal: Preserving Critical Components with Task-Aware
Singular Value Decomposition | cs.CL cs.LG | Layer removal has emerged as a promising approach for compressing large
language models (LLMs) by leveraging redundancy within layers to reduce model
size and accelerate inference. However, this technique often compromises
internal consistency, leading to performance degradation and instability, with
varying impacts across different model architectures. In this work, we propose
Taco-SVD, a task-aware framework that retains task-critical singular value
directions, preserving internal consistency while enabling efficient
compression. Unlike direct layer removal, Taco-SVD preserves task-critical
transformations to mitigate performance degradation. By leveraging
gradient-based attribution methods, Taco-SVD aligns singular values with
downstream task objectives. Extensive evaluations demonstrate that Taco-SVD
outperforms existing methods in perplexity and task performance across
different architectures while ensuring minimal computational overhead.
|
2501.00340 | Dynamic Prompt Adjustment for Multi-Label Class-Incremental Learning | cs.CV cs.LG | Significant advancements have been made in single-label class-incremental
learning (SLCIL), yet the more practical and challenging multi-label
class-incremental learning (MLCIL) remains understudied. Recently,
vision-language models such as CLIP have achieved good results in
classification tasks. However, directly using CLIP to solve the MLCIL issue
can lead to catastrophic forgetting. To tackle this issue, we integrate an
improved data replay mechanism and a prompt loss to curb knowledge forgetting.
Specifically, our model enhances the prompt information to better adapt to
multi-label classification tasks and employs a confidence-based replay
strategy to select representative samples. Moreover, the prompt loss
significantly reduces the model's forgetting of previous knowledge.
Experimental results demonstrate that our method substantially improves the
performance of MLCIL tasks across multiple benchmark datasets, validating its
effectiveness.
|
2501.00342 | SG-Splatting: Accelerating 3D Gaussian Splatting with Spherical
Gaussians | cs.CV | 3D Gaussian Splatting is emerging as a state-of-the-art technique in novel
view synthesis, recognized for its impressive balance between visual quality,
speed, and rendering efficiency. However, reliance on third-degree spherical
harmonics for color representation introduces significant storage demands and
computational overhead, resulting in a large memory footprint and slower
rendering speed. We introduce SG-Splatting with Spherical Gaussians based color
representation, a novel approach to enhance rendering speed and quality in
novel view synthesis. Our method first represents view-dependent color using
Spherical Gaussians instead of third-degree spherical harmonics, which greatly
reduces the number of parameters used for color representation, and
significantly accelerates the rendering process. We then develop an efficient
strategy for organizing multiple Spherical Gaussians, optimizing their
arrangement to achieve a balanced and accurate scene representation. To further
improve rendering quality, we propose a mixed representation that combines
Spherical Gaussians with low-degree spherical harmonics, capturing both high-
and low-frequency color information effectively. SG-Splatting also has
plug-and-play capability, allowing it to be easily integrated into existing
systems. This approach improves computational efficiency and overall visual
fidelity, making it a practical solution for real-time applications.
|
2501.00343 | Chunk-Distilled Language Modeling | cs.CL cs.AI | We introduce Chunk-Distilled Language Modeling (CD-LM), an approach to text
generation that addresses two challenges in current large language models
(LLMs): the inefficiency of token-level generation, and the difficulty of
adapting to new data and knowledge. Our method combines deep network-based LLMs
with a straightforward retrieval module, which allows the generation of
multi-token text chunks at a single decoding step. Our retrieval framework
enables flexible construction of model- or domain-specific datastores, either
leveraging the internal knowledge of existing models, or incorporating expert
insights from human-annotated corpora. This adaptability allows for enhanced
control over the language model's distribution without necessitating additional
training. We present the CD-LM formulation along with performance metrics
demonstrating its ability to improve language model performance and efficiency
across a diverse set of downstream tasks. Code and data will be made publicly
available.
|
2501.00346 | CNC: Cross-modal Normality Constraint for Unsupervised Multi-class
Anomaly Detection | cs.CV cs.AI cs.LG | Existing unsupervised distillation-based methods rely on the differences
between encoded and decoded features to locate abnormal regions in test images.
However, the decoder trained only on normal samples still reconstructs abnormal
patch features well, degrading performance. This issue is particularly
pronounced in unsupervised multi-class anomaly detection tasks. We attribute
this behavior to over-generalization (OG) of the decoder: the significantly
increased diversity of patch patterns in multi-class training enhances the
model generalization on normal patches, but also inadvertently broadens its
generalization to abnormal patches. To mitigate OG, we propose a novel approach
that leverages class-agnostic learnable prompts to capture common textual
normality across various visual patterns, and then apply them to guide the
decoded features towards a normal textual representation, suppressing
over-generalization of the decoder on abnormal patterns. To further improve
performance, we also introduce a gated mixture-of-experts module to specialize
in handling diverse patch patterns and reduce mutual interference between them
in multi-class training. Our method achieves competitive performance on the
MVTec AD and VisA datasets, demonstrating its effectiveness.
|
2501.00348 | Temporal Information Reconstruction and Non-Aligned Residual in Spiking
Neural Networks for Speech Classification | cs.SD cs.AI eess.AS | Most existing models based on spiking neural networks
(SNNs) use a single temporal resolution to deal with speech classification
problems, which prevents them from learning information in the input data at
different temporal scales. Additionally, because the data before and after the
sub-modules of many models have different time lengths, effective residual
connections cannot be applied to optimize the training of these models. To
solve these problems, on the one hand, we reconstruct the temporal dimension
of the audio spectrum and propose a novel method named Temporal Reconstruction
(TR), inspired by the hierarchical way the human brain processes speech. An
SNN model with TR can then learn information in the input data at different
temporal scales and model more comprehensive semantic information from audio
data, since the network operates at multiple temporal resolutions. On the
other hand, we propose the Non-Aligned Residual (NAR) method, which allows
residual connections between two audio representations with different time
lengths. We have conducted extensive experiments on the Spiking Speech
Commands (SSC), Spiking Heidelberg Digits (SHD), and Google Speech Commands
v0.02 (GSC) datasets. The results show a state-of-the-art (SOTA) test
classification accuracy of 81.02\% on SSC among all SNN models, and a SOTA
classification accuracy of 96.04\% on SHD among all models.
|
2501.00352 | PanoSLAM: Panoptic 3D Scene Reconstruction via Gaussian SLAM | cs.CV cs.RO | Understanding geometric, semantic, and instance information in 3D scenes from
sequential video data is essential for applications in robotics and augmented
reality. However, existing Simultaneous Localization and Mapping (SLAM) methods
generally focus on either geometric or semantic reconstruction. In this paper,
we introduce PanoSLAM, the first SLAM system to integrate geometric
reconstruction, 3D semantic segmentation, and 3D instance segmentation within a
unified framework. Our approach builds upon 3D Gaussian Splatting, modified
with several critical components to enable efficient rendering of depth, color,
semantic, and instance information from arbitrary viewpoints. To achieve
panoptic 3D scene reconstruction from sequential RGB-D videos, we propose an
online Spatial-Temporal Lifting (STL) module that transfers 2D panoptic
predictions from vision models into 3D Gaussian representations. This STL
module addresses the challenges of label noise and inconsistencies in 2D
predictions by refining the pseudo labels across multi-view inputs, creating a
coherent 3D representation that enhances segmentation accuracy. Our experiments
show that PanoSLAM outperforms recent semantic SLAM methods in both mapping and
tracking accuracy. For the first time, it achieves panoptic 3D reconstruction
of open-world environments directly from RGB-D video.
(https://github.com/runnanchen/PanoSLAM)
|
2501.00353 | RAG-Instruct: Boosting LLMs with Diverse Retrieval-Augmented
Instructions | cs.CL cs.AI cs.LG | Retrieval-Augmented Generation (RAG) has emerged as a key paradigm for
enhancing large language models (LLMs) by incorporating external knowledge.
However, current RAG methods face two limitations: (1) they cover only limited
RAG scenarios, and (2) they suffer from limited task diversity due to the lack
of a general RAG dataset. To address these limitations, we propose RAG-Instruct, a
general method for synthesizing diverse and high-quality RAG instruction data
based on any source corpus. Our approach leverages (1) five RAG paradigms,
which encompass diverse query-document relationships, and (2) instruction
simulation, which enhances instruction diversity and quality by utilizing the
strengths of existing instruction datasets. Using this method, we construct a
40K instruction dataset from Wikipedia, comprehensively covering diverse RAG
scenarios and tasks. Experiments demonstrate that RAG-Instruct effectively
enhances LLMs' RAG capabilities, achieving strong zero-shot performance and
significantly outperforming various RAG baselines across a diverse set of
tasks. RAG-Instruct is publicly available at
https://github.com/FreedomIntelligence/RAG-Instruct.
|
2501.00356 | A New Dataset and Methodology for Malicious URL Classification | cs.LG cs.CR | Malicious URL (Uniform Resource Locator) classification is a pivotal aspect
of Cybersecurity, offering defense against web-based threats. Despite deep
learning's promise in this area, its advancement is hindered by two main
challenges: the scarcity of comprehensive, open-source datasets and the
limitations of existing models, which either lack real-time capabilities or
exhibit suboptimal performance. In order to address these gaps, we introduce a
novel, multi-class dataset for malicious URL classification, distinguishing
between benign, phishing and malicious URLs, named DeepURLBench. The data has
been rigorously cleansed and structured, providing a superior alternative to
existing datasets. Notably, the multi-class approach enhances the performance
of deep learning models, as compared to a standard binary classification
approach. Additionally, we propose improvements to string-based URL
classifiers, applying these enhancements to URLNet. Key among these is the
integration of DNS-derived features, which enrich the model's capabilities and
lead to notable performance gains while preserving real-time runtime
efficiency, achieving an effective balance for cybersecurity applications.
|
2501.00358 | Embodied VideoAgent: Persistent Memory from Egocentric Videos and
Embodied Sensors Enables Dynamic Scene Understanding | cs.CV | This paper investigates the problem of understanding dynamic 3D scenes from
egocentric observations, a key challenge in robotics and embodied AI. Unlike
prior studies that explored this as long-form video understanding and utilized
egocentric video only, we instead propose an LLM-based agent, Embodied
VideoAgent, which constructs scene memory from both egocentric video and
embodied sensory inputs (e.g. depth and pose sensing). We further introduce a
VLM-based approach to automatically update the memory when actions or
activities over objects are perceived. Embodied VideoAgent attains significant
advantages over counterparts in challenging reasoning and planning tasks in 3D
scenes, achieving gains of 4.9% on Ego4D-VQ3D, 5.8% on OpenEQA, and 11.7% on
EnvQA. We have also demonstrated its potential in various embodied AI tasks
including generating embodied interactions and perception for robot
manipulation. The code and demo will be made public.
|
2501.00360 | A Novel Shape Guided Transformer Network for Instance Segmentation in
Remote Sensing Images | cs.CV cs.LG | Instance segmentation performance in remote sensing images (RSIs) is
significantly affected by two issues: how to extract accurate boundaries of
objects from remote imaging through the dynamic atmosphere, and how to
integrate the mutual information of related object instances scattered over a
vast spatial region. In this study, we propose a novel Shape Guided Transformer
Network (SGTN) to accurately extract objects at the instance level. Inspired by
the global contextual modeling capacity of the self-attention mechanism, we
propose an effective transformer encoder termed LSwin, which incorporates
vertical and horizontal 1D global self-attention mechanisms to obtain better
global-perception capacity for RSIs than the popular local-shifted-window based
Swin Transformer. To achieve accurate instance mask segmentation, we introduce
a shape guidance module (SGM) to emphasize the object boundary and shape
information. The combination of SGM, which emphasizes the local detail
information, and LSwin, which focuses on the global context relationships,
achieves excellent RSI instance segmentation. Their effectiveness was validated
through comprehensive ablation experiments. In particular, LSwin proves
superior to the popular ResNet and Swin Transformer encoders at the same level
of efficiency. Compared to other instance segmentation methods, our SGTN achieves
the highest average precision (AP) scores on two single-class public datasets
(WHU dataset and BITCC dataset) and a multi-class public dataset (NWPU VHR-10
dataset). Code will be available at http://gpcv.whu.edu.cn/data/.
|
2501.00364 | $\texttt{FORM}$: Learning Expressive and Transferable First-Order Logic
Reward Machines | cs.AI cs.FL cs.LO cs.SC | Reward machines (RMs) are an effective approach for addressing non-Markovian
rewards in reinforcement learning (RL) through finite-state machines.
Traditional RMs, which label edges with propositional logic formulae, inherit
the limited expressivity of propositional logic. This limitation hinders the
learnability and transferability of RMs since complex tasks will require
numerous states and edges. To overcome these challenges, we propose First-Order
Reward Machines ($\texttt{FORM}$s), which use first-order logic to label edges,
resulting in more compact and transferable RMs. We introduce a novel method for
$\textbf{learning}$ $\texttt{FORM}$s and a multi-agent formulation for
$\textbf{exploiting}$ them and facilitating their transferability, where multiple
agents collaboratively learn policies for a shared $\texttt{FORM}$. Our
experimental results demonstrate the scalability of $\texttt{FORM}$s with
respect to traditional RMs. Specifically, we show that $\texttt{FORM}$s can be
effectively learnt for tasks where traditional RM learning approaches fail. We
also show significant improvements in learning speed and task transferability
thanks to the multi-agent learning framework and the abstraction provided by
the first-order language.
|
2501.00365 | Low-Rank Adaptation for Foundation Models: A Comprehensive Review | cs.LG cs.AI | The rapid advancement of foundation models (large-scale neural networks
trained on diverse, extensive datasets) has revolutionized artificial intelligence,
enabling unprecedented advancements across domains such as natural language
processing, computer vision, and scientific discovery. However, the substantial
parameter count of these models, often reaching billions or trillions, poses
significant challenges in adapting them to specific downstream tasks. Low-Rank
Adaptation (LoRA) has emerged as a highly promising approach for mitigating
these challenges, offering a parameter-efficient mechanism to fine-tune
foundation models with minimal computational overhead. This survey provides the
first comprehensive review of LoRA techniques beyond large language models to
general foundation models, including recent technical foundations, emerging
frontiers, and applications of low-rank adaptation across multiple domains.
Finally, this survey discusses key challenges and future research directions in
theoretical understanding, scalability, and robustness. This survey serves as a
valuable resource for researchers and practitioners working with efficient
foundation model adaptation.
|
2501.00367 | Who Gets Recommended? Investigating Gender, Race, and Country
Disparities in Paper Recommendations from Large Language Models | cs.IR cs.CY cs.DL | This paper investigates the performance of several representative large
models in the tasks of literature recommendation and explores potential biases
in research exposure. The results indicate that not only does LLMs' overall
recommendation accuracy remain limited, but the models also tend to recommend
literature with greater citation counts, later publication dates, and larger
author teams. Yet, in scholar recommendation tasks, there is no evidence that
LLMs disproportionately recommend male, white, or developed-country authors,
contrasting with patterns of known human biases.
|
2501.00368 | Design Optimizer for Soft Growing Robot Manipulators in
Three-Dimensional Environments | cs.RO cs.AI cs.NE | Soft growing robots are novel devices that mimic plant-like growth for
navigation in cluttered or dangerous environments. Their ability to adapt to
surroundings, combined with advancements in actuation and manufacturing
technologies, allows them to perform specialized manipulation tasks. This work
presents an approach for design optimization of soft growing robots;
specifically, the three-dimensional extension of the optimizer designed for
planar manipulators. This tool is intended to be used by engineers and robot
enthusiasts before manufacturing their robot: it suggests the optimal size of
the robot for solving a specific task. The design process models a
multi-objective optimization problem to refine a soft manipulator's kinematic
chain. Thanks to the novel Rank Partitioning algorithm integrated into
Evolutionary Computation (EC) algorithms, this method achieves high precision
in reaching targets and is efficient in resource usage. Results show
significantly high performance in solving three-dimensional tasks, while
comparative experiments indicate that the optimizer produces robust output when
tested with different EC algorithms, particularly genetic algorithms.
|
2501.00371 | Structured Codes for Distributed Matrix Multiplication | cs.IT math.IT | Our work addresses the well-known open problem of distributed computing of
bilinear functions of two correlated sources ${\bf A}$ and ${\bf B}$. In a
setting with two nodes, with the first node having access to ${\bf A}$ and the
second to ${\bf B}$, we establish bounds on the optimal sum-rate that allows a
receiver to compute an important class of non-linear functions, and in
particular bilinear functions, including dot products $\langle {\bf A},{\bf
B}\rangle$, and general matrix products ${\bf A}^{\intercal}{\bf B}$ over
finite fields. The bounds are tight for large field sizes, in which case we
can derive the exact fundamental performance limits for all problem dimensions
and a large class of sources. Our achievability scheme involves the design of
non-linear transformations of ${\bf A}$ and ${\bf B}$, which are carefully
calibrated to work synergistically with the structured linear encoding scheme
by K\"orner and Marton. The subsequent converse derived here, calibrates the
Han-Kobayashi approach to yield a relatively tight converse on the sum rate. We
also demonstrate unbounded compression gains over Slepian-Wolf coding,
depending on the source correlations. In the end, our work derives fundamental
limits for distributed computing of a crucial class of functions, succinctly
capturing the computation structures and source correlations.
Our findings are subsequently applied to the practical
master-workers-receiver framework, where each of $N$ distributed workers has a
bounded memory reflecting a bounded computational capability. By combining the
above scheme with the polynomial code framework, we design novel structured
polynomial codes for distributed matrix multiplication, and show that our codes
can surpass the performance of the existing state of the art, while also adapting
these new codes to support chain matrix multiplications and
information-theoretically secure computations.
|
2501.00375 | Token Pruning for Caching Better: 9 Times Acceleration on Stable
Diffusion for Free | cs.CV cs.LG | Stable Diffusion has achieved remarkable success in the field of
text-to-image generation, with its powerful generative capabilities and diverse
generation results making a lasting impact. However, its iterative denoising
introduces high computational costs and slows generation speed, limiting
broader adoption. The community has made numerous efforts to reduce this
computational burden, with methods like feature caching attracting attention
due to their effectiveness and simplicity. Nonetheless, simply reusing features
computed at previous timesteps causes the features across adjacent timesteps to
become similar, reducing the dynamics of features over time and ultimately
compromising the quality of generated images. In this paper, we introduce a
dynamics-aware token pruning (DaTo) approach that addresses the limitations of
feature caching. DaTo selectively prunes tokens with lower dynamics, allowing
only high-dynamic tokens to participate in self-attention layers, thereby
extending feature dynamics across timesteps. DaTo combines feature caching with
token pruning in a training-free manner, achieving both temporal and token-wise
information reuse. Applied to Stable Diffusion on ImageNet, our approach
delivered a 9$\times$ speedup while reducing FID by 0.33, indicating enhanced
image quality. On COCO-30k, we observed a 7$\times$ acceleration coupled
with a notable FID reduction of 2.17.
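The dynamics-based token selection described above can be sketched as follows; the scoring function (per-token feature change between adjacent timesteps) and the `keep_ratio` value are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def prune_low_dynamics_tokens(feat_prev, feat_curr, keep_ratio=0.25):
    # Per-token dynamics score: how much each token's features changed
    # between two adjacent denoising timesteps.
    dynamics = np.linalg.norm(feat_curr - feat_prev, axis=-1)   # (n_tokens,)
    k = max(1, int(keep_ratio * dynamics.shape[0]))
    keep = np.sort(np.argsort(dynamics)[-k:])   # indices of high-dynamic tokens
    return keep, feat_curr[keep]

rng = np.random.default_rng(0)
prev = rng.normal(size=(16, 8))                 # 16 tokens, 8-dim features
curr = prev.copy()
curr[[3, 7, 11, 12]] += 1.0                     # only four tokens truly change
idx, kept = prune_low_dynamics_tokens(prev, curr)
print(idx)                                      # → [ 3  7 11 12]
```

Only the kept tokens would then enter the self-attention layers, while the cached features are reused for the rest.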
|
2501.00378 | STARFormer: A Novel Spatio-Temporal Aggregation Reorganization
Transformer of FMRI for Brain Disorder Diagnosis | eess.IV cs.CV cs.LG | Many existing methods that use functional magnetic resonance imaging (fMRI)
to classify brain disorders, such as autism spectrum disorder (ASD) and attention
deficit hyperactivity disorder (ADHD), often overlook the integration of
spatial and temporal dependencies of the blood oxygen level-dependent (BOLD)
signals, which may lead to inaccurate or imprecise classification results. To
solve this problem, we propose a Spatio-Temporal Aggregation Reorganization
Transformer (STARFormer) that effectively captures both spatial and temporal
features of BOLD signals by incorporating three key modules. The region of
interest (ROI) spatial structure analysis module uses eigenvector centrality
(EC) to reorganize brain regions based on effective connectivity, highlighting
critical spatial relationships relevant to the brain disorder. The temporal
feature reorganization module systematically segments the time series into
equal-dimensional window tokens and captures multiscale features through
variable window and cross-window attention. The spatio-temporal feature fusion
module employs a parallel transformer architecture with dedicated temporal and
spatial branches to extract integrated features. The proposed STARFormer has
been rigorously evaluated on two publicly available datasets for the
classification of ASD and ADHD. The experimental results confirm that the
STARFormer achieves state-of-the-art performance across multiple evaluation
metrics, providing a more accurate and reliable tool for the diagnosis of brain
disorders and biomedical research. The code will be available at:
https://github.com/NZWANG/STARFormer.
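The eigenvector-centrality (EC) reordering used by the ROI module can be illustrated with a plain power iteration; the toy connectivity matrix below is a hypothetical example, not real fMRI data:

```python
import numpy as np

def eigenvector_centrality(conn, iters=200, tol=1e-10):
    """Power iteration for the leading eigenvector of a nonnegative
    connectivity matrix. Adding the identity shifts the spectrum so the
    iteration cannot oscillate on bipartite graphs."""
    n = conn.shape[0]
    M = conn + np.eye(n)
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return w

# Toy connectivity between 4 ROIs: ROI 0 is hub-like.
conn = np.array([[0., 1., 1., 1.],
                 [1., 0., 0., 0.],
                 [1., 0., 0., 0.],
                 [1., 0., 0., 0.]])
ec = eigenvector_centrality(conn)
order = np.argsort(-ec)      # reorder ROIs by centrality, hub first
print(order[0])              # → 0
```

Reordering the BOLD time series by `order` would place strongly connected regions adjacently before the transformer stages.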
|
2501.00379 | Federated Dropout: Convergence Analysis and Resource Allocation | cs.LG cs.IT math.IT | Federated Dropout is an efficient technique to overcome both communication
and computation bottlenecks for deploying federated learning at the network
edge. In each training round, an edge device only needs to update and transmit
a sub-model, which is generated by the typical method of dropout in deep
learning, and thus effectively reduces the per-round latency.
learning. However, the theoretical convergence analysis for Federated
Dropout is still lacking in the literature, particularly regarding the
quantitative influence of the dropout rate on convergence. To address this issue,
by using the Taylor expansion method, we mathematically show that the gradient
variance increases with a scaling factor of $\gamma/(1-\gamma)$, with $\gamma
\in [0, \theta)$ denoting the dropout rate and $\theta$ being the maximum
dropout rate ensuring the loss function reduction. Based on the above
approximation, we provide the convergence analysis for Federated Dropout.
Specifically, it is shown that a larger dropout rate of each device leads to a
slower convergence rate. This provides a theoretical foundation for reducing
the convergence latency by making a tradeoff between the per-round latency and
the overall rounds till convergence. Moreover, a low-complexity algorithm is
proposed to jointly optimize the dropout rate and the bandwidth allocation for
minimizing the loss function in all rounds under a given per-round latency and
limited network resources. Finally, numerical results are provided to verify
the effectiveness of the proposed algorithm.
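The $\gamma/(1-\gamma)$ variance scaling can be checked with a small Monte Carlo simulation of coordinate-wise dropout with unbiased rescaling; this is a simplified model of the sub-model update, not the paper's full analysis:

```python
import numpy as np

def dropout_gradient(g, gamma, rng):
    """Drop each coordinate with probability gamma and rescale survivors
    by 1/(1-gamma) so the estimate stays unbiased."""
    mask = rng.random(g.shape) >= gamma
    return mask * g / (1.0 - gamma)

rng = np.random.default_rng(42)
g = np.full(100_000, 2.0)        # one gradient coordinate, sampled repeatedly
gamma = 0.3                      # dropout rate

samples = dropout_gradient(g, gamma, rng)
empirical_var = samples.var()
predicted_var = g[0] ** 2 * gamma / (1.0 - gamma)   # = 4 * 0.3/0.7 ≈ 1.714
print(empirical_var, predicted_var)
```

The empirical variance matches the predicted scaling factor, consistent with the claim that a larger dropout rate inflates gradient variance and slows convergence.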
|
2501.00381 | Toward Information Theoretic Active Inverse Reinforcement Learning | cs.LG stat.ML | As AI systems become increasingly autonomous, aligning their decision-making
to human preferences is essential. In domains like autonomous driving or
robotics, it is impossible to write down the reward function representing these
preferences by hand. Inverse reinforcement learning (IRL) offers a promising
approach to infer the unknown reward from demonstrations. However, obtaining
human demonstrations can be costly. Active IRL addresses this challenge by
strategically selecting the most informative scenarios for human demonstration,
reducing the amount of required human effort. Where most prior work allowed
querying the human for an action at one state at a time, we motivate and
analyse scenarios where we collect longer trajectories. We provide an
information-theoretic acquisition function, propose an efficient approximation
scheme, and illustrate its performance through a set of gridworld experiments
as groundwork for future work expanding to more general settings.
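In the discrete case, an information-theoretic acquisition of this kind reduces to choosing the query with maximal mutual information between the reward hypothesis and the human's demonstration. A minimal sketch with made-up likelihoods (not the paper's acquisition function):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_info_gain(prior, likelihood):
    """likelihood[h, a]: probability of demonstration a under reward
    hypothesis h. Returns I(H; A) = H(prior) - E_a[H(posterior)]."""
    p_a = prior @ likelihood                 # marginal over demonstrations
    gain = entropy(prior)
    for a in range(likelihood.shape[1]):
        post = prior * likelihood[:, a]
        post /= post.sum()
        gain -= p_a[a] * entropy(post)
    return gain

prior = np.array([0.5, 0.5])                 # two reward hypotheses
# Query 1: both hypotheses predict the same demo -> uninformative.
q1 = np.array([[0.5, 0.5], [0.5, 0.5]])
# Query 2: hypotheses predict different demos -> informative.
q2 = np.array([[0.9, 0.1], [0.1, 0.9]])
g1, g2 = expected_info_gain(prior, q1), expected_info_gain(prior, q2)
print(g1, g2)
```

The active learner would query the scenario maximizing this gain, here query 2.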
|
2501.00382 | Adventures in Demand Analysis Using AI | econ.GN cs.AI q-fin.EC stat.AP stat.ML | This paper advances empirical demand analysis by integrating multimodal
product representations derived from artificial intelligence (AI). Using a
detailed dataset of toy cars on \textit{Amazon.com}, we combine text
descriptions, images, and tabular covariates to represent each product using
transformer-based embedding models. These embeddings capture nuanced
attributes, such as quality, branding, and visual characteristics, that
traditional methods often struggle to summarize. Moreover, we fine-tune these
embeddings for causal inference tasks. We show that the resulting embeddings
substantially improve the predictive accuracy of sales ranks and prices and
that they lead to more credible causal estimates of price elasticity. Notably,
we uncover strong heterogeneity in price elasticity driven by these
product-specific features. Our findings illustrate that AI-driven
representations can enrich and modernize empirical demand analysis. The
insights generated may also prove valuable for applied causal inference more
broadly.
|
2501.00383 | Proactive Conversational Agents with Inner Thoughts | cs.HC cs.AI | One of the long-standing aspirations in conversational AI is to enable it to
autonomously take the initiative in conversations, i.e., to be proactive. This is
especially challenging for multi-party conversations. Prior NLP research
focused mainly on predicting the next speaker from contexts like preceding
conversations. In this paper, we demonstrate the limitations of such methods
and rethink what it means for AI to be proactive in multi-party, human-AI
conversations. We propose that just like humans, rather than merely reacting to
turn-taking cues, a proactive AI formulates its own inner thoughts during a
conversation, and seeks the right moment to contribute. Through a formative
study with 24 participants and inspiration from linguistics and cognitive
psychology, we introduce the Inner Thoughts framework. Our framework equips AI
with a continuous, covert train of thoughts in parallel to the overt
communication process, which enables it to proactively engage by modeling its
intrinsic motivation to express these thoughts. We instantiated this framework
into two real-time systems: an AI playground web app and a chatbot. Through a
technical evaluation and user studies with human participants, our framework
significantly surpasses existing baselines on aspects like anthropomorphism,
coherence, intelligence, and turn-taking appropriateness.
|
2501.00384 | S-Diff: An Anisotropic Diffusion Model for Collaborative Filtering in
Spectral Domain | cs.IR | Recovering user preferences from user-item interaction matrices is a key
challenge in recommender systems. While diffusion models can sample and
reconstruct preferences from latent distributions, they often fail to capture
similar users' collective preferences effectively. Additionally, latent
variables degrade into pure Gaussian noise during the forward process, lowering
the signal-to-noise ratio, which in turn degrades performance. To address this,
we propose S-Diff, inspired by graph-based collaborative filtering, to better
utilize low-frequency components in the graph spectral domain. S-Diff maps user
interaction vectors into the spectral domain and parameterizes diffusion noise
to align with graph frequency. This anisotropic diffusion retains significant
low-frequency components, preserving a high signal-to-noise ratio. S-Diff
further employs a conditional denoising network to encode user interactions,
recovering true preferences from noisy data. This method achieves strong
results across multiple datasets.
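The anisotropic, frequency-aligned forward noise can be sketched on a toy item graph; the linear schedule `sigma` is an illustrative choice, not the paper's parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy item-item adjacency and its symmetric normalized Laplacian.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(1)
L = np.eye(4) - A / np.sqrt(np.outer(d, d))
eigval, U = np.linalg.eigh(L)       # eigenvalues ascending: 0 = smoothest

x = np.array([1., 1., 1., 0.])      # a user's interaction vector
x_hat = U.T @ x                     # to the graph spectral domain

# Anisotropic forward step: noise std grows with graph frequency, so
# low-frequency (collaborative) components keep a high SNR.
sigma = 0.1 + 0.5 * eigval / eigval.max()
noisy_hat = x_hat + sigma * rng.normal(size=4)
noisy = U @ noisy_hat               # back to the vertex domain
print(sigma)
```

A conditional denoiser would then be trained to invert this spectral-domain corruption.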
|
2501.00390 | Impossibility of Self-Organized Aggregation without Computation | cs.RO cs.CG cs.DC cs.MA cs.SY eess.SY | In their seminal work, Gauci et al. (2014) studied the fundamental task of
aggregation, wherein multiple robots need to gather without an a priori
agreed-upon meeting location, using minimal hardware. That paper considered
differential-drive robots that are memoryless and unable to compute. Moreover,
the robots cannot communicate with one another and are only equipped with a
simple sensor that determines whether another robot is directly in front of
them. Despite those severe limitations, Gauci et al. introduced a controller
and proved mathematically that it aggregates a system of two robots for any
initial state. Unfortunately, for larger systems, the same controller
aggregates empirically in many cases but not all. Thus, the question of whether
a controller exists that aggregates for any number of robots remains open. In
this paper, we show that no such controller exists by investigating the
geometric structure of controllers. In addition, we disprove the aggregation
proof of the paper above for two robots and present an alternative controller
alongside a simple and rigorous aggregation proof.
|
2501.00391 | Trajectories of Change: Approaches for Tracking Knowledge Evolution | cs.CL physics.hist-ph | We explore local vs. global evolution of knowledge systems through the
framework of socio-epistemic networks (SEN), applying two complementary methods
to a corpus of scientific texts. The framework comprises three interconnected
layers-social, semiotic (material), and semantic-proposing a multilayered
approach to understanding structural developments of knowledge. To analyse
diachronic changes on the semantic layer, we first use information-theoretic
measures based on relative entropy to detect semantic shifts, assess their
significance, and identify key driving features. Second, variations in document
embedding densities reveal changes in semantic neighbourhoods, tracking how
concentrations of similar documents increase, remain stable, or disperse. This
enables us to trace document trajectories based on content (topics) or metadata
(authorship, institution). Case studies of Joseph Silk and Hans-J\"urgen Treder
illustrate how individual scholar's work aligns with broader disciplinary
shifts in general relativity and gravitation research, demonstrating the
applications, limitations, and further potential of this approach.
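A minimal version of a relative-entropy shift measure, using smoothed unigram distributions; the features and corpora in the study are richer, so this is only a sketch:

```python
import numpy as np
from collections import Counter

def relative_entropy(corpus_a, corpus_b, alpha=0.01):
    """KL(P_a || P_b) over additively smoothed word distributions."""
    vocab = sorted(set(corpus_a) | set(corpus_b))
    ca, cb = Counter(corpus_a), Counter(corpus_b)
    pa = np.array([ca[w] + alpha for w in vocab], float)
    pb = np.array([cb[w] + alpha for w in vocab], float)
    pa /= pa.sum()
    pb /= pb.sum()
    return float((pa * np.log(pa / pb)).sum())

before = "gravity field equations static metric".split()
after_ = "gravity waves detection interferometer signal".split()
same   = "gravity field equations static metric".split()
d_shift = relative_entropy(before, after_)
d_same  = relative_entropy(before, same)
print(d_shift, d_same)
```

A large divergence between period-separated corpora would flag a candidate semantic shift, whose significance can then be assessed against a baseline.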
|
2501.00397 | Efficient Relational Context Perception for Knowledge Graph Completion | cs.LG cs.AI cs.CL | Knowledge Graphs (KGs) provide a structured representation of knowledge but
often suffer from challenges of incompleteness. To address this, link
prediction or knowledge graph completion (KGC) aims to infer missing new facts
based on existing facts in KGs. Previous knowledge graph embedding models are
limited in their ability to capture expressive features, especially when
compared to deeper, multi-layer models. These approaches also assign a single
static embedding to each entity and relation, disregarding the fact that
entities and relations can exhibit different behaviors in varying graph
contexts. Due to the complex context around a fact triple in a KG, existing
methods have to leverage a complex non-linear context encoder, such as a
transformer, to project entities and relations into low-dimensional
representations, resulting in high computation cost. To overcome these
limitations, we propose the Triple Receptance Perception (TRP) architecture to
model sequential information, enabling the learning of dynamic contexts of
entities and relations. Then we use
tensor decomposition to calculate triple scores, providing robust relational
decoding capabilities. This integration allows for more expressive
representations. Experiments on benchmark datasets such as YAGO3-10, UMLS,
FB15k, and FB13 in link prediction and triple classification tasks demonstrate
that our method performs better than several state-of-the-art models, proving
the effectiveness of the integration.
|
2501.00398 | TSPE: Task-Specific Prompt Ensemble for Improved Zero-Shot Audio
Classification | cs.SD cs.AI cs.CL cs.LG eess.AS | Audio-language models (ALMs) excel in zero-shot audio classification, a task
where models classify previously unseen audio clips at test time by leveraging
descriptive natural language prompts. We introduce TSPE (Task-Specific Prompt
Ensemble), a simple, training-free hard prompting method that boosts ALMs'
zero-shot performance by customizing prompts for diverse audio classification
tasks. Rather than using generic template-based prompts like "Sound of a car"
we generate context-rich prompts, such as "Sound of a car coming from a
tunnel". Specifically, we leverage label information to identify suitable sound
attributes, such as "loud" and "feeble", and appropriate sound sources, such as
"tunnel" and "street" and incorporate this information into the prompts used by
Audio-Language Models (ALMs) for audio classification. Further, to enhance
audio-text alignment, we perform prompt ensemble across TSPE-generated
task-specific prompts. When evaluated on 12 diverse audio classification
datasets, TSPE improves performance across ALMs by showing an absolute
improvement of 1.23-16.36% over vanilla zero-shot evaluation.
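The prompt-enrichment step can be sketched as simple template filling; the attribute and source lists below are illustrative, not the paper's actual ones:

```python
def task_specific_prompts(label, attributes, sources):
    """Build context-rich prompts for one audio label from sound
    attributes (e.g. 'loud') and sound sources (e.g. 'tunnel')."""
    prompts = [f"Sound of a {label}"]                            # generic fallback
    prompts += [f"Sound of a {a} {label}" for a in attributes]
    prompts += [f"Sound of a {label} coming from a {s}" for s in sources]
    return prompts

prompts = task_specific_prompts("car", ["loud", "feeble"], ["tunnel", "street"])
for p in prompts:
    print(p)
```

In TSPE, the ALM text embeddings of these prompts would then be averaged (prompt ensemble) before being matched against the audio embedding.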
|
2501.00399 | Movable Superdirective Pairs: A Phase Shifter-Free Approach to mmWave
Communications | cs.IT eess.SP math.IT | In this letter, we propose a novel Movable Superdirective Pairs (MSP)
approach that combines movable antennas with superdirective pair arrays to
enhance the performance of millimeter-wave (mmWave) communications on the user
side. By controlling the rotation angles and positions of superdirective
antenna pairs, the proposed MSP approach maximizes the received signal-to-noise
ratio (SNR) of multipath signals without relying on phase shifters or
attenuators. This approach addresses the limitations of traditional
superdirective antennas, which are typically restricted to the endfire
direction and suffer from reduced scanning bandwidth and increased complexity.
An efficient algorithm based on alternating optimization and the gradient
projection method is developed to solve the non-convex optimization problem of
antennas' joint rotation and positioning. Simulation results demonstrate that the
MSP approach achieves significant performance gains over fixed-position array
(FPA) employing Maximum Ratio Combining (MRC), while reducing system complexity
and hardware costs.
|
2501.00417 | PureRank: A Parameter-Free Recursive Importance Measure for Network
Nodes | cs.SI | PageRank, widely used for network analysis in various fields, is a form of
Katz centrality based on the recursive definition of importance (RDI). However,
PageRank has a free parameter known as the damping factor, whose recommended
value is 0.85, although its validity has not been guaranteed theoretically or
rationally. To solve this problem, we propose PureRank, a new parameter-free
RDI-based importance measure. The term ``pure'' in PureRank denotes the purity
of its parameter-free nature, which ensures the uniqueness of importance scores
and eliminates subjective and empirical adjustments. PureRank also offers
computational advantages over PageRank, such as improved parallelizability and
scalability, due to the use of strongly connected component decomposition in
its definition. Furthermore, we introduce the concept of splitting for networks
with both positively and negatively weighted links and extend PureRank to such
general networks, providing a more versatile tool for network analysis.
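The damping-factor sensitivity that motivates a parameter-free measure can be seen in a plain PageRank power iteration; the graph is a toy example:

```python
import numpy as np

def pagerank(A, d, iters=100):
    """Power-iteration PageRank with damping factor d; assumes every
    node has at least one out-link."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    r = np.ones(n) / n
    for _ in range(iters):
        r = (1 - d) / n + d * (r @ P)
    return r

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
r85, r50 = pagerank(A, 0.85), pagerank(A, 0.50)
print(np.round(r85, 3), np.round(r50, 3))
```

The two score vectors differ, so any choice of damping factor injects a subjective degree of freedom into the resulting importance scores.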
|
2501.00418 | Generalizing Trust: Weak-to-Strong Trustworthiness in Language Models | cs.LG cs.AI | The rapid proliferation of generative AI, especially large language models,
has led to their integration into a variety of applications. A key phenomenon
known as weak-to-strong generalization - where a strong model trained on a weak
model's outputs surpasses the weak model in task performance - has gained
significant attention. Yet, whether critical trustworthiness properties such as
robustness, fairness, and privacy can generalize similarly remains an open
question. In this work, we study this question by examining if a stronger model
can inherit trustworthiness properties when fine-tuned on a weaker model's
outputs, a process we term weak-to-strong trustworthiness generalization. To
address this, we introduce two foundational training strategies: 1) Weak
Trustworthiness Finetuning (Weak TFT), which leverages trustworthiness
regularization during the fine-tuning of the weak model, and 2) Weak and
Weak-to-Strong Trustworthiness Finetuning (Weak+WTS TFT), which extends
regularization to both weak and strong models. Our experimental evaluation on
real-world datasets reveals that while some trustworthiness properties, such as
fairness, adversarial, and OOD robustness, show significant improvement in
transfer when both models were regularized, others like privacy do not exhibit
signs of weak-to-strong trustworthiness. As the first study to explore
trustworthiness generalization via weak-to-strong generalization, our work
provides valuable insights into the potential and limitations of weak-to-strong
generalization.
|
2501.00420 | KAE: Kolmogorov-Arnold Auto-Encoder for Representation Learning | cs.LG | The Kolmogorov-Arnold Network (KAN) has recently gained attention as an
alternative to traditional multi-layer perceptrons (MLPs), offering improved
accuracy and interpretability by employing learnable activation functions on
edges. In this paper, we introduce the Kolmogorov-Arnold Auto-Encoder (KAE),
which integrates KAN with autoencoders (AEs) to enhance representation learning
for retrieval, classification, and denoising tasks. Leveraging the flexible
polynomial functions in KAN layers, KAE captures complex data patterns and
non-linear relationships. Experiments on benchmark datasets demonstrate that
KAE improves latent representation quality, reduces reconstruction errors, and
achieves superior performance in downstream tasks such as retrieval,
classification, and denoising, compared to standard autoencoders and other KAN
variants. These results suggest KAE's potential as a useful tool for
representation learning. Our code is available at
\url{https://github.com/SciYu/KAE/}.
|
2501.00421 | Outlier-Robust Linear System Identification Under Heavy-tailed Noise | eess.SY cs.LG cs.SY math.OC | We consider the problem of estimating the state transition matrix of a linear
time-invariant (LTI) system, given access to multiple independent trajectories
sampled from the system. Several recent papers have conducted a non-asymptotic
analysis of this problem, relying crucially on the assumption that the process
noise is either Gaussian or sub-Gaussian, i.e., "light-tailed". In sharp
contrast, we work under a significantly weaker noise model, assuming nothing
more than the existence of the fourth moment of the noise distribution. For
this setting, we provide the first set of results demonstrating that one can
obtain sample-complexity bounds for linear system identification that are
nearly of the same order as under sub-Gaussian noise. To achieve such results,
we develop a novel robust system identification algorithm that relies on
constructing multiple weakly-concentrated estimators, and then boosting their
performance using suitable tools from high-dimensional robust statistics.
Interestingly, our analysis reveals how the kurtosis of the noise distribution,
a measure of heavy-tailedness, affects the number of trajectories needed to
achieve desired estimation error bounds. Finally, we show that our algorithm
and analysis technique can be easily extended to account for scenarios where an
adversary can arbitrarily corrupt a small fraction of the collected trajectory
data. Our work takes the first steps towards building a robust statistical
learning theory for control under non-ideal assumptions on the data-generating
process.
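The block-then-aggregate boosting step is in the spirit of the classic median-of-means estimator, sketched here for scalar mean estimation under heavy-tailed noise (illustrative, not the paper's system-identification algorithm):

```python
import numpy as np

def median_of_means(x, n_blocks=10):
    """Split samples into blocks, average each block, and return the
    median of the block means -- robust to heavy-tailed outliers."""
    blocks = np.array_split(np.asarray(x), n_blocks)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(1)
# Heavy-tailed noise with a finite fourth moment: Student-t with 5 dof.
samples = 3.0 + rng.standard_t(df=5, size=10_000)
print(median_of_means(samples), samples.mean())
```

A single extreme block cannot drag the median far, which is why such aggregation tolerates both heavy tails and a small fraction of corrupted trajectories.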
|
2501.00425 | Whisper Turns Stronger: Augmenting Wav2Vec 2.0 for Superior ASR in
Low-Resource Languages | cs.CL cs.SD eess.AS | Approaching Speech-to-Text and Automatic Speech Recognition problems in
low-resource languages is notoriously challenging due to the scarcity of
validated datasets and the diversity of dialects. Arabic, Russian, and
Portuguese exemplify these difficulties, being low-resource languages due to
the many dialects of these languages across different continents worldwide.
Moreover, the variety of accents and pronunciations of such languages
complicate ASR models' success. With the increasing popularity of Deep Learning
and Transformers, acoustic models like the renowned Wav2Vec2 have achieved
superior performance in the Speech Recognition field compared to
state-of-the-art approaches. However, despite Wav2Vec2's improved efficiency
over traditional methods, its performance significantly declines for
under-represented languages, even though it requires significantly less labeled
data. This paper introduces an end-to-end framework that enhances ASR systems
fine-tuned on Wav2Vec2 through data augmentation techniques. To validate our
framework's effectiveness, we conducted a detailed experimental evaluation
using three datasets from Mozilla's Common Voice project in Arabic, Russian,
and Portuguese. Additionally, the framework presented in this paper
demonstrates robustness to different diacritics. Ultimately, our approach
outperforms two previous baseline models, which are the pre-trained Wav2Vec2
and the well-known Whisper ASR model, resulting in an average relative
improvement of 33.9\% in Word Error Rate and a 53.2\% relative improvement in
Character Error Rate.
|
2501.00426 | B2Net: Camouflaged Object Detection via Boundary Aware and Boundary
Fusion | cs.CV cs.LG | Camouflaged object detection (COD) aims to identify objects in images that
are well hidden in the environment due to their high similarity to the
background in terms of texture and color. However, most existing
boundary-guided camouflaged object detection algorithms tend to generate object
boundaries early in the network, and inaccurate edge priors often introduce
noise into object detection. To address this issue, we propose a novel network
named B2Net aiming to enhance the accuracy of obtained boundaries by reusing
boundary-aware modules at different stages of the network. Specifically, we
present a Residual Feature Enhanced Module (RFEM) with the goal of integrating
more discriminative feature representations to enhance detection accuracy and
reliability. After that, the Boundary Aware Module (BAM) is introduced to
explore edge cues twice by integrating spatial information from low-level
features and semantic information from high-level features. Finally, we design
the Cross-scale Boundary Fusion Module (CBFM), which integrates information across
different scales in a top-down manner, merging boundary features with object
features to obtain a comprehensive feature representation incorporating
boundary information. Extensive experimental results on three challenging
benchmark datasets demonstrate that our proposed method B2Net outperforms 15
state-of-the-art methods under widely used evaluation metrics. Code will be made
publicly available.
|
2501.00430 | Enhancing LLM Reasoning with Multi-Path Collaborative Reactive and
Reflection agents | cs.CL | Agents have demonstrated their potential in scientific reasoning tasks
through large language models. However, they often face challenges such as
insufficient accuracy and degeneration of thought when handling complex
reasoning tasks, which impede their performance. To overcome these issues, we
propose the Reactive and Reflection agents with Multi-Path Reasoning (RR-MP)
Framework, aimed at enhancing the reasoning capabilities of LLMs. Our approach
improves scientific reasoning accuracy by employing a multi-path reasoning
mechanism where each path consists of a reactive agent and a reflection agent
that collaborate to prevent degeneration of thought inherent in single-agent
reliance. Additionally, the RR-MP framework does not require additional
training; it utilizes multiple dialogue instances for each reasoning path and a
separate summarizer to consolidate insights from all paths. This design
integrates diverse perspectives and strengthens reasoning across each path. We
conducted zero-shot and few-shot evaluations on tasks involving moral
scenarios, college-level physics, and mathematics. Experimental results
demonstrate that our method outperforms baseline approaches, highlighting the
effectiveness and advantages of the RR-MP framework in managing complex
scientific reasoning tasks.
|
2501.00432 | OV-HHIR: Open Vocabulary Human Interaction Recognition Using Cross-modal
Integration of Large Language Models | cs.CV cs.LG | Understanding human-to-human interactions, especially in contexts like public
security surveillance, is critical for monitoring and maintaining safety.
Traditional activity recognition systems are limited by fixed vocabularies,
predefined labels, and rigid interaction categories that often rely on
choreographed videos and overlook concurrent interactive groups. These
limitations make such systems less adaptable to real-world scenarios, where
interactions are diverse and unpredictable. In this paper, we propose an open
vocabulary human-to-human interaction recognition (OV-HHIR) framework that
leverages large language models to generate open-ended textual descriptions of
both seen and unseen human interactions in open-world settings without being
confined to a fixed vocabulary. Additionally, we create a comprehensive,
large-scale human-to-human interaction dataset by standardizing and combining
existing public human interaction datasets into a unified benchmark. Extensive
experiments demonstrate that our method outperforms traditional
fixed-vocabulary classification systems and existing cross-modal language
models for video understanding, setting the stage for more intelligent and
adaptable visual understanding systems in surveillance and beyond.
|
2501.00436 | Intuitive Analysis of the Quantization-based Optimization: From
Stochastic and Quantum Mechanical Perspective | cs.LG quant-ph | In this paper, we present an intuitive analysis of the optimization technique
based on the quantization of an objective function. Quantization of an
objective function is an effective optimization methodology that decreases the
measure of a level set containing several saddle points and local minima and
finds the optimal point at the limit level set. To investigate the dynamics of
quantization-based optimization, we derive an overdamped Langevin dynamics
model from an intuitive analysis to minimize the level set by iterative
quantization. We claim that quantization-based optimization involves the
quantities of thermodynamical and quantum mechanical optimization as the core
methodologies of global optimization. Furthermore, on the basis of the proposed
SDE, we provide thermodynamic and quantum mechanical analysis with
Witten-Laplacian. The simulation results with the benchmark functions, which
compare the performance of the nonlinear optimization, demonstrate the validity
of the quantization-based optimization.
|
2501.00437 | Unleashing Text-to-Image Diffusion Prior for Zero-Shot Image Captioning | cs.CV cs.CL cs.LG cs.MM | Recently, zero-shot image captioning has gained increasing attention, where
only text data is available for training. The remarkable progress in
text-to-image diffusion model presents the potential to resolve this task by
employing synthetic image-caption pairs generated by this pre-trained prior.
Nonetheless, the defective details in the salient regions of the synthetic
images introduce semantic misalignment between the synthetic image and text,
leading to compromised results. To address this challenge, we propose a novel
Patch-wise Cross-modal feature Mix-up (PCM) mechanism to adaptively mitigate
the unfaithful contents in a fine-grained manner during training, which can be
integrated into most of encoder-decoder frameworks, introducing our PCM-Net.
Specifically, for each input image, salient visual concepts in the image are
first detected considering the image-text similarity in CLIP space. Next, the
patch-wise visual features of the input image are selectively fused with the
textual features of the salient visual concepts, leading to a mixed-up feature
map with less defective content. Finally, a visual-semantic encoder is
exploited to refine the derived feature map, which is further incorporated into
the sentence decoder for caption generation. Additionally, to facilitate the
model training with synthetic data, a novel CLIP-weighted cross-entropy loss is
devised to prioritize the high-quality image-text pairs over the low-quality
counterparts. Extensive experiments on MSCOCO and Flickr30k datasets
demonstrate the superiority of our PCM-Net compared with state-of-the-art
VLMs-based approaches. It is noteworthy that our PCM-Net ranks first in both
in-domain and cross-domain zero-shot image captioning. The synthetic dataset
SynthImgCap and code are available at https://jianjieluo.github.io/SynthImgCap.
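One plausible form of a CLIP-weighted cross-entropy, with per-pair weights normalized by a softmax over similarity scores; the paper's exact weighting may differ, so treat this as a sketch:

```python
import numpy as np

def clip_weighted_ce(logits, targets, clip_scores):
    """Cross-entropy where each image-text pair's loss is weighted by its
    (softmax-normalized) CLIP similarity, down-weighting low-quality
    synthetic pairs. Hypothetical weighting, for illustration only."""
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -logp[np.arange(len(targets)), targets]
    w = np.exp(clip_scores) / np.exp(clip_scores).sum()
    return float((w * nll).sum())

logits = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
loss = clip_weighted_ce(logits, np.array([0, 1]), np.array([0.9, 0.2]))
print(loss)
```

With equal similarity scores the weights become uniform and the loss reduces to the ordinary average cross-entropy, so the weighting only matters when pair quality varies.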
|
2501.00444 | Knowledge-aware equation discovery with automated background knowledge
extraction | cs.AI | In differential equation discovery algorithms, a priori expert knowledge is
mainly used implicitly to constrain the form of the expected equation, making
it impossible for the algorithm to truly discover equations. Instead, most
differential equation discovery algorithms try to recover the coefficients for
a known structure. In this paper, we describe an algorithm that allows the
discovery of unknown equations using automatically or manually extracted
background knowledge. Instead of imposing rigid constraints, we modify the
structure space so that certain terms are likely to appear within the crossover
and mutation operators. In this way, we mimic expertly chosen terms while
preserving the possibility of obtaining any equation form. The paper shows that
the extraction and use of knowledge allows our algorithm to outperform SINDy
in terms of search stability and robustness. Synthetic examples are given for
Burgers, wave, and Korteweg--De Vries equations.
|
2501.00448 | A Complex Frequency-Based Control for Inverter-Based Resources | eess.SY cs.SY | This paper proposes a novel control for Inverter-based Resources (IBRs) based
on the Complex Frequency (CF) concept. The controller's objective is to
maintain a constant CF of the voltage at the terminals of the IBR by adjusting
its current reference. This current is imposed based on the well-known power
flow equation, the dynamics of which are calculated through the estimation of
the CF of the voltages of the adjacent buses. Performance is evaluated by
analyzing local variations in frequency and magnitude of the voltage, as well
as the response of the system's Center of Inertia (CoI) frequency, and then
compared with conventional frequency droop, PI voltage controllers and virtual
inertia. The case study utilizes the WSCC 9-bus system and a 1479-bus model of
the Irish transmission grid and considers various contingencies and
sensitivities such as the impact of limiters, delays, noise, R/X ratio, and EMT
dynamics. Results show that the proposed scheme consistently outperforms the
conventional controllers, leading to significant improvements in the overall
dynamic response of the system.
|
2501.00449 | Do Students with Different Personality Traits Demonstrate Different
Physiological Signals in Video-based Learning? | cs.HC cs.AI | Past research shows that personality traits are strong predictors of
academic performance. Mature, validated marker systems for assessing
personality traits already exist. However, marker-system-based assessment
methods have their own limitations; for example, dishonest responses cannot be
avoided. In this research, the goal is to develop a method that can overcome
these limitations by relying on physiological signals for the assessment.
Thirty participants took part in this experiment. Based on the statistical
results, we found correlations between students' personality traits and changes
in their physiological signals when learning via videos. Specifically, we found
that participants' degrees of extraversion, agreeableness, conscientiousness,
and openness to experience are correlated with the variance of heart rates, the
variance of GSR values, and the skewness of voice frequencies, among other
features.
|
2501.00450 | An OpenFOAM face-centred solver for incompressible flows robust to mesh
distortion | physics.flu-dyn cs.CE cs.NA math.NA | This work presents an overview of mesh-induced errors commonly experienced by
cell-centred finite volumes (CCFV), for which the face-centred finite volume
(FCFV) paradigm offers competitive solutions. In particular, a robust FCFV
solver for incompressible laminar flows is integrated in OpenFOAM and tested on
a set of steady-state and transient benchmarks. The method outperforms standard
simpleFoam and pimpleFoam algorithms in terms of optimal convergence, accuracy,
stability, and robustness. Special attention is devoted to motivate and
numerically demonstrate the ability of the FCFV method to treat non-orthogonal,
stretched, and skewed meshes, where CCFV schemes exhibit shortcomings.
|
2501.00452 | Unrolled Creative Adversarial Network For Generating Novel Musical
Pieces | cs.SD cs.LG eess.AS | Music generation has been established as a prominent topic in artificial
intelligence and machine learning over recent years. Most recent works have
applied RNN-based neural network methods for sequence generation. In contrast,
generative adversarial networks (GANs) and their counterparts have been
explored by very few researchers for music generation.
In this paper, a classical system was employed alongside a new system to
generate creative music. Both systems were designed based on adversarial
networks to generate music by learning from examples. The classical system was
trained to learn a set of music pieces without differentiating between classes,
whereas the new system was trained to learn the different composers and their
styles to generate a creative music piece by deviating from the learned
composers' styles.
The base structure utilized was generative adversarial networks (GANs), which
are capable of generating novel outputs given a set of inputs to learn from and
mimic their distribution. It has been shown in previous work that GANs are
limited in their original design with respect to creative outputs. Building on
the Creative Adversarial Networks (CAN), this work applied them in the music
domain rather than the visual art domain. Additionally, unrolled CAN was
introduced to prevent mode collapse. Experiments were conducted on both GAN and
CAN for generating music, and their capabilities were measured in terms of
deviation from the input set.
|
2501.00457 | Differentiable Prompt Learning for Vision Language Models | cs.LG cs.AI cs.CL cs.CV | Prompt learning is an effective way to exploit the potential of large-scale
pre-trained foundational models. Continuous prompts parameterize context tokens
in prompts by turning them into differentiable vectors. Deep continuous prompts
insert prompts not only in the input but also in the intermediate hidden
representations. Manually designed deep continuous prompts exhibit a remarkable
improvement compared to the zero-shot pre-trained model on downstream tasks.
How to automate continuous prompt design is an underexplored area, and a
fundamental question arises: is the manually designed deep prompt strategy
optimal?
To answer this question, we propose a method dubbed differentiable prompt
learning (DPL). The DPL method is formulated as an optimization problem to
automatically determine the optimal context length of the prompt to be added to
each layer, where the objective is to maximize the performance. We test the DPL
method on the pre-trained CLIP. We empirically find that by using only limited
data, our DPL method can find a deep continuous prompt configuration with high
confidence. The performance on the downstream tasks exhibits the superiority of
the automatic design: our method boosts the average test accuracy by 2.60% on
11 datasets compared to baseline methods. Besides, our method focuses only on
the prompt configuration (i.e. context length for each layer), which means that
our method is compatible with the baseline methods that have sophisticated
designs to boost the performance. The DPL method can be deployed to large
language models or computer vision models at no cost.
|
2501.00461 | Efficient support ticket resolution using Knowledge Graphs | cs.AI cs.LG cs.MA | A review of over 160,000 customer cases indicates that about 90% of
product-support time is spent solving the roughly 10% of tickets for which a
trivial solution may not exist. Many of these challenging cases require the
support of several engineers working together within a "swarm", and some also
need to go to development support as bugs. These challenging customer issues
represent a major opportunity for machine learning and knowledge graphs to
identify the ideal engineer or group of engineers (swarm) that can best address
the solution, reducing wait times for the customer.
consider here is a learning-to-rank (LTR) task that, given an incident and the
set of engineers currently assigned to it (which might be the empty set in the
non-swarming context), produces a ranked list of engineers best fit to
help resolve that incident. To calculate the rankings, we may consider a wide
variety of input features including the incident description provided by the
customer, the affected component(s), engineer ratings of their expertise,
knowledge base article text written by engineers, response to customer text
written by engineers, and historic swarming data. The central hypothesis
is that by including a holistic set of contextual data around which cases an
engineer has solved, we can significantly improve the LTR algorithm over
benchmark models. The article proposes a novel approach to modelling Knowledge
Graph embeddings from multiple data sources, including the swarm information.
The results obtained prove that by incorporating this additional context, we
can improve the recommendations significantly over traditional machine learning
methods like TF-IDF.
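As a reference point, the TF-IDF baseline mentioned above can be sketched as ranking engineers by cosine similarity between the incident text and expertise profiles. The incident, engineer names, and profile texts below are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Plain TF-IDF over whitespace tokens (illustrative, not the paper's pipeline)."""
    tokenized = [doc.lower().split() for doc in docs]
    dfs = Counter()
    for toks in tokenized:
        dfs.update(set(toks))  # document frequency per term
    n = len(docs)
    return [{t: tf[t] * math.log((1 + n) / (1 + dfs[t])) for t in tf}
            for tf in (Counter(toks) for toks in tokenized)]

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

incident = "database replication lag after failover"
profiles = {  # hypothetical engineer expertise text
    "alice": "storage database replication backup tuning",
    "bob": "frontend ui rendering css accessibility",
}
vecs = tfidf_vectors([incident] + list(profiles.values()))
ranked = sorted(zip(profiles, vecs[1:]),
                key=lambda kv: cosine(vecs[0], kv[1]), reverse=True)
```

The proposed Knowledge Graph approach replaces these sparse lexical vectors with embeddings that also encode swarm history and component context.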
|
2501.00463 | SAT-LDM: Provably Generalizable Image Watermarking for Latent Diffusion
Models with Self-Augmented Training | cs.LG cs.CR cs.CV | The rapid proliferation of AI-generated images necessitates effective
watermarking techniques to protect intellectual property and detect fraudulent
content. While existing training-based watermarking methods show promise, they
often struggle with generalizing across diverse prompts and tend to introduce
visible artifacts. To this end, we propose a novel, provably generalizable
image watermarking approach for Latent Diffusion Models, termed Self-Augmented
Training (SAT-LDM). Our method aligns the training and testing phases through a
free generation distribution, thereby enhancing the watermarking module's
generalization capabilities. We theoretically consolidate SAT-LDM by proving
that the free generation distribution contributes to its tight generalization
bound, without the need for additional data collection. Extensive experiments
show that SAT-LDM not only achieves robust watermarking but also significantly
improves the quality of watermarked images across a wide range of prompts.
Moreover, our experimental analyses confirm the strong generalization abilities
of SAT-LDM. We hope that our method provides a practical and efficient solution
for securing high-fidelity AI-generated content.
|
2501.00464 | Addressing Challenges in Data Quality and Model Generalization for
Malaria Detection | cs.LG eess.SP | Malaria remains a significant global health burden, particularly in
resource-limited regions where timely and accurate diagnosis is critical to
effective treatment and control. Deep Learning (DL) has emerged as a
transformative tool for automating malaria detection, offering high accuracy
and scalability. However, the effectiveness of these models is
constrained by challenges in data quality and model generalization including
imbalanced datasets, limited diversity and annotation variability. These issues
reduce diagnostic reliability and hinder real-world applicability.
This article provides a comprehensive analysis of these challenges and their
implications for malaria detection performance. Key findings highlight the
impact of data imbalances which can lead to a 20\% drop in F1-score and
regional biases which significantly hinder model generalization. Proposed
solutions, such as GAN-based augmentation, improved accuracy by 15-20\% by
generating synthetic data to balance classes and enhance dataset diversity.
Domain adaptation techniques, including transfer learning, further improved
cross-domain robustness by up to 25\% in sensitivity.
Additionally, the development of diverse global datasets and collaborative
data-sharing frameworks is emphasized as a cornerstone for equitable and
reliable malaria diagnostics. The role of explainable AI techniques in
improving clinical adoption and trustworthiness is also underscored. By
addressing these challenges, this work advances the field of AI-driven malaria
detection and provides actionable insights for researchers and practitioners.
The proposed solutions aim to support the development of accessible and
accurate diagnostic tools, particularly for resource-constrained populations.
|
2501.00465 | Dementia Detection using Multi-modal Methods on Audio Data | cs.LG | Dementia is a neurodegenerative disease that causes gradual cognitive
impairment. It is widespread worldwide and is the subject of extensive research
each year aimed at prevention and treatment. It severely impacts the patient's
ability to remember events and communicate clearly; most variants have no known
cure, but early detection can help alleviate symptoms before they worsen. One
of the main symptoms of dementia is difficulty in expressing ideas through
speech. This paper presents a model developed to predict the onset of the
disease using audio recordings from patients. An ASR-based model was developed
that generates transcripts from the audio files using the Whisper model and
then applies a RoBERTa regression model to produce an MMSE score for the
patient. This score can be used to predict the extent to which
the cognitive ability of a patient has been affected. We use the PROCESS_V1
dataset for this task, which is introduced through the PROCESS Grand Challenge
2025. The model achieved an RMSE score of 2.6911, which is around 10 percent
lower than the described baseline.
|
2501.00467 | Score-Based Metropolis-Hastings Algorithms | cs.LG stat.CO | In this paper, we introduce a new approach for integrating score-based models
with the Metropolis-Hastings algorithm. While traditional score-based diffusion
models excel in accurately learning the score function from data points, they
lack an energy function, making the Metropolis-Hastings adjustment step
inaccessible. Consequently, the unadjusted Langevin algorithm is often used for
sampling using estimated score functions. The lack of an energy function then
prevents the application of the Metropolis-adjusted Langevin algorithm and
other Metropolis-Hastings methods, limiting the wealth of other algorithms
developed that use acceptance functions. We address this limitation by
introducing a new loss function based on the \emph{detailed balance condition},
allowing the estimation of the Metropolis-Hastings acceptance probabilities
given a learned score function. We demonstrate the effectiveness of the
proposed method for various scenarios, including sampling from heavy-tail
distributions.
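One simple way to see why a score alone can support an acceptance step: the log-density difference in the Metropolis-Hastings ratio can be recovered by integrating the score along a path between the current and proposed points. The sketch below uses this line-integral route purely as an illustration; the paper instead learns acceptance probabilities through a detailed-balance loss:

```python
import numpy as np

def log_ratio_from_score(score, x, y, n_steps=128):
    """Approximate log p(y) - log p(x) = int_0^1 s(x + t(y-x)) . (y-x) dt
    by the trapezoidal rule, using only the score s = grad log p."""
    ts = np.linspace(0.0, 1.0, n_steps)
    pts = x[None, :] + ts[:, None] * (y - x)[None, :]
    integrand = np.array([score(p) @ (y - x) for p in pts])
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(ts)) / 2.0)

def mh_step(score, x, step, rng):
    """Random-walk Metropolis step with a score-based acceptance ratio
    (symmetric proposal, so the proposal terms cancel)."""
    y = x + step * rng.standard_normal(x.shape)
    return y if np.log(rng.uniform()) < log_ratio_from_score(score, x, y) else x

# Sanity check against a standard Gaussian, whose score is s(x) = -x:
# the exact log ratio is (|x|^2 - |y|^2) / 2.
rng = np.random.default_rng(0)
score = lambda z: -z
x = np.zeros(2)
chain = []
for _ in range(3000):
    x = mh_step(score, x, 0.8, rng)
    chain.append(x.copy())
chain = np.asarray(chain)
```

Unlike this quadrature-per-proposal approach, a learned acceptance function amortizes the cost over all future proposals, which is part of the appeal of the detailed-balance formulation.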
|
2501.00476 | Design To Convert a Wired PLC into Wireless PLC | cs.HC cs.SY eess.SY physics.ins-det | This paper applies Bluetooth technology to convert an existing wired
Programmable Logic Controller (PLC) setup into a wireless one. Two Bluetooth
devices are employed as a transceiver pair to transmit and receive the input
signal, realizing a wireless PLC. The main advantage of a PLC is controlling
the output according to the status of the input. Handshaking takes place
between the two Bluetooth modules, which are interfaced with a microcontroller
board (Arduino) and then with the PLC, so that field devices can be controlled
without wires.
|
2501.00480 | Lyapunov-based Resilient Secondary Synchronization Strategy of AC
Microgrids Under Exponentially Energy-Unbounded FDI Attacks | eess.SY cs.SY | This article presents fully distributed Lyapunov-based attack-resilient
secondary control strategies for islanded inverter-based AC microgrids,
designed to counter a broad spectrum of energy-unbounded False Data Injection
(FDI) attacks, including exponential attacks, targeting control input channels.
While distributed control improves scalability and reliability, it also
increases susceptibility to cyber threats. The proposed strategies, supported
by rigorous Lyapunov-based proofs, ensure uniformly ultimately bounded (UUB)
convergence for frequency regulation, voltage containment, and power sharing,
even under severe cyber attacks. The effectiveness of the proposed approach has
been demonstrated through case studies on a modified IEEE 34-bus system,
leveraging simulations and real-time Hardware-in-the-Loop experiments with
OPAL-RT.
|
2501.00485 | Two Cases of Deduction with Non-referring Descriptions | cs.LO cs.CL | Formal reasoning with non-denoting terms, esp. non-referring descriptions
such as "the King of France", is still an under-investigated area. A recent
exception is a series of papers by, e.g., Indrzejczak, Zawidzki and K\"urbis.
The present paper offers an alternative to their approach: instead of free
logic and sequent calculus, it is framed in partial type theory with natural
deduction in sequent style. Using a Montague- and Tich\'y-style formalization
of natural language, the paper successfully handles deduction with intensional
transitives whose complements are non-referring descriptions, and derives
Strawsonian rules for existential presuppositions of sentences with such
descriptions.
|
2501.00502 | Exploring Physics-Informed Neural Networks for Crop Yield Loss
Forecasting | cs.LG cs.AI | In response to climate change, assessing crop productivity under extreme
weather conditions is essential to enhance food security. Crop simulation
models, which align with physical processes, offer explainability but often
perform poorly. Conversely, machine learning (ML) models for crop modeling are
powerful and scalable yet operate as black boxes and lack adherence to the
physical principles of crop growth. To bridge this gap, we propose a novel
method that
combines the strengths of both approaches by estimating the water use and the
crop sensitivity to water scarcity at the pixel level. This approach enables
yield loss estimation grounded in physical principles by sequentially solving
the equation for crop yield response to water scarcity, using an enhanced loss
function. Leveraging Sentinel-2 satellite imagery, climate data, simulated
water use data, and pixel-level yield data, our model demonstrates high
accuracy, achieving an R2 of up to 0.77, matching or surpassing
state-of-the-art models like RNNs and Transformers. Additionally, it provides
interpretable and physically consistent outputs, supporting industry,
policymakers, and farmers in adapting to extreme weather conditions.
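The "equation for crop yield response to water scarcity" is not spelled out in the abstract; a classical candidate is the FAO-33 water-production function, sketched here under that assumption (all values are illustrative):

```python
def relative_yield_loss(eta, etc, ky):
    """FAO-33 form: 1 - Ya/Ym = Ky * (1 - ETa/ETc), where
    eta: actual evapotranspiration (water use),
    etc: crop (maximum) evapotranspiration demand,
    ky:  crop sensitivity to water scarcity.
    Returns the fractional yield loss, clipped to [0, 1]."""
    loss = ky * (1.0 - eta / etc)
    return min(max(loss, 0.0), 1.0)

# Example pixel meeting 70% of crop water demand, with sensitivity Ky = 1.25:
loss = relative_yield_loss(eta=0.7, etc=1.0, ky=1.25)  # 1.25 * 0.3 = 0.375
```

In the hybrid setup described above, the water use (ETa) and the sensitivity (Ky) would be the pixel-level quantities estimated by the ML model, keeping the loss prediction itself grounded in the physical equation.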
|
2501.00507 | Real-Time Sampling-Based Safe Motion Planning for Robotic Manipulators
in Dynamic Environments | cs.RO | In this paper, we present the main features of Dynamic Rapidly-exploring
Generalized Bur Tree (DRGBT) algorithm, a sampling-based planner for dynamic
environments. We provide a detailed time analysis and appropriate scheduling to
facilitate a real-time operation. To this end, an extensive analysis is
conducted to identify the time-critical routines and their dependence on the
number of obstacles. Furthermore, information about the distance to obstacles
is used to compute a structure called dynamic expanded bubble of free
configuration space, which is then utilized to establish sufficient conditions
for a guaranteed safe motion of the robot while satisfying all kinematic
constraints. An extensive randomized simulation trial is conducted to compare
the proposed algorithm to a competing state-of-the-art method. Finally, an
experimental study on a real robot is carried out covering a variety of
scenarios including those with human presence. The results show the
effectiveness and feasibility of real-time execution of the proposed motion
planning algorithm within a typical sensor-based arrangement, using cheap
hardware and sequential architecture, without the necessity for GPUs or heavy
parallelization.
|
2501.00508 | Active Learning of General Halfspaces: Label Queries vs Membership
Queries | cs.LG | We study the problem of learning general (i.e., not necessarily homogeneous)
halfspaces under the Gaussian distribution on $R^d$ in the presence of some
form of query access. In the classical pool-based active learning model, where
the algorithm is allowed to make adaptive label queries to previously sampled
points, we establish a strong information-theoretic lower bound ruling out
non-trivial improvements over the passive setting. Specifically, we show that
any active learner requires label complexity of
$\tilde{\Omega}(d/(\log(m)\epsilon))$, where $m$ is the number of unlabeled
examples. In particular, to beat the passive label complexity of $\tilde{O}
(d/\epsilon)$, an active learner requires a pool of $2^{poly(d)}$ unlabeled
samples. On the positive side, we show that this lower bound can be
circumvented with membership query access, even in the agnostic model.
Specifically, we give a computationally efficient learner with query complexity
of $\tilde{O}(\min\{1/p, 1/\epsilon\} + d\cdot polylog(1/\epsilon))$ achieving
error guarantee of $O(opt)+\epsilon$. Here $p \in [0, 1/2]$ is the bias and
$opt$ is the 0-1 loss of the optimal halfspace. As a corollary, we obtain a
strong separation between the active and membership query models. Taken
together, our results characterize the complexity of learning general
halfspaces under Gaussian marginals in these models.
|
2501.00509 | Fotheidil: an Automatic Transcription System for the Irish Language | cs.CL cs.SD eess.AS | This paper sets out the first web-based transcription system for the Irish
language - Fotheidil, a system that utilises speech-related AI technologies as
part of the ABAIR initiative. The system includes both off-the-shelf
pre-trained voice activity detection and speaker diarisation models and models
trained specifically for Irish automatic speech recognition and capitalisation
and punctuation restoration. Semi-supervised learning is explored to improve
the acoustic model of a modular TDNN-HMM ASR system, yielding substantial
improvements for out-of-domain test sets and dialects that are underrepresented
in the supervised training set. A novel approach to capitalisation and
punctuation restoration involving sequence-to-sequence models is compared with
the conventional approach using a classification model. Experimental results
here also show substantial improvements in performance. The system will be made
freely available for public use, and represents an important resource to
researchers and others who transcribe Irish language materials. Human-corrected
transcriptions will be collected and included in the training dataset as the
system is used, which should lead to incremental improvements to the ASR model
in a cyclical, community-driven fashion.
|
2501.00510 | VinT-6D: A Large-Scale Object-in-hand Dataset from Vision, Touch and
Proprioception | cs.RO | This paper addresses the scarcity of large-scale datasets for accurate
object-in-hand pose estimation, which is crucial for robotic in-hand
manipulation within the ``Perception-Planning-Control" paradigm. Specifically,
we introduce VinT-6D, the first extensive multi-modal dataset integrating
vision, touch, and proprioception, to enhance robotic manipulation. VinT-6D
comprises 2 million VinT-Sim and 0.1 million VinT-Real splits, collected via
simulations in MuJoCo and Blender and a custom-designed real-world platform.
This dataset is tailored for robotic hands, offering models with whole-hand
tactile perception and high-quality, well-aligned data. To the best of our
knowledge, VinT-Real is the largest such collection given the difficulty of
real-world data acquisition, enabling it to bridge the simulation-to-real gap
better than previous works. Built upon VinT-6D, we present a benchmark
method that shows significant improvements in performance by fusing multi-modal
information. The project is available at https://VinT-6D.github.io/.
|
2501.00511 | Stochastic Extragradient with Flip-Flop Shuffling & Anchoring: Provable
Improvements | cs.LG math.OC | In minimax optimization, the extragradient (EG) method has been extensively
studied because it outperforms the gradient descent-ascent method in
convex-concave (C-C) problems. Yet, stochastic EG (SEG) has seen limited
success in C-C problems, especially for unconstrained cases. Motivated by the
recent progress of shuffling-based stochastic methods, we investigate the
convergence of shuffling-based SEG in unconstrained finite-sum minimax
problems, in search of convergent shuffling-based SEG. Our analysis reveals
that both random reshuffling and the recently proposed flip-flop shuffling
alone can suffer divergence in C-C problems. However, with an additional simple
trick called anchoring, we develop the SEG with flip-flop anchoring (SEG-FFA)
method which successfully converges in C-C problems. We also show upper and
lower bounds in the strongly-convex-strongly-concave setting, demonstrating
that SEG-FFA has a provably faster convergence rate compared to other
shuffling-based methods.
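The extragradient base update the paper builds on, demonstrated on a toy bilinear convex-concave problem where plain gradient descent-ascent cycles or diverges (the stochastic flip-flop shuffling and anchoring modifications are the paper's contribution and are not reproduced here):

```python
def extragradient(x, y, grad_x, grad_y, eta, iters):
    """Deterministic extragradient (EG) for min_x max_y f(x, y):
    extrapolate to a midpoint, then update using midpoint gradients."""
    for _ in range(iters):
        xm = x - eta * grad_x(x, y)   # extrapolation: descent step in x
        ym = y + eta * grad_y(x, y)   # extrapolation: ascent step in y
        x = x - eta * grad_x(xm, ym)  # update with the midpoint gradients
        y = y + eta * grad_y(xm, ym)
    return x, y

# Toy bilinear C-C problem f(x, y) = x * y with saddle point (0, 0):
gx = lambda x, y: y   # df/dx
gy = lambda x, y: x   # df/dy
x, y = extragradient(1.0, 1.0, gx, gy, eta=0.5, iters=100)
```

On this problem each EG iteration contracts the distance to the saddle point, whereas the plain descent-ascent iteration x <- x - eta*y, y <- y + eta*x expands it; the paper's question is which shuffling-based stochastic variants of this update retain convergence.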
|
2501.00513 | Fine-grained Video-Text Retrieval: A New Benchmark and Method | cs.CV cs.IR cs.LG | The ability to perceive fine-grained spatial and temporal information is
crucial for video-language retrieval. However, the existing video retrieval
benchmarks, such as MSRVTT and MSVD, fail to efficiently evaluate the
fine-grained retrieval ability of video-language models (VLMs) due to a lack of
detailed annotations. To address this problem, we present FIBER, a FIne-grained
BEnchmark for text to video Retrieval, containing 1,000 videos sourced from the
FineAction dataset. Uniquely, our FIBER benchmark provides detailed
human-annotated spatial annotations and temporal annotations for each video,
making it possible to independently evaluate the spatial and temporal bias of
VLMs on the video retrieval task. Besides, we employ a text embedding method to
unlock the capability of fine-grained video-language understanding of
Multimodal Large Language Models (MLLMs). Surprisingly, the experiment results
show that our Video Large Language Encoder (VLLE) performs comparably to
CLIP-based models on traditional benchmarks and has a stronger capability of
fine-grained representation with lower spatial-temporal bias. Project page:
https://fiber-bench.github.io.
|
2501.00514 | H-Net: A Multitask Architecture for Simultaneous 3D Force Estimation and
Stereo Semantic Segmentation in Intracardiac Catheters | eess.IV cs.AI cs.CV cs.LG cs.RO | The success rate of catheterization procedures is closely linked to the
sensory data provided to the surgeon. Vision-based deep learning models can
deliver both tactile and visual information in a sensor-free manner, while also
being cost-effective to produce. Given the complexity of these models for
devices with limited computational resources, research has focused on force
estimation and catheter segmentation separately. However, there is a lack of a
comprehensive architecture capable of simultaneously segmenting the catheter
from two different angles and estimating the applied forces in 3D. To bridge
this gap, this work proposes a novel, lightweight, multi-input, multi-output
encoder-decoder-based architecture. It is designed to segment the catheter from
two points of view and concurrently measure the applied forces in the x, y, and
z directions. This network processes two simultaneous X-Ray images, intended to
be fed by a biplane fluoroscopy system, showing a catheter's deflection from
different angles. It uses two parallel sub-networks with shared parameters to
output two segmentation maps corresponding to the inputs. Additionally, it
leverages stereo vision to estimate the applied forces at the catheter's tip in
3D. The architecture features two input channels, two classification heads for
segmentation, and a regression head for force estimation through a single
end-to-end architecture. The output of all heads was assessed and compared with
the literature, demonstrating state-of-the-art performance in both segmentation
and force estimation. To the best of the authors' knowledge, this is the first
time such a model has been proposed.
|
2501.00517 | A Method for Enhancing the Safety of Large Model Generation Based on
Multi-dimensional Attack and Defense | cs.CR cs.AI | Currently, large models are prone to generating harmful content when faced
with complex attack instructions, significantly reducing their defensive
capabilities. To address this issue, this paper proposes a method based on
constructing data aligned with multi-dimensional attack defense to enhance the
generative security of large models. The core of our method lies in improving
the effectiveness of safe alignment learning for large models by innovatively
increasing the diversity of attack instruction dimensions and the accuracy of
generating safe responses. To validate the effectiveness of our method, beyond
existing security evaluation benchmarks, we additionally designed new security
evaluation benchmarks and conducted comparative experiments using Llama3.2 as
the baseline model. The final experimental results demonstrate that our method
can significantly improve the generative security of large models under complex
instructional attacks, while also maintaining and enhancing the models' general
capabilities.
|
2501.00520 | Innovative Silicosis and Pneumonia Classification: Leveraging Graph
Transformer Post-hoc Modeling and Ensemble Techniques | cs.CV cs.LG | This paper presents a comprehensive study on the classification and detection
of Silicosis-related lung inflammation. Our main contributions include 1) the
creation of a newly curated chest X-ray (CXR) image dataset named SVBCX that is
tailored to the nuances of lung inflammation caused by distinct agents,
providing a valuable resource for the silicosis and pneumonia research
community;
and 2) we propose a novel deep-learning architecture that integrates graph
transformer networks alongside a traditional deep neural network module for the
effective classification of silicosis and pneumonia. Additionally, we employ
the Balanced Cross-Entropy (BalCE) as a loss function to ensure more uniform
learning across different classes, enhancing the model's ability to discern
subtle differences in lung conditions. The proposed model architecture and loss
function selection aim to improve the accuracy and reliability of inflammation
detection, particularly in the context of Silicosis. Furthermore, our research
explores the efficacy of an ensemble approach that combines the strengths of
diverse model architectures. Experimental results on the constructed dataset
demonstrate promising outcomes, showcasing substantial enhancements compared to
baseline models. The ensemble of models achieves a macro-F1 score of 0.9749 and
AUC ROC scores exceeding 0.99 for each class, underscoring the effectiveness of
our approach in accurate and robust lung inflammation classification.
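One common recipe behind the balanced cross-entropy idea referenced above: weight each class's log-loss inversely to its frequency, so minority classes (e.g. silicosis) contribute comparably to majority ones. This is a generic sketch with made-up labels; the paper's exact BalCE formulation may differ:

```python
import math
from collections import Counter

def balanced_class_weights(labels):
    """Weights inversely proportional to class frequency, scaled so that a
    perfectly balanced dataset would give every class weight 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

def balanced_cross_entropy(probs, labels, weights):
    """Mean class-weighted negative log-likelihood of the true class."""
    losses = [-weights[y] * math.log(p[y]) for p, y in zip(probs, labels)]
    return sum(losses) / len(losses)

# Hypothetical imbalanced 3-class CXR label set
labels = ["normal"] * 8 + ["silicosis"] + ["pneumonia"]
weights = balanced_class_weights(labels)
probs = [{"normal": 0.8, "silicosis": 0.1, "pneumonia": 0.1}] * len(labels)
loss = balanced_cross_entropy(probs, labels, weights)
```

Here the two rare classes receive weight 10/3 versus 5/12 for the majority class, so errors on the rare conditions dominate the gradient, which is the uniform-learning effect the abstract attributes to BalCE.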
|
2501.00522 | TinyHelen's First Curriculum: Training and Evaluating Tiny Language
Models in a Simpler Language Environment | cs.CL cs.AI | Training language models (LMs) and their application agents is increasingly
costly due to large datasets and models, making test failures difficult to
bear. Simplified language environments serve as primordial training and testing
grounds, retaining essential commonsense and communication skills but in a more
digestible form, potentially enhancing the learning efficiency of LMs, and thus
reducing the required model size and data volume for effective training and
evaluation. In these simplified language environments, workable strategies for
small models, datasets, and agents may be adaptable to larger models, datasets,
and agents in complex language environments.
To create such environments, we focus on two aspects: i) minimizing language
dataset noise and complexity, and ii) preserving the essential text
distribution characteristics. Unlike previous methods, we propose a pipeline to
refine text data by eliminating noise, minimizing vocabulary, and maintaining
genre-specific patterns (e.g., for books, conversation, code, etc.).
Implementing this pipeline with large LMs, we have created a leaner suite of LM
training and evaluation datasets: 71M Leaner-Pretrain, 7M Leaner-Instruct,
Leaner-Glue for assessing linguistic proficiency, and Leaner-Eval for testing
instruction-following ability.
Our experiments show that leaner pre-training boosts LM learning efficiency.
Tiny LMs trained on these datasets outperform those trained on original
datasets in instruction-following across different language granularity levels.
Moreover, the Leaner-Pretrain dataset's alignment with conventional large LM
training sets enables resource-optimized analysis of how learning objectives,
model architectures, and training techniques impact performance on language
modeling and downstream tasks. Our code and datasets are available at
https://github.com/EmpathYang/TinyHelen.git.
|
2501.00523 | Event-Triggered Observer-Based Fixed-Time Consensus Control for
Uncertain Nonlinear Multiagent Systems with Unknown States | eess.SY cs.SY | This paper introduces a novel approach for achieving fixed-time tracking
consensus control in multiagent systems (MASs). Departing from the reliance on
traditional controllers, our innovative controller integrates modified tuning
and Lyapunov functions to guarantee stability and convergence. Furthermore, we
have implemented an event-triggered strategy aimed at reducing the frequency of
updates, alongside an output-feedback observer to manage unmeasured states
effectively. To address the challenges posed by unknown functions and
algebraic-loop problems, we opted for radial basis function neural networks
(RBF NNs), chosen for their superior performance. Our methodology successfully
mitigates Zeno's behavior and ensures stability within a narrowly defined set.
The efficacy of our proposed solution is validated through two illustrative
simulation examples.
|
2501.00525 | Is Segment Anything Model 2 All You Need for Surgery Video Segmentation?
A Systematic Evaluation | cs.CV | Surgery video segmentation is an important topic in the surgical AI field. It
allows the AI model to understand the spatial information of a surgical scene.
Meanwhile, due to the lack of annotated surgical data, surgery segmentation
models suffer from limited performance. With the emergence of SAM2, a large
foundation model for video segmentation trained on natural videos, zero-shot
surgical video segmentation has become more feasible yet remains largely
unexplored. In this paper, we systematically evaluate the performance of the
SAM2 model on the zero-shot surgery video segmentation task. We
conducted experiments under different configurations, including different
prompting strategies, robustness, etc. Moreover, we conducted an empirical
evaluation of performance across 9 datasets covering 17 different types of
surgeries.
|
2501.00527 | Exploiting Boundary Loss for the Hierarchical Panoptic Segmentation of
Plants and Leaves | cs.CV cs.LG | Precision agriculture leverages data and machine learning so that farmers can
monitor their crops and target interventions precisely. This enables the
precision application of herbicide only to weeds, or the precision application
of fertilizer only to undernourished crops, rather than to the entire field.
The approach promises to maximize yields while minimizing resource use and harm
to the surrounding environment. To this end, we propose a hierarchical panoptic
segmentation method that simultaneously determines leaf count (as an indicator
of plant growth) and locates weeds within an image. In particular, our approach
aims to improve the segmentation of smaller instances like the leaves and weeds
by incorporating focal loss and boundary loss. Not only does this result in
competitive performance, achieving a PQ+ of 81.89 on the standard training set,
but we also demonstrate we can improve leaf-counting accuracy with our method.
The code is available at
https://github.com/madeleinedarbyshire/HierarchicalMask2Former.
|
2501.00528 | PyMilo: A Python Library for ML I/O | cs.LG cs.AI | PyMilo is an open-source Python package that addresses the limitations of
existing Machine Learning (ML) model storage formats by providing a
transparent, reliable, and safe method for exporting and deploying trained
models. Current formats, such as pickle and other binary formats, pose
significant reliability, safety, and transparency issues. In
contrast, PyMilo serializes ML models in a transparent non-executable format,
enabling straightforward and safe model exchange, while also facilitating the
deserialization and deployment of exported models in production environments.
This package aims to provide a seamless, end-to-end solution for the
exportation and importation of pre-trained ML models, which simplifies the
model development and deployment pipeline.
|
2501.00529 | Sinhala Transliteration: A Comparative Analysis Between Rule-based and
Seq2Seq Approaches | cs.CL | For reasons of convenience and a lack of tech literacy, transliteration
(i.e., Romanizing native scripts instead of using localization tools) is
highly prevalent in the context of low-resource languages such as Sinhala,
which have their own writing script. In this study, our focus is on Romanized
Sinhala transliteration. We propose two methods to address this problem: Our
baseline is a rule-based method, which is then compared against our second
method where we approach the transliteration problem as a sequence-to-sequence
task akin to the established Neural Machine Translation (NMT) task. For the
latter, we propose a Transformer-based Encoder-Decoder solution. We observed
that the Transformer-based method captures many ad-hoc patterns within the
Romanized scripts that the rule-based method misses. The code base associated
with this paper is available on GitHub -
https://github.com/kasunw22/Sinhala-Transliterator/
|
2501.00530 | Superposition in Transformers: A Novel Way of Building Mixture of
Experts | cs.CL cs.AI | Catastrophic forgetting remains a major challenge when adapting large
language models (LLMs) to new tasks or domains. Conventional fine-tuning often
overwrites existing knowledge, causing performance degradation on original
tasks. We introduce Superposition in Transformers, a novel architecture that
leverages autoencoders to superimpose the hidden representations of a base
model and a fine-tuned model within a shared parameter space. By using
B-spline-based blending coefficients and autoencoders that adaptively
reconstruct hidden states based on the input data distribution, our method
effectively mitigates catastrophic forgetting and enables a new paradigm of
"in-model" superposition. This approach preserves original model capabilities
while allowing compact domain-specific expertise to be added, and it supports
dynamic switching between model states during inference.
|
2501.00533 | Rapid Learning in Constrained Minimax Games with Negative Momentum | cs.LG | In this paper, we delve into the utilization of the negative momentum
technique in constrained minimax games. From an intuitive mechanical
standpoint, we introduce a novel framework for momentum buffer updating, which
extends the findings of negative momentum from the unconstrained setting to the
constrained setting and provides a universal enhancement to the classic
game-solver algorithms. Additionally, we provide theoretical guarantees of
convergence for our momentum-augmented algorithms with an entropy regularizer. We
then extend these algorithms to their extensive-form counterparts. Experimental
results on both Normal Form Games (NFGs) and Extensive Form Games (EFGs)
demonstrate that our momentum techniques can significantly improve algorithm
performance, surpassing both their original versions and the SOTA baselines by
a large margin.
|
2501.00537 | Extending XReason: Formal Explanations for Adversarial Detection | cs.AI cs.CR cs.LG | Explainable Artificial Intelligence (XAI) plays an important role in
improving the transparency and reliability of complex machine learning models,
especially in critical domains such as cybersecurity. Despite the prevalence of
heuristic interpretation methods such as SHAP and LIME, these techniques often
lack formal guarantees and may produce inconsistent local explanations. To
address this need, a few tools have emerged that use formal methods to provide
formal explanations. Among these, XReason uses a SAT solver to generate formal
instance-level explanations for XGBoost models. In this paper, we extend the
XReason tool to support LightGBM models as well as class-level explanations.
Additionally, we implement a mechanism to generate and detect adversarial
examples in XReason. We evaluate the efficiency and accuracy of our approach on
the CICIDS-2017 dataset, a widely used benchmark for detecting network attacks.
|