| id | title | categories | abstract |
|---|---|---|---|
2501.00538 | Adaptive Tabu Dropout for Regularization of Deep Neural Network | cs.LG | Dropout is an effective strategy for the regularization of deep neural
networks. Applying a tabu to units dropped in the most recent epoch, so that
they are retained for training, ensures diversification in dropout. In this
paper, we improve the Tabu Dropout mechanism for training deep neural networks
in two ways. Firstly, we propose to use a tabu tenure, i.e., the number of
epochs for which a particular unit will not be dropped. Different tabu tenures provide
diversification to boost the training of deep neural networks based on the
search landscape. Secondly, we propose an adaptive tabu algorithm that
automatically selects the tabu tenure based on the training performance
across epochs. On several standard benchmark datasets, the experimental
results show that adaptive tabu dropout and tabu-tenure dropout diversify
training and perform significantly better than the standard dropout and basic
tabu dropout mechanisms.
|
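The tabu-tenure mechanism described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the fixed tenure, the boolean-mask convention, and the counter update order are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tabu_dropout_mask(n_units, p, tabu_counter, tenure):
    """Sample a dropout mask, but never drop units still under tabu.

    tabu_counter[i] > 0 means unit i was dropped recently and is
    protected (kept) for that many more epochs.
    """
    drop = (rng.random(n_units) < p) & (tabu_counter == 0)  # only free units may drop
    tabu_counter[:] = np.maximum(tabu_counter - 1, 0)       # age existing tabus
    tabu_counter[drop] = tenure                             # newly dropped units become tabu
    return ~drop  # True = keep the unit this epoch

n_units, p, tenure = 8, 0.5, 2
counter = np.zeros(n_units, dtype=int)
masks = [tabu_dropout_mask(n_units, p, counter, tenure) for _ in range(5)]

# Property the tenure enforces: a unit dropped in one epoch is kept
# (not dropped) in the following epochs while its tabu is active.
for t in range(1, len(masks)):
    dropped_before = ~masks[t - 1]
    assert masks[t][dropped_before].all()
```

An adaptive variant would replace the constant `tenure` with a value chosen from the recent training loss trajectory.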
2501.00539 | MCP-Solver: Integrating Language Models with Constraint Programming
Systems | cs.AI cs.CL cs.LG cs.SE | While Large Language Models (LLMs) perform exceptionally well at natural
language tasks, they often struggle with precise formal reasoning and the
rigorous specification of problems. We present MCP-Solver, a prototype
implementation of the Model Context Protocol that demonstrates the potential
for systematic integration between LLMs and constraint programming systems. Our
implementation provides interfaces for the creation, editing, and validation of
a constraint model. Through an item-based editing approach with integrated
validation, the system ensures model consistency at every modification step and
enables structured iterative refinement. The system handles concurrent solving
sessions and maintains a persistent knowledge base of modeling insights.
Initial experiments suggest that this integration can effectively combine LLMs'
natural language understanding with constraint-solving capabilities. Our
open-source implementation is a proof of concept for integrating formal reasoning
systems with LLMs through standardized protocols. While further research is
needed to establish comprehensive formal guarantees, this work takes a first
step toward principled integration of natural language processing with
constraint-based reasoning.
|
2501.00546 | Performance Analysis and Optimization of STAR-RIS-Aided Cell-Free
Massive MIMO Systems Relying on Imperfect Hardware | cs.IT eess.SP math.IT | Simultaneously transmitting and reflecting reconfigurable intelligent surface
(STAR-RIS)-aided cell-free massive multiple-input multiple-output (CF-mMIMO)
systems are investigated under spatially correlated fading channels using
realistic imperfect hardware. Specifically, the transceiver distortions,
time-varying phase noise, and RIS phase shift errors are
considered. Upon considering imperfect hardware and pilot contamination, we
derive a linear minimum mean-square error (MMSE) criterion-based cascaded
channel estimator. Moreover, a closed-form expression of the downlink ergodic
spectral efficiency (SE) is derived based on maximum ratio (MR) based transmit
precoding and channel statistics, where both a finite number of access points
(APs) and STAR-RIS elements as well as imperfect hardware are considered.
Furthermore, by exploiting the ergodic signal-to-interference-plus-noise ratios
(SINRs) among user equipment (UE), a max-min fairness problem is formulated for
the joint optimization of the passive transmitting and reflecting beamforming
(BF) at the STAR-RIS as well as of the power control coefficients. An
alternating optimization (AO) algorithm is proposed for solving the resultant
problems, where iterative adaptive particle swarm optimization (APSO) and
bisection methods are proposed for circumventing the non-convexity of the RIS
passive BF and the quasi-concave power control sub-problems, respectively. Our
simulation results illustrate that the STAR-RIS-aided CF-mMIMO system attains
higher SE than its RIS-aided counterpart. The impact of different hardware
parameters is also evaluated. Additionally, it is demonstrated that the SE of
the worst UE can be significantly improved by exploiting the proposed AO-based
algorithm compared to conventional solutions associated with random passive BF
and equal-power scenarios.
|
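The bisection step used for the quasi-concave power-control sub-problem above can be illustrated generically: bisect on the max-min SINR target and query a feasibility oracle at each midpoint. The oracle below is a toy stand-in (a linear power budget), not the paper's actual SINR constraints.

```python
def bisect_max_min(feasible, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection for a quasi-concave max-min problem: returns the largest
    target t (within tol) such that feasible(t) holds, assuming
    feasibility is monotone non-increasing in t."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid   # target achievable: push it higher
        else:
            hi = mid   # infeasible: back off
    return lo

# Toy feasibility oracle: a shared power budget of 4, where the power a
# user needs grows linearly with the common SINR target t (illustrative only).
gains = [1.0, 2.0, 4.0]
feasible = lambda t: sum(t / g for g in gains) <= 4.0
t_star = bisect_max_min(feasible)
# Analytic optimum: t * (1 + 1/2 + 1/4) = 4, i.e. t = 16/7
```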
2501.00549 | So Timely, Yet So Stale: The Impact of Clock Drift in Real-Time Systems | cs.IT cs.NI cs.SY eess.SY math.IT | In this paper, we address the problem of timely delivery of status update
packets in a real-time communication system, where a transmitter sends status
updates generated by a source to a receiver over an unreliable channel. The
timestamps of transmitted and received packets are measured using separate
clocks located at the transmitter and receiver, respectively. To account for
possible clock drift between these two clocks, we consider both deterministic
and probabilistic drift scenarios. We analyze the system's performance in
terms of the Age of Information (AoI) and derive closed-form expressions for
the distribution and the average AoI under both clock drift models.
Additionally, we explore the impact of key system parameters on the average AoI
through analytical and numerical results.
|
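The clock-drift effect on timeliness can be seen in a toy calculation: the receiver computes staleness by subtracting the transmitter's embedded timestamp from its own, drifting, clock reading. The linear-drift model and the numbers below are illustrative assumptions, not the paper's analysis.

```python
def measured_age(t_gen_tx, t_recv_tx, drift, offset=0.0):
    """Age computed at the receiver: its local clock reading at the
    arrival instant minus the transmitter's embedded timestamp.

    t_gen_tx, t_recv_tx: generation/arrival instants in transmitter time;
    the receiver clock runs at rate (1 + drift) with a fixed offset.
    """
    recv_clock = (1.0 + drift) * t_recv_tx + offset  # receiver's drifting clock
    return recv_clock - t_gen_tx

true_age = 100.5 - 100.0                 # 0.5 s of actual staleness
biased = measured_age(100.0, 100.5, drift=1e-3)
# The drift inflates the measured age by drift * t_recv_tx.
assert abs(biased - (true_age + 1e-3 * 100.5)) < 1e-12
```

Even a small rate mismatch therefore biases the measured AoI by a term that grows with elapsed time, which is why the drift model matters.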
2501.00555 | Monty Hall and Optimized Conformal Prediction to Improve Decision-Making
with LLMs | cs.LG cs.AI stat.AP stat.ML | Large language models (LLMs) are empowering decision-making in several
applications, including tool or API usage and answering multiple-choice
questions (MCQs). However, they often make overconfident, incorrect
predictions, which can be risky in high-stakes settings like healthcare and
finance. To mitigate these risks, recent works have used conformal prediction
(CP), a model-agnostic framework for distribution-free uncertainty
quantification. CP transforms a \emph{score function} into prediction sets that
contain the true answer with high probability. While CP provides this coverage
guarantee for arbitrary scores, the score quality significantly impacts
prediction set sizes. Prior works have relied on LLM logits or other heuristic
scores, lacking quality guarantees. We address this limitation by introducing
CP-OPT, an optimization framework to learn scores that minimize set sizes while
maintaining coverage. Furthermore, inspired by the Monty Hall problem, we
extend CP's utility beyond uncertainty quantification to improve accuracy. We
propose \emph{conformal revision of questions} (CROQ) to revise the problem by
narrowing down the available choices to those in the prediction set. The
coverage guarantee of CP ensures that the correct choice is in the revised
question prompt with high probability, while the smaller number of choices
increases the LLM's chances of answering it correctly. Experiments on MMLU,
ToolAlpaca, and TruthfulQA datasets with Gemma-2, Llama-3 and Phi-3 models show
that CP-OPT significantly reduces set sizes while maintaining coverage, and
CROQ improves accuracy over standard inference, especially when paired with
CP-OPT scores. Together, CP-OPT and CROQ offer a robust framework for improving
both the safety and accuracy of LLM-driven decision-making.
|
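The split-conformal construction that CP-OPT and CROQ build on can be sketched directly. The scores here are synthetic noise and the function names are illustrative, not the paper's implementation; CROQ would then re-prompt the LLM with only the options in each returned set.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_mcq(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split conformal prediction sets for multiple-choice answers.

    cal_scores: (n, k) option scores (higher = more likely) on a
    calibration set; cal_labels: (n,) true option indices. Returns, per
    test question, the options whose score clears a calibrated threshold,
    so the true answer is covered with probability >= 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity = negative score of the true option.
    nonconf = -cal_scores[np.arange(n), cal_labels]
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(nonconf, level, method="higher")
    return [np.flatnonzero(-s <= q) for s in test_scores]

# Toy data: 4-option MCQs with noisy logit-like scores (illustrative only).
true = rng.integers(0, 4, size=500)
scores = rng.normal(0, 1, size=(500, 4))
scores[np.arange(500), true] += 2.0          # true option tends to score higher
sets = conformal_mcq(scores[:400], true[:400], scores[400:])
coverage = np.mean([t in s for t, s in zip(true[400:], sets)])
# Empirical coverage should land near the 90% target.
```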
2501.00556 | Finding the Underlying Viscoelastic Constitutive Equation via Universal
Differential Equations and Differentiable Physics | physics.flu-dyn cs.LG | This research employs Universal Differential Equations (UDEs) alongside
differentiable physics to model viscoelastic fluids, merging conventional
differential equations, neural networks and numerical methods to reconstruct
missing terms in constitutive models. This study focuses on analyzing four
viscoelastic models: Upper Convected Maxwell (UCM), Johnson-Segalman, Giesekus,
and Exponential Phan-Thien-Tanner (ePTT), through the use of synthetic
datasets. The methodology was tested across different experimental conditions,
including oscillatory and startup flows. While the UDE framework effectively
predicts shear and normal stresses for most models, it demonstrates some
limitations when applied to the ePTT model. The findings underscore the
potential of UDEs in fluid mechanics while identifying critical areas for
methodological improvement. Additionally, a model distillation approach is
employed to extract simplified models from complex ones, emphasizing the
versatility and robustness of UDEs in rheological modeling.
|
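The UDE structure above (known physics plus a neural closure for the missing constitutive term) can be sketched minimally. The scalar Maxwell-type relaxation, the random untrained MLP, and forward Euler integration are deliberate simplifications; a real setup would use the full tensor constitutive models and train the network through a differentiable solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP standing in for the unknown closure term; in a real UDE its
# weights would be trained against data (here they are random).
W1, b1 = rng.normal(0, 0.1, (8, 2)), np.zeros(8)
W2, b2 = rng.normal(0, 0.1, (1, 8)), np.zeros(1)

def closure(sigma, gamma_dot):
    h = np.tanh(W1 @ np.array([sigma, gamma_dot]) + b1)
    return float(W2 @ h + b2)

def ude_rhs(sigma, gamma_dot, eta=1.0, lam=1.0):
    # Known part: a scalar Maxwell-type relaxation (illustrative, not the
    # full UCM tensor equation); the NN supplies the missing term.
    return (eta * gamma_dot - sigma) / lam + closure(sigma, gamma_dot)

def integrate(sigma0, gamma_dot, dt=1e-2, steps=500):
    """Forward-Euler startup-flow simulation of the hybrid model."""
    sigma = sigma0
    for _ in range(steps):
        sigma += dt * ude_rhs(sigma, gamma_dot)
    return sigma

stress = integrate(0.0, gamma_dot=1.0)  # steady startup shear stress
```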
2501.00559 | AraSTEM: A Native Arabic Multiple Choice Question Benchmark for
Evaluating LLMs Knowledge In STEM Subjects | cs.CL cs.AI | Large Language Models (LLMs) have shown remarkable capabilities, not only in
generating human-like text, but also in acquiring knowledge. This highlights
the need to go beyond the typical Natural Language Processing downstream
benchmarks and assess various aspects of LLMs, including knowledge and
reasoning. Numerous benchmarks have been developed to evaluate LLMs' knowledge,
but they predominantly focus on the English language. Given that many LLMs are
multilingual, relying solely on benchmarking English knowledge is insufficient.
To address this issue, we introduce AraSTEM, a new Arabic multiple-choice
question dataset aimed at evaluating LLMs' knowledge in STEM subjects. The
dataset spans a range of topics at different levels, requiring models to
demonstrate a deep understanding of scientific Arabic in order to achieve high
accuracy. Our findings show that publicly available models of varying sizes
struggle with this dataset, underscoring the need for more localized language
models. The dataset is freely accessible on Hugging Face.
|
2501.00560 | Re-evaluating Automatic LLM System Ranking for Alignment with Human
Preference | cs.CL cs.AI cs.LG | Evaluating and ranking the capabilities of different LLMs is crucial for
understanding their performance and alignment with human preferences. Due to
the high cost and time-consuming nature of human evaluations, an automatic LLM
bencher (i.e., an automatic evaluation framework that aims to rank LLMs based
on their alignment with human preferences) is indispensable. An automatic LLM
bencher consists of four components: the input set (e.g., a user instruction),
the evaluation model (e.g., an LLM), the evaluation type (e.g., pairwise
comparison), and the aggregation method (e.g., the Elo rating system). However,
previous work has not thoroughly explored how to select these components or how
their different combinations influence the results. In this work, through
controlled experiments, we provide a series of recommendations on how to choose
each component to better automate the evaluation of LLMs. Furthermore, we
discovered that when evaluating LLMs with similar performance, the performance
of the automatic LLM bencher declines sharply, underscoring the limitations of
current benchers and calling for future work. Lastly, we found that the
evaluation models' performance at the instance level (e.g., the accuracy of
selecting the best output) does not always align with their effectiveness when
used as a component of a bencher, highlighting the importance of dedicated
system-level evaluation of benchers.
|
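The aggregation component can be made concrete with the Elo update the abstract mentions. The model names, K-factor, and initial rating of 1000 are illustrative choices, not details from the paper.

```python
def elo_update(r_a, r_b, winner, k=32):
    """One pairwise-comparison Elo update. winner: 'a', 'b', or 'tie'."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))
    score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

def rank_llms(models, battles, k=32):
    """Aggregate pairwise judge verdicts into a ranking of models."""
    ratings = {m: 1000.0 for m in models}
    for a, b, winner in battles:
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], winner, k)
    return sorted(ratings, key=ratings.get, reverse=True)

# Toy verdicts from an LLM judge (hypothetical model names).
battles = [("gpt", "llama", "a"), ("gpt", "phi", "a"), ("llama", "phi", "a")]
ranking = rank_llms(["gpt", "llama", "phi"], battles)
# "gpt" wins both of its battles, so it tops the ranking.
assert ranking[0] == "gpt"
```

Note that Elo is order-dependent: shuffling `battles` can change the final ratings, which is one reason component choice affects bencher results.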
2501.00562 | An Overview and Discussion on Using Large Language Models for
Implementation Generation of Solutions to Open-Ended Problems | cs.CL cs.AI | Large Language Models offer new opportunities to devise automated
implementation generation methods that can tackle problem-solving activities
beyond traditional methods, which require algorithmic specifications and can
use only static domain knowledge, like performance metrics and libraries of
basic building blocks. Large Language Models could support new methods for
problem-solving activities on open-ended problems, such as problem framing,
exploring possible solving approaches, feature elaboration and combination,
more advanced implementation assessment, and handling unexpected situations.
This report summarizes current work on Large Language Models, including model
prompting, Reinforcement Learning, and Retrieval-Augmented Generation, and
discusses future research requirements.
|
2501.00565 | Polynomial time sampling from log-smooth distributions in fixed
dimension under semi-log-concavity of the forward diffusion with application
to strongly dissipative distributions | stat.CO cs.LG math.ST stat.TH | In this article, we provide a stochastic sampling algorithm with polynomial
complexity in fixed dimension that leverages the recent advances on diffusion
models where it is shown that under mild conditions, sampling can be achieved
via an accurate estimation of intermediate scores across the marginals
$(p_t)_{t\ge 0}$ of the standard Ornstein-Uhlenbeck process started at $\mu$,
the density we wish to sample from. The heart of our method consists in
approximating these scores via a computationally cheap estimator and relating the
variance of this estimator to the smoothness properties of the forward process.
Under the assumption that the density to sample from is $L$-log-smooth and that
the forward process is semi-log-concave: $-\nabla^2 \log(p_t) \succeq -\beta
I_d$ for some $\beta \geq 0$, we prove that our algorithm achieves an expected
$\epsilon$ error in $\text{KL}$ divergence in
$O(d^7(L+\beta)^2L^{d+2}\epsilon^{-2(d+3)}(d+m_2(\mu))^{2(d+1)})$ time with
$m_2(\mu)$ the second-order moment of $\mu$. In particular, our result allows
us to fully reduce the problem of sampling from a log-smooth distribution to a
regularity estimation problem. As an application, we derive an exponential
complexity improvement for the problem of sampling from an $L$-log-smooth
distribution that is $\alpha$-strongly log-concave outside some ball of radius
$R$: after proving that such distributions verify the semi-log-concavity
assumption, a result which might be of independent interest, we recover a
$poly(R, L, \alpha^{-1}, \epsilon^{-1})$ complexity in fixed dimension which
exponentially improves upon the previously known $poly(e^{LR^2}, L,\alpha^{-1},
\log(\epsilon^{-1}))$ complexity in the low precision regime.
|
2501.00569 | Probing Visual Language Priors in VLMs | cs.CV cs.LG | Despite recent advances, Vision-Language Models (VLMs) may over-rely
on visual language priors existing in their training data rather than true
visual reasoning. To investigate this, we introduce ViLP, a benchmark featuring
deliberately out-of-distribution images synthesized via image generation models
and out-of-distribution Q&A pairs. Each question in ViLP is coupled with three
potential answers and three corresponding images: one that can be resolved by
text priors alone and two that demand visual reasoning. Although humans
achieve near-perfect accuracy, modern VLMs falter; for instance, GPT-4 achieves
only 66.17% on ViLP. To alleviate this, we propose a self-improving framework
in which models generate new VQA data, then apply pixel-level and semantic
corruptions to form "good-bad" image pairs for self-training. Our training
objectives compel VLMs to focus more on the actual visual inputs, and we
demonstrate their effectiveness in boosting the performance of open-source
VLMs, including LLaVA-v1.5 and Cambrian.
|
2501.00571 | KnowRA: Knowledge Retrieval Augmented Method for Document-level Relation
Extraction with Comprehensive Reasoning Abilities | cs.CL | Document-level relation extraction (Doc-RE) aims to extract relations between
entities across multiple sentences. Compared to sentence-level RE, Doc-RE
therefore requires more comprehensive, human-like reasoning abilities,
involving complex cross-sentence interactions between entities, contexts, and
external general knowledge. However, most existing Doc-RE methods focus
on optimizing a single reasoning ability, but lack the ability to utilize
external knowledge for comprehensive reasoning on long documents. To solve
these problems, we propose a knowledge retrieval augmented method, named
KnowRA, with comprehensive reasoning abilities that autonomously determines
whether to accept external knowledge to assist Doc-RE. Firstly, we constructed
a document
graph for semantic encoding and integrated the co-reference resolution model to
augment the co-reference reasoning ability. Then, we expanded the document
graph into a document knowledge graph by retrieving an external knowledge base
for common-sense reasoning, and presented a novel knowledge filtration method
to filter out irrelevant knowledge. Finally, we proposed an axis
attention mechanism to build direct and indirect associations with intermediary
entities for achieving cross-sentence logical reasoning. Extensive experiments
conducted on two datasets verified the effectiveness of our method compared to
the state-of-the-art baselines. Our code is available at
https://anonymous.4open.science/r/KnowRA.
|
2501.00574 | VideoChat-Flash: Hierarchical Compression for Long-Context Video
Modeling | cs.CV cs.LG | Long-context modeling is a critical capability for multimodal large language
models (MLLMs), enabling them to process long-form contents with implicit
memorization. Despite recent advances, handling extremely long videos remains
challenging due to the difficulty in maintaining crucial features over extended
sequences. This paper introduces a Hierarchical visual token Compression (HiCo)
method designed for high-fidelity representation and a practical context
modeling system VideoChat-Flash tailored for multimodal long-sequence
processing. HiCo capitalizes on the redundancy of visual information in long
videos to compress long video context from the clip-level to the video-level,
reducing the compute significantly while preserving essential details.
VideoChat-Flash features a multi-stage short-to-long learning scheme, a rich
dataset of real-world long videos named LongVid, and an upgraded
"Needle-In-A-video-Haystack" (NIAH) for evaluating context capacities. In
extensive experiments, VideoChat-Flash shows leading performance on both
mainstream long and short video benchmarks at the 2B and 7B model scales. It
is the first open-source model to achieve 99.1% accuracy over 10,000 frames in
NIAH.
|
2501.00581 | Causal Graph Guided Steering of LLM Values via Prompts and Sparse
Autoencoders | cs.CL cs.AI cs.LG | As large language models (LLMs) become increasingly integrated into critical
applications, aligning their behavior with human values presents significant
challenges. Current methods, such as Reinforcement Learning from Human Feedback
(RLHF), often focus on a limited set of values and can be resource-intensive.
Furthermore, the correlation between values has been largely overlooked and
remains underutilized. Our framework addresses this limitation by mining a
causal graph that elucidates the implicit relationships among various values
within the LLMs. Leveraging the causal graph, we implement two lightweight
mechanisms for value steering: prompt template steering and Sparse Autoencoder
feature steering, and analyze the effects of altering one value dimension on
others. Extensive experiments conducted on Gemma-2B-IT and Llama3-8B-IT
demonstrate the effectiveness and controllability of our steering methods.
|
2501.00584 | Online Video Understanding: A Comprehensive Benchmark and
Memory-Augmented Method | cs.CV cs.LG | Multimodal Large Language Models (MLLMs) have shown significant progress in
offline video understanding. However, applying these models to real-world
scenarios, such as autonomous driving and human-computer interaction, presents
unique challenges due to the need for real-time processing of continuous online
video streams. To this end, this paper presents systematic efforts from three
perspectives: evaluation benchmark, model architecture, and training strategy.
First, we introduce OVBench, a comprehensive question-answering benchmark
specifically designed to evaluate models' ability to perceive, memorize, and
reason within online video contexts. It features six core task types across
three temporal contexts (past, present, and future), forming 16 subtasks from
diverse datasets. Second, we propose a new Pyramid Memory Bank (PMB) that
effectively retains key spatiotemporal information in video streams. Third, we
propose an offline-to-online learning paradigm, designing an interleaved
dialogue format for online video data and constructing an instruction-tuning
dataset tailored for online video training. This framework led to the
development of VideoChat-Online, a robust and efficient model for online video
understanding. Despite the lower computational cost and higher efficiency,
VideoChat-Online outperforms existing state-of-the-art offline and online
models across popular offline video benchmarks and OVBench, demonstrating the
effectiveness of our model architecture and training strategy.
|
2501.00585 | Sidewalk Hazard Detection Using Variational Autoencoder and One-Class
SVM | cs.CV cs.LG cs.RO | The unpredictable nature of outdoor settings introduces numerous safety
concerns, making hazard detection crucial for safe navigation. This paper
introduces a novel system for sidewalk safety navigation utilizing a hybrid
approach that combines a Variational Autoencoder (VAE) with a One-Class Support
Vector Machine (OCSVM). The system is designed to detect anomalies on sidewalks
that could potentially pose walking hazards. A dataset comprising over 15,000
training frames and 5,000 testing frames was collected using video recordings,
capturing various sidewalk scenarios, including normal and hazardous
conditions. During deployment, the VAE utilizes its reconstruction mechanism to
detect anomalies within a frame. Poor reconstruction by the VAE implies the
presence of an anomaly, after which the OCSVM is used to confirm whether the
anomaly is hazardous or non-hazardous. The proposed VAE model demonstrated
strong performance, with a high Area Under the Curve (AUC) of 0.94, effectively
distinguishing anomalies that could be potential hazards. The OCSVM is employed
to reduce the detection of false hazard anomalies, such as manhole or water
valve covers. This approach achieves an accuracy of 91.4%, providing a highly
reliable system for distinguishing between hazardous and non-hazardous
scenarios. These results suggest that the proposed system offers a robust
solution for hazard detection in uncertain environments.
|
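The two-stage decision logic described above can be sketched with stand-ins: a dummy reconstruction in place of the VAE and a centroid-distance rule in place of the OCSVM. Everything below (thresholds, features, the 8x8 "frame") is an illustrative assumption, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(frame, reconstruct):
    """Per-frame anomaly score: MSE between a frame and its reconstruction."""
    return float(np.mean((frame - reconstruct(frame)) ** 2))

def two_stage_hazard_detector(frame, reconstruct, features, anomaly_thr,
                              classify_hazard):
    """Stage 1: flag a frame whose reconstruction is poor (VAE role).
    Stage 2: a one-class rule decides hazardous vs benign anomaly
    (standing in for the OCSVM)."""
    if reconstruction_error(frame, reconstruct) <= anomaly_thr:
        return "normal"
    return "hazard" if classify_hazard(features(frame)) else "benign_anomaly"

# Illustrative stand-ins: a near-identity "reconstruction" and a
# centroid-distance one-class rule fit on benign-anomaly features.
reconstruct = lambda x: np.clip(x + rng.normal(0, 0.01, x.shape), 0, 1)
benign_centroid = np.array([0.2, 0.1])
features = lambda x: np.array([x.mean(), x.std()])
classify_hazard = lambda f: np.linalg.norm(f - benign_centroid) > 0.5

normal_frame = np.full((8, 8), 0.2)
label = two_stage_hazard_detector(normal_frame, reconstruct, features,
                                  0.01, classify_hazard)
```

A frame that reconstructs well never reaches the second stage, which is what keeps false hazard detections (e.g., manhole covers) down in the paper's design.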
2501.00586 | Advanced Lung Nodule Segmentation and Classification for Early Detection
of Lung Cancer using SAM and Transfer Learning | eess.IV cs.CV cs.LG | Lung cancer is an extremely lethal disease primarily due to its late-stage
diagnosis and significant mortality rate, making it a major cause of
cancer-related deaths globally. Machine Learning (ML) and Convolutional Neural
Network (CNN) based Deep Learning (DL) techniques are primarily used for
precise segmentation and classification of cancerous nodules in the CT
(Computed Tomography) or MRI images. This study introduces an innovative
approach to lung nodule segmentation by utilizing the Segment Anything Model
(SAM) combined with transfer learning techniques. Precise segmentation of lung
nodules is crucial for the early detection of lung cancer. The proposed method
leverages Bounding Box prompts and a vision transformer model to enhance
segmentation performance, achieving high accuracy, Dice Similarity Coefficient
(DSC) and Intersection over Union (IoU) metrics. The integration of SAM and
Transfer Learning significantly improves Computer-Aided Detection (CAD) systems
in medical imaging, particularly for lung cancer diagnosis. The findings
demonstrate the proposed model's effectiveness in precisely segmenting lung
nodules from CT scans, underscoring its potential to advance early detection
and improve patient care outcomes in lung cancer diagnosis. The results show
that the SAM model with transfer learning achieves a DSC of 97.08% and an IoU
of 95.6% for segmentation, and an accuracy of 96.71% for classification,
indicating noteworthy performance compared to existing techniques.
|
2501.00588 | Privacy-Preserving Distributed Defense Framework for DC Microgrids
Against Exponentially Unbounded False Data Injection Attacks | eess.SY cs.SY | This paper introduces a novel, fully distributed control framework for DC
microgrids, enhancing resilience against exponentially unbounded false data
injection (EU-FDI) attacks. Our framework features a consensus-based secondary
control for each converter, effectively addressing these advanced threats. To
further safeguard sensitive operational data, a privacy-preserving mechanism is
incorporated into the control design, ensuring that critical information
remains secure even under adversarial conditions. Rigorous Lyapunov stability
analysis confirms the framework's ability to maintain critical DC microgrid
operations like voltage regulation and load sharing under EU-FDI threats. The
framework's practicality is validated through hardware-in-the-loop experiments,
demonstrating its enhanced resilience and robust privacy protection against the
complex challenges posed by rapidly varying FDI attacks.
|
2501.00593 | Setting Standards in Turkish NLP: TR-MMLU for Large Language Model
Evaluation | cs.CL | Language models have made remarkable advancements in understanding and
generating human language, achieving notable success across a wide array of
applications. However, evaluating these models remains a significant challenge,
particularly for resource-limited languages such as Turkish. To address this
gap, we introduce the Turkish MMLU (TR-MMLU) benchmark, a comprehensive
evaluation framework designed to assess the linguistic and conceptual
capabilities of large language models (LLMs) in Turkish. TR-MMLU is constructed
from a carefully curated dataset comprising 6,200 multiple-choice questions
across 62 sections, selected from a pool of 280,000 questions spanning 67
disciplines and over 800 topics within the Turkish education system. This
benchmark provides a transparent, reproducible, and culturally relevant tool
for evaluating model performance. It serves as a standard framework for Turkish
NLP research, enabling detailed analyses of LLMs' capabilities in processing
Turkish text and fostering the development of more robust and accurate language
models. In this study, we evaluate state-of-the-art LLMs on TR-MMLU, providing
insights into their strengths and limitations for Turkish-specific tasks. Our
findings reveal critical challenges, such as the impact of tokenization and
fine-tuning strategies, and highlight areas for improvement in model design. By
setting a new standard for evaluating Turkish language models, TR-MMLU aims to
inspire future innovations and support the advancement of Turkish NLP research.
|
2501.00595 | Unbiased GNN Learning via Fairness-Aware Subgraph Diffusion | cs.LG cs.AI | Graph Neural Networks (GNNs) have demonstrated remarkable efficacy in
tackling a wide array of graph-related tasks across diverse domains. However, a
significant challenge lies in their propensity to generate biased predictions,
particularly with respect to sensitive node attributes such as age and gender.
These biases, inherent in many machine learning models, are amplified in GNNs
due to the message-passing mechanism, which allows nodes to influence each
other, rendering the task of making fair predictions notably challenging. This
issue is particularly pertinent in critical domains where model fairness holds
paramount importance. In this paper, we propose a novel generative
Fairness-Aware Subgraph Diffusion (FASD) method for unbiased GNN learning. The
method initiates by strategically sampling small subgraphs from the original
large input graph, and then proceeds to conduct subgraph debiasing via
generative fairness-aware graph diffusion processes based on stochastic
differential equations (SDEs). To effectively diffuse unfairness in the input
data, we introduce additional adversary bias perturbations to the subgraphs
during the forward diffusion process, and train score-based models to predict
these applied perturbations, enabling them to learn the underlying dynamics of
the biases present in the data. Subsequently, the trained score-based models
are utilized to further debias the original subgraph samples through the
reverse diffusion process. Finally, FASD induces fair node predictions on the
input graph by performing standard GNN learning on the debiased subgraphs.
Experimental results demonstrate the superior performance of the proposed
method over state-of-the-art Fair GNN baselines across multiple benchmark
datasets.
|
2501.00597 | Gaze Prediction as a Function of Eye Movement Type and Individual
Differences | cs.HC cs.LG | Eye movement prediction is a promising area of research with the potential to
improve performance and the user experience of systems based on eye-tracking
technology. In this study, we analyze individual differences in gaze prediction
performance. We use three fundamentally different models within the analysis:
the lightweight Long Short-Term Memory network (LSTM), the transformer-based
network for multivariate time series representation learning (TST), and the
Oculomotor Plant Mathematical Model wrapped in the Kalman Filter framework
(OPKF). Each solution was assessed on different eye-movement types. We show
substantial subject-to-subject variation for all models and eye-movement types.
We found that fixation noise is associated with poorer gaze prediction during
fixations. For saccades, higher velocities are associated with poorer gaze
prediction performance. We think these individual differences are important and
propose that future research should report statistics related to inter-subject
variation. We also propose that future models should be designed to reduce
subject-to-subject variation.
|
2501.00598 | "Dialogue" vs "Dialog" in NLP and AI research: Statistics from a
Confused Discourse | cs.CL | Within computing research, there are two spellings for an increasingly
important term - dialogue and dialog. We analyze thousands of research papers
to understand this "dialog(ue) debacle". Among publications in top venues that
use "dialog(ue)" in the title or abstract, 72% use "dialogue", 24% use
"dialog", and 5% use both in the same title and abstract. This split
distribution is more common in Computing than any other academic discipline. We
investigate trends over ~20 years of NLP/AI research, finding no clear
evidence of a shift over time. Author nationality is weakly correlated with
spelling choice, but far from explains the mixed use. Many prolific authors
publish papers with both spellings. We use several methods (such as syntactic
parses and LM embeddings) to study how dialog(ue) context influences spelling,
finding limited influence. Combining these results together, we discuss
different theories that might explain the dialog(ue) divergence.
|
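A counting procedure of the kind the abstract describes can be sketched with a regex over titles and abstracts. The word-boundary pattern and the three toy strings are illustrative, not the paper's actual pipeline.

```python
import re

def dialogue_spelling_stats(texts):
    """Classify each title+abstract by which spelling(s) of dialog(ue) it uses."""
    counts = {"dialogue": 0, "dialog": 0, "both": 0}
    for t in texts:
        # 'dialog' not immediately followed by 'ue' (suffixes like -s allowed)
        has_short = bool(re.search(r"\bdialog(?!ue)\w*", t, re.I))
        has_long = bool(re.search(r"\bdialogue\w*", t, re.I))
        if has_short and has_long:
            counts["both"] += 1
        elif has_long:
            counts["dialogue"] += 1
        elif has_short:
            counts["dialog"] += 1
    return counts

stats = dialogue_spelling_stats([
    "A dialogue system survey",
    "Neural dialog models",
    "Dialogue vs dialog: a study",
])
assert stats == {"dialogue": 1, "dialog": 1, "both": 1}
```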
2501.00599 | VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with
Video LLM | cs.CV cs.AI cs.LG | Video Large Language Models (Video LLMs) have recently exhibited remarkable
capabilities in general video understanding. However, they mainly focus on
holistic comprehension and struggle with capturing fine-grained spatial and
temporal details. Besides, the lack of high-quality object-level video
instruction data and a comprehensive benchmark further hinders their
advancements. To tackle these challenges, we introduce the VideoRefer Suite to
empower Video LLM for finer-level spatial-temporal video understanding, i.e.,
enabling perception and reasoning on any objects throughout the video.
Specifically, we thoroughly develop the VideoRefer Suite across three essential
aspects: dataset, model, and benchmark. Firstly, we introduce a multi-agent
data engine to meticulously curate a large-scale, high-quality object-level
video instruction dataset, termed VideoRefer-700K. Next, we present the
VideoRefer model, which equips a versatile spatial-temporal object encoder to
capture precise regional and sequential representations. Finally, we
meticulously create a VideoRefer-Bench to comprehensively assess the
spatial-temporal understanding capability of a Video LLM, evaluating it across
various aspects. Extensive experiments and analyses demonstrate that our
VideoRefer model not only achieves promising performance on video referring
benchmarks but also facilitates general video understanding capabilities.
|
2501.00601 | DreamDrive: Generative 4D Scene Modeling from Street View Images | cs.CV cs.AI cs.GR | Synthesizing photo-realistic visual observations from an ego vehicle's
driving trajectory is a critical step towards scalable training of self-driving
models. Reconstruction-based methods create 3D scenes from driving logs and
synthesize geometry-consistent driving videos through neural rendering, but
their dependence on costly object annotations limits their ability to
generalize to in-the-wild driving scenarios. On the other hand, generative
models can synthesize action-conditioned driving videos in a more generalizable
way but often struggle with maintaining 3D visual consistency. In this paper,
we present DreamDrive, a 4D spatial-temporal scene generation approach that
combines the merits of generation and reconstruction, to synthesize
generalizable 4D driving scenes and dynamic driving videos with 3D consistency.
Specifically, we leverage the generative power of video diffusion models to
synthesize a sequence of visual references and further elevate them to 4D with
a novel hybrid Gaussian representation. Given a driving trajectory, we then
render 3D-consistent driving videos via Gaussian splatting. The use of
generative priors allows our method to produce high-quality 4D scenes from
in-the-wild driving data, while neural rendering ensures 3D-consistent video
generation from the 4D scenes. Extensive experiments on nuScenes and street
view images demonstrate that DreamDrive can generate controllable and
generalizable 4D driving scenes, synthesize novel views of driving videos with
high fidelity and 3D consistency, decompose static and dynamic elements in a
self-supervised manner, and enhance perception and planning tasks for
autonomous driving.
|
2501.00602 | STORM: Spatio-Temporal Reconstruction Model for Large-Scale Outdoor
Scenes | cs.CV cs.LG | We present STORM, a spatio-temporal reconstruction model designed for
reconstructing dynamic outdoor scenes from sparse observations. Existing
dynamic reconstruction methods often rely on per-scene optimization, dense
observations across space and time, and strong motion supervision, resulting in
lengthy optimization times, limited generalization to novel views or scenes,
and degraded quality caused by noisy pseudo-labels for dynamics. To address
these challenges, STORM leverages a data-driven Transformer architecture that
directly infers dynamic 3D scene representations--parameterized by 3D Gaussians
and their velocities--in a single forward pass. Our key design is to aggregate
3D Gaussians from all frames using self-supervised scene flows, transforming
them to the target timestep to enable complete (i.e., "amodal") reconstructions
from arbitrary viewpoints at any moment in time. As an emergent property, STORM
automatically captures dynamic instances and generates high-quality masks using
only reconstruction losses. Extensive experiments on public datasets show that
STORM achieves precise dynamic scene reconstruction, surpassing
state-of-the-art per-scene optimization methods (+4.3 to 6.6 PSNR) and existing
feed-forward approaches (+2.1 to 4.7 PSNR) in dynamic regions. STORM
reconstructs large-scale outdoor scenes in 200ms, supports real-time rendering,
and outperforms competitors in scene flow estimation, improving 3D EPE by
0.422m and Acc5 by 28.02%. Beyond reconstruction, we showcase four additional
applications of our model, illustrating the potential of self-supervised
learning for broader dynamic scene understanding.
|
2501.00603 | DiC: Rethinking Conv3x3 Designs in Diffusion Models | cs.CV cs.LG | Diffusion models have shown exceptional performance in visual generation
tasks. Recently, these models have shifted from traditional U-Shaped
CNN-Attention hybrid structures to fully transformer-based isotropic
architectures. While these transformers exhibit strong scalability and
performance, their reliance on complicated self-attention operation results in
slow inference speeds. In contrast to these works, we rethink one of the simplest
yet fastest modules in deep learning, the 3x3 convolution, to construct a scaled-up
purely convolutional diffusion model. We first discover that an Encoder-Decoder
Hourglass design outperforms scalable isotropic architectures for Conv3x3, but it
still falls short of our expectations. To further improve the architecture, we
introduce sparse skip connections to reduce redundancy and improve scalability.
Based on the architecture, we introduce conditioning improvements including
stage-specific embeddings, mid-block condition injection, and conditional
gating. These improvements lead to our proposed Diffusion CNN (DiC), which
serves as a swift yet competitive diffusion architecture baseline. Experiments
on various scales and settings show that DiC surpasses existing diffusion
transformers by considerable margins in terms of performance while keeping a
good speed advantage. Project page: https://github.com/YuchuanTian/DiC
|
2501.00606 | Time-Varying Graph Learning for Data with Heavy-Tailed Distribution | cs.LG | Graph models provide efficient tools to capture the underlying structure of
data defined over networks. Many real-world network topologies are subject to
change over time. Learning to model the dynamic interactions between entities
in such networks is known as time-varying graph learning. Current methodology
for learning such models often lacks robustness to outliers in the data and
fails to handle heavy-tailed distributions, a common feature in many real-world
datasets (e.g., financial data). This paper addresses the problem of learning
time-varying graph models capable of efficiently representing heavy-tailed
data. Unlike traditional approaches, we incorporate graph structures with
specific spectral properties to enhance data clustering in our model. Our
proposed method, which can also deal with noise and missing values in the data,
is based on a stochastic approach, where a non-negative vector auto-regressive
(VAR) model captures the variations in the graph and a Student-t distribution
models the signal originating from this underlying time-varying graph. We
propose an iterative method to learn time-varying graph topologies within a
semi-online framework where only a mini-batch of data is used to update the
graph. Simulations with both synthetic and real datasets demonstrate the
efficacy of our model in analyzing heavy-tailed data, particularly those found
in financial markets.
|
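The non-negative vector autoregression described in the abstract above can be illustrated with a one-step update; this is a toy sketch, and the matrix A, bias b, and elementwise clamp are illustrative assumptions, not the paper's exact formulation:

```python
def nonneg_var_step(w_prev, A, b):
    """One step of a non-negative vector autoregression:
    w_t = max(A @ w_{t-1} + b, 0), applied elementwise,
    so evolving graph edge weights never go negative."""
    n = len(w_prev)
    w_next = []
    for i in range(n):
        val = b[i] + sum(A[i][j] * w_prev[j] for j in range(n))
        w_next.append(max(val, 0.0))  # clamp keeps weights non-negative
    return w_next

# Two edges whose weights evolve under a simple damped dynamic.
A = [[0.5, 0.0],
     [0.0, 0.5]]
b = [0.1, -1.0]
w = nonneg_var_step([1.0, 1.0], A, b)  # [0.6, 0.0]
```

In the paper, the signal on the graph is then modeled with a Student-t distribution around this evolving topology; the sketch only covers the VAR component.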
2501.00608 | Optimizing Speech-Input Length for Speaker-Independent Depression
Classification | cs.CL eess.AS | Machine learning models for speech-based depression classification offer
promise for health care applications. Despite growing work on depression
classification, little is understood about how the length of speech-input
impacts model performance. We analyze results for speaker-independent
depression classification using a corpus of over 1400 hours of speech from a
human-machine health screening application. We examine performance as a
function of response input length for two NLP systems that differ in overall
performance.
Results for both systems show that performance depends on natural length,
elapsed length, and ordering of the response within a session. Systems share a
minimum length threshold, but differ in a response saturation threshold, with
the latter higher for the better system. At saturation, it is better to pose a
new question to the speaker than to continue the current response. These and
additional reported results suggest how applications can be better designed to
both elicit and process optimal input lengths for depression classification.
|
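The two thresholds reported in the abstract above suggest a simple turn-taking policy for screening applications; a minimal sketch, where the threshold values and function name are illustrative, not the paper's:

```python
def next_action(elapsed_seconds, min_len=5.0, saturation_len=60.0):
    """Decide what a screening dialog system should do with the current
    response, given a minimum-length threshold (below which the input is
    too short to score reliably) and a saturation threshold (beyond which
    more speech no longer helps and a new question is more informative)."""
    if elapsed_seconds < min_len:
        return "keep_listening"     # below the minimum length threshold
    if elapsed_seconds >= saturation_len:
        return "pose_new_question"  # saturated: elicit a fresh response
    return "continue_response"      # in the useful range

print(next_action(3.0))   # keep_listening
print(next_action(30.0))  # continue_response
print(next_action(90.0))  # pose_new_question
```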
2501.00611 | Optimal design of triply-periodic minimal surface implants for bone
repair | cs.CE | This work proposes a gradient-based method to design bone implants using
triply-periodic minimal surfaces (TPMS) of spatially varying thickness to
maximize bone in-growth. Bone growth into the implant is estimated using a
finite element based mechanobiological model considering the magnitude and
frequency of in vivo loads, as well as the density distribution of the
surrounding bone. The wall thicknesses of the implant unit cells are determined
via linear interpolation of the thicknesses over a user defined grid of control
points, avoiding mesh dependency and providing control over the sensitivity
computation costs. The TPMS structure is modeled as a homogenized material to
reduce computational cost. Local properties of the implant are determined at
run-time on an element-by-element basis using a pre-constructed surrogate model
of the TPMS's physical and geometric properties as a function of the local wall
thickness and the density of in-grown bone. Design sensitivities of the bone
growth within the implant are computed using the direct sensitivity method. The
methodology is demonstrated on a cementless hip, optimizing the implant for
bone growth subject to wall thickness constraints to ensure manufacturability
and allow cell infiltration.
|
2501.00612 | Breaking through the classical Shannon entropy limit: A new frontier
through logical semantics | cs.IT math.IT | Information theory has provided foundations for the theories of several
application areas critical for modern society, including communications,
computer storage, and AI. A key aspect of Shannon's 1948 theory is a sharp
lower bound on the number of bits needed to encode and communicate a string of
symbols. When he introduced the theory, Shannon famously excluded any notion of
semantics behind the symbols being communicated. This semantics-free notion
went on to have massive impact on communication and computing technologies,
even as multiple proposals for reintroducing semantics in a theory of
information were being made, notably one where Carnap and Bar-Hillel used logic
and reasoning to capture semantics. In this paper we present, for the first
time, a Shannon-style analysis of a communication system equipped with a
deductive reasoning capability, implemented using logical inference. We use
some of the most important techniques developed in information theory to
demonstrate significant and sometimes surprising gains in communication
efficiency afforded by such a capability, also demonstrated through
practical codes. We thus argue that proposals for a semantic information theory
should include the power of deductive reasoning to magnify the value of
transmitted bits as we strive to fully unlock the inherent potential of
semantics.
|
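The Shannon lower bound mentioned in the abstract above is the source entropy: a memoryless source with symbol distribution p needs at least H(p) = -sum(p_i * log2(p_i)) bits per symbol on average. A quick illustration of this standard result (not of the paper's semantic extension):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H(p) = -sum p_i * log2(p_i): the minimum average
    number of bits per symbol for lossless coding of a memoryless source."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin needs exactly 1 bit per flip...
print(entropy_bits([0.5, 0.5]))  # 1.0
# ...while a skewed source can be compressed below 1 bit per symbol.
print(entropy_bits([0.9, 0.1]))  # ~0.469
```

The paper's claim is that deductive reasoning at the receiver lets a semantic code undercut even this limit on the transmitted bits, since inferable content need not be sent.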
2501.00615 | Predicting Barge Presence and Quantity on Inland Waterways using Vessel
Tracking Data: A Machine Learning Approach | cs.LG | This study presents a machine learning approach to predict the number of
barges transported by vessels on inland waterways using tracking data from the
Automatic Identification System (AIS). While AIS tracks the location of tug and
tow vessels, it does not monitor the presence or number of barges transported
by those vessels. Understanding the number and types of barges conveyed along
river segments, between ports, and at ports is crucial for estimating the
quantities of freight transported on the nation's waterways. This insight is
also valuable for waterway management and infrastructure operations, informing
areas such as targeted dredging and data-driven resource
allocation. Labeled sample data was generated using observations from traffic
cameras located along key river segments and matched to AIS data records. A
sample of 164 vessels representing up to 42 barge convoys per vessel was used
for model development. The methodology involved first predicting barge presence
and then predicting barge quantity. Features derived from the AIS data included
speed measures, vessel characteristics, turning measures, and interaction
terms. For predicting barge presence, the AdaBoost model achieved an F1 score
of 0.932. For predicting barge quantity, the Random Forest combined with an
AdaBoost ensemble model achieved an F1 score of 0.886. Bayesian optimization
was used for hyperparameter tuning. By advancing predictive modeling for inland
waterways, this study offers valuable insights for transportation planners and
organizations, which require detailed knowledge of traffic volumes, including
the flow of commodities, their destinations, and the tonnage moving in and out
of ports.
|
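The two-stage methodology in the abstract above (first classify barge presence, then predict quantity only for positives) can be sketched as a cascade. The threshold rules and feature names below are toy stand-ins for the paper's AdaBoost and Random Forest models:

```python
def predict_barges(features, presence_model, quantity_model):
    """Two-stage cascade: a presence classifier gates a quantity predictor,
    so quantity is only estimated for vessels predicted to carry barges."""
    if not presence_model(features):
        return 0
    return quantity_model(features)

# Toy stand-ins: slow, wide-turning vessels tend to be towing barges.
presence = lambda f: f["speed_knots"] < 9.0 and f["turn_radius_m"] > 150
quantity = lambda f: max(1, round(f["turn_radius_m"] / 100))

print(predict_barges({"speed_knots": 12.0, "turn_radius_m": 80},
                     presence, quantity))   # 0 (no barges predicted)
print(predict_barges({"speed_knots": 6.5, "turn_radius_m": 320},
                     presence, quantity))   # 3
```

Splitting the problem this way lets the presence stage absorb the class imbalance, leaving the quantity stage to model only the positive cases.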
2501.00617 | Toward Corpus Size Requirements for Training and Evaluating Depression
Risk Models Using Spoken Language | cs.CL cs.SD eess.AS | Mental health risk prediction is a growing field in the speech community, but
many studies are based on small corpora. This study illustrates how variations
in test and train set sizes impact performance in a controlled study. Using a
corpus of over 65K labeled data points, results from a fully crossed design of
different train/test size combinations are provided. Two model types are
included: one based on language and the other on speech acoustics. Both use
methods current in this domain. An age-mismatched test set was also included.
Results show that (1) test sizes below 1K samples gave noisy results, even for
larger training set sizes; (2) training set sizes of at least 2K were needed
for stable results; (3) NLP and acoustic models behaved similarly with
train/test size variations; and (4) the mismatched test set showed the same
patterns as the matched test set. Additional factors are discussed, including
label priors, model strength and pre-training, unique speakers, and data
lengths. While no single study can specify exact size requirements, results
demonstrate the need for appropriately sized train and test sets for future
studies of mental health risk prediction from speech and language.
|
2501.00619 | A Study on Context Length and Efficient Transformers for Biomedical
Image Analysis | cs.CV cs.AI cs.LG | Biomedical imaging modalities often produce high-resolution,
multi-dimensional images that pose computational challenges for deep neural
networks. These computational challenges are compounded when training
transformers due to the self-attention operator, which scales quadratically
with context length. Recent developments in long-context models have potential
to alleviate these difficulties and enable more efficient application of
transformers to large biomedical images, although a systematic evaluation on
this topic is lacking. In this study, we investigate the impact of context
length on biomedical image analysis and we evaluate the performance of recently
proposed long-context models. We first curate a suite of biomedical imaging
datasets, including 2D and 3D data for segmentation, denoising, and
classification tasks. We then analyze the impact of context length on network
performance using the Vision Transformer and Swin Transformer by varying patch
size and attention window size. Our findings reveal a strong relationship
between context length and performance, particularly for pixel-level prediction
tasks. Finally, we show that recent long-context models demonstrate significant
improvements in efficiency while maintaining comparable performance, though we
highlight where gaps remain. This work underscores the potential and challenges
of using long-context models in biomedical imaging.
|
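The quadratic cost that the abstract above attributes to self-attention is easy to see numerically: halving a Vision Transformer's patch size quadruples the token count and multiplies pairwise attention interactions by sixteen. A back-of-the-envelope sketch using standard ViT token arithmetic (not figures from the paper):

```python
def vit_tokens(height, width, patch):
    """Number of tokens (context length) a ViT produces for a 2D image."""
    return (height // patch) * (width // patch)

def attention_pairs(tokens):
    """Self-attention scores every token pair: O(n^2) in context length."""
    return tokens * tokens

for patch in (32, 16, 8):
    n = vit_tokens(512, 512, patch)
    print(patch, n, attention_pairs(n))
# patch 32 ->  256 tokens ->     65,536 pairs
# patch 16 -> 1024 tokens ->  1,048,576 pairs
# patch  8 -> 4096 tokens -> 16,777,216 pairs
```

This is why long-context (sub-quadratic) architectures matter for high-resolution 2D and 3D biomedical images, where small patches are needed for pixel-level prediction.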
2501.00623 | Global dense vector representations for words or items using shared
parameter alternating Tweedie model | cs.LG stat.ML | In this article, we present a model for analyzing cooccurrence count data
arising in practical fields, such as user-item or item-item data from online
shopping platforms, or cooccurring word-word pairs in sequences of texts. Such
data contain important information for developing recommender systems or
studying the relevance of items or words from non-numerical sources. Unlike
traditional regression models, there are no observations for covariates.
Additionally, the cooccurrence matrix is typically of such high dimension that
it does not fit into a computer's memory for modeling. We extract numerical data
by defining windows of cooccurrence using weighted count on the continuous
scale. Positive probability mass is allowed for zero observations. We present
Shared parameter Alternating Tweedie (SA-Tweedie) model and an algorithm to
estimate the parameters. We introduce a learning rate adjustment used along
with the Fisher scoring method in the inner loop to help the algorithm stay on
track of the optimizing direction. Gradient descent with Adam updates was also
considered as an alternative method for the estimation. Simulation studies and
an application showed that our algorithm with Fisher scoring and learning rate
adjustment outperforms the other two methods. Pseudo-likelihood approach with
alternating parameter update was also studied. Numerical studies showed that
the pseudo-likelihood approach is not suitable in our shared parameter
alternating regression models with unobserved covariates.
|
2501.00625 | Gaussian Building Mesh (GBM): Extract a Building's 3D Mesh with Google
Earth and Gaussian Splatting | cs.CV cs.GR | Recently released open-source pre-trained foundational image segmentation and
object detection models (SAM2+GroundingDINO) allow for geometrically consistent
segmentation of objects of interest in multi-view 2D images. Users can use
text-based or click-based prompts to segment objects of interest without
requiring labeled training datasets. Gaussian Splatting allows for the learning
of the 3D representation of a scene's geometry and radiance based on 2D images.
Combining Google Earth Studio, SAM2+GroundingDINO, 2D Gaussian Splatting, and
our improvements in mask refinement based on morphological operations and
contour simplification, we created a pipeline to extract the 3D mesh of any
building based on its name, address, or geographic coordinates.
|
2501.00628 | Matrix factorization and prediction for high dimensional co-occurrence
count data via shared parameter alternating zero inflated Gamma model | cs.LG stat.ML | High-dimensional sparse matrix data frequently arise in various applications.
A notable example is the weighted word-word co-occurrence count data, which
summarizes the weighted frequency of word pairs appearing within the same
context window. This type of data typically contains highly skewed non-negative
values with an abundance of zeros. Another example is the co-occurrence of
item-item or user-item pairs in e-commerce, which also generates
high-dimensional data. The objective is to utilize this data to predict the
relevance between items or users. In this paper, we assume that items or users
can be represented by unknown dense vectors. The model treats the co-occurrence
counts as arising from zero-inflated Gamma random variables and employs cosine
similarity between the unknown vectors to summarize item-item relevance. The
unknown values are estimated using the shared parameter alternating
zero-inflated Gamma regression models (SA-ZIG). Both canonical link and log
link models are considered. Two parameter updating schemes are proposed, along
with an algorithm to estimate the unknown parameters. An analytical convergence
analysis is presented. Numerical studies demonstrate that the SA-ZIG using
Fisher scoring without learning rate adjustment may fail to find the maximum
likelihood estimate. However, the SA-ZIG with learning rate adjustment performs
satisfactorily in our simulation studies.
|
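The relevance score at the heart of the SA-ZIG model described above is the cosine similarity between the learned dense item vectors; a minimal implementation (the example vectors are illustrative):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two dense item vectors: 1.0 means the
    same direction (maximal relevance), 0.0 means orthogonal (unrelated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 2.0], [2.0, 4.0]))  # ~1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```

In the model, this similarity parameterizes the zero-inflated Gamma distribution of the observed co-occurrence counts; the vectors themselves are the unknowns being estimated.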
2501.00632 | Different thresholding methods on Nearest Shrunken Centroid algorithm | stat.ML cs.LG | This article considers the impact of different thresholding methods on the
Nearest Shrunken Centroid algorithm, popularly referred to as the
Prediction Analysis of Microarrays (PAM), for high-dimensional classification.
PAM uses soft thresholding to achieve high computational efficiency and high
classification accuracy, but at the price of retaining too many features. When
applied to human cancer microarrays, PAM selected 2611 features on average across
10 multi-class datasets. Such a large number of features makes it difficult to
perform follow-up studies. One reason behind this problem is the soft
thresholding, which is known to produce biased parameter estimates in regression
analysis. In this article, we extend the PAM algorithm with two other
thresholding methods, hard and order thresholding, and a deep search algorithm
to achieve better thresholding parameter estimates. The modified algorithms are
extensively tested and compared to the original one based on real data and
Monte Carlo studies. In general, the modifications not only gave better cancer
status prediction accuracy but also resulted in more parsimonious models with a
significantly smaller number of features.
|
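The difference between soft thresholding (used by the original PAM) and hard thresholding (one of the alternatives studied above) is easy to state in code. These are the standard definitions, applied to a centroid deviation d with threshold delta:

```python
def soft_threshold(d, delta):
    """Shrink d toward zero by delta; survivors are biased toward zero,
    which is the estimation bias the article points out."""
    sign = 1.0 if d >= 0 else -1.0
    return sign * max(abs(d) - delta, 0.0)

def hard_threshold(d, delta):
    """Keep d unchanged if it exceeds delta in magnitude, else drop it;
    no shrinkage bias, but the selection is less smooth in delta."""
    return d if abs(d) > delta else 0.0

print(soft_threshold(2.5, 1.0))  # 1.5  (kept, but shrunk)
print(hard_threshold(2.5, 1.0))  # 2.5  (kept at full size)
print(soft_threshold(0.4, 1.0))  # 0.0  (feature dropped)
print(hard_threshold(0.4, 1.0))  # 0.0  (feature dropped)
```

Both rules drop small deviations; the difference is that hard thresholding leaves the surviving deviations unshrunken, avoiding the bias while typically retaining fewer features at a comparable accuracy.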
2501.00636 | Applying Graph Explanation to Operator Fusion | cs.LG cs.CV | Layer fusion techniques are critical to improving the inference efficiency of
deep neural networks (DNN) for deployment. Fusion aims to lower inference costs
by reducing data transactions between an accelerator's on-chip buffer and DRAM.
This is accomplished by grouped execution of multiple operations like
convolution and activations together into single execution units - fusion
groups. However, on-chip buffer capacity limits fusion group size and
optimizing fusion on whole DNNs requires partitioning into multiple fusion
groups. Finding the optimal groups is a complex problem where the presence of
invalid solutions hampers traditional search algorithms and demands robust
approaches. In this paper we incorporate Explainable AI, specifically Graph
Explanation Techniques (GET), into layer fusion. Given an invalid fusion group,
we identify the operations most responsible for group invalidity, then use this
knowledge to recursively split the original fusion group via a greedy
tree-based algorithm to minimize DRAM access. We pair our scheme with common
algorithms and optimize DNNs on two types of layer fusion: Line-Buffer Depth
First (LBDF) and Branch Requirement Reduction (BRR). Experiments demonstrate
the efficacy of our scheme on several popular and classical convolutional
neural networks like ResNets and MobileNets. Our scheme achieves over a 20% DRAM
access reduction on EfficientNet-B3.
|
2501.00637 | Flash-Split: 2D Reflection Removal with Flash Cues and Latent Diffusion
Separation | cs.CV cs.LG | Transparent surfaces, such as glass, create complex reflections that obscure
images and challenge downstream computer vision applications. We introduce
Flash-Split, a robust framework for separating transmitted and reflected light
using a single (potentially misaligned) pair of flash/no-flash images. Our core
idea is to perform latent-space reflection separation while leveraging the
flash cues. Specifically, Flash-Split consists of two stages. Stage 1 separates
the reflection latent and the transmission latent via a dual-branch diffusion
model conditioned on an encoded flash/no-flash latent pair, effectively
mitigating the flash/no-flash misalignment issue. Stage 2 restores
high-resolution, faithful details to the separated latents, via a cross-latent
decoding process conditioned on the original images before separation. By
validating Flash-Split on challenging real-world scenes, we demonstrate
state-of-the-art reflection separation performance and significantly outperform
the baseline methods.
|
2501.00641 | Rethink Delay Doppler Channels and Time-Frequency Coding | eess.SP cs.IT math.IT | In this paper, we rethink delay Doppler channels (also called doubly
selective channels). We prove that no modulation scheme (including the currently
active VOFDM/OTFS) can compensate well for a non-trivial Doppler spread. We then
discuss some of the existing methods to deal with time-varying channels, in
particular time-frequency (TF) coding in an OFDM system. Mathematically, TF
coding is equivalent to space-time coding. We also summarize the state of the
art in space-time coding, which was an active research topic over a decade ago.
|
2501.00642 | Enabling New HDLs with Agents | cs.AR cs.AI cs.LG cs.PL | Large Language Model (LLM)-based agents are transforming the programming
language landscape by facilitating learning for beginners, enabling code
generation, and optimizing documentation workflows. Hardware Description
Languages (HDLs), with their smaller user community, stand to benefit
significantly from the application of LLMs as tools for learning new HDLs. This
paper investigates the challenges and solutions of enabling LLMs for HDLs,
particularly for HDLs that LLMs have not been previously trained on. This work
introduces HDLAgent, an AI agent optimized for LLMs with limited knowledge of
various HDLs. It significantly enhances the HDL capabilities of off-the-shelf LLMs.
|
2501.00643 | Design optimization of dynamic flexible multibody systems using the
discrete adjoint variable method | math.OC cs.CE | The design space of dynamic multibody systems (MBSs), particularly those with
flexible components, is considerably large. Consequently, having a means to
efficiently explore this space and find the optimum solution within a feasible
timeframe is crucial. It is well known that for problems with several design
variables, sensitivity analysis using the adjoint variable method substantially
reduces the computational costs. This paper presents a novel extension of the
discrete adjoint variable method to the design optimization of dynamic flexible
MBSs. The extension involves deriving the adjoint equations directly from the
discrete, rather than the continuous, equations of motion. This results in a
system of algebraic equations that is computationally less demanding to solve
compared to the system of differential algebraic equations produced by the
continuous adjoint variable method. To describe the proposed method, it is
integrated with a numerical time-stepping algorithm based on geometric
variational integrators. The developed technique is then applied to the
optimization of MBSs composed of springs, dampers, beams and rigid bodies,
considering both geometrical (e.g., positions of joints) and non-geometrical
(e.g., mechanical properties of components) design variables. To validate the
developed methods and show their applicability, three numerical examples are
provided.
|
2501.00644 | Efficient Standardization of Clinical Notes using Large Language Models | cs.CL cs.AI | Clinician notes are a rich source of patient information but often contain
inconsistencies due to varied writing styles, colloquialisms, abbreviations,
medical jargon, grammatical errors, and non-standard formatting. These
inconsistencies hinder the extraction of meaningful data from electronic health
records (EHRs), posing challenges for quality improvement, population health,
precision medicine, decision support, and research.
We present a large language model approach to standardizing a corpus of 1,618
clinical notes. Standardization corrected an average of $4.9 \pm 1.8$
grammatical errors, $3.3 \pm 5.2$ spelling errors, converted $3.1 \pm 3.0$
non-standard terms to standard terminology, and expanded $15.8 \pm 9.1$
abbreviations and acronyms per note. Additionally, notes were re-organized into
canonical sections with standardized headings. This process prepared notes for
key concept extraction, mapping to medical ontologies, and conversion to
interoperable data formats such as FHIR.
Expert review of randomly sampled notes found no significant data loss after
standardization. This proof-of-concept study demonstrates that standardization
of clinical notes can improve their readability, consistency, and usability,
while also facilitating their conversion into interoperable data formats.
|
2501.00645 | SoundBrush: Sound as a Brush for Visual Scene Editing | cs.CV cs.LG cs.SD eess.AS | We propose SoundBrush, a model that uses sound as a brush to edit and
manipulate visual scenes. We extend the generative capabilities of the Latent
Diffusion Model (LDM) to incorporate audio information for editing visual
scenes. Inspired by existing image-editing works, we frame this task as a
supervised learning problem and leverage various off-the-shelf models to
construct a sound-paired visual scene dataset for training. This richly
generated dataset enables SoundBrush to learn to map audio features into the
textual space of the LDM, allowing for visual scene editing guided by diverse
in-the-wild sound. Unlike existing methods, SoundBrush can accurately
manipulate the overall scenery or even insert sounding objects to best match
the audio inputs while preserving the original content. Furthermore, by
integrating with novel view synthesis techniques, our framework can be extended
to edit 3D scenes, facilitating sound-driven 3D scene manipulation. Demos are
available at https://soundbrush.github.io/.
|
2501.00647 | Lightweight G-YOLOv11: Advancing Efficient Fracture Detection in
Pediatric Wrist X-rays | eess.IV cs.CV | Computer-aided diagnosis (CAD) systems have greatly improved the
interpretation of medical images by radiologists and surgeons. However, current
CAD systems for fracture detection in X-ray images primarily rely on large,
resource-intensive detectors, which limits their practicality in clinical
settings. To address this limitation, we propose a novel lightweight CAD system
based on the YOLO detector for fracture detection. This system, named ghost
convolution-based YOLOv11 (G-YOLOv11), builds on the latest version of the YOLO
detector family and incorporates the ghost convolution operation for feature
extraction. The ghost convolution operation generates the same number of
feature maps as traditional convolution but requires fewer linear operations,
thereby reducing the detector's computational resource requirements. We
evaluated the performance of the proposed G-YOLOv11 detector on the
GRAZPEDWRI-DX dataset, achieving an mAP@0.5 of 0.535 with an inference time of
2.4 ms on an NVIDIA A10 GPU. Compared to the standard YOLOv11l, G-YOLOv11l
achieved reductions of 13.6% in mAP@0.5 and 68.7% in size. These results
establish a new state-of-the-art benchmark in terms of efficiency,
outperforming existing detectors. Code and models are available at
https://github.com/AbdesselamFerdi/G-YOLOv11.
|
2501.00651 | Taming Feed-forward Reconstruction Models as Latent Encoders for 3D
Generative Models | cs.CV cs.LG | Recent AI-based 3D content creation has largely evolved along two paths:
feed-forward image-to-3D reconstruction approaches and 3D generative models
trained with 2D or 3D supervision. In this work, we show that existing
feed-forward reconstruction methods can serve as effective latent encoders for
training 3D generative models, thereby bridging these two paradigms. By reusing
powerful pre-trained reconstruction models, we avoid computationally expensive
encoder network training and obtain rich 3D latent features for generative
modeling for free. However, the latent spaces of reconstruction models are not
well-suited for generative modeling due to their unstructured nature. To enable
flow-based model training on these latent features, we develop post-processing
pipelines, including protocols to standardize the features and spatial
weighting to concentrate on important regions. We further incorporate a 2D
image space perceptual rendering loss to handle the high-dimensional latent
spaces. Finally, we propose a multi-stream transformer-based rectified flow
architecture to achieve linear scaling and high-quality text-conditioned 3D
generation. Our framework leverages the advancements of feed-forward
reconstruction models to enhance the scalability of 3D generative modeling,
achieving both high computational efficiency and state-of-the-art performance
in text-to-3D generation.
|
2501.00654 | ICONS: Influence Consensus for Vision-Language Data Selection | cs.CV cs.CL cs.LG | Visual Instruction Tuning typically requires a large amount of
vision-language training data. This data often contains redundant information
that increases computational costs without proportional performance gains. In
this work, we introduce ICONS, a gradient-driven Influence CONsensus approach
for vision-language data Selection that selects a compact training dataset for
efficient multi-task training. The key element of our approach is cross-task
influence consensus, which uses majority voting across task-specific influence
matrices to identify samples that are consistently valuable across multiple
tasks, allowing us to effectively prioritize data that optimizes for overall
performance. Experiments show that models trained on our selected data (20% of
LLaVA-665K) achieve 98.6% of the relative performance obtained using the full
dataset. Additionally, we release this subset, LLaVA-ICONS-133K, a compact yet
highly informative subset of LLaVA-665K visual instruction tuning data,
preserving high impact training data for efficient vision-language model
development.
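The cross-task majority-voting step this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the influence scores, task names, and the "top half of each task's ranking gets a vote" rule are all hypothetical choices made here for clarity.

```python
def consensus_select(influence_by_task, k):
    """Keep the k samples valued by a majority of tasks.

    influence_by_task maps a task name to per-sample influence scores
    (one score per training sample; the values used below are made up).
    Each task votes for the samples in its top half; samples backed by
    a majority of tasks are kept, ranked by vote count.
    """
    tasks = list(influence_by_task)
    n = len(influence_by_task[tasks[0]])
    votes = [0] * n
    for task in tasks:
        scores = influence_by_task[task]
        top_half = sorted(range(n), key=lambda i: scores[i], reverse=True)[: n // 2]
        for i in top_half:
            votes[i] += 1
    kept = [i for i in range(n) if votes[i] > len(tasks) / 2]
    kept.sort(key=lambda i: votes[i], reverse=True)
    return kept[:k]

influence = {
    "vqa":     [0.9, 0.1, 0.8, 0.2],
    "caption": [0.7, 0.2, 0.9, 0.1],
    "ocr":     [0.8, 0.9, 0.6, 0.1],
}
print(consensus_select(influence, 2))  # [0, 2]
```

Samples 0 and 2 rank highly for most tasks, so the consensus keeps them; sample 1 is valuable only for one task and is dropped, which is exactly the "consistently valuable across multiple tasks" criterion.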
|
2501.00655 | Finding Missed Code Size Optimizations in Compilers using LLMs | cs.SE cs.LG cs.PL | Compilers are complex, and significant effort has been expended on testing
them. Techniques such as random program generation and differential testing
have proved highly effective and have uncovered thousands of bugs in production
compilers. The majority of effort has been expended on validating that a
compiler produces correct code for a given input, while less attention has been
paid to ensuring that the compiler produces performant code.
In this work we adapt differential testing to the task of identifying missed
optimization opportunities in compilers. We develop a novel testing approach
which combines large language models (LLMs) with a series of differential
testing strategies and use them to find missing code size optimizations in C /
C++ compilers.
The advantage of our approach is its simplicity. We offload the complex task
of generating random code to an off-the-shelf LLM, and use heuristics and
analyses to identify anomalous compiler behavior. Our approach requires fewer
than 150 lines of code to implement. This simplicity makes it extensible. By
simply changing the target compiler and initial LLM prompt we port the approach
from C / C++ to Rust and Swift, finding bugs in both. To date we have reported
24 confirmed bugs in production compilers, and conclude that LLM-assisted
testing is a promising avenue for detecting optimization bugs in real world
compilers.
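The anomaly-detection side of the differential testing described above can be sketched as follows. This is a toy heuristic of my own, not the paper's analysis: the compiler names, sizes, and the 1.5x threshold are made up for illustration.

```python
def flag_missed_optimizations(sizes, ratio=1.5):
    """Given emitted code sizes (bytes) for one program across several
    compilers, flag compilers whose output exceeds the smallest size by
    more than `ratio`. The threshold is a made-up heuristic standing in
    for the paper's heuristics and analyses.
    """
    best = min(sizes.values())
    return sorted(c for c, s in sizes.items() if s > ratio * best)

# One hypothetical test program: two compilers roughly agree, a third
# emits anomalously large code, suggesting a missed size optimization.
print(flag_missed_optimizations({"cc-a": 120, "cc-b": 118, "cc-c": 410}))  # ['cc-c']
```

The LLM supplies the random test programs; a cheap size comparison like this is enough to surface candidates for manual triage, which is why the whole approach can stay under 150 lines.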
|
2501.00656 | 2 OLMo 2 Furious | cs.CL cs.LG | We present OLMo 2, the next generation of our fully open language models.
OLMo 2 includes dense autoregressive models with improved architecture and
training recipe, pretraining data mixtures, and instruction tuning recipes. Our
modified model architecture and training recipe achieve both better training
stability and improved per-token efficiency. Our updated pretraining data
mixture introduces a new, specialized data mix called Dolmino Mix 1124, which
significantly improves model capabilities across many downstream task
benchmarks when introduced via late-stage curriculum training (i.e. specialized
data during the annealing phase of pretraining). Finally, we incorporate best
practices from T\"ulu 3 to develop OLMo 2-Instruct, focusing on permissive data
and extending our final-stage reinforcement learning with verifiable rewards
(RLVR). Our OLMo 2 base models sit at the Pareto frontier of performance to
compute, often matching or outperforming open-weight only models like Llama 3.1
and Qwen 2.5 while using fewer FLOPs and with fully transparent training data,
code, and recipe. Our fully open OLMo 2-Instruct models are competitive with or
surpass open-weight only models of comparable size, including Qwen 2.5,
Llama 3.1 and Gemma 2. We release all OLMo 2 artifacts openly -- models at 7B
and 13B scales, both pretrained and post-trained, including their full training
data, training code and recipes, training logs and thousands of intermediate
checkpoints. The final instruction model is available on the Ai2 Playground as
a free research demo.
|
2501.00657 | Relative Pose Observability Analysis Using Dual Quaternions | eess.SY cs.RO cs.SY math.AG math.DG | Relative pose (position and orientation) estimation is an essential component
of many robotics applications. Fiducial markers, such as the AprilTag visual
fiducial system, yield a relative pose measurement from a single marker
detection and provide a powerful tool for pose estimation. In this paper, we
perform a Lie algebraic nonlinear observability analysis on a nonlinear dual
quaternion system that is composed of a relative pose measurement model and a
relative motion model. We prove that many common dual quaternion expressions
yield Jacobian matrices with advantageous block structures and rank properties
that are beneficial for analysis. We show that using a dual quaternion
representation yields an observability matrix with a simple block triangular
structure and satisfies the necessary full rank condition.
|
2501.00658 | Understanding and Mitigating Bottlenecks of State Space Models through
the Lens of Recency and Over-smoothing | cs.LG | Structured State Space Models (SSMs) have emerged as alternatives to
transformers. While SSMs are often regarded as effective in capturing
long-sequence dependencies, we rigorously demonstrate that they are inherently
limited by strong recency bias. Our empirical studies also reveal that this
bias impairs the models' ability to recall distant information and introduces
robustness issues. Our scaling experiments then discovered that deeper
structures in SSMs can facilitate the learning of long contexts. However,
subsequent theoretical analysis reveals that as SSMs increase in depth, they
exhibit another inevitable tendency toward over-smoothing, i.e., token
representations become increasingly indistinguishable. This fundamental
dilemma between recency and over-smoothing hinders the scalability of existing
SSMs. Inspired by our theoretical findings, we propose to polarize two channels
of the state transition matrices in SSMs, setting them to zero and one,
respectively, simultaneously addressing recency bias and over-smoothing.
Experiments demonstrate that our polarization technique consistently enhances
the associative recall accuracy of long-range tokens and unlocks SSMs to
benefit further from deeper architectures. All source codes are released at
https://github.com/VITA-Group/SSM-Bottleneck.
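The polarization trick named in this abstract (pinning two channels of the state-transition matrices to zero and one) can be sketched on a diagonal state-transition vector. Which channels are pinned, and the example values, are assumptions made here for illustration; the released code linked above is the authoritative implementation.

```python
def polarize(a_diag):
    """Polarize a diagonal state-transition vector: pin one channel to 1
    (state never decays, countering recency bias) and one channel to 0
    (state resets each step, countering over-smoothing). The remaining
    channels keep their learned values. Pinning the first and last
    channels is an arbitrary choice for this sketch.
    """
    out = list(a_diag)
    out[0] = 0.0
    out[-1] = 1.0
    return out

print(polarize([0.7, 0.9, 0.8, 0.95]))  # [0.0, 0.9, 0.8, 1.0]
```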
|
2501.00659 | Why Are Positional Encodings Nonessential for Deep Autoregressive
Transformers? Revisiting a Petroglyph | cs.LG cs.CL | Do autoregressive Transformer language models require explicit positional
encodings (PEs)? The answer is "no" as long as they have more than one layer --
they can distinguish sequences with permuted tokens without requiring explicit
PEs. This property has been known since early efforts (those contemporary with
GPT-2) adopting the Transformer for language modeling. However, this result
does not appear to have been well disseminated and was even rediscovered
recently. This may be partially due to a sudden growth of the language modeling
community after the advent of GPT-2, but perhaps also due to the lack of a
clear explanation in prior publications, despite being commonly understood by
practitioners in the past. Here we review this long-forgotten explanation of why
explicit PEs are nonessential for multi-layer autoregressive Transformers (in
contrast, one-layer models require PEs to discern order information of their
input tokens). We also review the origin of this result, and hope to
re-establish it as common knowledge.
|
2501.00663 | Titans: Learning to Memorize at Test Time | cs.LG cs.AI cs.CL | Over more than a decade there has been an extensive research effort on how to
effectively utilize recurrent models and attention. While recurrent models aim
to compress the data into a fixed-size memory (called hidden state), attention
allows attending to the entire context window, capturing the direct
dependencies of all tokens. This more accurate modeling of dependencies,
however, comes with a quadratic cost, limiting the model to a fixed-length
context. We present a new neural long-term memory module that learns to
memorize historical context and helps attention to attend to the current
context while utilizing long past information. We show that this neural memory
has the advantage of fast parallelizable training while maintaining a fast
inference. From a memory perspective, we argue that attention, due to its
limited context but accurate dependency modeling, performs as a short-term
memory, while neural memory, due to its ability to memorize the data, acts as a
long-term, more persistent memory. Based on these two modules, we introduce a
new family of architectures, called Titans, and present three variants to
address how one can effectively incorporate memory into this architecture. Our
experimental results on language modeling, common-sense reasoning, genomics,
and time series tasks show that Titans are more effective than Transformers and
recent modern linear recurrent models. They further can effectively scale to
larger than 2M context window size with higher accuracy in needle-in-haystack
tasks compared to baselines.
|
2501.00664 | Grade Inflation in Generative Models | cs.AI cs.LG stat.ML | Generative models hold great potential, but only if one can trust the
evaluation of the data they generate. We show that many commonly used quality
scores for comparing two-dimensional distributions of synthetic vs.
ground-truth data give better results than they should, a phenomenon we call
the "grade inflation problem." We show that the correlation score, Jaccard
score, earth-mover's score, and Kullback-Leibler (relative-entropy) score all
suffer grade inflation. We propose that any score that values all datapoints
equally, as these do, will also exhibit grade inflation; we refer to such
scores as "equipoint" scores. We introduce the concept of "equidensity" scores,
and present the Eden score, to our knowledge the first example of such a score.
We found that Eden avoids grade inflation and agrees better with human
perception of goodness-of-fit than the equipoint scores above. We propose that
any reasonable equidensity score will avoid grade inflation. We identify a
connection between equidensity scores and R\'enyi entropy of negative order. We
conclude that equidensity scores are likely to outperform equipoint scores for
generative models, and for comparing low-dimensional distributions more
generally.
|
2501.00669 | Leaf diseases detection using deep learning methods | cs.LG cs.AI cs.CV | In this study, our main objective is to develop new deep-learning approaches for
plant leaf disease identification and detection using leaf image datasets. We
also discussed the challenges facing current methods of leaf disease detection
and how deep learning may be used to overcome these challenges and enhance the
accuracy of disease detection. Therefore, we have proposed a novel method for
the detection of various leaf diseases in crops, along with the identification
and description of an efficient network architecture that encompasses
hyperparameters and optimization methods. The effectiveness of different
architectures was compared and evaluated to determine the best architecture
configuration and to create an effective model that can quickly detect leaf
disease. In addition to the work done on pre-trained models, we proposed a new
model based on CNN, which provides an efficient method for identifying and
detecting plant leaf disease. Furthermore, we evaluated the efficacy of our
model and compared the results to those of some pre-trained state-of-the-art
architectures.
|
2501.00673 | Controlled Causal Hallucinations Can Estimate Phantom Nodes in
Multiexpert Mixtures of Fuzzy Cognitive Maps | cs.LG | An adaptive multiexpert mixture of feedback causal models can approximate
missing or phantom nodes in large-scale causal models. The result gives a
scalable form of \emph{big knowledge}. The mixed model approximates a sampled
dynamical system by approximating its main limit-cycle equilibria. Each expert
first draws a fuzzy cognitive map (FCM) with at least one missing causal node
or variable. FCMs are directed signed partial-causality cyclic graphs. They mix
naturally through convex combination to produce a new causal feedback FCM.
Supervised learning helps each expert FCM estimate its phantom node by
comparing the FCM's partial equilibrium with the complete multi-node
equilibrium. Such phantom-node estimation allows partial control over these
causal hallucinations and helps approximate the future trajectory of the
dynamical system. But the approximation can be computationally heavy. Mixing
the tuned expert FCMs gives a practical way to find several phantom nodes and
thereby better approximate the feedback system's true equilibrium behavior.
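The convex-combination mixing of expert FCMs mentioned above can be sketched on the signed edge matrices directly. The edge values and mixing weights below are made up; the supervised phantom-node learning itself is not shown.

```python
def mix_fcms(adjacency_list, weights):
    """Convex combination of expert FCM edge matrices.

    adjacency_list: list of n x n signed edge matrices (one per expert),
    entries in [-1, 1]. weights: nonnegative mixing weights summing to 1.
    The mixture of FCMs is again an FCM because a convex combination
    keeps every edge weight within [-1, 1].
    """
    assert abs(sum(weights) - 1.0) < 1e-9
    n = len(adjacency_list[0])
    mixed = [[0.0] * n for _ in range(n)]
    for w, adj in zip(weights, adjacency_list):
        for i in range(n):
            for j in range(n):
                mixed[i][j] += w * adj[i][j]
    return mixed

# Two hypothetical two-node experts that disagree on one causal edge.
expert_a = [[0.0, 0.8], [-0.4, 0.0]]
expert_b = [[0.0, 0.4], [0.6, 0.0]]
print(mix_fcms([expert_a, expert_b], [0.5, 0.5]))
```

Disagreements between experts (here the edge from node 2 to node 1) average toward a weaker consensus edge, which is the sense in which the experts "mix naturally."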
|
2501.00677 | Deeply Learned Robust Matrix Completion for Large-scale Low-rank Data
Recovery | cs.LG cs.CV cs.IT cs.NA math.IT math.NA stat.ML | Robust matrix completion (RMC) is a widely used machine learning tool that
simultaneously tackles two critical issues in low-rank data analysis: missing
data entries and extreme outliers. This paper proposes a novel scalable and
learnable non-convex approach, coined Learned Robust Matrix Completion (LRMC),
for large-scale RMC problems. LRMC enjoys low computational complexity with
linear convergence. Motivated by the proposed theorem, the free parameters of
LRMC can be effectively learned via deep unfolding to achieve optimum
performance. Furthermore, this paper proposes a flexible
feedforward-recurrent-mixed neural network framework that extends deep
unfolding from fixed-number iterations to infinite iterations. The superior
empirical performance of LRMC is verified with extensive experiments against
state-of-the-art on synthetic datasets and real applications, including video
background subtraction, ultrasound imaging, face modeling, and cloud removal
from satellite imagery.
|
2501.00684 | IGC: Integrating a Gated Calculator into an LLM to Solve Arithmetic
Tasks Reliably and Efficiently | cs.LG cs.CL | Solving arithmetic tasks is a simple and fundamental skill, yet modern Large
Language Models (LLMs) have great difficulty with them. We introduce the
Integrated Gated Calculator (IGC), a module that enables LLMs to perform
arithmetic by emulating a calculator on the GPU. We finetune a Llama model with
our module and test it on the BigBench Arithmetic benchmark, where it beats the
State of the Art, outperforming all models on the benchmark, including models
almost two orders of magnitude larger. Our approach takes only a single
iteration to run and requires no external tools. It performs arithmetic
operations entirely inside the LLM without the need to produce intermediate
tokens. It is computationally efficient, interpretable, and avoids side-effects
on tasks that do not require arithmetic operations. It reliably achieves 98\%
to 99\% accuracy across multiple training runs and for all subtasks, including
the substantially harder subtask of multiplication, which was previously
unsolved.
|
2501.00691 | Labels Generated by Large Language Model Helps Measuring People's
Empathy in Vitro | cs.CL cs.LG | Large language models (LLMs) have revolutionised numerous fields, with
LLM-as-a-service (LLMSaaS) having a strong generalisation ability that offers
accessible solutions directly without the need for costly training. In contrast
to the widely studied prompt engineering for task solving directly (in vivo),
this paper explores its potential in in-vitro applications. These involve using
LLMs to generate labels to help the supervised training of mainstream models by
(1) noisy label correction and (2) training data augmentation with
LLM-generated labels. In this paper, we evaluate this approach in the emerging
field of empathy computing -- automating the prediction of psychological
questionnaire outcomes from inputs like text sequences. Specifically,
crowdsourced datasets in this domain often suffer from noisy labels that
misrepresent underlying empathy. By leveraging LLM-generated labels to train
pre-trained language models (PLMs) like RoBERTa, we achieve statistically
significant accuracy improvements over baselines, achieving a state-of-the-art
Pearson correlation coefficient of 0.648 on NewsEmp benchmarks. In addition, we
bring insightful discussions, including current challenges in empathy
computing, data biases in training data and evaluation metric selection. Code
and LLM-generated data are available at
https://github.com/hasan-rakibul/LLMPathy (available once the paper is
accepted).
|
2501.00692 | Adjoint sharding for very long context training of state space models | cs.LG cs.AI cs.CL | Despite very fast progress, efficiently training large language models (LLMs)
in very long contexts remains challenging. Existing methods fall back to
training LLMs with short contexts (a maximum of a few thousand tokens in
training) and use inference time techniques when evaluating on long contexts
(above 1M tokens context window at inference). As opposed to
long-context inference, training on very long context input prompts is quickly
limited by GPU memory availability and by the prohibitively long training times
it requires on state-of-the-art hardware. Meanwhile, many real-life
applications require not only inference but also training/fine-tuning with long
context on specific tasks. Such applications include, for example, augmenting
the context with various sources of raw reference information for fact
extraction, fact summarization, or fact reconciliation tasks. We propose
adjoint sharding, a novel technique that comprises sharding gradient
calculation during training to reduce memory requirements by orders of
magnitude, making training on very long context computationally tractable.
Adjoint sharding is based on the adjoint method and computes equivalent
gradients to backpropagation. We also propose truncated adjoint sharding to
speed up the algorithm while maintaining performance. We provide a distributed
version, and a paralleled version of adjoint sharding to further speed up
training. Empirical results show the proposed adjoint sharding algorithm
reduces memory usage by up to 3X with a 1.27B parameter large language model on
1M context length training. This allows increasing the maximum context length
during training or fine-tuning of a 1.27B parameter model from 35K tokens to
above 100K tokens on a training infrastructure composed of five AWS P4
instances.
|
2501.00693 | Beyond Model Scale Limits: End-Edge-Cloud Federated Learning with
Self-Rectified Knowledge Agglomeration | cs.DC cs.LG | The rise of End-Edge-Cloud Collaboration (EECC) offers a promising paradigm
for Artificial Intelligence (AI) model training across end devices, edge
servers, and cloud data centers, providing enhanced reliability and reduced
latency. Hierarchical Federated Learning (HFL) can benefit from this paradigm
by enabling multi-tier model aggregation across distributed computing nodes.
However, the potential of HFL is significantly constrained by the inherent
heterogeneity and dynamic characteristics of EECC environments. Specifically,
the uniform model structure bounded by the least powerful end device across all
computing nodes imposes a performance bottleneck. Meanwhile, coupled
heterogeneity in data distributions and resource capabilities across tiers
disrupts hierarchical knowledge transfer, leading to biased updates and
degraded performance. Furthermore, the mobility and fluctuating connectivity of
computing nodes in EECC environments introduce complexities in dynamic node
migration, further compromising the robustness of the training process. To
address multiple challenges within a unified framework, we propose
End-Edge-Cloud Federated Learning with Self-Rectified Knowledge Agglomeration
(FedEEC), which is a novel EECC-empowered FL framework that allows the trained
models from end, edge, to cloud to grow larger in size and stronger in
generalization ability. FedEEC introduces two key innovations: (1) Bridge
Sample Based Online Distillation Protocol (BSBODP), which enables knowledge
transfer between neighboring nodes through generated bridge samples, and (2)
Self-Knowledge Rectification (SKR), which refines the transferred knowledge to
prevent suboptimal cloud model optimization. The proposed framework effectively
handles both cross-tier resource heterogeneity and effective knowledge transfer
between neighboring nodes, while satisfying the migration-resilient
requirements of EECC.
|
2501.00696 | Cost and Reward Infused Metric Elicitation | cs.LG | In machine learning, metric elicitation refers to the selection of
performance metrics that best reflect an individual's implicit preferences for
a given application. Currently, metric elicitation methods only consider
metrics that depend on the accuracy values encoded within a given model's
confusion matrix. However, focusing solely on confusion matrices does not
account for other model feasibility considerations such as varied monetary
costs or latencies. In our work, we build upon the multiclass metric
elicitation framework of Hiranandani et al., extrapolating their proposed
Diagonal Linear Performance Metric Elicitation (DLPME) algorithm to account for
additional bounded costs and rewards. Our experimental results with synthetic
data demonstrate our approach's ability to quickly converge to the true metric.
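The kind of cost- and reward-infused diagonal linear metric this abstract extends can be sketched as follows. All weights, bounds, and values below are hypothetical illustrations, not the elicitation algorithm itself (which queries an oracle to learn these weights).

```python
def diagonal_linear_metric(diag_rates, class_weights, cost, reward,
                           cost_weight, reward_weight):
    """A diagonal linear performance metric (weighted sum of the
    confusion-matrix diagonal, i.e. per-class correct-classification
    rates) extended with bounded cost and reward terms. Every number
    here is hypothetical, for illustration only.
    """
    base = sum(w * r for w, r in zip(class_weights, diag_rates))
    return base - cost_weight * cost + reward_weight * reward

# Two classes weighted equally; a monetary cost of 2 units and a reward
# of 1 unit shift the elicited trade-off away from pure accuracy.
m = diagonal_linear_metric([0.9, 0.8], [0.5, 0.5],
                           cost=2.0, reward=1.0,
                           cost_weight=0.05, reward_weight=0.1)
print(round(m, 3))  # 0.85
```

Elicitation then amounts to recovering the hidden weights by asking an individual to compare models with different rate/cost/reward profiles.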
|
2501.00697 | PANDA -- Paired Anti-hate Narratives Dataset from Asia: Using an
LLM-as-a-Judge to Create the First Chinese Counterspeech Dataset | cs.CL | Despite the global prevalence of Modern Standard Chinese language,
counterspeech (CS) resources for Chinese remain virtually nonexistent. To
address this gap in East Asian counterspeech research, we introduce a corpus
of Modern Standard Mandarin counterspeech that focuses on combating hate speech
in Mainland China. This paper proposes a novel approach of generating CS by
using an LLM-as-a-Judge, simulated annealing, LLMs zero-shot CN generation and
a round-robin algorithm. This is followed by manual verification for quality
and contextual relevance. This paper details the methodology for creating
effective counterspeech in Chinese and other non-Eurocentric languages,
including unique cultural patterns of which groups are maligned and linguistic
patterns in what kinds of discourse markers are programmatically marked as hate
speech (HS). Through analysis of the generated corpora, we provide strong evidence for
the lack of open-source, properly labeled Chinese hate speech data and the
limitations of using an LLM-as-Judge to score possible answers in Chinese.
Moreover, the present corpus serves as the first East Asian language based CS
corpus and provides an essential resource for future research on counterspeech
generation and evaluation.
|
2501.00700 | Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection | cs.CV | Recent generative models demonstrate impressive performance on synthesizing
photographic images, making them hard for humans to distinguish from
pristine ones, especially realistic-looking synthetic facial images.
Previous works mostly focus on mining discriminative artifacts from vast
amounts of visual data. However, they usually lack the exploration of prior knowledge
and rarely pay attention to the domain shift between training categories (e.g.,
natural and indoor objects) and testing ones (e.g., fine-grained human facial
images), resulting in unsatisfactory detection performance. To address these
issues, we propose a novel knowledge-guided prompt learning method for deepfake
facial image detection. Specifically, we retrieve forgery-related prompts from
large language models as expert knowledge to guide the optimization of
learnable prompts. Besides, we elaborate test-time prompt tuning to alleviate
the domain shift, achieving significant performance improvement and boosting
the application in real-world scenarios. Extensive experiments on
DeepFakeFaceForensics dataset show that our proposed approach notably
outperforms state-of-the-art methods.
|
2501.00701 | ResKoopNet: Learning Koopman Representations for Complex Dynamics with
Spectral Residuals | cs.LG math.DS | Analyzing long-term behaviors in high-dimensional nonlinear dynamical systems
remains challenging, with the Koopman operator framework providing a powerful
global linearization approach, though existing methods for approximating its
spectral components often suffer from theoretical limitations and reliance on
predefined dictionaries. While Residual Dynamic Mode Decomposition (ResDMD)
introduced the spectral residual to assess the accuracy of Koopman operator
approximation, it only filters precomputed spectra, which prevents it from
fully discovering the Koopman operator's complete spectral information (a
limitation sometimes referred to as the 'spectral inclusion' problem). We
introduce ResKoopNet (Residual-based Koopman-learning Network), a novel method
that addresses this limitation by explicitly minimizing the spectral residual
to compute Koopman eigenpairs, which can identify a more precise and complete
spectrum of the Koopman operator. This approach provides theoretical guarantees
while maintaining computational adaptability through a neural network
implementation. Experiments on physical and biological systems demonstrate
ResKoopNet's superior accuracy in spectral approximation compared to existing
methods, particularly for systems with continuous spectra and high
dimensionality, making it an effective tool for analyzing complex dynamical systems.
|
2501.00704 | Kolmogorov GAM Networks are all you need! | cs.LG stat.CO | Kolmogorov GAM (K-GAM) networks are shown to be an efficient architecture for
training and inference. They are an additive model with an embedding that is
independent of the function of interest. They provide an alternative to the
transformer architecture. They are the machine learning version of Kolmogorov's
Superposition Theorem (KST), which provides an efficient representation of a
multivariate function. Such representations have use in machine learning for
encoding dictionaries (a.k.a. "look-up" tables). KST theory also provides a
representation based on translates of the K\"oppen function. The goal of our
paper is to interpret this representation in a machine learning context for
applications in Artificial Intelligence (AI). Our architecture is equivalent to
a topological embedding which is independent of the function together with an
additive layer that uses a Generalized Additive Model (GAM). This provides a
class of learning procedures with far fewer parameters than current deep
learning algorithms. Implementation can be parallelizable which makes our
algorithms computationally attractive. To illustrate our methodology, we use
the Iris data from statistical learning. We also show that our additive model
with non-linear embedding provides an alternative to transformer architectures
which from a statistical viewpoint are kernel smoothers. Additive KAN models
therefore provide a natural alternative to transformers. Finally, we conclude
with directions for future research.
|
2501.00707 | Everywhere Attack: Attacking Locally and Globally to Boost Targeted
Transferability | cs.CV cs.AI cs.CR | Adversarial examples' (AE) transferability refers to the phenomenon that AEs
crafted with one surrogate model can also fool other models. Notwithstanding
remarkable progress in untargeted transferability, its targeted counterpart
remains challenging. This paper proposes an everywhere scheme to boost targeted
transferability. Our idea is to attack a victim image both globally and
locally. We aim to optimize 'an army of targets' in every local image region
instead of optimizing a single high-confidence target for the whole image, as
in previous works. Specifically, we split a victim image into non-overlapping blocks and
jointly mount a targeted attack on each block. Such a strategy mitigates
transfer failures caused by attention inconsistency between surrogate and
victim models and thus results in stronger transferability. Our approach is
method-agnostic, which means it can be easily combined with existing
transferable attacks for even higher transferability. Extensive experiments on
ImageNet demonstrate that the proposed approach universally improves the
state-of-the-art targeted attacks by a clear margin, e.g., the transferability
of the widely adopted Logit attack can be improved by 28.8%-300%. We also
evaluate the crafted AEs on a real-world platform: Google Cloud Vision. Results
further support the superiority of the proposed method.
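The block split underlying the everywhere scheme can be sketched as follows; the block size and image shape are made-up illustrations, not the paper's settings, and the per-block targeted losses that would be summed over these regions are not shown.

```python
def split_blocks(height, width, block):
    """Coordinates (top, left, bottom, right) of non-overlapping blocks
    tiling an image; edge blocks absorb any remainder so every pixel is
    covered exactly once. A targeted loss would then be mounted on each
    block jointly.
    """
    return [(t, l, min(t + block, height), min(l + block, width))
            for t in range(0, height, block)
            for l in range(0, width, block)]

# A 224x224 image with a hypothetical 112-pixel block size gives a 2x2 tiling.
blocks = split_blocks(224, 224, 112)
print(len(blocks))  # 4
```

Because every region carries its own target, the attack no longer depends on the surrogate and victim models attending to the same single region.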
|
2501.00709 | KAN KAN Buff Signed Graph Neural Networks? | cs.LG | Graph Representation Learning aims to create effective embeddings for nodes
and edges that encapsulate their features and relationships. Graph Neural
Networks (GNNs) leverage neural networks to model complex graph structures.
Recently, the Kolmogorov-Arnold Neural Network (KAN) has emerged as a promising
alternative to the traditional Multilayer Perceptron (MLP), offering improved
accuracy and interpretability with fewer parameters. In this paper, we propose
the integration of KANs into Signed Graph Convolutional Networks (SGCNs),
leading to the development of KAN-enhanced SGCNs (KASGCN). We evaluate KASGCN
on tasks such as signed community detection and link sign prediction to improve
embedding quality in signed networks. Our experimental results indicate that
KASGCN exhibits competitive or comparable performance to standard SGCNs across
the tasks evaluated, with performance variability depending on the specific
characteristics of the signed graph and the choice of parameter settings. These
findings suggest that KASGCNs hold promise for enhancing signed graph analysis
with context-dependent effectiveness.
|
2501.00712 | Rethinking Addressing in Language Models via Contexualized Equivariant
Positional Encoding | cs.CL cs.LG | Transformers rely on both content-based and position-based addressing
mechanisms to make predictions, but existing positional encoding techniques
often diminish the effectiveness of position-based addressing. Many current
methods enforce rigid patterns in attention maps, limiting the ability to model
long-range dependencies and adapt to diverse tasks. Additionally, most
positional encodings are learned as general biases, lacking the specialization
required for different instances within a dataset. To address this, we propose
con$\textbf{T}$extualized equivari$\textbf{A}$nt $\textbf{P}$osition
$\textbf{E}$mbedding ($\textbf{TAPE}$), a novel framework that enhances
positional embeddings by incorporating sequence content across layers. TAPE
introduces dynamic, context-aware positional encodings, overcoming the
constraints of traditional fixed patterns. By enforcing permutation and
orthogonal equivariance, TAPE ensures the stability of positional encodings
during updates, improving robustness and adaptability. Our method can be easily
integrated into pre-trained transformers, offering parameter-efficient
fine-tuning with minimal overhead. Extensive experiments show that TAPE
achieves superior performance in language modeling, arithmetic reasoning, and
long-context retrieval tasks compared to existing positional embedding
techniques.
|
2501.00713 | CODEOFCONDUCT at Multilingual Counterspeech Generation: A Context-Aware
Model for Robust Counterspeech Generation in Low-Resource Languages | cs.CL | This paper introduces a context-aware model for robust counterspeech
generation, which achieved significant success in the MCG-COLING-2025 shared
task. Our approach particularly excelled in low-resource language settings. By
leveraging a simulated annealing algorithm fine-tuned on multilingual datasets,
the model generates factually accurate responses to hate speech.
We demonstrate state-of-the-art performance across four languages (Basque,
English, Italian, and Spanish), with our system ranking first for Basque,
second for Italian, and third for both English and Spanish. Notably, our model
swept all three top positions for Basque, highlighting its effectiveness in
low-resource scenarios.
Evaluation of the shared task employs both traditional metrics (BLEU, ROUGE,
BERTScore, Novelty) and LLM-based JudgeLM. We present a detailed analysis of
our results, including an empirical evaluation of the model performance and
comprehensive score distributions across evaluation metrics.
This work contributes to the growing body of research on multilingual
counterspeech generation, offering insights into developing robust models that
can adapt to diverse linguistic and cultural contexts in the fight against
online hate speech.
|
2501.00715 | eRevise+RF: A Writing Evaluation System for Assessing Student Essay
Revisions and Providing Formative Feedback | cs.CL cs.AI | The ability to revise essays in response to feedback is important for
students' writing success. An automated writing evaluation (AWE) system that
supports students in revising their essays is thus essential. We present
eRevise+RF, an enhanced AWE system for assessing student essay revisions (e.g.,
changes made to an essay to improve its quality in response to essay feedback)
and providing revision feedback. We deployed the system with 6 teachers and 406
students across 3 schools in Pennsylvania and Louisiana. The results confirmed
its effectiveness in (1) assessing student essays in terms of evidence usage,
(2) extracting evidence and reasoning revisions across essays, and (3)
determining revision success in responding to feedback. The evaluation also
suggested eRevise+RF is a helpful system for young students to improve their
argumentative writing skills through revision and formative feedback.
|
2501.00722 | Performance-Barrier Event-Triggered PDE Control of Traffic Flow | eess.SY cs.SY | For stabilizing stop-and-go oscillations in traffic flow by actuating a
variable speed limit (VSL) at a downstream boundary of a freeway segment, we
introduce event-triggered PDE backstepping designs employing the recent concept
of performance-barrier event-triggered control (P-ETC). Our design is for
linearized hyperbolic Aw-Rascle-Zhang (ARZ) PDEs governing traffic velocity and
density. Compared to continuous feedback, ETC provides piecewise-constant VSL
commands, which are more likely to be obeyed by human drivers. Unlike the
existing regular ETC (R-ETC), which conservatively enforces a strict decrease
of a Lyapunov function, our performance-barrier (P-ETC) approach permits an
increase, as long as the Lyapunov function remains below a performance barrier,
resulting in fewer control updates than R-ETC. To relieve VSL from continuously monitoring
the triggering function, we also develop periodic event-triggered (PETC) and
self-triggered (STC) versions of both R-ETC and P-ETC. These are referred to as
R/P-PETC and R/P-STC, respectively, and we show that they both guarantee
Zeno-free behavior and exponential convergence in the spatial $L^2$ norm. With
comparative simulations, we illustrate the benefits of the performance-barrier
designs through traffic metrics (driver comfort, safety, travel time, fuel
consumption). The proposed algorithms reduce discomfort by nearly half relative
to driver behavior without VSL, while tripling driver safety, measured by
the average dwell time, relative to the R-ETC frequent-switching VSL schedule.
|
2501.00725 | Automatic Construction of Pattern Classifiers Capable of Continuous
Incremental Learning and Unlearning Tasks Based on Compact-Sized
Probabilistic Neural Network | cs.LG cs.CV | This paper proposes a novel approach to pattern classification using a
probabilistic neural network model. The strategy is based on a compact-sized
probabilistic neural network capable of continuous incremental learning and
unlearning tasks. The network is constructed/reconstructed using a simple,
one-pass network-growing algorithm with no hyperparameter tuning. Then, given
the training dataset, its structure and parameters are automatically determined
and can be dynamically varied in continual incremental and decremental learning
situations. The algorithm proposed in this work involves no iterative or
arduous matrix-based parameter approximations but a simple data-driven updating
scheme. Simulation results using nine publicly available databases demonstrate
the effectiveness of this approach, showing that the constructed compact-sized
probabilistic neural networks have far fewer hidden units than the original
probabilistic neural network model, yet achieve classification performance
similar to that of multilayer perceptron neural networks in standard
classification tasks, while also exhibiting sufficient capability in continuous
class-incremental learning and unlearning tasks.
|
2501.00726 | Enhancing Unsupervised Feature Selection via Double Sparsity Constrained
Optimization | math.OC cs.LG | Unsupervised feature selection (UFS) is widely applied in machine learning
and pattern recognition. However, most of the existing methods only consider a
single sparsity, which makes it difficult to select valuable and discriminative
feature subsets from the original high-dimensional feature set. In this paper,
we propose a new UFS method called DSCOFS via embedding double sparsity
constrained optimization into the classical principal component analysis (PCA)
framework. Double sparsity refers to using $\ell_{2,0}$-norm and $\ell_0$-norm
to simultaneously constrain variables, by adding the sparsity of different
types, to achieve the purpose of improving the accuracy of identifying
differential features. The core is that $\ell_{2,0}$-norm can remove irrelevant
and redundant features, while $\ell_0$-norm can filter out irregular noisy
features, thereby complementing $\ell_{2,0}$-norm to improve discrimination. An
effective proximal alternating minimization method is proposed to solve the
resulting nonconvex nonsmooth model. Theoretically, we rigorously prove that
the sequence generated by our method globally converges to a stationary point.
Numerical experiments on three synthetic datasets and eight real-world datasets
demonstrate the effectiveness, stability, and convergence of the proposed
method. In particular, the average clustering accuracy (ACC) and normalized
mutual information (NMI) are improved by at least 3.34% and 3.02%,
respectively, compared with the state-of-the-art methods. More importantly, two
common statistical tests and a new feature similarity metric verify the
advantages of double sparsity. All results suggest that our proposed DSCOFS
provides a new perspective for feature selection.
|
2501.00733 | On Importance of Layer Pruning for Smaller BERT Models and Low Resource
Languages | cs.CL cs.LG | This study explores the effectiveness of layer pruning for developing more
efficient BERT models tailored to specific downstream tasks in low-resource
languages. Our primary objective is to evaluate whether pruned BERT models can
maintain high performance while reducing model size and complexity. We
experiment with several BERT variants, including MahaBERT-v2 and Google-Muril,
applying different pruning strategies and comparing their performance to
smaller, scratch-trained models like MahaBERT-Small and MahaBERT-Smaller. We
fine-tune these models on Marathi datasets, specifically Short Headlines
Classification (SHC), Long Paragraph Classification (LPC) and Long Document
Classification (LDC), to assess their classification accuracy. Our findings
demonstrate that pruned models, despite having fewer layers, achieve comparable
performance to their fully-layered counterparts while consistently
outperforming scratch-trained models of similar size. Notably, pruning layers
from the middle of the model proves to be the most effective strategy, offering
performance competitive with pruning from the top and bottom. However, there is
no clear winner, as different pruning strategies perform better in different
model and dataset combinations. Additionally, monolingual BERT models
outperform multilingual ones in these experiments. This approach, which reduces
computational demands, provides a faster and more efficient alternative to
training smaller models from scratch, making advanced NLP models more
accessible for low-resource languages without compromising classification
accuracy.
|
2501.00734 | DDD: Discriminative Difficulty Distance for plant disease diagnosis | cs.CV cs.LG | Recent studies on plant disease diagnosis using machine learning (ML) have
highlighted concerns about the overestimated diagnostic performance due to
inappropriate data partitioning, where training and test datasets are derived
from the same source (domain). Plant disease diagnosis presents a challenging
classification task, characterized by its fine-grained nature, vague symptoms,
and the extensive variability of image features within each domain. In this
study, we propose the concept of Discriminative Difficulty Distance (DDD), a
novel metric designed to quantify the domain gap between training and test
datasets while assessing the classification difficulty of test data. DDD
provides a valuable tool for identifying insufficient diversity in training
data, thus supporting the development of more diverse and robust datasets. We
investigated multiple image encoders trained on different datasets and examined
whether the distances between datasets, measured using low-dimensional
representations generated by the encoders, are suitable as a DDD metric. The
study utilized 244,063 plant disease images spanning four crops and 34 disease
classes collected from 27 domains. As a result, we demonstrated that even if
the test images are from different crops or diseases than those used to train
the encoder, incorporating them allows the construction of a distance measure
for a dataset that strongly correlates with the difficulty of diagnosis
indicated by the disease classifier developed independently. Compared to the
base encoder, pre-trained only on ImageNet21K, the correlation was higher by
0.106 to 0.485, reaching a maximum of 0.909.
|
2501.00738 | Learning Weather Models from Data with WSINDy | physics.geo-ph cs.LG physics.comp-ph | The multiscale and turbulent nature of Earth's atmosphere has historically
rendered accurate weather modeling a hard problem. Recently, there has been an
explosion of interest surrounding data-driven approaches to weather modeling,
which in many cases show improved forecasting accuracy and computational
efficiency when compared to traditional methods. However, many of the current
data-driven approaches employ highly parameterized neural networks, often
resulting in uninterpretable models and limited gains in scientific
understanding. In this work, we address the interpretability problem by
explicitly discovering partial differential equations governing various weather
phenomena, identifying symbolic mathematical models with direct physical
interpretations. The purpose of this paper is to demonstrate that, in
particular, the Weak form Sparse Identification of Nonlinear Dynamics (WSINDy)
algorithm can learn effective weather models from both simulated and
assimilated data. Our approach adapts the standard WSINDy algorithm to work
with high-dimensional fluid data of arbitrary spatial dimension. Moreover, we
develop an approach for handling terms that are not integrable-by-parts, such
as advection operators.
|
2501.00739 | Smooth Reference Command Generation and Control for Transition Flight of
VTOL Aircraft Using Time-Varying Optimization | eess.SY cs.SY | Vertical take-off and landing (VTOL) aircraft pose a challenge in generating
reference commands during transition flight. While sparsity between hover and
cruise flight modes can be promoted for effective transitions by formulating
$\ell_{1}$-norm minimization problems, solving these problems offline pointwise
in time can lead to non-smooth reference commands, resulting in abrupt
transitions. This study addresses this limitation by proposing a time-varying
optimization method that explicitly considers time dependence. By leveraging a
prediction-correction interior-point time-varying optimization framework, the
proposed method solves an ordinary differential equation to update reference
commands continuously over time, enabling smooth reference command generation
in real time. Numerical simulations with a two-dimensional Lift+Cruise vehicle
validate the effectiveness of the proposed method, demonstrating its ability to
generate smooth reference commands online.
|
2501.00740 | RORem: Training a Robust Object Remover with Human-in-the-Loop | cs.CV | Despite the significant advancements, existing object removal methods
struggle with incomplete removal, incorrect content synthesis and blurry
synthesized regions, resulting in low success rates. Such issues are mainly
caused by the lack of high-quality paired training data, as well as the
self-supervised training paradigm adopted in these methods, which forces the
model to in-paint the masked regions, leading to ambiguity between synthesizing
the masked objects and restoring the background. To address these issues, we
propose a semi-supervised learning strategy with human-in-the-loop to create
high-quality paired training data, aiming to train a Robust Object Remover
(RORem). We first collect 60K training pairs from open-source datasets to train
an initial object removal model for generating removal samples, and then
utilize human feedback to select a set of high-quality object removal pairs,
with which we train a discriminator to automate the following training data
generation process. By iterating this process for several rounds, we finally
obtain a substantial object removal dataset with over 200K pairs. Fine-tuning
the pre-trained stable diffusion model with this dataset, we obtain our RORem,
which demonstrates state-of-the-art object removal performance in terms of both
reliability and image quality. Particularly, RORem improves the object removal
success rate over previous methods by more than 18\%. The dataset, source code
and trained model are available at https://github.com/leeruibin/RORem.
|
2501.00741 | Towards End-to-End Neuromorphic Voxel-based 3D Object Reconstruction
Without Physical Priors | cs.CV cs.AI | Neuromorphic cameras, also known as event cameras, are asynchronous
brightness-change sensors that can capture extremely fast motion without
suffering from motion blur, making them particularly promising for 3D
reconstruction in extreme environments. However, existing research on 3D
reconstruction using monocular neuromorphic cameras is limited, and most of the
methods rely on estimating physical priors and employ complex multi-step
pipelines. In this work, we propose an end-to-end method for dense voxel 3D
reconstruction using neuromorphic cameras that eliminates the need to estimate
physical priors. Our method incorporates a novel event representation to
enhance edge features, enabling the proposed feature-enhancement model to learn
more effectively. Additionally, we introduced Optimal Binarization Threshold
Selection Principle as a guideline for future related work, using the optimal
reconstruction results achieved with threshold optimization as the benchmark.
Our method achieves a 54.6% improvement in reconstruction accuracy compared to
the baseline method.
|
2501.00742 | Experimental Demonstration of an Optical Neural PDE Solver via On-Chip
PINN Training | cs.LG cs.AR physics.optics | Partial differential equations (PDEs) are important mathematical tools in science and
engineering. This paper experimentally demonstrates an optical neural PDE
solver by leveraging the back-propagation-free on-photonic-chip training of
physics-informed neural networks.
|
2501.00743 | AttriReBoost: A Gradient-Free Propagation Optimization Method for Cold
Start Mitigation in Attribute Missing Graphs | cs.LG cs.AI | Missing attribute issues are prevalent in graph learning, leading to
biased outcomes in Graph Neural Networks (GNNs). Existing methods that rely on
feature propagation are prone to the cold start problem, particularly when
dealing with attribute resetting and low-degree nodes, which hinder effective
propagation and convergence. To address these challenges, we propose
AttriReBoost (ARB), a novel method that incorporates a propagation-based method
to mitigate cold start problems in attribute-missing graphs. ARB enhances
global feature propagation by redefining initial boundary conditions and
strategically integrating virtual edges, thereby improving node connectivity
and ensuring more stable and efficient convergence. This method facilitates
gradient-free attribute reconstruction with lower computational overhead. The
proposed method is theoretically grounded, with its convergence rigorously
established. Extensive experiments on several real-world benchmark datasets
demonstrate the effectiveness of ARB, achieving an average accuracy improvement
of 5.11% over state-of-the-art methods. Additionally, ARB exhibits remarkable
computational efficiency, processing a large-scale graph with 2.49 million
nodes in just 16 seconds on a single GPU. Our code is available at
https://github.com/limengran98/ARB.
|
2501.00744 | A Distributional Evaluation of Generative Image Models | stat.ML cs.LG | Generative models are ubiquitous in modern artificial intelligence (AI)
applications. Recent advances have led to a variety of generative modeling
approaches that are capable of synthesizing highly realistic samples. Despite
these developments, evaluating the distributional match between the synthetic
samples and the target distribution in a statistically principled way remains a
core challenge. We focus on evaluating image generative models, where studies
often treat human evaluation as the gold standard. Commonly adopted metrics,
such as the Fr\'echet Inception Distance (FID), do not sufficiently capture the
differences between the learned and target distributions, because the
assumption of normality ignores differences in the tails. We propose the
Embedded Characteristic Score (ECS), a comprehensive metric for evaluating the
distributional match between the learned and target sample distributions, and
explore its connection with moments and tail behavior. We derive natural
properties of ECS and show its practical use via simulations and an empirical
study.
|
2501.00745 | Dynamics of Adversarial Attacks on Large Language Model-Based Search
Engines | cs.CL cs.AI cs.GT cs.IR econ.TH | The increasing integration of Large Language Model (LLM) based search engines
has transformed the landscape of information retrieval. However, these systems
are vulnerable to adversarial attacks, especially ranking manipulation attacks,
where attackers craft webpage content to manipulate the LLM's ranking and
promote specific content, gaining an unfair advantage over competitors. In this
paper, we study the dynamics of ranking manipulation attacks. We frame this
problem as an Infinitely Repeated Prisoners' Dilemma, where multiple players
strategically decide whether to cooperate or attack. We analyze the conditions
under which cooperation can be sustained, identifying key factors such as
attack costs, discount rates, attack success rates, and trigger strategies that
influence player behavior. We identify tipping points in the system dynamics,
demonstrating that cooperation is more likely to be sustained when players are
forward-looking. However, from a defense perspective, we find that simply
reducing attack success probabilities can, paradoxically, incentivize attacks
under certain conditions. Furthermore, defensive measures to cap the upper
bound of attack success rates may prove futile in some scenarios. These
insights highlight the complexity of securing LLM-based systems. Our work
provides a theoretical foundation and practical insights for understanding and
mitigating their vulnerabilities, while emphasizing the importance of adaptive
security strategies and thoughtful ecosystem design.
|
2501.00747 | DIVE: Diversified Iterative Self-Improvement | cs.CL | Recent advances in large language models (LLMs) have demonstrated the
effectiveness of Iterative Self-Improvement (ISI) techniques. However,
continuous training on self-generated data leads to reduced output diversity, a
limitation particularly critical in reasoning tasks where diverse solution
paths are essential. We present DIVE (Diversified Iterative Self-Improvement),
a novel framework that addresses this challenge through two key components:
Sample Pool Expansion for broader solution exploration, and Data Selection for
balancing diversity and quality in preference pairs. Experiments on MATH and
GSM8k datasets show that DIVE achieves a 10% to 45% relative increase in output
diversity metrics while maintaining performance quality compared to vanilla
ISI. Our ablation studies confirm both components' significance in achieving
these improvements. Code is available at https://github.com/qinyiwei/DIVE.
|
2501.00750 | Beyond Text: Implementing Multimodal Large Language Model-Powered
Multi-Agent Systems Using a No-Code Platform | cs.AI | This study proposes the design and implementation of a multimodal LLM-based
Multi-Agent System (MAS) leveraging a No-Code platform to address the practical
constraints and significant entry barriers associated with AI adoption in
enterprises. Advanced AI technologies, such as Large Language Models (LLMs),
often pose challenges due to their technical complexity and high implementation
costs, making them difficult for many organizations to adopt. To overcome these
limitations, this research develops a No-Code-based Multi-Agent System designed
to enable users without programming knowledge to easily build and manage AI
systems. The study examines various use cases to validate the applicability of
AI in business processes, including code generation from image-based notes,
Advanced RAG-based question-answering systems, text-based image generation, and
video generation using images and prompts. These systems lower the barriers to
AI adoption, empowering not only professional developers but also general users
to harness AI for significantly improved productivity and efficiency. By
demonstrating the scalability and accessibility of No-Code platforms, this
study advances the democratization of AI technologies within enterprises and
validates the practical applicability of Multi-Agent Systems, ultimately
contributing to the widespread adoption of AI across various industries.
|
2501.00751 | HCMA-UNet: A Hybrid CNN-Mamba UNet with Inter-Slice Self-Attention for
Efficient Breast Cancer Segmentation | eess.IV cs.CV | Breast cancer lesion segmentation in DCE-MRI remains challenging due to
heterogeneous tumor morphology and indistinct boundaries. To address these
challenges, this study proposes a novel hybrid segmentation network, HCMA-UNet,
for lesion segmentation of breast cancer. Our network consists of a lightweight
CNN backbone and a Multi-view Inter-Slice Self-Attention Mamba (MISM) module.
The MISM module integrates a Visual State Space Block (VSSB) and an Inter-Slice
Self-Attention (ISSA) mechanism, effectively reducing parameters through an
Asymmetric Split Channel (ASC) strategy to achieve efficient tri-directional
feature extraction. Our lightweight model achieves superior performance with
2.87M parameters and 126.44 GFLOPs. A Feature-guided Region-aware loss function
(FRLoss) is proposed to enhance segmentation accuracy. Extensive experiments on
one private and two public DCE-MRI breast cancer datasets demonstrate that our
approach achieves state-of-the-art performance while maintaining computational
efficiency. FRLoss also exhibits good cross-architecture generalization
capabilities. The source code and dataset are available at this link.
|
2501.00752 | Foreground-Covering Prototype Generation and Matching for SAM-Aided
Few-Shot Segmentation | cs.CV | We propose Foreground-Covering Prototype Generation and Matching to resolve
Few-Shot Segmentation (FSS), which aims to segment target regions in unlabeled
query images based on labeled support images. Unlike previous research, which
typically estimates target regions in the query using support prototypes and
query pixels, we utilize the relationship between support and query prototypes.
To achieve this, we utilize two complementary features: SAM Image Encoder
features for pixel aggregation and ResNet features for class consistency.
Specifically, we construct support and query prototypes with SAM features and
distinguish query prototypes of target regions based on ResNet features. For
the query prototype construction, we begin by roughly guiding foreground
regions within SAM features using the conventional pseudo-mask, then employ
iterative cross-attention to aggregate foreground features into learnable
tokens. Here, we discover that the cross-attention weights can effectively
serve as an alternative to the conventional pseudo-mask. Therefore, we use the attention-based
pseudo-mask to guide ResNet features to focus on the foreground, then infuse
the guided ResNet feature into the learnable tokens to generate
class-consistent query prototypes. The generation of the support prototype is
conducted symmetrically to that of the query one, with the pseudo-mask replaced
by the ground-truth mask. Finally, we compare these query prototypes with
support ones to generate prompts, which subsequently produce object masks
through the SAM Mask Decoder. Our state-of-the-art performances on various
datasets validate the effectiveness of the proposed method for FSS. Our
official code is available at https://github.com/SuhoPark0706/FCP
|
2501.00755 | An AI-powered Bayesian generative modeling approach for causal inference
in observational studies | stat.ML cs.AI cs.LG stat.ME | Causal inference in observational studies with high-dimensional covariates
presents significant challenges. We introduce CausalBGM, an AI-powered Bayesian
generative modeling approach that captures the causal relationship among
covariates, treatment, and outcome variables. The core innovation of CausalBGM
lies in its ability to estimate the individual treatment effect (ITE) by
learning individual-specific distributions of a low-dimensional latent feature
set (e.g., latent confounders) that drives changes in both treatment and
outcome. This approach not only effectively mitigates confounding effects but
also provides comprehensive uncertainty quantification, offering reliable and
interpretable causal effect estimates at the individual level. CausalBGM adopts
a Bayesian model and uses a novel iterative algorithm to update the model
parameters and the posterior distribution of latent features until convergence.
This framework leverages the power of AI to capture complex dependencies among
variables while adhering to the Bayesian principles. Extensive experiments
demonstrate that CausalBGM consistently outperforms state-of-the-art methods,
particularly in scenarios with high-dimensional covariates and large-scale
datasets. Its Bayesian foundation ensures statistical rigor, providing robust
and well-calibrated posterior intervals. By addressing key limitations of
existing methods, CausalBGM emerges as a robust and promising framework for
advancing causal inference in modern applications in fields such as genomics,
healthcare, and social sciences. CausalBGM is maintained at the website
https://causalbgm.readthedocs.io/.
|
2501.00756 | FasterSTS: A Faster Spatio-Temporal Synchronous Graph Convolutional
Networks for Traffic flow Forecasting | cs.LG | Accurate traffic flow prediction heavily relies on the spatio-temporal
correlation of traffic flow data. Most current studies separately capture
correlations in the spatial and temporal dimensions, making it difficult to
capture complex spatio-temporal heterogeneity, and often improve prediction
accuracy only at the expense of increased model complexity. Although there have
been groundbreaking attempts in the field of spatio-temporal synchronous
modeling, significant limitations remain in terms of performance and complexity
control. This study proposes a quicker and more effective spatio-temporal
synchronous traffic flow forecasting model to address these issues.
|
2501.00757 | Beyond Static Datasets: A Behavior-Driven Entity-Specific Simulation to
Overcome Data Scarcity and Train Effective Crypto Anti-Money Laundering
Models | cs.CR cs.LG | For various reasons, ranging from inherent characteristics such as
decentralization, enhanced privacy, and ease of transactions to external
hardships in enforcing regulations and contradictions in data sharing policies,
cryptocurrencies have been severely abused for carrying out numerous malicious
and illicit activities, including money laundering, darknet transactions,
scams, terrorism financing, and arms trades. Mitigating money laundering is
key, as doing so also suspends the movement of funds from other illicit
activities. Billions of dollars are laundered annually. Identifying money
laundering in crypto transactions is becoming extremely difficult owing to the
many layering strategies available today and the rapidly evolving tactics and
patterns launderers use to obfuscate illicit funds. Many detection methods
have been proposed, ranging
from naive approaches involving complete manual investigation to machine
learning models. However, there are very limited datasets available for
effectively training machine learning models. Also, the existing datasets are
static and class-imbalanced, posing challenges for scalability and suitability
to specific scenarios due to a lack of customization to varying requirements.
This has been a persistent challenge in the literature. In this paper, we
propose a behavior-embedded, entity-specific money-laundering-like transaction
simulation that helps generate various transaction types and models
transactions embedding the behavior of several entities observed in this space. The paper
discusses the design and architecture of the simulator, a custom dataset we
generated using the simulator, and the performance of models trained on this
synthetic data in detecting real addresses involved in money laundering.
|
2501.00758 | Less is More: Token Context-aware Learning for Object Tracking | cs.CV | Recently, several studies have shown that utilizing contextual information to
perceive target states is crucial for object tracking. They typically capture
context by incorporating multiple video frames. However, these naive
frame-context methods fail to consider the importance of each patch within a
reference frame, making them susceptible to noise and redundant tokens, which
deteriorates tracking performance. To address this challenge, we propose a new
token context-aware tracking pipeline named LMTrack, designed to automatically
learn high-quality reference tokens for efficient visual tracking. Embracing
the principle of Less is More, the core idea of LMTrack is to analyze the
importance distribution of all reference tokens, where important tokens are
collected, continually attended to, and updated. Specifically, a novel Token
Context Memory module is designed to dynamically collect high-quality
spatio-temporal information of a target in an autoregressive manner,
eliminating redundant background tokens from the reference frames. Furthermore,
an effective Unidirectional Token Attention mechanism is designed to establish
dependencies between reference tokens and search frame, enabling robust
cross-frame association and target localization. Extensive experiments
demonstrate the superiority of our tracker, achieving state-of-the-art results
on tracking benchmarks such as GOT-10K, TrackingNet, and LaSOT.
|
2501.00759 | Enhancing Transformers for Generalizable First-Order Logical Entailment | cs.CL cs.AI | Transformers, as a fundamental deep learning architecture, have demonstrated
remarkable capabilities in reasoning. This paper investigates the generalizable
first-order logical reasoning ability of transformers with their parameterized
knowledge and explores ways to improve it. The first-order reasoning capability
of transformers is assessed through their ability to perform first-order
logical entailment, which is quantitatively measured by their performance in
answering knowledge graph queries. We establish connections between (1) two
types of distribution shifts studied in out-of-distribution generalization and
(2) the unseen knowledge and query settings discussed in the task of knowledge
graph query answering, enabling a characterization of fine-grained
generalizability. Results on our comprehensive dataset show that transformers
outperform previous methods specifically designed for this task and provide
detailed empirical evidence on the impact of input query syntax, token
embedding, and transformer architectures on the reasoning capability of
transformers. Interestingly, our findings reveal a mismatch between positional
encoding and other design choices in transformer architectures employed in
prior practices. This discovery motivates us to propose a more sophisticated,
logic-aware architecture, TEGA, to enhance the capability for generalizable
first-order logical entailment in transformers.
|
2501.00762 | Residual connections provably mitigate oversmoothing in graph neural
networks | cs.LG math.DS math.PR stat.ML | Graph neural networks (GNNs) have achieved remarkable empirical success in
processing and representing graph-structured data across various domains.
However, a significant challenge known as "oversmoothing" persists, where
vertex features become nearly indistinguishable in deep GNNs, severely
restricting their expressive power and practical utility. In this work, we
analyze the asymptotic oversmoothing rates of deep GNNs with and without
residual connections by deriving explicit convergence rates for a normalized
vertex similarity measure. Our analytical framework is grounded in the
multiplicative ergodic theorem. Furthermore, we demonstrate that adding
residual connections effectively mitigates or prevents oversmoothing across
several broad families of parameter distributions. The theoretical findings are
strongly supported by numerical experiments.
|
2501.00765 | Beyond Words: AuralLLM and SignMST-C for Precise Sign Language
Production and Bidirectional Accessibility | cs.CV cs.LG | Although sign language recognition aids understanding by non-hearing-impaired individuals,
many hearing-impaired individuals still rely on sign language alone due to
limited literacy, underscoring the need for advanced sign language production
and translation (SLP and SLT) systems. In the field of sign language
production, the lack of adequate models and datasets restricts practical
applications. Existing models face challenges in production accuracy and pose
control, making it difficult to provide fluent sign language expressions across
diverse scenarios. Additionally, data resources are scarce, particularly
high-quality datasets with complete sign vocabulary and pose annotations. To
address these issues, we introduce CNText2Sign and CNSign, comprehensive
datasets to benchmark SLP and SLT, respectively, with CNText2Sign covering
gloss and landmark mappings for SLP, and CNSign providing extensive
video-to-text data for SLT. To improve the accuracy and applicability of sign
language systems, we propose the AuraLLM and SignMST-C models. AuraLLM,
incorporating LoRA and RAG techniques, achieves a BLEU-4 score of 50.41 on the
CNText2Sign dataset, enabling precise control over gesture semantics and
motion. SignMST-C employs self-supervised rapid motion video pretraining,
achieving a BLEU-4 score of 31.03/32.08 on the PHOENIX2014-T benchmark, setting
a new state-of-the-art. These models establish robust baselines for the
datasets released for their respective tasks.
|
2501.00773 | Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive
Experiments, Analysis, and Improvements | cs.LG cs.AI cs.DB | Graphs are essential data structures for modeling complex interactions in
domains such as social networks, molecular structures, and biological systems.
Graph-level tasks, which predict properties or classes for the entire graph,
are critical for applications, such as molecular property prediction and
subgraph counting. Graph Neural Networks (GNNs) have shown promise in these
tasks, but their evaluations are often limited to narrow datasets, tasks, and
inconsistent experimental setups, restricting their generalizability. To
address these limitations, we propose a unified evaluation framework for
graph-level GNNs. This framework provides a standardized setting to evaluate
GNNs across diverse datasets, various graph tasks (e.g., graph classification
and regression), and challenging scenarios, including noisy, imbalanced, and
few-shot graphs. Additionally, we propose a novel GNN model with enhanced
expressivity and generalization capabilities. Specifically, we enhance the
expressivity of GNNs through a $k$-path rooted subgraph approach, enabling the
model to effectively count subgraphs (e.g., paths and cycles). Moreover, we
introduce a unified graph contrastive learning algorithm for graphs across
diverse domains, which adaptively removes unimportant edges to augment graphs,
thereby significantly improving generalization performance. Extensive
experiments demonstrate that our model achieves superior performance against
fourteen effective baselines across twenty-seven graph datasets, establishing
it as a robust and generalizable model for graph-level tasks.
|
2501.00777 | FitCF: A Framework for Automatic Feature Importance-guided
Counterfactual Example Generation | cs.CL cs.LG | Counterfactual examples are widely used in natural language processing (NLP)
as valuable data to improve models, and in explainable artificial intelligence
(XAI) to understand model behavior. The automated generation of counterfactual
examples remains a challenging task even for large language models (LLMs),
despite their impressive performance on many tasks. In this paper, we first
introduce ZeroCF, a faithful approach for leveraging important words derived
from feature attribution methods to generate counterfactual examples in a
zero-shot setting. Second, we present a new framework, FitCF, which further
verifies the aforementioned counterfactuals via label-flip verification and then
inserts them as demonstrations for few-shot prompting, outperforming two
state-of-the-art baselines. Through ablation studies, we identify the
importance of each of FitCF's core components in improving the quality of
counterfactuals, as assessed through flip rate, perplexity, and similarity
measures. Furthermore, we show the effectiveness of LIME and Integrated
Gradients as backbone attribution methods for FitCF and find that the number of
demonstrations has the largest effect on performance. Finally, we reveal a
strong correlation between the faithfulness of feature attribution scores and
the quality of generated counterfactuals.
|
2501.00778 | Decoding the Flow: CauseMotion for Emotional Causality Analysis in
Long-form Conversations | cs.CL cs.CY | Long-sequence causal reasoning seeks to uncover causal relationships within
extended time series data but is hindered by complex dependencies and the
challenges of validating causal links. To address the limitations of
large-scale language models (e.g., GPT-4) in capturing intricate emotional
causality within extended dialogues, we propose CauseMotion, a long-sequence
emotional causal reasoning framework grounded in Retrieval-Augmented Generation
(RAG) and multimodal fusion. Unlike conventional methods relying only on
textual information, CauseMotion enriches semantic representations by
incorporating audio-derived features (vocal emotion, emotional intensity, and
speech rate) into textual modalities. By integrating RAG with a sliding window
mechanism, it effectively retrieves and leverages contextually relevant
dialogue segments, thus enabling the inference of complex emotional causal
chains spanning multiple conversational turns. To evaluate its effectiveness,
we constructed the first benchmark dataset dedicated to long-sequence emotional
causal reasoning, featuring dialogues with over 70 turns. Experimental results
demonstrate that the proposed RAG-based multimodal integrated approach
substantially enhances both the depth of emotional understanding and the causal
inference capabilities of large-scale language models. A GLM-4
integrated with CauseMotion achieves an 8.7% improvement in causal accuracy
over the original model and surpasses GPT-4o by 1.2%. Additionally, on the
publicly available DiaASQ dataset, CauseMotion-GLM-4 achieves state-of-the-art
results in accuracy, F1 score, and causal reasoning accuracy.
|
2501.00779 | REM: A Scalable Reinforced Multi-Expert Framework for Multiplex
Influence Maximization | cs.SI cs.AI | In social online platforms, identifying influential seed users to maximize
influence spread is crucial, as it can greatly diminish the cost and effort
required for information dissemination. While effective, traditional methods
for Multiplex Influence Maximization (MIM) have reached their performance
limits, prompting the emergence of learning-based approaches. These novel
methods aim for better generalization and scalability for more sizable graphs
but face significant challenges, such as (1) inability to handle unknown
diffusion patterns and (2) reliance on high-quality training samples. To
address these issues, we propose the Reinforced Expert Maximization framework
(REM). REM leverages a Propagation Mixture of Experts technique to encode
dynamic propagation of large multiplex networks effectively in order to
generate enhanced influence propagation. Noticeably, REM treats a generative
model as a policy to autonomously generate different seed sets and learn how to
improve them from a Reinforcement Learning perspective. Extensive experiments
on several real-world datasets demonstrate that REM surpasses state-of-the-art
methods in terms of influence spread, scalability, and inference time in
influence maximization tasks.
|
2501.00782 | Navigating Nuance: In Quest for Political Truth | cs.CL cs.IR | This study investigates several nuanced rationales for countering the
rise of political bias. We evaluate the performance of the Llama-3 (70B)
language model on the Media Bias Identification Benchmark (MBIB), based on a
novel prompting technique that incorporates subtle reasons for identifying
political leaning. Our findings underscore the challenges of detecting
political bias and highlight the potential of transfer learning methods to
enhance future models. Through our framework, we achieve performance comparable
to the supervised, fully fine-tuned ConvBERT model, which is the
state-of-the-art model for the political bias task on MBIB, performing best
among the baseline models. By demonstrating the effectiveness of our
approach, we contribute to the development of more robust tools for mitigating
the spread of misinformation and polarization. Our code and dataset are made
publicly available on GitHub.
|