| id | title | categories | abstract |
|---|---|---|---|
2501.02941 | A Coupled PFEM-DEM Model for Fluid-Granular Flows with Free-Surface
Dynamics Applied to Landslides | cs.CE | Free surface and granular fluid mechanics problems combine the challenges of
fluid dynamics with aspects of granular behaviour. This type of problem is
particularly relevant in contexts such as the flow of sediments in rivers, the
movement of granular soils in reservoirs, or the interactions between a fluid
and granular materials in industrial processes such as silos. The numerical
simulation of these phenomena is challenging because the solution depends not
only on the multiple phases that strongly interact with each other, but also on
the need to describe the geometric evolution of the different interfaces. This
paper presents an approach to the simulation of fluid-granular phenomena
involving strongly deforming free surfaces. The Discrete Element Method (DEM)
is combined with the Particle Finite Element Method (PFEM) and the fluid-grain
interface is treated by a two-way coupling between the two phases. The
fluid-air interface is solved by a free surface model. The geometric and
topological variations are therefore naturally provided by the full Lagrangian
description of all phases. The approach is validated on benchmark test cases
such as two-phase dam failures and then applied to a real landslide problem.
|
2501.02942 | Improved Approximation Algorithms for Low-Rank Problems Using
Semidefinite Optimization | math.OC cs.LG math.PR | Inspired by the impact of the Goemans-Williamson algorithm on combinatorial
optimization, we construct an analogous relax-then-sample strategy for low-rank
optimization problems. First, for orthogonally constrained quadratic
optimization problems, we derive a semidefinite relaxation and a randomized
rounding scheme, which obtains provably near-optimal solutions, mimicking the
blueprint from Goemans and Williamson for the Max-Cut problem. We then extend
our approach to generic low-rank optimization problems by developing new
semidefinite relaxations that are both tighter and more broadly applicable than
those in prior works. Although our original proposal introduces large
semidefinite matrices as decision variables, we show that most of the blocks in
these matrices can be safely omitted without altering the optimal value, hence
improving the scalability of our approach. Using several examples (including
matrix completion, basis pursuit, and reduced-rank regression), we show how to
reduce the size of our relaxation even further. Finally, we numerically
illustrate the effectiveness and scalability of our relaxation and our sampling
scheme on orthogonally constrained quadratic optimization and matrix completion
problems.
|
2501.02945 | The Tabular Foundation Model TabPFN Outperforms Specialized Time Series
Forecasting Models Based on Simple Features | cs.LG | Foundation models have become popular in forecasting due to their ability to
make accurate predictions, even with minimal fine-tuning on specific datasets.
In this paper, we demonstrate how the newly released regression variant of
TabPFN, a general tabular foundation model, can be applied to time series
forecasting. We propose a straightforward approach, TabPFN-TS, which pairs
TabPFN with simple feature engineering to achieve strong forecasting
performance. Despite its simplicity and with only 11M parameters, TabPFN-TS
outperforms Chronos-Mini, a model of similar size, and matches or even slightly
outperforms Chronos-Large, which has 65-fold more parameters. A key strength of
our method lies in its reliance solely on artificial data during pre-training,
avoiding the need for large training datasets and eliminating the risk of
benchmark contamination.
|
2501.02949 | MSA-CNN: A Lightweight Multi-Scale CNN with Attention for Sleep Stage
Classification | cs.LG eess.SP | Recent advancements in machine learning-based signal analysis, coupled with
open data initiatives, have fuelled efforts in automatic sleep stage
classification. Despite the proliferation of classification models, few have
prioritised reducing model complexity, which is a crucial factor for practical
applications. In this work, we introduce Multi-Scale and Attention
Convolutional Neural Network (MSA-CNN), a lightweight architecture featuring as
few as ~10,000 parameters. MSA-CNN leverages a novel multi-scale module
employing complementary pooling to eliminate redundant filter parameters and
dense convolutions. Model complexity is further reduced by separating temporal
and spatial feature extraction and using cost-effective global spatial
convolutions. This separation of tasks not only reduces model complexity but
also mirrors the approach used by human experts in sleep stage scoring. We
evaluated both small and large configurations of MSA-CNN against nine
state-of-the-art baseline models across three public datasets, treating
univariate and multivariate models separately. Our evaluation, based on
repeated cross-validation and re-evaluation of all baseline models,
demonstrated that the large MSA-CNN outperformed all baseline models on all
three datasets in terms of accuracy and Cohen's kappa, despite its
significantly reduced parameter count. Lastly, we explored various model
variants and conducted an in-depth analysis of the key modules and techniques,
providing deeper insights into the underlying mechanisms. The code for our
models, baselines, and evaluation procedures is available at
https://github.com/sgoerttler/MSA-CNN.
|
2501.02950 | Key-value memory in the brain | q-bio.NC cs.AI cs.LG | Classical models of memory in psychology and neuroscience rely on
similarity-based retrieval of stored patterns, where similarity is a function
of retrieval cues and the stored patterns. While parsimonious, these models do
not allow distinct representations for storage and retrieval, despite their
distinct computational demands. Key-value memory systems, in contrast,
distinguish representations used for storage (values) and those used for
retrieval (keys). This allows key-value memory systems to optimize
simultaneously for fidelity in storage and discriminability in retrieval. We
review the computational foundations of key-value memory, its role in modern
machine learning systems, related ideas from psychology and neuroscience,
applications to a number of empirical puzzles, and possible biological
implementations.
|
2501.02955 | MotionBench: Benchmarking and Improving Fine-grained Video Motion
Understanding for Vision Language Models | cs.CV | In recent years, vision language models (VLMs) have made significant
advancements in video understanding. However, a crucial capability -
fine-grained motion comprehension - remains under-explored in current
benchmarks. To address this gap, we propose MotionBench, a comprehensive
evaluation benchmark designed to assess the fine-grained motion comprehension
of video understanding models. MotionBench evaluates models' motion-level
perception through six primary categories of motion-oriented question types and
includes data collected from diverse sources, ensuring a broad representation
of real-world video content. Experimental results reveal that existing VLMs
perform poorly in understanding fine-grained motions. To enhance VLMs' ability
to perceive fine-grained motion within the limited sequence length of the LLM, we
conduct extensive experiments reviewing VLM architectures optimized for video
feature compression and propose a novel and efficient Through-Encoder (TE)
Fusion method. Experiments show that higher frame rate inputs and TE Fusion
yield improvements in motion understanding, yet there is still substantial room
for enhancement. Our benchmark aims to guide and motivate the development of
more capable video understanding models, emphasizing the importance of
fine-grained motion comprehension. Project page: https://motion-bench.github.io .
|
2501.02961 | A Point Process Model for Optimizing Repeated Personalized Action
Delivery to Users | stat.ML cs.LG | This paper provides a formalism for an important class of causal inference
problems inspired by user-advertiser interactions in online advertising. This
formalism is then specialized to an extension of temporal marked point
processes, and neural point processes are suggested as practical solutions
to some interesting special cases.
|
2501.02962 | SceneVTG++: Controllable Multilingual Visual Text Generation in the Wild | cs.CV | Generating visual text in natural scene images is a challenging task with
many unsolved problems. Different from generating text on artificially designed
images (such as posters, covers, cartoons, etc.), the text in natural scene
images needs to meet the following four key criteria: (1) Fidelity: the
generated text should appear as realistic as a photograph and be completely
accurate, with no errors in any of the strokes. (2) Reasonability: the text
should be generated on reasonable carrier areas (such as boards, signs, walls,
etc.), and the generated text content should also be relevant to the scene. (3)
Utility: the generated text can facilitate the training of natural scene OCR
(Optical Character Recognition) tasks. (4) Controllability: the attributes of
the text (such as font and color) should be controllable as needed. In this
paper, we propose a two-stage method, SceneVTG++, which simultaneously
satisfies the four aspects mentioned above. SceneVTG++ consists of a Text
Layout and Content Generator (TLCG) and a Controllable Local Text Diffusion
(CLTD). The former utilizes the world knowledge of multimodal large language
models to find reasonable text areas and recommend text content according to
the natural scene background images, while the latter generates controllable
multilingual text based on the diffusion model. Through extensive experiments,
we respectively verified the effectiveness of TLCG and CLTD, and demonstrated
the state-of-the-art text generation performance of SceneVTG++. In addition,
the generated images have superior utility in OCR tasks like text detection and
text recognition. Code and datasets will be made available.
|
2501.02964 | Socratic Questioning: Learn to Self-guide Multimodal Reasoning in the
Wild | cs.CV cs.AI | Complex visual reasoning remains a key challenge today. Typically, the
challenge is tackled using methodologies such as Chain of Thought (CoT) and
visual instruction tuning. However, how to organically combine these two
methodologies for greater success remains unexplored. Also, issues like
hallucinations and high training cost still need to be addressed. In this work,
we devise an innovative multi-round training and reasoning framework suitable
for lightweight Multimodal Large Language Models (MLLMs). Our self-questioning
approach heuristically guides MLLMs to focus on visual clues relevant to the
target problem, reducing hallucinations and enhancing the model's ability to
describe fine-grained image details. This ultimately enables the model to
perform well in complex visual reasoning and question-answering tasks. We have
named this framework Socratic Questioning (SQ). To facilitate future research,
we create a multimodal mini-dataset named CapQA, which includes 1k images of
fine-grained activities, for visual instruction tuning and evaluation. Our
proposed SQ method leads to a 31.2% improvement in the hallucination score. Our
extensive experiments on various benchmarks demonstrate SQ's remarkable
capabilities in heuristic self-questioning, zero-shot visual reasoning and
hallucination mitigation. Our model and code will be publicly available.
|
2501.02966 | Human Gaze Boosts Object-Centered Representation Learning | cs.CV cs.LG | Recent self-supervised learning (SSL) models trained on human-like egocentric
visual inputs substantially underperform on image recognition tasks compared to
humans. These models train on raw, uniform visual inputs collected from
head-mounted cameras. This is different from humans, as the anatomical
structure of the retina and visual cortex relatively amplifies the central
visual information, i.e. around humans' gaze location. This selective
amplification in humans likely aids in forming object-centered visual
representations. Here, we investigate whether focusing on central visual
information boosts egocentric visual object learning. We simulate 5 months of
egocentric visual experience using the large-scale Ego4D dataset and generate
gaze locations with a human gaze prediction model. To account for the
importance of central vision in humans, we crop the visual area around the gaze
location. Finally, we train a time-based SSL model on these modified inputs.
Our experiments demonstrate that focusing on central vision leads to better
object-centered representations. Our analysis shows that the SSL model
leverages the temporal dynamics of the gaze movements to build stronger visual
representations. Overall, our work marks a significant step toward bio-inspired
learning of visual representations.
|
2501.02968 | FlipedRAG: Black-Box Opinion Manipulation Attacks to Retrieval-Augmented
Generation of Large Language Models | cs.IR | Retrieval-Augmented Generation (RAG) addresses hallucination and real-time
constraints by dynamically retrieving relevant information from a knowledge
database to supplement the LLMs' input. When presented with a query, RAG
selects the most semantically similar texts from its knowledge bases and uses
them as context for the LLMs to generate more accurate responses. RAG also
creates a new attack surface, especially since RAG databases are frequently
sourced from public domains. While existing studies have predominantly focused
on optimizing RAG's performance and efficiency, emerging research has begun
addressing the security concerns associated with RAG. However, these works have
some limitations, typically focusing on either white-box methodologies or
heuristic-based black-box attacks. Furthermore, prior research has mainly
targeted simple factoid question answering, which is neither practically
challenging nor resistant to correction. In this paper, we unveil a more
realistic and threatening scenario: opinion manipulation for controversial
topics against RAG. Particularly, we propose a novel RAG black-box attack
method, termed FlipedRAG, which is transfer-based. By leveraging instruction
engineering, we obtain partial retrieval model outputs from the black-box RAG
system, facilitating the training of surrogate models to enhance the
effectiveness of opinion manipulation attacks. Extensive experimental results
confirm that our approach significantly enhances the average success rate of
opinion manipulation by 16.7%. It achieves an average of a 50% directional
change in the opinion polarity of RAG responses across four themes.
Additionally, it induces a 20% shift in user cognition. Furthermore, we discuss
the efficacy of potential defense mechanisms and conclude that they are
insufficient in mitigating this type of attack, highlighting the urgent need to
develop novel defensive strategies.
|
2501.02969 | LOHA: Direct Graph Spectral Contrastive Learning Between Low-pass and
High-pass Views | cs.LG | Spectral Graph Neural Networks effectively handle graphs with different
homophily levels, with low-pass filters mining feature smoothness and high-pass
filters capturing differences. While these distinct filters could naturally form
two opposite views for self-supervised learning, the commonalities between the
counterparts for the same node remain unexplored, leading to suboptimal
performance. In this paper, a simple yet effective self-supervised contrastive
framework, LOHA, is proposed to address this gap. LOHA optimally leverages
low-pass and high-pass views by embracing "harmony in diversity". Rather than
solely maximizing the difference between these distinct views, which may lead
to feature separation, LOHA harmonizes the diversity by treating the
propagation of graph signals from both views as a composite feature.
Specifically, a novel high-dimensional feature named spectral signal trend is
proposed to serve as the basis for the composite feature, which remains
relatively unaffected by changing filters and focuses solely on original
feature differences. LOHA achieves an average performance improvement of 2.8%
over runner-up models on 9 real-world datasets with varying homophily levels.
Notably, LOHA even surpasses fully-supervised models on several datasets, which
underscores the potential of LOHA in advancing the efficacy of spectral GNNs
for diverse graph structures.
|
2501.02971 | Proof-of-Data: A Consensus Protocol for Collaborative Intelligence | cs.CR cs.AI | Existing research on federated learning has been focused on the setting where
learning is coordinated by a centralized entity. Yet the greatest potential of
future collaborative intelligence would be unleashed in a more open and
democratized setting with no central entity in a dominant role, referred to as
"decentralized federated learning". New challenges arise accordingly in
achieving both correct model training and fair reward allocation with
collective effort among all participating nodes, especially under the threat of
Byzantine nodes jeopardising both tasks.
In this paper, we propose a blockchain-based decentralized Byzantine
fault-tolerant federated learning framework based on a novel Proof-of-Data
(PoD) consensus protocol to resolve both the "trust" and "incentive"
components. By decoupling model training and contribution accounting, PoD is
able to enjoy not only the benefits of learning efficiency and system liveness
from asynchronous societal-scale PoW-style learning but also the finality of
consensus and reward allocation from epoch-based BFT-style voting. To mitigate
false reward claims by data forgery from Byzantine attacks, a privacy-aware
data verification and contribution-based reward allocation mechanism is
designed to complete the framework. Our evaluation results show that PoD
demonstrates performance in model training close to that of the centralized
counterpart while achieving trust in consensus and fairness for reward
allocation with a fault tolerance ratio of 1/3.
|
2501.02973 | HaWoR: World-Space Hand Motion Reconstruction from Egocentric Videos | cs.CV | Despite advances in 3D hand pose estimation, current methods predominantly
focus on single-image 3D hand reconstruction in the camera frame, overlooking
the world-space motion of the hands. This limitation prohibits their direct use
in egocentric video settings, where hands and camera are continuously in
motion. In this work, we propose HaWoR, a high-fidelity method for hand motion
reconstruction in world coordinates from egocentric videos. We propose to
decouple the task by reconstructing the hand motion in the camera space and
estimating the camera trajectory in the world coordinate system. To achieve
precise camera trajectory estimation, we propose an adaptive egocentric SLAM
framework that addresses the shortcomings of traditional SLAM methods,
providing robust performance under challenging camera dynamics. To ensure
robust hand motion trajectories, even when the hands move out of the view frustum,
we devise a novel motion infiller network that effectively completes the
missing frames of the sequence. Through extensive quantitative and qualitative
evaluations, we demonstrate that HaWoR achieves state-of-the-art performance on
both hand motion reconstruction and world-frame camera trajectory estimation
under different egocentric benchmark datasets. Code and models are available on
https://hawor-project.github.io/ .
|
2501.02975 | Fuzzy Granule Density-Based Outlier Detection with Multi-Scale Granular
Balls | cs.LG cs.AI | Outlier detection refers to the identification of anomalous samples that
deviate significantly from the distribution of normal data and has been
extensively studied and used in a variety of practical tasks. However, most
unsupervised outlier detection methods are carefully designed to detect
specified outliers, while real-world data may be entangled with different types
of outliers. In this study, we propose a fuzzy rough sets-based multi-scale
outlier detection method to identify various types of outliers. Specifically, a
novel fuzzy rough sets-based method that integrates relative fuzzy granule
density is first introduced to improve the capability of detecting local
outliers. Then, a multi-scale view generation method based on granular-ball
computing is proposed to collaboratively identify group outliers at different
levels of granularity. Moreover, reliable outliers and inliers determined by
the three-way decision are used to train a weighted support vector machine to
further improve the performance of outlier detection. The proposed method
innovatively transforms unsupervised outlier detection into a semi-supervised
classification problem and for the first time explores the fuzzy rough
sets-based outlier detection from the perspective of multi-scale granular
balls, allowing for high adaptability to different types of outliers. Extensive
experiments carried out on both artificial and UCI datasets demonstrate that
the proposed outlier detection method significantly outperforms the
state-of-the-art methods, improving the results by at least 8.48% in terms of
the Area Under the ROC Curve (AUROC) index. The source code is released at
https://github.com/Xiaofeng-Tan/MGBOD.
|
2501.02976 | STAR: Spatial-Temporal Augmentation with Text-to-Video Models for
Real-World Video Super-Resolution | cs.CV | Image diffusion models have been adapted for real-world video
super-resolution to tackle over-smoothing issues in GAN-based methods. However,
these models struggle to maintain temporal consistency, as they are trained on
static images, limiting their ability to capture temporal dynamics effectively.
Integrating text-to-video (T2V) models into video super-resolution for improved
temporal modeling is straightforward. However, two key challenges remain:
artifacts introduced by complex degradations in real-world scenarios, and
compromised fidelity due to the strong generative capacity of powerful T2V
models (e.g., CogVideoX-5B). To enhance the spatio-temporal quality of
restored videos, we introduce STAR (Spatial-Temporal Augmentation with T2V
models for Real-world video super-resolution), a novel approach that leverages
T2V models for real-world video super-resolution, achieving realistic spatial
details and robust temporal consistency. Specifically, we introduce a Local
Information Enhancement Module (LIEM) before the global attention block to
enrich local details and mitigate degradation artifacts. Moreover, we propose a
Dynamic Frequency (DF) Loss to reinforce fidelity, guiding the model to focus
on different frequency components across diffusion steps. Extensive experiments
demonstrate that STAR outperforms state-of-the-art methods on both
synthetic and real-world datasets.
|
2501.02977 | CAMP: Collaborative Attention Model with Profiles for Vehicle Routing
Problems | cs.MA cs.AI | The profiled vehicle routing problem (PVRP) is a generalization of the
heterogeneous capacitated vehicle routing problem (HCVRP) in which the
objective is to optimize the routes of vehicles to serve client demands subject
to different vehicle profiles, with each having a preference or constraint on a
per-client basis. While existing learning methods have shown promise for
solving the HCVRP in real-time, no learning method exists to solve the more
practical and challenging PVRP. In this paper, we propose a Collaborative
Attention Model with Profiles (CAMP), a novel approach that learns efficient
solvers for PVRP using multi-agent reinforcement learning. CAMP employs a
specialized attention-based encoder architecture to embed profiled client
embeddings in parallel for each vehicle profile. We design a communication
layer between agents for collaborative decision-making across profiled
embeddings at each decoding step and a batched pointer mechanism to attend to
the profiled embeddings to evaluate the likelihood of the next actions. We
evaluate CAMP on two variants of PVRPs: PVRP with preferences, which explicitly
influence the reward function, and PVRP with zone constraints with different
numbers of agents and clients, demonstrating that our learned solvers achieve
competitive results compared to both classical methods and state-of-the-art
neural multi-agent models in terms of solution quality and computational efficiency.
We make our code openly available at https://github.com/ai4co/camp.
|
2501.02979 | Registering Source Tokens to Target Language Spaces in Multilingual
Neural Machine Translation | cs.CL | Multilingual neural machine translation (MNMT) aims for arbitrary
translations across multiple languages. Although MNMT-specific models trained
by parallel data offer low costs in training and deployment, their performance
consistently lags behind that of large language models (LLMs). In this work, we
introduce registering, a novel method that enables a small MNMT-specific model
to compete with LLMs. Specifically, we insert a set of artificial tokens
specifying the target language, called registers, into the input sequence
between the source and target tokens. By modifying the attention mask, the
target token generation only pays attention to the activation of registers,
representing the source tokens in the target language space. Experiments on
EC-40, a large-scale benchmark, show that our method advances the
state-of-the-art of MNMT. We further pre-train two models, namely MITRE
(multilingual translation with registers), on 9.3 billion sentence pairs across
24 languages collected from public corpora. One of them, MITRE-913M, outperforms
NLLB-3.3B, achieves comparable performance with commercial LLMs, and shows
strong adaptability in fine-tuning. Finally, we open-source our models to
facilitate further research and development in MNMT:
https://github.com/zhiqu22/mitre.
|
2501.02981 | CONTINUUM: Detecting APT Attacks through Spatial-Temporal Graph Neural
Networks | cs.CR cs.AI cs.NI | Advanced Persistent Threats (APTs) represent a significant challenge in
cybersecurity due to their sophisticated and stealthy nature. Traditional
Intrusion Detection Systems (IDS) often fall short in detecting these
multi-stage attacks. Recently, Graph Neural Networks (GNNs) have been employed
to enhance IDS capabilities by analyzing the complex relationships within
networked data. However, existing GNN-based solutions are hampered by high
false positive rates and substantial resource consumption. In this paper, we
present a novel IDS designed to detect APTs using a Spatio-Temporal Graph
Neural Network Autoencoder. Our approach leverages spatial information to
understand the interactions between entities within a graph and temporal
information to capture the evolution of the graph over time. This dual
perspective is crucial for identifying the sequential stages of APTs.
Furthermore, to address privacy and scalability concerns, we deploy our
architecture in a federated learning environment. This setup ensures that local
data remains on-premises while encrypted model weights are shared and aggregated
using homomorphic encryption, maintaining data privacy and security. Our
evaluation shows that this system effectively detects APTs with lower false
positive rates and optimized resource usage compared to existing methods,
highlighting the potential of spatio-temporal analysis and federated learning
in enhancing cybersecurity defenses.
|
2501.02982 | A Bio-Inspired Research Paradigm of Collision Perception Neurons
Enabling Neuro-Robotic Integration: The LGMD Case | cs.NE cs.AI q-bio.NC | Compared to human vision, insect visual systems excel at rapid and precise
collision detection, despite relying on only tens of thousands of neurons
organized through a few neuropils. This efficiency makes them an attractive
model system for developing artificial collision-detecting systems.
Specifically, researchers have identified collision-selective neurons in the
locust's optic lobe, called lobula giant movement detectors (LGMDs), which
respond specifically to approaching objects. Research on LGMD neurons began
in the early 1970s. Initially, due to their large size, these neurons were
identified as motion detectors, but their role as looming detectors was
recognized over time. Since then, progress in neuroscience, computational
modeling of LGMD's visual neural circuits, and LGMD-based robotics has advanced
in tandem, each field supporting and driving the others. Today, with a deeper
understanding of LGMD neurons, LGMD-based models have significantly improved
collision-free navigation in mobile robots including ground and aerial robots.
This review highlights recent developments in LGMD research from the
perspectives of neuroscience, computational modeling, and robotics. It
emphasizes a biologically plausible research paradigm, where insights from
neuroscience inform real-world applications, which would in turn validate and
advance neuroscience. With strong support from extensive research and growing
application demand, this paradigm has reached a mature stage and demonstrates
versatility across different areas of neuroscience research, thereby enhancing
our understanding of the interconnections between neuroscience, computational
modeling, and robotics. Furthermore, other motion-sensitive neurons have also
shown promising potential for adopting this research paradigm.
|
2501.02989 | Classifier Weighted Mixture models | stat.ML cs.LG | This paper proposes an extension of standard mixture stochastic models, by
replacing the constant mixture weights with functional weights defined using a
classifier. Classifier Weighted Mixtures enable straightforward density
evaluation, explicit sampling, and enhanced expressivity in variational
estimation problems, without increasing either the number of components or the
complexity of the mixture components.
|
2501.02990 | SurgRIPE challenge: Benchmark of Surgical Robot Instrument Pose
Estimation | cs.CV cs.RO | Accurate instrument pose estimation is a crucial step towards the future of
robotic surgery, enabling applications such as autonomous surgical task
execution. Vision-based methods for surgical instrument pose estimation provide
a practical approach to tool tracking, but they often require markers to be
attached to the instruments. Recently, more research has focused on the
development of marker-less methods based on deep learning. However, acquiring
realistic surgical data, with ground truth instrument poses, required for deep
learning training, is challenging. To address the issues in surgical instrument
pose estimation, we introduce the Surgical Robot Instrument Pose Estimation
(SurgRIPE) challenge, hosted at the 26th International Conference on Medical
Image Computing and Computer-Assisted Intervention (MICCAI) in 2023. The
objectives of this challenge are: (1) to provide the surgical vision community
with realistic surgical video data paired with ground truth instrument poses,
and (2) to establish a benchmark for evaluating markerless pose estimation
methods. The challenge led to the development of several novel algorithms that
showcased improved accuracy and robustness over existing methods. The
performance evaluation study on the SurgRIPE dataset highlights the potential
of these advanced algorithms to be integrated into robotic surgery systems,
paving the way for more precise and autonomous surgical procedures. The
SurgRIPE challenge has successfully established a new benchmark for the field,
encouraging further research and development in surgical robot instrument pose
estimation.
|
2501.02992 | GLFC: Unified Global-Local Feature and Contrast Learning with
Mamba-Enhanced UNet for Synthetic CT Generation from CBCT | eess.IV cs.AI cs.CV | Generating synthetic Computed Tomography (CT) images from Cone Beam Computed
Tomography (CBCT) is desirable for improving the image quality of CBCT.
Existing synthetic CT (sCT) generation methods using Convolutional Neural
Networks (CNN) and Transformers often face difficulties in effectively
capturing both global and local features and contrasts for high-quality sCT
generation. In this work, we propose a Global-Local Feature and Contrast
learning (GLFC) framework for sCT generation. First, a Mamba-Enhanced UNet
(MEUNet) is introduced by integrating Mamba blocks into the skip connections of
a high-resolution UNet for effective global and local feature learning. Second,
we propose a Multiple Contrast Loss (MCL) that calculates synthetic loss at
different intensity windows to improve quality for both soft tissues and bone
regions. Experiments on the SynthRAD2023 dataset demonstrate that GLFC improved
the SSIM of sCT from 77.91% to 91.50% compared with the original CBCT, and
significantly outperformed several existing methods for sCT generation. The
code is available at https://github.com/HiLab-git/GLFC
|
2501.02994 | NeuroPMD: Neural Fields for Density Estimation on Product Manifolds | stat.ML cs.LG | We propose a novel deep neural network methodology for density estimation on
product Riemannian manifold domains. In our approach, the network directly
parameterizes the unknown density function and is trained using a penalized
maximum likelihood framework, with a penalty term formed using manifold
differential operators. The network architecture and estimation algorithm are
carefully designed to handle the challenges of high-dimensional product
manifold domains, effectively mitigating the curse of dimensionality that
limits traditional kernel and basis expansion estimators, as well as overcoming
the convergence issues encountered by non-specialized neural network methods.
Extensive simulations and a real-world application to brain structural
connectivity data highlight the clear advantages of our method over the
competing alternatives.
|
2501.02997 | CALM: Curiosity-Driven Auditing for Large Language Models | cs.AI cs.CL | Auditing Large Language Models (LLMs) is a crucial and challenging task. In
this study, we focus on auditing black-box LLMs without access to their
parameters, only to the provided service. We treat this type of auditing as a
black-box optimization problem where the goal is to automatically uncover
input-output pairs of the target LLMs that exhibit illegal, immoral, or unsafe
behaviors. For instance, we may seek a non-toxic input that the target LLM
responds to with a toxic output, or an input that induces a hallucinatory
response from the target LLM involving politically sensitive individuals. This
black-box optimization is challenging due to the scarcity of feasible points,
the discrete nature of the prompt space, and the large search space. To address
these challenges, we propose Curiosity-Driven Auditing for Large Language
Models (CALM), which uses intrinsically motivated reinforcement learning to
finetune an LLM as the auditor agent to uncover potential harmful and biased
input-output pairs of the target LLM. CALM successfully identifies derogatory
completions involving celebrities and uncovers inputs that elicit specific
names under the black-box setting. This work offers a promising direction for
auditing black-box LLMs. Our code is available at
https://github.com/x-zheng16/CALM.git.
|
2501.03005 | PiLaMIM: Toward Richer Visual Representations by Integrating Pixel and
Latent Masked Image Modeling | cs.CV | In Masked Image Modeling (MIM), two primary methods exist: Pixel MIM and
Latent MIM, each utilizing different reconstruction targets, raw pixels and
latent representations, respectively. Pixel MIM tends to capture low-level
visual details such as color and texture, while Latent MIM focuses on
high-level semantics of an object. However, these distinct strengths of each
method can lead to suboptimal performance in tasks that rely on a particular
level of visual features. To address this limitation, we propose PiLaMIM, a
unified framework that combines Pixel MIM and Latent MIM to integrate their
complementary strengths. Our method uses a single encoder along with two
distinct decoders: one for predicting pixel values and another for latent
representations, ensuring the capture of both high-level and low-level visual
features. We further integrate the CLS token into the reconstruction process to
aggregate global context, enabling the model to capture more semantic
information. Extensive experiments demonstrate that PiLaMIM outperforms key
baselines such as MAE, I-JEPA and BootMAE in most cases, proving its
effectiveness in extracting richer visual representations.
|
2501.03006 | TransPixeler: Advancing Text-to-Video Generation with Transparency | cs.CV | Text-to-video generative models have made significant strides, enabling
diverse applications in entertainment, advertising, and education. However,
generating RGBA video, which includes alpha channels for transparency, remains
a challenge due to limited datasets and the difficulty of adapting existing
models. Alpha channels are crucial for visual effects (VFX), allowing
transparent elements like smoke and reflections to blend seamlessly into
scenes. We introduce TransPixeler, a method to extend pretrained video models
for RGBA generation while retaining the original RGB capabilities. TransPixeler
leverages a diffusion transformer (DiT) architecture, incorporating
alpha-specific tokens and using LoRA-based fine-tuning to jointly generate RGB
and alpha channels with high consistency. By optimizing attention mechanisms,
TransPixeler preserves the strengths of the original RGB model and achieves
strong alignment between RGB and alpha channels despite limited training data.
Our approach effectively generates diverse and consistent RGBA videos,
advancing the possibilities for VFX and interactive content creation.
|
2501.03008 | Quality Estimation based Feedback Training for Improving Pronoun
Translation | cs.CL cs.AI | Pronoun translation is a longstanding challenge in neural machine translation
(NMT), often requiring inter-sentential context to ensure linguistic accuracy.
To address this, we introduce ProNMT, a novel framework designed to enhance
pronoun and overall translation quality in context-aware machine translation
systems. ProNMT leverages Quality Estimation (QE) models and a unique Pronoun
Generation Likelihood-Based Feedback mechanism to iteratively fine-tune
pre-trained NMT models without relying on extensive human annotations. The
framework combines QE scores with pronoun-specific rewards to guide training,
ensuring improved handling of linguistic nuances. Extensive experiments
demonstrate significant gains in pronoun translation accuracy and general
translation quality across multiple metrics. ProNMT offers an efficient,
scalable, and context-aware approach to improving NMT systems, particularly in
translating context-dependent elements like pronouns.
|
2501.03012 | Analyzing Fine-tuning Representation Shift for Multimodal LLMs Steering
alignment | cs.AI cs.CL cs.CV | Multimodal LLMs have reached remarkable levels of proficiency in
understanding multimodal inputs, driving extensive research to develop
increasingly powerful models. However, much less attention has been paid to
understanding and explaining the underlying mechanisms of these models. Most
existing explainability research examines these models only in their final
states, overlooking the dynamic representational shifts that occur during
training. In this work, we systematically analyze the evolution of hidden state
representations to reveal how fine-tuning alters the internal structure of a
model to specialize in new multimodal tasks. Using a concept-based approach, we
map hidden states to interpretable visual and textual concepts, enabling us to
trace changes in encoded concepts across modalities as training progresses. We
also demonstrate the use of shift vectors to capture these concept changes.
These shift vectors allow us to recover fine-tuned concepts by shifting those
in the original model. Finally, we explore the practical impact of our findings
on model steering, showing that we can adjust the behavior of multimodal LLMs
without any training, such as modifying answer types, caption styles, or biasing the
model toward specific responses. Our work sheds light on how multimodal
representations evolve through fine-tuning and offers a new perspective for
interpreting model adaptation in multimodal tasks. The code for this project is
publicly available at https://github.com/mshukor/xl-vlms.
|
2501.03016 | Classification of LCD and self-dual codes over a finite non-unital local
ring | cs.IT math.IT | This work explores LCD and self-dual codes over a noncommutative non-unital
ring $ E_p= \langle r,s ~|~ pr =ps=0,~ r^2=r,~ s^2=s,~ rs=r,~ sr=s \rangle$ of
order $p^2$ where $p$ is a prime. Initially, we study the monomial equivalence
of two free $E_p$-linear codes. In addition, a necessary and sufficient
condition is derived for a free $E_p$-linear code to be MDS and almost MDS
(AMDS). Then, we use these results to classify MDS and AMDS LCD codes over
$E_2$ and $E_3$ under monomial equivalence for lengths up to $6$. Subsequently,
we study left self-dual codes over the ring $E_p$ and classify MDS and AMDS
left self-dual codes over $E_2$ and $E_3$ for lengths up to $12$. Finally, we
study self-dual codes over the ring $E_p$ and classify MDS and AMDS self-dual
codes over $E_2$ and $E_3$ for smaller lengths.
|
2501.03017 | Convexity in ReLU Neural Networks: beyond ICNNs? | cs.LG | Convex functions and their gradients play a critical role in mathematical
imaging, from proximal optimization to Optimal Transport. The success of deep
learning has led many to use learning-based methods, where fixed functions or
operators are replaced by learned neural networks. Regardless of their
empirical superiority, establishing rigorous guarantees for these methods often
requires imposing structural constraints on neural architectures, in
particular convexity. The most popular way to do so is to use so-called Input
Convex Neural Networks (ICNNs). In order to explore the expressivity of ICNNs,
we provide necessary and sufficient conditions for a ReLU neural network to be
convex. Such characterizations are based on products of weights and activations,
and take a simple form for any architecture in the path-lifting framework. As
particular applications, we study our characterizations in depth for 1 and
2-hidden-layer neural networks: we show that every convex function implemented
by a 1-hidden-layer ReLU network can be also expressed by an ICNN with the same
architecture; however, this property no longer holds with more layers. Finally,
we provide a numerical procedure that allows an exact check of convexity for
ReLU neural networks with a large number of affine regions.
|
2501.03018 | Probably Correct Optimal Stable Matching for Two-Sided Markets Under
Uncertainty | cs.LG | We consider a learning problem for the stable marriage model under unknown
preferences for the left side of the market. We focus on the centralized case,
where at each time step, an online platform matches the agents, and obtains a
noisy evaluation reflecting their preferences. Our aim is to quickly identify
the stable matching that is left-side optimal, rendering this a pure
exploration problem with bandit feedback. We specifically aim to find Probably
Correct Optimal Stable Matchings and present several bandit algorithms to do
so. Our findings provide a foundational understanding of how to efficiently
gather and utilize preference information to identify the optimal stable
matching in two-sided markets under uncertainty. An experimental analysis on
synthetic data complements theoretical results on sample complexities for the
proposed methods.
|
2501.03020 | AC-aware Optimization Framework for Under-Frequency Load Shedding | eess.SY cs.SY | Under-frequency load shedding (UFLS) prevents system collapse during large
disturbances. Increased penetration of distributed energy resources (DERs) and
reduced system inertia make it challenging to design a static UFLS scheme,
which relies on preset frequency thresholds and load shed fractions to meet
design criteria across all possible operating conditions. Due to non-linearity
and tractability issues, previous adaptive UFLS schemes use simplified
tractable frequency models that overlook AC network effects such as
voltage-dependent load/generation. This paper leverages model order reduction
techniques to obtain a higher fidelity low-order model of system frequency
dynamics that captures AC network effects while incorporating turbine governor
action and their associated limits. The model is then used in a new AC-aware
predictive optimization framework to adapt UFLS setpoints periodically based on
current operating conditions while minimizing load shed. Validated on a
1,648-bus system with PSS/E simulations, the proposed method meets design
criteria under various operating conditions and disturbance scenarios.
Furthermore, the framework outperforms conventional static UFLS schemes and
adaptive UFLS schemes based on simplified dynamic models.
|
2501.03021 | A Trust-Guided Approach to MR Image Reconstruction with Side Information | eess.IV cs.CV cs.LG | Reducing MRI scan times can improve patient care and lower healthcare costs.
Many acceleration methods are designed to reconstruct diagnostic-quality images
from limited sets of acquired $\textit{k}$-space data. This task can be framed
as a linear inverse problem (LIP), where, as a result of undersampling, the
forward operator may become rank-deficient or exhibit small singular values.
This results in ambiguities in reconstruction, in which multiple generally
incorrect or non-diagnostic images can map to the same acquired data. To
address such ambiguities, it is crucial to incorporate prior knowledge, for
example in the form of regularization. Another form of prior knowledge less
commonly used in medical imaging is contextual side information garnered from
other sources than the current acquisition. Here, we propose the
$\textbf{T}$rust-$\textbf{G}$uided $\textbf{V}$ariational $\textbf{N}$etwork
$\textbf{(TGVN)}$, a novel end-to-end deep learning framework that effectively
integrates side information into LIPs. TGVN eliminates undesirable solutions
from the ambiguous space of the forward operator while remaining faithful to
the acquired data. We demonstrate its effectiveness in multi-coil,
multi-contrast MR image reconstruction, where incomplete or low-quality
measurements from one contrast are used as side information to reconstruct
high-quality images of another contrast from heavily under-sampled data. Our
method is robust across different contrasts, anatomies, and field strengths.
Compared to baselines that also utilize side information, TGVN achieves
superior image quality at challenging under-sampling levels, drastically
speeding up acquisition while minimizing hallucinations. Our approach is also
versatile enough to incorporate many different types of side information
(including previous scans or even text) into any LIP.
|
2501.03026 | Putnam's Critical and Explanatory Tendencies Interpreted from a Machine
Learning Perspective | cs.CY cs.AI cs.LG | Making sense of theory choice within normal science and across extraordinary science is
central to philosophy of science. The emergence of machine learning models has
the potential to act as a wrench in the gears of current debates. In this
paper, I will attempt to reconstruct the main movements that led to and came
out of Putnam's critical and explanatory tendency distinction, argue for the
biconditional necessity of the tendencies, and conceptualize that wrench
through a machine learning interpretation of my claim.
|
2501.03028 | Enabling Efficient Optimal Control of Reconfigurable Battery Systems by
Unifying System Modeling and Narrowing Search Space | eess.SY cs.SY physics.app-ph | Reconfigurable battery systems (RBSs) are emerging as a promising solution to
improving fault tolerance, charge and thermal balance, energy delivery, etc. To
optimize these performance metrics, high-dimensional nonlinear integer
programming problems need to be formulated and solved. During this process, it
is necessary to address multiple challenges stemming from nonlinear battery
characteristics, discrete switch states, dynamic system configurations, as well
as the curse of dimensionality inherent in large-scale systems. Thus, we
propose a unified modeling framework to accommodate various potential
configurations of an RBS and even to cover different RBS designs, significantly
facilitating the topology design and optimization problem formulation for RBSs.
Moreover, to solve the formulated RBS problems, the search space is tailored to
encompass only feasible switch state vectors (SSVs), thereby ensuring safe
system operation while
substantially curtailing search efforts. These proposed methods, focusing on
unifying the system modeling and narrowing the search space, lay a solid
foundation for effectively formulating and efficiently solving RBS optimal
control problems. The accuracy and effectiveness of the proposed methods are
demonstrated by simulation and experimental tests.
|
2501.03030 | DDRM-PR: Fourier Phase Retrieval using Denoising Diffusion Restoration
Models | eess.IV cs.CV | Diffusion models have demonstrated their utility as learned priors for
solving various inverse problems. However, most existing approaches are limited
to linear inverse problems. This paper exploits the efficient and unsupervised
posterior sampling framework of Denoising Diffusion Restoration Models (DDRM)
for the solution of the nonlinear phase retrieval problem, which requires
reconstructing an image from its noisy intensity-only measurements such as
Fourier intensity. The approach combines the model-based alternating-projection
methods with the DDRM to utilize pretrained unconditional diffusion priors for
phase retrieval. The performance is demonstrated through both simulations and
experimental data. Results demonstrate the potential of this approach for
improving the alternating-projection methods as well as its limitations.
|
2501.03035 | Quantization Meets Reasoning: Exploring LLM Low-Bit Quantization
Degradation for Mathematical Reasoning | cs.CL cs.AI | Large language models have achieved significant advancements in complex
mathematical reasoning benchmarks, such as MATH. However, their substantial
computational requirements present challenges for practical deployment. Model
quantization has emerged as an effective strategy to reduce memory usage and
computational costs by employing lower precision and bit-width representations.
In this study, we systematically evaluate the impact of quantization on
mathematical reasoning tasks. Our results demonstrate that aggressive
quantization methods like AWQ and GPTQ introduce up to 32.39% accuracy
degradation (average 11.31%) on Llama-3 models, particularly in numerical
computation and reasoning planning. To address this, we introduce a
multidimensional evaluation framework combining qualitative capability analysis
and quantitative error assessment. We further develop targeted recovery
strategies, showing that fine-tuning quantized models on only 545 task-specific
examples for 3 minutes on 4 GPUs effectively restores reasoning capabilities to
near full-precision levels. Additionally, our error assessment pipeline
achieves 98.9% accuracy in diagnosing and localizing errors across 3,366
failure cases, providing actionable insights for mitigating
quantization-induced degradation.
|
2501.03038 | Piano Transcription by Hierarchical Language Modeling with Pretrained
Roll-based Encoders | cs.SD cs.AI cs.LG eess.AS | Automatic Music Transcription (AMT), aiming to get musical notes from raw
audio, typically uses frame-level systems with piano-roll outputs or language
model (LM)-based systems with note-level predictions. However, frame-level
systems require manual thresholding, while LM-based systems struggle with
long sequences. In this paper, we propose a hybrid method combining pre-trained
roll-based encoders with an LM decoder to leverage the strengths of both
methods. Besides, our approach employs a hierarchical prediction strategy,
first predicting onset and pitch, then velocity, and finally offset. The
hierarchical prediction strategy reduces computational costs by breaking down
long sequences into different hierarchies. Evaluated on two benchmark
roll-based encoders, our method outperforms traditional piano-roll outputs by
0.01 and 0.022 in onset-offset-velocity F1 score, demonstrating its potential
as a performance-enhancing plug-in for arbitrary roll-based music transcription
encoders.
|
2501.03040 | ChronoSense: Exploring Temporal Understanding in Large Language Models
with Time Intervals of Events | cs.LG cs.CL | Large Language Models (LLMs) have achieved remarkable success in various NLP
tasks, yet they still face significant challenges in reasoning and arithmetic.
Temporal reasoning, a critical component of natural language understanding, has
raised increasing research attention. However, comprehensive testing of Allen's
interval relations (e.g., before, after, during) -- a fundamental framework for
temporal relationships -- remains underexplored. To fill this gap, we present
ChronoSense, a new benchmark for evaluating LLMs' temporal understanding. It
includes 16 tasks, focusing on identifying the Allen relation between two
temporal events and temporal arithmetic, using both abstract events and
real-world data from Wikidata. We assess the performance of seven recent LLMs
using this benchmark and the results indicate that models handle Allen
relations, even symmetrical ones, quite differently. Moreover, the findings
suggest that the models may rely on memorization to answer time-related
questions. Overall, the models' low performance highlights the need for
improved temporal understanding in LLMs and ChronoSense offers a robust
framework for future research in this area. Our dataset and the source code are
available at https://github.com/duyguislakoglu/chronosense.
|
2501.03041 | Group Shapley with Robust Significance Testing and Its Application to
Bond Recovery Rate Prediction | stat.ML cs.LG | We propose Group Shapley, a metric that extends the classical
individual-level Shapley value framework to evaluate the importance of feature
groups, addressing the structured nature of predictors commonly found in
business and economic data. More importantly, we develop a significance testing
procedure based on a three-cumulant chi-square approximation and establish the
asymptotic properties of the test statistics for Group Shapley values. Our
approach can effectively handle challenging scenarios, including sparse or
skewed distributions and small sample sizes, outperforming alternative tests
such as the Wald test. Simulations confirm that the proposed test maintains
robust empirical size and demonstrates enhanced power under diverse conditions.
To illustrate the method's practical relevance in advancing Explainable AI, we
apply our framework to bond recovery rate predictions using a global dataset
(1996-2023) comprising 2,094 observations and 98 features, grouped into 16
subgroups and five broader categories: bond characteristics, firm fundamentals,
industry-specific factors, market-related variables, and macroeconomic
indicators. Our results identify the market-related variables group as the most
influential. Furthermore, Lorenz curves and Gini indices reveal that Group
Shapley assigns feature importance more equitably compared to individual
Shapley values.
|
2501.03045 | Single-Channel Distance-Based Source Separation for Mobile GPU in
Outdoor and Indoor Environments | eess.AS cs.AI | This study emphasizes the significance of exploring distance-based source
separation (DSS) in outdoor environments. Unlike existing studies that
primarily focus on indoor settings, the proposed model is designed to capture
the unique characteristics of outdoor audio sources. It incorporates advanced
techniques, including a two-stage conformer block, a linear relation-aware
self-attention (RSA), and a TensorFlow Lite GPU delegate. While the linear RSA
may not capture physical cues as explicitly as the quadratic RSA, the linear
RSA enhances the model's context awareness, leading to improved performance on
DSS tasks, which require an understanding of physical cues in outdoor and indoor
environments. The experimental results demonstrated that the proposed model
overcomes the limitations of existing approaches and considerably enhances
energy efficiency and real-time inference speed on mobile devices.
|
2501.03049 | Convergence in On-line Learning of Static and Dynamic Systems | eess.SY cs.SY | The paper derives new conditions for global convergence of the adaptive
moment generation algorithm when applied for on-line, or equivalently,
recursive supervised learning of static and dynamic nonlinear systems. The
paper also proposes a difference-equation-based nonlinear dynamic model that
enforces {\em structure} and results in a new type of recurrent neural network.
The convergence analysis applies averaging using Ljung's associated
differential equation method. It is first proved that the asymptotic update
behaviour of the adaptive moment generation algorithm is equivalent to a scaled
stochastic gradient update for the standard hyper-parameter setting, or
equivalent to a sign-sign update strategy in case the internal filtering of the
algorithm is turned off. The analysis is concluded by proving global
convergence to the set of parameters that gives a correct input-output
description of the true system. The two hyper-parameter settings are evaluated
with a Monte-Carlo analysis when the adaptive moment generation algorithm is
applied for learning of nonlinear automotive cruise control dynamics. This
validates the correct operation of the structured recurrent neural network and
confirms the expected reduced convergence speed for the sign-sign update case.
|
2501.03053 | Dr. Tongue: Sign-Oriented Multi-label Detection for Remote Tongue
Diagnosis | eess.IV cs.CV | Tongue diagnosis is a vital tool in Western and Traditional Chinese Medicine,
providing key insights into a patient's health by analyzing tongue attributes.
The COVID-19 pandemic has heightened the need for accurate remote medical
assessments, emphasizing the importance of precise tongue attribute recognition
via telehealth. To address this, we propose a Sign-Oriented multi-label
Attributes Detection framework. Our approach begins with an adaptive tongue
feature extraction module that standardizes tongue images and mitigates
environmental factors. This is followed by a Sign-oriented Network (SignNet)
that identifies specific tongue attributes, emulating the diagnostic process of
experienced practitioners and enabling comprehensive health evaluations. To
validate our methodology, we developed an extensive tongue image dataset
specifically designed for telemedicine. Unlike existing datasets, ours is
tailored for remote diagnosis, with a comprehensive set of attribute labels.
This dataset will be openly available, providing a valuable resource for
research. Initial tests have shown improved accuracy in detecting various
tongue attributes, highlighting our framework's potential as an essential tool
for remote medical assessments.
|
2501.03054 | A Passive Mechanical Add-on for Treadmill Exercise (P-MATE) in Stroke
Rehabilitation | cs.RO physics.med-ph | Robotic rehabilitation can deliver high-dose gait therapy and improve motor
function after a stroke. However, for many devices, high costs and lengthy
setup times limit clinical adoption. Thus, we designed, built, and evaluated
the Passive Mechanical Add-on for Treadmill Exercise (P-MATE), a low-cost
passive end-effector add-on for treadmills that couples the movement of the
paretic and non-paretic legs via a reciprocating system of elastic cables and
pulleys. Two human-device mechanical interfaces were designed to attach the
elastic cables to the user. The P-MATE and two interface prototypes were tested
with a physical therapist and eight unimpaired participants. Biomechanical
data, including kinematics and interaction forces, were collected alongside
standardized questionnaires to assess usability and user experience. Both
interfaces were quick and easy to attach, though user experience differed,
highlighting the need for personalization. We also identified areas for future
improvement, including pretension adjustments, tendon derailing prevention, and
understanding long-term impacts on user gait. Our preliminary findings
underline the potential of the P-MATE to provide effective, accessible, and
sustainable stroke gait rehabilitation.
|
2501.03055 | To Analyze and Regulate Human-in-the-loop Learning for Congestion Games | cs.GT cs.AI | In congestion games, selfish users behave myopically to crowd to the shortest
paths, and the social planner designs mechanisms to regulate such selfish
routing through information or payment incentives. However, such mechanism
design requires knowledge of time-varying traffic conditions, and it is the
users themselves who learn and report past road experiences to the social
planner (e.g., Waze or Google Maps). When congestion games meet mobile
crowdsourcing, it is critical to incentivize selfish users to explore
non-shortest paths in the best exploitation-exploration trade-off. First, we
consider a simple but fundamental parallel routing network with one
deterministic path and multiple stochastic paths for users with an average
arrival probability $\lambda$. We prove that the current myopic routing policy
(widely used in Waze and Google Maps) misses both exploration (when the hazard
belief is strong) and exploitation (when the hazard belief is weak) compared to the
social optimum. Due to the myopic policy's under-exploration, we prove that the
caused price of anarchy (PoA) is larger than
\(\frac{1}{1-\rho^{\frac{1}{\lambda}}}\), which can be arbitrarily large as
discount factor \(\rho\rightarrow1\). To mitigate such huge efficiency loss, we
propose a novel selective information disclosure (SID) mechanism: we only
reveal the latest traffic information to users when they intend to over-explore
stochastic paths upon arrival, while hiding such information when they want to
under-explore. We prove that our mechanism successfully reduces PoA to be less
than~\(2\). Besides the parallel routing network, we further extend our
mechanism and PoA results to any linear path graphs with multiple intermediate
nodes.
|
2501.03058 | Survival Analysis Revisited: Understanding and Unifying Poisson,
Exponential, and Cox Models in Fall Risk Analysis | cs.LG cs.AI | This paper explores foundational and applied aspects of survival analysis,
using fall risk assessment as a case study. It revisits key time-related
probability distributions and statistical methods, including logistic
regression, Poisson regression, Exponential regression, and the Cox
Proportional Hazards model, offering a unified perspective on their
relationships within the survival analysis framework. A contribution of this
work is the step-by-step derivation and clarification of the relationships
among these models, particularly demonstrating that Poisson regression in the
survival context is a specific case of the Cox model. These insights address
gaps in understanding and reinforce the simplicity and interpretability of
survival models. The paper also emphasizes the practical utility of survival
analysis by connecting theoretical insights with real-world applications. In
the context of fall detection, it demonstrates how these models can
simultaneously predict fall risk, analyze contributing factors, and estimate
time-to-event outcomes within a single streamlined framework. In contrast,
advanced deep learning methods often require complex post-hoc interpretation
and separate training for different tasks, particularly when working with
structured numerical data. This highlights the enduring relevance of classical
statistical frameworks and makes survival models especially valuable in
healthcare settings, where explainability and robustness are critical. By
unifying foundational concepts and offering a cohesive perspective on
time-to-event analysis, this work serves as an accessible resource for
understanding survival models and applying them effectively to diverse
analytical challenges.
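One concrete instance of the Poisson-Cox connection above: under a constant hazard (the exponential model), the maximum-likelihood rate is the number of events divided by total follow-up time, which is exactly the Poisson rate estimate with follow-up time as the exposure offset. A minimal sketch:

```python
import numpy as np

def exponential_rate_mle(times, events):
    """Constant-hazard (exponential) MLE: lambda-hat = total events / total follow-up.

    This coincides with the Poisson-regression rate estimate when total
    follow-up time enters as the exposure (offset) term.
    """
    times = np.asarray(times, dtype=float)    # follow-up durations
    events = np.asarray(events, dtype=float)  # 1 = event (fall) observed, 0 = censored
    return events.sum() / times.sum()

rate = exponential_rate_mle([2.0, 3.0, 5.0], [1, 0, 1])  # 2 events over 10 time units
```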
|
2501.03059 | Through-The-Mask: Mask-based Motion Trajectories for Image-to-Video
Generation | cs.CV cs.AI cs.LG | We consider the task of Image-to-Video (I2V) generation, which involves
transforming static images into realistic video sequences based on a textual
description. While recent advancements produce photorealistic outputs, they
frequently struggle to create videos with accurate and consistent object
motion, especially in multi-object scenarios. To address these limitations, we
propose a two-stage compositional framework that decomposes I2V generation
into: (i) An explicit intermediate representation generation stage, followed by
(ii) A video generation stage that is conditioned on this representation. Our
key innovation is the introduction of a mask-based motion trajectory as an
intermediate representation, that captures both semantic object information and
motion, enabling an expressive but compact representation of motion and
semantics. To incorporate the learned representation in the second stage, we
utilize object-level attention objectives. Specifically, we consider a spatial,
per-object, masked-cross attention objective, integrating object-specific
prompts into corresponding latent space regions and a masked spatio-temporal
self-attention objective, ensuring frame-to-frame consistency for each object.
We evaluate our method on challenging benchmarks with multi-object and
high-motion scenarios and empirically demonstrate that the proposed method
achieves state-of-the-art results in temporal coherence, motion realism, and
text-prompt faithfulness. Additionally, we introduce \benchmark, a new
challenging benchmark for single-object and multi-object I2V generation, and
demonstrate our method's superiority on this benchmark. Project page is
available at https://guyyariv.github.io/TTM/.
|
2501.03064 | Trust Modeling in Counseling Conversations: A Benchmark Study | cs.CL | In mental health counseling, a variety of earlier studies have focused on
dialogue modeling. However, most of these studies place little to no emphasis
on the quality of interaction between a patient and a therapist. The
therapeutic bond between a patient and a therapist directly correlates with
effective mental health counseling. It involves developing the patient's trust
in the therapist over the course of counseling. To assess the therapeutic bond
in counseling, we introduce trust as a therapist-assistive metric. Our
definition of trust involves patients' willingness and openness to express
themselves and, consequently, receive better care. We conceptualize it as a
dynamic trajectory observable through textual interactions during
counseling. To facilitate trust modeling, we present MENTAL-TRUST, a novel
counseling dataset comprising manual annotation of 212 counseling sessions with
first-of-its-kind seven expert-verified ordinal trust levels. We formulate our
problem statement as an ordinal classification task for trust quantification
and propose a new benchmark, TrustBench, comprising a suite of classical and
state-of-the-art language models on MENTAL-TRUST. We evaluate the performance
across a suite of metrics and lay out an exhaustive set of findings. Our study
aims to unfold how trust evolves in therapeutic interactions.
|
2501.03068 | SGLDBench: A Benchmark Suite for Stress-Guided Lightweight 3D Designs | cs.CE | We introduce the Stress-Guided Lightweight Design Benchmark (SGLDBench), a
comprehensive benchmark suite to apply and evaluate material layout strategies
for generating stiff lightweight designs in 3D domains. SGLDBench provides a
seamlessly integrated simulation and analysis framework with six
reference strategies, accompanied by a scalable multigrid elasticity solver to
efficiently execute these strategies and validate the stiffness of their
results. This facilitates systematic analysis and comparison of design
strategies regarding the mechanical properties they achieve. SGLDBench enables
the evaluation of diverse settings of load conditions and, through the tight
integration of the solver, enables support for high-resolution designs and
stiffness analysis. Moreover, SGLDBench emphasizes visual analysis to explore
relations between the geometric structure of a design and the distribution of
stresses, providing insights into the specific properties and behaviors of
different design strategies. SGLDBench's specific features are highlighted in
several experiments that compare the results of the reference strategies with
respect to geometric and mechanical properties.
|
2501.03069 | Benefit evaluation of V2X-enhanced braking in view obstructed crossing
use cases | eess.SY cs.SY | If a crash between two vehicles is imminent, an Automatic Emergency Brake
(AEB) is activated to avoid or mitigate the accident. However, the trigger
mechanism of the AEB relies on the vehicle's onboard sensors, such as radar and
cameras, that require a line of sight to detect the crash opponent. If the line
of sight is impaired, for example by bad weather or an obstruction, the AEB
cannot be activated in time to avoid the crash. To deal with these cases, a
2-stage braking system is proposed, where the first stage consists of a partial
brake that is triggered by Vehicle-to-everything (V2X) communication. The
second stage is composed of the standard AEB that is triggered exclusively by
an onboard sensor detection. The performance of this V2X-enhanced 2-stage
braking system is analysed in obstructed crossing use cases and the results are
compared against the use of an AEB-only system. The benefit is quantitatively
assessed by determining the crash avoidance rate and, if the crash cannot
be avoided, by estimating the crash severity mitigation.
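The two-stage trigger described above reduces to a small decision rule; a minimal sketch with hypothetical names (the actual trigger conditions involve sensor models not given in this abstract):

```python
def braking_stage(v2x_crash_warning: bool, onboard_detection: bool) -> str:
    """Two-stage logic: a V2X warning triggers the partial brake (stage 1);
    an onboard line-of-sight detection triggers the standard full AEB (stage 2)."""
    if onboard_detection:       # stage 2: crash opponent confirmed by radar/camera
        return "full_aeb"
    if v2x_crash_warning:       # stage 1: warning received before line of sight exists
        return "partial_brake"
    return "no_brake"
```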
|
2501.03070 | Slim multi-scale convolutional autoencoder-based reduced-order models
for interpretable features of a complex dynamical system | physics.flu-dyn cs.LG | In recent years, data-driven deep learning models have gained significant
interest in the analysis of turbulent dynamical systems. Within the context of
reduced-order models (ROMs), convolutional autoencoders (CAEs) offer a
universally applicable alternative to conventional approaches. They can learn
nonlinear transformations directly from data, without prior knowledge of the
system. However, the features generated by such models lack interpretability.
Thus, the resulting model is a black-box which effectively reduces the
complexity of the system, but does not provide insights into the meaning of the
latent features. To address this critical issue, we introduce a novel
interpretable CAE approach for high-dimensional fluid flow data that maintains
the reconstruction quality of conventional CAEs and allows for feature
interpretation. Our method can be easily integrated into any existing CAE
architecture with minor modifications of the training process. We compare our
approach to Proper Orthogonal Decomposition (POD) and two existing methods for
interpretable CAEs. We apply all methods to three different experimental
turbulent Rayleigh-B\'enard convection datasets with varying complexity. Our
results show that the proposed method is lightweight, easy to train, and
achieves relative reconstruction performance improvements of up to 6.4% over
POD for 64 modes. The relative improvement increases to up to 229.8% as the
number of modes decreases. Additionally, our method delivers interpretable
features similar to those of POD and is significantly less resource-intensive
than existing CAE approaches, using less than 2% of the parameters. These
approaches either trade interpretability for reconstruction performance or only
provide interpretability to a limited extent.
|
2501.03072 | OpenTable data with multi-criteria ratings | cs.IR | With the development of recommender systems (RSs), several promising systems
have emerged, such as context-aware RS, multi-criteria RS, and group RS.
Multi-criteria recommender systems (MCRSs) are designed to provide personalized
recommendations by considering user preferences in multiple attributes or
criteria simultaneously. Unlike traditional RSs that typically focus on a
single rating, these systems help users make more informed decisions by
considering their diverse preferences and needs across various dimensions. In
this article, we release the OpenTable data set, which was crawled from
OpenTable.com. The data set can be considered as a benchmark data set for
multi-criteria recommendations.
|
2501.03074 | AIF-SFDA: Autonomous Information Filter-driven Source-Free Domain
Adaptation for Medical Image Segmentation | cs.CV | Decoupling domain-variant information (DVI) from domain-invariant information
(DII) serves as a prominent strategy for mitigating domain shifts in the
practical implementation of deep learning algorithms. However, in medical
settings, concerns surrounding data collection and privacy often restrict
access to both training and test data, hindering the empirical decoupling of
information by existing methods. To tackle this issue, we propose an Autonomous
Information Filter-driven Source-free Domain Adaptation (AIF-SFDA) algorithm,
which leverages a frequency-based learnable information filter to autonomously
decouple DVI and DII. Information Bottleneck (IB) and Self-supervision (SS) are
incorporated to optimize the learnable frequency filter. The IB governs the
information flow within the filter to diminish redundant DVI, while SS
preserves DII in alignment with the specific task and image modality. Thus, the
autonomous information filter can overcome domain shifts relying solely on
target data. A series of experiments covering various medical image modalities
and segmentation tasks were conducted to demonstrate the benefits of AIF-SFDA
through comparisons with leading algorithms and ablation studies. The code is
available at https://github.com/JingHuaMan/AIF-SFDA.
|
2501.03078 | Qinco2: Vector Compression and Search with Improved Implicit Neural
Codebooks | cs.LG | Vector quantization is a fundamental technique for compression and
large-scale nearest neighbor search. For high-accuracy operating points,
multi-codebook quantization associates data vectors with one element from each
of multiple codebooks. An example is residual quantization (RQ), which
iteratively quantizes the residual error of previous steps. Dependencies
between the different parts of the code are, however, ignored in RQ, which
leads to suboptimal rate-distortion performance. QINCo recently addressed this
inefficiency by using a neural network to determine the quantization codebook
in RQ based on the vector reconstruction from previous steps. In this paper we
introduce QINCo2 which extends and improves QINCo with (i) improved vector
encoding using codeword pre-selection and beam-search, (ii) a fast approximate
decoder leveraging codeword pairs to establish accurate short-lists for search,
and (iii) an optimized training procedure and network architecture. We conduct
experiments on four datasets to evaluate QINCo2 for vector compression and
billion-scale nearest neighbor search. We obtain outstanding results in both
settings, improving the state-of-the-art reconstruction MSE by 34% for 16-byte
vector compression on BigANN, and search accuracy by 24% with 8-byte encodings
on Deep1M.
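For reference, the plain residual quantization (RQ) baseline that QINCo and QINCo2 build on can be sketched in a few lines (fixed codebooks and greedy encoding only; the neural, reconstruction-conditioned codebooks of QINCo are omitted):

```python
import numpy as np

def rq_encode(x, codebooks):
    """Greedy residual quantization: pick the nearest codeword at each step,
    then quantize the remaining residual with the next codebook."""
    residual = np.asarray(x, dtype=float).copy()
    codes = []
    for C in codebooks:                                   # each C has shape (K, d)
        idx = int(np.argmin(((C - residual) ** 2).sum(axis=1)))
        codes.append(idx)
        residual -= C[idx]
    return codes

def rq_decode(codes, codebooks):
    """The reconstruction is the sum of the selected codewords."""
    return sum(C[i] for C, i in zip(codebooks, codes))
```

QINCo replaces the fixed codebooks with networks conditioned on the partial reconstruction, which is what captures the inter-step dependencies RQ ignores.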
|
2501.03079 | Wheel-GINS: A GNSS/INS Integrated Navigation System with a Wheel-mounted
IMU | cs.RO | A long-term accurate and robust localization system is essential for mobile
robots to operate efficiently outdoors. Recent studies have shown the
significant advantages of the wheel-mounted inertial measurement unit
(Wheel-IMU)-based dead reckoning system. However, it still drifts over extended
periods because of the absence of external correction signals. To achieve the
goal of long-term accurate localization, we propose Wheel-GINS, a Global
Navigation Satellite System (GNSS)/inertial navigation system (INS) integrated
navigation system using a Wheel-IMU. Wheel-GINS fuses the GNSS position
measurement with the Wheel-IMU via an extended Kalman filter to limit the
long-term error drift and provide continuous state estimation when the GNSS
signal is blocked. Considering the specificities of the GNSS/Wheel-IMU
integration, we conduct detailed modeling and online estimation of the
Wheel-IMU installation parameters, including the Wheel-IMU lever arm and
mounting angle and the wheel radius error. Experimental results have shown that
Wheel-GINS outperforms the traditional GNSS/Odometer/INS integrated navigation
system during GNSS outages. At the same time, Wheel-GINS can effectively
estimate the Wheel-IMU installation parameters online and, consequently,
improve the localization accuracy and practicality of the system. The source
code of our implementation is publicly available
(https://github.com/i2Nav-WHU/Wheel-GINS).
|
2501.03080 | Joint Secrecy Rate Achieving and Authentication Enhancement via
Tag-based Encoding in Chaotic UAV Communication Environment | eess.SP cs.SY eess.SY | Secure communication is crucial in many emerging systems enabled by unmanned
aerial vehicle (UAV) communication networks. To protect legitimate
communication in a chaotic UAV environment, where both eavesdropping and
jamming become straightforward from multiple adversaries with line-of-sight
signal propagation, a new reliable and integrated physical layer security
mechanism is proposed in this paper for a massive
multiple-input-multiple-output (MIMO) UAV system. Particularly, a physical
layer fingerprint, also called a tag, is first embedded into each message for
authentication purpose. We then propose to reuse the tag additionally as a
reference to encode each message to ensure secrecy for confidentiality
enhancement at a low cost. Specifically, we create a new dual-reference
symmetric tag generation mechanism by inputting an encoding-insensitive feature
of plaintext along with the key into a hash function. At a legitimate receiver,
an expected tag, reliable for decoding, can be symmetrically regenerated based
on the received ciphertext, and authentication can be performed by comparing
the regenerated reference tag to the received tag. However, an illegitimate
receiver can only receive the fuzzy tag which can not be used to decode the
received message. Additionally, we introduce artificial noise (AN) to degrade
eavesdropping and further decrease message leakage. To verify the efficiency of
our proposed tag-based encoding (TBE) scheme, we formulate two optimization
problems including ergodic sum secrecy rate maximization and authentication
fail probability minimization. The power allocation solutions are derived by
difference-of-convex (DC) programming and the Lagrange method, respectively.
The simulation results demonstrate the superior performance of the proposed TBE
approach compared to the prior AN-aided tag embedding scheme.
|
2501.03085 | Personalized Fashion Recommendation with Image Attributes and Aesthetics
Assessment | cs.IR cs.AI | Personalized fashion recommendation is a difficult task because 1) the
decisions are highly correlated with users' aesthetic appetite, which previous
work frequently overlooks, and 2) many new items roll out constantly,
causing severe cold-start problems in the popular identity (ID)-based
recommendation methods. Recommending these new items is critical because of
trend-driven consumerism. In this work, we aim to provide more accurate
personalized fashion recommendations and solve the cold-start problem by
converting available information, especially images, into two attribute graphs
focusing on optimized image utilization and noise-reducing user modeling.
Compared with previous methods that separate image and text as two components,
the proposed method combines image and text information to create a richer
attribute graph. Capitalizing on the advancement of large language and vision
models, we experiment with extracting fine-grained attributes efficiently and
as desired using two different prompts. Preliminary experiments on the IQON3000
dataset have shown that the proposed method achieves competitive accuracy
compared with baselines.
|
2501.03088 | Sentiment-guided Commonsense-aware Response Generation for Mental Health
Counseling | cs.CL | The crisis of mental health issues is escalating. Effective counseling serves
as a critical lifeline for individuals suffering from conditions like PTSD,
stress, etc. Therapists forge a crucial therapeutic bond with clients, steering
them towards positivity. Unfortunately, the massive shortage of professionals,
high costs, and mental health stigma pose significant barriers to consulting
therapists. As a substitute, Virtual Mental Health Assistants (VMHAs) have
emerged in the digital healthcare space. However, most existing VMHAs lack the
commonsense to understand the nuanced sentiments of clients to generate
effective responses. To this end, we propose EmpRes, a novel sentiment-guided
mechanism incorporating commonsense awareness for generating responses. By
leveraging foundation models and harnessing commonsense knowledge, EmpRes aims
to generate responses that effectively shape the client's sentiment towards
positivity. We evaluate the performance of EmpRes on HOPE, a benchmark
counseling dataset, and observe a remarkable performance improvement compared
to the existing baselines across a suite of qualitative and quantitative
metrics. Moreover, our extensive empirical analysis and human evaluation show
that the generation ability of EmpRes is well-suited and, in some cases,
surpasses the gold standard. Further, we deploy EmpRes as a chat interface for
users seeking mental health support. We address the deployed system's
effectiveness through an exhaustive user study with a significant positive
response. Our findings show that 91% of users find the system effective, 80%
express satisfaction, and over 85.45% convey a willingness to continue using
the interface and recommend it to others, demonstrating the practical
applicability of EmpRes in addressing the pressing challenges of mental health
support, emphasizing user feedback, and ethical considerations in a real-world
context.
|
2501.03095 | A Novel Structure-Agnostic Multi-Objective Approach for Weight-Sharing
Compression in Deep Neural Networks | cs.CV cs.NE | Deep neural networks store millions to billions of weights in
memory post-training, making these memory-intensive models challenging to
deploy on embedded devices. Weight sharing is a popular
compression approach that uses fewer weight values and shares them across
specific connections in the network. In this paper, we propose a multi-objective
evolutionary algorithm (MOEA) based compression framework independent of neural
network architecture, dimension, task, and dataset. We use uniformly sized bins
to quantize network weights into a single codebook (lookup table) for efficient
weight representation. Using MOEA, we search for Pareto optimal $k$ bins by
optimizing two objectives. Then, we apply the iterative merge technique to
non-dominated Pareto frontier solutions by combining neighboring bins without
degrading performance to decrease the number of bins and increase the
compression ratio. Our approach is model- and layer-independent, meaning the
weights are mixed in the clusters from any layer, and the uniform quantization
method used in this work has $O(N)$ complexity instead of non-uniform
quantization methods such as k-means with $O(Nkt)$ complexity. In addition, we
use the center of clusters as the shared weight values instead of retraining
shared weights, which is computationally expensive. The advantage of using
evolutionary multi-objective optimization is that it can obtain non-dominated
Pareto frontier solutions with respect to performance and shared weights. The
experimental results show that we can reduce the neural network memory by
$13.72\sim 14.98\times$ on CIFAR-10, $11.61\sim 12.99\times$ on CIFAR-100,
and $7.44\sim 8.58\times$ on ImageNet, showcasing the effectiveness of the
proposed deep neural network compression framework.
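The uniform-bin codebook step described above admits a minimal sketch (numpy; the bin count k would come from the MOEA search and the iterative bin merging is omitted):

```python
import numpy as np

def uniform_bin_codebook(weights, k):
    """Quantize weights into k uniformly sized bins; bin centers form the
    shared-weight codebook. Bin assignment is a single vectorized pass,
    unlike k-means' O(N k t) iterations."""
    w = np.asarray(weights, dtype=float).ravel()
    edges = np.linspace(w.min(), w.max(), k + 1)        # k equal-width bins
    idx = np.clip(np.digitize(w, edges[1:-1]), 0, k - 1)
    centers = (edges[:-1] + edges[1:]) / 2.0            # center of each bin
    return centers[idx].reshape(np.shape(weights)), idx, centers
```

Each weight is replaced by its bin center, so the quantization error is at most half a bin width, and the lookup table needs only k entries.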
|
2501.03102 | Enhancing Multirotor Drone Efficiency: Exploring Minimum Energy
Consumption Rate of Forward Flight under Varying Payload | cs.RO cs.SY eess.SY | The multirotor unmanned aerial vehicle is a prevalent type of aircraft with wide
real-world applications. Energy efficiency is a critical aspect of its
performance, determining the range and duration of the missions that can be
performed. In this study, we show both analytically and numerically that the
optimum of a key energy efficiency index in forward flight, namely energy per
meter traveled per unit mass, is a constant under different vehicle mass
(including payload). Note that this relationship is only true under the optimal
forward velocity that minimizes the energy consumption (under different mass),
but not under arbitrary velocity. The study is based on a previously developed
model capturing the first-principle energy dynamics of the multirotor, and a
key step is to prove that the pitch angle under optimal velocity is a constant.
By employing both analytical derivation and validation studies, the research
provides critical insights into the optimization of multirotor energy
efficiency, and facilitate the development of flight control strategies to
extend mission duration and range.
|
2501.03103 | MVP: Multimodal Emotion Recognition based on Video and Physiological
Signals | cs.CV | Human emotions entail a complex set of behavioral, physiological and
cognitive changes. Current state-of-the-art models fuse the behavioral and
physiological components using classic machine learning, rather than recent
deep learning techniques. We propose to fill this gap, designing the Multimodal
for Video and Physio (MVP) architecture, streamlined to fuse video and
physiological signals. Unlike other approaches, MVP exploits the
benefits of attention to enable the use of long input sequences (1-2 minutes).
We have studied video and physiological backbones for inputting long sequences
and evaluated our method with respect to the state-of-the-art. Our results show
that MVP outperforms former methods for emotion recognition based on facial
videos, EDA, and ECG/PPG.
|
2501.03112 | LangFair: A Python Package for Assessing Bias and Fairness in Large
Language Model Use Cases | cs.CL cs.AI cs.CY cs.LG | Large Language Models (LLMs) have been observed to exhibit bias in numerous
ways, potentially creating or worsening outcomes for specific groups identified
by protected attributes such as sex, race, sexual orientation, or age. To help
address this gap, we introduce LangFair, an open-source Python package that
aims to equip LLM practitioners with the tools to evaluate bias and fairness
risks relevant to their specific use cases. The package offers functionality to
easily generate evaluation datasets consisting of LLM responses to
use-case-specific prompts, and subsequently calculate applicable metrics for
the practitioner's use case. To guide in metric selection, LangFair offers an
actionable decision framework.
|
2501.03113 | Balancing Efficiency and Expressiveness: Subgraph GNNs with Walk-Based
Centrality | cs.LG cs.NE | We propose an expressive and efficient approach that combines the strengths
of two prominent extensions of Graph Neural Networks (GNNs): Subgraph GNNs and
Structural Encodings (SEs). Our approach leverages walk-based centrality
measures, both as a powerful form of SE and also as a subgraph selection
strategy for Subgraph GNNs. By drawing a connection to perturbation analysis,
we highlight the effectiveness of centrality-based sampling, and show it
significantly reduces the computational burden associated with Subgraph GNNs.
Further, we combine our efficient Subgraph GNN with SEs derived from the
calculated centrality and demonstrate this hybrid approach, dubbed HyMN, gains
in discriminative power. HyMN effectively addresses the expressiveness
limitations of Message Passing Neural Networks (MPNNs) while mitigating the
computational costs of Subgraph GNNs. Through a series of experiments on
synthetic and real-world tasks, we show it outperforms other subgraph sampling
approaches while being competitive with full-bag Subgraph GNNs and other
state-of-the-art approaches with a notably reduced runtime.
|
2501.03118 | TEE-based Key-Value Stores: a Survey | cs.CR cs.DB | Key-Value Stores (KVSs) are NoSQL databases that store data as key-value
pairs and have gained popularity due to their simplicity, scalability, and fast
retrieval capabilities. However, storing sensitive data in KVSs requires strong
security properties to prevent data leakage and unauthorized tampering. While
software (SW)-based encryption techniques are commonly used to maintain data
confidentiality and integrity, they suffer from several drawbacks. They
strongly assume trust in the hosting system stack and do not secure data during
processing unless using performance-heavy techniques (e.g., homomorphic
encryption). Alternatively, Trusted Execution Environments (TEEs) provide a
solution that enforces the confidentiality and integrity of code and data at
the CPU level, allowing users to build trusted applications in an untrusted
environment. They also secure data in use by providing an encapsulated
processing environment called enclave. Nevertheless, TEEs come with their own
set of drawbacks, including performance issues due to memory size limitations
and CPU context switching. This paper examines the state of the art in
TEE-based confidential KVSs and highlights common design strategies used in
KVSs to leverage TEE security features while overcoming their inherent
limitations. This work aims to provide a comprehensive understanding of the use
of TEEs in KVSs and to identify research directions for future work.
|
2501.03119 | From Models to Network Topologies: A Topology Inference Attack in
Decentralized Federated Learning | cs.LG cs.AI | Federated Learning (FL) is widely recognized as a privacy-preserving machine
learning paradigm due to its model-sharing mechanism that avoids direct data
exchange. However, model training inevitably leaves exploitable traces that can
be used to infer sensitive information. In Decentralized FL (DFL), the overlay
topology significantly influences its models' convergence, robustness, and
security. This study explores the feasibility of inferring the overlay topology
of DFL systems based solely on model behavior, introducing a novel Topology
Inference Attack. A taxonomy of topology inference attacks is proposed,
categorizing them by the attacker's capabilities and knowledge. Practical
attack strategies are developed for different scenarios, and quantitative
experiments are conducted to identify key factors influencing the attack
effectiveness. Experimental results demonstrate that analyzing only the public
models of individual nodes can accurately infer the DFL topology, underscoring
the risk of sensitive information leakage in DFL systems. This finding offers
valuable insights for improving privacy preservation in decentralized learning
environments.
|
2501.03120 | CAT: Content-Adaptive Image Tokenization | cs.CV | Most existing image tokenizers encode images into a fixed number of tokens or
patches, overlooking the inherent variability in image complexity. To address
this, we introduce Content-Adaptive Tokenizer (CAT), which dynamically adjusts
representation capacity based on the image content and encodes simpler images
into fewer tokens. We design a caption-based evaluation system that leverages
large language models (LLMs) to predict content complexity and determine the
optimal compression ratio for a given image, taking into account factors
critical to human perception. Trained on images with diverse compression
ratios, CAT demonstrates robust performance in image reconstruction. We also
utilize its variable-length latent representations to train Diffusion
Transformers (DiTs) for ImageNet generation. By optimizing token allocation,
CAT improves the FID score over fixed-ratio baselines trained with the same
FLOPs and boosts the inference throughput by 18.5%.
|
2501.03122 | Normalizing Batch Normalization for Long-Tailed Recognition | cs.CV | In real-world scenarios, the number of training samples across classes
usually follows a long-tailed distribution. The conventionally trained
network may achieve unexpectedly inferior performance on the rare class compared
to the frequent class. Most previous works attempt to rectify the network bias
from the data-level or from the classifier-level. Differently, in this paper,
we identify that the bias towards the frequent class may be encoded into
features, i.e., the rare-specific features which play a key role in
discriminating the rare class are much weaker than the frequent-specific
features. Based on such an observation, we introduce a simple yet effective
approach, normalizing the parameters of Batch Normalization (BN) layer to
explicitly rectify the feature bias. To this end, we represent the
Weight/Bias parameters of a BN layer as a vector, normalize it into a unit one
and multiply the unit vector by a scalar learnable parameter. Through
decoupling the direction and magnitude of parameters in BN layer to learn, the
Weight/Bias exhibits a more balanced distribution and thus the strength of
features becomes more even. Extensive experiments on various long-tailed
recognition benchmarks (i.e., CIFAR-10/100-LT, ImageNet-LT and iNaturalist
2018) show that our method remarkably outperforms previous state-of-the-art methods.
The code and checkpoints are available at https://github.com/yuxiangbao/NBN.
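The BN reparameterization described above admits a short sketch (numpy stand-in for the per-layer operation; in the paper the scalar is a learnable parameter trained jointly with the network):

```python
import numpy as np

def normalize_bn_vector(param, scale):
    """Decouple direction and magnitude: normalize a BN Weight (or Bias)
    parameter vector to unit length, then multiply by a single scalar."""
    v = np.asarray(param, dtype=float)
    unit = v / np.linalg.norm(v)   # direction only
    return scale * unit            # magnitude carried by one learnable scalar
```

Because every channel shares the one magnitude scalar, the per-channel Weight/Bias values become more balanced across frequent and rare classes.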
|
2501.03124 | PRMBench: A Fine-grained and Challenging Benchmark for Process-Level
Reward Models | cs.CL cs.AI cs.LG | Process-level Reward Models (PRMs) are crucial for complex reasoning and
decision-making tasks, where each intermediate step plays an important role in
the reasoning process. Since language models are prone to various types of
errors during the reasoning process, PRMs are required to possess nuanced
capabilities for detecting various implicit error types in real-world
scenarios. However, current benchmarks primarily focus on step correctness,
failing to evaluate PRMs' performance systematically. To address this gap, we
introduce PRMBench, a process-level benchmark specifically designed to assess
the fine-grained error detection capabilities of PRMs. PRMBench comprises 6,216
carefully designed problems and 83,456 step-level labels, evaluating models
across multiple dimensions, including simplicity, soundness, and sensitivity.
In our experiments on 15 models, spanning both open-source PRMs and
closed-source large language models prompted as critic models, we uncover
significant weaknesses in current PRMs. These findings underscore the
challenges inherent in process-level evaluation and highlight key directions
for future research. We hope PRMBench serves as a robust benchmark for advancing
research on PRM evaluation and development.
|
2501.03130 | Learning DAGs and Root Causes from Time-Series Data | cs.LG stat.ML | We introduce DAG-TFRC, a novel method for learning directed acyclic graphs
(DAGs) from time series with few root causes. By this, we mean that the data
are generated by a small number of events at certain, unknown nodes and time
points under a structural vector autoregression model. For such data, we (i)
learn the DAGs representing both the instantaneous and time-lagged dependencies
between nodes, and (ii) discover the location and time of the root causes. For
synthetic data with few root causes, DAG-TFRC shows superior performance in
accuracy and runtime over prior work, scaling up to thousands of nodes.
Experiments on simulated and real-world financial data demonstrate the
viability of our sparse root cause assumption. On S&P 500 data, DAG-TFRC
successfully clusters stocks by sectors and discovers major stock movements as
root causes.
|
2501.03132 | Communication Bounds for the Distributed Experts Problem | cs.LG | In this work, we study the experts problem in the distributed setting where
an expert's cost needs to be aggregated across multiple servers. Our study
considers various communication models such as the message-passing model and
the broadcast model, along with multiple aggregation functions, such as summing
and taking the $\ell_p$ norm of an expert's cost across servers. We propose the
first communication-efficient protocols that achieve near-optimal regret in
these settings, even against a strong adversary who can choose the inputs
adaptively. Additionally, we give a conditional lower bound showing that the
communication of our protocols is nearly optimal. Finally, we implement our
protocols and demonstrate empirical savings on the HPO-B benchmarks.
|
2501.03137 | Distributionally Robust Control Synthesis for Stochastic Systems with
Safety and Reach-Avoid Specifications | eess.SY cs.SY | We investigate the problem of synthesizing distributionally robust control
policies for stochastic systems under safety and reach-avoid specifications.
Using a game-theoretical framework, we consider the setting where the
probability distribution of the disturbance at each time step is selected from
an ambiguity set defined by the Wasserstein distance. The goal is to synthesize
a distributionally robust control policy that ensures the satisfaction
probability exceeds a specified threshold under any distribution within the
ambiguity set. First, for both safety and reach-avoid specifications, we
establish the existence of optimal policies by leveraging the dynamic
programming principles. Then we demonstrate how the associated optimization
problem can be efficiently solved using the dual representation of Wasserstein
distributionally robust optimization. Furthermore, for safety specifications in
particular, we introduce a novel concept of distributionally robust control
barrier certificates and show how these enable the efficient synthesis of
controllers through sum-of-squares programming techniques. Finally, our
experimental results reveal that incorporating distributional robustness during
the synthesis phase significantly improves the satisfaction probability during
online execution, even with limited statistical knowledge of the disturbance
distribution.
|
2501.03139 | VicSim: Enhancing Victim Simulation with Emotional and Linguistic
Fidelity | cs.CL cs.HC | Scenario-based training has been widely adopted in many public service
sectors. Recent advancements in Large Language Models (LLMs) have shown promise
in simulating diverse personas to create these training scenarios. However,
little is known about how LLMs can be developed to simulate victims for
scenario-based training purposes. In this paper, we introduce VicSim (victim
simulator), a novel model that addresses three key dimensions of user
simulation: informational faithfulness, emotional dynamics, and language style
(e.g., grammar usage). We pioneer the integration of scenario-based victim
modeling with a GAN-based training workflow and key-information-based prompting,
aiming to enhance the realism of simulated victims. Our adversarial training
approach teaches the discriminator to recognize grammar and emotional cues as
reliable indicators of synthetic content. According to evaluations by human
raters, the VicSim model outperforms GPT-4 in terms of human-likeness.
|
2501.03142 | Co-Activation Graph Analysis of Safety-Verified and Explainable Deep
Reinforcement Learning Policies | cs.AI cs.LG | Deep reinforcement learning (RL) policies can demonstrate unsafe behaviors
and are challenging to interpret. To address these challenges, we combine RL
policy model checking--a technique for determining whether RL policies exhibit
unsafe behaviors--with co-activation graph analysis--a method that maps neural
network inner workings by analyzing neuron activation patterns--to gain insight
into the safe RL policy's sequential decision-making. This combination lets us
interpret the RL policy's inner workings for safe decision-making. We
demonstrate its applicability in various experiments.
|
2501.03145 | Geometry Restoration and Dewarping of Camera-Captured Document Images | cs.CV cs.AI cs.LG | This research focuses on developing a method for restoring the topology of
digital images of paper documents captured by a camera, using algorithms for
detection, segmentation, geometry restoration, and dewarping. Our methodology
employs deep learning (DL) for document outline detection, followed by computer
vision (CV) to create a topological 2D grid using cubic polynomial
interpolation and correct nonlinear distortions by remapping the image. Using
classical CV methods makes the document topology restoration process more
efficient and faster, as it requires significantly fewer computational
resources and memory. We developed a new pipeline for automatic document
dewarping and reconstruction, along with a framework and annotated dataset to
demonstrate its efficiency. Our experiments confirm the promise of our
methodology and its superiority over existing benchmarks (including mobile apps
and popular DL solutions, such as RectiNet, DocGeoNet, and DocTr++) both
visually and in terms of document readability via Optical Character Recognition
(OCR) and geometry restoration metrics. This paves the way for creating
high-quality digital copies of paper documents and enhancing the efficiency of
OCR systems. Project page: https://github.com/HorizonParadox/DRCCBI
|
2501.03151 | Large language models for artificial general intelligence (AGI): A
survey of foundational principles and approaches | cs.AI cs.CV cs.LG | Generative artificial intelligence (AI) systems based on large-scale
pretrained foundation models (PFMs) such as vision-language models, large
language models (LLMs), diffusion models and vision-language-action (VLA)
models have demonstrated the ability to solve complex and truly non-trivial AI
problems in a wide variety of domains and contexts. Multimodal large language
models (MLLMs), in particular, learn from vast and diverse data sources,
allowing rich and nuanced representations of the world and, thereby, providing
extensive capabilities, including the ability to reason, engage in meaningful
dialog; collaborate with humans and other agents to jointly solve complex
problems; and understand social and emotional aspects of humans. Despite this
impressive feat, the cognitive abilities of state-of-the-art LLMs trained on
large-scale datasets are still superficial and brittle. Consequently, generic
LLMs are severely limited in their generalist capabilities. A number of
foundational problems -- embodiment, symbol grounding, causality and memory --
must be addressed for LLMs to attain human-level general
intelligence. These concepts are more aligned with human cognition and provide
LLMs with inherent human-like cognitive properties that support the realization
of physically-plausible, semantically meaningful, flexible and more
generalizable knowledge and intelligence. In this work, we discuss the
aforementioned foundational issues and survey state-of-the art approaches for
implementing these concepts in LLMs. Specifically, we discuss how the
principles of embodiment, symbol grounding, causality and memory can be
leveraged toward the attainment of artificial general intelligence (AGI) in an
organic manner.
|
2501.03152 | The Scaling Law for LoRA Base on Mutual Information Upper Bound | cs.LG cs.AI | LoRA (Low-Rank Adaptation) is a widely used model fine-tuning method. In
fine-tuning, the law among model performance, model parameters, and data
complexity has been a focal issue in the field. Existing methods often leverage
external metrics (such as cross-entropy or perplexity) to evaluate model
performance. In the fine-tuning process for large models, two types of
knowledge are typically involved: the frozen, general knowledge acquired by the
model during pre-training and the new knowledge learned through the LoRA module
from the current data. Generally, the less LoRA's learned knowledge relies on
the large model, the more it captures the specific knowledge of new data,
thereby enhancing its adaptability to new tasks. However, external metrics do
not readily capture the dependency relationship between these two types of
knowledge. Therefore, we designed an internal metric based on the Mutual
Information Upper Bound (MIUB) theory to investigate the scaling law of
large-model LoRA fine-tuning. In our experiments, we validated this approach on
benchmark datasets, using the Llama3-8B and Phi3-3B models. The results show
that the proposed MIUB metric aligns more accurately and stably with the
scaling law of LoRA fine-tuning compared to cross-entropy and perplexity.
|
2501.03153 | Segment Anything Model for Zero-shot Single Particle Tracking in Liquid
Phase Transmission Electron Microscopy | cs.CV physics.data-an | Liquid phase transmission electron microscopy (LPTEM) offers an unparalleled
combination of spatial and temporal resolution, making it a promising tool for
single particle tracking at the nanoscale. However, the absence of a
standardized framework for identifying and tracking nanoparticles in noisy
LPTEM videos has impeded progress in developing this technique into a
single particle tracking tool. To address this, we leveraged Segment Anything
Model 2 (SAM 2), released by Meta, which is a foundation model developed for
segmenting videos and images. Here, we demonstrate that SAM 2 can successfully
segment LPTEM videos in a zero-shot manner and without requiring fine-tuning.
Building on this capability, we introduce SAM4EM, a comprehensive framework
that integrates promptable video segmentation with particle tracking and
statistical analysis, providing an end-to-end LPTEM analysis framework for
single particle tracking. SAM4EM achieves nearly 50-fold higher accuracy in
segmenting and analyzing LPTEM videos compared to state-of-the-art methods,
paving the way for broader applications of LPTEM in nanoscale imaging.
|
2501.03156 | Application of $J$-Integral to a Random Elastic Medium | cond-mat.soft cs.CE | This study investigates the use of the $J$-integral to compute the statistics
of the energy release rate of a random elastic medium. The spatial variability
of the elastic modulus is modeled as a homogeneous lognormal random field.
Within the framework of Monte Carlo simulation, a modified contour integral is
applied to evaluate the first and second statistical moments of the energy
release rate. These results are compared with the energy release rate
calculated from the potential energy function. The comparison shows that, if
the random field of elastic modulus is homogeneous in space, the path
independence of the classical $J$-integral remains valid for calculating the
mean energy release rate. However, this path independence does not extend to
the higher order statistical moments. The simulation further reveals the effect
of the correlation length of the spatially varying elastic modulus on the
energy release rate of the specimen.
|
2501.03160 | Statistical Reconstruction For Anisotropic X-ray Dark-Field Tomography | cs.CE | Anisotropic X-ray Dark-Field Tomography (AXDT) is a novel imaging technology
that enables the extraction of fiber structures on the micrometer scale, far
smaller than standard X-ray Computed Tomography (CT) setups. Directional and
structural information is relevant in medical diagnostics and material testing.
Compared to existing solutions, AXDT could prove a viable alternative.
Reconstruction methods in AXDT have so far been driven by practicality.
Improved methods could make AXDT more accessible. We contribute numerically
stable implementations and validation of advanced statistical reconstruction
methods that incorporate the statistical noise behavior of the imaging system.
We further provide a new statistical reconstruction formulation that retains
the advanced noise assumptions of the imaging setup while being efficient and
easy to optimize. Finally, we provide a detailed analysis of the optimization
behavior for all models regarding AXDT. Our experiments show that statistical
reconstruction outperforms the previously used model, with particularly
superior noise performance. While the previously proposed statistical method
is effective, it is computationally expensive, and our newly proposed
formulation proves highly efficient with identical performance. Our theoretical
analysis opens the possibility to new and more advanced reconstruction
algorithms, which in turn enable future research in AXDT.
|
2501.03162 | Deep-Relative-Trust-Based Diffusion for Decentralized Deep Learning | cs.LG eess.SP | Decentralized learning strategies allow a collection of agents to learn
efficiently from local data sets without the need for central aggregation or
orchestration. Current decentralized learning paradigms typically rely on an
averaging mechanism to encourage agreement in the parameter space. We argue
that in the context of deep neural networks, which are often
over-parameterized, encouraging consensus of the neural network outputs, as
opposed to their parameters, can be more appropriate. This motivates the
development of a new decentralized learning algorithm, termed DRT diffusion,
based on deep relative trust (DRT), a recently introduced similarity measure
for neural networks. We provide convergence analysis for the proposed strategy,
and numerically establish its benefit to generalization, especially with sparse
topologies, in an image classification task.
|
2501.03166 | Semantic Captioning: Benchmark Dataset and Graph-Aware Few-Shot
In-Context Learning for SQL2Text | cs.CL cs.LG | Large Language Models (LLMs) have demonstrated remarkable performance in
various NLP tasks, including semantic parsing, which translates natural
language into formal code representations. However, the reverse process,
translating code into natural language, termed semantic captioning, has
received less attention. This task is becoming increasingly important as LLMs
are integrated into platforms for code generation, security analysis, and
educational purposes. In this paper, we focus on the captioning of SQL query
(SQL2Text) to address the critical need for understanding and explaining SQL
queries in an era where LLM-generated code poses potential security risks. We
repurpose Text2SQL datasets for SQL2Text by introducing an iterative ICL prompt
using GPT-4o to generate multiple additional utterances, which enhances the
robustness of the datasets for the reverse task. We conduct our experiments
using in-context learning (ICL) based on different sample selection methods,
emphasizing smaller, more computationally efficient LLMs. Our findings
demonstrate that leveraging the inherent graph properties of SQL for ICL sample
selection significantly outperforms random selection by up to 39% on BLEU score
and provides better results than alternative methods. Dataset and codes are
published: https://github.com/aliwister/ast-icl.
|
2501.03172 | GLiREL -- Generalist Model for Zero-Shot Relation Extraction | cs.CL cs.AI cs.LG | We introduce GLiREL (Generalist Lightweight model for zero-shot Relation
Extraction), an efficient architecture and training paradigm for zero-shot
relation classification. Inspired by recent advancements in zero-shot named
entity recognition, this work presents an approach to efficiently and
accurately predict zero-shot relationship labels between multiple entities in a
single forward pass. Experiments using the FewRel and WikiZSL benchmarks
demonstrate that our approach achieves state-of-the-art results on the
zero-shot relation classification task. In addition, we contribute a protocol
for synthetically generating datasets with diverse relation labels.
|
2501.03173 | MObI: Multimodal Object Inpainting Using Diffusion Models | cs.CV | Safety-critical applications, such as autonomous driving, require extensive
multimodal data for rigorous testing. Methods based on synthetic data are
gaining prominence due to the cost and complexity of gathering real-world data
but require a high degree of realism and controllability in order to be useful.
This paper introduces MObI, a novel framework for Multimodal Object Inpainting
that leverages a diffusion model to create realistic and controllable object
inpaintings across perceptual modalities, demonstrated for both camera and
lidar simultaneously. Using a single reference RGB image, MObI enables objects
to be seamlessly inserted into existing multimodal scenes at a 3D location
specified by a bounding box, while maintaining semantic consistency and
multimodal coherence. Unlike traditional inpainting methods that rely solely on
edit masks, our 3D bounding box conditioning gives objects accurate spatial
positioning and realistic scaling. As a result, our approach can be used to
insert novel objects flexibly into multimodal scenes, providing significant
advantages for testing perception models.
|
2501.03176 | Scalable Forward-Forward Algorithm | cs.LG cs.NE | We propose a scalable Forward-Forward (FF) algorithm that eliminates the need
for backpropagation by training each layer separately. Unlike backpropagation,
FF avoids backward gradients and can be more modular and memory efficient,
making it appealing for large networks. We extend FF to modern convolutional
architectures, such as MobileNetV3 and ResNet18, by introducing a new way to
compute losses for convolutional layers. Experiments show that our method
achieves performance comparable to standard backpropagation. Furthermore, when
we divide the network into blocks, such as the residual blocks in ResNet, and
apply backpropagation only within each block, but not across blocks, our hybrid
design tends to outperform backpropagation baselines while maintaining a
similar training speed. Finally, we present experiments on small datasets and
transfer learning that confirm the adaptability of our method.
|
2501.03181 | FaceSpeak: Expressive and High-Quality Speech Synthesis from Human
Portraits of Different Styles | cs.SD cs.AI eess.AS | Humans can perceive speakers' characteristics (e.g., identity, gender,
personality and emotion) by their appearance, which are generally aligned
with their voice style. Recently, vision-driven text-to-speech (TTS) research
has grounded its investigations in real-person faces, thereby restricting
effective speech synthesis from being applied to vast potential usage scenarios
with diverse characters and image styles. To solve this issue, we introduce a novel
FaceSpeak approach. It extracts salient identity characteristics and emotional
representations from a wide variety of image styles. Meanwhile, it mitigates
the extraneous information (e.g., background, clothing, and hair color),
resulting in synthesized speech closely aligned with a character's persona.
Furthermore, to overcome the scarcity of multi-modal TTS data, we have devised
an innovative dataset, namely Expressive Multi-Modal TTS, which is diligently
curated and annotated to facilitate research in this domain. The experimental
results demonstrate our proposed FaceSpeak can generate portrait-aligned voice
with satisfactory naturalness and quality.
|
2501.03182 | Boosting Explainability through Selective Rationalization in Pre-trained
Language Models | cs.CL cs.AI | The widespread application of pre-trained language models (PLMs) in natural
language processing (NLP) has led to increasing concerns about their
explainability. Selective rationalization is a self-explanatory framework that
selects human-intelligible input subsets as rationales for predictions. Recent
studies have shown that applying existing rationalization frameworks to PLMs
will result in severe degeneration and failure problems, producing sub-optimal
or meaningless rationales. Such failures severely damage trust in
rationalization methods and constrain the application of rationalization
techniques on PLMs. In this paper, we find that the homogeneity of tokens in
the sentences produced by PLMs is the primary contributor to these problems. To
address these challenges, we propose a method named Pre-trained Language
Model's Rationalization (PLMR), which splits PLMs into a generator and a
predictor to deal with NLP tasks while providing interpretable rationales. The
generator in PLMR also alleviates homogeneity by pruning irrelevant tokens,
while the predictor uses full-text information to standardize predictions.
Experiments conducted on two widely used datasets across multiple PLMs
demonstrate the effectiveness of the proposed method PLMR in addressing the
challenge of applying selective rationalization to PLMs. Codes:
https://github.com/ylb777/PLMR.
|
2501.03183 | Classifier-Guided Captioning Across Modalities | cs.CL cs.AI cs.SD eess.AS | Most current captioning systems use language models trained on data from
specific settings, such as image-based captioning via Amazon Mechanical Turk,
limiting their ability to generalize to other modality distributions and
contexts. This limitation hinders performance in tasks like audio or video
captioning, where different semantic cues are needed. Addressing this challenge
is crucial for creating more adaptable and versatile captioning frameworks
applicable across diverse real-world contexts. In this work, we introduce a
method to adapt captioning networks to the semantics of alternative settings,
such as capturing audibility in audio captioning, where it is crucial to
describe sounds and their sources. Our framework consists of two main
components: (i) a frozen captioning system incorporating a language model (LM),
and (ii) a text classifier that guides the captioning system. The classifier is
trained on a dataset automatically generated by GPT-4, using tailored prompts
specifically designed to enhance key aspects of the generated captions.
Importantly, the framework operates solely during inference, eliminating the
need for further training of the underlying captioning model. We evaluate the
framework on various models and modalities, with a focus on audio captioning,
and report promising results. Notably, when combined with an existing zero-shot
audio captioning system, our framework improves its quality and sets
state-of-the-art performance in zero-shot audio captioning.
|
2501.03184 | Noise-Robust Target-Speaker Voice Activity Detection Through
Self-Supervised Pretraining | eess.AS cs.LG cs.SD | Target-Speaker Voice Activity Detection (TS-VAD) is the task of detecting the
presence of speech from a known target-speaker in an audio frame. Recently,
deep neural network-based models have shown good performance in this task.
However, training these models requires extensive labelled data, which is
costly and time-consuming to obtain, particularly if generalization to unseen
environments is crucial. To mitigate this, we propose a causal, Self-Supervised
Learning (SSL) pretraining framework, called Denoising Autoregressive
Predictive Coding (DN-APC), to enhance TS-VAD performance in noisy conditions.
We also explore various speaker conditioning methods and evaluate their
performance under different noisy conditions. Our experiments show that DN-APC
improves performance in noisy conditions, with a general improvement of approx.
2% in both seen and unseen noise. Additionally, we find that FiLM conditioning
provides the best overall performance. Representation analysis via t-SNE plots
reveals robust initial representations of speech and non-speech from
pretraining. This underscores the effectiveness of SSL pretraining in improving
the robustness and performance of TS-VAD models in noisy environments.
|
2501.03187 | Turn-based Multi-Agent Reinforcement Learning Model Checking | cs.AI cs.LG cs.MA | In this paper, we propose a novel approach for verifying the compliance of
turn-based multi-agent reinforcement learning (TMARL) agents with complex
requirements in stochastic multiplayer games. Our method overcomes the
limitations of existing verification approaches, which are inadequate for
dealing with TMARL agents and not scalable to large games with multiple agents.
Our approach relies on tight integration of TMARL and a verification technique
referred to as model checking. We demonstrate the effectiveness and scalability
of our technique through experiments in different types of environments. Our
experiments show that our method is suited to verify TMARL agents and scales
better than naive monolithic model checking.
|
2501.03190 | Multimodal Machine Learning Can Predict Videoconference Fluidity and
Enjoyment | cs.LG cs.HC eess.AS eess.IV | Videoconferencing is now a frequent mode of communication in both
professional and informal settings, yet it often lacks the fluidity and
enjoyment of in-person conversation. This study leverages multimodal machine
learning to predict moments of negative experience in videoconferencing. We
sampled thousands of short clips from the RoomReader corpus, extracting audio
embeddings, facial actions, and body motion features to train models for
identifying low conversational fluidity, low enjoyment, and classifying
conversational events (backchanneling, interruption, or gap). Our best models
achieved an ROC-AUC of up to 0.87 on hold-out videoconference sessions, with
domain-general audio features proving most critical. This work demonstrates
that multimodal audio-video signals can effectively predict high-level
subjective conversational outcomes. In addition, this is a contribution to
research on videoconferencing user experience by showing that multimodal
machine learning can be used to identify rare moments of negative user
experience for further study or mitigation.
|
2501.03191 | CLIX: Cross-Lingual Explanations of Idiomatic Expressions | cs.CL | Automated definition generation systems have been proposed to support
vocabulary expansion for language learners. The main barrier to the success of
these systems is that learners often struggle to understand definitions due to
the presence of potentially unfamiliar words and grammar, particularly when
non-standard language is involved. To address these challenges, we propose
CLIX, the task of Cross-Lingual explanations of Idiomatic eXpressions. We
explore the capabilities of current NLP models for this task, and observe that
while it remains challenging, large language models show promise. Finally, we
perform a detailed error analysis to highlight the key challenges that need to
be addressed before we can reliably incorporate these systems into educational
tools.
|
2501.03200 | The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground
Responses to Long-Form Input | cs.CL | We introduce FACTS Grounding, an online leaderboard and associated benchmark
that evaluates language models' ability to generate text that is factually
accurate with respect to given context in the user prompt. In our benchmark,
each prompt includes a user request and a full document, with a maximum length
of 32k tokens, requiring long-form responses. The long-form responses are
required to be fully grounded in the provided context document while fulfilling
the user request. Models are evaluated using automated judge models in two
phases: (1) responses are disqualified if they do not fulfill the user request;
(2) they are judged as accurate if the response is fully grounded in the
provided document. The automated judge models were comprehensively evaluated
against a held-out test-set to pick the best prompt template, and the final
factuality score is an aggregate of multiple judge models to mitigate
evaluation bias. The FACTS Grounding leaderboard will be actively maintained
over time, and contains both public and private splits to allow for external
participation while guarding the integrity of the leaderboard. It can be found
at https://www.kaggle.com/facts-leaderboard.
|
2501.03203 | Detecting AI-Generated Text in Educational Content: Leveraging Machine
Learning and Explainable AI for Academic Integrity | cs.CL cs.AI cs.CY | This study seeks to enhance academic integrity by providing tools to detect
AI-generated content in student work using advanced technologies. The findings
promote transparency and accountability, helping educators maintain ethical
standards and supporting the responsible integration of AI in education. A key
contribution of this work is the generation of the CyberHumanAI dataset, which
has 1000 observations, 500 of which are written by humans and the other 500
produced by ChatGPT. We evaluate various machine learning (ML) and deep
learning (DL) algorithms on the CyberHumanAI dataset comparing human-written
and AI-generated content from Large Language Models (LLMs) (i.e., ChatGPT).
Results demonstrate that traditional ML algorithms, specifically XGBoost and
Random Forest, achieve high performance (83% and 81% accuracy, respectively).
Results also show that classifying shorter content seems to be more challenging
than classifying longer content. Further, using Explainable Artificial
Intelligence (XAI) we identify discriminative features influencing the ML
model's predictions, where human-written content tends to use a practical
language (e.g., use and allow). Meanwhile, AI-generated text is characterized
by more abstract and formal terms (e.g., realm and employ). Finally, a
comparative analysis with GPTZero shows that our narrowly focused, simple, and
fine-tuned
model can outperform generalized systems like GPTZero. The proposed model
achieved approximately 77.5% accuracy compared to GPTZero's 48.5% accuracy when
tasked with classifying Pure AI, Pure Human, and mixed classes. GPTZero showed
a tendency to classify challenging and small-content cases as either mixed or
unrecognized, while our proposed model showed a more balanced performance across
the three classes.
|
2501.03212 | Leveraging Explainable AI for LLM Text Attribution: Differentiating
Human-Written and Multiple LLMs-Generated Text | cs.CL cs.CY | The development of Generative AI Large Language Models (LLMs) has raised
concerns about distinguishing content produced by generative AI from content
written by humans.
In one case, issues arise when students heavily rely on such tools in a manner
that can affect the development of their writing or coding skills. Other issues
of plagiarism also apply. This study aims to support efforts to detect and
identify textual content generated using LLM tools. We hypothesize that
LLMs-generated text is detectable by machine learning (ML), and investigate ML
models that can recognize and differentiate texts generated by multiple LLMs
tools. We leverage several ML and Deep Learning (DL) algorithms such as Random
Forest (RF), and Recurrent Neural Networks (RNN), and utilized Explainable
Artificial Intelligence (XAI) to understand the important features in
attribution. Our method is divided into 1) binary classification, to
differentiate between human-written and AI-generated text, and 2) multi-class
classification, to differentiate between human-written text and text generated
by five different LLM tools (ChatGPT, LLaMA, Google Bard, Claude, and
Perplexity).
Results show high accuracy in the multi and binary classification. Our model
outperformed GPTZero with 98.5\% accuracy to 78.3\%. Notably, GPTZero was
unable to recognize about 4.2\% of the observations, but our model was able to
recognize the complete test dataset. XAI results showed that understanding
feature importance across different classes enables detailed author/source
profiles. Further, aiding in attribution and supporting plagiarism detection by
highlighting unique stylistic and structural elements ensuring robust content
originality verification.
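The binary human-vs-AI step described in the abstract can be illustrated with a toy pipeline. This is only a hedged sketch, not the paper's RF/RNN models: the stylometric features and the nearest-centroid rule below are illustrative stand-ins chosen to show the shape of feature extraction followed by classification.

```python
# Toy sketch of a human-vs-AI text classifier: simple stylometric features
# plus a nearest-centroid decision rule (the paper itself uses RF and RNN).
import string

def stylometric_features(text):
    """Average word length, type-token ratio, and punctuation rate."""
    words = text.split()
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    type_token_ratio = len(set(w.lower() for w in words)) / max(len(words), 1)
    punct_rate = sum(c in string.punctuation for c in text) / max(len(text), 1)
    return [avg_word_len, type_token_ratio, punct_rate]

def nearest_centroid(feats, centroids):
    """centroids: {label: feature vector}; returns the closest label."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(feats, centroids[lbl]))

# Hypothetical class centroids, learned offline from labeled corpora.
centroids = {"human": [4.2, 0.85, 0.02], "ai": [5.1, 0.60, 0.04]}
label = nearest_centroid(
    stylometric_features("Short punchy prose, with varied words!"), centroids)
```

In a real system the centroids would be replaced by a trained model, and XAI tools (e.g. feature-importance scores) would then report which stylometric features drove each prediction.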
|
2501.03218 | Dispider: Enabling Video LLMs with Active Real-Time Interaction via
Disentangled Perception, Decision, and Reaction | cs.CV | Active real-time interaction with video LLMs introduces a new paradigm for
human-computer interaction, where the model not only understands user intent
but also responds while continuously processing streaming video on the fly.
Unlike offline video LLMs, which analyze the entire video before answering
questions, active real-time interaction requires three capabilities: 1)
Perception: real-time video monitoring and interaction capturing; 2) Decision:
proactively raising interactions in appropriate situations; 3) Reaction:
continuous interaction with users. However, inherent conflicts exist among
these desired capabilities: Decision and Reaction require contrary Perception
scales and granularity, and autoregressive decoding blocks real-time Perception
and Decision during the Reaction. To unify the conflicting capabilities within a
harmonious system, we present Dispider, a system that disentangles Perception,
Decision, and Reaction. Dispider features a lightweight proactive streaming
video processing module that tracks the video stream and identifies optimal
moments for interaction. Once the interaction is triggered, an asynchronous
interaction module provides detailed responses, while the processing module
continues to monitor the video in the meantime. Our disentangled and
asynchronous design ensures timely, contextually accurate, and computationally
efficient responses, making Dispider ideal for active real-time interaction for
long-duration video streams. Experiments show that Dispider not only maintains
strong performance in conventional video QA tasks, but also significantly
surpasses previous online models in streaming scenario responses, thereby
validating the effectiveness of our architecture. The code and model are
released at \url{https://github.com/Mark12Ding/Dispider}.
|
2501.03220 | ProTracker: Probabilistic Integration for Robust and Accurate Point
Tracking | cs.CV | In this paper, we propose ProTracker, a novel framework for robust and
accurate long-term dense tracking of arbitrary points in videos. The key idea
of our method is incorporating probabilistic integration to refine multiple
predictions from both optical flow and semantic features for robust short-term
and long-term tracking. Specifically, we integrate optical flow estimations in
a probabilistic manner, producing smooth and accurate trajectories by
maximizing the likelihood of each prediction. To effectively re-localize
challenging points that disappear and reappear due to occlusion, we further
incorporate long-term feature correspondence into our flow predictions for
continuous trajectory generation. Extensive experiments show that ProTracker
achieves state-of-the-art performance among unsupervised and self-supervised
approaches, and even outperforms supervised methods on several benchmarks. Our
code and model will be made publicly available upon publication.
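The probabilistic integration the abstract describes, combining multiple position predictions by maximizing likelihood, can be sketched with Gaussian estimates. This is an assumed minimal illustration, not ProTracker's code: for independent Gaussian predictions, the maximum-likelihood fused position is the precision-weighted mean.

```python
# Sketch of maximum-likelihood fusion of independent Gaussian estimates of
# a tracked point's coordinate: the fused mean is the precision-weighted
# average, and it lies closest to the most confident prediction.
def fuse_gaussian_estimates(estimates):
    """estimates: list of (mean, variance) pairs for one coordinate."""
    total_precision = sum(1.0 / var for _, var in estimates)
    fused_mean = sum(mu / var for mu, var in estimates) / total_precision
    fused_var = 1.0 / total_precision
    return fused_mean, fused_var

# Hypothetical example: an uncertain optical-flow prediction and a more
# confident feature-correspondence prediction of the same x-coordinate.
flow_est = (10.0, 4.0)   # mean 10 px, variance 4
match_est = (12.0, 1.0)  # mean 12 px, variance 1
mean, var = fuse_gaussian_estimates([flow_est, match_est])
# mean = 11.6, var = 0.8: pulled toward the confident estimate,
# with lower uncertainty than either input.
```

The same precision-weighting idea extends to chaining predictions over time, which is what yields smooth long-term trajectories.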
|
2501.03221 | RW-Net: Enhancing Few-Shot Point Cloud Classification with a Wavelet
Transform Projection-based Network | cs.CV | In the domain of 3D object classification, a fundamental challenge lies in
addressing the scarcity of labeled data, which limits the applicability of
traditional data-intensive learning paradigms. This challenge is particularly
pronounced in few-shot learning scenarios, where the objective is to achieve
robust generalization from minimal annotated samples. To overcome these
limitations, it is crucial to identify and leverage the most salient and
discriminative features of 3D objects, thereby enhancing learning efficiency
and reducing dependency on large-scale labeled datasets. This work introduces
RW-Net, a novel framework designed to address the challenges above by
integrating Rate-Distortion Explanation (RDE) and wavelet transform into a
state-of-the-art projection-based 3D object classification architecture. The
proposed method capitalizes on RDE to extract critical features by identifying
and preserving the most informative data components while reducing redundancy.
This process ensures the retention of essential information for effective
decision-making, optimizing the model's ability to learn from limited data.
Complementing RDE, incorporating the wavelet transform further enhances the
framework's capability to generalize in low-data regimes. By emphasizing
low-frequency components of the input data, the wavelet transform captures
fundamental geometric and structural attributes of 3D objects. These attributes
are instrumental in mitigating overfitting and improving the robustness of the
learned representations across diverse tasks and domains. To validate the
effectiveness of RW-Net, we conduct extensive experiments on three few-shot 3D
object classification datasets: ModelNet40, ModelNet40-C, and ScanObjectNN.
The results demonstrate that our approach achieves
state-of-the-art performance and exhibits superior generalization and
robustness in few-shot learning scenarios.
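The abstract's emphasis on low-frequency wavelet components can be made concrete with the simplest case. This is a hedged toy example, not RW-Net's implementation: one level of a 1-D Haar transform, keeping only the approximation (low-pass) coefficients that capture coarse structure while suppressing fine detail.

```python
# One level of a 1-D Haar wavelet transform, returning only the
# low-frequency (approximation) coefficients: scaled pairwise averages
# that preserve coarse structure and discard fine detail.
def haar_lowpass(signal):
    """signal: even-length list of floats; returns approximation coeffs."""
    assert len(signal) % 2 == 0, "Haar step needs an even-length signal"
    scale = 2 ** 0.5
    return [(signal[i] + signal[i + 1]) / scale
            for i in range(0, len(signal), 2)]

coarse = haar_lowpass([1.0, 1.0, 4.0, 2.0])
# Two smooth averages survive; the 4.0 -> 2.0 jump's detail is dropped.
```

In the 3D setting, the analogous low-pass filtering is applied to projected views of the object, so the retained coefficients encode its coarse geometric attributes.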
|
2501.03222 | Characterizing the Accuracy-Communication-Privacy Trade-off in
Distributed Stochastic Convex Optimization | cs.LG cs.IT math.IT stat.ML | We consider the problem of differentially private stochastic convex
optimization (DP-SCO) in a distributed setting with $M$ clients, where each of
them has a local dataset of $N$ i.i.d. data samples from an underlying data
distribution. The objective is to design an algorithm to minimize a convex
population loss using a collaborative effort across $M$ clients, while ensuring
the privacy of the local datasets. In this work, we investigate the
accuracy-communication-privacy trade-off for this problem. We establish
matching converse and achievability results using a novel lower bound and a new
algorithm for distributed DP-SCO based on Vaidya's cutting-plane method. Thus,
our results provide a complete characterization of the
accuracy-communication-privacy trade-off for DP-SCO in the distributed setting.
|
2501.03223 | Rate-My-LoRA: Efficient and Adaptive Federated Model Tuning for Cardiac
MRI Segmentation | cs.CV cs.DC cs.LG | Cardiovascular disease (CVD) and cardiac dyssynchrony are major public health
problems in the United States. Precise cardiac image segmentation is crucial
for extracting quantitative measures that help categorize cardiac dyssynchrony.
However, achieving high accuracy often depends on centralizing large datasets
from different hospitals, which can be challenging due to privacy concerns. To
solve this problem, Federated Learning (FL) is proposed to enable decentralized
model training on such data without exchanging sensitive information. However,
bandwidth limitations and data heterogeneity remain as significant challenges
in conventional FL algorithms. In this paper, we propose a novel efficient and
adaptive federated learning method for cardiac segmentation that improves model
performance while reducing the bandwidth requirement. Our method leverages
low-rank adaptation (LoRA) to regularize model weight updates and reduce
communication overhead. We also propose a Rate-My-LoRA aggregation technique to
address data heterogeneity among clients. This technique adaptively penalizes
the aggregated weights from different clients by comparing the validation
accuracy on each client, allowing better generalization performance and fast
local adaptation. In-client and cross-client evaluations on public cardiac MR
datasets demonstrate the superiority of our method over other LoRA-based
federated learning approaches.
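The accuracy-driven aggregation the abstract describes can be sketched in a few lines. This is a hypothetical illustration in the spirit of the abstract, not the paper's exact rule: per-client validation accuracies are turned into softmax weights, so clients that validate poorly are penalized in the aggregate.

```python
# Sketch of validation-accuracy-weighted aggregation of per-client LoRA
# updates: a softmax over accuracies down-weights clients that generalize
# poorly. (The temperature here is an assumed, illustrative hyperparameter.)
import math

def aggregate_lora(updates, val_accuracies, temperature=0.1):
    """updates: per-client lists of floats (flattened LoRA deltas);
    val_accuracies: per-client validation accuracy in [0, 1]."""
    scores = [math.exp(a / temperature) for a in val_accuracies]
    z = sum(scores)
    weights = [s / z for s in scores]
    dim = len(updates[0])
    return [sum(w * u[i] for w, u in zip(weights, updates))
            for i in range(dim)]

clients = [[1.0, 0.0], [0.0, 1.0]]  # two toy one-hot LoRA deltas
accs = [0.9, 0.6]                   # client 0 validates better
agg = aggregate_lora(clients, accs)
# agg[0] > agg[1]: the better-validated client dominates the aggregate.
```

Because only low-rank deltas and scalar accuracies are exchanged, this style of aggregation also keeps the communication cost far below sending full model weights.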
|