| id | title | categories | abstract |
|---|---|---|---|
2501.08284
|
AfriHate: A Multilingual Collection of Hate Speech and Abusive Language
Datasets for African Languages
|
cs.CL
|
Hate speech and abusive language are global phenomena that need
socio-cultural background knowledge to be understood, identified, and
moderated. However, in many regions of the Global South, there have been
several documented occurrences of (1) absence of moderation and (2) censorship
due to the reliance on keyword spotting out of context. Further, high-profile
individuals have frequently been at the center of the moderation process, while
large and targeted hate speech campaigns against minorities have been
overlooked. These limitations are mainly due to the lack of high-quality data
in the local languages and the failure to include local communities in the
collection, annotation, and moderation processes. To address this issue, we
present AfriHate: a multilingual collection of hate speech and abusive language
datasets in 15 African languages. Each instance in AfriHate is annotated by
native speakers familiar with the local culture. We report the challenges
related to the construction of the datasets and present various classification
baseline results with and without using LLMs. The datasets, individual
annotations, and hate speech and offensive language lexicons are available at
https://github.com/AfriHate/AfriHate
|
2501.08285
|
Can Bayesian Neural Networks Explicitly Model Input Uncertainty?
|
cs.LG cs.CV
|
Inputs to machine learning models can have associated noise or uncertainty,
but this is often ignored and not modelled. It is unknown whether Bayesian
Neural Networks and their approximations are able to account for uncertainty in
their inputs. In this paper we build a two-input Bayesian Neural Network
(taking the input mean and standard deviation) and evaluate its capabilities
for input uncertainty estimation across different methods such as Ensembles,
MC-Dropout, and Flipout.
Our results indicate that only some uncertainty estimation methods for
approximate Bayesian NNs can model input uncertainty, in particular Ensembles
and Flipout.
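To make the notion of input uncertainty concrete, here is a minimal NumPy sketch (not the paper's two-input BNN; the model and numbers are invented) that propagates an input's mean and standard deviation through a fixed model via Monte Carlo sampling, so the predictive spread reflects the input noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # A fixed, "trained" nonlinear model, used only for illustration.
    return np.sin(x) + 0.1 * x**2

def predict_with_input_uncertainty(mu, sigma, n_samples=10_000):
    """Propagate input uncertainty (mean, std) through the model by sampling."""
    xs = rng.normal(mu, sigma, size=n_samples)
    ys = model(xs)
    return ys.mean(), ys.std()

m_lo, s_lo = predict_with_input_uncertainty(1.0, 0.01)
m_hi, s_hi = predict_with_input_uncertainty(1.0, 0.5)
assert s_hi > s_lo   # larger input std should yield a larger predictive spread
```

A method that "models input uncertainty" in the paper's sense should reproduce this behaviour implicitly from the (mean, std) inputs, rather than by explicit sampling as done here.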
|
2501.08286
|
VINGS-Mono: Visual-Inertial Gaussian Splatting Monocular SLAM in Large
Scenes
|
cs.RO cs.CV
|
VINGS-Mono is a monocular (inertial) Gaussian Splatting (GS) SLAM framework
designed for large scenes. The framework comprises four main components: VIO
Front End, 2D Gaussian Map, NVS Loop Closure, and Dynamic Eraser. In the VIO
Front End, RGB frames are processed through dense bundle adjustment and
uncertainty estimation to extract scene geometry and poses. Based on this
output, the mapping module incrementally constructs and maintains a 2D Gaussian
map. Key components of the 2D Gaussian Map include a Sample-based Rasterizer,
Score Manager, and Pose Refinement, which collectively improve mapping speed
and localization accuracy. This enables the SLAM system to handle large-scale
urban environments with up to 50 million Gaussian ellipsoids. To ensure global
consistency in large-scale scenes, we design a Loop Closure module, which
innovatively leverages the Novel View Synthesis (NVS) capabilities of Gaussian
Splatting for loop closure detection and correction of the Gaussian map.
Additionally, we propose a Dynamic Eraser to address the inevitable presence of
dynamic objects in real-world outdoor scenes. Extensive evaluations in indoor
and outdoor environments demonstrate that our approach achieves localization
performance on par with Visual-Inertial Odometry while surpassing recent
GS/NeRF SLAM methods. It also significantly outperforms all existing methods in
terms of mapping and rendering quality. Furthermore, we developed a mobile app
and verified that our framework can generate high-quality Gaussian maps in real
time using only a smartphone camera and a low-frequency IMU sensor. To the best
of our knowledge, VINGS-Mono is the first monocular Gaussian SLAM method
capable of operating in outdoor environments and supporting kilometer-scale
large scenes.
|
2501.08288
|
Avoiding subtraction and division of stochastic signals using
normalizing flows: NFdeconvolve
|
stat.ML cs.LG math.PR physics.data-an q-bio.QM
|
Across the scientific realm, we find ourselves subtracting or dividing
stochastic signals. For instance, consider a stochastic realization, $x$,
generated from the addition or multiplication of two stochastic signals $a$ and
$b$, namely $x=a+b$ or $x = ab$. For the $x=a+b$ example, $a$ can be
fluorescence background and $b$ the signal of interest whose statistics are to
be learned from the measured $x$. Similarly, when writing $x=ab$, $a$ can be
thought of as the illumination intensity and $b$ the density of fluorescent
molecules of interest. Yet dividing or subtracting stochastic signals amplifies
noise, and we ask instead whether, using the statistics of $a$ and the
measurement of $x$ as input, we can recover the statistics of $b$. Here, we
show how normalizing flows can generate an approximation of the probability
distribution over $b$, thereby avoiding subtraction or division altogether.
This method is implemented in our software package, NFdeconvolve, available on
GitHub with a tutorial linked in the main text.
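The noise-amplification argument can be checked numerically. The sketch below (NumPy, Gaussian toy case; NFdeconvolve itself uses normalizing flows rather than the moment matching shown here) contrasts naive subtraction of background draws with recovering the statistics of $b$ from the statistics of $a$ and the measured $x$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Known background a ~ N(0, 1); signal of interest b ~ N(3, 0.5); we measure x = a + b.
a = rng.normal(0.0, 1.0, n)
b = rng.normal(3.0, 0.5, n)
x = a + b

# Naive subtraction: drawing fresh background samples and subtracting them
# amplifies noise, since Var(x - a') = Var(b) + 2 Var(a).
naive = x - rng.normal(0.0, 1.0, n)
assert naive.std() > 2 * b.std()   # std ~ 1.5 instead of 0.5

# Recovering the statistics of b directly (a Gaussian stand-in for a learned flow):
# x = a + b implies E[b] = E[x] - E[a] and Var(b) = Var(x) - Var(a).
mu_b = x.mean() - a.mean()
sigma_b = np.sqrt(x.var() - a.var())
assert abs(mu_b - 3.0) < 0.05 and abs(sigma_b - 0.5) < 0.05
```

The flow-based method generalizes this idea beyond the Gaussian case, learning an arbitrary distribution over $b$ that is consistent with the known statistics of $a$ and the measured $x$.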
|
2501.08292
|
HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
|
cs.CL cs.AI
|
Despite their impressive ability to generate high-quality and fluent text,
generative large language models (LLMs) also produce hallucinations: statements
that are misaligned with established world knowledge or provided input context.
However, measuring hallucination can be challenging, as having humans verify
model generations on-the-fly is both expensive and time-consuming. In this
work, we release HALoGEN, a comprehensive hallucination benchmark consisting
of: (1) 10,923 prompts for generative models spanning nine domains including
programming, scientific attribution, and summarization, and (2) automatic
high-precision verifiers for each use case that decompose LLM generations into
atomic units, and verify each unit against a high-quality knowledge source. We
use this framework to evaluate ~150,000 generations from 14 language models,
finding that even the best-performing models are riddled with hallucinations
(sometimes up to 86% of generated atomic facts depending on the domain). We
further define a novel error classification for LLM hallucinations based on
whether they likely stem from incorrect recollection of training data (Type A
errors), incorrect knowledge in the training data (Type B errors), or
fabrication (Type C errors). We hope our framework provides a foundation to
enable the principled study of why generative models hallucinate, and advances
the development of trustworthy large language models.
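The decompose-and-verify loop can be illustrated with a toy example (the knowledge source, triples, and decomposer below are invented; HALoGEN's verifiers are domain-specific and far more sophisticated):

```python
# Toy sketch of decomposing a generation into atomic units and verifying each
# against a high-quality knowledge source.

KNOWLEDGE = {                       # hypothetical knowledge source
    ("Python", "created_by", "Guido van Rossum"),
    ("Python", "first_released", "1991"),
}

def decompose(generation):
    """Hypothetical decomposer: atomic units are (subject, relation, object) triples."""
    return generation

def hallucination_rate(atomic_units):
    unsupported = [u for u in atomic_units if u not in KNOWLEDGE]
    return len(unsupported) / len(atomic_units)

gen = [
    ("Python", "created_by", "Guido van Rossum"),   # supported
    ("Python", "first_released", "1989"),           # hallucinated
]
rate = hallucination_rate(decompose(gen))
assert rate == 0.5
```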
|
2501.08295
|
LayerAnimate: Layer-specific Control for Animation
|
cs.CV
|
Animated video separates foreground and background elements into layers, with
distinct processes for sketching, refining, coloring, and in-betweening.
Existing video generation methods typically treat animation as a monolithic
data domain, lacking fine-grained control over individual layers. In this
paper, we introduce LayerAnimate, a novel architectural approach that enhances
fine-grained control over individual animation layers within a video diffusion
model, allowing users to independently manipulate foreground and background
elements in distinct layers. To address the challenge of limited layer-specific
data, we propose a data curation pipeline that features automated element
segmentation, motion-state hierarchical merging, and motion coherence
refinement. Through quantitative and qualitative comparisons and a user study,
we demonstrate that LayerAnimate outperforms current methods in terms of
animation quality, control precision, and usability, making it an ideal tool
for both professional animators and amateur enthusiasts. This framework opens
up new possibilities for layer-specific animation applications and creative
flexibility. Our code is available at https://layeranimate.github.io.
|
2501.08296
|
A Survey on Pedophile Attribution Techniques for Online Platforms
|
cs.CL
|
The anonymity afforded by social media has increased its popularity among
users of all ages. The availability of public Wi-Fi networks has facilitated
access to a vast variety of online content, including social media
applications. Although anonymity and ease of access are a convenient means of
communication for users, they make it difficult to manage and protect
vulnerable users against sexual predators. Using an automated identification
system that can attribute predators to their text would make the solution more
attainable. In this survey, we provide a review of the methods of pedophile
attribution used in social media platforms. We examine the effect of the size
of the suspect set and the length of the text on the task of attribution.
Moreover, we review the most-used datasets, features, classification techniques
and performance measures for attributing sexual predators. We found that few
studies have proposed tools to mitigate the risk of online sexual predators,
but none of them can provide suspect attribution. Finally, we list several open
research problems.
|
2501.08297
|
Polynomial Threshold Functions of Bounded Tree-Width: Some
Explainability and Complexity Aspects
|
cs.LG cs.AI
|
The tree-width of a multivariate polynomial is the tree-width of the
hypergraph with hyperedges corresponding to its terms. Multivariate polynomials
of bounded tree-width have been studied by Makowsky and Meer as a new sparsity
condition that allows for polynomial solvability of problems which are
intractable in general. We consider a variation on this theme for Boolean
variables. A representation of a Boolean function as the sign of a polynomial
is called a polynomial threshold representation. We discuss Boolean functions
representable as polynomial threshold functions of bounded tree-width and
present two applications to Bayesian network classifiers, a probabilistic
graphical model. Both applications are in Explainable Artificial Intelligence
(XAI), the research area dealing with the black-box nature of many recent
machine learning models. We also give a separation result between the
representational power of positive and general polynomial threshold functions.
|
2501.08303
|
Advancing Semantic Future Prediction through Multimodal Visual Sequence
Transformers
|
cs.CV
|
Semantic future prediction is important for autonomous systems navigating
dynamic environments. This paper introduces FUTURIST, a method for multimodal
future semantic prediction that uses a unified and efficient visual sequence
transformer architecture. Our approach incorporates a multimodal masked visual
modeling objective and a novel masking mechanism designed for multimodal
training. This allows the model to effectively integrate visible information
from various modalities, improving prediction accuracy. Additionally, we
propose a VAE-free hierarchical tokenization process, which reduces
computational complexity, streamlines the training pipeline, and enables
end-to-end training with high-resolution, multimodal inputs. We validate
FUTURIST on the Cityscapes dataset, demonstrating state-of-the-art performance
in future semantic segmentation for both short- and mid-term forecasting. We
provide the implementation code at https://github.com/Sta8is/FUTURIST .
|
2501.08304
|
A Novel Method for Detecting Dust Accumulation in Photovoltaic Systems:
Evaluating Visible Sunlight Obstruction in Different Dust Levels and AI-based
Bird Droppings Detection
|
eess.SY cs.SY
|
This paper presents an innovative method for automatically detecting dust
accumulation on a PV system and instantly notifying the user to clean it. The
accumulation of dust, bird, or insect droppings on the surface of photovoltaic
(PV) panels creates a barrier that prevents the panel's surface from receiving
sufficient solar energy to generate electricity. The study investigates the
effects of dust on PV panel output and the amount of visible sunlight (VSL)
blocked, in order to assess the need for cleaning and detection. The amount of
visible sunlight blocked while passing through glass due to dust determines
the accumulated dust level. Visible sunlight can easily pass through clean,
transparent glass but is reflected when something like dust obstructs it.
Based on those concepts, a system is designed with a light sensor that is
simple, effective, easy to install, hassle-free, and easy to disseminate. The
study also explores the effectiveness of the detection system developed by
using image processing and machine learning algorithms to identify dust levels
and bird or insect droppings accurately. The experimental setup in Gazipur,
Bangladesh, found that excessive dust can block up to 55% of visible sunlight,
wasting 55% of solar energy in the visible spectrum, and cleaning can recover
3% of power weekly. The data from the dust detection system is correlated with
the 400W capacity solar panels' naturally lost efficiency data to validate the
system. This research measured visible sunlight obstruction and loss due to
dust. However, the addition of an infrared radiation sensor can draw the entire
scenario of energy loss by doing more research.
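The core detection rule — compare a light-sensor reading against a clean-glass baseline and notify above a blockage threshold — can be sketched as follows (the lux values and the 35% threshold are illustrative assumptions, not the paper's calibration):

```python
def blocked_fraction(clean_lux, measured_lux):
    """Fraction of visible sunlight blocked by dust, relative to a clean-glass baseline."""
    return 1.0 - measured_lux / clean_lux

def needs_cleaning(clean_lux, measured_lux, threshold=0.35):
    """Flag the panel for cleaning once blockage exceeds a threshold (value is illustrative)."""
    return blocked_fraction(clean_lux, measured_lux) >= threshold

# The paper reports excessive dust blocking up to 55% of visible sunlight.
assert needs_cleaning(1000.0, 450.0)        # 55% blocked -> notify the user
assert not needs_cleaning(1000.0, 980.0)    # 2% blocked -> still clean
```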
|
2501.08305
|
Benchmarking Graph Representations and Graph Neural Networks for
Multivariate Time Series Classification
|
cs.LG
|
Multivariate Time Series Classification (MTSC) enables the analysis of
complex temporal data, and thus serves as a cornerstone in various real-world
applications, ranging from healthcare to finance. Since the relationships among
variables in MTS usually contain crucial cues, a large number of graph-based
MTSC approaches have been proposed, as the graph topology and edges can
explicitly represent relationships among variables (channels); here, both
various MTS graph representation learning strategies and different Graph
Neural Networks (GNNs) have been explored. Despite such progress, there is no
comprehensive study that fairly benchmarks and investigates the performance of
existing widely-used graph representation learning strategies and GNN
classifiers across different MTSC tasks. In this paper, we present the first
benchmark which systematically investigates the effectiveness of the three
widely-used node feature definition strategies, four edge feature learning
strategies, and five GNN architectures, resulting in 60 different variants for
graph-based MTSC. These variants are developed and evaluated with a
standardized data pipeline and training/validation/testing strategy on 26
widely-used MTSC datasets. Our experiments highlight that node
features significantly influence MTSC performance, while the visualization of
edge features illustrates why adaptive edge learning outperforms other edge
feature learning methods. The code of the proposed benchmark is publicly
available at
\url{https://github.com/CVI-yangwn/Benchmark-GNN-for-Multivariate-Time-Series-Classification}.
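As a toy example of one widely-used graph representation strategy (correlation-based edge definition; a generic illustration, not the benchmark's exact pipeline), channels become nodes and absolute Pearson correlations become edge weights:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy multivariate time series: 5 channels (variables) x 200 timesteps.
mts = rng.normal(size=(5, 200))
mts[1] = 0.9 * mts[0] + 0.1 * rng.normal(size=200)   # make channels 0 and 1 correlated

# Nodes = channels; edge weights = |Pearson correlation| between channels.
adj = np.abs(np.corrcoef(mts))
np.fill_diagonal(adj, 0.0)

# Sparsify: keep only strong relationships as edges.
edges = adj > 0.5
assert edges[0, 1] and not edges[0, 2]
```

Adaptive edge learning replaces this fixed correlation rule with edge weights learned jointly with the classifier, which is what the benchmark's visualizations compare against.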
|
2501.08306
|
Path Loss Prediction Using Machine Learning with Extended Features
|
cs.LG eess.SP
|
Wireless communications rely on path loss modeling, which is most effective
when it includes the physical details of the propagation environment. Acquiring
this data has historically been challenging, but geographic information system
data is becoming increasingly available with higher resolution and accuracy.
Access to such details enables propagation models to more accurately predict
coverage and minimize interference in wireless deployments. Machine
learning-based modeling can significantly support this effort, with
feature-based approaches allowing for accurate, efficient, and scalable
propagation modeling. Building on previous work, we introduce an extended set
of features that improves prediction accuracy while, most importantly,
maintaining model generalization across a broad range of environments.
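As a minimal illustration of feature-based path loss modeling (synthetic data; the feature set and the 0.3 dB/m clutter slope are invented stand-ins for the paper's extended GIS-derived features), a least-squares fit on physically motivated features recovers the free-space path loss coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic samples: distance (km), frequency (MHz), and a GIS-style
# clutter-depth feature (m).
d = rng.uniform(0.1, 10, n)
f = rng.uniform(700, 3500, n)
clutter = rng.uniform(0, 50, n)

# Free-space path loss (dB) plus a linear clutter penalty and measurement noise.
pl = 32.44 + 20 * np.log10(d) + 20 * np.log10(f) + 0.3 * clutter + rng.normal(0, 1, n)

# Feature-based model: least squares on [1, log10 d, log10 f, clutter].
X = np.column_stack([np.ones(n), np.log10(d), np.log10(f), clutter])
coef, *_ = np.linalg.lstsq(X, pl, rcond=None)
rmse = np.sqrt(np.mean((X @ coef - pl) ** 2))
assert abs(coef[1] - 20) < 1 and abs(coef[3] - 0.3) < 0.05 and rmse < 1.2
```

In practice richer ML models (e.g. boosted trees or neural networks) replace the linear fit, but the feature-driven structure, and its scalability, is the same.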
|
2501.08312
|
Everybody Likes to Sleep: A Computer-Assisted Comparison of Object
Naming Data from 30 Languages
|
cs.CL
|
Object naming - the act of identifying an object with a word or a phrase - is
a fundamental skill in interpersonal communication, relevant to many
disciplines, such as psycholinguistics, cognitive linguistics, or language and
vision research. Object naming datasets, which consist of concept lists with
picture pairings, are used to gain insights into how humans access and select
names for objects in their surroundings and to study the cognitive processes
involved in converting visual stimuli into semantic concepts. Unfortunately,
object naming datasets often lack transparency and have a highly idiosyncratic
structure. Our study tries to make current object naming data transparent and
comparable by using a multilingual, computer-assisted approach that links
individual items of object naming lists to unified concepts. Our current sample
links 17 object naming datasets that cover 30 languages from 10 different
language families. We illustrate how the comparative dataset can be explored by
searching for concepts that recur across the majority of datasets and comparing
the conceptual spaces of covered object naming datasets with classical basic
vocabulary lists from historical linguistics and linguistic typology. Our
findings can serve as a basis for enhancing cross-linguistic object naming
research and as a guideline for future studies dealing with object naming
tasks.
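The linking step the study describes can be pictured with a toy mapping from dataset-specific items to unified concept identifiers (the words and concept IDs below are invented; the study uses computer-assisted multilingual mappings):

```python
# Toy sketch of linking object naming items from different datasets to unified
# concepts, so that datasets in different languages become comparable.
CONCEPT_MAP = {
    "dog": "DOG", "hund": "DOG", "perro": "DOG",
    "sleep": "SLEEP", "dormir": "SLEEP",
}

dataset_a = ["dog", "sleep"]      # e.g. an English object naming list
dataset_b = ["perro", "dormir"]   # e.g. a Spanish object naming list

linked_a = {CONCEPT_MAP[w] for w in dataset_a}
linked_b = {CONCEPT_MAP[w] for w in dataset_b}
# Concepts recurring across datasets can now be searched and compared.
assert linked_a & linked_b == {"DOG", "SLEEP"}
```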
|
2501.08313
|
MiniMax-01: Scaling Foundation Models with Lightning Attention
|
cs.CL cs.CV
|
We introduce the MiniMax-01 series, including MiniMax-Text-01 and MiniMax-VL-01,
which are comparable to top-tier models while offering superior capabilities in
processing longer contexts. The core lies in lightning attention and its
efficient scaling. To maximize computational capacity, we integrate it with
Mixture of Experts (MoE), creating a model with 32 experts and 456 billion
total parameters, of which 45.9 billion are activated for each token. We
develop an optimized parallel strategy and highly efficient
computation-communication overlap techniques for MoE and lightning attention.
This approach enables us to conduct efficient training and inference on models
with hundreds of billions of parameters across contexts spanning millions of
tokens. The context window of MiniMax-Text-01 can reach up to 1 million tokens
during training and extrapolate to 4 million tokens during inference at an
affordable cost. Our vision-language model, MiniMax-VL-01, is built through
continued training with 512 billion vision-language tokens. Experiments on both
standard and in-house benchmarks show that our models match the performance of
state-of-the-art models like GPT-4o and Claude-3.5-Sonnet while offering a
20-32 times longer context window. We publicly release MiniMax-01 at
https://github.com/MiniMax-AI.
|
2501.08314
|
Mechanics Informatics: A paradigm for efficiently learning constitutive
models
|
cs.CE
|
Efficient and accurate learning of constitutive laws is crucial for
accurately predicting the mechanical behavior of materials under complex
loading conditions. Accurate model calibration hinges on a delicate interplay
between the information embedded in experimental data and the parameters that
define our constitutive models. The information encoded in the parameters of
the constitutive model must be complemented by the information in the data used
for calibration. This interplay raises fundamental questions: How can we
quantify the information content of test data? How much information does a
single test convey? Also, how much information is required to accurately learn
a constitutive model? To address these questions, we introduce mechanics
informatics, a paradigm for efficient and accurate constitutive model learning.
At its core is the stress state entropy, a metric quantifying the information
content of experimental data. Using this framework, we analyzed specimen
geometries with varying information content for learning an anisotropic
inelastic law. Specimens with limited information enabled accurate
identification of a few parameters sensitive to the information in the data.
Furthermore, we optimized specimen design by incorporating stress state entropy
into a Bayesian optimization scheme. This led to the design of cruciform
specimens with maximized entropy for accurate parameter identification.
Conversely, minimizing entropy in Peirs shear specimens yielded a uniform pure
shear stress state, showcasing the framework's flexibility in tailoring designs
for specific experimental goals. Finally, we addressed experimental
uncertainties and demonstrated the potential of transfer learning for replacing
challenging testing protocols with simpler alternatives, while preserving
calibration accuracy.
|
2501.08316
|
Diffusion Adversarial Post-Training for One-Step Video Generation
|
cs.CV cs.AI cs.LG
|
Diffusion models are widely used for image and video generation, but their
iterative generation process is slow and expensive. While existing
distillation approaches have demonstrated the potential for one-step generation
in the image domain, they still suffer from significant quality degradation. In
this work, we propose Adversarial Post-Training (APT) against real data
following diffusion pre-training for one-step video generation. To improve the
training stability and quality, we introduce several improvements to the model
architecture and training procedures, along with an approximated R1
regularization objective. Empirically, our experiments show that our
adversarial post-trained model, Seaweed-APT, can generate 2-second, 1280x720,
24fps videos in real time using a single forward evaluation step. Additionally,
our model is capable of generating 1024px images in a single step, achieving
quality comparable to state-of-the-art methods.
|
2501.08317
|
A Similarity Measure Between Functions with Applications to Statistical
Learning and Optimization
|
cs.LG math.OC stat.ML
|
In this note, we present a novel measure of similarity between two functions.
It quantifies how the sub-optimality gaps of two functions convert to each
other, and unifies several existing notions of functional similarity. We show
that it has convenient operation rules, and illustrate its use in empirical
risk minimization and non-stationary online optimization.
|
2501.08319
|
Enhancing Automated Interpretability with Output-Centric Feature
Descriptions
|
cs.CL
|
Automated interpretability pipelines generate natural language descriptions
for the concepts represented by features in large language models (LLMs), such
as plants or the first word in a sentence. These descriptions are derived using
inputs that activate the feature, which may be a dimension or a direction in
the model's representation space. However, identifying activating inputs is
costly, and the mechanistic role of a feature in model behavior is determined
both by how inputs cause a feature to activate and by how feature activation
affects outputs. Using steering evaluations, we reveal that current pipelines
provide descriptions that fail to capture the causal effect of the feature on
outputs. To fix this, we propose efficient, output-centric methods for
automatically generating feature descriptions. These methods use the tokens
weighted higher after feature stimulation or the highest weight tokens after
applying the vocabulary "unembedding" head directly to the feature. Our
output-centric descriptions better capture the causal effect of a feature on
model outputs than input-centric descriptions, but combining the two leads to
the best performance on both input and output evaluations. Lastly, we show that
output-centric descriptions can be used to find inputs that activate features
previously thought to be "dead".
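The vocabulary-projection variant can be illustrated in a few lines (toy matrices; in a real LLM the feature direction lives in the model's residual space and W_U is the learned unembedding head):

```python
import numpy as np

vocab = ["the", "plant", "tree", "flower", "run"]
W_U = np.eye(len(vocab))                 # toy unembedding matrix (d_model = vocab size here)
feature = W_U[:, 1] + 0.8 * W_U[:, 3]    # invented feature aligned with "plant" and "flower"

# Output-centric description: rank vocabulary tokens by the logit obtained from
# applying the unembedding head directly to the feature direction.
logits = feature @ W_U
top2 = [vocab[i] for i in np.argsort(logits)[::-1][:2]]
assert top2 == ["plant", "flower"]
```

The highest-logit tokens form a description of what the feature promotes in the output, complementing the activating-input view.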
|
2501.08322
|
Exploring Robustness of Multilingual LLMs on Real-World Noisy Data
|
cs.CL
|
Large Language Models (LLMs) are trained on Web data that might contain
spelling errors made by humans. But do they become robust to similar real-world
noise? In this paper, we investigate the effect of real-world spelling mistakes
on the performance of 9 language models, with parameters ranging from 0.2B to
13B, in 3 different NLP tasks, namely Natural Language Inference (NLI), Named
Entity Recognition (NER), and Intent Classification (IC). We perform our
experiments on 6 different languages and build a dictionary of real-world noise
for them using the Wikipedia edit history. We show that the performance gap of
the studied models on the clean and noisy test data averaged across all the
datasets and languages ranges from 2.3 to 4.3 absolute percentage points. In
addition, mT5 models, in general, show more robustness compared to BLOOM,
Falcon, and BERT-like models. In particular, mT5 (13B) was the most robust on
average overall, across the 3 tasks, and in 4 of the 6 languages.
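The noise-injection setup can be sketched as a dictionary lookup over real-word misspellings (the entries below are invented examples; the paper mines its dictionary from the Wikipedia edit history of each language):

```python
# Toy sketch of building noisy test data from a dictionary of real-world
# misspellings, to measure the clean-vs-noisy performance gap of a model.
NOISE_DICT = {"received": "recieved", "definitely": "definately"}

def add_noise(sentence):
    return " ".join(NOISE_DICT.get(w, w) for w in sentence.split())

clean = "I definitely received the letter"
noisy = add_noise(clean)
assert noisy == "I definately recieved the letter"
```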
|
2501.08324
|
ADAM-1: AI and Bioinformatics for Alzheimer's Detection and
Microbiome-Clinical Data Integrations
|
cs.AI
|
The Alzheimer's Disease Analysis Model Generation 1 (ADAM) is a multi-agent
large language model (LLM) framework designed to integrate and analyze
multi-modal data, including microbiome profiles, clinical datasets, and
external knowledge bases, to enhance the understanding and detection of
Alzheimer's disease (AD). By leveraging retrieval-augmented generation (RAG)
techniques along with its multi-agent architecture, ADAM-1 synthesizes insights
from diverse data sources and contextualizes findings using literature-driven
evidence. Comparative evaluation against XGBoost revealed similar mean F1
scores but significantly reduced variance for ADAM-1, highlighting its
robustness and consistency, particularly in small laboratory datasets. While
currently tailored for binary classification tasks, future iterations aim to
incorporate additional data modalities, such as neuroimaging and biomarkers, to
broaden the scalability and applicability for Alzheimer's research and
diagnostics.
|
2501.08325
|
GameFactory: Creating New Games with Generative Interactive Videos
|
cs.CV
|
Generative game engines have the potential to revolutionize game development
by autonomously creating new content and reducing manual workload. However,
existing video-based game generation methods fail to address the critical
challenge of scene generalization, limiting their applicability to existing
games with fixed styles and scenes. In this paper, we present GameFactory, a
framework focused on exploring scene generalization in game video generation.
To enable the creation of entirely new and diverse games, we leverage
pre-trained video diffusion models trained on open-domain video data. To bridge
the domain gap between open-domain priors and the small-scale game dataset, we
propose a multi-phase training strategy that decouples game style learning from
action control, preserving open-domain generalization while achieving action
controllability. Using Minecraft as our data source, we release GF-Minecraft, a
high-quality, diverse, action-annotated video dataset for research.
Furthermore, we extend our framework to enable autoregressive
action-controllable game video generation, allowing the production of
unlimited-length interactive game videos. Experimental results demonstrate that
GameFactory effectively generates open-domain, diverse, and action-controllable
game videos, representing a significant step forward in AI-driven game
generation. Our dataset and project page are publicly available at
\url{https://vvictoryuki.github.io/gamefactory/}.
|
2501.08326
|
Omni-RGPT: Unifying Image and Video Region-level Understanding via Token
Marks
|
cs.CV
|
We present Omni-RGPT, a multimodal large language model designed to
facilitate region-level comprehension for both images and videos. To achieve
consistent region representation across spatio-temporal dimensions, we
introduce Token Mark, a set of tokens highlighting the target regions within
the visual feature space. These tokens are directly embedded into spatial
regions using region prompts (e.g., boxes or masks) and simultaneously
incorporated into the text prompt to specify the target, establishing a direct
connection between visual and text tokens. To further support robust video
understanding without requiring tracklets, we introduce an auxiliary task that
guides Token Mark by leveraging the consistency of the tokens, enabling stable
region interpretation across the video. Additionally, we introduce a
large-scale region-level video instruction dataset (RegVID-300k). Omni-RGPT
achieves state-of-the-art results on image and video-based commonsense
reasoning benchmarks while showing strong performance in captioning and
referring expression comprehension tasks.
|
2501.08328
|
PokerBench: Training Large Language Models to become Professional Poker
Players
|
cs.CL cs.AI cs.GT
|
We introduce PokerBench - a benchmark for evaluating the poker-playing
abilities of large language models (LLMs). As LLMs excel in traditional NLP
tasks, their application to complex, strategic games like poker poses a new
challenge. Poker, an incomplete information game, demands a multitude of skills
such as mathematics, reasoning, planning, strategy, and a deep understanding of
game theory and human psychology. This makes Poker the ideal next frontier for
large language models. PokerBench consists of a comprehensive compilation of
11,000 most important scenarios, split between pre-flop and post-flop play,
developed in collaboration with trained poker players. We evaluate prominent
models including GPT-4, ChatGPT 3.5, and various Llama and Gemma series models,
finding that all state-of-the-art LLMs underperform in playing optimal poker.
However, after fine-tuning, these models show marked improvements. We validate
PokerBench by having models with different scores compete with each other,
demonstrating that higher scores on PokerBench lead to higher win rates in
actual poker games. Through gameplay between our fine-tuned model and GPT-4, we
also identify limitations of simple supervised fine-tuning for learning optimal
playing strategy, suggesting the need for more advanced methodologies for
effectively training language models to excel in games. PokerBench thus
presents a unique benchmark for a quick and reliable evaluation of the
poker-playing ability of LLMs as well as a comprehensive benchmark to study the
progress of LLMs in complex game-playing scenarios.
|
2501.08329
|
Predicting 4D Hand Trajectory from Monocular Videos
|
cs.CV
|
We present HaPTIC, an approach that infers coherent 4D hand trajectories from
monocular videos. Current video-based hand pose reconstruction methods
primarily focus on improving frame-wise 3D pose using adjacent frames rather
than studying consistent 4D hand trajectories in space. Despite the additional
temporal cues, they generally underperform compared to image-based methods due
to the scarcity of annotated video data. To address these issues, we repurpose
a state-of-the-art image-based transformer to take in multiple frames and
directly predict a coherent trajectory. We introduce two types of lightweight
attention layers: cross-view self-attention to fuse temporal information, and
global cross-attention to bring in larger spatial context. Our method infers 4D
hand trajectories similar to the ground truth while maintaining strong 2D
reprojection alignment. We apply the method to both egocentric and allocentric
videos. It significantly outperforms existing methods in global trajectory
accuracy while being comparable to the state-of-the-art in single-image pose
estimation. Project website: https://judyye.github.io/haptic-www
|
2501.08330
|
Gradient Equilibrium in Online Learning: Theory and Applications
|
cs.LG math.OC math.ST stat.ML stat.TH
|
We present a new perspective on online learning that we refer to as gradient
equilibrium: a sequence of iterates achieves gradient equilibrium if the
average of gradients of losses along the sequence converges to zero. In
general, this condition is not implied by, nor implies, sublinear regret. It
turns out that gradient equilibrium is achievable by standard online learning
methods such as gradient descent and mirror descent with constant step sizes
(rather than decaying step sizes, as is usually required for no regret).
Further, as we show through examples, gradient equilibrium translates into an
interpretable and meaningful property in online prediction problems spanning
regression, classification, quantile estimation, and others. Notably, we show
that the gradient equilibrium framework can be used to develop a debiasing
scheme for black-box predictions under arbitrary distribution shift, based on
simple post hoc online descent updates. We also show that post hoc gradient
updates can be used to calibrate predicted quantiles under distribution shift,
and that the framework leads to unbiased Elo scores for pairwise preference
prediction.
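The gradient-equilibrium property described above can be illustrated with a minimal sketch (hypothetical code, not the paper's implementation): run online gradient descent with a constant step size on a stream of squared losses and check that the running average of gradients shrinks toward zero.

```python
import numpy as np

# Illustrative sketch (not the paper's code): online gradient descent with a
# constant step size on a stream of squared losses l_t(x) = 0.5 * (x - y_t)^2.
# Gradient equilibrium means the running average of gradients tends to zero,
# even though no-regret analyses would usually require a decaying step size.
rng = np.random.default_rng(0)
x, step, T = 0.0, 0.1, 10_000
grad_sum = 0.0
for t in range(T):
    y_t = rng.normal(loc=3.0, scale=1.0)  # stream of targets
    g = x - y_t                           # gradient of the current loss at x
    grad_sum += g
    x -= step * g                         # constant step size, no decay
avg_grad = grad_sum / T
print(abs(avg_grad))  # close to zero (O(1/T) by a telescoping argument)
```

The telescoping identity x_T = x_0 - step * sum(g_t) shows why: since the iterates stay bounded, the gradient sum is bounded, so its average vanishes.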
|
2501.08331
|
Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using
Real-Time Warped Noise
|
cs.CV
|
Generative modeling aims to transform random noise into structured outputs.
In this work, we enhance video diffusion models by allowing motion control via
structured latent noise sampling. This is achieved by just a change in data: we
pre-process training videos to yield structured noise. Consequently, our method
is agnostic to diffusion model design, requiring no changes to model
architectures or training pipelines. Specifically, we propose a novel noise
warping algorithm, fast enough to run in real time, that replaces random
temporal Gaussianity with correlated warped noise derived from optical flow
fields, while preserving the spatial Gaussianity. The efficiency of our
algorithm enables us to fine-tune modern video diffusion base models using
warped noise with minimal overhead, and provide a one-stop solution for a wide
range of user-friendly motion control: local object motion control, global
camera movement control, and motion transfer. The harmonization between
temporal coherence and spatial Gaussianity in our warped noise leads to
effective motion control while maintaining per-frame pixel quality. Extensive
experiments and user studies demonstrate the advantages of our method, making
it a robust and scalable approach for controlling motion in video diffusion
models. Video results are available on our webpage:
https://eyeline-research.github.io/Go-with-the-Flow. Source code and model
checkpoints are available on GitHub:
https://github.com/Eyeline-Research/Go-with-the-Flow.
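A heavily simplified sketch of the flow-driven noise idea (assumptions: integer flow and nearest-neighbor transport; the paper's real-time algorithm is more sophisticated and preserves spatial Gaussianity more carefully): transport the previous frame's per-pixel Gaussian noise along the optical flow, refilling uncovered pixels with fresh samples.

```python
import numpy as np

# Simplified, hypothetical version of flow-driven noise warping: each pixel of
# the previous frame's noise is moved along the (integer) optical flow, and
# pixels left uncovered are refilled with fresh Gaussian samples so the new
# frame's noise stays marginally N(0, 1) while being temporally correlated.
def warp_noise(noise, flow, rng):
    h, w = noise.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y, src_x = ys - flow[..., 1], xs - flow[..., 0]
    valid = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    out = rng.standard_normal((h, w))  # fresh noise for uncovered pixels
    out[valid] = noise[src_y[valid], src_x[valid]]
    return out

rng = np.random.default_rng(0)
n0 = rng.standard_normal((4, 4))
flow = np.ones((4, 4, 2), dtype=int)   # shift everything by (+1, +1)
n1 = warp_noise(n0, flow, rng)         # n1[1, 1] now equals n0[0, 0]
```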
|
2501.08332
|
MangaNinja: Line Art Colorization with Precise Reference Following
|
cs.CV
|
Derived from diffusion models, MangaNinja specializes in the task of
reference-guided line art colorization. We incorporate two thoughtful designs
to ensure precise character detail transcription, including a patch shuffling
module to facilitate correspondence learning between the reference color image
and the target line art, and a point-driven control scheme to enable
fine-grained color matching. Experiments on a self-collected benchmark
demonstrate the superiority of our model over current solutions in terms of
precise colorization. We further showcase the potential of the proposed
interactive point control in handling challenging cases such as cross-character
colorization and multi-reference harmonization, which are beyond the reach of
existing algorithms.
|
2501.08333
|
DAViD: Modeling Dynamic Affordance of 3D Objects using Pre-trained Video
Diffusion Models
|
cs.CV
|
Understanding the ability of humans to use objects is crucial for AI to
improve daily life. Existing studies for learning such ability focus on
human-object patterns (e.g., contact, spatial relation, orientation) in static
situations, and learning Human-Object Interaction (HOI) patterns over time
(i.e., movement of human and object) is relatively less explored. In this
paper, we introduce a novel type of affordance named Dynamic Affordance. For a
given input 3D object mesh, we learn dynamic affordance which models the
distribution of both (1) human motion and (2) human-guided object pose during
interactions. As a core idea, we present a method to learn the 3D dynamic
affordance from synthetically generated 2D videos, leveraging a pre-trained
video diffusion model. Specifically, we propose a pipeline that first generates
2D HOI videos from the 3D object and then lifts them into 3D to generate 4D HOI
samples. Once we generate diverse 4D HOI samples on various target objects, we
train our DAViD, where we present a method based on a Low-Rank Adaptation
(LoRA) module for the pre-trained human motion diffusion model (MDM) and an
object pose diffusion model with human pose guidance. Our motion diffusion
model is
extended for multi-object interactions, demonstrating the advantage of our
pipeline with LoRA for combining the concepts of object usage. Through
extensive experiments, we demonstrate that our DAViD outperforms the baselines in
generating human motion with HOIs.
|
2501.08334
|
High-throughput digital twin framework for predicting neurite
deterioration using MetaFormer attention
|
q-bio.NC cs.CV cs.LG
|
Neurodevelopmental disorders (NDDs) cover a variety of conditions, including
autism spectrum disorder, attention-deficit/hyperactivity disorder, and
epilepsy, which impair the central and peripheral nervous systems. Their high
comorbidity and complex etiologies present significant challenges for accurate
diagnosis and effective treatments. Conventional clinical and experimental
studies are time-intensive, burdening research progress considerably. This
paper introduces a high-throughput digital twin framework for modeling neurite
deteriorations associated with NDDs, integrating synthetic data generation,
experimental images, and machine learning (ML) models. The synthetic data
generator utilizes an isogeometric analysis (IGA)-based phase field model to
capture diverse neurite deterioration patterns such as neurite retraction,
atrophy, and fragmentation while mitigating the limitations of scarce
experimental data. The ML model utilizes MetaFormer-based gated spatiotemporal
attention architecture with deep temporal layers and provides fast predictions.
The framework effectively captures long-range temporal dependencies and
intricate morphological transformations with average errors of 1.9641% and
6.0339% for synthetic and experimental neurite deterioration, respectively.
Seamlessly integrating simulations, experiments, and ML, the digital twin
framework can guide researchers to make informed experimental decisions by
predicting potential experimental outcomes, significantly reducing costs and
saving valuable time. It can also advance our understanding of neurite
deterioration and provide a scalable solution for exploring complex
neurological mechanisms, contributing to the development of targeted
treatments.
|
2501.08335
|
MERaLiON-TextLLM: Cross-Lingual Understanding of Large Language Models
in Chinese, Indonesian, Malay, and Singlish
|
cs.CL cs.AI
|
Multilingual large language models (MLLMs) have shown impressive capabilities
across a variety of languages. However, their efficacy can differ greatly between
different language families, especially for those with limited linguistic
resources. This report presents MERaLiON-TextLLM, a series of open-source
language models specifically tailored to improve understanding and generation
in Chinese, Indonesian, Malay, and Singlish. The initial released model is
built on Llama-3-8B-Base and refined through a meticulously crafted process of
continued pre-training and weight merging. Our approach achieves performance
improvements across benchmarks in these languages, exceeding the capabilities
of the official Llama-3 models. We provide the model checkpoints as a resource
to support further research and development in cross-lingual language
understanding.
|
2501.08339
|
Operator Learning for Reconstructing Flow Fields from Sparse
Measurements: an Energy Transformer Approach
|
physics.flu-dyn cs.AI cs.CE
|
Machine learning methods have shown great success in various scientific
areas, including fluid mechanics. However, reconstruction problems, where full
velocity fields must be recovered from partial observations, remain
challenging. In this paper, we propose a novel operator learning framework for
solving reconstruction problems by using the Energy Transformer (ET), an
architecture inspired by associative memory models. We formulate reconstruction
as a mapping from incomplete observed data to full reconstructed fields. The
method is validated on three fluid mechanics examples using diverse types of
data: (1) unsteady 2D vortex street in flow past a cylinder using simulation
data; (2) high-speed under-expanded supersonic jet impingement using
Schlieren imaging; and (3) 3D turbulent jet flow using particle tracking. The
results demonstrate the ability of ET to accurately reconstruct complex flow
fields from highly incomplete data (90% missing), even for noisy experimental
measurements, with fast training and inference on a single GPU. This work
provides a promising new direction for tackling reconstruction problems in
fluid mechanics and other areas in mechanics, geophysics, weather prediction,
and beyond.
|
2501.08341
|
Dissecting a Small Artificial Neural Network
|
cond-mat.dis-nn cond-mat.stat-mech cs.LG physics.comp-ph
|
We investigate the loss landscape and backpropagation dynamics of convergence
for the simplest possible artificial neural network representing the logical
exclusive-OR (XOR) gate. Cross-sections of the loss landscape in the
nine-dimensional parameter space are found to exhibit distinct features, which
help understand why backpropagation efficiently achieves convergence toward
zero loss, whereas values of weights and biases keep drifting. Differences in
shapes of cross-sections obtained by nonrandomized and randomized batches are
discussed. In reference to statistical physics we introduce the microcanonical
entropy as a unique quantity that allows us to characterize the phase behavior of
the network. Learning in neural networks can thus be thought of as an annealing
process that experiences the analogue of phase transitions known from
thermodynamic systems. It also reveals how the loss landscape simplifies as
more hidden neurons are added to the network, eliminating entropic barriers
caused by finite-size effects.
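For illustration, the nine-parameter XOR network analyzed above can be written down directly; the hand-picked weights below are a hypothetical zero-loss solution with step activations, not the result of the paper's backpropagation experiments.

```python
import numpy as np

# Hypothetical nine-parameter XOR network (2x2 hidden weights + 2 hidden
# biases + 2 output weights + 1 output bias = 9 parameters), with hand-picked
# weights rather than trained ones, just to show the architecture under study.
def xor_net(x1, x2):
    step = lambda z: (z > 0).astype(float)
    # Hidden layer: an OR unit and a NAND unit.
    h = step(np.array([x1 + x2 - 0.5, -(x1 + x2) + 1.5]))
    # Output: AND of the two hidden units -> XOR.
    return float(step(h[0] + h[1] - 1.5))

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0.0, 1.0, 1.0, 0.0]
```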
|
2501.08347
|
SCOT: Self-Supervised Contrastive Pretraining For Zero-Shot
Compositional Retrieval
|
cs.CV cs.AI
|
Compositional image retrieval (CIR) is a multimodal learning task where a
model combines a query image with a user-provided text modification to retrieve
a target image. CIR finds applications in a variety of domains including
product retrieval (e-commerce) and web search. Existing methods primarily focus
on fully-supervised learning, wherein models are trained on datasets of labeled
triplets such as FashionIQ and CIRR. This poses two significant challenges: (i)
curating such triplet datasets is labor intensive; and (ii) models lack
generalization to unseen objects and domains. In this work, we propose SCOT
(Self-supervised COmpositional Training), a novel zero-shot compositional
pretraining strategy that combines existing large image-text pair datasets with
the generative capabilities of large language models to contrastively train an
embedding composition network. Specifically, we show that the text embedding
from a large-scale contrastively-pretrained vision-language model can be
utilized as proxy target supervision during compositional pretraining,
replacing the target image embedding. In zero-shot settings, this strategy
surpasses SOTA zero-shot compositional retrieval methods as well as many
fully-supervised methods on standard benchmarks such as FashionIQ and CIRR.
|
2501.08352
|
A Preliminary Survey of Semantic Descriptive Model for Images
|
cs.DL cs.CV
|
Considering the lack of a unified framework for image description and deep
cultural analysis at the subject level in the field of Ancient Chinese
Paintings (ACP), this study utilized the Beijing Palace Museum's ACP
collections to develop a semantic descriptive model (SDM) integrating
iconological theory with a new workflow for term extraction and mapping. Our
findings underscore the model's effectiveness. The SDM can be used to support
further art-related
knowledge organization and cultural exploration of ACPs.
|
2501.08361
|
Weight Averaging for Out-of-Distribution Generalization and Few-Shot
Domain Adaptation
|
cs.CV cs.LG
|
Empirical risk minimization (ERM) is not robust to changes in the
distribution of data. When the distribution of test data is different from that
of training data, the problem is known as out-of-distribution generalization.
Recently, two techniques have been developed for addressing out-of-distribution
generalization in computer vision: weight averaging (WA) and sharpness-aware
minimization (SAM). WA involves training multiple models with different
hyperparameters and then averaging the weights of these models, which can
significantly improve out-of-distribution generalization performance. SAM
optimizes a neural network to find minima in flat regions, which have been
proven to perform well under distribution shifts. While these techniques have
made great progress, there is still room for improvement and further
exploration. In this thesis, we propose increasing the model diversity in WA
explicitly by introducing gradient similarity as a loss regularizer to further
improve out-of-distribution generalization performance. We also propose
combining WA and SAM to solve the problem of few-shot domain adaptation. Our
extensive experiments on digits datasets (MNIST, SVHN, USPS, MNIST-M) and other
domain adaptation datasets (VLCS, PACS) show that combining WA and SAM leads to
improved out-of-distribution generalization performance and significantly
increases few-shot domain adaptation accuracy.
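A minimal sketch of the weight-averaging step (the parameter dictionaries and keys here are hypothetical; in practice WA averages the weights of fully trained networks):

```python
import numpy as np

# Minimal sketch of weight averaging (WA): given parameter dictionaries from
# several models trained with different hyperparameters, average each tensor
# elementwise. Keys and shapes below are hypothetical.
def average_weights(state_dicts):
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}

m1 = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
m2 = {"w": np.array([3.0, 4.0]), "b": np.array([2.0])}
avg = average_weights([m1, m2])
print(avg["w"])  # [2. 3.]
```

PyTorch state dicts would be averaged the same way, tensor by tensor.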
|
2501.08365
|
Towards Best Practices for Open Datasets for LLM Training
|
cs.CY cs.AI cs.CL cs.LG
|
Many AI companies are training their large language models (LLMs) on data
without the permission of the copyright owners. The permissibility of doing so
varies by jurisdiction: in countries like the EU and Japan, this is allowed
under certain restrictions, while in the United States, the legal landscape is
more ambiguous. Regardless of the legal status, concerns from creative
producers have led to several high-profile copyright lawsuits, and the threat
of litigation is commonly cited as a reason for the recent trend towards
minimizing the information shared about training datasets by both corporate and
public interest actors. This trend in limiting data information causes harm by
hindering transparency, accountability, and innovation in the broader ecosystem
by denying researchers, auditors, and impacted individuals access to the
information needed to understand AI models.
While this could be mitigated by training language models on open access and
public domain data, at the time of writing, there are no such models (trained
at a meaningful scale) due to the substantial technical and sociological
challenges in assembling the necessary corpus. These challenges include
incomplete and unreliable metadata, the cost and complexity of digitizing
physical records, and the diverse set of legal and technical skills required to
ensure relevance and responsibility in a quickly changing landscape. Building
towards a future where AI systems can be trained on openly licensed data that
is responsibly curated and governed requires collaboration across legal,
technical, and policy domains, along with investments in metadata standards,
digitization, and fostering a culture of openness.
|
2501.08370
|
3D Gaussian Splatting with Normal Information for Mesh Extraction and
Improved Rendering
|
cs.GR cs.CV
|
Differentiable 3D Gaussian splatting has emerged as an efficient and flexible
rendering technique for representing complex scenes from a collection of 2D
views and enabling high-quality real-time novel-view synthesis. However, its
reliance on photometric losses can lead to imprecisely reconstructed geometry
and extracted meshes, especially in regions with high curvature or fine detail.
We propose a novel regularization method using the gradients of a signed
distance function estimated from the Gaussians, to improve the quality of
rendering while also extracting a surface mesh. The regularizing normal
supervision facilitates better rendering and mesh reconstruction, which is
crucial for downstream applications in video generation, animation, AR-VR and
gaming. We demonstrate the effectiveness of our approach on datasets such as
Mip-NeRF360, Tanks and Temples, and Deep-Blending. Our method scores higher on
photorealism metrics compared to other mesh extracting rendering methods
without compromising mesh quality.
|
2501.08389
|
Toward Zero-Shot User Intent Recognition in Shared Autonomy
|
cs.RO cs.HC
|
A fundamental challenge of shared autonomy is to use high-DoF robots to
assist, rather than hinder, humans by first inferring user intent and then
empowering the user to achieve their intent. Although successful, prior methods
either rely heavily on a priori knowledge of all possible human intents or
require many demonstrations and interactions with the human to learn these
intents before being able to assist the user. We propose and study a zero-shot,
vision-only shared autonomy (VOSA) framework designed to allow robots to use
end-effector vision to estimate zero-shot human intents in conjunction with
blended control to help humans accomplish manipulation tasks with unknown and
dynamically changing object locations. To demonstrate the effectiveness of our
VOSA framework, we instantiate a simple version of VOSA on a Kinova Gen3
manipulator and evaluate our system by conducting a user study on three
tabletop manipulation tasks. The performance of VOSA matches that of an oracle
baseline model that receives privileged knowledge of possible human intents
while also requiring significantly less effort than unassisted teleoperation.
In more realistic settings, where the set of possible human intents is fully or
partially unknown, we demonstrate that VOSA requires less human effort and time
than baseline approaches while being preferred by a majority of the
participants. Our results demonstrate the efficacy and efficiency of using
off-the-shelf vision algorithms to enable flexible and beneficial shared
control of a robot manipulator. Code and videos available here:
https://sites.google.com/view/zeroshot-sharedautonomy/home.
|
2501.08393
|
Empathetic Conversational Agents: Utilizing Neural and Physiological
Signals for Enhanced Empathetic Interactions
|
cs.HC cs.LG
|
Conversational agents (CAs) are revolutionizing human-computer interaction by
evolving from text-based chatbots to empathetic digital humans (DHs) capable of
rich emotional expressions. This paper explores the integration of neural and
physiological signals into the perception module of CAs to enhance empathetic
interactions. By leveraging these cues, the study aims to detect emotions in
real-time and generate empathetic responses and expressions. We conducted a
user study where participants engaged in conversations with a DH about
emotional topics. The DH responded and displayed expressions by mirroring
detected emotions in real-time using neural and physiological cues. The results
indicate that participants experienced stronger emotions and greater engagement
during interactions with the Empathetic DH, demonstrating the effectiveness of
incorporating neural and physiological signals for real-time emotion
recognition. However, several challenges were identified, including recognition
accuracy, emotional transition speeds, individual personality effects, and
limitations in voice tone modulation. Addressing these challenges is crucial
for further refining Empathetic DHs and fostering meaningful connections
between humans and artificial entities. Overall, this research advances
human-agent interaction and highlights the potential of real-time neural and
physiological emotion recognition in creating empathetic DHs.
|
2501.08397
|
Predict Confidently, Predict Right: Abstention in Dynamic Graph Learning
|
cs.LG cs.SI
|
Many real-world systems can be modeled as dynamic graphs, where nodes and
edges evolve over time, requiring specialized models to capture their evolving
dynamics in risk-sensitive applications effectively. Temporal graph neural
networks (GNNs) are one such category of specialized models. For the first
time, our approach integrates a reject option strategy within the framework of
GNNs for continuous-time dynamic graphs. This allows the model to strategically
abstain from making predictions when the uncertainty is high and confidence is
low, thus minimizing the risk of critical misclassification and enhancing the
results and reliability. We propose a coverage-based abstention prediction
model to implement the reject option, maximizing prediction performance within
a specified coverage. It improves the prediction score for link prediction and
node classification tasks. Temporal GNNs often deal with extremely skewed
datasets in next-state prediction or node classification tasks. In the case of
class imbalance, our method can be further tuned to give higher weight to the
minority class. Exhaustive experiments are presented on four datasets for
dynamic link prediction and two datasets for dynamic node classification tasks.
This demonstrates the effectiveness of our approach in improving the
reliability and area under the curve (AUC)/ average precision (AP) scores for
predictions in dynamic graph scenarios. The results highlight our model's
ability to efficiently handle the trade-offs between prediction confidence and
coverage, making it a dependable solution for applications requiring high
precision in dynamic and uncertain environments.
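The coverage-based reject option can be sketched as a simple thresholding rule (illustrative only, not the paper's learned abstention model): keep predictions on the most confident fraction of samples that matches a target coverage, abstaining on the rest.

```python
import numpy as np

# Illustrative coverage-based reject option: predict only on the most
# confident `coverage` fraction of samples; abstain on the remainder.
def abstain_mask(confidences, coverage=0.8):
    """Boolean mask: True where the model predicts, False where it abstains."""
    threshold = np.quantile(confidences, 1.0 - coverage)
    return confidences >= threshold

conf = np.array([0.99, 0.55, 0.91, 0.62, 0.97, 0.51, 0.88, 0.70, 0.95, 0.60])
mask = abstain_mask(conf, coverage=0.5)
print(mask.sum())  # predicts on roughly half of the samples
```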
|
2501.08401
|
Navigating Gender Disparities in Communication Research Leadership:
Academic Recognition, Career Development, and Compensation
|
cs.DL cs.SI
|
This study examines gender disparities in communication research through
citation metrics, authorship patterns, team composition, and faculty salaries.
Using data from 62,359 papers across 121 communication journals, we find that
while female authors are increasingly represented, citation gaps persist, with
sole-authored papers by women receiving fewer citations than those by men,
especially in smaller teams. Team composition analysis reveals a tendency
toward gender homophily, with single-gender teams being more common. In top
U.S. communication journals, female authors face underrepresentation and
citation disparities favoring male authors. Salary analysis from leading U.S.
public universities shows that female faculty earn lower salaries at the
Assistant Professor level, though disparities lessen at higher ranks. These
findings highlight the need for greater efforts to promote gender equity
through inclusive collaboration, equitable citation practices, and fair
compensation.
|
2501.08402
|
Addressing Quality Challenges in Deep Learning: The Role of MLOps and
Domain Knowledge
|
cs.SE cs.AI
|
Deep learning (DL) systems present unique challenges in software engineering,
especially concerning quality attributes like correctness and resource
efficiency. While DL models excel in specific tasks, engineering DL systems is
still essential. The effort, cost, and potential diminishing returns of
continual improvements must be carefully evaluated, as software engineers often
face the critical decision of when to stop refining a system relative to its
quality attributes. This experience paper explores the role of MLOps practices
-- such as monitoring and experiment tracking -- in creating transparent and
reproducible experimentation environments that enable teams to assess and
justify the impact of design decisions on quality attributes. Furthermore, we
report on experiences addressing the quality challenges by embedding domain
knowledge into the design of a DL model and its integration within a larger
system. The findings offer actionable insights into the benefits of domain
knowledge and MLOps and the strategic consideration of when to limit further
optimizations in DL projects to maximize overall system quality and
reliability.
|
2501.08406
|
OptiChat: Bridging Optimization Models and Practitioners with Large
Language Models
|
cs.HC cs.CL cs.LG math.OC
|
Optimization models have been applied to solve a wide variety of
decision-making problems. These models are usually developed by optimization
experts but are used by practitioners without optimization expertise in various
application domains. As a result, practitioners often struggle to interact with
and draw useful conclusions from optimization models independently. To fill
this gap, we introduce OptiChat, a natural language dialogue system designed to
help practitioners interpret model formulation, diagnose infeasibility, analyze
sensitivity, retrieve information, evaluate modifications, and provide
counterfactual explanations. By augmenting large language models (LLMs) with
function calls and code generation tailored for optimization models, we
enable seamless interaction and minimize the risk of hallucinations in
OptiChat. We develop a new dataset to evaluate OptiChat's performance in
explaining optimization models. Experiments demonstrate that OptiChat
effectively bridges the gap between optimization models and practitioners,
delivering autonomous, accurate, and instant responses.
|
2501.08408
|
Leveraging 2D Masked Reconstruction for Domain Adaptation of 3D Pose
Estimation
|
cs.CV cs.LG
|
RGB-based 3D pose estimation methods have been successful with the
development of deep learning and the emergence of high-quality 3D pose
datasets. However, most existing methods do not operate well for testing images
whose distribution is far from that of training data. This problem might be
alleviated by involving diverse data during training; however, it is non-trivial
to collect such diverse data with corresponding labels (i.e., 3D pose). In this
paper, we introduce an
unsupervised domain adaptation framework for 3D pose estimation that utilizes
the unlabeled data in addition to labeled data via masked image modeling (MIM)
framework. Foreground-centric reconstruction and attention regularization are
further proposed to increase the effectiveness of unlabeled data usage.
Experiments are conducted on the various datasets in human and hand pose
estimation tasks, especially under the cross-domain scenario. We demonstrate
the effectiveness of our method by achieving state-of-the-art accuracy on all
datasets.
|
2501.08411
|
BiDepth Multimodal Neural Network: Bidirectional Depth Deep Learning
Architecture for Spatial-Temporal Prediction
|
cs.LG cs.AI cs.CV stat.AP
|
Accurate prediction of spatial-temporal (ST) information in dynamic systems,
such as urban mobility and weather patterns, is a crucial yet challenging
problem. The complexity stems from the intricate interplay between spatial
proximity and temporal relevance, where both long-term trends and short-term
fluctuations are present in convoluted patterns. Existing approaches, including
traditional statistical methods and conventional neural networks, may provide
inaccurate results due to the lack of an effective mechanism that
simultaneously incorporates information at variable temporal depths while
maintaining spatial context, resulting in a trade-off between comprehensive
long-term historical analysis and responsiveness to short-term new information.
To bridge this gap, this paper proposes the BiDepth Multimodal Neural Network
(BDMNN) with bidirectional depth modulation that enables a comprehensive
understanding of both long-term seasonality and short-term fluctuations,
adapting to the complex ST context. Case studies with real-world public data
demonstrate significant improvements in prediction accuracy, with a 12%
reduction in Mean Squared Error for urban traffic prediction and a 15%
improvement in rain precipitation forecasting compared to state-of-the-art
benchmarks, without demanding extra computational resources.
|
2501.08413
|
Ensemble of Large Language Models for Curated Labeling and Rating of
Free-text Data
|
cs.CL
|
Free-text responses are commonly collected in psychological studies,
providing rich qualitative insights that quantitative measures may not capture.
Labeling curated topics of research interest in free-text data by multiple
trained human coders is typically labor-intensive and time-consuming. Though
large language models (LLMs) excel in language processing, LLM-assisted
labeling techniques relying on closed-source LLMs cannot be directly applied to
free-text data without explicit consent for external use.
In this study, we propose a framework of assembling locally-deployable LLMs
to enhance the labeling of predetermined topics in free-text data under privacy
constraints. Analogous to annotation by multiple human raters, this framework
leverages the heterogeneity of diverse open-source LLMs. The ensemble approach
seeks a balance between the agreement and disagreement across LLMs, guided by a
relevancy scoring methodology that utilizes embedding distances between topic
descriptions and LLMs' reasoning. We evaluated the ensemble approach using both
publicly accessible Reddit data from eating disorder related forums, and
free-text responses from eating disorder patients, both complemented by human
annotations.
We found that: (1) there is heterogeneity in the performance of labeling
among same-sized LLMs, with some showing low sensitivity but high precision,
while others exhibit high sensitivity but low precision. (2) Compared to
individual LLMs, the ensemble of LLMs achieved the highest accuracy and optimal
precision-sensitivity trade-off in predicting human annotations. (3) The
relevancy scores across LLMs showed greater agreement than dichotomous labels,
indicating that the relevancy scoring method effectively mitigates the
heterogeneity in LLMs' labeling.
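The relevancy-scoring idea, based on embedding distances between topic descriptions and LLM reasoning, might be sketched as a cosine similarity (the function name and toy vectors are hypothetical):

```python
import numpy as np

# Hedged sketch of a relevancy score: cosine similarity between the embedding
# of a topic description and the embedding of an LLM's reasoning text.
def relevancy(topic_emb, reasoning_emb):
    a = np.asarray(topic_emb, dtype=float)
    b = np.asarray(reasoning_emb, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(relevancy([1.0, 0.0], [1.0, 0.0]))   # 1.0: reasoning is on-topic
print(relevancy([1.0, 0.0], [0.0, 1.0]))   # 0.0: orthogonal, off-topic
```

Averaging such continuous scores across an ensemble of LLMs is smoother than voting over dichotomous labels, which is consistent with the agreement result reported above.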
|
2501.08415
|
Cross-Modal Transferable Image-to-Video Attack on Video Quality Metrics
|
cs.CV cs.AI
|
Recent studies have revealed that modern image and video quality assessment
(IQA/VQA) metrics are vulnerable to adversarial attacks. An attacker can
manipulate a video through preprocessing to artificially increase its quality
score according to a certain metric, despite no actual improvement in visual
quality. Most of the attacks studied in the literature are white-box attacks,
while black-box attacks in the context of VQA have received less attention.
Moreover, some research indicates a lack of transferability of adversarial
examples generated for one model to another when applied to VQA. In this paper,
we propose a cross-modal attack method, IC2VQA, aimed at exploring the
vulnerabilities of modern VQA models. This approach is motivated by the
observation that the low-level feature spaces of images and videos are similar.
We investigate the transferability of adversarial perturbations across
different modalities; specifically, we analyze how adversarial perturbations
generated on a white-box IQA model with an additional CLIP module can
effectively target a VQA model. The addition of the CLIP module serves as a
valuable aid in increasing transferability, as the CLIP model is known for its
effective capture of low-level semantics. Extensive experiments demonstrate
that IC2VQA achieves a high success rate in attacking three black-box VQA
models. We compare our method with existing black-box attack strategies,
highlighting its superiority in terms of attack success within the same number
of iterations and levels of attack strength. We believe that the proposed
method will contribute to the deeper analysis of robust VQA metrics.
|
2501.08416
|
A Survey on Recent Advances in Self-Organizing Maps
|
cs.NE cs.AI
|
Self-organising maps are a powerful tool for cluster analysis in a wide range
of data contexts. Since the pioneering work of Kohonen, many variants and
improvements have been proposed. This review focuses on the last decade,
providing an overview of the main evolutions of the seminal SOM algorithm as
well as of the methodological developments achieved to better fit various
application contexts and users' requirements. We also
highlight a specific and important application field that is related to
commercial use of SOM, which involves specific data management.
|
2501.08418
|
CVaR-Based Variational Quantum Optimization for User Association in
Handoff-Aware Vehicular Networks
|
cs.LG cs.AI cs.NI cs.SY eess.SY
|
Efficient resource allocation is essential for optimizing various tasks in
wireless networks, which are usually formulated as generalized assignment
problems (GAP). GAP, as a generalized version of the linear sum assignment
problem, involves both equality and inequality constraints that add
computational challenges. In this work, we present a novel Conditional Value at
Risk (CVaR)-based Variational Quantum Eigensolver (VQE) framework to address
GAP in vehicular networks (VNets). Our approach leverages a hybrid
quantum-classical structure, integrating a tailored cost function that balances
both objective and constraint-specific penalties to improve solution quality
and stability. Using the CVaR-VQE model, we handle the GAP efficiently by
focusing optimization on the lower tail of the solution space, enhancing both
convergence and resilience on noisy intermediate-scale quantum (NISQ) devices.
We apply this framework to a user-association problem in VNets, where our
method achieves 23.5% improvement compared to the deep neural network (DNN)
approach.
|
2501.08420
|
Nonlinear Modeling of a PEM Fuel Cell System; a Practical Study with
Experimental Validation
|
eess.SY cs.SY
|
In this paper, a sixth-order nonlinear model is proposed for a proton exchange
membrane fuel cell (PEMFC) as a control-oriented electrochemical model. Its
validation is performed on a specific single-cell PEMFC with an effective
dimension of 5 cm × 5 cm. The model is described in nonlinear state-space form
with six state variables. Load current and DC voltage are considered as the
measurable disturbance and the control input, respectively. The model also
includes the fuel cell stack and its auxiliary components. In this study, a
nonlinear state-space representation is derived by arranging the nonlinear
equations and combining them with the auxiliary components model. The proposed
model can be successfully used to design nonlinear controller and observer
systems. The analyzed PEMFC system consists of air compressor motor
dynamic equations, air and fuel supply subsystems, a perfect air humidifier and
a fuel cell stack. An experimentally validated nonlinear model that reproduces
the most typical features of a laboratory PEMFC system is presented. This model
is derived from physical laws in the stack, including the system gas dynamics. The
objective of this paper is to introduce a fully analytical model which has been
fully validated on a fuel cell system and its auxiliary components. The
proposed method can be used as a general modeling guideline for
control-oriented purposes. Moreover, it can be successfully implemented in
composing a dynamic subsystem, like hydrogen subsystem, as part of the whole
nonlinear model.
|
2501.08421
|
SEAL: Speaker Error Correction using Acoustic-conditioned Large Language
Models
|
eess.AS cs.AI cs.CL cs.LG cs.SD
|
Speaker Diarization (SD) is a crucial component of modern end-to-end ASR
pipelines. Traditional SD systems, which are typically audio-based and operate
independently of ASR, often introduce speaker errors, particularly during
speaker transitions and overlapping speech. Recently, language models,
including fine-tuned large language models (LLMs), have been shown to be
effective as second-pass speaker error correctors by leveraging lexical
context in the
transcribed output. In this work, we introduce a novel acoustic conditioning
approach to provide more fine-grained information from the acoustic diarizer to
the LLM. We also show that a simpler constrained decoding strategy reduces LLM
hallucinations, while avoiding complicated post-processing. Our approach
significantly reduces the speaker error rates by 24-43% across Fisher,
Callhome, and RT03-CTS datasets, compared to the first-pass Acoustic SD.
|
2501.08423
|
A Constant Velocity Latent Dynamics Approach for Accelerating Simulation
of Stiff Nonlinear Systems
|
stat.ML cs.LG
|
Solving stiff ordinary differential equations (StODEs) requires sophisticated
numerical solvers, which are often computationally expensive. In particular,
StODEs often cannot be solved with traditional explicit time integration
schemes and one must resort to costly implicit methods to compute solutions. On
the other hand, state-of-the-art machine learning (ML) based methods such as
Neural ODE (NODE) poorly handle the timescale separation of various elements of
the solutions to StODEs and require expensive implicit solvers for integration
at inference time. In this work, we embark on a different path which involves
learning a latent dynamics for StODEs, in which one completely avoids numerical
integration. To that end, we consider a constant velocity latent dynamical
system whose solution is a sequence of straight lines. Given the initial
condition and parameters of the ODE, the encoder networks learn the slope (i.e.,
the constant velocity) and the initial condition for the latent dynamics. In
other words, the solution of the original dynamics is encoded into a sequence
of straight lines which can be decoded back to retrieve the actual solution as
and when required. Another key idea in our approach is a nonlinear
transformation of time, which allows for the "stretching/squeezing" of time in
the latent space, thereby allowing for varying levels of attention to different
temporal regions in the solution. Additionally, we provide a simple
universal-approximation-type proof showing that our approach can approximate
the solution of a stiff nonlinear system on a compact set to any degree of
accuracy $\epsilon$. We show that the dimension of the latent dynamical system
in our approach is independent of $\epsilon$. Numerical investigations on
prototype StODEs suggest that our method outperforms state-of-the-art machine
learning approaches for handling StODEs.
|
2501.08425
|
Is Stochastic Gradient Descent Effective? A PDE Perspective on Machine
Learning processes
|
cs.LG math.AP math.PR
|
In this paper we analyze the behaviour of the stochastic gradient descent
(SGD), a widely used method in supervised learning for optimizing neural
network weights via a minimization of non-convex loss functions. Since the
pioneering work of E, Li and Tai (2017), the underlying structure of such
processes can be understood via parabolic PDEs of Fokker-Planck type, which are
at the core of our analysis. Even though Fokker-Planck equations have a long
history and an extensive literature, almost nothing is known when the potential
is non-convex or when the diffusion matrix is degenerate, and this is the main
difficulty that we face in our analysis.
We identify two different regimes: in the initial phase of SGD, the loss
function drives the weights to concentrate around the nearest local minimum. We
refer to this phase as the drift regime and we provide quantitative estimates
on this concentration phenomenon. Next, we introduce the diffusion regime,
where stochastic fluctuations help the learning process to escape suboptimal
local minima. We analyze the Mean Exit Time (MET) and prove upper and lower
bounds of the MET. Finally, we address the asymptotic convergence of SGD for a
non-convex cost function and a degenerate diffusion matrix, a setting that
precludes the standard approaches and requires new techniques. For this purpose,
we exploit two different methods: duality and entropy methods.
We provide new results about the dynamics and effectiveness of SGD, offering
a deep connection between stochastic optimization and PDE theory, and some
answers and insights to basic questions in the Machine Learning processes: How
long does SGD take to escape from a bad minimum? Do neural network parameters
converge using SGD? How do parameters evolve in the first stage of training
with SGD?
|
2501.08426
|
Causal vs. Anticausal merging of predictors
|
cs.LG cs.AI stat.ME stat.ML
|
We study the differences arising from merging predictors in the causal and
anticausal directions using the same data. In particular we study the
asymmetries that arise in a simple model where we merge the predictors using
one binary variable as target and two continuous variables as predictors. We
use Causal Maximum Entropy (CMAXENT) as inductive bias to merge the predictors,
however, we expect similar differences to hold also when we use other merging
methods that take into account asymmetries between cause and effect. We show
that if we observe all bivariate distributions, the CMAXENT solution reduces to
a logistic regression in the causal direction and Linear Discriminant Analysis
(LDA) in the anticausal direction. Furthermore, we study how the decision
boundaries of these two solutions differ whenever we observe only some of the
bivariate distributions, with implications for Out-Of-Variable (OOV)
generalisation.
|
2501.08428
|
Physics-Informed Latent Neural Operator for Real-time Predictions of
Complex Physical Systems
|
cs.LG
|
Deep operator network (DeepONet) has shown great promise as a surrogate model
for systems governed by partial differential equations (PDEs), learning
mappings between infinite-dimensional function spaces with high accuracy.
However, achieving low generalization errors often requires highly
overparameterized networks, posing significant challenges for large-scale,
complex systems. To address these challenges, latent DeepONet was proposed,
introducing a two-step approach: first, a reduced-order model is used to learn
a low-dimensional latent space, followed by operator learning on this latent
space. While effective, this method is inherently data-driven, relying on large
datasets and making it difficult to incorporate governing physics into the
framework. Additionally, the decoupled nature of these steps prevents
end-to-end optimization and the ability to handle data scarcity. This work
introduces PI-Latent-NO, a physics-informed latent operator learning framework
that overcomes these limitations. Our architecture employs two coupled
DeepONets in an end-to-end training scheme: the first, termed Latent-DeepONet,
identifies and learns the low-dimensional latent space, while the second,
Reconstruction-DeepONet, maps the latent representations back to the original
physical space. By integrating governing physics directly into the training
process, our approach requires significantly fewer data samples while achieving
high accuracy. Furthermore, the framework is computationally and memory
efficient, exhibiting nearly constant scaling behavior on a single GPU and
demonstrating the potential for further efficiency gains with distributed
training. We validate the proposed method on high-dimensional parametric PDEs,
demonstrating its effectiveness as a proof of concept and its potential
scalability for large-scale systems.
|
2501.08429
|
Modeling Discrimination with Causal Abstraction
|
cs.CY cs.AI
|
A person is directly racially discriminated against only if her race caused
her worse treatment. This implies that race is an attribute sufficiently
separable from other attributes to isolate its causal role. But race is
embedded in a nexus of social factors that resist isolated treatment. If race
is socially constructed, in what sense can it cause worse treatment? Some
propose that the perception of race, rather than race itself, causes worse
treatment. Others suggest that since causal models require modularity, i.e. the
ability to isolate causal effects, attempts to causally model discrimination
are misguided.
This paper addresses the problem differently. We introduce a framework for
reasoning about discrimination, in which race is a high-level abstraction of
lower-level features. In this framework, race can be modeled as itself causing
worse treatment. Modularity is ensured by allowing assumptions about social
construction to be precisely and explicitly stated, via an alignment between
race and its constituents. Such assumptions can then be subjected to normative
and empirical challenges, which lead to different views of when discrimination
occurs. By distinguishing constitutive and causal relations, the abstraction
framework pinpoints disagreements in the current literature on modeling
discrimination, while preserving a precise causal account of discrimination.
|
2501.08430
|
Physics-informed neural networks for phase-resolved data assimilation
and prediction of nonlinear ocean waves
|
cs.LG physics.ao-ph physics.flu-dyn
|
The assimilation and prediction of phase-resolved surface gravity waves are
critical challenges in ocean science and engineering. Potential flow theory
(PFT) has been widely employed to develop wave models and numerical techniques
for wave prediction. However, traditional wave prediction methods are often
limited. For example, most simplified wave models have a limited ability to
capture strong wave nonlinearity, while fully nonlinear PFT solvers often fail
to meet the speed requirements of engineering applications. This computational
inefficiency also hinders the development of effective data assimilation
techniques, which are required to reconstruct spatial wave information from
sparse measurements to initialize the wave prediction. To address these
challenges, we propose a novel solver method that leverages physics-informed
neural networks (PINNs) that parameterize PFT solutions as neural networks.
This provides a computationally inexpensive way to assimilate and predict wave
data. The proposed PINN framework is validated through comparisons with
analytical linear PFT solutions and experimental data collected in a laboratory
wave flume. The results demonstrate that our approach accurately captures and
predicts irregular, nonlinear, and dispersive wave surface dynamics. Moreover,
the PINN can infer the fully nonlinear velocity potential throughout the entire
fluid volume solely from surface elevation measurements, enabling the
calculation of fluid velocities that are difficult to measure experimentally.
|
2501.08435
|
Secure Composition of Quantum Key Distribution and Symmetric Key
Encryption
|
quant-ph cs.CR cs.IT math.IT
|
Quantum key distribution (QKD) allows Alice and Bob to share a secret key
over an insecure channel with proven information-theoretic security against an
adversary whose strategy is bounded only by the laws of physics.
Composability-based security proofs of QKD ensure that using the established
key with a one-time-pad encryption scheme provides information theoretic
secrecy for the message. In this paper, we consider the problem of using the
QKD established key with a secure symmetric key-based encryption algorithm and
use an approach based on hybrid encryption to provide a proof of security for
the composition.
Hybrid encryption was first proposed as a public key cryptographic algorithm
with proven security for messages of unrestricted length. We use an extension
of this framework to the correlated randomness setting (Sharifian et al. in ISIT
2021) to propose a quantum-enabled Key Encapsulation Mechanism (qKEM) and
quantum-enabled hybrid encryption (qHE), and prove a composition theorem for
the security of the qHE. We construct a qKEM with proven security using an
existing QKD (Portmann et al. in Rev. of Mod. Physics 2022). Using this qKEM
with a secure Data Encapsulation Mechanism (DEM), that can be constructed using
a one-time symmetric key encryption scheme, results in an efficient encryption
system for unrestricted-length messages with proven security against an
adversary with access to efficient computations on a quantum computer (i.e.,
post-quantum secure encryption without any computational assumptions).
|
2501.08440
|
FARE: A Deep Learning-Based Framework for Radar-based Face Recognition
and Out-of-distribution Detection
|
cs.CV cs.AI cs.LG eess.SP
|
In this work, we propose a novel pipeline for face recognition and
out-of-distribution (OOD) detection using short-range FMCW radar. The proposed
system utilizes Range-Doppler and micro Range-Doppler Images. The architecture
features a primary path (PP) responsible for the classification of
in-distribution (ID) faces, complemented by intermediate paths (IPs) dedicated
to OOD detection. The network is trained in two stages: first, the PP is
trained using triplet loss to optimize ID face classification. In the second
stage, the PP is frozen, and the IPs-comprising simple linear autoencoder
networks-are trained specifically for OOD detection. Using our dataset
generated with a 60 GHz FMCW radar, our method achieves an ID classification
accuracy of 99.30% and an OOD detection AUROC of 96.91%.
|
2501.08441
|
Religious Bias Landscape in Language and Text-to-Image Models: Analysis,
Detection, and Debiasing Strategies
|
cs.CL
|
Note: This paper includes examples of potentially offensive content related
to religious bias, presented solely for academic purposes. The widespread
adoption of language models highlights the need for critical examinations of
their inherent biases, particularly concerning religion. This study
systematically investigates religious bias in both language models and
text-to-image generation models, analyzing both open-source and closed-source
systems. We construct approximately 400 unique, naturally occurring prompts to
probe language models for religious bias across diverse tasks, including mask
filling, prompt completion, and image generation. Our experiments reveal
concerning instances of underlying stereotypes and biases associated
disproportionately with certain religions. Additionally, we explore
cross-domain biases, examining how religious bias intersects with demographic
factors such as gender, age, and nationality. This study further evaluates the
effectiveness of targeted debiasing techniques by employing corrective prompts
designed to mitigate the identified biases. Our findings demonstrate that
language models continue to exhibit significant biases in both text and image
generation tasks, emphasizing the urgent need to develop fairer language models
to achieve global acceptability.
|
2501.08442
|
Jochre 3 and the Yiddish OCR corpus
|
cs.CL
|
We describe the construction of a publicly available Yiddish OCR Corpus, and
describe and evaluate the open source OCR tool suite Jochre 3, including an
Alto editor for corpus annotation, OCR software for Alto OCR layer generation,
and a customizable OCR search engine. The current version of the Yiddish OCR
corpus contains 658 pages, 186K tokens and 840K glyphs. The Jochre 3 OCR tool
uses various fine-tuned YOLOv8 models for top-down page layout analysis, and a
custom CNN network for glyph recognition. It attains a CER of 1.5% on our test
corpus, far out-performing all other existing public models for Yiddish. We
analyzed the full 660M-word Yiddish Book Center collection with Jochre 3 OCR,
and the new OCR text is searchable through the Yiddish Book Center OCR search
engine.
|
2501.08443
|
Instruction-Guided Fusion of Multi-Layer Visual Features in Large
Vision-Language Models
|
cs.CV cs.LG
|
Large Vision-Language Models (LVLMs) have achieved remarkable success in a
wide range of multimodal tasks by integrating pre-trained vision encoders and
large language models. However, current LVLMs primarily rely on visual features
extracted from the final layers of the vision encoder, overlooking the
complementary information available in shallower layers. While recent
approaches have explored the use of multilayer visual features in LVLMs, they
tend to be task-agnostic and fail to examine the dependencies of hierarchical
visual features on specific tasks. To address these gaps, we systematically
investigate the contributions of visual features from different encoder layers
using 18 benchmarks spanning 6 task categories. Our findings reveal that
multilayer features provide complementary strengths with varying task
dependencies, and uniform fusion leads to suboptimal performance. Building on
these insights, we propose the instruction-guided vision aggregator, a module
that dynamically integrates multi-layer visual features based on textual
instructions, without increasing the number of visual tokens. Extensive
evaluations demonstrate the superior performance of our method. Additionally,
an in-depth analysis of the aggregator's behavior highlights the dominance of
mid-to-high-level features in semantic-rich tasks and the critical role of
low-level features in fine-grained perception.
|
2501.08446
|
Poseidon: A ViT-based Architecture for Multi-Frame Pose Estimation with
Adaptive Frame Weighting and Multi-Scale Feature Fusion
|
cs.CV
|
Human pose estimation, a vital task in computer vision, involves detecting
and localising human joints in images and videos. While single-frame pose
estimation has seen significant progress, it often fails to capture the
temporal dynamics for understanding complex, continuous movements. We propose
Poseidon, a novel multi-frame pose estimation architecture that extends the
ViTPose model by integrating temporal information for enhanced accuracy and
robustness to address these limitations. Poseidon introduces key innovations:
(1) an Adaptive Frame Weighting (AFW) mechanism that dynamically prioritises
frames based on their relevance, ensuring that the model focuses on the most
informative data; (2) a Multi-Scale Feature Fusion (MSFF) module that
aggregates features from different backbone layers to capture both fine-grained
details and high-level semantics; and (3) a Cross-Attention module for
effective information exchange between central and contextual frames, enhancing
the model's temporal coherence. The proposed architecture improves performance
in complex video scenarios and offers scalability and computational efficiency
suitable for real-world applications. Our approach achieves state-of-the-art
performance on the PoseTrack21 and PoseTrack18 datasets, achieving mAP scores
of 88.3 and 87.8, respectively, outperforming existing methods.
|
2501.08450
|
Active Sampling for Node Attribute Completion on Graphs
|
cs.AI cs.SI
|
Node attribute, a type of crucial information for graph analysis, may be
partially or completely missing for certain nodes in real world applications.
Restoring the missing attributes is expected to benefit downstream graph
learning. Few attempts have been made at node attribute completion; a novel
framework called Structure-attribute Transformer (SAT) was recently proposed,
using a decoupled scheme to leverage structures and attributes. However, SAT
ignores the differences among nodes in how they contribute to the learning
schedule, and finding a practical way to model the varying importance of nodes
with observed attributes is challenging. This paper proposes a novel AcTive
Sampling algorithm (ATS) to
restore missing node attributes. The representativeness and uncertainty of each
node's information are first measured based on graph structure, representation
similarity and learning bias. To select nodes as train samples in the next
optimization step, a weighting scheme controlled by Beta distribution is then
introduced to linearly combine the two properties. Extensive experiments on
four public benchmark datasets and two downstream tasks have shown the
superiority of ATS in node attribute completion.
|
2501.08453
|
Vchitect-2.0: Parallel Transformer for Scaling Up Video Diffusion Models
|
cs.CV cs.LG
|
We present Vchitect-2.0, a parallel transformer architecture designed to
scale up video diffusion models for large-scale text-to-video generation. The
overall Vchitect-2.0 system has several key designs. (1) By introducing a novel
Multimodal Diffusion Block, our approach achieves consistent alignment between
text descriptions and generated video frames, while maintaining temporal
coherence across sequences. (2) To overcome memory and computational
bottlenecks, we propose a Memory-efficient Training framework that incorporates
hybrid parallelism and other memory reduction techniques, enabling efficient
training of long video sequences on distributed systems. (3) Additionally, our
enhanced data processing pipeline ensures the creation of Vchitect T2V
DataVerse, a high-quality million-scale training dataset through rigorous
annotation and aesthetic evaluation. Extensive benchmarking demonstrates that
Vchitect-2.0 outperforms existing methods in video quality, training
efficiency, and scalability, serving as a suitable base for high-fidelity video
generation.
|
2501.08454
|
Tag&Tab: Pretraining Data Detection in Large Language Models Using
Keyword-Based Membership Inference Attack
|
cs.CR cs.CL
|
Large language models (LLMs) have become essential digital task assistance
tools. Their training relies heavily on the collection of vast amounts of data,
which may include copyright-protected or sensitive information. Recent studies
on the detection of pretraining data in LLMs have primarily focused on
sentence-level or paragraph-level membership inference attacks (MIAs), usually
involving probability analysis of the target model prediction tokens. However,
the proposed methods often demonstrate poor performance, specifically in terms
of accuracy, failing to account for the semantic importance of textual content
and word significance. To address these shortcomings, we propose Tag&Tab, a
novel approach for detecting data that has been used as part of the LLM
pretraining. Our method leverages advanced natural language processing (NLP)
techniques to tag keywords in the input text - a process we term Tagging. Then,
the LLM is used to obtain the probabilities of these keywords and calculate
their average log-likelihood to determine input text membership, a process we
refer to as Tabbing. Our experiments on three benchmark datasets (BookMIA,
MIMIR, and the Pile) and several open-source LLMs of varying sizes demonstrate
an average increase in the AUC scores ranging from 4.1% to 12.1% over
state-of-the-art methods. Tag&Tab not only sets a new standard for data leakage
detection in LLMs, but its outstanding performance is a testament to the
importance of words in MIAs on LLMs.
|
2501.08455
|
Keras Sig: Efficient Path Signature Computation on GPU in Keras 3
|
cs.LG cs.DC
|
In this paper we introduce Keras Sig, a high-performance Pythonic library
designed to compute path signatures for deep learning applications. Entirely
built in Keras 3, Keras Sig leverages seamless integration with the most
widely used deep learning backends such as PyTorch, JAX, and TensorFlow.
Inspired by Kidger and Lyons (2021), we propose a novel approach that reshapes
signature calculations to leverage GPU parallelism. This adjustment allows us
to reduce training time by 55% and achieve 5- to 10-fold improvements in
direct signature computation compared to existing methods, while maintaining similar
CPU performance. Relying on high-level tensor operations instead of low-level
C++ code, Keras Sig significantly reduces the versioning and compatibility
issues commonly encountered in deep learning libraries, while delivering
superior or comparable performance across various hardware configurations. We
demonstrate through extensive benchmarking that our approach scales efficiently
with the length of input sequences and maintains competitive performance across
various signature parameters, though bounded by memory constraints for very
large signature dimensions.
|
2501.08457
|
Large Language Models For Text Classification: Case Study And
Comprehensive Review
|
cs.CL cs.LG
|
Unlocking the potential of Large Language Models (LLMs) in data
classification represents a promising frontier in natural language processing.
In this work, we evaluate the performance of different LLMs in comparison with
state-of-the-art deep-learning and machine-learning models, in two different
classification scenarios: (i) the classification of employees' working
locations based on job reviews posted online (multiclass classification), and
(ii) the classification of news articles as fake or not (binary
classification). Our
analysis encompasses a diverse range of language models differentiating in
size, quantization, and architecture. We explore the impact of alternative
prompting techniques and evaluate the models based on the weighted F1-score.
Also, we examine the trade-off between performance (F1-score) and time
(inference response time) for each language model to provide a more nuanced
understanding of each model's practical applicability. Our work reveals
significant variations in model responses based on the prompting strategies. We
find that LLMs, particularly Llama3 and GPT-4, can outperform traditional
methods in complex classification tasks, such as multiclass classification,
though at the cost of longer inference times. In contrast, simpler ML models
offer better performance-to-time trade-offs in simpler binary classification
tasks.
|
2501.08458
|
RWKV-UNet: Improving UNet with Long-Range Cooperation for Effective
Medical Image Segmentation
|
eess.IV cs.CV
|
In recent years, there have been significant advancements in deep learning
for medical image analysis, especially with convolutional neural networks
(CNNs) and transformer models. However, CNNs face limitations in capturing
long-range dependencies while transformers suffer high computational
complexities. To address this, we propose RWKV-UNet, a novel model that
integrates the RWKV (Receptance Weighted Key Value) structure into the U-Net
architecture. This integration enhances the model's ability to capture
long-range dependencies and improve contextual understanding, which is crucial
for accurate medical image segmentation. We build a strong encoder with
developed inverted residual RWKV (IR-RWKV) blocks combining CNNs and RWKVs. We
also propose a Cross-Channel Mix (CCM) module to improve skip connections with
multi-scale feature fusion, achieving global channel information integration.
Experiments on benchmark datasets, including Synapse, ACDC, BUSI, CVC-ClinicDB,
CVC-ColonDB, Kvasir-SEG, ISIC 2017 and GLAS show that RWKV-UNet achieves
state-of-the-art performance on various types of medical image segmentation.
Additionally, smaller variants, RWKV-UNet-S and RWKV-UNet-T, balance accuracy
and computational efficiency, making them suitable for broader clinical
applications.
|
2501.08459
|
Head Motion Degrades Machine Learning Classification of Alzheimer's
Disease from Positron Emission Tomography
|
eess.IV cs.LG
|
Brain positron emission tomography (PET) imaging is broadly used in research
and clinical routines to study, diagnose, and stage Alzheimer's disease (AD).
However, its potential cannot be fully exploited yet due to the lack of
portable motion correction solutions, especially in clinical settings. Head
motion during data acquisition has indeed been shown to degrade image quality
and to induce tracer uptake quantification errors. In this study, we demonstrate
that it also biases machine learning-based AD classification. We start by
proposing a binary classification algorithm solely based on PET images. We find
that it reaches a high accuracy in classifying motion corrected images into
cognitive normal or AD. We demonstrate that the classification accuracy
substantially decreases when images lack motion correction, thereby limiting
the algorithm's effectiveness and biasing image interpretation. We validate
these findings in cohorts of 128 $^{11}$C-UCB-J and 173 $^{18}$F-FDG scans, two
tracers highly relevant to the study of AD. Classification accuracies decreased
by 10% and 5% on 20 $^{18}$F-FDG and 20 $^{11}$C-UCB-J testing cases,
respectively. Our findings underscore the critical need for efficient motion
correction methods to make the most of the diagnostic capabilities of PET-based
machine learning.
|
2501.08460
|
Towards Zero-Shot & Explainable Video Description by Reasoning over
Graphs of Events in Space and Time
|
cs.CV cs.AI cs.CL
|
In the current era of Machine Learning, Transformers have become the de facto
approach across a variety of domains, such as computer vision and natural
language processing. Transformer-based solutions are the backbone of current
state-of-the-art methods for language generation, image and video
classification, segmentation, action and object recognition, among many others.
Interestingly enough, while these state-of-the-art methods produce impressive
results in their respective domains, the problem of understanding the
relationship between vision and language is still beyond our reach. In this
work, we propose a common ground between vision and language based on events in
space and time in an explainable and programmatic way, to connect
learning-based vision and language state of the art models and provide a
solution to the long-standing problem of describing videos in natural language.
We validate that our algorithmic approach is able to generate coherent, rich
and relevant textual descriptions on videos collected from a variety of
datasets, using both standard metrics (e.g. Bleu, ROUGE) and the modern
LLM-as-a-Jury approach.
|
2501.08464
|
Time series forecasting for multidimensional telemetry data using GAN
and BiLSTM in a Digital Twin
|
cs.LG eess.SP
|
The research related to digital twins has been increasing in recent years.
Besides mirroring the physical world into the digital one, there is a need to
provide services based on the data collected and transferred to the virtual
world. One of these services is forecasting the future behavior of the physical
part, which could enable applications such as preventing harmful events or
designing improvements for better performance. One strategy for predicting the
operation of any system is the use of time series models such as ARIMA or
LSTM, and improvements have been implemented using these algorithms. Recently,
deep learning techniques based on generative models such as Generative
Adversarial Networks (GANs) have been proposed to create time series, and the
use of LSTMs has gained more relevance in time series forecasting, but both
have limitations that restrict the forecasting results. Another issue found in
the literature is the challenge of handling multivariate
environments/applications in time series generation. Therefore, new methods
need to be studied in order to fill these gaps and, consequently, provide
better resources for creating useful digital twins. In this proposal, we study
the integration of a BiLSTM layer with a time series obtained by a GAN in
order to improve the forecasting accuracy of all the features provided by the
dataset and, consequently, improve behavior prediction.
|
2501.08465
|
Predicting Performance of Object Detection Models in Electron Microscopy
Using Random Forests
|
cs.CV cond-mat.mtrl-sci
|
Quantifying prediction uncertainty when applying object detection models to
new, unlabeled datasets is critical in applied machine learning. This study
introduces an approach to estimate the performance of deep learning-based
object detection models for quantifying defects in transmission electron
microscopy (TEM) images, focusing on detecting irradiation-induced cavities in
TEM images of metal alloys. We developed a random forest regression model that
predicts the object detection F1 score, a statistical metric used to evaluate
the ability to accurately locate and classify objects of interest. The random
forest model uses features extracted from the predictions of the object
detection model whose uncertainty is being quantified, enabling fast prediction
on new, unlabeled images. The mean absolute error (MAE) for predicting F1 of
the trained model on test data is 0.09, and the $R^2$ score is 0.77, indicating
there is a significant correlation between the random forest regression model
predicted and true defect detection F1 scores. The approach is shown to be
robust across three distinct TEM image datasets with varying imaging and
material domains. Our approach enables users to estimate the reliability of a
defect detection and segmentation model predictions and assess the
applicability of the model to their specific datasets, providing valuable
information about possible domain shifts and whether the model needs to be
fine-tuned or trained on additional data to be maximally effective for the
desired use case.
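The core idea above, regressing a detector's F1 score from features of the detector's own predictions, can be sketched roughly as follows. The features and targets here are entirely synthetic stand-ins (mean confidence, box count, etc. are hypothetical choices, not the paper's actual feature set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical features extracted from a detector's predictions on each image
# (e.g., mean confidence, box count, mean box area), plus a synthetic "true"
# F1 score measured on labeled data -- stand-ins for the paper's features.
X = rng.uniform(size=(200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.02, 200)

X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

# Regress F1 from prediction-derived features; at deployment this gives a
# fast reliability estimate on new, unlabeled images (no ground truth needed).
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
mae = mean_absolute_error(y_test, rf.predict(X_test))
print(f"MAE of predicted F1 on held-out images: {mae:.3f}")
```

On real data the features would come from the object detection model's outputs, and the targets from a labeled calibration set.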
|
2501.08466
|
A Short-Term Predict-Then-Cluster Framework for Meal Delivery Services
|
cs.CY cs.AI
|
Micro-delivery services offer promising solutions for on-demand city
logistics, but their success relies on efficient real-time delivery operations
and fleet management. On-demand meal delivery platforms seek to optimize
real-time operations based on anticipatory insights into citywide demand
distributions. To address these needs, this study proposes a short-term
predict-then-cluster framework for on-demand meal delivery services. The
framework utilizes ensemble-learning methods for point and distributional
forecasting with multivariate features, including lagged-dependent inputs to
capture demand dynamics. We introduce Constrained K-Means Clustering (CKMC) and
Contiguity Constrained Hierarchical Clustering with Iterative Constraint
Enforcement (CCHC-ICE) to generate dynamic clusters based on predicted demand
and geographical proximity, tailored to user-defined operational constraints.
Evaluations of European and Taiwanese case studies demonstrate that the
proposed methods outperform traditional time series approaches in both accuracy
and computational efficiency. Clustering results demonstrate that the
incorporation of distributional predictions effectively addresses demand
uncertainties, improving the quality of operational insights. Additionally, a
simulation study demonstrates the practical value of short-term demand
predictions for proactive strategies, such as idle fleet rebalancing,
significantly enhancing delivery efficiency. By addressing demand uncertainties
and operational constraints, our predict-then-cluster framework provides
actionable insights for optimizing real-time operations. The approach is
adaptable to other on-demand platform-based city logistics and passenger
mobility services, promoting sustainable and efficient urban operations.
|
2501.08468
|
Selective Attention Merging for low resource tasks: A case study of
Child ASR
|
cs.CL cs.SD eess.AS
|
While Speech Foundation Models (SFMs) excel in various speech tasks, their
performance for low-resource tasks such as child Automatic Speech Recognition
(ASR) is hampered by limited pretraining data. To address this, we explore
different model merging techniques to leverage knowledge from models trained on
larger, more diverse speech corpora. This paper also introduces Selective
Attention (SA) Merge, a novel method that selectively merges task vectors from
attention matrices to enhance SFM performance on low-resource tasks.
Experiments on the MyST database show significant reductions in relative word
error rate of up to 14%, outperforming existing model merging and data
augmentation techniques. By combining data augmentation techniques with SA
Merge, we achieve a new state-of-the-art WER of 8.69 on the MyST database for
the Whisper-small model, highlighting the potential of SA Merge for improving
low-resource ASR.
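A minimal sketch of the task-vector idea behind selective attention merging follows, with a hypothetical two-matrix "model"; the paper's actual selection rule and scaling are not reproduced here:

```python
import numpy as np

# Hypothetical weight dicts; in practice these would be the state dicts of a
# base SFM and a donor model trained on a larger, more diverse speech corpus.
base = {"attn.q": np.ones((4, 4)), "mlp.fc": np.ones((4, 4))}
donor = {"attn.q": 2 * np.ones((4, 4)), "mlp.fc": 3 * np.ones((4, 4))}

def selective_attention_merge(base, donor, alpha=0.5):
    """Add scaled task vectors (donor - base), but only for attention matrices.

    A sketch of the idea behind SA Merge: non-attention weights are left
    untouched, so only attention knowledge is transferred.
    """
    merged = {}
    for name, w in base.items():
        if "attn" in name:                      # merge attention weights only
            merged[name] = w + alpha * (donor[name] - w)
        else:                                   # keep other weights as-is
            merged[name] = w.copy()
    return merged

m = selective_attention_merge(base, donor)
print(m["attn.q"][0, 0], m["mlp.fc"][0, 0])  # attention interpolated, MLP untouched
```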
|
2501.08469
|
Electrostatic Clutches Enable High-Force Mechanical Multiplexing:
Demonstrating Single-Motor Full-Actuation of a 4-DoF Hand
|
cs.RO cs.SY eess.SY
|
This paper introduces a novel mechanical multiplexing system powered by
electrostatic capstan clutches, enabling high-force, single-motor control of
multiple degrees of freedom (DoF). The system is capable of both bidirectional
single-input single-output time-division and single-input multiple-output
multiplexing to actuate a commercial 4-DoF robotic hand with a single motor.
Our mechanical multiplexer is also capable of powerless position holding owing
to its use of a leadscrew nut acting as the output. Experimental results
demonstrate the effectiveness of this approach, achieving individual and
simultaneous actuation. This innovation offers a scalable solution for high-DoF
robotic systems, providing a path to efficient actuation in robotic platforms.
|
2501.08470
|
Detecting Contextual Anomalies by Discovering Consistent Spatial Regions
|
cs.CV cs.AI
|
We describe a method for modeling spatial context to enable video anomaly
detection. The main idea is to discover regions that share similar object-level
activities by clustering joint object attributes using Gaussian mixture models.
We demonstrate that this straightforward approach, using orders of magnitude
fewer parameters than competing models, achieves state-of-the-art performance
in the challenging spatial-context-dependent Street Scene dataset. As a side
benefit, the high-resolution discovered regions learned by the model also
provide explainable normalcy maps for human operators without the need for any
pre-trained segmentation model.
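The clustering-and-scoring step described above can be sketched with a Gaussian mixture over joint object attributes. The attribute layout and cluster count below are hypothetical illustrations, not the paper's configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical object-level attributes per detection: [x, y, width, height].
# Two "regions" of consistent normal activity (e.g., sidewalk vs. road).
normal = np.vstack([
    rng.normal([0.2, 0.8, 0.05, 0.10], 0.02, size=(500, 4)),
    rng.normal([0.7, 0.3, 0.15, 0.10], 0.02, size=(500, 4)),
])

# Fit a Gaussian mixture over joint attributes; each component acts as a
# discovered spatial region with its own typical object behavior.
gmm = GaussianMixture(n_components=2, random_state=0).fit(normal)

# Score new detections by log-likelihood under the mixture: an attribute
# pattern that is unusual for its region gets a much lower score.
normal_score = gmm.score_samples(np.array([[0.2, 0.8, 0.05, 0.10]]))[0]
odd_score = gmm.score_samples(np.array([[0.2, 0.8, 0.50, 0.50]]))[0]
print(normal_score > odd_score)  # True: the unusual detection scores far lower
```

Thresholding the log-likelihood then yields a contextual anomaly decision per detection.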
|
2501.08471
|
Benchmarking Classical, Deep, and Generative Models for Human Activity
Recognition
|
cs.CV cs.AI
|
Human Activity Recognition (HAR) has gained significant importance with the
growing use of sensor-equipped devices and large datasets. This paper evaluates
the performance of three categories of models : classical machine learning,
deep learning architectures, and Restricted Boltzmann Machines (RBMs) using
five key benchmark datasets of HAR (UCI-HAR, OPPORTUNITY, PAMAP2, WISDM, and
Berkeley MHAD). We assess various models, including Decision Trees, Random
Forests, Convolutional Neural Networks (CNN), and Deep Belief Networks (DBNs),
using metrics such as accuracy, precision, recall, and F1-score for a
comprehensive comparison. The results show that CNN models offer superior
performance across all datasets, especially on the Berkeley MHAD. Classical
models like Random Forest do well on smaller datasets but face challenges with
larger, more complex data. RBM-based models also show notable potential,
particularly for feature learning. This paper offers a detailed comparison to
help researchers choose the most suitable model for HAR tasks.
|
2501.08472
|
Energy Storage Arbitrage Under Price Uncertainty: Market Risks and
Opportunities
|
math.OC cs.SY eess.SY
|
We investigate the profitability and risk of energy storage arbitrage in
electricity markets under price uncertainty, exploring both robust and
chance-constrained optimization approaches. We analyze various uncertainty
representations, including polyhedral, ellipsoidal uncertainty sets and
probabilistic approximations, to model price fluctuations and construct
efficient frontiers that highlight the tradeoff between risk and profit. Using
historical electricity price data, we quantify the impact of uncertainty on
arbitrage strategies and compare their performance under distinct market
conditions. The results reveal that arbitrage strategies under uncertainties
can effectively secure expected profits, and robust strategies perform better
in risk management across varying levels of conservativeness, especially under
highly volatile market conditions. This work provides insights into storage
arbitrage strategy selection for market participants with differing risk
preferences, emphasizing the adaptability of efficient frontiers to the
electricity market.
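A deterministic version of the underlying arbitrage problem (no price uncertainty, unit efficiency; the prices and storage parameters below are made up for illustration) can be posed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([20.0, 15.0, 10.0, 40.0, 50.0, 30.0])  # $/MWh, hypothetical
T, power, energy = len(prices), 1.0, 2.0                  # MW limit, MWh capacity

# Decision variables x = [charge_1..T, discharge_1..T]; profit = p.(d - c),
# so the minimization objective is p.c - p.d.
c_obj = np.concatenate([prices, -prices])

# State of charge after t steps is cumsum(c - d); it must stay in [0, energy].
L = np.tril(np.ones((T, T)))
A_ub = np.block([[L, -L], [-L, L]])
b_ub = np.concatenate([np.full(T, energy), np.zeros(T)])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, power)] * (2 * T))
print(f"optimal arbitrage profit: {-res.fun:.1f}")  # buy cheap hours, sell dear
```

The robust and chance-constrained formulations studied in the paper replace the known price vector with uncertainty sets or probabilistic constraints around it.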
|
2501.08474
|
The Theater Stage as Laboratory: Review of Real-Time Comedy LLM Systems
for Live Performance
|
cs.CL
|
In this position paper, we review the eclectic recent history of academic and
artistic works involving computational systems for humor generation, and focus
specifically on live performance. We make the case that AI comedy should be
evaluated in live conditions, in front of audiences sharing either physical or
online spaces, and under real-time constraints. We further suggest that
improvised comedy is therefore the perfect substrate for deploying and
assessing computational humor systems. Using examples of successful AI-infused
shows, we demonstrate that live performance raises three sets of challenges for
computational humor generation: 1) questions around robotic embodiment,
anthropomorphism and competition between humans and machines, 2) questions
around comedic timing and the nature of audience interaction, and 3) questions
about the human interpretation of seemingly absurd AI-generated humor. We argue
that these questions impact the choice of methodologies for evaluating
computational humor, as any such method needs to work around the constraints of
live audiences and performance spaces. These interrogations also highlight
different types of collaborative relationship of human comedians towards AI
tools.
|
2501.08479
|
Skyrise: Exploiting Serverless Cloud Infrastructure for Elastic Data
Processing
|
cs.DB
|
Serverless computing offers elasticity unmatched by conventional server-based
cloud infrastructure. Although modern data processing systems embrace
serverless storage, such as Amazon S3, they continue to manage their compute
resources as servers. This is challenging for unpredictable workloads, leaving
clusters often underutilized. Recent research shows the potential of serverless
compute resources, such as cloud functions, for elastic data processing, but
also sees limitations in performance robustness and cost efficiency for long
running workloads. These challenges require holistic approaches across the
system stack. However, to the best of our knowledge, there is no end-to-end
data processing system built entirely on serverless infrastructure. In this
paper, we present Skyrise, our effort towards building the first fully
serverless SQL query processor. Skyrise exploits the elasticity of its
underlying infrastructure, while alleviating the inherent limitations with a
number of adaptive and cost-aware techniques. We show that both Skyrise's
performance and cost are competitive to other cloud data systems for
terabyte-scale queries of the analytical TPC-H benchmark.
|
2501.08490
|
FLAVARS: A Multimodal Foundational Language and Vision Alignment Model
for Remote Sensing
|
cs.CV cs.LG
|
Remote sensing imagery is dense with objects and contextual visual
information. There is a recent trend to combine paired satellite images and
text captions for pretraining performant encoders for downstream tasks.
However, while contrastive image-text methods like CLIP enable vision-language
alignment and zero-shot classification ability, vision-only downstream
performance tends to degrade compared to image-only pretraining, such as MAE.
In this paper, we propose FLAVARS, a pretraining method that combines the best
of both contrastive learning and masked modeling, along with geospatial
alignment via contrastive location encoding. We find that FLAVARS significantly
outperforms a baseline of SkyCLIP for vision-only tasks such as KNN
classification and semantic segmentation, +6\% mIOU on SpaceNet1, while
retaining the ability to perform zero-shot classification, unlike MAE
pretrained methods.
|
2501.08495
|
Automotive Elevation Mapping with Interferometric Synthetic Aperture
Radar
|
eess.SP cs.CV eess.IV
|
Radar is a low-cost and ubiquitous automotive sensor, but is limited by array
resolution and sensitivity when performing direction of arrival analysis.
Synthetic Aperture Radar (SAR) is a class of techniques to improve azimuth
resolution and sensitivity for radar. Interferometric SAR (InSAR) can be used
to extract elevation from the variations in phase measurements in SAR images.
Utilizing InSAR we show that a typical, low-resolution radar array mounted on a
vehicle can be used to accurately localize detections in 3D space for both
urban and agricultural environments. We generate point clouds in each
environment by combining InSAR with a signal processing scheme tailored to
automotive driving. This low-compute approach allows radar to be used as a
primary sensor to map fine details in complex driving environments and to make
autonomous perception decisions.
|
2501.08496
|
Quantifying the Importance of Data Alignment in Downstream Model
Performance
|
cs.CL cs.AI cs.LG cs.PL
|
Contrary to the conventional emphasis on dataset size, we explore the role of
data alignment -- an often overlooked aspect of data quality -- in training
capable Large Language Models (LLMs). To do so, we use the Task2Vec-based
alignment coefficient, a quantitative measure of the similarity between two
datasets, to quantify the impact of alignment between training data and
evaluation data on downstream performance. In particular, we conduct controlled
\textit{interventional} experiments for two settings: 1. the impact of
increased alignment coefficients between various pre-training (pt) against
evaluation datasets, and 2. the impact of increased alignment coefficients
between domain specific fine-tuning (ft) against domain specific evaluation.
The domain specific task we explore is Autoformalization -- the machine
translation task between natural language and code for formal verification. In
both settings, we find a strong, predictable negative correlation between the
alignment coefficient of a model's training and evaluation data and the model's
loss/perplexity on the respective downstream task. These findings suggest a
re-evaluation of LLM training approaches, demonstrating the relevance of data
alignment compared to data quantity, especially in specialized downstream tasks
such as Autoformalization.
|
2501.08498
|
Heterogeneous Update Processes Shape Information Cascades in Social
Networks
|
cs.MA cs.SI
|
A common assumption in the literature on information diffusion is that
populations are homogeneous regarding individuals' information acquisition and
propagation process: Individuals update their informed and actively
communicating state either through imitation (simple contagion) or peer
influence (complex contagion). Here, we study the impact of the mixing and
placement of individuals with different update processes on how information
cascades in social networks. We consider Simple Spreaders, which take
information from a random neighbor and communicate it, and Threshold-based
Spreaders, which require a threshold number of active neighbors to change their
state to active communication. Even though information reaches all elements of
any (connected) network in a population made exclusively of Simple Spreaders,
we show that, when Simple and Threshold-based Spreaders coexist and occupy
random positions in a social network, the number of Simple Spreaders
systematically amplifies the cascades only in degree heterogeneous networks
(exponential and scale-free). In random and modular structures, this cascading
effect originated by Simple Spreaders only exists above a critical mass of
these individuals. In contrast, when Threshold-based Spreaders are assorted
preferentially in the nodes with a higher degree, the cascading effect of
Simple Spreaders vanishes, and the spread of information is drastically
impaired. Overall, the study highlights the significance of the strategic
placement of different roles in networked structures, with Simple Spreaders
driving widespread cascades in heterogeneous networks and Threshold-based
Spreaders playing a critical regulatory role in information spread with a
tunable effect based on the threshold value.
|
2501.08501
|
Scalable Bayesian Physics-Informed Kolmogorov-Arnold Networks
|
math.NA cs.LG cs.NA
|
Uncertainty quantification (UQ) plays a pivotal role in scientific machine
learning, especially when surrogate models are used to approximate complex
systems. Although multilayer perceptrons (MLPs) are commonly employed as
surrogates, they often suffer from overfitting due to their large number of
parameters. Kolmogorov-Arnold networks (KANs) offer an alternative solution
with fewer parameters. However, gradient-based inference methods, such as
Hamiltonian Monte Carlo (HMC), may result in computational inefficiency when
applied to KANs, especially for large-scale datasets, due to the high cost of
back-propagation. To address these challenges, we propose a novel approach,
combining the dropout Tikhonov ensemble Kalman inversion (DTEKI) with Chebyshev
KANs. This gradient-free method effectively mitigates overfitting and enhances
numerical stability. Additionally, we incorporate the active subspace method to
reduce the parameter-space dimensionality, allowing us to improve the accuracy
of predictions and obtain more reliable uncertainty estimates. Extensive
experiments demonstrate the efficacy of our approach in various test cases,
including scenarios with large datasets and high noise levels. Our results show
that the new method achieves comparable or better accuracy, much higher
efficiency as well as stability compared to HMC, in addition to scalability.
Moreover, by leveraging the low-dimensional parameter subspace, our method
preserves prediction accuracy while substantially reducing further the
computational cost.
|
2501.08502
|
Adapting Whisper for Regional Dialects: Enhancing Public Services for
Vulnerable Populations in the United Kingdom
|
cs.CL cs.AI
|
We collect novel data in the public service domain to evaluate the capability
of the state-of-the-art automatic speech recognition (ASR) models in capturing
regional differences in accents in the United Kingdom (UK), specifically
focusing on two accents from Scotland with distinct dialects. This study
addresses real-world problems where biased ASR models can lead to
miscommunication in public services, disadvantaging individuals with regional
accents particularly those in vulnerable populations. We first examine the
out-of-the-box performance of the Whisper large-v3 model on a baseline dataset
and our data. We then explore the impact of fine-tuning Whisper on the
performance in the two UK regions and investigate the effectiveness of existing
model evaluation techniques for our real-world application through manual
inspection of model errors. We observe that the Whisper model has a higher word
error rate (WER) on our test datasets compared to the baseline data and
fine-tuning on data from a given region improves performance on test data with
the same domain and accent. The fine-tuned models also appear to show improved
performance when applied to test data from outside the region they were trained
on, suggesting that fine-tuned models may be transferable within parts of the
UK. Our manual analysis of model outputs reveals the benefits and drawbacks of
using WER as an evaluation metric and fine-tuning to adapt to regional
dialects.
|
2501.08504
|
SuperSAM: Crafting a SAM Supernetwork via Structured Pruning and
Unstructured Parameter Prioritization
|
cs.CV cs.LG
|
Neural Architecture Search (NAS) is a powerful approach for automating the
design of efficient neural architectures. In contrast to traditional NAS
methods, recently proposed one-shot NAS methods prove to be more efficient in
performing NAS. One-shot NAS works by generating a singular weight-sharing
supernetwork that acts as a search space (container) of subnetworks. Despite
its achievements, designing the one-shot search space remains a major
challenge. In this work we propose a search space design strategy for Vision
Transformer (ViT)-based architectures. In particular, we convert the Segment
Anything Model (SAM) into a weight-sharing supernetwork called SuperSAM. Our
approach involves automating the search space design via layer-wise structured
pruning and parameter prioritization. While the structured pruning applies
probabilistic removal of certain transformer layers, parameter prioritization
performs weight reordering and slicing of MLP-blocks in the remaining layers.
We train supernetworks on several datasets using the sandwich rule. For
deployment, we enhance subnetwork discovery by utilizing a program autotuner to
identify efficient subnetworks within the search space. The resulting
subnetworks are 30-70% smaller in size compared to the original pre-trained SAM
ViT-B, yet outperform the pretrained model. Our work introduces a new and
effective method for ViT NAS search-space design.
|
2501.08505
|
Yuan: Yielding Unblemished Aesthetics Through A Unified Network for
Visual Imperfections Removal in Generated Images
|
cs.CV eess.IV
|
Generative AI presents transformative potential across various domains, from
creative arts to scientific visualization. However, the utility of AI-generated
imagery is often compromised by visual flaws, including anatomical
inaccuracies, improper object placements, and misplaced textual elements. These
imperfections pose significant challenges for practical applications. To
overcome these limitations, we introduce \textit{Yuan}, a novel framework that
autonomously corrects visual imperfections in text-to-image synthesis.
\textit{Yuan} uniquely conditions on both the textual prompt and the segmented
image, generating precise masks that identify areas in need of refinement
without requiring manual intervention -- a common constraint in previous
methodologies. Following the automated masking process, an advanced inpainting
module seamlessly integrates contextually coherent content into the identified
regions, preserving the integrity and fidelity of the original image and
associated text prompts. Through extensive experimentation on publicly
available datasets such as ImageNet100 and Stanford Dogs, along with a
custom-generated dataset, \textit{Yuan} demonstrated superior performance in
eliminating visual imperfections. Our approach consistently achieved higher
scores in quantitative metrics, including NIQE, BRISQUE, and PI, alongside
favorable qualitative evaluations. These results underscore \textit{Yuan}'s
potential to significantly enhance the quality and applicability of
AI-generated images across diverse fields.
|
2501.08506
|
Exploring the Efficacy of Meta-Learning: Unveiling Superior Data
Diversity Utilization of MAML Over Pre-training
|
cs.LG cs.AI cs.CV
|
Currently, data and model size dominate the narrative in the training of
super-large, powerful models. However, there has been a lack of exploration on
the effect of other attributes of the training dataset on model performance. We
hypothesize that dataset diversity can impact the performance of vision models.
Our study shows positive correlations between test set accuracy and data
diversity, providing an argument for furthering the research of dataset
attributes beyond size. We analyzed pre-training and model-agnostic
meta-learning methods on twelve popular visual datasets (e.g., Omniglot,
CIFAR-FS, Aircraft) and five model configurations, including MAML variants with
different numbers of inner gradient steps and supervised learning. We show
moderate to strong positive correlations (R-squared: 0.15-0.42) between
accuracy and data diversity and weaker but significant correlations (R-squared:
~0.2) between loss and diversity. These findings support our hypothesis and
demonstrate a promising way for a deeper exploration of how formal data
diversity influences model performance. This initial study highlights the
potential of (Task2Vec) data diversity as a valuable measure in the rapidly
evolving field of large-scale learning and emphasizes that understanding the
dataset is key to building more powerful and generalizable models.
|
2501.08507
|
A Framework for Dynamic Situational Awareness in Human Robot Teams: An
Interview Study
|
cs.RO cs.HC
|
In human-robot teams, human situational awareness is the operator's conscious
knowledge of the team's states, actions, plans and their environment.
Appropriate human situational awareness is critical to successful human-robot
collaboration. In human-robot teaming, it is often assumed that the best and
required level of situational awareness is knowing everything at all times.
This view is problematic, because what a human needs to know for optimal team
performance varies given the dynamic environmental conditions, task context and
roles and capabilities of team members. We explore this topic by interviewing
16 participants with active and repeated experience in diverse human-robot
teaming applications. Based on analysis of these interviews, we derive a
framework explaining the dynamic nature of required situational awareness in
human-robot teaming. In addition, we identify a range of factors affecting the
dynamic nature of required and actual levels of situational awareness (i.e.,
dynamic situational awareness), types of situational awareness inefficiencies
resulting from gaps between actual and required situational awareness, and
their main consequences. We also reveal various strategies, initiated by humans
and robots, that assist in maintaining the required situational awareness. Our
findings inform the implementation of accurate estimates of dynamic situational
awareness and the design of user-adaptive human-robot interfaces. Therefore,
this work contributes to the future design of more collaborative and effective
human-robot teams.
|
2501.08508
|
Score-based 3D molecule generation with neural fields
|
cs.LG q-bio.BM
|
We introduce a new representation for 3D molecules based on their continuous
atomic density fields. Using this representation, we propose a new model based
on walk-jump sampling for unconditional 3D molecule generation in the
continuous space using neural fields. Our model, FuncMol, encodes molecular
fields into latent codes using a conditional neural field, samples noisy codes
from a Gaussian-smoothed distribution with Langevin MCMC (walk), denoises these
samples in a single step (jump), and finally decodes them into molecular
fields. FuncMol performs all-atom generation of 3D molecules without
assumptions on the molecular structure and scales well with the size of
molecules, unlike most approaches. Our method achieves competitive results on
drug-like molecules and easily scales to macro-cyclic peptides, with at least
one order of magnitude faster sampling. The code is available at
https://github.com/prescient-design/funcmol.
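The walk-jump scheme itself is generic; a one-dimensional toy version, with an analytic smoothed score over synthetic data (purely illustrative, not FuncMol's neural-field setting), looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5  # smoothing noise level
# Synthetic bimodal "data" standing in for latent codes of molecules.
data = rng.choice([-2.0, 2.0], size=256) + 0.1 * rng.normal(size=256)

def score(y):
    """Score (grad log-density) of the Gaussian-smoothed empirical density."""
    d = data - y
    w = np.exp(-d**2 / (2 * sigma**2))
    w /= w.sum()
    return (w * d).sum() / sigma**2

# Walk: Langevin MCMC on the Gaussian-smoothed density.
y, step = 0.0, 0.05
for _ in range(2000):
    y += step * score(y) + np.sqrt(2 * step) * rng.normal()

# Jump: single-step denoising via Tweedie's formula, E[x|y] = y + sigma^2 * score(y).
x = y + sigma**2 * score(y)
print(f"noisy walk sample y={y:.2f}, denoised jump sample x={x:.2f}")
```

In FuncMol the analytic score is replaced by a learned network over latent codes, and the jump decodes back to a molecular field.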
|
2501.08512
|
Ensuring Truthfulness in Distributed Aggregative Optimization
|
cs.MA
|
Distributed aggregative optimization methods are gaining increased traction
due to their ability to address cooperative control and optimization problems,
where the objective function of each agent depends not only on its own decision
variable but also on the aggregation of other agents' decision variables.
Nevertheless, existing distributed aggregative optimization methods implicitly
assume all agents to be truthful in information sharing, which can be
unrealistic in real-world scenarios, where agents may act selfishly or
strategically. In fact, an opportunistic agent may deceptively share false
information in its own favor to minimize its own loss, which, however, will
compromise the network-level global performance. To solve this issue, we
propose a new distributed aggregative optimization algorithm that can ensure
truthfulness of agents and convergence performance. To the best of our
knowledge, this is the first algorithm that ensures truthfulness in a fully
distributed setting, where no "centralized" aggregator exists to collect
private information/decision variables from participating agents. We
systematically characterize the convergence rate of our algorithm under
nonconvex/convex/strongly convex objective functions, which generalizes
existing distributed aggregative optimization results that only focus on convex
objective functions. We also rigorously quantify the tradeoff between
convergence performance and the level of enabled truthfulness under different
convexity conditions. Numerical simulations using distributed charging of
electric vehicles confirm the efficacy of our algorithm.
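As a minimal illustration of the aggregative structure (not the paper's truthfulness mechanism), consider agents whose quadratic costs couple through the mean decision variable; the cost parameters below are assumptions for the sketch:

```python
# Toy aggregative problem: agent i minimizes (x_i - a_i)^2 + (s(x) - b)^2,
# where s(x) = mean(x) couples every agent's cost to all decision variables.
N = 4
a = [1.0, 2.0, 3.0, 4.0]
b = 0.0
x = [0.0] * N

def aggregate(x):
    return sum(x) / len(x)

for _ in range(500):
    s = aggregate(x)  # in a truly distributed setting, s is tracked by consensus
    grad = [2 * (x[i] - a[i]) + (2 / N) * (s - b) for i in range(N)]
    x = [x[i] - 0.1 * grad[i] for i in range(N)]

print([round(v, 3) for v in x], round(aggregate(x), 3))
```

At the optimum each agent shades its local target a_i to pull the aggregate toward b; here the fixed point is x_i = a_i - 0.5 with aggregate 2.0. A deceptive agent could misreport its contribution to s to avoid this shading, which is exactly the incentive problem the paper addresses.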
|
2501.08514
|
Multimodal Fake News Video Explanation Generation: Dataset, Model, and
Evaluation
|
cs.CV cs.MM
|
Although existing methods have addressed fake news video detection as a
classification problem, it is not clear why certain news content is identified
as fake. Without proper explanation, end users may not be able to understand
the potential meaning of fake news. Therefore, we propose a novel task, Fake
News Video Explanation (FNVE), to generate natural language explanations that
reveal the falseness of news videos. To this end, we first developed ONVE and
VTSE, two new datasets to explain fake news video posts. Then, we propose a
Multimodal Relation Graph Transformer (MRGT) model to benchmark ONVE and VTSE.
MRGT introduces a multimodal relation graph to comprehensively represent
multimodal relations and a BART-based decoder to generate explanations. The
experimental results show that the proposed MRGT outperforms strong
baselines. In addition, human evaluation on the annotated ONVE and VTSE also
yields high adequacy ratings.
|
2501.08515
|
Learning Hyperplane Tree: A Piecewise Linear and Fully Interpretable
Decision-making Framework
|
cs.LG
|
This paper introduces a novel tree-based model, Learning Hyperplane Tree
(LHT), which outperforms state-of-the-art (SOTA) tree models for classification
tasks on several public datasets. The structure of LHT is simple and efficient:
it partitions the data using several hyperplanes to progressively distinguish
between target and non-target class samples. Although the separation is not
perfect at each stage, LHT effectively improves the distinction through
successive partitions. During testing, a sample is classified by evaluating the
hyperplanes defined in the branching blocks and traversing down the tree until
it reaches the corresponding leaf block. The class of the test sample is then
determined using the piecewise linear membership function defined in the leaf
blocks, which is derived through least-squares fitting and fuzzy logic. LHT is
highly transparent and interpretable--at each branching block, the contribution
of each feature to the classification can be clearly observed.
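A greatly simplified sketch of the ingredients named above, a single branching hyperplane plus a least-squares leaf membership, is given below; the mean-difference hyperplane and clipped linear membership are stand-ins for illustration, not the LHT learning rule itself:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy binary data: target class around (2, 2), non-target around (0, 0).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# One branching "block": a hyperplane w.x + b = 0. As a simple stand-in for a
# learned hyperplane, take w along the difference of class means.
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
w = mu1 - mu0
b = -w @ (mu0 + mu1) / 2
score = X @ w + b  # signed distance: per-feature contributions w_j * x_j are explicit

# Leaf "membership": least-squares fit of labels on the score, clipped to [0, 1]
# (a piecewise linear membership function).
A = np.vstack([score, np.ones_like(score)]).T
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
member = np.clip(A @ coef, 0, 1)
pred = (member >= 0.5).astype(int)
print((pred == y).mean())
```

The interpretability claim is visible here: each feature's contribution to the branching decision is just its weight times its value.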
|
2501.08516
|
A Survey on IBR Penetrated Power System Stability Analysis Using
Frequency Scanning
|
eess.SY cs.SY
|
The rapid rise in inverter-based renewable resources has heightened concerns
over subsynchronous resonance and oscillations, thereby challenging grid
stability. This paper reviews approaches to identify and mitigate these issues,
focusing on frequency scanning methods for stability assessment. It categorizes
white-, black-, and gray-box modeling techniques, compares positive-sequence,
dq-frame, and alpha-beta domain scanning, and examines perturbation shapes like
step, ramp, and chirp. A comparative study highlights their strengths,
limitations, and suitability for specific scenarios. By summarizing past events
and surveying available tools, this work guides operators and researchers
toward more effective, reliable stability analysis methods in grids with high
renewable penetration.
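As a minimal example of the scanning idea, here is a single-tone scan of an assumed series R-L branch: inject a sinusoidal perturbation, "measure" the response, and recover the complex impedance by Fourier correlation at the scan frequency. The branch parameters and sampling setup are illustrative, not any of the surveyed tools:

```python
import numpy as np

R, L = 1.0, 0.01           # assumed series R-L branch under test
fs, T = 10_000, 1.0
t = np.arange(0, T, 1 / fs)

def scan(f):
    i = np.sin(2 * np.pi * f * t)      # injected current perturbation
    v = R * i + L * np.gradient(i, t)  # measured voltage response
    # Fourier correlation at the scan frequency -> complex phasor ratio V/I.
    ref = np.exp(-2j * np.pi * f * t)
    return (v * ref).sum() / (i * ref).sum()

for f in (10, 50, 100):
    Z = scan(f)
    print(f, round(abs(Z), 3), round(np.angle(Z, deg=True), 1))
```

Sweeping f and inspecting the recovered impedance (here it should track R + j*2*pi*f*L) is the essence of a frequency scan; chirp or step perturbations trade scan time against frequency resolution.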
|
2501.08518
|
Easing Seasickness through Attention Redirection with a
Mindfulness-Based Brain-Computer Interface
|
cs.HC cs.AI eess.SP q-bio.QM
|
Seasickness is a prevalent issue that adversely impacts both passenger
experiences and the operational efficiency of maritime crews. While techniques
that redirect attention have proven effective in alleviating motion sickness
symptoms in terrestrial environments, applying similar strategies to manage
seasickness poses unique challenges due to the prolonged and intense motion
environment associated with maritime travel. In this study, we propose a
mindfulness brain-computer interface (BCI), specifically designed to redirect
attention with the aim of mitigating seasickness symptoms in real-world
settings. Our system utilizes a single-channel headband to capture prefrontal
EEG signals, which are then wirelessly transmitted to computing devices for the
assessment of mindfulness states. These assessments are translated into
real-time feedback as mindfulness scores and audiovisual stimuli,
facilitating a shift in
attentional focus from physiological discomfort to mindfulness practices. A
total of 43 individuals participated in a real-world maritime experiment
consisting of three sessions: a real-feedback mindfulness session, a resting
session, and a pseudofeedback mindfulness session. Notably, 81.39% of
participants reported that the mindfulness BCI intervention was effective, and
there was a significant reduction in the severity of seasickness, as measured
by the Misery Scale (MISC). Furthermore, EEG analysis revealed a decrease in
the theta/beta ratio, corresponding with the alleviation of seasickness
symptoms. A decrease in overall EEG band power during the real-feedback
mindfulness session suggests that the mindfulness BCI fosters a more tranquil
and downregulated state of brain activity. Overall, this study presents a
novel nonpharmacological, portable, and effective approach for seasickness
intervention, with the potential to enhance the cruising experience for both
passengers and crews.
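The theta/beta ratio mentioned above is a standard EEG band-power ratio; a minimal sketch on synthetic data follows, where the band edges, sampling rate, and signal composition are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 250, 10                      # assumed sampling rate (Hz) and duration (s)
t = np.arange(0, T, 1 / fs)

# Synthetic single-channel "EEG": strong 6 Hz theta, weaker 20 Hz beta, noise.
x = (2.0 * np.sin(2 * np.pi * 6 * t)
     + 0.5 * np.sin(2 * np.pi * 20 * t)
     + 0.2 * rng.standard_normal(t.size))

def band_power(x, fs, lo, hi):
    # Power in [lo, hi) Hz from the periodogram.
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    psd = (np.abs(X) ** 2) / x.size
    return psd[(f >= lo) & (f < hi)].sum()

theta = band_power(x, fs, 4, 8)      # theta band
beta = band_power(x, fs, 13, 30)     # beta band
ratio = theta / beta
print(round(ratio, 2))               # > 1 here: theta dominates by construction
```

A decrease in this ratio over a session is the kind of spectral change the study reports alongside symptom relief.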
|
2501.08520
|
Chance-Constrained Sampling-Based MPC for Collision Avoidance in
Uncertain Dynamic Environments
|
cs.RO cs.SY eess.SY
|
Navigating safely in dynamic and uncertain environments is challenging due to
uncertainties in perception and motion. This letter presents C2U-MPPI, a robust
sampling-based Model Predictive Control (MPC) framework that addresses these
challenges by leveraging the Unscented Model Predictive Path Integral (U-MPPI)
control strategy with integrated probabilistic chance constraints, ensuring
more reliable and efficient navigation under uncertainty. Unlike gradient-based
MPC methods, our approach (i) avoids linearization of system dynamics and
directly applies non-convex and nonlinear chance constraints, enabling more
accurate and flexible optimization, and (ii) enhances computational efficiency
by reformulating probabilistic constraints into a deterministic form and
employing a layered dynamic obstacle representation, enabling real-time
handling of multiple obstacles. Extensive experiments in simulated and
real-world human-shared environments validate the effectiveness of our
algorithm against baseline methods, showcasing its capability to generate
feasible trajectories and control inputs that adhere to system dynamics and
constraints in dynamic settings, enabled by unscented-based sampling strategy
and risk-sensitive trajectory evaluation. A supplementary video is available
at: https://youtu.be/FptAhvJlQm8
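The reformulation of a probabilistic constraint into a deterministic one can be illustrated in the scalar Gaussian case: P(x <= b) >= 1 - eps becomes mu + z_{1-eps} * sqrt(var) <= b. The gap statistics below are toy values, not the C2U-MPPI formulation itself:

```python
import math
from statistics import NormalDist

def chance_constraint_ok(mu, var, b, eps):
    """Deterministic surrogate for P(x <= b) >= 1 - eps with x ~ N(mu, var)."""
    z = NormalDist().inv_cdf(1 - eps)
    return mu + z * math.sqrt(var) <= b

# Robot-obstacle gap g ~ N(mu_gap, var_gap); require P(g >= d_safe) >= 1 - eps,
# i.e. P(-g <= -d_safe) >= 1 - eps.
mu_gap, d_safe, eps = 1.0, 0.5, 0.05
ok = chance_constraint_ok(-mu_gap, 0.04, -d_safe, eps)     # low uncertainty
risky = chance_constraint_ok(-mu_gap, 0.16, -d_safe, eps)  # high uncertainty
print(ok, risky)
```

The same mean gap passes under low uncertainty and fails under high uncertainty, which is the risk-sensitive behavior such constraints are meant to induce.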
|
2501.08521
|
Mitigating Domain Shift in Federated Learning via Intra- and
Inter-Domain Prototypes
|
cs.LG cs.AI
|
Federated Learning (FL) has emerged as a decentralized machine learning
technique, allowing clients to train a global model collaboratively without
sharing private data. However, most FL studies ignore the crucial challenge of
heterogeneous domains where each client has a distinct feature distribution,
which is common in real-world scenarios. Prototype learning, which leverages
the mean feature vectors within the same classes, has become a prominent
solution for federated learning under domain skew. However, existing federated
prototype learning methods only consider inter-domain prototypes on the server
and overlook intra-domain characteristics. In this work, we introduce a novel
federated prototype learning method, namely I$^2$PFL, which incorporates
$\textbf{I}$ntra-domain and $\textbf{I}$nter-domain $\textbf{P}$rototypes, to
mitigate domain shifts and learn a generalized global model across multiple
domains in federated learning. To construct intra-domain prototypes, we propose
feature alignment with MixUp-based augmented prototypes to capture the
diversity of local domains and enhance the generalization of local features.
Additionally, we introduce a reweighting mechanism for inter-domain prototypes
to generate generalized prototypes to provide inter-domain knowledge and reduce
domain skew across multiple clients. Extensive experiments on the Digits,
Office-10, and PACS datasets illustrate the superior performance of our method
compared to other baselines.
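The two ingredients named above, class-mean prototypes and MixUp-style augmentation, can be sketched as follows; the feature distributions and the exact mixing rule are illustrative assumptions, not the I$^2$PFL algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local features: 2-D embeddings for two classes on one client.
feats = {0: rng.normal([0, 0], 0.1, size=(20, 2)),
         1: rng.normal([3, 3], 0.1, size=(20, 2))}

# Prototype: the mean feature vector per class.
protos = {c: f.mean(axis=0) for c, f in feats.items()}

def mixup_prototype(f, alpha=0.2):
    # MixUp-style augmented prototype: average convex combinations of random
    # same-class feature pairs (an illustrative variant of the idea).
    lam = rng.beta(alpha, alpha, size=len(f))
    mixed = lam[:, None] * f + (1 - lam[:, None]) * f[rng.permutation(len(f))]
    return mixed.mean(axis=0)

aug = {c: mixup_prototype(f) for c, f in feats.items()}
print({c: np.round(p, 2) for c, p in protos.items()})
```

In a federated setting, each client would share such prototypes instead of raw features; the server can then reweight prototypes across clients to build generalized inter-domain prototypes.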
|
2501.08523
|
Doc-Guided Sent2Sent++: A Sent2Sent++ Agent with Doc-Guided Memory for
Document-level Machine Translation
|
cs.CL cs.AI
|
The field of artificial intelligence has witnessed significant advancements
in natural language processing, largely attributed to the capabilities of Large
Language Models (LLMs). These models form the backbone of Agents designed to
address long-context dependencies, particularly in Document-level Machine
Translation (DocMT). DocMT presents unique challenges, with quality,
consistency, and fluency being the key metrics for evaluation. Existing
approaches, such as Doc2Doc and Doc2Sent, either omit sentences or compromise
fluency. This paper introduces Doc-Guided Sent2Sent++, an Agent that employs
an incremental sentence-level forced decoding strategy to ensure every
sentence is translated while enhancing the fluency of adjacent sentences. Our
Agent leverages a Doc-Guided Memory, focusing solely on the summary and its
translation, which we find to be an efficient approach to maintaining
consistency. Through extensive testing across multiple languages and domains,
we demonstrate that Sent2Sent++ outperforms other methods in terms of quality,
consistency, and fluency. The results indicate that our approach achieves
significant improvements in metrics such as s-COMET, d-COMET, LTCR-$1_f$, and
document-level perplexity (d-ppl). The contributions of this paper include a
detailed analysis of current DocMT research, the introduction of the
Sent2Sent++ decoding method, the Doc-Guided Memory mechanism, and validation of
its effectiveness across languages and domains.
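The decoding loop described above can be sketched with a stand-in translator; the `translate` function and memory fields are hypothetical placeholders, not the paper's implementation:

```python
# Hypothetical sketch of the Sent2Sent++ loop: translate a document sentence by
# sentence (so none is omitted), conditioning each step on a doc-guided memory
# (summary plus its translation) and the previously translated sentence.
def translate(sentence, memory, prev):          # stand-in for an LLM call
    return f"[{memory['summary_tgt']}|{prev}] {sentence.upper()}"

def sent2sent(doc_sentences, summary_src, summary_tgt):
    memory = {"summary_src": summary_src, "summary_tgt": summary_tgt}
    out, prev = [], ""
    for s in doc_sentences:
        t = translate(s, memory, prev)          # forced per-sentence decoding
        out.append(t)
        prev = t                                # adjacent-sentence context for fluency
    return out

doc = ["hello world.", "goodbye."]
res = sent2sent(doc, "greeting doc", "GREETING DOC")
print(res)
```

The invariant is that the output has exactly one translation per source sentence, which is what distinguishes this scheme from Doc2Doc approaches that may drop sentences.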
|