| id | title | categories | abstract |
|---|---|---|---|
2501.17076
|
DINOSTAR: Deep Iterative Neural Object Detector Self-Supervised Training
for Roadside LiDAR Applications
|
cs.CV
|
Recent advancements in deep-learning methods for object detection in
point-cloud data have enabled numerous roadside applications, fostering
improvements in transportation safety and management. However, the intricate
nature of point-cloud data poses significant challenges for human-supervised
labeling, resulting in substantial expenditures of time and capital. This paper
addresses the issue by developing an end-to-end, scalable, and self-supervised
framework for training deep object detectors tailored for roadside point-cloud
data. The proposed framework leverages self-supervised, statistically modeled
teachers to train off-the-shelf deep object detectors, thus circumventing the
need for human supervision. The teacher models follow fine-tuned standard
practices of background filtering, object clustering, bounding-box fitting, and
classification to generate noisy labels. We show that training the student
model on the combined noisy annotations from a multitude of teachers enhances
its capacity to discern background from foreground more effectively and forces
it to learn diverse point-cloud representations for the object categories of
interest. Evaluations involving publicly available roadside datasets and
state-of-the-art deep object detectors demonstrate that the proposed framework
achieves performance comparable to deep object detectors trained on
human-annotated labels, despite not utilizing such annotations in its training
process.
|
2501.17077
|
Induced Modularity and Community Detection for Functionally
Interpretable Reinforcement Learning
|
cs.LG cs.AI
|
Interpretability in reinforcement learning is crucial for ensuring AI systems
align with human values and fulfill diverse related requirements, including
safety, robustness, and fairness. Building on recent approaches to encouraging
sparsity and locality in neural networks, we demonstrate how the penalisation
of non-local weights leads to the emergence of functionally independent modules
in the policy network of a reinforcement learning agent. To illustrate this, we
demonstrate the emergence of two parallel modules for assessment of movement
along the X and Y axes in a stochastic Minigrid environment. Through the novel
application of community detection algorithms, we show how these modules can be
automatically identified and their functional roles verified through direct
intervention on the network weights prior to inference. This establishes a
scalable framework for reinforcement learning interpretability through
functional modularity, addressing challenges regarding the trade-off between
completeness and cognitive tractability of reinforcement learning explanations.
|
2501.17079
|
Learning Mean Field Control on Sparse Graphs
|
cs.MA cs.AI cs.GT cs.LG
|
Large agent networks are abundant in applications and nature and pose
difficult challenges in the field of multi-agent reinforcement learning (MARL)
due to their computational and theoretical complexity. While graphon mean field
games and their extensions provide efficient learning algorithms for dense and
moderately sparse agent networks, the case of realistic sparser graphs remains
largely unsolved. Thus, we propose a novel mean field control model inspired by
local weak convergence to include sparse graphs such as power law networks with
coefficients above two. Besides a theoretical analysis, we design scalable
learning algorithms which apply to the challenging class of graph sequences
with finite first moment. We compare our model and algorithms for various
examples on synthetic and real world networks with mean field algorithms based
on Lp graphons and graphexes. Our approach outperforms existing methods in many
examples and on various networks, owing to its design, which targets an
important but so far hard-to-solve class of MARL problems.
|
2501.17081
|
Graph Transformers for inverse physics: reconstructing flows around
arbitrary 2D airfoils
|
cs.LG cs.AI cs.CE
|
We introduce a Graph Transformer framework that serves as a general inverse
physics engine on meshes, demonstrated through the challenging task of
reconstructing aerodynamic flow fields from sparse surface measurements. While
deep learning has shown promising results in forward physics simulation,
inverse problems remain particularly challenging due to their ill-posed nature
and the difficulty of propagating information from limited boundary
observations. Our approach addresses these challenges by combining the
geometric expressiveness of message-passing neural networks with the global
reasoning of Transformers, enabling efficient learning of inverse mappings from
boundary conditions to complete states. We evaluate this framework on a
comprehensive dataset of steady-state RANS simulations around diverse airfoil
geometries, where the task is to reconstruct full pressure and velocity fields
from surface pressure measurements alone. The architecture achieves high
reconstruction accuracy while maintaining fast inference times. We conduct
experiments and provide insights into the relative importance of local
geometric processing and global attention mechanisms in mesh-based inverse
problems. We also find that the framework is robust to reduced sensor coverage.
These results suggest that Graph Transformers can serve as effective inverse
physics engines across a broader range of applications where complete system
states must be reconstructed from limited boundary observations.
|
2501.17084
|
Token-by-Token Regeneration and Domain Biases: A Benchmark of LLMs on
Advanced Mathematical Problem-Solving
|
cs.LG
|
Large language models (LLMs) excel in many natural language tasks, yet they
struggle with complex mathematical problem-solving, particularly in symbolic
reasoning and maintaining consistent output. This study evaluates 10 LLMs with
7 to 8 billion parameters using 945 competition-level problems from the MATH
dataset. The focus is on their ability to generate executable Python code as a
step in their reasoning process, involving over 9,450 code executions. The
research introduces an evaluation framework using mistral-large-2411 to rate
answers on a 5-point scale, which helps address inconsistencies in mathematical
notation. It also examines the impact of regenerating output token-by-token on
refining results. The findings reveal a significant 34.5% performance gap
between the top commercial model (gpt-4o-mini, scoring 83.7%) and the least
effective open-source model (open-codestral-mamba:v0.1, scoring 49.2%). This
disparity is especially noticeable in complex areas like Number Theory. While
token-by-token regeneration slightly improved accuracy (+0.8%) for the model
llama3.1:8b, it also reduced code execution time by 36.7%, highlighting a
trade-off between efficiency and precision. The study also noted a consistent
trend where harder problems correlated with lower accuracy across all models.
Despite using controlled execution environments, less than 1% of the generated
code was unsafe, and 3.17% of problems remained unsolved after 10 attempts,
suggesting that hybrid reasoning methods may be beneficial.
|
2501.17085
|
Evaluating CrowdSplat: Perceived Level of Detail for Gaussian Crowds
|
cs.CV
|
Efficient and realistic crowd rendering is an important element of many
real-time graphics applications such as Virtual Reality (VR) and games. To this
end, Levels of Detail (LOD) avatar representations such as polygonal meshes,
image-based impostors, and point clouds have been proposed and evaluated. More
recently, 3D Gaussian Splatting has been explored as a potential method for
real-time crowd rendering. In this paper, we present a two-alternative forced
choice (2AFC) experiment that aims to determine the perceived quality of 3D
Gaussian avatars. Three factors were explored: Motion, LOD (i.e., #Gaussians),
and the avatar height in Pixels (corresponding to the viewing distance).
Participants viewed pairs of animated 3D Gaussian avatars and were tasked with
choosing the most detailed one. Our findings can inform the optimization of LOD
strategies in Gaussian-based crowd rendering, thereby helping to achieve
efficient rendering while maintaining visual quality in real-time applications.
|
2501.17086
|
Accelerated Training through Iterative Gradient Propagation Along the
Residual Path
|
cs.LG
|
Despite being the cornerstone of deep learning, backpropagation is criticized
for its inherent sequentiality, which can limit the scalability of very deep
models. Such models faced convergence issues due to vanishing gradients, which
were later resolved using residual connections; variants of these are now
widely used in modern architectures. However, the computational cost of
backpropagation remains a major burden, accounting for most of the training
time. Taking advantage of residual-like architectural designs, we introduce
Highway backpropagation, a parallelizable iterative algorithm that approximates
backpropagation by alternating between i) accumulating the gradient estimates
along the residual path and ii) backpropagating them through every layer in
parallel. This algorithm is
naturally derived from a decomposition of the gradient as the sum of gradients
flowing through all paths and is adaptable to a diverse set of common
architectures, ranging from ResNets and Transformers to recurrent neural
networks. Through an extensive empirical study on a large selection of tasks
and models, we evaluate Highway-BP and show that major speedups can be achieved
with minimal performance degradation.
|
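The Highway-BP abstract above describes alternating residual-path accumulation with parallel per-layer backpropagation. The following is a minimal sketch of that idea for a purely linear residual chain `y_{l+1} = y_l + J_l @ y_l`; it is a reconstruction from the abstract's description, not the authors' exact algorithm, and the matrices and sweep counts are illustrative assumptions.

```python
# Sketch (reconstructed from the abstract, not the authors' code) of a
# Highway-BP-style iteration on a linear residual chain y_{l+1} = y_l + J_l y_l.
# Exact backprop gives dL/dy_l = (I + J_l)^T dL/dy_{l+1}. The iteration below
# alternates i) accumulating gradient estimates along the residual (identity)
# path and ii) applying every layer's Jacobian-transpose in parallel.

def matT_vec(J, v):                       # J^T v for a 2x2 matrix
    return [J[0][0]*v[0] + J[1][0]*v[1], J[0][1]*v[0] + J[1][1]*v[1]]

def vadd(a, b):
    return [a[0] + b[0], a[1] + b[1]]

def exact_backprop(Js, g):
    h = g
    for J in reversed(Js):
        h = vadd(h, matT_vec(J, h))       # h <- (I + J)^T h
    return h

def highway_bp(Js, g, sweeps):
    L = len(Js)
    h = [g] * (L + 1)                     # h[l] estimates dL/dy_l
    for _ in range(sweeps):
        # ii) every layer's Jacobian-transpose product is independent: parallel
        branch = [matT_vec(Js[m], h[m + 1]) for m in range(L)]
        # i) accumulate the branch contributions along the residual path
        new_h = [g] * (L + 1)
        for l in range(L - 1, -1, -1):
            new_h[l] = vadd(new_h[l + 1], branch[l])
        h = new_h
    return h[0]

Js = [[[0.1, 0.2], [-0.3, 0.05]],
      [[0.0, 0.4], [0.1, -0.2]],
      [[0.2, 0.0], [0.0, 0.3]]]
g = [1.0, -1.0]                           # dL/dy_L

approx = highway_bp(Js, g, sweeps=len(Js))
exact = exact_backprop(Js, g)
assert all(abs(a - e) < 1e-12 for a, e in zip(approx, exact))
```

Each sweep widens the set of captured gradient paths by one non-identity branch, so the estimate matches exact backpropagation after at most L sweeps, while the per-layer products inside a sweep carry no sequential dependency.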
2501.17088
|
Mamba-Shedder: Post-Transformer Compression for Efficient Selective
Structured State Space Models
|
cs.LG cs.AI cs.CL
|
Large pre-trained models have achieved outstanding results in sequence
modeling. The Transformer block and its attention mechanism have been the main
drivers of the success of these models. Recently, alternative architectures,
such as Selective Structured State Space Models (SSMs), have been proposed to
address the inefficiencies of Transformers. This paper explores the compression
of SSM-based models, particularly Mamba and its hybrids. We study the
sensitivity of these models to the removal of selected components at different
granularities to reduce the model size and computational overhead, thus
improving their efficiency while maintaining accuracy. The proposed solutions,
collectively referred to as Mamba-Shedder, achieve a speedup of up to 1.4x
during inference, demonstrating that model efficiency can be improved by
eliminating several redundancies with minimal impact on the overall model
performance. The code is available at
https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning.
|
2501.17096
|
Why is the estimation of metaorder impact with public market data so
challenging?
|
q-fin.TR cs.AI econ.EM physics.soc-ph
|
Estimating market impact and transaction costs of large trades (metaorders)
is a very important topic in finance. However, models of price and trade based
on public market data provide average price trajectories which are
qualitatively different from what is observed during real metaorder executions:
the price increases linearly, rather than concavely, during the execution, and
the amount of reversion after its end is very limited. We claim
that this is a generic phenomenon due to the fact that even sophisticated
statistical models are unable to correctly describe the origin of the
autocorrelation of the order flow. We propose a modified Transient Impact Model
which provides more realistic trajectories by assuming that only a fraction of
the metaorder trading triggers market order flow. Interestingly, in our model
there is a critical condition on the kernels of the price and order flow
equations in which market impact becomes permanent.
|
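The propagator mechanics underlying a Transient Impact Model can be sketched as below: price impact is the convolution of past signed order flow with a decaying kernel. The power-law kernel and constant participation rate are illustrative assumptions, not the paper's calibrated specification or its modified model.

```python
# Illustrative Transient Impact Model sketch (assumed kernel and flow, not the
# paper's calibrated model): the price is the convolution of past signed order
# flow with a slowly decaying propagator G.

def transient_impact(flow, G):
    """Price path p[t] = sum_{s < t} G(t - s) * flow[s]."""
    T = len(flow)
    return [sum(G(t - s) * flow[s] for s in range(t)) for t in range(T + 1)]

G = lambda lag: (1.0 + lag) ** -0.5           # slowly decaying propagator
T_exec = 50                                    # metaorder length (steps)
flow = [1.0] * T_exec + [0.0] * 50             # buy at constant rate, then stop

p = transient_impact(flow, G)

# Concave growth during execution: price increments shrink over time ...
assert p[10] - p[9] > p[40] - p[39]
# ... and the price reverts (decays) after the metaorder ends.
assert p[T_exec + 30] < p[T_exec]
```

With a decaying kernel, each new child order adds a fresh impact that then relaxes, which is exactly what yields the concave-then-reverting average trajectory that, per the abstract, public-data models fail to reproduce.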
2501.17099
|
Text-to-Image Generation for Vocabulary Learning Using the Keyword
Method
|
cs.HC cs.CV cs.GR cs.LG
|
The 'keyword method' is an effective technique for learning vocabulary of a
foreign language. It involves creating a memorable visual link between what a
word means and what its pronunciation in a foreign language sounds like in the
learner's native language. However, these memorable visual links remain
implicit in people's minds and are not easy to remember for a large set of
words. To enhance the memorisation and recall of vocabulary, we developed
an application that combines the keyword method with text-to-image generators
to externalise the memorable visual links into visuals. These visuals represent
additional stimuli during the memorisation process. To explore the
effectiveness of this approach, we first ran a pilot study investigating how
difficult it is to externalise descriptions of mental visualisations of
memorable links, by asking participants to write them down. We used these
descriptions as prompts for a text-to-image generator (DALL-E2) to convert them
into images and asked participants to select their favourites. Next, we
compared different text-to-image generators (DALL-E2, Midjourney, Stable
Diffusion, and Latent Diffusion) to evaluate the perceived quality of the
images each generated. Despite heterogeneous results, participants mostly
preferred images generated by DALL-E2, which was therefore also used in the
final study. In this study,
we investigated whether providing such images enhances the retention of
vocabulary being learned, compared to the keyword method only. Our results
indicate that people did not encounter difficulties describing their
visualisations of memorable links and that providing corresponding images
significantly improves memory retention.
|
2501.17104
|
COS(M+O)S: Curiosity and RL-Enhanced MCTS for Exploring Story Space via
Language Models
|
cs.CL cs.AI
|
We present COS(M+O)S, a System 2-inspired framework for open-ended plot
development that systematically explores the vast space of possible story
expansions, enabling a 3B-parameter language model to approach the plot quality
of a 70B model on select short-story tasks. The method accomplishes this by
combining Monte Carlo Tree Search (MCTS), guided by a step-level value model
that rewards moderate surprisal (curiosity) while penalizing incoherence, and
Odds Ratio Preference Optimization (ORPO) to fine-tune the policy on high-value
plot expansions. This iterative reinforcement learning loop systematically
explores multiple candidate plot branches, backpropagates quality signals, and
adapts the policy for faster convergence, notably shifting the policy from
puzzle-based Chain-of-Thought to more character-driven storytelling. In
small-scale tests with short-story prompts, 67%-77% of participants favored
COS(M+O)S's highest-rated expansions over lower-rated ones, suggesting that our
learned value function aligns with human judgement. GPT-4o ratings further show that COS(M+O)S
surpasses naive single-pass decoding from Llama 3.2 3B by 0.59 SD, coming
within 0.06 SD of Llama 3.1 70B (no significant difference, p=0.93). Pairwise
comparisons with o1 place COS(M+O)S 1.5 SD above the 3B baseline and find no
statistically significant gap from 70B. Nevertheless, absolute story quality
remains modest, constrained by the small model's capacity and limited training
data.
|
2501.17105
|
Optimal control over Markovian wireless communication channels under
generalized packet dropout compensation
|
eess.SY cs.SY math.OC
|
Control loops closed over wireless links greatly benefit from accurate
estimates of the communication channel condition. To this end, the finite-state
Markov channel model allows for reliable channel state estimation. This paper
develops a Markov jump linear system representation for wireless networked
control with persistent channel state observation, stochastic message losses,
and generalized packet dropout compensation. With this model, we solve the
finite- and infinite-horizon linear quadratic regulation problems and introduce
an easy-to-test stability condition for any given infinite-horizon control law.
We also thoroughly analyze the impact of a scalar general dropout compensation
factor on the stability and closed-loop performance of a rotary inverted
pendulum controlled remotely through a wireless link. Finally, we validate the
results numerically via extensive Monte Carlo simulations, showing the benefits
of the proposed control strategy.
|
2501.17110
|
Solving Roughly Forced Nonlinear PDEs via Misspecified Kernel Methods
and Neural Networks
|
math.NA cs.LG cs.NA
|
We consider the use of Gaussian Processes (GPs) or Neural Networks (NNs) to
numerically approximate the solutions to nonlinear partial differential
equations (PDEs) with rough forcing or source terms, which commonly arise as
pathwise solutions to stochastic PDEs. Kernel methods have recently been
generalized to solve nonlinear PDEs by approximating their solutions as the
maximum a posteriori estimator of GPs that are conditioned to satisfy the PDE
at a finite set of collocation points. The convergence and error guarantees of
these methods, however, rely on the PDE being defined in a classical sense and
its solution possessing sufficient regularity to belong to the associated
reproducing kernel Hilbert space. We propose a generalization of these methods
to handle roughly forced nonlinear PDEs while preserving convergence guarantees
with an oversmoothing GP kernel that is misspecified relative to the true
solution's regularity. This is achieved by conditioning a regular GP to satisfy
the PDE with a modified source term in a weak sense (when integrated against a
finite number of test functions). This is equivalent to replacing the empirical
$L^2$-loss on the PDE constraint by an empirical negative-Sobolev norm. We
further show that this loss function can be used to extend physics-informed
neural networks (PINNs) to stochastic equations, thereby resulting in a new
NN-based variant termed Negative Sobolev Norm-PINN (NeS-PINN).
|
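The weak-form substitution described above can be illustrated on a 1D Poisson problem. The sketch below is our own minimal construction, not the paper's scheme: the pointwise residual of u'' = f is replaced by residuals integrated against a small set of sine test functions (an assumed choice), with integration by parts moving one derivative off the rough term.

```python
import math

# Minimal weak-form residual sketch (our construction, not the paper's exact
# method): for u'' = f on [0,1] with u(0) = u(1) = 0, penalise the residual
# integrated against test functions phi_j(x) = sin(j*pi*x). Integration by
# parts gives r_j = -int u' phi_j' dx - int f phi_j dx, avoiding u''.

N = 2000
h = 1.0 / N
xs = [(i + 0.5) * h for i in range(N)]         # midpoint quadrature nodes

def weak_residuals(du, f, n_test=5):
    """Weak residuals of u'' = f against sin(j*pi*x), j = 1..n_test."""
    res = []
    for j in range(1, n_test + 1):
        phi = lambda x: math.sin(j * math.pi * x)
        dphi = lambda x: j * math.pi * math.cos(j * math.pi * x)
        r = sum((-du(x) * dphi(x) - f(x) * phi(x)) * h for x in xs)
        res.append(r)
    return res

def loss(du, f):
    """Negative-Sobolev-style loss: sum of squared weak residuals."""
    return sum(r * r for r in weak_residuals(du, f))

f = lambda x: -math.pi ** 2 * math.sin(math.pi * x)   # source term
du_exact = lambda x: math.pi * math.cos(math.pi * x)  # u(x) = sin(pi*x)
du_wrong = lambda x: 1.0 - 2.0 * x                    # u(x) = x*(1 - x)

assert loss(du_exact, f) < 1e-6     # exact solution: weak residuals vanish
assert loss(du_wrong, f) > 1e-2     # wrong candidate is penalised
```

Because the residual is only ever integrated against smooth test functions, the loss remains well-defined even when f is too rough for a pointwise L2 penalty, which is the mechanism the abstract exploits.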
2501.17112
|
Unlocking Transparent Alignment Through Enhanced Inverse Constitutional
AI for Principle Extraction
|
cs.LG
|
Traditional methods for aligning Large Language Models (LLMs), such as
Reinforcement Learning from Human Feedback (RLHF) and Direct Preference
Optimization (DPO), rely on implicit principles, limiting interpretability.
Constitutional AI (CAI) offers an explicit, rule-based framework for guiding
model outputs. Building on this, we refine the Inverse Constitutional AI (ICAI)
algorithm, which extracts constitutions from preference datasets. By improving
principle generation, clustering, and embedding processes, our approach
enhances the accuracy and generalizability of extracted principles across
synthetic and real-world datasets. While in-context alignment yields modest
improvements, our results highlight the potential of these principles to foster
more transparent and adaptable alignment methods, offering a promising
direction for future advancements beyond traditional fine-tuning.
|
2501.17115
|
Evidence on the Regularisation Properties of Maximum-Entropy
Reinforcement Learning
|
cs.LG
|
The generalisation and robustness properties of policies learnt through
Maximum-Entropy Reinforcement Learning are investigated on chaotic dynamical
systems with Gaussian noise on the observable. First, the robustness of
entropy-regularised policies under noise contamination of the agent's
observation is observed. Second, notions of statistical learning theory, such as complexity
measures on the learnt model, are borrowed to explain and predict the
phenomenon. Results show the existence of a relationship between
entropy-regularised policy optimisation and robustness to noise, which can be
described by the chosen complexity measures.
|
2501.17116
|
Optimizing Large Language Model Training Using FP4 Quantization
|
cs.LG cs.CL
|
The growing computational demands of training large language models (LLMs)
necessitate more efficient methods. Quantized training presents a promising
solution by enabling low-bit arithmetic operations to reduce these costs. While
FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge
due to significant quantization errors and limited representational capacity.
This work introduces the first FP4 training framework for LLMs, addressing
these challenges with two key innovations: a differentiable quantization
estimator for precise weight updates and an outlier clamping and compensation
strategy to prevent activation collapse. To ensure stability, the framework
integrates a mixed-precision training scheme and vector-wise quantization.
Experimental results demonstrate that our FP4 framework achieves accuracy
comparable to BF16 and FP8, with minimal degradation, scaling effectively to
13B-parameter LLMs trained on up to 100B tokens. With the emergence of
next-generation hardware supporting FP4, our framework sets a foundation for
efficient ultra-low precision training.
|
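The abstract does not detail the proposed differentiable quantization estimator, but the baseline idea it refines can be sketched with the classic straight-through estimator (STE). The E2M1-style FP4 grid and the toy training loop below are illustrative assumptions, not the paper's method.

```python
# Straight-through quantized training sketch (illustrative baseline, not the
# paper's exact differentiable estimator). Forward: snap weights to an
# FP4-like E2M1 grid. Backward: treat the quantizer as identity so gradients
# still update the latent full-precision weight.

FP4_GRID = sorted({s * m for s in (-1, 1)
                   for m in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0)})

def quantize(w):
    """Round-to-nearest onto the FP4 grid (forward pass)."""
    return min(FP4_GRID, key=lambda q: abs(q - w))

def ste_grad(upstream):
    """Backward pass: straight-through estimator, d(quantize)/dw ~= 1."""
    return upstream

# Toy loop: fit a latent scalar so that quantize(w) * 2.0 == 6.0.
w, lr, x, target = 0.0, 0.05, 2.0, 6.0
for _ in range(200):
    y = quantize(w) * x                    # quantized forward pass
    dy = 2.0 * (y - target)                # dL/dy for squared error
    w -= lr * ste_grad(dy * x)             # gradient flows "through" quantizer

assert quantize(w) == 3.0                  # latent w settles on a grid point
```

The quantization error sources named in the abstract (few levels, clamped outliers) are visible even here: the grid has only 15 distinct values, so any target off the grid leaves an irreducible residual.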
2501.17117
|
Histoires Morales: A French Dataset for Assessing Moral Alignment
|
cs.CL cs.AI
|
Aligning language models with human values is crucial, especially as they
become more integrated into everyday life. While models are often adapted to
user preferences, it is equally important to ensure they align with moral norms
and behaviours in real-world social situations. Despite significant progress in
languages like English and Chinese, French has seen little attention in this
area, leaving a gap in understanding how LLMs handle moral reasoning in this
language. To address this gap, we introduce Histoires Morales, a French dataset
derived from Moral Stories, created through translation and subsequently
refined with the assistance of native speakers to guarantee grammatical
accuracy and adaptation to the French cultural context. We also rely on
annotations of the moral values within the dataset to ensure their alignment
with French norms. Histoires Morales covers a wide range of social situations,
including differences in tipping practices, expressions of honesty in
relationships, and responsibilities toward animals. To foster future research,
we also conduct preliminary experiments on the alignment of multilingual models
on French and English data and the robustness of the alignment. We find that
while LLMs are generally aligned with human moral norms by default, they can be
easily influenced by user-preference optimization for both moral and immoral
data.
|
2501.17122
|
Convergence of two-timescale gradient descent ascent dynamics:
finite-dimensional and mean-field perspectives
|
math.OC cs.LG cs.NA math.NA
|
The two-timescale gradient descent-ascent (GDA) is a canonical gradient
algorithm designed to find Nash equilibria in min-max games. We analyze the
two-timescale GDA by investigating the effects of learning rate ratios on
convergence behavior in both finite-dimensional and mean-field settings. In
particular, for finite-dimensional quadratic min-max games, we obtain long-time
convergence in near quasi-static regimes through the hypocoercivity method. For
mean-field GDA dynamics, we investigate convergence under a finite-scale ratio
using a mixed synchronous-reflection coupling technique.
|
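The two-timescale GDA dynamics analyzed above can be sketched on a simple strongly-convex-strongly-concave quadratic game. The specific game, step sizes, and timescale ratio below are illustrative assumptions, not the paper's settings.

```python
# Two-timescale gradient descent-ascent (GDA) sketch on the quadratic min-max
# game f(x, y) = x**2/2 + x*y - y**2/2, whose Nash equilibrium is (0, 0).
# The descent player uses step size eta_x; the ascent player runs on a slower
# timescale eta_y = tau * eta_x with learning-rate ratio tau < 1 (all values
# illustrative).

def two_timescale_gda(x, y, eta_x=0.05, tau=0.1, steps=5000):
    eta_y = tau * eta_x
    for _ in range(steps):
        gx = x + y          # df/dx
        gy = x - y          # df/dy
        # simultaneous updates: descent in x, ascent in y
        x, y = x - eta_x * gx, y + eta_y * gy
    return x, y

x, y = two_timescale_gda(1.0, 1.0)
assert abs(x) < 1e-3 and abs(y) < 1e-3   # converges to the Nash equilibrium
```

Varying `tau` changes the contraction rate of the coupled linear dynamics, which is a toy analogue of the learning-rate-ratio effects the abstract studies in finite-dimensional and mean-field regimes.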
2501.17123
|
Hybrid Deep Learning Model for Multiple Cache Side Channel Attacks
Detection: A Comparative Analysis
|
cs.CR cs.NE
|
Cache side channel attacks are a sophisticated and persistent threat that
exploit vulnerabilities in modern processors to extract sensitive information.
These attacks leverage weaknesses in shared computational resources,
particularly the last level cache, to infer patterns in data access and
execution flows, often bypassing traditional security defenses. Such attacks
are especially dangerous as they can be executed remotely without requiring
physical access to the victim's device. This study focuses on a specific class
of these threats: fingerprinting attacks, where an adversary monitors and
analyzes the behavior of co-located processes via cache side channels. This can
potentially reveal confidential information, such as encryption keys or user
activity patterns. A comprehensive threat model illustrates how attackers
sharing computational resources with target systems exploit these side channels
to compromise sensitive data. To mitigate such risks, a hybrid deep learning
model is proposed for detecting cache side channel attacks. Its performance is
compared with five widely used deep learning models: Multi-Layer Perceptron,
Convolutional Neural Network, Simple Recurrent Neural Network, Long Short-Term
Memory, and Gated Recurrent Unit. The experimental results demonstrate that the
hybrid model achieves a detection rate of up to 99.96%. These findings
highlight the limitations of existing models, the need for enhanced defensive
mechanisms, and directions for future research to secure sensitive data against
evolving side channel threats.
|
2501.17124
|
The Asymptotic Capacity of Byzantine Symmetric Private Information
Retrieval and Its Consequences
|
cs.IT cs.CR cs.NI eess.SP math.IT
|
We consider the problem of finding the asymptotic capacity of symmetric
private information retrieval (SPIR) with $B$ Byzantine servers. Prior to
finding the capacity, a definition for the Byzantine servers is needed since in
the literature there are two different definitions. In \cite{byzantine_tpir},
where it was first defined, the Byzantine servers can send any symbol from the
storage, their received queries and some independent random symbols. In
\cite{unresponsive_byzantine_1}, Byzantine servers send any random symbol
independently of their storage and queries. It is clear that these definitions
are not identical, especially when \emph{symmetric} privacy is required. To
that end, we define Byzantine servers, inspired by \cite{byzantine_tpir}, as
the servers that can share everything, before and after the scheme initiation.
In this setting, we find an upper bound, for the case of an infinite number of
messages, that must be satisfied by all schemes protecting against such
servers, and we develop a scheme that achieves this upper bound. Hence, we
identify the capacity of the problem.
|
2501.17125
|
CoRe-Net: Co-Operational Regressor Network with Progressive Transfer
Learning for Blind Radar Signal Restoration
|
cs.LG
|
Real-world radar signals are frequently corrupted by various artifacts,
including sensor noise, echoes, interference, and intentional jamming,
differing in type, severity, and duration. This pilot study introduces a novel
model, called Co-Operational Regressor Network (CoRe-Net) for blind radar
signal restoration, designed to address such limitations and drawbacks.
CoRe-Net replaces adversarial training with a novel cooperative learning
strategy, leveraging the complementary roles of its Apprentice Regressor (AR)
and Master Regressor (MR). The AR restores radar signals corrupted by various
artifacts, while the MR evaluates the quality of the restoration and provides
immediate and task-specific feedback, ensuring stable and efficient learning.
The AR, therefore, has the advantage of both self-learning and assistive
learning by the MR. The proposed model has been extensively evaluated over the
benchmark Blind Radar Signal Restoration (BRSR) dataset, which simulates
diverse real-world artifact scenarios. Under a fair experimental setup, this
study shows that CoRe-Net surpasses Op-GANs by more than 1 dB in mean SNR
improvement. To further boost the performance gain, this study proposes
multi-pass restoration by cascaded CoRe-Nets trained with a novel paradigm
called Progressive Transfer Learning (PTL), which enables iterative refinement,
thus achieving an additional 2 dB mean SNR enhancement. Multi-pass CoRe-Net
training by PTL consistently yields incremental performance improvements
through successive restoration passes whilst highlighting CoRe-Net's ability to
handle such a complex and varying blend of artifacts.
|
2501.17131
|
Scenario Understanding of Traffic Scenes Through Large Visual Language
Models
|
cs.CV
|
Deep learning models for autonomous driving, encompassing perception,
planning, and control, depend on vast datasets to achieve their high
performance. However, their generalization often suffers due to domain-specific
data distributions, making an effective scene-based categorization of samples
necessary to improve their reliability across diverse domains. Manual
captioning, though valuable, is both labor-intensive and time-consuming,
creating a bottleneck in the data annotation process. Large Visual Language
Models (LVLMs) present a compelling solution by automating image analysis and
categorization through contextual queries, often without requiring retraining
for new categories. In this study, we evaluate the capabilities of LVLMs,
including GPT-4 and LLaVA, to understand and classify urban traffic scenes on
both an in-house dataset and the BDD100K. We propose a scalable captioning
pipeline that integrates state-of-the-art models, enabling a flexible
deployment on new datasets. Our analysis, combining quantitative metrics with
qualitative insights, demonstrates the effectiveness of LVLMs in understanding
urban traffic scenarios and highlights their potential as an efficient tool for
data-driven advancements in autonomous driving.
|
2501.17132
|
ASTRAL: Automated Safety Testing of Large Language Models
|
cs.SE cs.CL
|
Large Language Models (LLMs) have recently gained attention due to their
ability to understand and generate sophisticated human-like content. However,
ensuring their safety is paramount as they might provide harmful and unsafe
responses. Existing LLM testing frameworks address various safety-related
concerns (e.g., drugs, terrorism, animal abuse) but often face challenges due
to unbalanced and obsolete datasets. In this paper, we present ASTRAL, a tool
that automates the generation and execution of test cases (i.e., prompts) for
testing the safety of LLMs. First, we introduce a novel black-box coverage
criterion to generate balanced and diverse unsafe test inputs across a diverse
set of safety categories as well as linguistic writing characteristics (i.e.,
different styles and persuasive writing techniques). Second, we propose an
LLM-based approach that leverages Retrieval Augmented Generation (RAG),
few-shot prompting strategies and web browsing to generate up-to-date test
inputs. Lastly, similar to current LLM test automation techniques, we leverage
LLMs as test oracles to distinguish between safe and unsafe test outputs,
allowing a fully automated testing approach. We conduct an extensive evaluation
on well-known LLMs, revealing the following key findings: i) GPT3.5 outperforms
other LLMs when acting as the test oracle, accurately detecting unsafe
responses, and even surpassing more recent LLMs (e.g., GPT-4), as well as LLMs
that are specifically tailored to detect unsafe LLM outputs (e.g., LlamaGuard);
ii) the results confirm that our approach can uncover nearly twice as many
unsafe LLM behaviors with the same number of test inputs compared to currently
used static datasets; and iii) our black-box coverage criterion combined with
web browsing can effectively guide the LLM on generating up-to-date unsafe test
inputs, significantly increasing the number of unsafe LLM behaviors.
|
2501.17144
|
FactCG: Enhancing Fact Checkers with Graph-Based Multi-Hop Data
|
cs.CL cs.AI
|
Prior research on training grounded factuality classification models to
detect hallucinations in large language models (LLMs) has relied on public
natural language inference (NLI) data and synthetic data. However, conventional
NLI datasets are not well-suited for document-level reasoning, which is
critical for detecting LLM hallucinations. Recent approaches to document-level
synthetic data generation involve iteratively removing sentences from documents
and annotating factuality using LLM-based prompts. While effective, this method
is computationally expensive for long documents and limited by the LLM's
capabilities. In this work, we analyze the differences between existing
synthetic training data used in state-of-the-art models and real LLM output
claims. Based on our findings, we propose a novel approach for synthetic data
generation, CG2C, that leverages multi-hop reasoning on context graphs
extracted from documents. Our fact checker model, FactCG, demonstrates improved
performance with more connected reasoning, using the same backbone models.
Experiments show it even outperforms GPT-4o on the LLM-AggreFact benchmark
with much smaller model size.
|
2501.17148
|
AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse
Autoencoders
|
cs.CL cs.AI cs.LG
|
Fine-grained steering of language model outputs is essential for safety and
reliability. Prompting and finetuning are widely used to achieve these goals,
but interpretability researchers have proposed a variety of
representation-based techniques as well, including sparse autoencoders (SAEs),
linear artificial tomography, supervised steering vectors, linear probes, and
representation finetuning. At present, there is no benchmark for making direct
comparisons between these proposals. Therefore, we introduce AxBench, a
large-scale benchmark for steering and concept detection, and report
experiments on Gemma-2-2B and 9B. For steering, we find that prompting
outperforms all existing methods, followed by finetuning. For concept
detection, representation-based methods, such as difference-in-means, perform
best. On both evaluations, SAEs are not competitive. We introduce a novel
weakly-supervised representational method (Rank-1 Representation Finetuning;
ReFT-r1), which is competitive on both tasks while providing the
interpretability advantages that prompting lacks. Along with AxBench, we train
and publicly release SAE-scale feature dictionaries for ReFT-r1 and DiffMean.
|
2501.17151
|
Scanning Trojaned Models Using Out-of-Distribution Samples
|
cs.LG
|
Scanning for trojans (backdoors) in deep neural networks is crucial due to
their significant real-world applications. There has been an increasing focus
on developing effective general trojan scanning methods across various trojan
attacks. Despite advancements, there remains a shortage of methods that perform
effectively without preconceived assumptions about the backdoor attack method.
Additionally, we have observed that current methods struggle to identify
classifiers trojaned using adversarial training. Motivated by these challenges,
our study introduces a novel scanning method named TRODO (TROjan scanning by
Detection of adversarial shifts in Out-of-distribution samples). TRODO
leverages the concept of "blind spots"--regions where trojaned classifiers
erroneously identify out-of-distribution (OOD) samples as in-distribution (ID).
We scan for these blind spots by adversarially shifting OOD samples towards
in-distribution. The increased likelihood of perturbed OOD samples being
classified as ID serves as a signature for trojan detection. TRODO is both
trojan and label mapping agnostic, effective even against adversarially trained
trojaned classifiers. It is applicable even in scenarios where training data is
absent, demonstrating high accuracy and adaptability across various scenarios
and datasets, highlighting its potential as a robust trojan scanning strategy.
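The scanning signal can be illustrated on a toy linear scorer (a stand-in for a real DNN; for a trained network the shift would recompute input gradients at each step rather than use a constant direction):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def id_score(x, w, b):
    """In-distribution score of a toy linear scorer (stand-in for a DNN)."""
    return sigmoid(x @ w + b)

def shift_towards_id(x, w, b, steps=25, lr=0.5):
    """Adversarially shift a sample towards the in-distribution region by
    ascending the logit w.x + b; for this linear toy its input gradient is w."""
    x = x.copy()
    for _ in range(steps):
        x += lr * w
    return x

def blind_spot_rate(ood_batch, w, b, threshold=0.5):
    """Fraction of shifted OOD samples the scorer now accepts as ID.
    TRODO treats an unusually high rate as the trojan signature."""
    shifted = np.array([shift_towards_id(x, w, b) for x in ood_batch])
    return float(np.mean(id_score(shifted, w, b) > threshold))

w, b = np.array([1.0, 1.0]), 0.0
ood = np.array([[-10.0, -10.0], [-8.0, -6.0]])
print(id_score(ood, w, b).round(3))  # near 0: clearly OOD before shifting
print(blind_spot_rate(ood, w, b))    # fraction accepted as ID after the shift
```

In the actual method this rate is compared between clean and suspect classifiers; a trojaned model's "blind spots" make perturbed OOD samples cross the ID boundary unusually easily.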
|
2501.17152
|
Three-Dimensional Diffusion-Weighted Multi-Slab MRI With Slice Profile
Compensation Using Deep Energy Model
|
eess.IV cs.AI physics.med-ph
|
Three-dimensional (3D) multi-slab acquisition is a technique frequently
employed in high-resolution diffusion-weighted MRI in order to achieve the best
signal-to-noise ratio (SNR) efficiency. However, this technique is limited by
slab boundary artifacts that cause intensity fluctuations and aliasing between
slabs, which reduce the accuracy of anatomical imaging. Addressing this issue
is crucial for advancing diffusion MRI quality and making high-resolution
imaging more feasible for clinical and research applications. In this work, we
propose a regularized slab profile encoding (PEN) method within a Plug-and-Play
ADMM framework, incorporating multi-scale energy (MuSE) regularization to
effectively improve the slab combined reconstruction. Experimental results
demonstrate that the proposed method significantly improves image quality
compared to non-regularized and TV-regularized PEN approaches. The regularized
PEN framework provides a more robust and efficient solution for high-resolution
3D diffusion MRI, potentially enabling clearer, more reliable anatomical
imaging across various applications.
|
2501.17159
|
IC-Portrait: In-Context Matching for View-Consistent Personalized
Portrait
|
cs.CV
|
Existing diffusion models show great potential for identity-preserving
generation. However, personalized portrait generation remains challenging due
to the diversity in user profiles, including variations in appearance and
lighting conditions. To address these challenges, we propose IC-Portrait, a
novel framework designed to accurately encode individual identities for
personalized portrait generation. Our key insight is that pre-trained diffusion
models are fast learners (e.g., 100-200 steps) for in-context dense
correspondence matching, which motivates the two major designs of our
IC-Portrait framework. Specifically, we reformulate portrait generation into
two sub-tasks: 1) Lighting-Aware Stitching: we find that masking a high
proportion of the input image, e.g., 80%, yields a highly effective
self-supervisory representation learning of reference image lighting. 2)
View-Consistent Adaptation: we leverage a synthetic view-consistent profile
dataset to learn the in-context correspondence. The reference profile can then
be warped into arbitrary poses for strong spatial-aligned view conditioning.
Coupling these two designs by simply concatenating latents to form
ControlNet-like supervision and modeling enables us to significantly enhance
the identity preservation fidelity and stability. Extensive evaluations
demonstrate that IC-Portrait consistently outperforms existing state-of-the-art
methods both quantitatively and qualitatively, with particularly notable
improvements in visual qualities. Furthermore, IC-Portrait even demonstrates
3D-aware relighting capabilities.
|
2501.17160
|
A Hybrid Deep Learning CNN Model for Enhanced COVID-19 Detection from
Computed Tomography (CT) Scan Images
|
eess.IV cs.AI cs.CV
|
Early detection of COVID-19 is crucial for effective treatment and
controlling its spread. This study proposes a novel hybrid deep learning model
for detecting COVID-19 from CT scan images, designed to assist overburdened
medical professionals. Our proposed model leverages the strengths of VGG16,
DenseNet121, and MobileNetV2 to extract features, followed by Principal
Component Analysis (PCA) for dimensionality reduction, after which the features
are stacked and classified using a Support Vector Classifier (SVC). We
conducted comparative analysis between the proposed hybrid model and individual
pre-trained CNN models, using a dataset of 2,108 training images and 373 test
images comprising both COVID-positive and non-COVID images. Our proposed hybrid
model achieved an accuracy of 98.93%, outperforming the individual models in
terms of precision, recall, F1 scores, and ROC curve performance.
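The extract-reduce-stack pipeline can be sketched with random-projection stand-ins for the CNN backbones; in the paper the features come from pretrained VGG16, DenseNet121, and MobileNetV2, and the stacked features feed an SVC rather than the shape check shown here:

```python
import numpy as np

def pca_fit_transform(X, k):
    """Project X onto its top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def extract(backbone_seed, images):
    """Stand-in feature extractor; real features come from a pretrained CNN."""
    rng = np.random.default_rng(backbone_seed)
    W = rng.normal(size=(images.shape[1], 64))
    return np.tanh(images @ W)

rng = np.random.default_rng(0)
images = rng.normal(size=(40, 256))  # 40 flattened "CT scans" (synthetic)
labels = np.array([0, 1] * 20)

# 1) extract features with each backbone, 2) reduce each with PCA,
# 3) stack the reduced features for the final classifier (an SVC in the paper).
feats = [pca_fit_transform(extract(s, images), 8) for s in (1, 2, 3)]
stacked = np.hstack(feats)
print(stacked.shape)  # (40, 24): 3 backbones x 8 PCA components each
```

Per-backbone PCA keeps the stacked dimensionality small, which is what makes a kernel classifier such as an SVC practical on top.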
|
2501.17161
|
SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model
Post-training
|
cs.AI cs.CV cs.LG
|
Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used
post-training techniques for foundation models. However, their roles in
enhancing model generalization capabilities remain unclear. This paper studies
the difference between SFT and RL on generalization and memorization, focusing
on text-based rule variants and visual variants. We introduce GeneralPoints, an
arithmetic reasoning card game, and adopt V-IRL, a real-world navigation
environment, to assess how models trained with SFT and RL generalize to unseen
variants in both textual and visual domains. We show that RL, especially when
trained with an outcome-based reward, generalizes across both rule-based
textual and visual variants. SFT, in contrast, tends to memorize training data
and struggles to generalize to out-of-distribution scenarios. Further analysis
reveals that RL improves the model's underlying visual recognition
capabilities, contributing to its enhanced generalization in the visual domain.
Despite RL's superior generalization, we show that SFT remains essential for
effective RL training; SFT stabilizes the model's output format, enabling
subsequent RL to achieve its performance gains. These findings demonstrate the
capability of RL for acquiring generalizable knowledge in complex, multi-modal
tasks.
|
2501.17162
|
CubeDiff: Repurposing Diffusion-Based Image Models for Panorama
Generation
|
cs.CV cs.LG
|
We introduce a novel method for generating 360° panoramas from text
prompts or images. Our approach leverages recent advances in 3D generation by
employing multi-view diffusion models to jointly synthesize the six faces of a
cubemap. Unlike previous methods that rely on processing equirectangular
projections or autoregressive generation, our method treats each face as a
standard perspective image, simplifying the generation process and enabling the
use of existing multi-view diffusion models. We demonstrate that these models
can be adapted to produce high-quality cubemaps without requiring
correspondence-aware attention layers. Our model allows for fine-grained text
control, generates high resolution panorama images and generalizes well beyond
its training set, whilst achieving state-of-the-art results, both qualitatively
and quantitatively. Project page: https://cubediff.github.io/
|
2501.17164
|
Split Knowledge Distillation for Large Models in IoT: Architecture,
Challenges, and Solutions
|
cs.LG cs.AI
|
Large models (LMs) have immense potential in Internet of Things (IoT)
systems, enabling applications such as intelligent voice assistants, predictive
maintenance, and healthcare monitoring. However, training LMs on edge servers
raises data privacy concerns, while deploying them directly on IoT devices is
constrained by limited computational and memory resources. We analyze the key
challenges of training LMs in IoT systems, including energy constraints,
latency requirements, and device heterogeneity, and propose potential solutions
such as dynamic resource management, adaptive model partitioning, and clustered
collaborative training. Furthermore, we propose a split knowledge distillation
framework to efficiently distill LMs into smaller, deployable versions for IoT
devices while ensuring raw data remains local. This framework integrates
knowledge distillation and split learning to minimize energy consumption and
meet low model training delay requirements. A case study is presented to
evaluate the feasibility and performance of the proposed framework.
|
2501.17166
|
Optimizing Carbon Footprint in ICT through Swarm Intelligence with
Algorithmic Complexity
|
cs.NE physics.comp-ph
|
Global emissions from fossil fuel combustion and cement production reached
record levels in 2022, signaling a resurgence to pre-pandemic levels and
indicating that emission peaks have not yet been reached.
Significant contributions to this upward trend are made by the Information and
Communication Technology (ICT) industry due to its substantial energy
consumption. This shows the need for further exploration of swarm intelligence
applications to measure and optimize the carbon footprint within ICT. All
causative factors are evaluated based on the quality of data collection;
variations from each source are quantified; and an objective function related
to carbon footprint in ICT energy management is optimized. Emphasis is placed
on the asyndetic integration of data sources to construct a convex optimization
problem. An apodictic necessity to prevent the erosion of accuracy in carbon
footprint assessments is addressed. Complexity percentages ranged from 5.25%
for the Bat Algorithm to 7.87% for Fast Bacterial Swarming, indicating
significant fluctuations in resource intensity among algorithms. These
findings show that the environmental impact of various swarm algorithms can be
quantified and compared.
|
2501.17167
|
QualityFlow: An Agentic Workflow for Program Synthesis Controlled by LLM
Quality Checks
|
cs.SE cs.AI
|
We introduce QualityFlow, a dynamic agentic workflow for program synthesis.
Given the English description of a programming problem and a set of unit tests,
the model's goal is to synthesize the correct program that solves the problem
and passes the tests. QualityFlow consists of multiple large language model
(LLM) agents that resemble a software development team, including code
generation, testing, and self-debugging. Existing program synthesis methods
face three major limitations: assumption of visible unit test conformity,
bottleneck of synthesized test quality, and deviation of self-debugging
trajectory. To address them, we propose the LLM Quality Checker, which
explicitly "imagines" whether the synthesized programs' execution would conform
to the unit tests. The Quality Checks dynamically control the workflow,
including actions to submit the final answer, clarify the problem statement,
and revert previous workflow steps. As a result, our Quality Checker can
precisely accept any correct program, mitigate faulty synthesized tests, and
prevent potential workflow deviation. The success of the Quality Checker
further enables Diversified Prompting, which encourages variations in LLM
responses to maximize the possibility that a correct program appears and passes
the quality check. In experiments, QualityFlow establishes the state-of-the-art
results on four program synthesis benchmarks: MBPP, HumanEval, and the stricter
evaluations of both MBPP and HumanEval from EvalPlus. Our systematic analysis
shows that the dynamic workflow controlled by LLM quality checks can outperform
static workflows and single-attempt zero-shot synthesis. The Quality Checker is
the center of our investigation, and we dissect its individual performance and
integrated impact on the workflow accuracy, as well as other ablations
experiments to justify our workflow design.
|
2501.17168
|
EvoGP: A GPU-accelerated Framework for Tree-based Genetic Programming
|
cs.NE cs.AI
|
Tree-based Genetic Programming (TGP) is a key evolutionary algorithm widely
used in symbolic regression, feature engineering, and scientific modeling. Its
high computational demands make GPU acceleration essential for scalable and
high-performance evolutionary computation. However, GPU acceleration of TGP
faces three key challenges: inefficient tree encoding, highly heterogeneous
genetic operations, and limited parallelism in fitness evaluation. To address
these challenges, we introduce EvoGP, a comprehensive GPU-accelerated TGP
framework. First, we design a tensorized encoding scheme to represent trees with
different structures as tensors with the same shape, optimizing memory access
and enabling efficient parallel execution. Second, we propose a unified
parallel framework for genetic operations by leveraging shared computational
primitives and implementing dedicated CUDA kernels for scalable performance.
Third, we present a fully parallel fitness evaluation strategy for symbolic
regression, exploiting both population-level and data-level parallelism to
maximize GPU utilization. Moreover, we implement a comprehensive library to
provide rich algorithm operators and benchmark problems. EvoGP is extensively
tested on various tasks, including symbolic regression, classification, and
robotics control, demonstrating its versatility and effectiveness across
diverse application scenarios. Experimental results show that EvoGP achieves up
to a 140.89x speedup over the state-of-the-art GPU-based TGP implementation,
while maintaining or exceeding the accuracy of baseline methods. EvoGP is
open-source and accessible at: https://github.com/EMI-Group/evogp.
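The idea of a tensorized tree encoding can be illustrated with a toy scheme (the opcodes and layout below are hypothetical, not EvoGP's actual format): each tree becomes a fixed-length padded row, so a whole population is one dense tensor, and evaluation broadcasts over the data batch:

```python
import numpy as np

# Toy opcodes: padding, constant, variable, addition, multiplication.
PAD, CONST, VAR, ADD, MUL = 0, 1, 2, 3, 4
MAX_LEN = 7  # every tree is padded to this many (opcode, value) slots

def encode(prefix_tokens):
    """Pad a prefix-order token list [(op, val), ...] to MAX_LEN slots."""
    row = prefix_tokens + [(PAD, 0.0)] * (MAX_LEN - len(prefix_tokens))
    return np.array(row, dtype=np.float64)

def evaluate(tree, x):
    """Evaluate one encoded tree on a whole batch of inputs x at once
    (data-level parallelism via NumPy broadcasting)."""
    stack = []
    for op, val in tree[::-1]:  # prefix notation: scan tokens right-to-left
        if op == PAD:
            continue
        if op == CONST:
            stack.append(np.full_like(x, val))
        elif op == VAR:
            stack.append(x)
        else:
            a, b = stack.pop(), stack.pop()
            stack.append(a + b if op == ADD else a * b)
    return stack.pop()

# Population of two trees, x*x + 2 and 3*x, stored as one (2, MAX_LEN, 2) tensor.
pop = np.stack([
    encode([(ADD, 0), (MUL, 0), (VAR, 0), (VAR, 0), (CONST, 2.0)]),
    encode([(MUL, 0), (CONST, 3.0), (VAR, 0)]),
])
x = np.array([0.0, 1.0, 2.0])
print(evaluate(pop[0], x))  # → [2. 3. 6.]
```

Because every tree occupies the same-shaped row, crossover, mutation, and fitness evaluation can be written as bulk tensor operations (and, in EvoGP, as CUDA kernels) instead of per-tree pointer chasing.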
|
2501.17170
|
Benchmarking Randomized Optimization Algorithms on Binary, Permutation,
and Combinatorial Problem Landscapes
|
cs.NE cs.AI cs.CL cs.LG
|
In this paper, we evaluate the performance of four randomized optimization
algorithms: Randomized Hill Climbing (RHC), Simulated Annealing (SA), Genetic
Algorithms (GA), and MIMIC (Mutual Information Maximizing Input Clustering),
across three distinct types of problems: binary, permutation, and
combinatorial. We systematically compare these algorithms using a set of
benchmark fitness functions that highlight the specific challenges and
requirements of each problem category. Our study analyzes each algorithm's
effectiveness based on key performance metrics, including solution quality,
convergence speed, computational cost, and robustness. Results show that while
MIMIC and GA excel in producing high-quality solutions for binary and
combinatorial problems, their computational demands vary significantly. RHC and
SA, while computationally less expensive, demonstrate limited performance in
complex problem landscapes. The findings offer valuable insights into the
trade-offs between different optimization strategies and provide practical
guidance for selecting the appropriate algorithm based on the type of problems,
accuracy requirements, and computational constraints.
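As a minimal illustration of the binary-problem setting, here is simulated annealing on the classic OneMax fitness (a toy stand-in for the paper's benchmark suite); driving the temperature to zero recovers randomized hill climbing:

```python
import math
import random

def one_max(bits):
    """Binary benchmark fitness: the number of 1-bits (optimum = len(bits))."""
    return sum(bits)

def neighbor(bits):
    """Flip one randomly chosen bit."""
    i = random.randrange(len(bits))
    flipped = bits[:]
    flipped[i] ^= 1
    return flipped

def simulated_annealing(n=30, iters=5000, t0=2.0, cooling=0.999, seed=0):
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    t = t0
    for _ in range(iters):
        y = neighbor(x)
        delta = one_max(y) - one_max(x)
        # Always accept improvements; accept worse moves with prob. exp(delta/t),
        # which shrinks as the temperature cools (t ~ 0 behaves like RHC).
        if delta >= 0 or random.random() < math.exp(delta / t):
            x = y
        t *= cooling
    return x

best = simulated_annealing()
print(one_max(best))  # close to the optimum of 30
```

GA and MIMIC would instead operate on a population and a learned distribution over bit strings, which is exactly where their higher computational cost comes from.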
|
2501.17171
|
Separated Inter/Intra-Modal Fusion Prompts for Compositional Zero-Shot
Learning
|
cs.CV cs.AI cs.LG eess.IV
|
Compositional Zero-Shot Learning (CZSL) aims to recognize subtle differences
in meaning or the combination of states and objects through the use of known
and unknown concepts during training. Existing methods either focus on prompt
configuration or on using prompts to tune the pre-trained Vision-Language
model. However, these methods face challenges in accurately identifying subtle
differences in meaning or in combining states with objects. To address both
issues and construct an efficient and effective CZSL technique, we propose
improving attribute recognition performance by utilizing diverse Prompt
Learning with an Inter/Intra-Modality Fusion Synthesizer for scene
understanding involving subtle semantic differences and multiple objects.
|
2501.17172
|
Towards spiking analog hardware implementation of a trajectory
interpolation mechanism for smooth closed-loop control of a spiking robot arm
|
cs.NE cs.RO
|
Neuromorphic engineering aims to incorporate the computational principles
found in animal brains into modern technological systems. Following this
approach, in this work we propose a closed-loop neuromorphic control system for
an event-based robotic arm. The proposed system consists of a shifted
Winner-Take-All spiking network for interpolating a reference trajectory and a
spiking comparator network responsible for controlling the flow continuity of
the trajectory, which is fed back to the actual position of the robot. The
comparator model is based on a differential position comparison neural network,
which governs the execution of the next trajectory points to close the control
loop between both components of the system. To evaluate the system, we
implemented and deployed the model on a mixed-signal analog-digital
neuromorphic platform, the DYNAP-SE2, to facilitate integration and
communication with the ED-Scorbot robotic arm platform. Experimental results on
one joint of the robot validate the use of this architecture and pave the way
for future neuro-inspired control of the entire robot.
|
2501.17173
|
Model Evaluation of a Transformable CubeSat for Nonholonomic Attitude
Reorientation Using a Drop Tower
|
astro-ph.IM cs.RO
|
This paper presents a design for a drop tower test to evaluate a numerical
model for a structurally reconfigurable spacecraft with actuatable joints,
referred to as a transformable spacecraft. A mock-up robot for a 3U-sized
transformable spacecraft is designed to fit within the limited time and space
of the microgravity environment available in the drop tower. The robot performs agile
reorientation, referred to as nonholonomic attitude control, by actuating
joints in a particular manner. To adapt to the very short duration of
microgravity in the drop tower test, a successive joint actuation maneuver is
optimized to maximize the amount of attitude reorientation within the time
constraint. The robot records the angular velocity history of all four bodies,
and the data is analyzed to evaluate the accuracy of the numerical model. We
confirm that the constructed numerical model sufficiently replicates the
robot's motion and show that the post-experiment model corrections further
improve the accuracy of the numerical simulations. Finally, the difference
between this drop tower test and an actual on-orbit demonstration is discussed
to outline the prospects.
|
2501.17174
|
Extractive Schema Linking for Text-to-SQL
|
cs.DB cs.AI cs.CL
|
Text-to-SQL is emerging as a practical interface for real world databases.
The dominant paradigm for Text-to-SQL is cross-database or schema-independent,
supporting application schemas unseen during training. The schema of a database
defines the tables, columns, column types and foreign key connections between
tables. Real world schemas can be large, containing hundreds of columns, but
for any particular query only a small fraction will be relevant. Placing the
entire schema in the prompt for an LLM can be impossible for models with
smaller token windows and expensive even when the context window is large
enough to allow it. Even apart from computational considerations, the accuracy
of the model can be improved by focusing the SQL generation on only the
relevant portion of the database. Schema linking identifies the portion of the
database schema useful for the question. Previous work on schema linking has
used graph neural networks, generative LLMs, and cross encoder classifiers. We
introduce a new approach to adapt decoder-only LLMs to schema linking that is
both computationally more efficient and more accurate than the generative
approach. Additionally our extractive approach permits fine-grained control
over the precision-recall trade-off for schema linking.
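The precision-recall control enabled by an extractive scorer can be sketched as thresholding per-column relevance scores (the schema and scores below are invented for illustration):

```python
def link_schema(column_scores, threshold):
    """Keep columns whose relevance score clears the threshold.
    Lowering the threshold trades precision for recall."""
    return [col for col, s in column_scores.items() if s >= threshold]

# Hypothetical scores an extractive model might assign for the question
# "Which customers placed orders over $100?"
scores = {
    "customers.id": 0.92,
    "customers.name": 0.85,
    "orders.customer_id": 0.88,
    "orders.total": 0.90,
    "orders.ship_date": 0.20,
    "products.sku": 0.05,
}
print(link_schema(scores, 0.5))  # high precision: only clearly relevant columns
print(link_schema(scores, 0.1))  # high recall: also pulls in borderline columns
```

A generative linker emits column names token by token, so there is no single knob like this threshold; scoring every column independently is what makes the extractive approach both cheaper and tunable.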
|
2501.17175
|
Document-Level Sentiment Analysis of Urdu Text Using Deep Learning
Techniques
|
cs.CL cs.AI cs.IR
|
Document-level Urdu Sentiment Analysis (SA) is a challenging Natural Language
Processing (NLP) task, as it deals with large documents in a resource-poor
language. Large documents contain many words that express differing
viewpoints. Deep learning (DL) models comprise complex neural network
architectures that can learn diverse features of the data to classify various
sentiments. Besides audio, image, and video classification, DL algorithms are
now extensively used in text-based classification problems. To explore
powerful DL techniques for Urdu SA, we have applied five DL architectures,
namely Bidirectional Long Short Term Memory (BiLSTM), Convolutional Neural
Network (CNN), Convolutional Neural Network with Bidirectional Long Short Term
Memory (CNN-BiLSTM), Bidirectional Encoder Representations from Transformers
(BERT), and a proposed DL hybrid model that integrates BiLSTM with a Single
Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN). The proposed
and baseline techniques are applied to an Urdu customer-support data set and
an IMDB Urdu movie-review data set, using pretrained Urdu word embeddings
suitable for document-level SA. The results show that our proposed model
outperforms all other DL techniques for Urdu SA: BiLSTM-SLMFCNN achieved 83%,
79%, and 83% accuracy on the small, medium, and large IMDB Urdu movie-review
data sets, respectively, and 94% on the Urdu customer-support data set.
|
2501.17176
|
Prompt-Based Cost-Effective Evaluation and Operation of ChatGPT as a
Computer Programming Teaching Assistant
|
cs.CY cs.AI cs.CL
|
The dream of achieving a student-teacher ratio of 1:1 is closer than ever
thanks to the emergence of large language models (LLMs). One potential
application of these models in the educational field would be to provide
feedback to students in university introductory programming courses, so that a
student struggling to solve a basic implementation problem could seek help from
an LLM available 24/7. This article focuses on studying three aspects related
to such an application. First, the performance of two well-known models,
GPT-3.5T and GPT-4T, in providing feedback to students is evaluated. The
empirical results showed that GPT-4T performs much better than GPT-3.5T,
however, it is not yet ready for use in a real-world scenario. This is due to
the possibility of generating incorrect information that potential users may
not always be able to detect. Second, the article proposes a carefully designed
prompt using in-context learning techniques that allows automating important
parts of the evaluation process, as well as providing a lower bound for the
fraction of feedback responses containing incorrect information, saving time and effort.
This was possible because the resulting feedback has a programmatically
analyzable structure that incorporates diagnostic information about the LLM's
performance in solving the requested task. Third, the article also suggests a
possible strategy for implementing a practical learning tool based on LLMs,
which is rooted on the proposed prompting techniques. This strategy opens up a
whole range of interesting possibilities from a pedagogical perspective.
|
2501.17178
|
Tuning LLM Judge Design Decisions for 1/1000 of the Cost
|
cs.CL cs.AI cs.LG
|
Evaluating Large Language Models (LLMs) often requires costly human
annotations. To address this, LLM-based judges have been proposed, which
compare the outputs of two LLMs, enabling the ranking of models without human
intervention. While several approaches have been proposed, many confounding
factors are present across papers: for instance, the model, the prompt, and
other hyperparameters are typically changed at the same time, making
apples-to-apples comparisons challenging. In this paper, we propose to
systematically analyze and tune the hyperparameters of LLM judges. To
alleviate the high cost of evaluating a judge, we leverage multi-objective,
multi-fidelity optimization, which finds judges that trade accuracy for cost
and significantly reduces the cost of the search. Our method identifies judges
that not only outperform existing benchmarks in accuracy and cost-efficiency
but also utilize open-weight models, ensuring greater accessibility and
reproducibility.
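The multi-objective view can be illustrated by filtering judge configurations to the accuracy/cost Pareto front (the judge names and numbers below are invented for illustration):

```python
def pareto_front(judges):
    """Return judge configs not dominated on (accuracy up, cost down)."""
    front = []
    for name, acc, cost in judges:
        dominated = any(
            a >= acc and c <= cost and (a > acc or c < cost)
            for _, a, c in judges
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical (judge config, accuracy, $ per 1k comparisons) triples.
judges = [
    ("gpt4-long-prompt", 0.90, 8.0),
    ("gpt4-short-prompt", 0.89, 3.0),
    ("open-8b-tuned", 0.86, 0.4),
    ("open-8b-default", 0.80, 0.4),
    ("open-70b", 0.88, 1.2),
]
print(pareto_front(judges))
```

A multi-fidelity search evaluates cheap configurations on small annotation subsets first, spending full-budget evaluations only on candidates near this front.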
|
2501.17181
|
An AI-Driven Live Systematic Reviews in the Brain-Heart Interconnectome:
Minimizing Research Waste and Advancing Evidence Synthesis
|
cs.AI cs.CL cs.DL cs.IR
|
The Brain-Heart Interconnectome (BHI) combines neurology and cardiology but
is hindered by inefficiencies in evidence synthesis, poor adherence to quality
standards, and research waste. To address these challenges, we developed an
AI-driven system to enhance systematic reviews in the BHI domain. The system
integrates automated detection of Population, Intervention, Comparator,
Outcome, and Study design (PICOS), semantic search using vector embeddings,
graph-based querying, and topic modeling to identify redundancies and
underexplored areas. Core components include a Bi-LSTM model achieving 87%
accuracy for PICOS compliance, a study design classifier with 95.7% accuracy,
and Retrieval-Augmented Generation (RAG) with GPT-3.5, which outperformed GPT-4
for graph-based and topic-driven queries. The system provides real-time
updates, reducing research waste through a living database and offering an
interactive interface with dashboards and conversational AI. While initially
developed for BHI, the system's adaptable architecture enables its application
across various biomedical fields, supporting rigorous evidence synthesis,
efficient resource allocation, and informed clinical decision-making.
|
2501.17182
|
Dialogue Systems for Emotional Support via Value Reinforcement
|
cs.CL cs.AI cs.CY cs.HC
|
Emotional support dialogue systems aim to reduce help-seekers' distress and
help them overcome challenges. While human values – core beliefs
that shape an individual's priorities – are increasingly
emphasized in contemporary psychological therapy for their role in fostering
internal transformation and long-term emotional well-being, their integration
into emotional support systems remains underexplored. To bridge this gap, we
present a value-driven method for training emotional support dialogue systems
designed to reinforce positive values in seekers. Our model learns to identify
which values to reinforce at each turn and how to do so, by leveraging online
support conversations from Reddit. The model demonstrated superior performance
in emotional support capabilities, outperforming various baselines. Notably, it
more effectively explored and elicited values from seekers. Expert assessments
by therapists highlighted two key strengths of our model: its ability to
validate users' challenges and its effectiveness in emphasizing positive
aspects of their situations – both crucial elements of value
reinforcement. Our work validates the effectiveness of value reinforcement for
emotional support systems and establishes a foundation for future research.
|
2501.17183
|
LLM Evaluation Based on Aerospace Manufacturing Expertise: Automated
Generation and Multi-Model Question Answering
|
cs.CL cs.AI
|
Aerospace manufacturing demands exceptionally high precision in technical
parameters. The remarkable performance of Large Language Models (LLMs), such as
GPT-4 and Qwen, in Natural Language Processing has sparked industry interest in
their application to tasks including process design, material selection, and
tool information retrieval. However, LLMs are prone to generating
"hallucinations" in specialized domains, producing inaccurate or false
information that poses significant risks to the quality of aerospace products
and flight safety. This paper introduces a set of evaluation metrics tailored
for LLMs in aerospace manufacturing, aiming to assess their accuracy by
analyzing their performance in answering questions grounded in professional
knowledge. Firstly, key information is extracted through in-depth textual
analysis of classic aerospace manufacturing textbooks and guidelines.
Subsequently, utilizing LLM generation techniques, we meticulously construct
multiple-choice questions with multiple correct answers of varying difficulty.
Following this, different LLM models are employed to answer these questions,
and their accuracy is recorded. Experimental results demonstrate that the
capabilities of LLMs in aerospace professional knowledge are in urgent need of
improvement. This study provides a theoretical foundation and practical
guidance for the application of LLMs in aerospace manufacturing, addressing a
critical gap in the field.
|
2501.17184
|
Deep Learning in Wireless Communication Receiver: A Survey
|
cs.IT cs.LG cs.NI math.IT
|
The design of wireless communication receivers to enhance signal processing
in complex and dynamic environments is going through a transformation by
leveraging deep neural networks (DNNs). Traditional wireless receivers depend
on mathematical models and algorithms, which do not have the ability to adapt
or learn from data. In contrast, deep learning-based receivers are more
suitable for modern wireless communication systems because they can learn from
data and adapt accordingly. This survey explores various deep learning
architectures such as multilayer perceptrons (MLPs), convolutional neural
networks (CNNs), recurrent neural networks (RNNs), generative adversarial
networks (GANs), and autoencoders, focusing on their application in the design
of wireless receivers. Key modules of a receiver such as synchronization,
channel estimation, equalization, space-time decoding, demodulation, decoding,
interference cancellation, and modulation classification are discussed in the
context of advanced wireless technologies like orthogonal frequency division
multiplexing (OFDM), multiple input multiple output (MIMO), semantic
communication, task-oriented communication, and next-generation (Next-G)
networks. The survey not only emphasizes the potential of deep learning-based
receivers in future wireless communication but also investigates different
challenges of deep learning-based receivers, such as data availability,
security and privacy concerns, model interpretability, computational
complexity, and integration with legacy systems.
|
2501.17186
|
Complete Chess Games Enable LLM Become A Chess Master
|
cs.AI cs.CL cs.LG
|
Large language models (LLMs) have shown remarkable abilities in text
generation, question answering, language translation, reasoning, and many other
tasks. They continue to advance rapidly and are becoming increasingly
influential in various fields, from technology and business to education and
entertainment. Despite LLMs' success in multiple areas, their ability to play
abstract games, such as chess, is underexplored. Chess-playing requires a
language model to output legal and reasonable moves from textual inputs. Here,
we propose ChessLLM, a large language model that plays full chess games. We
transform the game into a textual format, with the best move represented in
Forsyth-Edwards Notation. We show that, with simple supervised fine-tuning, our
model has achieved
a professional-level Elo rating of 1788 in matches against the standard
Elo-rated Stockfish when permitted to sample 10 times. We further show that
data quality is important: supervision on long-round data yields a 350-point
Elo improvement over short-round data.
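For context on the Elo figures quoted above, the standard Elo model maps a rating difference to an expected score. A minimal sketch of that formula (not part of the paper's method, just the rating convention it reports against):

```python
def elo_expected_score(rating_a, rating_b):
    # Expected score (win probability plus half the draw probability)
    # of player A against player B under the standard Elo model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
```

Under this convention, a 350-point rating gap corresponds to an expected score of roughly 0.88 for the stronger player.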
|
2501.17187
|
Visualizing Uncertainty in Translation Tasks: An Evaluation of LLM
Performance and Confidence Metrics
|
cs.CL cs.AI cs.LG
|
Large language models (LLMs) are increasingly utilized for machine
translation, yet their predictions often exhibit uncertainties that hinder
interpretability and user trust. Effectively visualizing these uncertainties
can enhance the usability of LLM outputs, particularly in contexts where
translation accuracy is critical. This paper addresses two primary objectives:
(1) providing users with token-level insights into model confidence and (2)
developing a web-based visualization tool to quantify and represent translation
uncertainties. To achieve these goals, we utilized the T5 model with the WMT19
dataset for translation tasks and evaluated translation quality using
established metrics such as BLEU, METEOR, and ROUGE. We introduced three novel
uncertainty quantification (UQ) metrics: (1) the geometric mean of token
probabilities, (2) the arithmetic mean of token probabilities, and (3) the
arithmetic mean of the kurtosis of token distributions. These metrics provide a
simple yet effective framework for evaluating translation performance. Our
analysis revealed a linear relationship between the traditional evaluation
metrics and our UQ metrics, demonstrating the validity of our approach.
Additionally, we developed an interactive web-based visualization that uses a
color gradient to represent token confidence. This tool offers users a clear
and intuitive understanding of translation quality while providing valuable
insights into model performance. Overall, we show that our UQ metrics and
visualization are both robust and interpretable, offering practical tools for
evaluating and assessing machine translation systems.
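The three UQ metrics described above can be written down directly. A minimal sketch, assuming token probabilities are given as a list and each token's output distribution as a list of probabilities:

```python
import math
from statistics import mean

def geometric_mean_prob(token_probs):
    # Metric 1: geometric mean of token probabilities
    # (equivalently, exp of the mean token log-probability).
    return math.exp(mean(math.log(p) for p in token_probs))

def arithmetic_mean_prob(token_probs):
    # Metric 2: arithmetic mean of token probabilities.
    return mean(token_probs)

def kurtosis(xs):
    # Sample (non-excess) kurtosis: the fourth standardized moment.
    m = mean(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * var ** 2)

def mean_kurtosis(token_distributions):
    # Metric 3: arithmetic mean of the kurtosis of each token's
    # output probability distribution.
    return mean(kurtosis(d) for d in token_distributions)
```

The geometric mean penalizes a single low-confidence token much more sharply than the arithmetic mean, which is why the two can disagree on borderline translations.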
|
2501.17188
|
Letters, Colors, and Words: Constructing the Ideal Building Blocks Set
|
cs.AI cs.NE
|
Define a building blocks set to be a collection of n cubes (each with six
sides) where each side is assigned one letter and one color from a palette of m
colors. We propose a novel problem of assigning letters and colors to each face
so as to maximize the number of words one can spell from a chosen dataset that
are either mono words (all letters share the same color) or rainbow words (all
letters have unique colors). We explore this problem for a chosen set of
English words, up to six letters long, from a typical vocabulary of a US
American 14 year old and explore the problem when n=6 and m=6, with the added
restriction that each color appears exactly once on each cube. The problem is
intractable, as the size of the solution space makes a brute force approach
computationally infeasible. To address this, we explore a range of
optimization techniques: random search, simulated annealing, two distinct tree
search methods (greedy and best-first), and a genetic algorithm. Additionally,
we attempted to implement a reinforcement learning approach; however, the model
failed to converge to viable solutions within the problem's constraints. Among
these methods, the genetic algorithm delivered the best performance, achieving
a total of 2846 mono and rainbow words.
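The mono/rainbow scoring rule can be made concrete with a brute-force checker. This is a hypothetical sketch of the objective being optimized, not the authors' code, and it is only viable for tiny word sets:

```python
from itertools import permutations, product

def spellings(word, cubes):
    # cubes: list of 6-face cubes, each face a (letter, color) pair.
    # Yield the color sequence of every way to spell `word` using a
    # distinct cube per letter.
    for assignment in permutations(range(len(cubes)), len(word)):
        options = []
        for ch, ci in zip(word, assignment):
            matches = [color for letter, color in cubes[ci] if letter == ch]
            if not matches:
                break
            options.append(matches)
        else:
            yield from product(*options)

def counts_toward_score(word, cubes):
    # A word scores if some spelling is mono (one color used) or
    # rainbow (all colors distinct).
    for colors in spellings(word, cubes):
        distinct = len(set(colors))
        if distinct == 1 or distinct == len(colors):
            return True
    return False
```

Summing `counts_toward_score` over the vocabulary gives the objective that the search methods above maximize over cube assignments.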
|
2501.17190
|
A Comprehensive Study on Fine-Tuning Large Language Models for Medical
Question Answering Using Classification Models and Comparative Analysis
|
cs.CL
|
This paper presents the overview of the development and fine-tuning of large
language models (LLMs) designed specifically for answering medical questions.
We are mainly improving the accuracy and efficiency of providing reliable
answers to medical queries. In our approach, we have two stages, prediction of
a specific label for the received medical question and then providing a
predefined answer for this label. Various models such as RoBERTa and BERT were
examined and evaluated based on their classification ability. The models were
trained using the
datasets derived from 6,800 samples that were scraped from Healthline.com with
additional synthetic data. For evaluation, we conducted a comparative study
using 5-fold cross-validation. To assess performance, we used metrics such as
accuracy, precision, recall, and F1 score, and also recorded the training time.
The
LoRA Roberta-large model achieved an accuracy of 78.47%, precision of 72.91%,
recall of 76.95%, and an F1 score of 73.56%. The Roberta-base model
demonstrated high performance with an accuracy of 99.87%, precision of 99.81%,
recall of 99.86%, and an F1 score of 99.82%. The Bert Uncased model showed
strong results with an accuracy of 95.85%, precision of 94.42%, recall of
95.58%, and an F1 score of 94.72%. Lastly, the Bert Large Uncased model
achieved the highest performance, with an accuracy, precision, recall, and F1
score of 100%. These results indicate the capability of the models to classify
medical questions and generate accurate answers, supporting the development of
improved health-related AI solutions.
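The two-stage approach described above (predict a label, then return its predefined answer) is simple to express. A minimal sketch, where `classify` is a stub standing in for the fine-tuned RoBERTa/BERT classifiers:

```python
def answer_medical_question(question, classify, predefined_answers):
    # Stage 1: predict a label for the incoming medical question.
    label = classify(question)
    # Stage 2: look up the predefined answer for that label.
    return predefined_answers.get(label, "No answer available for this topic.")
```

Because the generation step is a lookup, answer reliability reduces entirely to the classifier's accuracy, which is what the paper's comparative study measures.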
|
2501.17191
|
Aspect-Aware Decomposition for Opinion Summarization
|
cs.CL cs.IR
|
Opinion summarization plays a key role in deriving meaningful insights from
large-scale online reviews. To make this process more explainable and grounded,
we propose a modular approach guided by review aspects which separates the
tasks of aspect identification, opinion consolidation, and meta-review
synthesis, enabling greater transparency and ease of inspection. We conduct
extensive experiments across datasets representing scientific research,
business, and product domains. Results show that our method generates more
grounded summaries compared to strong baseline models, as verified through
automated and human evaluations. Additionally, our modular approach, which
incorporates reasoning based on review aspects, produces more informative
intermediate outputs than knowledge-agnostic decomposed prompting. These
intermediate outputs can also effectively support humans in summarizing
opinions from large volumes of reviews.
|
2501.17194
|
AI-assisted German Employment Contract Review: A Benchmark Dataset
|
cs.CL
|
Employment contracts are used to agree upon the working conditions between
employers and employees all over the world. Understanding and reviewing
contracts for void or unfair clauses requires extensive knowledge of the legal
system and terminology. Recent advances in Natural Language Processing (NLP)
hold promise for assisting in these reviews. However, applying NLP techniques
on legal text is particularly difficult due to the scarcity of expert-annotated
datasets. To address this issue and as a starting point for our effort in
assisting lawyers with contract reviews using NLP, we release an anonymized and
annotated benchmark dataset for legality and fairness review of German
employment contract clauses, along with baseline model evaluations.
|
2501.17195
|
Atla Selene Mini: A General Purpose Evaluation Model
|
cs.CL cs.AI
|
We introduce Atla Selene Mini, a state-of-the-art small language
model-as-a-judge (SLMJ). Selene Mini is a general-purpose evaluator that
outperforms the best SLMJs and GPT-4o-mini on overall performance across 11
out-of-distribution benchmarks, spanning absolute scoring, classification, and
pairwise preference tasks. It is the highest-scoring 8B generative model on
RewardBench, surpassing strong baselines like GPT-4o and specialized judges. To
achieve this, we develop a principled data curation strategy that augments
public datasets with synthetically generated critiques and ensures high quality
through filtering and dataset ablations. We train our model on a combined
direct preference optimization (DPO) and supervised fine-tuning (SFT) loss, and
produce a highly promptable evaluator that excels in real-world scenarios.
Selene Mini shows dramatically improved zero-shot agreement with human expert
evaluations on financial and medical industry datasets. It is also robust to
variations in prompt format. Preliminary results indicate that Selene Mini is
the top-ranking evaluator in a live, community-driven Judge Arena. We release
the model weights on HuggingFace
(https://hf.co/AtlaAI/Selene-1-Mini-Llama-3.1-8B) and Ollama to encourage
widespread community adoption.
|
2501.17200
|
Improving LLM Leaderboards with Psychometrical Methodology
|
cs.CL cs.AI stat.AP
|
The rapid development of large language models (LLMs) has necessitated the
creation of benchmarks to evaluate their performance. These benchmarks resemble
human tests and surveys, as they consist of sets of questions designed to
measure emergent properties in the cognitive behavior of these systems.
However, unlike the well-defined traits and abilities studied in social
sciences, the properties measured by these benchmarks are often vaguer and less
rigorously defined. The most prominent benchmarks are often grouped into
leaderboards for convenience, aggregating performance metrics and enabling
comparisons between models. Unfortunately, these leaderboards typically rely on
simplistic aggregation methods, such as taking the average score across
benchmarks. In this paper, we demonstrate the advantages of applying
contemporary psychometric methodologies - originally developed for human tests
and surveys - to improve the ranking of large language models on leaderboards.
Using data from the Hugging Face Leaderboard as an example, we compare the
results of the conventional naive ranking approach with a psychometrically
informed ranking. The findings highlight the benefits of adopting psychometric
techniques for more robust and meaningful evaluation of LLM performance.
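One simple psychometrically motivated alternative to naive averaging is to standardize each benchmark's scores before aggregating, so no single benchmark's scale dominates the ranking. This sketch illustrates the contrast only; it is not the paper's actual methodology:

```python
from statistics import mean, pstdev

def naive_rank(scores):
    # scores: {model: [per-benchmark scores]}; naive leaderboard ranking
    # by the raw average across benchmarks.
    return sorted(scores, key=lambda m: -mean(scores[m]))

def zscore_rank(scores):
    # Standardize each benchmark column to zero mean / unit variance,
    # then rank models by their mean z-score.
    models = list(scores)
    n_bench = len(next(iter(scores.values())))
    z = {m: [] for m in models}
    for b in range(n_bench):
        col = [scores[m][b] for m in models]
        mu, sd = mean(col), pstdev(col) or 1.0
        for m in models:
            z[m].append((scores[m][b] - mu) / sd)
    return sorted(models, key=lambda m: -mean(z[m]))
```

When benchmarks have very different scales, the two rankings can disagree, which is exactly the failure mode of simplistic aggregation that the paper highlights.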
|
2501.17201
|
Smart Cubing for Graph Search: A Comparative Study
|
cs.AI
|
Parallel solving via cube-and-conquer is a key method for scaling SAT solvers
to hard instances. While cube-and-conquer has proven successful for pure SAT
problems, notably the Pythagorean triples conjecture, its application to SAT
solvers extended with propagators presents unique challenges, as these
propagators learn constraints dynamically during the search.
We study this problem using SAT Modulo Symmetries (SMS) as our primary test
case, where a symmetry-breaking propagator reduces the search space by learning
constraints that eliminate isomorphic graphs. Through extensive experimentation
comprising over 10,000 CPU hours, we systematically evaluate different
cube-and-conquer variants on three well-studied combinatorial problems. Our
methodology combines prerun phases to collect learned constraints, various
cubing strategies, and parameter tuning via algorithm configuration and
LLM-generated design suggestions.
The comprehensive empirical evaluation provides new insights into effective
cubing strategies for propagator-based SAT solving, with our best method
achieving speedups of 2-3x from improved cubing and parameter tuning, providing
an additional 1.5-2x improvement on harder instances.
|
2501.17202
|
Audio Large Language Models Can Be Descriptive Speech Quality Evaluators
|
cs.SD cs.CL eess.AS
|
An ideal multimodal agent should be aware of the quality of its input
modalities. Recent advances have enabled large language models (LLMs) to
incorporate auditory systems for handling various speech-related tasks.
However, most audio LLMs remain unaware of the quality of the speech they
process. This limitation arises because speech quality evaluation is typically
excluded from multi-task training due to the lack of suitable datasets. To
address this, we introduce the first natural language-based speech evaluation
corpus, generated from authentic human ratings. In addition to the overall Mean
Opinion Score (MOS), this corpus offers detailed analysis across multiple
dimensions and identifies causes of quality degradation. It also enables
descriptive comparisons between two speech samples (A/B tests) with human-like
judgment. Leveraging this corpus, we propose an alignment approach with LLM
distillation (ALLD) to guide the audio LLM in extracting relevant information
from raw speech and generating meaningful responses. Experimental results
demonstrate that ALLD outperforms the previous state-of-the-art regression
model in MOS prediction, with a mean square error of 0.17 and an A/B test
accuracy of 98.6%. Additionally, the generated responses achieve BLEU scores of
25.8 and 30.2 on two tasks, surpassing the capabilities of task-specific
models. This work advances the comprehensive perception of speech signals by
audio LLMs, contributing to the development of real-world auditory and sensory
intelligent agents.
|
2501.17205
|
Near-Optimal Algorithms for Omniprediction
|
stat.ML cs.DS cs.LG
|
Omnipredictors are simple prediction functions that encode loss-minimizing
predictions with respect to a hypothesis class $\mathcal{H}$, simultaneously
for every loss function within a class of losses $\mathcal{L}$. In this work,
we give near-optimal learning algorithms for omniprediction, in both the online
and offline settings. To begin, we give an oracle-efficient online learning
algorithm that achieves $(\mathcal{L},\mathcal{H})$-omniprediction with
$\tilde{O}(\sqrt{T \log |\mathcal{H}|})$ regret for any class of Lipschitz loss
functions $\mathcal{L} \subseteq \mathcal{L}_\mathrm{Lip}$. Quite surprisingly,
this regret bound matches the optimal regret for \emph{minimization of a single
loss function} (up to a $\sqrt{\log(T)}$ factor). Given this online algorithm,
we develop an online-to-offline conversion that achieves near-optimal
complexity across a number of measures. In particular, for all bounded loss
functions within the class of Bounded Variation losses
$\mathcal{L}_\mathrm{BV}$ (which include all convex, all Lipschitz, and all
proper losses) and any (possibly-infinite) $\mathcal{H}$, we obtain an offline
learning algorithm that, leveraging an (offline) ERM oracle and $m$ samples
from $\mathcal{D}$, returns an efficient
$(\mathcal{L}_{\mathrm{BV}},\mathcal{H},\varepsilon(m))$-omnipredictor for
$\varepsilon(m)$ scaling near-linearly in the Rademacher complexity of
$\mathrm{Th} \circ \mathcal{H}$.
|
2501.17206
|
Integrating Reinforcement Learning and AI Agents for Adaptive Robotic
Interaction and Assistance in Dementia Care
|
cs.AI cs.RO
|
This study explores a novel approach to advancing dementia care by
integrating socially assistive robotics, reinforcement learning (RL), large
language models (LLMs), and clinical domain expertise within a simulated
environment. This integration addresses the critical challenge of limited
experimental data in socially assistive robotics for dementia care, providing a
dynamic simulation environment that realistically models interactions between
persons living with dementia (PLWDs) and robotic caregivers. The proposed
framework introduces a probabilistic model to represent the cognitive and
emotional states of PLWDs, combined with an LLM-based behavior simulation to
emulate their responses. We further develop and train an adaptive RL system
enabling humanoid robots, such as Pepper, to deliver context-aware and
personalized interactions and assistance based on PLWDs' cognitive and
emotional states. The framework also generalizes to computer-based agents,
highlighting its versatility. Results demonstrate that the RL system, enhanced
by LLMs, effectively interprets and responds to the complex needs of PLWDs,
providing tailored caregiving strategies. This research contributes to
human-computer and human-robot interaction by offering a customizable AI-driven
caregiving platform, advancing understanding of dementia-related challenges,
and fostering collaborative innovation in assistive technologies. The proposed
approach has the potential to enhance the independence and quality of life for
PLWDs while alleviating caregiver burden, underscoring the transformative role
of interaction-focused AI systems in dementia care.
|
2501.17207
|
Rethinking Functional Brain Connectome Analysis: Do Graph Deep Learning
Models Help?
|
cs.NE cs.AI cs.LG q-bio.NC
|
Functional brain connectome is crucial for deciphering the neural mechanisms
underlying cognitive functions and neurological disorders. Graph deep learning
models have recently gained tremendous popularity in this field. However, their
actual effectiveness in modeling the brain connectome remains unclear. In this
study, we re-examine graph deep learning models based on four large-scale
neuroimaging studies encompassing diverse cognitive and clinical outcomes.
Surprisingly, we find that the message aggregation mechanism, a hallmark of
graph deep learning models, does not help with predictive performance as
typically assumed, but rather consistently degrades it. To address this issue,
we propose a hybrid model combining a linear model with a graph attention
network through dual pathways, achieving robust predictions and enhanced
interpretability by revealing both localized and global neural connectivity
patterns. Our findings urge caution in adopting complex deep learning models
for functional brain connectome analysis, emphasizing the need for rigorous
experimental designs to establish tangible performance gains and perhaps more
importantly, to pursue improvements in model interpretability.
|
2501.17211
|
MR imaging in the low-field: Leveraging the power of machine learning
|
eess.IV cs.LG
|
Recent innovations in Magnetic Resonance Imaging (MRI) hardware and software
have reignited interest in low-field ($<1\,\mathrm{T}$) and ultra-low-field MRI
($<0.1\,\mathrm{T}$). These technologies offer advantages such as lower power
consumption, reduced specific absorption rate, reduced field-inhomogeneities,
and cost-effectiveness, presenting a promising alternative for resource-limited
and point-of-care settings. However, low-field MRI faces inherent challenges
like reduced signal-to-noise ratio and therefore, potentially lower spatial
resolution or longer scan times.
This chapter examines the challenges and opportunities of low-field and
ultra-low-field MRI, with a focus on the role of machine learning (ML) in
overcoming these limitations. We provide an overview of deep neural networks
and their application in enhancing low-field and ultra-low-field MRI
performance. Specific ML-based solutions, including advanced image
reconstruction, denoising, and super-resolution algorithms, are discussed. The
chapter concludes by exploring how integrating ML with low-field MRI could
expand its clinical applications and improve accessibility, potentially
revolutionizing its use in diverse healthcare settings.
|
2501.17216
|
Amplifier: Bringing Attention to Neglected Low-Energy Components in Time
Series Forecasting
|
cs.LG
|
We propose an energy amplification technique to address the issue that
existing models easily overlook low-energy components in time series
forecasting. This technique comprises an energy amplification block and an
energy restoration block. The energy amplification block enhances the energy of
low-energy components to improve the model's learning efficiency for these
components, while the energy restoration block returns the energy to its
original level. Moreover, considering that the energy-amplified data typically
displays two distinct energy peaks in the frequency spectrum, we integrate the
energy amplification technique with a seasonal-trend forecaster to model the
temporal relationships of these two peaks independently, serving as the
backbone for our proposed model, Amplifier. Additionally, we propose a
semi-channel interaction temporal relationship enhancement block for Amplifier,
which enhances the model's ability to capture temporal relationships from the
perspective of the commonality and specificity of each channel in the data.
Extensive experiments on eight time series forecasting benchmarks consistently
demonstrate our model's superiority in both effectiveness and efficiency
compared to state-of-the-art methods.
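The amplification/restoration pair described above can be sketched in the frequency domain with a naive DFT. This toy keeps complex values throughout and scales the weakest spectral bins up before processing and back down afterward; the paper's actual blocks operate inside a neural forecaster, so everything here (gain, fraction, DFT form) is an illustrative assumption:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def amplify_low_energy(series, gain=4.0, frac=0.5):
    # Energy amplification block: boost the weakest `frac` of spectral
    # components so they are no longer overlooked downstream.
    spec = dft(series)
    order = sorted(range(len(spec)), key=lambda k: abs(spec[k]))
    low = set(order[: int(len(spec) * frac)])
    boosted = [c * gain if k in low else c for k, c in enumerate(spec)]
    return idft(boosted), low  # values stay complex in this toy sketch

def restore_energy(series, low, gain=4.0):
    # Energy restoration block: divide the boosted bins back down,
    # returning the series to its original energy level.
    spec = dft(series)
    undone = [c / gain if k in low else c for k, c in enumerate(spec)]
    return idft(undone)
```

Because amplification and restoration are inverse per-bin scalings, the round trip recovers the original series exactly (up to floating-point error).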
|
2501.17256
|
Increasing Information for Model Predictive Control with Semi-Markov
Decision Processes
|
cs.LG
|
Recent works in Learning-Based Model Predictive Control of dynamical systems
show impressive sample complexity performances using criteria from Information
Theory to accelerate the learning procedure. However, the sequential
exploration opportunities are limited by the system local state, restraining
the amount of information of the observations from the current exploration
trajectory. This article resolves this limitation by introducing temporal
abstraction through the framework of Semi-Markov Decision Processes. The
framework increases the total information of the gathered data for a fixed
sampling budget, thus reducing the sample complexity.
|
2501.17260
|
ViT-2SPN: Vision Transformer-based Dual-Stream Self-Supervised
Pretraining Networks for Retinal OCT Classification
|
cs.CV cs.AI cs.LG
|
Optical Coherence Tomography (OCT) is a non-invasive imaging modality
essential for diagnosing various eye diseases. Despite its clinical
significance, developing OCT-based diagnostic tools faces challenges, such as
limited public datasets, sparse annotations, and privacy concerns. Although
deep learning has made progress in automating OCT analysis, these challenges
remain unresolved. To address these limitations, we introduce the Vision
Transformer-based Dual-Stream Self-Supervised Pretraining Network (ViT-2SPN), a
novel framework designed to enhance feature extraction and improve diagnostic
accuracy. ViT-2SPN employs a three-stage workflow: Supervised Pretraining,
Self-Supervised Pretraining (SSP), and Supervised Fine-Tuning. The pretraining
phase leverages the OCTMNIST dataset (97,477 unlabeled images across four
disease classes) with data augmentation to create dual-augmented views. A
Vision Transformer (ViT-Base) backbone extracts features, while a negative
cosine similarity loss aligns feature representations. Pretraining is conducted
over 50 epochs with a learning rate of 0.0001 and momentum of 0.999.
Fine-tuning is performed on a stratified 5.129% subset of OCTMNIST using
10-fold cross-validation. ViT-2SPN achieves a mean AUC of 0.93, accuracy of
0.77, precision of 0.81, recall of 0.75, and an F1 score of 0.76, outperforming
existing SSP-based methods.
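The negative cosine similarity loss used to align the dual-view features is a standard SimSiam-style objective. A minimal scalar sketch (the projector/predictor heads and stop-gradient used in practice are omitted):

```python
import math

def negative_cosine_similarity(p, z):
    # Alignment loss between two feature vectors from the dual augmented
    # views: minimized at -1 when their directions coincide.
    dot = sum(a * b for a, b in zip(p, z))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_z = math.sqrt(sum(b * b for b in z))
    return -dot / (norm_p * norm_z)
```

Training drives this value toward -1, pulling the representations of the two augmented views of the same OCT image together.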
|
2501.17261
|
NUS-Emo at SemEval-2024 Task 3: Instruction-Tuning LLM for Multimodal
Emotion-Cause Analysis in Conversations
|
cs.CL
|
This paper describes the architecture of our system developed for Task 3 of
SemEval-2024: Multimodal Emotion-Cause Analysis in Conversations. Our project
targets the challenges of subtask 2, dedicated to Multimodal Emotion-Cause Pair
Extraction with Emotion Category (MECPE-Cat), and constructs a dual-component
system tailored to the unique challenges of this task. We divide the task into
two subtasks: emotion recognition in conversation (ERC) and emotion-cause pair
extraction (ECPE). To address these subtasks, we capitalize on the abilities of
Large Language Models (LLMs), which have consistently demonstrated
state-of-the-art performance across various natural language processing tasks
and domains. Most importantly, we design an approach of emotion-cause-aware
instruction-tuning for LLMs, to enhance the perception of the emotions with
their corresponding causal rationales. Our method enables us to adeptly
navigate the complexities of MECPE-Cat, achieving a weighted average F1 score
of 34.71% on the task and securing 2nd rank on the leaderboard. The code and
metadata to reproduce our experiments are all made publicly available.
|
2501.17265
|
Giving the Old a Fresh Spin: Quality Estimation-Assisted Constrained
Decoding for Automatic Post-Editing
|
cs.CL
|
Automatic Post-Editing (APE) systems often struggle with over-correction,
where unnecessary modifications are made to a translation, diverging from the
principle of minimal editing. In this paper, we propose a novel technique to
mitigate over-correction by incorporating word-level Quality Estimation (QE)
information during the decoding process. This method is architecture-agnostic,
making it adaptable to any APE system, regardless of the underlying model or
training approach. Our experiments on English-German, English-Hindi, and
English-Marathi language pairs show the proposed approach yields significant
improvements over their corresponding baseline APE systems, with TER gains of
$0.65$, $1.86$, and $1.44$ points, respectively. These results underscore the
complementary relationship between QE and APE tasks and highlight the
effectiveness of integrating QE information to reduce over-correction in APE
systems.
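The minimal-editing intuition behind QE-guided post-editing can be conveyed with a simplified sketch: tokens that word-level QE labels OK are copied through, and only BAD tokens may be rewritten. The paper's actual technique injects QE information into the decoding process itself; `propose_fix` here is a hypothetical stand-in for the APE model:

```python
def qe_constrained_edit(mt_tokens, qe_tags, propose_fix):
    # Copy through tokens that word-level QE marks OK; only tokens
    # tagged BAD are handed to the APE model for rewriting.
    return [propose_fix(tok) if tag == "BAD" else tok
            for tok, tag in zip(mt_tokens, qe_tags)]
```

Restricting edits to QE-flagged spans is what curbs over-correction: a good translation with all-OK tags passes through unchanged.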
|
2501.17266
|
Advancing the Biological Plausibility and Efficacy of Hebbian
Convolutional Neural Networks
|
cs.NE cs.CV
|
The research presented in this paper advances the integration of Hebbian
learning into Convolutional Neural Networks (CNNs) for image processing,
systematically exploring different architectures to build an optimal
configuration while adhering to biological plausibility. Hebbian learning operates on
local unsupervised neural information to form feature representations,
providing an alternative to the popular but arguably biologically implausible
and computationally intensive backpropagation learning algorithm. The suggested
optimal architecture significantly enhances recent research aimed at
integrating Hebbian learning with competition mechanisms and CNNs, expanding
their representational capabilities by incorporating hard Winner-Takes-All
(WTA) competition, Gaussian lateral inhibition mechanisms and
Bienenstock-Cooper-Munro (BCM) learning rule in a single model. The resulting
model achieved 76% classification accuracy on CIFAR-10, rivalling its
end-to-end backpropagation variant (77%) and critically surpassing the
state-of-the-art hard-WTA performance in CNNs of the same network depth (64.6%)
by 11.4%. Moreover, results showed clear indications of sparse hierarchical
learning through increasingly complex and abstract receptive fields. In
summary, our implementation enhances both the performance and the
generalisability of the learnt representations and constitutes a crucial step
towards more biologically realistic artificial neural networks.
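Two of the local mechanisms mentioned above can be sketched in miniature: hard Winner-Takes-All competition and a local Hebbian weight update. These are toy stand-ins (an Oja-style rule rather than the paper's BCM rule, and no Gaussian lateral inhibition), shown only to illustrate that learning uses purely local information:

```python
def hard_wta(activations):
    # Hard Winner-Takes-All: only the maximally active unit fires.
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [a if i == winner else 0.0 for i, a in enumerate(activations)]

def hebbian_update(weights, inputs, outputs, lr=0.01):
    # Oja-style local Hebbian rule: strengthen weights between co-active
    # input/output pairs, with a decay term for stability. No error signal
    # is backpropagated; each update uses only local pre/post activity.
    return [[w + lr * y * (x - y * w) for w, x in zip(row, inputs)]
            for row, y in zip(weights, outputs)]
```

Combined with WTA, only the winning unit's weight row receives a nonzero update, which is what drives units toward distinct feature representations.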
|
2501.17269
|
A 1-D CNN inference engine for constrained platforms
|
cs.LG
|
1D-CNNs are used for time series classification in various domains with a
high degree of accuracy. Most implementations collect the incoming data samples
in a buffer before performing inference on it. On edge devices, which are
typically constrained and single-threaded, such an implementation may interfere
with time-critical tasks. One such task is that of sample acquisition. In this
work, we propose an inference scheme that interleaves the convolution
operations between sample intervals, which allows us to reduce the inference
latency. Furthermore, our scheme is well-suited for storing data in ring
buffers, yielding a small memory footprint. We demonstrate these improvements
by comparing our approach to TFLite's inference method, giving a 10% reduction
in the inference delay while almost halving the memory usage. Our approach is
feasible on common consumer devices, which we show using an AVR-based Arduino
board and an ARM-based Arduino board.
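The interleaving idea can be sketched as a streaming convolution over a ring buffer: each arriving sample triggers only the work for the newly completed window, so computation is spread across sample intervals instead of bursting after the buffer fills. This is a minimal single-filter sketch, not the authors' implementation:

```python
class StreamingConv1D:
    # Interleaves 1-D convolution with sample acquisition: each push()
    # evaluates only the window completed by the new sample.
    def __init__(self, kernel, buf_len):
        self.kernel = kernel
        self.buf = [0.0] * buf_len   # ring buffer of raw samples
        self.head = 0                # next write position
        self.count = 0               # total samples seen

    def push(self, sample):
        self.buf[self.head] = sample
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        if self.count >= len(self.kernel):
            return self._window_dot()
        return None  # not enough samples for a full window yet

    def _window_dot(self):
        # Dot product of the kernel with the most recent window,
        # read out of the ring buffer with wraparound indexing.
        k, n = len(self.kernel), len(self.buf)
        start = (self.head - k) % n
        return sum(self.kernel[i] * self.buf[(start + i) % n] for i in range(k))
```

On a single-threaded MCU, `push` would be called from (or right after) the sampling interrupt, keeping the per-interval work bounded by one window.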
|
2501.17270
|
Comprehensive Evaluation for a Large Scale Knowledge Graph Question
Answering Service
|
cs.CL cs.DB
|
Question answering systems over knowledge graphs (KGQA) answer factoid
questions based on the data in the knowledge graph. KGQA systems are complex
because the system has to understand the relations and entities in the
knowledge-seeking natural language queries and map them to structured queries
against the KG to answer them. In this paper, we introduce Chronos, a
comprehensive evaluation framework for KGQA at industry scale. It is designed
to evaluate such a multi-component system comprehensively, focusing on (1)
end-to-end and component-level metrics, (2) scalability to diverse datasets,
and (3) a scalable approach to measuring system performance prior to
release. In this paper, we discuss the unique challenges associated with
evaluating KGQA systems at industry scale, review the design of Chronos, and
how it addresses these challenges. We demonstrate how it provides a basis
for data-driven decisions and discuss the challenges of using it to measure and
improve a real-world KGQA system.
|
2501.17273
|
Tailored Truths: Optimizing LLM Persuasion with Personalization and
Fabricated Statistics
|
cs.CL
|
Large Language Models (LLMs) are becoming increasingly persuasive,
demonstrating the ability to personalize arguments in conversation with humans
by leveraging their personal data. This may have serious impacts on the scale
and effectiveness of disinformation campaigns. We studied the persuasiveness of
LLMs in a debate setting by having humans $(n=33)$ engage with LLM-generated
arguments intended to change the human's opinion. We quantified the LLM's
effect by measuring human agreement with the debate's hypothesis pre- and
post-debate and analyzing both the magnitude of opinion change, as well as the
likelihood of an update in the LLM's direction. We compare persuasiveness
across established persuasion strategies, including personalized arguments
informed by user demographics and personality, appeal to fabricated statistics,
and a mixed strategy utilizing both personalized arguments and fabricated
statistics. We found that static arguments generated by humans and GPT-4o-mini
have comparable persuasive power. However, the LLM outperformed static
human-written arguments when leveraging the mixed strategy in an interactive
debate setting. This approach had a $\mathbf{51\%}$ chance of persuading
participants to modify their initial position, compared to $\mathbf{32\%}$ for
the static human-written arguments. Our results highlight the concerning
potential for LLMs to enable inexpensive and persuasive large-scale
disinformation campaigns.
|
2501.17275
|
Dual-Lagrange Encoding for Storage and Download in Elastic Computing for
Resilience
|
cs.IT cs.DC math.IT
|
Coded elastic computing enables virtual machines to be preempted for
high-priority tasks while allowing new virtual machines to join ongoing
computation seamlessly. This paper addresses coded elastic computing for
matrix-matrix multiplications with straggler tolerance by encoding both storage
and download using Lagrange codes. In 2018, Yang et al. introduced the first
coded elastic computing scheme for matrix-matrix multiplications, achieving a
lower computational load requirement. However, this scheme lacks straggler
tolerance and suffers from high upload cost. Zhong et al. (2023) later tackled
these shortcomings by employing uncoded storage and Lagrange-coded download.
However, their approach requires each machine to store the entire dataset. This
paper introduces a new class of elastic computing schemes that utilize Lagrange
codes to encode both storage and download, achieving a reduced storage size.
The proposed schemes efficiently mitigate both elasticity and straggler
effects, with a storage size reduced to a fraction $\frac{1}{L}$ of Zhong et
al.'s approach, at the expense of doubling the download cost. Moreover, we
evaluate the proposed schemes on AWS EC2 by measuring computation time under
two different task allocations: heterogeneous and cyclic assignments. Both
assignments minimize the computation redundancy of the system while distributing
varying computation loads across machines.
|
2501.17281
|
Stiff Transfer Learning for Physics-Informed Neural Networks
|
cs.LG math.AP
|
Stiff differential equations are prevalent in various scientific domains,
posing significant challenges due to the disparate time scales of their
components. As computational power grows, physics-informed neural networks
(PINNs) have led to significant improvements in modeling physical processes
described by differential equations. Despite their promising outcomes, vanilla
PINNs face limitations when dealing with stiff systems, known as failure modes.
In response, we propose a novel approach, stiff transfer learning for
physics-informed neural networks (STL-PINNs), to effectively tackle stiff
ordinary differential equations (ODEs) and partial differential equations
(PDEs). Our methodology involves training a Multi-Head-PINN in a low-stiff
regime and obtaining the final solution in a high-stiff regime via transfer
learning. This addresses the failure modes related to stiffness in PINNs while
maintaining computational efficiency by computing "one-shot" solutions. The
proposed approach demonstrates superior accuracy and speed compared to
PINNs-based methods, as well as comparable computational efficiency with
implicit numerical methods in solving stiff-parameterized linear and polynomial
nonlinear ODEs and PDEs under stiff conditions. Furthermore, we demonstrate the
scalability of the approach and the superior speed it offers for simulations
involving initial-condition and forcing-function reparametrization.
|
2501.17282
|
From Natural Language to Extensive-Form Game Representations
|
cs.AI cs.CL cs.GT cs.MA
|
We introduce a framework for translating game descriptions in natural
language into extensive-form representations in game theory, leveraging Large
Language Models (LLMs) and in-context learning. Given the varying levels of
strategic complexity in games, such as perfect versus imperfect information,
directly applying in-context learning would be insufficient. To address this,
we introduce a two-stage framework with specialized modules to enhance
in-context learning, enabling it to divide and conquer the problem effectively.
In the first stage, we tackle the challenge of imperfect information by
developing a module that identifies information sets along with the
corresponding partial tree structure. With this information, the second stage
leverages in-context learning alongside a self-debugging module to produce a
complete extensive-form game tree represented using pygambit, the Python API of
a recognized game-theoretic analysis tool called Gambit. Using this Python
representation enables the automation of tasks such as computing Nash
equilibria directly from natural language descriptions. We evaluate the
performance of the full framework, as well as its individual components, using
various LLMs on games with different levels of strategic complexity. Our
experimental results show that the framework significantly outperforms baseline
models in generating accurate extensive-form games, with each module playing a
critical role in its success.
|
2501.17284
|
Nonlinear dynamics of localization in neural receptive fields
|
cs.LG
|
Localized receptive fields -- neurons that are selective for certain
contiguous spatiotemporal features of their input -- populate early sensory
regions of the mammalian brain. Unsupervised learning algorithms that optimize
explicit sparsity or independence criteria replicate features of these
localized receptive fields, but fail to explain directly how localization
arises through learning without efficient coding, as occurs in early layers of
deep neural networks and might occur in early sensory regions of biological
systems. We consider an alternative model in which localized receptive fields
emerge without explicit top-down efficiency constraints -- a feedforward neural
network trained on a data model inspired by the structure of natural images.
Previous work identified the importance of non-Gaussian statistics to
localization in this setting but left open questions about the mechanisms
driving dynamical emergence. We address these questions by deriving the
effective learning dynamics for a single nonlinear neuron, making precise how
higher-order statistical properties of the input data drive emergent
localization, and we demonstrate that the predictions of these effective
dynamics extend to the many-neuron setting. Our analysis provides an
alternative explanation for the ubiquity of localization as resulting from the
nonlinear dynamics of learning in neural circuits.
|
2501.17286
|
Fine-Tuning Open-Source Large Language Models to Improve Their
Performance on Radiation Oncology Tasks: A Feasibility Study to Investigate
Their Potential Clinical Applications in Radiation Oncology
|
physics.med-ph cs.AI cs.CL
|
Background: Radiation oncology clinical practice involves many steps that rely
on the dynamic interplay of abundant text data. Large language models (LLMs)
have displayed remarkable capabilities in processing complex text information,
but their direct application in specialized fields such as radiation oncology
remains underexplored.
Purpose: This study aims to investigate whether fine-tuning LLMs with domain
knowledge can improve the performance on Task (1) treatment regimen generation,
Task (2) treatment modality selection (photon, proton, electron, or
brachytherapy), and Task (3) ICD-10 code prediction in radiation oncology.
Methods: Data for 15,724 patient cases were extracted. Cases in which patients
had a single diagnostic record and a clearly identifiable primary treatment
plan were selected for preprocessing and manual annotation, yielding 7,903
cases with the patient diagnosis, treatment plan, treatment modality, and
ICD-10 code. Each case was used to construct a pair consisting of patient
diagnostic details and an answer (treatment regimen, treatment modality, or
ICD-10 code, respectively) for supervised fine-tuning on these three tasks.
Open-source
LLaMA2-7B and Mistral-7B models were utilized for the fine-tuning with the
Low-Rank Approximations method. Accuracy and ROUGE-1 score were reported for
the fine-tuned models and original models. Clinical evaluation was performed on
Task (1) by radiation oncologists, while precision, recall, and F1 score were
evaluated for Tasks (2) and (3). One-sided Wilcoxon signed-rank tests were used
to statistically analyze the results.
Results: Fine-tuned LLMs outperformed the original LLMs across all tasks with
p-values <= 0.001. Clinical evaluation demonstrated that over 60% of the
fine-tuned LLMs-generated treatment regimens were clinically acceptable.
Precision, recall, and F1 score showed improved performance of the fine-tuned LLMs.
|
2501.17289
|
A Contrastive Teacher-Student Framework for Novelty Detection under
Style Shifts
|
cs.CV
|
There have been several efforts to improve Novelty Detection (ND)
performance. However, ND methods often suffer significant performance drops
under minor distribution shifts caused by changes in the environment, known as
style shifts. This challenge arises from the ND setup, where the absence of
out-of-distribution (OOD) samples during training causes the detector to be
biased toward the dominant style features in the in-distribution (ID) data. As
a result, the model mistakenly learns to correlate style with core features,
using this shortcut for detection. Robust ND is crucial for real-world
applications like autonomous driving and medical imaging, where test samples
may have different styles than the training data. Motivated by this, we propose
a robust ND method that crafts an auxiliary OOD set with style features similar
to the ID set but with different core features. Then, a task-based knowledge
distillation strategy is utilized to distinguish core features from style
features and help our model rely on core features for discriminating crafted
OOD and ID sets. We verified the effectiveness of our method through extensive
experimental evaluations on several datasets, including synthetic and
real-world benchmarks, against nine different ND methods.
|
2501.17295
|
Mitigating Hallucinated Translations in Large Language Models with
Hallucination-focused Preference Optimization
|
cs.CL cs.AI cs.LG
|
Machine Translation (MT) is undergoing a paradigm shift, with systems based
on fine-tuned large language models (LLM) becoming increasingly competitive
with traditional encoder-decoder models trained specifically for translation
tasks. However, LLM-based systems are at a higher risk of generating
hallucinations, which can severely undermine users' trust and safety. Most
prior research on hallucination mitigation focuses on traditional MT models,
with solutions that involve post-hoc mitigation - detecting hallucinated
translations and re-translating them. While effective, this approach introduces
additional complexity in deploying extra tools in production and also increases
latency. To address these limitations, we propose a method that intrinsically
learns to mitigate hallucinations during the model training phase.
Specifically, we introduce a data creation framework to generate hallucination
focused preference datasets. Fine-tuning LLMs on these preference datasets
reduces the hallucination rate by an average of 96% across five language pairs,
while preserving overall translation quality. In a zero-shot setting, our
approach reduces hallucinations by 89% on average across three unseen target
languages.
|
2501.17296
|
Multi-Physics Simulations via Coupled Fourier Neural Operator
|
cs.LG cs.AI
|
Physical simulations are essential tools across critical fields such as
mechanical and aerospace engineering, chemistry, meteorology, etc. While neural
operators, particularly the Fourier Neural Operator (FNO), have shown promise
in predicting simulation results with impressive performance and efficiency,
they face limitations when handling real-world scenarios involving coupled
multi-physics outputs. Current neural operator methods either overlook the
correlations between multiple physical processes or employ simplistic
architectures that inadequately capture these relationships. To overcome these
challenges, we introduce a novel coupled multi-physics neural operator learning
(COMPOL) framework that extends the capabilities of Fourier operator layers to
model interactions among multiple physical processes. Our approach implements
feature aggregation through recurrent and attention mechanisms, enabling
comprehensive modeling of coupled interactions. Our method's core is an
innovative system for aggregating latent features from multi-physics processes.
These aggregated features serve as enriched information sources for neural
operator layers, allowing our framework to capture complex physical
relationships accurately. We evaluated our coupled multi-physics neural
operator across diverse physical simulation tasks, including biological
systems, fluid mechanics, and multiphase flow in porous media. Our proposed
model demonstrates a two- to three-fold improvement in predictive performance
compared to existing approaches.
|
2501.17299
|
"Ownership, Not Just Happy Talk": Co-Designing a Participatory Large
Language Model for Journalism
|
cs.HC cs.CL cs.CY
|
Journalism has emerged as an essential domain for understanding the uses,
limitations, and impacts of large language models (LLMs) in the workplace. News
organizations face divergent financial incentives: LLMs already permeate
newswork processes within financially constrained organizations, even as
ongoing legal challenges assert that AI companies violate their copyright. At
stake are key questions about what LLMs are created to do, and by whom: How
might a journalist-led LLM work, and what can participatory design illuminate
about the present-day challenges of adapting ``one-size-fits-all''
foundation models to a given context of use? In this paper, we undertake a
co-design exploration to understand how a participatory approach to LLMs might
address opportunities and challenges around AI in journalism. Our 20 interviews
with reporters, data journalists, editors, labor organizers, product leads, and
executives highlight macro, meso, and micro tensions that designing for this
opportunity space must address. From these desiderata, we describe the result
of our co-design work: organizational structures and functionality for a
journalist-controlled LLM. In closing, we discuss the limitations of commercial
foundation models for workplace use, and the methodological implications of
applying participatory methods to LLM co-design.
|
2501.17300
|
Dilemmas and trade-offs in the diffusion of conventions
|
physics.soc-ph cs.SI stat.AP
|
Outside ideal settings, conventions are shaped by heterogeneous competing
processes that can challenge the emergence of universal norms. This paper
identifies three trade-offs challenging the diffusion of conventions and
explores each of them empirically using observational behavioral data. The
first trade-off (I) concerns the imperatives of social, sequential, and
contextual consistency that individuals must balance when choosing between
competing conventions. The second trade-off (II) involves the balance between
local and global coordination, depending on whether individuals coordinate
their behavior via interactions throughout a social network or external factors
transcending the network. The third trade-off (III) is the balance between
decision optimality (e.g., collective satisfaction) and decision costs when
collectives with conflicting preferences choose one convention. We develop a
utilitarian account of conventions, which we translate into a broadly applicable
statistical physics framework for measuring each of these trade-offs. We then
apply this framework to a sign convention in physics using textual and network
data. Our analysis suggests that the purpose of conventions may exceed
coordination, and that multiple infrastructures (including prior cultural
traits and social networks) concurrently shape individual preferences towards
conventions. Additionally, we confirm the role of seniority in resolving
conflicting preferences in collaborations, resulting in suboptimal outcomes.
|
2501.17303
|
Measurement-Based Modeling and Analysis of UAV Air-Ground Channels at 1
and 4 GHz
|
eess.SP cs.IT math.IT physics.ins-det
|
In the design of unmanned aerial vehicle (UAV) wireless communications, a
better understanding of propagation characteristics and an accurate channel
model are required. Measurements and comprehensive analysis for the UAV-based
air-ground (AG) propagation channel in the vertical dimension are presented in
this letter. Based on the measurement data at 1 and 4 GHz, the large-scale and
small-scale channel parameters are extracted for the line-of-sight (LOS) and
non-LOS cases, respectively. An altitude-dependent path-loss model is proposed.
Furthermore, shadow fading and fast fading are statistically analyzed to
comprehensively describe the fading behavior. Our results will be useful
in the modeling of AG channels and the performance analysis for UAV-enabled
wireless communication systems.
|
2501.17304
|
Summary of the NOTSOFAR-1 Challenge: Highlights and Learnings
|
cs.SD cs.LG eess.AS
|
The first Natural Office Talkers in Settings of Far-field Audio Recordings
(NOTSOFAR-1) Challenge is a pivotal initiative that sets new benchmarks by
offering datasets more representative of the needs of real-world business
applications than those previously available. The challenge provides a unique
combination of 280 recorded meetings across 30 diverse environments, capturing
real-world acoustic conditions and conversational dynamics, and a 1000-hour
simulated training dataset, synthesized with enhanced authenticity for
real-world generalization, incorporating 15,000 real acoustic transfer
functions. In this paper, we provide an overview of the systems submitted to
the challenge and analyze the top-performing approaches, hypothesizing the
factors behind their success. Additionally, we highlight promising directions
left unexplored by participants. By presenting key findings and actionable
insights, this work aims to drive further innovation and progress in DASR
research and applications.
|
2501.17310
|
Probing LLM World Models: Enhancing Guesstimation with Wisdom of Crowds
Decoding
|
cs.AI cs.HC
|
Guesstimation, the task of making approximate quantity estimates, is a common
real-world challenge. However, it has been largely overlooked in large language
models (LLMs) and vision language models (VLMs) research. We introduce a novel
guesstimation dataset, MARBLES. This dataset requires one to estimate how many
items (e.g., marbles) can fit into containers (e.g., a one-cup measuring cup),
both with and without accompanying images. Inspired by the social science
concept of the ``Wisdom of Crowds'' (WOC), i.e., taking the median of
estimates from a crowd, which has proven effective in guesstimation, we propose
a ``WOC decoding'' strategy for LLM guesstimation. We show that LLMs/VLMs
perform well
on guesstimation, suggesting that they possess some level of a "world model"
necessary for guesstimation. Moreover, similar to human performance, the WOC
decoding method improves LLM/VLM guesstimation accuracy. Furthermore, the
inclusion of images in the multimodal condition enhances model performance.
These results highlight the value of the WOC decoding strategy for LLMs/VLMs and
position guesstimation as a probe for evaluating LLMs/VLMs' world model. As
LLMs' world model is a fundamental prerequisite for many real-world tasks,
e.g., human-AI teaming, our findings have broad implications for the AI
community.
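As a minimal sketch of the WOC aggregation described above (taking the median of repeated estimates), the snippet below assumes hypothetical numeric guesses already parsed from repeated LLM samples; it is an illustration of the aggregation rule, not the paper's implementation:

```python
import statistics

def woc_decode(estimates):
    """Aggregate independent numeric guesses by taking their median,
    the Wisdom-of-Crowds rule described above."""
    return statistics.median(estimates)

# Hypothetical estimates parsed from five repeated LLM queries for one item;
# the median is robust to the single outlier (400).
samples = [120, 95, 150, 110, 400]
print(woc_decode(samples))  # -> 120
```

Because the median ignores extreme values, a single wildly wrong sample does not drag the aggregate estimate away from the crowd's consensus.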
|
2501.17311
|
RLPP: A Residual Method for Zero-Shot Real-World Autonomous Racing on
Scaled Platforms
|
cs.RO cs.LG
|
Autonomous racing presents a complex environment requiring robust controllers
capable of making rapid decisions under dynamic conditions. While traditional
controllers based on tire models are reliable, they often demand extensive
tuning or system identification. Reinforcement Learning (RL) methods offer
significant potential due to their ability to learn directly from interaction,
yet they typically suffer from the sim-to-real gap, where policies trained in
simulation fail to perform effectively in the real world. In this paper, we
propose RLPP, a residual RL framework that enhances a Pure Pursuit (PP)
controller with an RL-based residual. This hybrid approach leverages the
reliability and interpretability of PP while using RL to fine-tune the
controller's performance in real-world scenarios. Extensive testing on the
F1TENTH platform demonstrates that RLPP improves lap times of the baseline
controllers by up to 6.37%, closing the gap to state-of-the-art methods by more
than 52%. It provides reliable performance in zero-shot real-world deployment,
overcoming key challenges associated with sim-to-real transfer and reducing the
performance gap from simulation to reality by more than 8-fold compared to the
baseline RL controller.
available as an open-source tool, encouraging further exploration and
advancement in autonomous racing research. The code is available at:
www.github.com/forzaeth/rlpp.
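The residual idea described above (an RL policy correcting a Pure Pursuit baseline) can be sketched generically; the function name, the (steering, throttle) action layout, and the clipping bound are illustrative assumptions, not the RLPP implementation:

```python
def residual_control(pp_action, rl_residual, residual_scale=0.2):
    """Combine a classical Pure Pursuit command with a bounded learned
    residual: the baseline keeps the policy reliable, while the residual
    fine-tunes it. Actions here are (steering, throttle) tuples."""
    return tuple(
        base + residual_scale * max(-1.0, min(1.0, res))
        for base, res in zip(pp_action, rl_residual)
    )

# Baseline steering/throttle nudged by a clipped learned residual.
print(residual_control((0.10, 0.80), (0.50, -1.50)))
```

Bounding the residual is one common way to guarantee the learned correction can never fully override the interpretable baseline controller.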
|
2501.17313
|
Surena-V: A Humanoid Robot for Human-Robot Collaboration with
Optimization-based Control Architecture
|
cs.RO
|
This paper presents Surena-V, a humanoid robot designed to enhance
human-robot collaboration capabilities. The robot features a range of sensors,
including barometric tactile sensors in its hands, to facilitate precise
environmental interaction. This is demonstrated through an experiment
showcasing the robot's ability to control a medical needle's movement through
soft material. Surena-V's operational framework emphasizes stability and
collaboration, employing various optimization-based control strategies such as
Zero Moment Point (ZMP) modification through upper body movement and stepping.
Notably, the robot's interaction with the environment is improved by detecting
and interpreting external forces at their point of effect, allowing for more
agile responses compared to methods that control overall balance based on
external forces. The efficacy of this architecture is substantiated through an
experiment illustrating the robot's collaboration with a human in moving a bar.
This work contributes to the field of humanoid robotics by presenting a
comprehensive system design and control architecture focused on human-robot
collaboration and environmental adaptability.
|
2501.17315
|
A sketch of an AI control safety case
|
cs.AI cs.CR cs.SE
|
As LLM agents gain a greater capacity to cause harm, AI developers might
increasingly rely on control measures such as monitoring to justify that they
are safe. We sketch how developers could construct a "control safety case",
which is a structured argument that models are incapable of subverting control
measures in order to cause unacceptable outcomes. As a case study, we sketch an
argument that a hypothetical LLM agent deployed internally at an AI company
won't exfiltrate sensitive information. The sketch relies on evidence from a
"control evaluation,"' where a red team deliberately designs models to
exfiltrate data in a proxy for the deployment environment. The safety case then
hinges on several claims: (1) the red team adequately elicits model
capabilities to exfiltrate data, (2) control measures remain at least as
effective in deployment, and (3) developers conservatively extrapolate model
performance to predict the probability of data exfiltration in deployment. This
safety case sketch is a step toward more concrete arguments that can be used to
show that a dangerously capable LLM agent is safe to deploy.
|
2501.17318
|
Floodgates up to contain the DeePC and limit extrapolation
|
eess.SY cs.SY
|
Behavioral data-enabled control approaches typically assume data-generating
systems of linear dynamics. This may result in false generalization if the
newly designed closed-loop system results in input-output distributional shifts
beyond the learning data. These shifts may compromise safety by activating harmful
nonlinearities in the data-generating system not experienced previously in the
data and/or not captured by the linearity assumption inherent in these
approaches. This paper proposes an approach to slow down the distributional
shifts and therefore enhance the safety of the data-enabled methods. This is
achieved by introducing quadratic regularization terms to the data-enabled
predictive control formulations. Slowing down the distributional shifts comes
at the expense of slowing down the exploration, in a trade-off resembling the
exploration vs exploitation balance in machine learning.
|
2501.17319
|
MDDM: A Molecular Dynamics Diffusion Model to Predict Particle
Self-Assembly
|
cs.LG physics.comp-ph
|
The discovery and study of new material systems relies on molecular
simulations that often come with significant computational expense. We propose
MDDM, a Molecular Dynamics Diffusion Model, which is capable of predicting a
valid output conformation for a given input pair potential function. After
training MDDM on a large dataset of molecular dynamics self-assembly results,
the proposed model can convert uniform noise into a meaningful output particle
structure corresponding to an arbitrary input potential. The model's
architecture has domain-specific properties built-in, such as satisfying
periodic boundaries and being invariant to translation. The model significantly
outperforms the baseline point-cloud diffusion model for both unconditional and
conditional generation tasks.
|
2501.17322
|
Influence of field of view in visual prostheses design: Analysis with a
VR system
|
cs.HC cs.CV
|
Visual prostheses are designed to restore partial functional vision in
patients with total vision loss. Retinal visual prostheses provide limited
capabilities as a result of low resolution, limited field of view and poor
dynamic range. Understanding the influence of these parameters in the
perception results can guide prostheses research and design. In this work, we
evaluate the influence of field of view with respect to spatial resolution in
visual prostheses, measuring the accuracy and response time in a search and
recognition task. Twenty-four normally sighted participants were asked to find
and recognize usual objects, such as furniture and home appliances, in indoor
room scenes. For the experiment, we use a new simulated prosthetic vision
system that allows simple and effective experimentation. Our system uses a
virtual-reality environment based on panoramic scenes. The simulator employs a
head-mounted display which allows users to feel immersed in the scene by
perceiving the entire scene all around. Our experiments use public image
datasets and a commercial head-mounted display. We have also released the
virtual-reality software for replicating and extending the experimentation.
Results show that the accuracy and response time decrease when the field of
view is increased. Furthermore, performance appears to be correlated with the
angular resolution, but shows diminishing returns even at resolutions below
2.3 phosphenes per degree. Our results seem to indicate that, for the
design of retinal prostheses, it is better to concentrate the phosphenes in a
small area, to maximize the angular resolution, even if that implies
sacrificing field of view.
|
2501.17323
|
Exploring Non-Convex Discrete Energy Landscapes: A Langevin-Like Sampler
with Replica Exchange
|
cs.LG stat.ML
|
Gradient-based Discrete Samplers (GDSs) are effective for sampling discrete
energy landscapes. However, they often stagnate in complex, non-convex
settings. To improve exploration, we introduce the Discrete Replica EXchangE
Langevin (DREXEL) sampler and its variant with Adjusted Metropolis (DREAM).
These samplers use two GDSs at different temperatures and step sizes: one
focuses on local exploitation, while the other explores broader energy
landscapes. When energy differences are significant, sample swaps occur, which
are determined by a mechanism tailored for discrete sampling to ensure detailed
balance. Theoretically, we prove both DREXEL and DREAM converge asymptotically
to the target energy and exhibit faster mixing than a single GDS. Experiments
further confirm their efficiency in exploring non-convex discrete energy
landscapes.
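The paper's swap mechanism is tailored to discrete sampling; as a generic illustration of the replica-exchange principle it builds on, the standard Metropolis swap rule between two chains at different temperatures can be sketched as follows (function names and the two-chain setup are illustrative assumptions):

```python
import math
import random

def swap_accept_prob(e_cold, e_hot, t_cold, t_hot):
    """Standard Metropolis acceptance probability for exchanging states
    between a cold chain (temperature t_cold) and a hot chain (t_hot);
    this rule preserves detailed balance across the pair of chains."""
    delta = (1.0 / t_cold - 1.0 / t_hot) * (e_cold - e_hot)
    return min(1.0, math.exp(delta))

def maybe_swap(cold, e_cold, hot, e_hot, t_cold, t_hot, rng=random):
    """Swap the two replicas' states with the Metropolis probability."""
    if rng.random() < swap_accept_prob(e_cold, e_hot, t_cold, t_hot):
        return (hot, e_hot), (cold, e_cold)  # swapped
    return (cold, e_cold), (hot, e_hot)

# A cold chain stuck at high energy always trades with a hot chain
# that found a lower-energy state:
print(swap_accept_prob(e_cold=5.0, e_hot=1.0, t_cold=1.0, t_hot=2.0))  # -> 1.0
```

Swaps of this kind let the exploitative cold chain inherit good configurations discovered by the exploratory hot chain, which is the mechanism the abstract credits for escaping local modes.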
|
2501.17324
|
CardiCat: a Variational Autoencoder for High-Cardinality Tabular Data
|
cs.LG stat.ML
|
High-cardinality categorical features are a common characteristic of
mixed-type tabular datasets. Existing generative model architectures struggle
to learn the complexities of such data at scale, primarily due to the
difficulty of parameterizing the categorical features. In this paper, we
present a general variational autoencoder model, CardiCat, that can accurately
fit imbalanced high-cardinality and heterogeneous tabular data. Our method
substitutes one-hot encoding with regularized dual encoder-decoder embedding
layers, which are jointly learned. This approach enables us to use embeddings
that depend also on the other covariates, leading to a compact and homogenized
parameterization of categorical features. Our model employs a considerably
smaller trainable parameter space than competing methods, enabling learning at
a large scale. CardiCat generates high-quality synthetic data that better
represent high-cardinality and imbalanced features compared to competing VAE
models for multiple real and simulated datasets.
|
2501.17325
|
Connecting Federated ADMM to Bayes
|
cs.LG cs.AI stat.ML
|
We provide new connections between two distinct federated learning approaches
based on (i) ADMM and (ii) Variational Bayes (VB), and propose new variants by
combining their complementary strengths. Specifically, we show that the dual
variables in ADMM naturally emerge through the 'site' parameters used in VB
with isotropic Gaussian covariances. Using this, we derive two versions of ADMM
from VB that use flexible covariances and functional regularisation,
respectively. Through numerical experiments, we validate the improvements
obtained in performance. This work shows a connection between two fields that
are believed to be fundamentally different and combines them to improve
federated learning.
|
2501.17326
|
Memorize and Rank: Elevating Large Language Models for Clinical
Diagnosis Prediction
|
cs.CL cs.AI cs.LG
|
Clinical diagnosis prediction models, when provided with a patient's medical
history, aim to detect potential diseases early, facilitating timely
intervention and improving prognostic outcomes. However, the inherent scarcity
of patient data and large disease candidate space often pose challenges in
developing satisfactory models for this intricate task. The exploration of
leveraging Large Language Models (LLMs) for encapsulating clinical decision
processes has been limited. We introduce MERA, a clinical diagnosis prediction
model that bridges pretrained natural language knowledge with medical practice.
We apply hierarchical contrastive learning on a disease candidate ranking list
to alleviate the large decision space issue. With concept memorization through
fine-tuning, we bridge the natural language clinical knowledge with medical
codes. Experimental results on MIMIC-III and IV datasets show that MERA
achieves state-of-the-art diagnosis prediction performance and dramatically
elevates the diagnosis prediction capabilities of generative LMs.
|
2501.17328
|
WASUP: Interpretable Classification with Weight-Input Alignment and
Class-Discriminative SUPports Vectors
|
cs.CV cs.LG
|
The deployment of deep learning models in critical domains necessitates a
balance between high accuracy and interpretability. We introduce WASUP, an
inherently interpretable neural network that provides local and global
explanations of its decision-making process. We prove that these explanations
are faithful by fulfilling established axioms for explanations. Leveraging the
concept of case-based reasoning, WASUP extracts class-representative support
vectors from training images, ensuring they capture relevant features while
suppressing irrelevant ones. Classification decisions are made by calculating
and aggregating similarity scores between these support vectors and the input's
latent feature vector. We employ B-Cos transformations, which align model
weights with inputs to enable faithful mappings of latent features back to the
input space, facilitating local explanations in addition to global explanations
of case-based reasoning. We evaluate WASUP on three tasks: fine-grained
classification on Stanford Dogs, multi-label classification on Pascal VOC, and
pathology detection on the RSNA dataset. Results indicate that WASUP not only
achieves competitive accuracy compared to state-of-the-art black-box models but
also offers insightful explanations verified through theoretical analysis. Our
findings underscore WASUP's potential for applications where understanding
model decisions is as critical as the decisions themselves.
|
2501.17329
|
Anomaly Detection in Cooperative Vehicle Perception Systems under
Imperfect Communication
|
cs.MA cs.AI cs.LG
|
Anomaly detection is a critical requirement for ensuring safety in autonomous
driving. In this work, we leverage Cooperative Perception to share information
across nearby vehicles, enabling more accurate identification and consensus of
anomalous behaviors in complex traffic scenarios. To account for the real-world
challenge of imperfect communication, we propose a cooperative-perception-based
anomaly detection framework (CPAD), which is a robust architecture that remains
effective under communication interruptions, thereby facilitating reliable
performance even in low-bandwidth settings. Since no multi-agent anomaly
detection dataset exists for vehicle trajectories, we introduce a benchmark
dataset of 15,000 distinct scenarios comprising 90,000 trajectories, generated
through rule-based vehicle dynamics analysis. Empirical results demonstrate
that our approach outperforms standard anomaly classification methods in
F1-score and AUC, and showcases strong robustness to agent connection
interruptions.
|
2501.17330
|
Attribution analysis of legal language as used by LLM
|
cs.LG cs.CL
|
Three publicly available LLMs specifically designed for legal tasks have been
implemented, showing that classification accuracy can benefit from training
over legal corpora, but why and how? Here we use two publicly available legal
datasets, a simpler binary classification task of ``overruling'' texts, and a
more elaborate multiple choice task identifying ``holding'' judicial decisions.
We report on experiments contrasting the legal LLMs with a generic BERT model
on both datasets. We use integrated gradient attribution techniques to impute
``causes'' of variation in the models' performance, and characterize them in
terms of the tokenizations each model uses. We find that while
all models can correctly classify some test examples from the casehold task,
other examples can be identified by only one model, and attribution can
be used to highlight the reasons for this. We find that differential behavior
of the models' tokenizers accounts for most of the difference and analyze these
differences in terms of the legal language they process. Frequency analysis of
tokens generated by dataset texts, combined with use of known ``stop word''
lists, allows identification of tokens that are clear signifiers of legal
topics.
|
2501.17332
|
Compact Neural TTS Voices for Accessibility
|
cs.SD cs.LG eess.AS
|
Contemporary text-to-speech solutions for accessibility applications can
typically be classified into two categories: (i) device-based statistical
parametric speech synthesis (SPSS) or unit selection (USEL) and (ii)
cloud-based neural TTS. SPSS and USEL offer low latency and low disk footprint
at the expense of naturalness and audio quality. Cloud-based neural TTS systems
provide significantly better audio quality and naturalness but regress in terms
of latency and responsiveness, rendering these impractical for real-world
applications. More recently, neural TTS models have been made deployable on
handheld devices. Nevertheless, their latency remains higher than that of SPSS
and USEL, and their disk footprint prohibits pre-installing multiple voices at
once. In
this work, we describe a high-quality compact neural TTS system achieving
latency on the order of 15 ms with low disk footprint. The proposed solution is
capable of running on low-power devices.
|
2501.17333
|
A Guaranteed-Stable Neural Network Approach for Optimal Control of
Nonlinear Systems
|
math.OC cs.LG
|
A promising approach to optimal control of nonlinear systems involves
iteratively linearizing the system and solving an optimization problem at each
time instant to determine the optimal control input. Since this approach relies
on online optimization, it can be computationally expensive, and thus
unrealistic for systems with limited computing resources. One potential
solution to this issue is to incorporate a Neural Network (NN) into the control
loop to emulate the behavior of the optimal control scheme. Ensuring stability
and reference tracking in the resulting NN-based closed-loop system requires
modifications to the primary optimization problem. These modifications often
introduce non-convexity and nonlinearity with respect to the decision
variables, which may surpass the capabilities of existing solvers and
complicate the generation of the training dataset. To address this issue, this
paper develops a Neural Optimization Machine (NOM) to solve the resulting
optimization problems. The central concept of a NOM is to transform the
optimization challenges into the problem of training a NN. Rigorous proofs
demonstrate that when a NN trained on data generated by the NOM is used in the
control loop, all signals remain bounded and the system states asymptotically
converge to a neighborhood around the desired equilibrium point, with a tunable
proximity threshold. Simulation and experimental studies are provided to
illustrate the effectiveness of the proposed methodology.
|
2501.17335
|
Pandora's Box: Cross-Chain Arbitrages in the Realm of Blockchain
Interoperability
|
cs.CR cs.CE
|
Over recent years, the blockchain ecosystem has grown significantly with the
emergence of new Layer-1 (L1) and Layer-2 (L2) networks. These blockchains
typically host Decentralized Exchanges (DEXes) for trading assets such as
native currencies and stablecoins. While this diversity enriches the ecosystem,
it also fragments liquidity, posing challenges for DEXes offering the same
assets across multiple blockchains. This fragmentation leads to price
discrepancies, creating opportunities like arbitrages for profit-seeking
traders, which fall under the broader category of exploitative economic
practices known as Maximal Extractable Value (MEV). Although MEV extraction has
been extensively studied within single domains (i.e., individual blockchains),
cross-chain arbitrages, a form of cross-domain MEV, have received little
attention due to their non-atomic nature, complicating both execution and
detection.
In this paper, we shed light on opaque cross-chain MEV activities by
presenting the first systematic study of two non-atomic cross-chain arbitrage
strategies: Sequence-Independent Arbitrage (SIA) and Sequence-Dependent
Arbitrage (SDA). The former involves independent, opposite-direction trades
across chains, while the latter relies on asset bridges. We analyze the
effectiveness of these strategies across nine blockchains over a one-year
period from September 2023 to August 2024, identifying 260,808 cross-chain
arbitrages, 32.37% of which involve bridging solutions. These arbitrages
generated a lower-bound profit of 9,496,115.28 USD from a total traded volume
of 465,797,487.98 USD. Additionally, we examine the security implications of
cross-chain arbitrages, uncovering centralization among arbitrageurs, network
congestion caused by failed transactions, and growing private mempool adoption.
Finally, we discuss sequencer incentives and propose a risk-optimized arbitrage
strategy.
|
2501.17338
|
Inferring from Logits: Exploring Best Practices for Decoding-Free
Generative Candidate Selection
|
cs.CL cs.AI cs.LG
|
Generative Language Models rely on autoregressive decoding to produce the
output sequence token by token. Many tasks, such as preference optimization,
require the model to produce task-level output consisting of multiple tokens
directly by selecting candidates from a pool as predictions. Determining a
task-level prediction from candidates using the ordinary token-level decoding
mechanism is constrained by time-consuming decoding and interrupted gradients
by discrete token selection. Existing works have used decoding-free candidate
selection methods to obtain candidate probabilities from the initial output
logits over the vocabulary. Though these estimation methods are widely used,
they
are not systematically evaluated, especially on end tasks. We introduce an
evaluation of a comprehensive collection of decoding-free candidate selection
approaches on a diverse set of tasks, including five multiple-choice QA
tasks with a small candidate pool and four clinical decision tasks with a
massive number of candidates, some with 10k+ options. We evaluate the
estimation methods paired with a wide spectrum of foundation LMs covering
different architectures, sizes and training paradigms. The results and insights
from our analysis inform the future model design.
|