| id | title | categories | abstract |
|---|---|---|---|
2501.07447
|
PrecipDiff: Leveraging image diffusion models to enhance satellite-based
precipitation observations
|
cs.LG cs.CV
|
A recent report from the World Meteorological Organization (WMO) highlights
that water-related disasters have caused the highest human losses among natural
disasters over the past 50 years, with over 91% of deaths occurring in
low-income countries. This disparity is largely due to the lack of adequate
ground monitoring stations, such as weather surveillance radars (WSR), which
are expensive to install. For example, while the US and Europe combined possess
over 600 WSRs, Africa, despite having almost one and a half times their landmass,
has fewer than 40. To address this issue, satellite-based observations offer a
global, near-real-time monitoring solution. However, they face several
challenges like accuracy, bias, and low spatial resolution. This study
leverages the power of diffusion models and residual learning to address these
limitations in a unified framework. We introduce the first diffusion model for
correcting the inconsistency between different precipitation products. Our
method demonstrates its effectiveness in downscaling satellite precipitation
estimates from 10 km to 1 km resolution. Extensive experiments conducted in the
Seattle region demonstrate significant improvements in accuracy, bias
reduction, and spatial detail. Importantly, our approach achieves these results
using only precipitation data, showcasing the potential of a purely computer
vision-based approach for enhancing satellite precipitation products and paving
the way for further advancements in this domain.
|
2501.07449
|
On the effects of logical database design on database size, query
complexity, query performance, and energy consumption
|
cs.DB cs.PF
|
Database normalization theory is the basis for logical design of relational
databases. Normalization reduces data redundancy and consequently eliminates
potential data anomalies, while increasing the computational cost of read
operations. Despite decades worth of applications of normalization theory, it
still remains largely unclear to what extent normalization affects database
size and efficiency. In this study, we examine the effects of database
normalization using the Internet Movie Database (IMDb) public dataset and
PostgreSQL. The results indicate, rather intuitively, that (i) database size on
disk is reduced through normalization from 1NF to 2NF by 10%, but not from 2NF
to 4NF, (ii) the number of tables and table rows in total increase
monotonically from 1NF to 2NF to 4NF, and that (iii) query complexity increases
with further normalization. Surprisingly, however, the results also indicate
that (iv) normalization from 1NF to 2NF increases throughput by a factor of 4,
and consequently, (v) energy consumption per transaction reduces by 74% with
normalization from 1NF to 2NF. The results imply that the gains of
normalization from 2NF to 4NF in terms of throughput and energy consumption are
minimal, yet increase the storage space requirements by approximately 7%. While
these results represent merely one specific case, they provide needed empirical
evaluation on the practical effects and magnitude of database normalization.
|
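The throughput and energy figures in the abstract above are mutually consistent under a simple assumption: if average power draw is roughly constant, energy per transaction is power divided by throughput, so a 4x throughput gain implies about a 75% reduction, close to the reported 74%. A minimal sketch (the power and baseline throughput values are hypothetical; only the 4x factor comes from the abstract):

```python
# Sanity check: energy per transaction = average power / throughput.
P = 100.0                 # watts, hypothetical constant average draw
tput_1nf = 250.0          # transactions/s, hypothetical 1NF baseline
tput_2nf = 4 * tput_1nf   # abstract: 1NF -> 2NF quadruples throughput

e_1nf = P / tput_1nf      # joules per transaction at 1NF
e_2nf = P / tput_2nf      # joules per transaction at 2NF
reduction = 1 - e_2nf / e_1nf

print(f"energy per transaction reduced by {reduction:.0%}")  # 75%
```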
2501.07451
|
A Survey on Dynamic Neural Networks: from Computer Vision to Multi-modal
Sensor Fusion
|
cs.CV
|
Model compression is essential in the deployment of large Computer Vision
models on embedded devices. However, static optimization techniques (e.g.
pruning, quantization, etc.) neglect the fact that different inputs have
different complexities, thus requiring different amounts of computation.
Dynamic Neural Networks allow the amount of computation to be conditioned on
the specific input. The current literature on the topic is very extensive and
fragmented. We present a comprehensive survey that synthesizes and unifies
existing Dynamic Neural Networks research in the context of Computer Vision.
Additionally, we provide a logical taxonomy based on which component of the
network is adaptive: the output, the computation graph or the input.
Furthermore, we argue that Dynamic Neural Networks are particularly beneficial
in the context of Sensor Fusion for better adaptivity, noise reduction and
information prioritization. We present preliminary works in this direction.
|
2501.07458
|
Understanding and Benchmarking Artificial Intelligence: OpenAI's o3 Is
Not AGI
|
cs.AI cs.PF
|
OpenAI's o3 achieves a high score of 87.5% on ARC-AGI, a benchmark proposed
to measure intelligence. This raises the question whether systems based on
Large Language Models (LLMs), particularly o3, demonstrate intelligence and
progress towards artificial general intelligence (AGI). Building on the
distinction between skills and intelligence made by François Chollet, the
creator of ARC-AGI, a new understanding of intelligence is introduced: an agent
is more intelligent the more efficiently it can achieve a more diverse set of
goals in more diverse worlds with less knowledge. An analysis of the
ARC-AGI benchmark shows that its tasks represent a very specific type of
problem that can be solved by massive trialling of combinations of predefined
operations. This method is also applied by o3, achieving its high score through
the extensive use of computing power. However, for most problems in the
physical world and in the human domain, solutions cannot be tested in advance
and predefined operations are not available. Consequently, massive trialling of
predefined operations, as o3 does, cannot be a basis for AGI - instead, new
approaches are required that can reliably solve a wide variety of problems
without existing skills. To support this development, a new benchmark for
intelligence is outlined that covers a much higher diversity of unknown tasks
to be solved, thus enabling a comprehensive assessment of intelligence and of
progress towards AGI.
|
2501.07461
|
A Linear Parameter-Varying Framework for the Analysis of Time-Varying
Optimization Algorithms
|
math.OC cs.SY eess.SY
|
In this paper we propose a framework to analyze iterative first-order
optimization algorithms for time-varying convex optimization. We assume that
the temporal variability is caused by a time-varying parameter entering the
objective, which can be measured at the time of decision but whose future
values are unknown. We consider the case of strongly convex objective functions
with Lipschitz continuous gradients and address the class of running algorithms
where only one iteration per time change is performed. We model these
algorithms as discrete-time linear parameter-varying (LPV) systems in feedback
with a time-varying gradient. We leverage the approach of analyzing algorithms
as uncertain control interconnections with integral quadratic constraints
(IQCs) and generalize that framework to the time-varying case. We propose novel
IQCs that are capable of capturing the behavior of time-varying nonlinearities
and leverage techniques from the LPV literature to establish novel bounds on
the tracking error. Quantitative bounds can be computed by solving a
semi-definite program and can be interpreted as an input-to-state stability
result with respect to a disturbance signal which increases with the temporal
variability of the problem. As a departure from results in this research area,
our bounds introduce terms that can be interpreted as a temporal rate of change
in the cost function and the optimal value. We exemplify our main results with
numerical experiments that showcase how our analysis framework is able to
capture convergence rates of different first-order algorithms for time-varying
optimization through the choice of IQC and rate bounds.
|
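As a hedged illustration of the class of running algorithms described above (the notation below is assumed for exposition, not taken from the paper), one gradient iteration is performed per time change:

```latex
% One iteration per change of the parameter \theta_k:
x_{k+1} = x_k - \alpha \,\nabla_x f(x_k, \theta_k)
% This is the feedback interconnection of a linear (LPV) part,
% x_{k+1} = x_k - \alpha u_k, with the time-varying nonlinearity
% u_k = \nabla_x f(x_k, \theta_k), whose input-output behavior is
% constrained by integral quadratic constraints (IQCs).
```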
2501.07462
|
The Sense of Agency in Assistive Robotics Using Shared Autonomy
|
cs.RO
|
Sense of agency, a phenomenon from cognitive science representing the
experience of control over one's environment, is one factor that influences
people's preferences for robot assistance. However, in assistive robotics
literature, we often see paradigms that optimize measures like task success and
cognitive load, rather than sense of agency. In fact, prior work has found that
participants sometimes express a preference for paradigms, such as direct
teleoperation, which do not perform well with those other metrics but give more
control to the user. In this work, we focus on a subset of assistance paradigms
for manipulation called shared autonomy in which the system combines control
signals from the user and the automated control. We run a study to evaluate
sense of agency and show that higher robot autonomy during assistance leads to
improved task performance but a decreased sense of agency, indicating a
potential trade-off between task performance and sense of agency. From our
findings, we discuss the relation between sense of agency and optimality, and
we consider a proxy metric for a component of sense of agency which might
enable us to build systems that monitor and maintain sense of agency in real
time.
|
2501.07468
|
From Screens to Scenes: A Survey of Embodied AI in Healthcare
|
cs.AI
|
Healthcare systems worldwide face persistent challenges in efficiency,
accessibility, and personalization. Powered by modern AI technologies such as
multimodal large language models and world models, Embodied AI (EmAI)
represents a transformative frontier, offering enhanced autonomy and the
ability to interact with the physical world to address these challenges. As an
interdisciplinary and rapidly evolving research domain, "EmAI in healthcare"
spans diverse fields such as algorithms, robotics, and biomedicine. This
complexity underscores the importance of timely reviews and analyses to track
advancements, address challenges, and foster cross-disciplinary collaboration.
In this paper, we provide a comprehensive overview of the "brain" of EmAI for
healthcare, wherein we introduce foundational AI algorithms for perception,
actuation, planning, and memory, and focus on presenting the healthcare
applications spanning clinical interventions, daily care & companionship,
infrastructure support, and biomedical research. Despite its promise, the
development of EmAI for healthcare is hindered by critical challenges such as
safety concerns, gaps between simulation platforms and real-world applications,
the absence of standardized benchmarks, and uneven progress across
interdisciplinary domains. We discuss the technical barriers and explore
ethical considerations, offering a forward-looking perspective on the future of
EmAI in healthcare. A hierarchical framework of intelligent levels for EmAI
systems is also introduced to guide further development. By providing
systematic insights, this work aims to inspire innovation and practical
applications, paving the way for a new era of intelligent, patient-centered
healthcare.
|
2501.07473
|
Quantifying Polarization: A Comparative Study of Measures and Methods
|
cs.CY cs.SI physics.soc-ph
|
Political polarization, a key driver of social fragmentation, has drawn
increasing attention for its role in shaping online and offline discourse.
Despite significant efforts, accurately measuring polarization within
ideological distributions remains a challenge. This study evaluates five widely
used polarization measures, testing their strengths and weaknesses with
synthetic datasets and a real-world case study on YouTube discussions during
the 2020 U.S. Presidential Election. Building on these findings, we present a
novel adaptation of Kleinberg's burst detection algorithm to improve mode
detection in polarized distributions. By offering both a critical review and an
innovative methodological tool, this work advances the analysis of ideological
patterns in social media discourse.
|
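Kleinberg's burst detection admits a compact two-state form. The sketch below is a generic batched variant (the state rates, likelihood costs, and transition penalty are standard choices, not the mode-detection adaptation proposed in the paper), decoding the minimum-cost state sequence with Viterbi:

```python
import math

def burst_detect(r, d, s=2.0, gamma=1.0):
    """Two-state burst detection (batched Kleinberg-style variant).
    r[t]: relevant counts per window, d[t]: totals per window.
    Returns the min-cost state sequence (0 = baseline, 1 = burst)."""
    n = len(r)
    p0 = sum(r) / sum(d)         # baseline rate
    p1 = min(0.999, s * p0)      # elevated burst rate
    trans = gamma * math.log(n)  # penalty for entering the burst state

    def cost(t, p):  # negative binomial log-likelihood (constant dropped)
        return -(r[t] * math.log(p) + (d[t] - r[t]) * math.log(1 - p))

    best = [cost(0, p0), cost(0, p1) + trans]
    back = []
    for t in range(1, n):
        ptrs, new = [], []
        for j, p in enumerate((p0, p1)):
            cands = [best[i] + (trans if (i, j) == (0, 1) else 0.0)
                     for i in range(2)]
            i_best = min(range(2), key=lambda i: cands[i])
            ptrs.append(i_best)
            new.append(cands[i_best] + cost(t, p))
        back.append(ptrs)
        best = new

    state = min(range(2), key=lambda j: best[j])  # backtrack best path
    path = [state]
    for ptrs in reversed(back):
        state = ptrs[state]
        path.append(state)
    return path[::-1]
```

A 1 in the returned sequence marks a window decoded as bursty.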
2501.07474
|
Estimating Musical Surprisal in Audio
|
cs.SD cs.AI eess.AS
|
In modeling musical surprisal expectancy with computational methods, it has
been proposed to use the information content (IC) of one-step predictions from
an autoregressive model as a proxy for surprisal in symbolic music. With an
appropriately chosen model, the IC of musical events has been shown to
correlate with human perception of surprise and complexity aspects, including
tonal and rhythmic complexity. This work investigates whether an analogous
methodology can be applied to music audio. We train an autoregressive
Transformer model to predict compressed latent audio representations of a
pretrained autoencoder network. We verify learning effects by estimating the
decrease in IC with repetitions. We investigate the mean IC of musical segment
types (e.g., A or B) and find that segment types appearing later in a piece
have a higher IC than earlier ones on average. We investigate the IC's relation
to audio and musical features and find it correlated with timbral variations
and loudness and, to a lesser extent, with dissonance, rhythmic complexity, and
onset density. Finally, we investigate if
the IC can predict EEG responses to songs and thus model humans' surprisal in
music. We provide code for our method on github.com/sonycslparis/audioic.
|
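The surprisal proxy described above is the information content of each event under the model's one-step predictive distribution, IC(x_t) = -log2 p(x_t | x_<t). A minimal sketch (the probability values are hypothetical):

```python
import math

def information_content(probs):
    """IC (surprisal) in bits of observed events, given the model's
    predicted probability p(x_t | x_<t) for each event that occurred."""
    return [-math.log2(p) for p in probs]

# As a pattern repeats, an autoregressive model's predictions sharpen,
# so the IC of later repetitions drops (hypothetical probabilities):
ics = information_content([0.25, 0.5, 0.9, 0.97])
```

An event predicted with probability 0.25 carries 2 bits of surprisal; a near-certain event carries almost none.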
2501.07476
|
Encrypted Computation of Collision Probability for Secure Satellite
Conjunction Analysis
|
cs.CR cs.SY eess.SY
|
The computation of collision probability ($\mathcal{P}_c$) is crucial for
space environmentalism and sustainability by providing decision-making
knowledge that can prevent collisions between anthropogenic space objects.
However, the accuracy and precision of $\mathcal{P}_c$ computations are often
compromised by limitations in computational resources and data availability.
While significant improvements have been made in the computational aspects, the
rising concerns regarding the privacy of collaborative data sharing can be a
major limiting factor in future conjunction analysis and risk assessment,
especially as the space environment grows increasingly privatized, competitive,
and fraught with conflicting strategic interests. This paper argues that the
importance of privacy measures in space situational awareness (SSA) is
underappreciated, and regulatory and compliance measures currently in place are
not sufficient by themselves, presenting a significant gap.
To address this gap, we introduce a novel encrypted architecture that
leverages advanced cryptographic techniques, including homomorphic encryption
(HE) and multi-party computation (MPC), to safeguard the privacy of entities
computing space sustainability metrics, inter alia, $\mathcal{P}_c$. Our
proposed protocol, Encrypted $\mathcal{P}_c$, integrates the Monte Carlo
estimation algorithm with cryptographic solutions, enabling secure collision
probability computation without exposing sensitive or proprietary information.
This research advances secure conjunction analysis by developing a secure MPC
protocol for $\mathcal{P}_c$ computation and highlights the need for innovative
protocols to ensure a more secure and cooperative SSA landscape.
|
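For reference, the underlying (unencrypted) Monte Carlo estimator of $\mathcal{P}_c$ can be sketched as below: sample relative positions at closest approach from the combined positional uncertainty and count the fraction falling within the combined hard-body radius. This is a generic sketch under assumed inputs, not the paper's protocol, whose point is to run such an estimate under HE/MPC without revealing the inputs:

```python
import numpy as np

def monte_carlo_pc(mu, cov, hbr, n=200_000, seed=0):
    """Monte Carlo collision probability: the fraction of
    relative-position samples (mean `mu`, covariance `cov`) that fall
    within the combined hard-body radius `hbr`."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mu, cov, size=n)
    hits = np.linalg.norm(samples, axis=1) < hbr
    return hits.mean()
```

In the encrypted setting, the sampling and counting would be carried out on ciphertexts or secret shares contributed by the operators.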
2501.07478
|
3DGS-to-PC: Convert a 3D Gaussian Splatting Scene into a Dense Point
Cloud or Mesh
|
cs.GR cs.CV
|
3D Gaussian Splatting (3DGS) excels at producing highly detailed 3D
reconstructions, but these scenes often require specialised renderers for
effective visualisation. In contrast, point clouds are a widely used 3D
representation and are compatible with most popular 3D processing software, yet
converting 3DGS scenes into point clouds is a complex challenge. In this work
we introduce 3DGS-to-PC, a flexible and highly customisable framework that is
capable of transforming 3DGS scenes into dense, high-accuracy point clouds. We
sample points probabilistically from each Gaussian as a 3D density function. We
additionally threshold new points using the Mahalanobis distance to the
Gaussian centre, preventing extreme outliers. The result is a point cloud that
closely represents the shape encoded into the 3D Gaussian scene. Individual
Gaussians use spherical harmonics to adapt colours depending on view, and each
point may contribute only subtle colour hints to the resulting rendered scene.
To avoid spurious or incorrect colours that do not fit with the final point
cloud, we recalculate Gaussian colours via a customised image rendering
approach, assigning each Gaussian the colour of the pixel to which it
contributes most across all views. 3DGS-to-PC also supports mesh generation
through Poisson Surface Reconstruction, applied to points sampled from
predicted surface Gaussians. This allows coloured meshes to be generated from
3DGS scenes without the need for re-training. This package is highly
customisable and capable of simple integration into existing 3DGS pipelines.
3DGS-to-PC provides a powerful tool for converting 3DGS data into point cloud
and surface-based formats.
|
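The probabilistic sampling and Mahalanobis-thresholding steps described above can be sketched for a single Gaussian as follows (the threshold value and function name are illustrative assumptions, not taken from the 3DGS-to-PC package):

```python
import numpy as np

def sample_gaussian_points(mean, cov, n_points, max_mahal=3.0, seed=0):
    """Sample a point cloud from one 3D Gaussian, treating it as a 3D
    density, then drop extreme outliers whose Mahalanobis distance to
    the Gaussian centre exceeds `max_mahal` (an assumed threshold)."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(mean, cov, size=n_points)
    diff = pts - mean
    cov_inv = np.linalg.inv(cov)
    mahal = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
    return pts[mahal < max_mahal]
```

Repeating this over every Gaussian in the scene, with per-Gaussian point budgets, yields the dense point cloud.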
2501.07482
|
TiEBe: A Benchmark for Assessing the Current Knowledge of Large Language
Models
|
cs.CL cs.AI
|
In a rapidly evolving knowledge landscape, and with the increasing adoption of
large language models, a need has emerged to keep these models continuously
updated with current events. While existing benchmarks evaluate general factual
recall, they often overlook two critical aspects: the ability of models to
integrate evolving knowledge through continual learning and the significant
regional disparities in their performance. To address these gaps, we introduce
the Timely Events Benchmark (TiEBe), a dataset containing over 11,000
question-answer pairs focused on globally and regionally significant events.
TiEBe leverages structured retrospective data from Wikipedia, enabling
continuous updates to assess LLMs' knowledge of evolving global affairs and
their understanding of events across different regions. Our benchmark
demonstrates that LLMs exhibit substantial geographic disparities in factual
recall, emphasizing the need for more balanced global knowledge representation.
Furthermore, TiEBe serves as a tool for evaluating continual learning
strategies, providing insights into models' ability to acquire new information
without forgetting past knowledge.
|
2501.07486
|
Smart Learning in the 21st Century: Advancing Constructionism Across
Three Digital Epochs
|
cs.CY cs.AI
|
This article explores the evolution of constructionism as an educational
framework, tracing its relevance and transformation across three pivotal eras:
the advent of personal computing, the networked society, and the current era of
generative AI. Rooted in Seymour Papert's constructionist philosophy, this study
examines how constructionist principles align with the expanding role of
digital technology in personal and collective learning. We discuss the
transformation of educational environments from hierarchical instructionism to
constructionist models that emphasize learner autonomy and interactive,
creative engagement. Central to this analysis is the concept of an expanded
personality, wherein digital tools and AI integration fundamentally reshape
individual self-perception and social interactions. By integrating
constructionism into the paradigm of smart education, we propose it as a
foundational approach to personalized and democratized learning. Our findings
underscore constructionism's enduring relevance in navigating the complexities of
technology-driven education, providing insights for educators and policymakers
seeking to harness digital innovations to foster adaptive, student-centered
learning experiences.
|
2501.07487
|
Data and System Perspectives of Sustainable Artificial Intelligence
|
cs.AI
|
Sustainable AI is a subfield of AI concerned with developing and using AI
systems in ways that reduce environmental impact and achieve sustainability.
Sustainable AI is increasingly important given that training of and inference
with AI models such as large language models consume a
large amount of computing power. In this article, we discuss current issues,
opportunities and example solutions for addressing these issues, and future
challenges to tackle, from the data and system perspectives, related to data
acquisition, data processing, and AI model training and inference.
|
2501.07493
|
Exploring and Mitigating Adversarial Manipulation of Voting-Based
Leaderboards
|
cs.LG cs.CR
|
It is now common to evaluate Large Language Models (LLMs) by having humans
manually vote to evaluate model outputs, in contrast to typical benchmarks that
evaluate knowledge or skill at some particular task. Chatbot Arena, the most
popular benchmark of this type, ranks models by asking users to select the
better response between two randomly selected models (without revealing which
model was responsible for the generations). These platforms are widely trusted
as a fair and accurate measure of LLM capabilities. In this paper, we show that
if bot protection and other defenses are not implemented, these voting-based
benchmarks are potentially vulnerable to adversarial manipulation.
Specifically, we show that an attacker can alter the leaderboard (to promote
their favorite model or demote competitors) at the cost of roughly a thousand
votes (verified in a simulated, offline version of Chatbot Arena). Our attack
consists of two steps: first, we show how an attacker can determine which model
was used to generate a given reply with more than $95\%$ accuracy; and then,
the attacker can use this information to consistently vote for (or against) a
target model. Working with the Chatbot Arena developers, we identify, propose,
and implement mitigations to improve the robustness of Chatbot Arena against
adversarial manipulation, which, based on our analysis, substantially increases
the cost of such attacks. Some of these defenses were present before our
collaboration, such as bot protection with Cloudflare, malicious user
detection, and rate limiting. Others, including reCAPTCHA and login, are being
integrated to strengthen the security in Chatbot Arena.
|
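The economics of such an attack can be illustrated with a toy Elo-style simulation (all parameters here, including the detection accuracy, K-factor, and two-model setup, are simplifying assumptions; the paper's analysis concerns the real Chatbot Arena ranking):

```python
import random

def simulate_attack(n_votes=1000, detect_acc=0.95, k=4.0, seed=0):
    """Toy simulation of adversarial voting against an Elo-style
    leaderboard. The attacker detects which of the two anonymous
    models is the target with probability `detect_acc` and, when
    detected, always votes against it."""
    rng = random.Random(seed)
    ratings = {"target": 1000.0, "other": 1000.0}
    for _ in range(n_votes):
        if rng.random() < detect_acc:
            winner, loser = "other", "target"  # demote the target
        else:
            winner, loser = rng.sample(list(ratings), 2)  # random vote
        # Standard Elo update for the pairwise outcome.
        exp_w = 1 / (1 + 10 ** ((ratings[loser] - ratings[winner]) / 400))
        ratings[winner] += k * (1 - exp_w)
        ratings[loser] -= k * (1 - exp_w)
    return ratings
```

Even with imperfect model detection, a stream of targeted votes steadily demotes the target model.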
2501.07496
|
Aligning First, Then Fusing: A Novel Weakly Supervised Multimodal
Violence Detection Method
|
cs.CV
|
Weakly supervised violence detection refers to the technique of training
models to identify violent segments in videos using only video-level labels.
Among these approaches, multimodal violence detection, which integrates
modalities such as audio and optical flow, holds great potential. Existing
methods in this domain primarily focus on designing multimodal fusion models to
address modality discrepancies. In contrast, we take a different approach,
leveraging the inherent discrepancies across modalities in violence event
representation to propose a novel multimodal semantic feature alignment method.
This method sparsely maps the semantic features of local, transient, and less
informative modalities (such as audio and optical flow) into the more
informative RGB semantic feature space. Through an iterative process, the
method identifies a suitable non-zero feature matching subspace and aligns the
modality-specific event representations based on this subspace, enabling the
full exploitation of information from all modalities during the subsequent
modality fusion stage. Building on this, we design a new weakly supervised
violence detection framework that consists of unimodal multiple-instance
learning for extracting unimodal semantic features, multimodal alignment,
multimodal fusion, and final detection. Experimental results on benchmark
datasets demonstrate the effectiveness of our method, achieving an average
precision (AP) of 86.07% on the XD-Violence dataset. Our code is available at
https://github.com/xjpp2016/MAVD.
|
2501.07498
|
Computing Safety Margins of Parameterized Nonlinear Systems for
Vulnerability Assessment via Trajectory Sensitivities
|
eess.SY cs.SY
|
Physical systems experience nonlinear disturbances which have the potential
to disrupt desired behavior. For a particular disturbance, whether or not the
system recovers from the disturbance to a desired stable equilibrium point
depends on system parameter values, which are typically uncertain and
time-varying. Therefore, to quantify proximity to vulnerability we define the
safety margin to be the smallest change in parameter values from a nominal
value such that the system will no longer be able to recover from the
disturbance. Safety margins are valuable but challenging to compute as related
methods, such as those for robust region of attraction estimation, are often
either overly conservative or computationally intractable for high dimensional
systems. Recently, we developed algorithms to compute safety margins
efficiently and non-conservatively by exploiting the large sensitivity of the
system trajectory near the region of attraction boundary to small
perturbations. Although these algorithms have enjoyed empirical success, they
lack theoretical guarantees that would ensure their generalizability. This work
develops a novel characterization of safety margins in terms of trajectory
sensitivities, and uses this to derive well-posedness and convergence
guarantees for these algorithms, enabling their generalizability and successful
application to a large class of nonlinear systems.
|
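As a point of reference for the quantity being computed (not the paper's sensitivity-based algorithms), the safety margin along a fixed parameter direction can be bracketed by naive bisection on a recover/not-recover predicate; the predicate and values below are hypothetical:

```python
def safety_margin(recovers, p0, direction, hi=10.0, tol=1e-6):
    """Bisection sketch: smallest change from nominal parameter `p0`
    along `direction` beyond which the system no longer recovers.
    `recovers(p)` would wrap a full trajectory simulation in practice."""
    lo = 0.0
    assert recovers(p0 + lo * direction) and not recovers(p0 + hi * direction)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if recovers(p0 + mid * direction):
            lo = mid
        else:
            hi = mid
    return lo  # change magnitude at the recovery boundary

# Toy predicate: the system recovers only while the parameter stays
# below a critical value of 3.7 (hypothetical).
margin = safety_margin(lambda p: p < 3.7, 1.0, 1.0)  # ~2.7
```

Each evaluation of `recovers` costs one full simulation, which is the expense the trajectory-sensitivity algorithms above are designed to avoid.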
2501.07499
|
Three-view Focal Length Recovery From Homographies
|
cs.CV
|
In this paper, we propose a novel approach for recovering focal lengths from
three-view homographies. By examining the consistency of normal vectors between
two homographies, we derive new explicit constraints between the focal lengths
and homographies using an elimination technique. We demonstrate that three-view
homographies provide two additional constraints, enabling the recovery of one
or two focal lengths. We discuss four possible cases: three cameras with an
unknown equal focal length; three cameras with two different unknown focal
lengths; and three cameras where one focal length is known and the other two
have equal or different unknown focal lengths. All the
problems can be converted into solving polynomials in one or two unknowns,
which can be efficiently solved using the Sturm sequence or the hidden variable
technique. Evaluation using both synthetic and real data shows that the
proposed solvers are both faster and more accurate than methods relying on
existing two-view solvers. The code and data are available on
https://github.com/kocurvik/hf
|
2501.07502
|
RbRL2.0: Integrated Reward and Policy Learning for Rating-based
Reinforcement Learning
|
cs.LG cs.AI
|
Reinforcement learning (RL), a common tool in decision making, learns
policies from various experiences based on the associated cumulative
return/rewards without treating them differently. In contrast, humans often
learn to distinguish from different levels of performance and extract the
underlying trends towards improving their decision making for best performance.
Motivated by this, this paper proposes a novel RL method that mimics humans'
decision making process by differentiating among collected experiences for
effective policy learning. The main idea is to extract important directional
information from experiences with different performance levels, named ratings,
so that policies can be updated towards desired deviation from these
experiences with different ratings. Specifically, we propose a new policy loss
function that penalizes distribution similarities between the current policy
and failed experiences with different ratings, and assign different weights to
the penalty terms based on the rating classes. Meanwhile, reward learning from
these rated samples can be integrated with the new policy loss towards an
integrated reward and policy learning from rated samples. Optimizing the
integrated reward and policy loss function will lead to the discovery of
directions for policy improvement towards maximizing cumulative rewards,
penalizing similarity to the lowest performance level most and to the highest
least. To evaluate the effectiveness of the proposed method, we
present results for experiments on a few typical environments that show
improved convergence and overall performance over the existing rating-based
reinforcement learning method with only reward learning.
|
2501.07507
|
Inductive Learning of Robot Task Knowledge from Raw Data and Online
Expert Feedback
|
cs.AI cs.LO cs.RO
|
The increasing level of autonomy of robots poses challenges of trust and
social acceptance, especially in human-robot interaction scenarios. This
requires an interpretable implementation of robotic cognitive capabilities,
possibly based on formal methods as logics for the definition of task
specifications. However, prior knowledge is often unavailable in complex
realistic scenarios.
In this paper, we propose an offline algorithm based on inductive logic
programming from noisy examples to extract task specifications (i.e., action
preconditions, constraints and effects) directly from raw data of few
heterogeneous (i.e., not repetitive) robotic executions. Our algorithm
leverages the output of any unsupervised action identification algorithm
from video-kinematic recordings. Combining it with the definition of very
basic, almost task-agnostic, commonsense concepts about the environment, which
contribute to the interpretability of our methodology, we are able to learn
logical axioms encoding preconditions of actions, as well as their effects in
the event calculus paradigm. Since the quality of learned specifications
depends mainly on the accuracy of the action identification algorithm, we also
propose an online framework for incremental refinement of task knowledge from
user feedback, guaranteeing safe execution. Results on a standard manipulation
task and on a benchmark for user training in the safety-critical surgical
robotics scenario show the robustness and the data- and time-efficiency of our
methodology, with promising results towards scalability to more complex
domains.
|
2501.07508
|
Improving DeFi Accessibility through Efficient Liquidity Provisioning
with Deep Reinforcement Learning
|
q-fin.CP cs.LG
|
This paper applies deep reinforcement learning (DRL) to optimize liquidity
provisioning in Uniswap v3, a decentralized finance (DeFi) protocol
implementing an automated market maker (AMM) model with concentrated liquidity.
We model the liquidity provision task as a Markov Decision Process (MDP) and
train an active liquidity provider (LP) agent using the Proximal Policy
Optimization (PPO) algorithm. The agent dynamically adjusts liquidity positions
by using information about price dynamics to balance fee maximization and
impermanent loss mitigation. We use a rolling window approach for training and
testing, reflecting realistic market conditions and regime shifts. This study
compares the data-driven performance of the DRL-based strategy against common
heuristics adopted by small retail LP actors that do not systematically modify
their liquidity positions. By promoting more efficient, data-driven liquidity
management, this work aims to make DeFi markets more accessible, inclusive, and
user-friendly for a broader range of participants.
|
2501.07515
|
The Paradox of Success in Evolutionary and Bioinspired Optimization:
Revisiting Critical Issues, Key Studies, and Methodological Pathways
|
cs.NE cs.AI
|
Evolutionary and bioinspired computation are crucial for efficiently
addressing complex optimization problems across diverse application domains. By
mimicking processes observed in nature, like evolution itself, these algorithms
offer innovative solutions beyond the reach of traditional optimization
methods. They excel at finding near-optimal solutions in large, complex search
spaces, making them invaluable in numerous fields. However, both areas are
plagued by challenges at their core, including inadequate benchmarking,
problem-specific overfitting, insufficient theoretical grounding, and
superfluous proposals justified only by their biological metaphor. This
overview recapitulates and analyzes in depth the criticisms concerning the lack
of innovation and rigor in experimental studies within the field. To this end,
we examine the judgmental positions of the existing literature in an informed
attempt to guide the research community toward directions of solid contribution
and advancement in these areas. We summarize guidelines for the design of
evolutionary and bioinspired optimizers, the development of experimental
comparisons, and the derivation of novel proposals that take a step further in
the field. We provide a brief note on automating the process of creating these
algorithms, which may help align metaheuristic optimization research with its
primary objective (solving real-world problems), provided that our identified
pathways are followed. Our conclusions underscore the need for a sustained push
towards innovation and the enforcement of methodological rigor in prospective
studies to fully realize the potential of these advanced computational
techniques.
|
2501.07516
|
Determining Disturbance Recovery Conditions by Inverse Sensitivity
Minimization
|
eess.SY cs.SY
|
Power systems naturally experience disturbances, some of which can damage
equipment and disrupt consumers. It is important to quickly assess the likely
consequences of credible disturbances and take preventive action, if necessary.
However, assessing the impact of potential disturbances is challenging because
many of the influential factors, such as loading patterns, controller settings
and load dynamics, are not precisely known. To address this issue, the paper
introduces the concept of parameter-space recovery regions. For each
disturbance, the corresponding recovery region is the region of parameter space
for which the system will recover to the desired operating point. The boundary
of the recovery region establishes the separation between parameter values that
result in trouble-free recovery and those that incur undesirable non-recovery.
The safety margin for a given set of parameter values is defined as the
smallest distance (in parameter space) between the given values and the
recovery boundary. Novel numerical algorithms with theoretical guarantees are
presented for efficiently computing recovery boundaries and safety margins.
Unlike prior methods, which tend to be overly conservative and restricted to
low-dimensional parameter spaces, these methods compute safety margins to
arbitrary user-specified accuracy and do so efficiently in high-dimensional
parameter space. The efficacy of the methods is demonstrated using the IEEE
39-bus benchmark power system, where safety margins are computed for cases that
consider up to 86 parameters, and reveal unexpected safety implications that
would not have been observed otherwise.
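As a concrete (much cruder) illustration of the safety-margin concept, one can probe the recovery boundary along sampled rays by bisection; `recovers` is a hypothetical simulation predicate, and this brute-force sampling lacks the efficiency and guarantees of the paper's inverse-sensitivity method:

```python
import numpy as np

def boundary_crossing(recovers, p0, direction, t_max=10.0, tol=1e-6):
    """Distance along `direction` from a recovering point p0 to the
    recovery boundary, found by bisection. `recovers(p)` is a
    user-supplied predicate (e.g., wraps a time-domain simulation)."""
    d = direction / np.linalg.norm(direction)
    lo, hi = 0.0, t_max
    if recovers(p0 + hi * d):      # no crossing found within t_max
        return np.inf
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if recovers(p0 + mid * d):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def safety_margin(recovers, p0, directions):
    """Crude safety margin: smallest boundary distance over sampled rays."""
    return min(boundary_crossing(recovers, p0, d) for d in directions)
```

For a toy recovery region (the unit ball), the margin from the origin is 1 along every ray, which the bisection recovers to the requested tolerance.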
|
2501.07523
|
Parallel Key-Value Cache Fusion for Position Invariant RAG
|
cs.AI cs.CL
|
Recent advancements in Large Language Models (LLMs) underscore the necessity
of Retrieval Augmented Generation (RAG) to leverage external information.
However, LLMs are sensitive to the position of relevant information within
contexts and tend to generate incorrect responses when such information is
placed in the middle, known as the `Lost in the Middle' phenomenon. In this paper,
we introduce a framework that generates consistent outputs for decoder-only
models, irrespective of the input context order. Experimental results for three
open domain question answering tasks demonstrate position invariance, where the
model is not sensitive to input context order, and superior robustness to
irrelevant passages compared to prevailing approaches for RAG pipelines.
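The position-invariance idea can be caricatured in a few lines: if each passage is encoded independently (always starting from position zero) and the resulting caches are fused with an order-invariant operation, the output cannot depend on the order in which contexts arrive. The "encoder" below is a toy stand-in, not the paper's architecture:

```python
import numpy as np

def encode(passage_tokens, W):
    # Toy stand-in encoder: each passage is processed independently,
    # always from position 0, so its representation cannot depend on
    # where the passage would sit in a concatenated context.
    pos = np.arange(len(passage_tokens))[:, None]
    return (passage_tokens[:, None] * W + pos).mean(axis=0)

def fuse(passages, W):
    # Order-invariant fusion of the independently built representations
    # (here a plain mean, standing in for parallel KV-cache fusion).
    return np.mean([encode(p, W) for p in passages], axis=0)
```

Because `fuse` is a mean over per-passage encodings, permuting the passages leaves the result unchanged, which is the property the framework targets.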
|
2501.07525
|
RadAlign: Advancing Radiology Report Generation with Vision-Language
Concept Alignment
|
cs.CV cs.AI cs.LG
|
Automated chest radiograph interpretation requires both accurate disease
classification and detailed radiology report generation, presenting a
significant challenge in the clinical workflow. Current approaches either focus
on classification accuracy at the expense of interpretability or generate
detailed but potentially unreliable reports through image captioning
techniques. In this study, we present RadAlign, a novel framework that combines
the predictive accuracy of vision-language models (VLMs) with the reasoning
capabilities of large language models (LLMs). Inspired by the radiologist's
workflow, RadAlign first employs a specialized VLM to align visual features
with key medical concepts, achieving superior disease classification with an
average AUC of 0.885 across multiple diseases. These recognized medical
conditions, represented as text-based concepts in the aligned visual-language
space, are then used to prompt LLM-based report generation. Enhanced by a
retrieval-augmented generation mechanism that grounds outputs in similar
historical cases, RadAlign delivers superior report quality with a GREEN score
of 0.678, outperforming state-of-the-art methods' 0.634. Our framework
maintains strong clinical interpretability while reducing hallucinations,
advancing automated medical imaging and report analysis through integrated
predictive and generative AI. Code is available at
https://github.com/difeigu/RadAlign.
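The retrieval-augmented prompting step might look roughly like the sketch below; the function names, the embedding representation, and the prompt wording are all assumptions for illustration, not RadAlign's actual interface:

```python
import numpy as np

def retrieve_similar_cases(query_emb, case_embs, case_reports, k=2):
    """Hypothetical retrieval step: rank historical cases by cosine
    similarity to the current study's concept embedding."""
    sims = case_embs @ query_emb / (
        np.linalg.norm(case_embs, axis=1) * np.linalg.norm(query_emb))
    top = np.argsort(-sims)[:k]
    return [case_reports[i] for i in top]

def build_prompt(concepts, similar_reports):
    # Concepts recognized by the VLM become text in the LLM prompt,
    # grounded by retrieved prior reports (wording is illustrative).
    context = "\n".join(f"- {r}" for r in similar_reports)
    return (f"Findings: {', '.join(concepts)}.\n"
            f"Similar prior reports:\n{context}\n"
            f"Write a radiology report.")
```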
|
2501.07530
|
IP-FaceDiff: Identity-Preserving Facial Video Editing with Diffusion
|
cs.CV
|
Facial video editing has become increasingly important for content creators,
enabling the manipulation of facial expressions and attributes. However,
existing models encounter challenges such as poor editing quality, high
computational costs and difficulties in preserving facial identity across
diverse edits. Additionally, these models are often constrained to editing
predefined facial attributes, limiting their flexibility to diverse editing
prompts. To address these challenges, we propose a novel facial video editing
framework that leverages the rich latent space of pre-trained text-to-image
(T2I) diffusion models and fine-tunes them specifically for facial video editing
tasks. Our approach introduces a targeted fine-tuning scheme that enables high
quality, localized, text-driven edits while ensuring identity preservation
across video frames. Additionally, by using pre-trained T2I models during
inference, our approach significantly reduces editing time by 80%, while
maintaining temporal consistency throughout the video sequence. We evaluate the
effectiveness of our approach through extensive testing across a wide range of
challenging scenarios, including varying head poses, complex action sequences,
and diverse facial expressions. Our method consistently outperforms existing
techniques, demonstrating superior performance across a broad set of metrics
and benchmarks.
|
2501.07531
|
Evaluating Agent-based Program Repair at Google
|
cs.SE cs.AI
|
Agent-based program repair offers to automatically resolve complex bugs
end-to-end by combining the planning, tool use, and code generation abilities
of modern LLMs. Recent work has explored the use of agent-based repair
approaches on the popular open-source SWE-Bench, a collection of bugs from
highly-rated GitHub Python projects. In addition, various agentic approaches
such as SWE-Agent have been proposed to solve bugs in this benchmark. This
paper explores the viability of using an agentic approach to address bugs in an
enterprise context. To investigate this, we curate an evaluation set of 178
bugs drawn from Google's issue tracking system. This dataset spans both
human-reported (78) and machine-reported bugs (100).
To establish a repair performance baseline on this benchmark, we implement
Passerine, an agent similar in spirit to SWE-Agent that can work within
Google's development environment. We show that with 20 trajectory samples and
Gemini 1.5 Pro, Passerine can produce a patch that passes bug tests (i.e.,
plausible) for 73% of machine-reported and 25.6% of human-reported bugs in our
evaluation set. After manual examination, we found that 43% of machine-reported
bugs and 17.9% of human-reported bugs have at least one patch that is
semantically equivalent to the ground-truth patch.
These results establish a baseline on an industrially relevant benchmark,
which as we show, contains bugs drawn from a different distribution -- in terms
of language diversity, size, and spread of changes, etc. -- compared to those
in the popular SWE-Bench dataset.
|
2501.07532
|
Investigating Large Language Models in Inferring Personality Traits from
User Conversations
|
cs.CL
|
Large Language Models (LLMs) are demonstrating remarkable human-like
capabilities across diverse domains, including psychological assessment. This
study evaluates whether LLMs, specifically GPT-4o and GPT-4o mini, can infer
Big Five personality traits and generate Big Five Inventory-10 (BFI-10) item
scores from user conversations under zero-shot prompting conditions. Our
findings reveal that incorporating an intermediate step--prompting for BFI-10
item scores before calculating traits--enhances accuracy and aligns more
closely with the gold standard than direct trait inference. This structured
approach underscores the importance of leveraging psychological frameworks in
improving predictive precision. Additionally, a group comparison based on
depressive symptom presence revealed differential model performance.
Participants were categorized into two groups: those experiencing at least one
depressive symptom and those without symptoms. GPT-4o mini demonstrated
heightened sensitivity to depression-related shifts in traits such as
Neuroticism and Conscientiousness within the symptom-present group, whereas
GPT-4o exhibited strengths in nuanced interpretation across groups. These
findings underscore the potential of LLMs to analyze real-world psychological
data effectively, offering a valuable foundation for interdisciplinary research
at the intersection of artificial intelligence and psychology.
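The intermediate step (items first, traits second) relies on the BFI-10's fixed scoring key: each Big Five trait averages two items, one of them reverse-scored on the 1-5 scale. Assuming the standard Rammstedt-John key, the trait computation is mechanical:

```python
def bfi10_traits(items):
    """Score Big Five traits from BFI-10 item responses (1-5 scale).

    `items` maps item number (1-10) to the response. Each trait averages
    two items, one reverse-scored (6 - response), following the standard
    BFI-10 scoring key.
    """
    rev = lambda x: 6 - x
    return {
        "Extraversion":      (rev(items[1]) + items[6]) / 2,
        "Agreeableness":     (items[2] + rev(items[7])) / 2,
        "Conscientiousness": (rev(items[3]) + items[8]) / 2,
        "Neuroticism":       (rev(items[4]) + items[9]) / 2,
        "Openness":          (rev(items[5]) + items[10]) / 2,
    }
```

In the two-step setup, the LLM is prompted for the ten item scores and this deterministic computation produces the traits, rather than asking the model for trait values directly.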
|
2501.07533
|
Confident Pseudo-labeled Diffusion Augmentation for Canine Cardiomegaly
Detection
|
cs.CV
|
Canine cardiomegaly, marked by an enlarged heart, poses serious health risks
if undetected, requiring accurate diagnostic methods. Current detection models
often rely on small, poorly annotated datasets and struggle to generalize
across diverse imaging conditions, limiting their real-world applicability. To
address these issues, we propose a Confident Pseudo-labeled Diffusion
Augmentation (CDA) model for identifying canine cardiomegaly. Our approach
addresses the challenge of limited high-quality training data by employing
diffusion models to generate synthetic X-ray images and annotate Vertebral
Heart Score key points, thereby expanding the dataset. We also employ a
pseudo-labeling strategy with Monte Carlo Dropout to select high-confidence
labels, refine the synthetic dataset, and improve accuracy. Iteratively
incorporating these labels enhances the model's performance, overcoming the
limitations of existing approaches. Experimental results show that the CDA
model outperforms traditional methods, achieving state-of-the-art accuracy in
canine cardiomegaly detection. The code implementation is available at
https://github.com/Shira7z/CDA.
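The confidence-selection step can be sketched with plain arrays: run T stochastic forward passes with dropout kept active, then keep only samples whose predictive spread is small. The uncertainty measure and threshold below are illustrative assumptions:

```python
import numpy as np

def select_confident(mc_preds, threshold):
    """Select pseudo-labels with low Monte Carlo predictive spread.

    mc_preds: array of shape (T, N, D) - T stochastic forward passes
    (dropout active) over N samples with D-dim outputs (e.g., VHS
    key-point coordinates). Returns mean predictions and a boolean mask
    of samples whose per-sample uncertainty falls below `threshold`.
    """
    mc_preds = np.asarray(mc_preds)
    mean = mc_preds.mean(axis=0)
    # Uncertainty: average standard deviation across output dims.
    uncertainty = mc_preds.std(axis=0).mean(axis=1)
    return mean, uncertainty < threshold
```

Only the masked samples would then be added to the synthetic training set for the next iteration.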
|
2501.07534
|
Investigating Map-Based Path Loss Models: A Study of Feature
Representations in Convolutional Neural Networks
|
cs.LG eess.SP
|
Path loss prediction is a beneficial tool for efficient use of the radio
frequency spectrum. Building on prior research on high-resolution map-based
path loss models, this paper studies convolutional neural network input
representations in more detail. We investigate different methods of
representing scalar features in convolutional neural networks. Specifically, we
compare using frequency and distance as input channels to convolutional layers
or as scalar inputs to regression layers. We assess model performance using
three different feature configurations and find that representing scalar
features as image channels results in the strongest generalization.
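The best-performing configuration (scalars as image channels) amounts to broadcasting each scalar into a constant-valued channel stacked with the map input; a minimal sketch, with the map contents and channel order as assumptions:

```python
import numpy as np

def build_input(height_map, frequency, distance):
    """Stack a map input (e.g., terrain/clutter heights) with scalar
    features broadcast as constant image channels, the representation
    that generalized best in the study."""
    h, w = height_map.shape
    freq_ch = np.full((h, w), frequency)
    dist_ch = np.full((h, w), distance)
    return np.stack([height_map, freq_ch, dist_ch])  # shape (3, h, w)
```

The alternative configuration would instead pass `frequency` and `distance` directly to the regression head alongside the convolutional features.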
|
2501.07536
|
ML Mule: Mobile-Driven Context-Aware Collaborative Learning
|
cs.LG cs.HC
|
Artificial intelligence has been integrated into nearly every aspect of daily
life, powering applications from object detection with computer vision to large
language models for writing emails and compact models in smart homes. These
machine learning models cater to individual users but are often detached from
them, as they are typically stored and processed in centralized data centers.
This centralized approach raises privacy concerns, incurs high infrastructure
costs, and struggles with personalization. Federated and fully decentralized
learning methods have been proposed to address these issues, but they still
depend on centralized servers or face slow convergence due to communication
constraints. To overcome these challenges, we propose ML Mule, an approach that
utilizes individual mobile devices as 'Mules' to train and transport model
snapshots as they move through physical spaces, sharing these models with the
physical 'Spaces' they inhabit. This method implicitly forms affinity groups
among devices associated with users who share particular spaces, enabling
collaborative model evolution while protecting users' privacy. Our approach
addresses several major shortcomings of traditional, federated, and fully
decentralized learning systems. The proposed framework represents a new class
of machine learning methods that are more robust, distributed, and
personalized, bringing the field closer to realizing the original vision of
intelligent, adaptive, and genuinely context-aware smart environments. The
results show that ML Mule converges faster and achieves higher model accuracy
compared to other existing methods.
|
2501.07542
|
Imagine while Reasoning in Space: Multimodal Visualization-of-Thought
|
cs.CL cs.CV cs.LG
|
Chain-of-Thought (CoT) prompting has proven highly effective for enhancing
complex reasoning in Large Language Models (LLMs) and Multimodal Large Language
Models (MLLMs). Yet, it struggles in complex spatial reasoning tasks.
Human cognition, by contrast, extends beyond language alone, enabling the
remarkable capability to think in both words and images. Inspired by this
mechanism, we propose a new reasoning paradigm, Multimodal
Visualization-of-Thought (MVoT). It enables visual thinking in MLLMs by
generating image visualizations of their reasoning traces. To ensure
high-quality visualization, we introduce token discrepancy loss into
autoregressive MLLMs. This innovation significantly improves both visual
coherence and fidelity. We validate this approach through several dynamic
spatial reasoning tasks. Experimental results reveal that MVoT demonstrates
competitive performance across tasks. Moreover, it exhibits robust and reliable
improvements in the most challenging scenarios where CoT fails. Ultimately,
MVoT establishes new possibilities for complex reasoning tasks where visual
thinking can effectively complement verbal reasoning.
|
2501.07554
|
SST-EM: Advanced Metrics for Evaluating Semantic, Spatial and Temporal
Aspects in Video Editing
|
cs.CV cs.CL
|
Video editing models have advanced significantly, but evaluating their
performance remains challenging. Traditional metrics, such as CLIP text and
image scores, often fall short: text scores are limited by inadequate training
data and hierarchical dependencies, while image scores fail to assess temporal
consistency. We present SST-EM (Semantic, Spatial, and Temporal Evaluation
Metric), a novel evaluation framework that leverages modern Vision-Language
Models (VLMs), Object Detection, and Temporal Consistency checks. SST-EM
comprises four components: (1) semantic extraction from frames using a VLM, (2)
primary object tracking with Object Detection, (3) focused object refinement
via an LLM agent, and (4) temporal consistency assessment using a Vision
Transformer (ViT). These components are integrated into a unified metric with
weights derived from human evaluations and regression analysis. The name SST-EM
reflects its focus on Semantic, Spatial, and Temporal aspects of video
evaluation. SST-EM provides a comprehensive evaluation of semantic fidelity and
temporal smoothness in video editing. The source code is available in the
\textbf{\href{https://github.com/custommetrics-sst/SST_CustomEvaluationMetrics.git}{GitHub
Repository}}.
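The final metric is a weighted combination of the four component scores; a minimal sketch, where the component names and the example weights are placeholders (the paper fits its weights to human evaluations by regression):

```python
def sst_em_score(components, weights):
    """Weighted combination of component scores into a single metric.

    `components` and `weights` are dicts over the same keys (e.g.,
    semantic, object-tracking, refinement, temporal); weights must sum
    to one. The published weights come from regression against human
    judgments; any values used here are illustrative.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * components[k] for k in components)
```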
|
2501.07555
|
Dynamic Prototype Rehearsal for Continual Learning in ECG Arrhythmia
Detection
|
cs.LG
|
Continual Learning (CL) methods aim to learn from a sequence of tasks while
avoiding the challenge of forgetting previous knowledge. We present DREAM-CL, a
novel CL method for ECG arrhythmia detection that introduces dynamic prototype
rehearsal memory. DREAM-CL selects representative prototypes by clustering data
based on learning behavior during each training session. Within each cluster,
we apply a smooth sorting operation that ranks samples by training difficulty,
compressing extreme values and removing outliers. The more challenging samples
are then chosen as prototypes for the rehearsal memory, ensuring effective
knowledge retention across sessions. We evaluate our method on
time-incremental, class-incremental, and lead-incremental scenarios using two
widely used ECG arrhythmia datasets, Chapman and PTB-XL. The results
demonstrate that DREAM-CL outperforms the state-of-the-art in CL for ECG
arrhythmia detection. Detailed ablation and sensitivity studies are performed
to validate the different design choices of our method.
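A rough stand-in for the prototype-selection procedure, using per-sample losses as the difficulty signal: quantile binning replaces the paper's behavior-based clustering, and a quantile clip stands in for its smooth sorting operation:

```python
import numpy as np

def select_prototypes(losses, n_bins=3, per_bin=2, clip_q=0.95):
    """Illustrative sketch of difficulty-based prototype selection.

    Groups samples by training difficulty (per-sample loss), compresses
    extreme values so outliers do not dominate the ranking, and keeps
    the most challenging samples per group as rehearsal prototypes.
    Returns indices into `losses`.
    """
    losses = np.asarray(losses, dtype=float)
    # Compress extreme values (stand-in for the paper's smooth sorting).
    clipped = np.minimum(losses, np.quantile(losses, clip_q))
    order = np.argsort(clipped)
    bins = np.array_split(order, n_bins)     # stand-in for clustering
    protos = []
    for b in bins:
        # Within each group, keep the harder samples as prototypes.
        protos.extend(b[np.argsort(clipped[b])][-per_bin:].tolist())
    return protos
```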
|
2501.07556
|
MatchAnything: Universal Cross-Modality Image Matching with Large-Scale
Pre-Training
|
cs.CV
|
Image matching, which aims to identify corresponding pixel locations between
images, is crucial in a wide range of scientific disciplines, aiding in image
registration, fusion, and analysis. In recent years, deep learning-based image
matching algorithms have dramatically outperformed humans in rapidly and
accurately finding large amounts of correspondences. However, when dealing with
images captured under different imaging modalities that result in significant
appearance changes, the performance of these algorithms often deteriorates due
to the scarcity of annotated cross-modal training data. This limitation hinders
applications in various fields that rely on multiple image modalities to obtain
complementary information. To address this challenge, we propose a large-scale
pre-training framework that utilizes synthetic cross-modal training signals,
incorporating diverse data from various sources, to train models to recognize
and match fundamental structures across images. This capability is transferable
to real-world, unseen cross-modality image matching tasks. Our key finding is
that the matching model trained with our framework achieves remarkable
generalizability across more than eight unseen cross-modality registration
tasks using the same network weights, substantially outperforming existing
methods, whether designed for generalization or tailored for specific tasks.
This advancement significantly enhances the applicability of image matching
technologies across various scientific disciplines and paves the way for new
applications in multi-modality human and artificial intelligence analysis and
beyond.
|
2501.07561
|
Design and Analysis of a Concatenated Code for Intersymbol Interference
Wiretap Channels
|
cs.IT math.IT
|
We propose a two-stage concatenated coding scheme for reliable and
information-theoretically secure communication over intersymbol interference
wiretap channels. Motivated by the theoretical coding strategies that achieve
the secrecy capacity, our scheme integrates low-density parity-check (LDPC)
codes in the outer stage, forming a nested structure of wiretap codes, with
trellis codes in the inner stage to improve achievable secure rates. The
trellis code is specifically designed to transform the uniformly distributed
codewords produced by the LDPC code stage into a Markov process, achieving
tight lower bounds on the secrecy capacity. We further estimate the information
leakage rate of the proposed coding scheme using an upper bound. To meet the
weak secrecy criterion, we optimize degree distributions of the irregular LDPC
codes at the outer stage, essentially driving the estimated upper bound on the
information leakage rate to zero.
|
2501.07563
|
Training-Free Motion-Guided Video Generation with Enhanced Temporal
Consistency Using Motion Consistency Loss
|
cs.CV
|
In this paper, we address the challenge of generating temporally consistent
videos with motion guidance. While many existing methods depend on additional
control modules or inference-time fine-tuning, recent studies suggest that
effective motion guidance is achievable without altering the model architecture
or requiring extra training. Such approaches offer promising compatibility with
various video generation foundation models. However, existing training-free
methods often struggle to maintain consistent temporal coherence across frames
or to follow guided motion accurately. In this work, we propose a simple yet
effective solution that combines an initial-noise-based approach with a novel
motion consistency loss, the latter being our key innovation. Specifically, we
capture the inter-frame feature correlation patterns of intermediate features
from a video diffusion model to represent the motion pattern of the reference
video. We then design a motion consistency loss to maintain similar feature
correlation patterns in the generated video, using the gradient of this loss in
the latent space to guide the generation process for precise motion control.
This approach improves temporal consistency across various motion control tasks
while preserving the benefits of a training-free setup. Extensive experiments
show that our method sets a new standard for efficient, temporally coherent
video generation.
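One plausible reading of the motion consistency loss, sketched with cosine-similarity Gram matrices over per-frame features (the exact feature choice and loss form in the paper may differ):

```python
import numpy as np

def frame_correlation(features):
    """Inter-frame feature correlation for features of shape (T, C):
    entry (i, j) is the cosine similarity between frames i and j."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return f @ f.T

def motion_consistency_loss(ref_feats, gen_feats):
    """Penalize deviation of the generated video's inter-frame
    correlation pattern from the reference video's (squared error is
    an assumption here). The gradient of this scalar with respect to
    the latent would steer generation toward the reference motion."""
    diff = frame_correlation(ref_feats) - frame_correlation(gen_feats)
    return float(np.mean(diff ** 2))
```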
|
2501.07564
|
E2ESlack: An End-to-End Graph-Based Framework for Pre-Routing Slack
Prediction
|
cs.LG
|
Pre-routing slack prediction remains a critical area of research in
Electronic Design Automation (EDA). Despite numerous machine learning-based
approaches targeting this task, there is still a lack of a truly end-to-end
framework that engineers can use to obtain TNS/WNS metrics from raw circuit
data at the placement stage. Existing works have demonstrated effectiveness in
Arrival Time (AT) prediction but lack a mechanism for Required Arrival Time
(RAT) prediction, which is essential for slack prediction and obtaining TNS/WNS
metrics. In this work, we propose E2ESlack, an end-to-end graph-based framework
for pre-routing slack prediction. The framework includes a TimingParser that
supports DEF, SDF and LIB files for feature extraction and graph construction,
an arrival time prediction model and a fast RAT estimation module. To the best
of our knowledge, this is the first work capable of predicting path-level
slacks at the pre-routing stage. We perform extensive experiments and
demonstrate that our proposed RAT estimation method outperforms the SOTA
ML-based prediction method as well as the pre-routing STA tool. Additionally, the
proposed E2ESlack framework achieves TNS/WNS values comparable to post-routing
STA results while saving up to 23x runtime.
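The timing quantities involved are standard: slack is the required arrival time minus the arrival time at each endpoint, WNS is the worst (minimum) slack, and TNS sums the negative slacks:

```python
def slack_metrics(arrival, required):
    """Per-endpoint slack plus the standard timing summaries:
    slack = RAT - AT, WNS = worst (minimum) slack over endpoints,
    TNS = sum of negative slacks."""
    slacks = [r - a for a, r in zip(arrival, required)]
    wns = min(slacks)
    tns = sum(s for s in slacks if s < 0)
    return slacks, wns, tns
```

This is why a RAT estimation module is the missing piece: with predicted ATs alone, none of these slack-based metrics can be computed.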
|
2501.07566
|
SafeSwarm: Decentralized Safe RL for the Swarm of Drones Landing in
Dense Crowds
|
cs.RO
|
This paper introduces a safe swarm of drones capable of performing landings
in crowded environments robustly by relying on Reinforcement Learning
techniques combined with Safe Learning. The developed system allows us to teach
the swarm of drones with different dynamics to land on moving landing pads in
an environment while avoiding collisions with obstacles and between agents.
The safe barrier net algorithm was developed and evaluated using a swarm of
Crazyflie 2.1 micro quadrotors, which were tested indoors with the Vicon motion
capture system to ensure precise localization and control.
Experimental results show that our system achieves a landing accuracy of 2.25
cm with a mean time of 17 s and collision-free landings, underscoring its
effectiveness and robustness in real-world scenarios. This work offers a
promising foundation for applications in environments where safety and
precision are paramount.
|
2501.07570
|
Digital Twin for Smart Societies: A Catalyst for Inclusive and
Accessible Healthcare
|
cs.CY cs.SY eess.SY
|
With rapid digitization and digitalization, drawing a fine line between the
digital and the physical world has become nearly impossible. It has become
more essential than ever to integrate all spheres of life into a single Digital
Thread to address a pressing challenge of modern society: accessible and
inclusive healthcare in terms of equality and equity. Techno-social
advancements and mutual acceptance have enabled the infusion of digital models
to simulate social settings with minimum resource utilization to make effective
decisions. However, a significant gap exists in feeding back the models with
appropriate real-time changes. In other words, active behavioral modeling of
modern society is lacking, influencing community healthcare as a whole. By
creating virtual replicas of (physical) behavioral systems, digital twins can
enable real-time monitoring, simulation, and optimization of urban dynamics.
This paper explores the potential of digital twins to promote inclusive
healthcare for evolving smart cities. We argue that digital twins can be used
to: Identify and address disparities in access to healthcare services,
Facilitate community participation, Simulate the impact of urban policies and
interventions on different groups of people, and Aid policy-making bodies for
better access to healthcare. This paper proposes several ways to use digital
twins to stitch the actual and virtual societies. Several discussed concepts
within this framework envision an active, integrated, and synchronized
community aware of data privacy and security. The proposal also provides
high-level step-wise transitions that will enable this transformation.
|
2501.07572
|
WebWalker: Benchmarking LLMs in Web Traversal
|
cs.CL cs.AI
|
Retrieval-augmented generation (RAG) demonstrates remarkable performance
across tasks in open-domain question-answering. However, traditional search
engines may retrieve shallow content, limiting the ability of LLMs to handle
complex, multi-layered information. To address this, we introduce WebWalkerQA, a
benchmark designed to assess the ability of LLMs to perform web traversal. It
evaluates the capacity of LLMs to traverse a website's subpages to extract
high-quality data systematically. We propose WebWalker, a multi-agent
framework that mimics human-like web navigation through an explore-critic
paradigm. Extensive experimental results show that WebWalkerQA is challenging
and demonstrates the effectiveness of RAG combined with WebWalker, through the
horizontal and vertical integration in real-world scenarios.
|
2501.07574
|
UnCommon Objects in 3D
|
cs.CV cs.AI cs.GR
|
We introduce Uncommon Objects in 3D (uCO3D), a new object-centric dataset for
3D deep learning and 3D generative AI. uCO3D is the largest publicly-available
collection of high-resolution videos of objects with 3D annotations that
ensures full-360$^{\circ}$ coverage. uCO3D is significantly more diverse than
MVImgNet and CO3Dv2, covering more than 1,000 object categories. It is also of
higher quality, due to extensive quality checks of both the collected videos
and the 3D annotations. Similar to analogous datasets, uCO3D contains
annotations for 3D camera poses, depth maps and sparse point clouds. In
addition, each object is equipped with a caption and a 3D Gaussian Splat
reconstruction. We train several large 3D models on MVImgNet, CO3Dv2, and uCO3D
and obtain superior results using the latter, showing that uCO3D is better for
learning applications.
|
2501.07575
|
Dataset Distillation via Committee Voting
|
cs.CV cs.AI
|
Dataset distillation aims to synthesize a smaller, representative dataset
that preserves the essential properties of the original data, enabling
efficient model training with reduced computational resources. Prior work has
primarily focused on improving the alignment or matching process between
original and synthetic data, or on enhancing the efficiency of distilling large
datasets. In this work, we introduce ${\bf C}$ommittee ${\bf V}$oting for ${\bf
D}$ataset ${\bf D}$istillation (CV-DD), a novel and orthogonal approach that
leverages the collective wisdom of multiple models or experts to create
high-quality distilled datasets. We start by showing how to establish a strong
baseline that already achieves state-of-the-art accuracy through leveraging
recent advancements and thoughtful adjustments in model design and optimization
processes. By integrating distributions and predictions from a committee of
models while generating high-quality soft labels, our method captures a wider
spectrum of data features, reduces model-specific biases and the adverse
effects of distribution shifts, leading to significant improvements in
generalization. This voting-based strategy not only promotes diversity and
robustness within the distilled dataset but also significantly reduces
overfitting, resulting in improved performance on post-eval tasks. Extensive
experiments across various datasets and IPCs (images per class) demonstrate
that Committee Voting leads to more reliable and adaptable distilled data
compared to single/multi-model distillation methods, demonstrating its
potential for efficient and accurate dataset distillation. Code is available
at: https://github.com/Jiacheng8/CV-DD.
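A minimal sketch of committee soft labels: average the members' softmax distributions per sample (uniform averaging is an assumption here; the paper may weight or filter committee members differently):

```python
import numpy as np

def committee_soft_labels(logits_per_model, temperature=1.0):
    """Fuse a committee's predictions into one soft label per sample.

    logits_per_model: array of shape (M, N, K) - M committee members,
    N samples, K classes. Returns (N, K) averaged softmax distributions.
    """
    z = np.asarray(logits_per_model) / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.mean(axis=0)
```

By drawing on several models, the fused labels smooth out any single member's biases, which is the intuition behind the generalization gains reported above.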
|
2501.07582
|
Spin-Weighted Spherical Harmonics for Polarized Light Transport
|
cs.GR cs.CV
|
The objective of polarization rendering is to simulate the interaction of
light with materials exhibiting polarization-dependent behavior. However,
integrating polarization into rendering is challenging and increases
computational costs significantly. The primary difficulty lies in efficiently
modeling and computing the complex reflection phenomena associated with
polarized light. Specifically, frequency-domain analysis, essential for
efficient environment lighting and storage of complex light interactions, is
lacking. To efficiently simulate and reproduce polarized light interactions
using frequency-domain techniques, we address the challenge of maintaining
continuity in polarized light transport represented by Stokes vectors within
angular domains. The conventional spherical harmonics method cannot effectively
handle continuity and rotation invariance for Stokes vectors. To overcome this,
we develop a new method called polarized spherical harmonics (PSH) based on the
spin-weighted spherical harmonics theory. Our method provides a
rotation-invariant representation of Stokes vector fields. Furthermore, we
introduce frequency domain formulations of polarized rendering equations and
spherical convolution based on PSH. We first define spherical convolution on
Stokes vector fields in the angular domain; in the frequency domain it reduces
to a nearly entry-wise product, enabling efficient computation of polarized
light transport. Our frequency-domain formulation, including spherical
convolution, led to the development of the first real-time polarization
rendering technique under polarized environmental illumination, named
precomputed polarized radiance transfer, using our polarized spherical
harmonics. Results demonstrate that our method can effectively and accurately
simulate and reproduce polarized light interactions in complex reflection
phenomena.
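For orientation, the spin-weight-2 character of linear polarization is conventionally written as follows (borrowing the notation used for CMB polarization; the paper's conventions may differ):

```latex
(Q \pm iU)(\hat{n}) = \sum_{\ell, m} a_{\pm 2,\ell m}\, {}_{\pm 2}Y_{\ell m}(\hat{n}),
\qquad
(Q \pm iU) \;\to\; e^{\mp 2 i \psi}\,(Q \pm iU)
\quad \text{under a rotation of the local frame by } \psi .
```

The $e^{\mp 2 i \psi}$ transformation law is exactly what ordinary (spin-0) spherical harmonics cannot represent continuously, motivating the spin-weighted basis.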
|
2501.07584
|
Open-source End-to-End Digital Beamforming System Modeling
|
eess.SP cs.SY eess.SY
|
Digital beamforming forms the foundation for massive MIMO in 6G wireless
communications. At their core, digital beamforming architectures provide key
benefits such as faster beam search, interference nulling via zero-force
beamforming, higher spectral capacity, and increased flexibility. However,
these benefits come at the cost of increased power consumption due to the large number of ADCs in
such systems. This paper introduces an open-source MATLAB-based behavioral
hardware model of a general digital beamforming system. More specifically, it
models an end-to-end uplink between an arbitrary number of user elements (UEs)
and an arbitrarily large base station (BS) with and without a strong
interferer. This paper also presents and validates an equation-based model for
the effects of interference on thermal and quantization noise. The behavioral
model presented in this paper aims to deepen understanding of such digital
beamforming systems to enable system designers to make optimizations. The
results presented in this paper primarily center on implementations with
low-resolution ADCs and, thus, focus on the effects of system parameters,
including interferer strength, on quantization noise.
|
2501.07585
|
Multi-task Domain Adaptation for Computation Offloading in
Edge-intelligence Networks
|
cs.LG cs.AI
|
In the field of multi-access edge computing (MEC), efficient computation
offloading is crucial for improving resource utilization and reducing latency
in dynamically changing environments. This paper introduces a new approach,
termed Multi-Task Domain Adaptation (MTDA), which aims to enhance the ability of
computational offloading models to generalize in the presence of domain shifts,
i.e., when new data in the target environment significantly differs from the
data in the source domain. The proposed MTDA model incorporates a
teacher-student architecture that allows continuous adaptation without
necessitating access to the source domain data during inference, thereby
maintaining privacy and reducing computational overhead. Utilizing a multi-task
learning framework that simultaneously manages offloading decisions and
resource allocation, the proposed MTDA approach outperforms benchmark methods
regarding mean squared error and accuracy, particularly in environments with
increasing numbers of users. It is observed by means of computer simulation
that the proposed MTDA model maintains high performance across various
scenarios, demonstrating its potential for practical deployment in emerging MEC
applications.
|
2501.07590
|
Ultrafast pulsed laser evaluation of Single Event Transients in
opto-couplers
|
physics.ins-det cs.SY eess.SY physics.optics physics.space-ph
|
We built a 1064 nm fiber-laser-based testing facility for emulating SETs in
different electronic components and ICs. Using this facility, we tested the
4N35 optocoupler and observed SETs for the first time.
|
2501.07593
|
A Multi-Layer CNN-GRUSKIP model based on transformer for spatial
temporal traffic flow prediction
|
cs.LG
|
Traffic flow prediction remains a cornerstone for intelligent transportation
systems (ITS), influencing both route optimization and environmental efforts.
While Recurrent Neural Networks (RNNs) and traditional Convolutional Neural
Networks (CNNs) offer some insights into the spatial-temporal dynamics of
traffic data, they are often limited when navigating sparse and extended
spatial-temporal patterns. In response, the CNN-GRUSKIP model emerges as a
pioneering approach. Notably, it integrates the GRU-SKIP mechanism, a hybrid
that combines the Gated Recurrent Unit's (GRU) capabilities for processing
sequences with the SKIP feature's ability to bypass and connect longer
temporal dependencies, making it especially potent for traffic flow
predictions with erratic and extended patterns. Another distinctive aspect is
its non-standard 6-layer CNN, meticulously designed for in-depth
spatiotemporal correlation extraction. The
model comprises (1) the specialized CNN feature extraction, (2) the GRU-SKIP
enhanced long-temporal module adept at capturing extended patterns, (3) a
transformer module employing encoder-decoder and multi-attention mechanisms to
hone prediction accuracy and trim model complexity, and (4) a bespoke
prediction module. When tested against real-world datasets from the Caltrans
Performance Measurement System (PeMS) in California, specifically PeMS
districts 4 and 8, the CNN-GRUSKIP consistently outperformed established
models such as ARIMA,
Graph Wave Net, HA, LSTM, STGCN, and APTN. With its potent predictive prowess
and adaptive architecture, the CNN-GRUSKIP model stands to redefine ITS
applications, especially where nuanced traffic dynamics are in play.
|
2501.07595
|
LUCAS: A Low-Power Ultra-Low Jitter Compact ASIC for SiPM Targeting
ToF-CT
|
physics.ins-det cs.SY eess.SY
|
We present LUCAS (Low power Ultra-low jitter Compact ASIC for SiPM), an
analog front-end for Silicon Photomultipliers (SiPM) targeting fast timing
detectors in Time-of-Flight Computed Tomography (ToF-CT). LUCAS features a very
low input impedance preamplifier followed by a voltage comparator. It is
designed in TSMC 65 nm low-power CMOS technology with a power supply of 1.2 V.
Our first 8-channel prototype has been sent to fabrication and will be received
in August 2023. Post-layout simulations predict less than 40 ps FWHM SPTR
jitter and an approximate power consumption of 3.2 mW per channel. The front
end is suitable for applications with rigorous jitter requirements and high
event rates, thanks to its 3.9 GHz unity-gain bandwidth. The front-end compact
form factor will facilitate its incorporation into systems demanding high
channel densities.
|
2501.07596
|
Optimize Incompatible Parameters through Compatibility-aware Knowledge
Integration
|
cs.LG cs.CL cs.IR
|
Deep neural networks have become foundational to advancements in multiple
domains, including recommendation systems, natural language processing, and so
on. Despite their successes, these models often contain incompatible parameters
that can be underutilized or detrimental to model performance, particularly
when faced with specific, varying data distributions. Existing research excels
in removing such parameters or merging the outputs of multiple different
pretrained models. However, the former focuses on efficiency rather than
performance, while the latter requires several times more computing and storage
resources to support inference. In this paper, we set the goal to explicitly
improve these incompatible parameters by leveraging the complementary strengths
of different models, thereby directly enhancing the models without any
additional parameters. Specifically, we propose Compatibility-aware Knowledge
Integration (CKI), which consists of Parameter Compatibility Assessment and
Parameter Splicing, which are used to evaluate the knowledge content of
multiple models and integrate the knowledge into one model, respectively. The
integrated model can be used directly for inference or for further fine-tuning.
We conduct extensive experiments on various datasets for recommendation and
language tasks, and the results show that Compatibility-aware Knowledge
Integration can effectively optimize incompatible parameters under multiple
tasks and settings to break through the training limit of the original model
without increasing the inference cost.
|
2501.07597
|
Learning-based Detection of GPS Spoofing Attack for Quadrotors
|
cs.RO cs.CR cs.LG
|
Safety-critical cyber-physical systems (CPS), such as quadrotor UAVs, are
particularly prone to cyber attacks, which can result in significant
consequences if not detected promptly and accurately. During outdoor
operations, the nonlinear dynamics of UAV systems, combined with non-Gaussian
noise, pose challenges to the effectiveness of conventional statistical and
machine learning methods. To overcome these limitations, we present QUADFormer,
an advanced attack detection framework for quadrotor UAVs leveraging a
transformer-based architecture. This framework features a residue generator
that produces sequences sensitive to anomalies, which are then analyzed by the
transformer to capture statistical patterns for detection and classification.
Furthermore, an alert mechanism ensures UAVs can operate safely even when under
attack. Extensive simulations and experimental evaluations highlight that
QUADFormer outperforms existing state-of-the-art techniques in detection
accuracy.
|
2501.07598
|
Automated Heterogeneous Network learning with Non-Recursive Message
Passing
|
cs.LG
|
Heterogeneous information networks (HINs) can be used to model various
real-world systems. As HINs consist of multiple types of nodes, edges, and node
features, it is nontrivial to directly apply graph neural network (GNN)
techniques in heterogeneous cases. There are two remaining major challenges.
First, homogeneous message passing in a recursive manner neglects the distinct
types of nodes and edges in different hops, leading to unnecessary information
mixing. This often results in the incorporation of ``noise'' from uncorrelated
intermediate neighbors, thereby degrading performance. Second, feature learning
should be handled differently for different types, which is challenging
especially when the type sizes are large. To bridge this gap, we develop a
novel framework - AutoGNR, to directly utilize and automatically extract
effective heterogeneous information. Instead of recursive homogeneous message
passing, we introduce a non-recursive message passing mechanism for GNN to
mitigate noise from uncorrelated node types in HINs. Furthermore, under the
non-recursive framework, we manage to efficiently perform neural architecture
search for an optimal GNN structure in a differentiable way, which can
automatically define the heterogeneous paths for aggregation. Our tailored
search space encompasses more effective candidates while maintaining a
tractable size. Experiments show that AutoGNR consistently outperforms
state-of-the-art methods on both normal and large scale real-world HIN
datasets.
|
2501.07599
|
Analyzing Spatio-Temporal Dynamics of Dissolved Oxygen for the River
Thames using Superstatistical Methods and Machine Learning
|
cs.LG stat.ML
|
By employing superstatistical methods and machine learning, we analyze time
series data of water quality indicators for the River Thames, with a specific
focus on the dynamics of dissolved oxygen. After detrending, the probability
density functions of dissolved oxygen fluctuations exhibit heavy tails that are
effectively modeled using $q$-Gaussian distributions. Our findings indicate
that the multiplicative Empirical Mode Decomposition method stands out as the
most effective detrending technique, yielding the highest log-likelihood in
nearly all fittings. We also observe that the optimally fitted width parameter
of the $q$-Gaussian shows a negative correlation with the distance to the sea,
highlighting the influence of geographical factors on water quality dynamics.
In the context of same-time prediction of dissolved oxygen, regression analysis
incorporating various water quality indicators and temporal features identifies
the Light Gradient Boosting Machine as the best model. SHapley Additive
exPlanations reveal that temperature, pH, and time of year play crucial roles
in the predictions. Furthermore, we use the Transformer to forecast dissolved
oxygen concentrations. For long-term forecasting, the Informer model
consistently delivers superior performance, achieving the lowest MAE and SMAPE
with the 192 historical time steps that we used. This performance is attributed
to the Informer's ProbSparse self-attention mechanism, which allows it to
capture long-range dependencies in time-series data more effectively than other
machine learning models. It effectively recognizes the half-life cycle of
dissolved oxygen, with particular attention to key intervals. Our findings
provide valuable insights for policymakers involved in ecological health
assessments, aiding in accurate predictions of river water quality and the
maintenance of healthy aquatic ecosystems.
|
2501.07600
|
Impact of Data Breadth and Depth on Performance of Siamese Neural
Network Model: Experiments with Three Keystroke Dynamic Datasets
|
cs.LG cs.CV stat.ML
|
Deep learning models, such as the Siamese Neural Networks (SNN), have shown
great potential in capturing the intricate patterns in behavioral data.
However, the impacts of dataset breadth (i.e., the number of subjects) and
depth (e.g., the amount of training samples per subject) on the performance of
these models is often informally assumed, and remains under-explored. To this
end, we have conducted extensive experiments using the concepts of "feature
space" and "density" to guide and gain deeper understanding on the impact of
dataset breadth and depth on three publicly available keystroke datasets
(Aalto, CMU and Clarkson II). Through varying the number of training subjects,
number of samples per subject, amount of data in each sample, and number of
triplets used in training, we found that when feasible, increasing dataset
breadth enables the training of a well-trained model that effectively captures
more inter-subject variability. In contrast, we find that the extent of
depth's impact depends on the nature of the dataset. Free-text datasets are
influenced by all depth-wise factors: inadequate samples per subject, sequence
length, training triplets, and gallery sample size may each lead to an
under-trained model. Fixed-text datasets are less affected by these
factors, and as such make it easier to create a well-trained model. These
findings shed light on the importance of dataset breadth and depth in training
deep learning models for behavioral biometrics and provide valuable insights
for designing more effective authentication systems.
|
2501.07601
|
Real-Time Decision-Making for Digital Twin in Additive Manufacturing
with Model Predictive Control using Time-Series Deep Neural Networks
|
cs.LG cs.AI cs.SY eess.SY
|
Digital Twin (a virtual replica of a physical system enabling real-time
monitoring, model updating, prediction, and decision-making), combined with
recent advances in machine learning (ML), offers new opportunities for
proactive control strategies in autonomous manufacturing. However, achieving
real-time decision-making with Digital Twins requires efficient optimization
driven by accurate predictions of highly nonlinear manufacturing systems. This
paper presents a simultaneous multi-step Model Predictive Control (MPC)
framework for real-time decision-making, using a multi-variate deep neural
network (DNN), named Time-Series Dense Encoder (TiDE), as the surrogate model.
Unlike the models in conventional MPC, which only provide one-step-ahead
prediction, TiDE is capable of predicting future states within the prediction
horizon in one shot (multi-step), significantly accelerating MPC. Using
Directed Energy Deposition additive manufacturing as a case study, we
demonstrate the effectiveness of the proposed MPC in achieving melt pool
temperature tracking to ensure part quality, while reducing porosity defects by
regulating laser power to maintain melt pool depth constraints. In this work,
we first show that TiDE is capable of accurately predicting melt pool
temperature and depth. Second, we demonstrate that the proposed MPC achieves
precise temperature tracking while satisfying melt pool depth constraints
within a targeted dilution range (10%-30%), reducing potential porosity
defects. Compared to the PID controller, MPC results in smoother and less
fluctuating laser power profiles with competitive or superior melt pool
temperature control performance. This demonstrates MPC's proactive control
capabilities, leveraging time-series prediction and real-time optimization,
positioning it as a powerful tool for future Digital Twin applications and
real-time process optimization in manufacturing.
|
2501.07602
|
An Explainable Pipeline for Machine Learning with Functional Data
|
cs.LG stat.ML
|
Machine learning (ML) models have shown success in applications with an
objective of prediction, but the algorithmic complexity of some models makes
them difficult to interpret. Methods have been proposed to provide insight into
these "black-box" models, but there is little research that focuses on
supervised ML when the model inputs are functional data. In this work, we
consider two applications from high-consequence spaces with objectives of
making predictions using functional data inputs. One application aims to
classify material types to identify explosive materials given hyperspectral
computed tomography scans of the materials. The other application considers the
forensics science task of connecting an inkjet printed document to the source
printer using color signatures extracted by Raman spectroscopy. An instinctive
route to consider for analyzing these data is a data driven ML model for
classification, but due to the high consequence nature of the applications, we
argue it is important to appropriately account for the nature of the data in
the analysis to not obscure or misrepresent patterns. As such, we propose the
Variable importance Explainable Elastic Shape Analysis (VEESA) pipeline for
training ML models with functional data that (1) accounts for the vertical and
horizontal variability in the functional data and (2) provides an explanation
in the original data space of how the model uses variability in the functional
data for prediction. The pipeline makes use of elastic functional principal
components analysis (efPCA) to generate uncorrelated model inputs and
permutation feature importance (PFI) to identify the principal components
important for prediction. The variability captured by the important principal
components is visualized in the original data space. We ultimately discuss ideas
for natural extensions of the VEESA pipeline and challenges for future
research.
|
2501.07611
|
Kolmogorov-Arnold Networks and Evolutionary Game Theory for More
Personalized Cancer Treatment
|
cs.LG cs.NE
|
Personalized cancer treatment is revolutionizing oncology by leveraging
precision medicine and advanced computational techniques to tailor therapies to
individual patients. Despite its transformative potential, challenges such as
limited generalizability, interpretability, and reproducibility of predictive
models hinder its integration into clinical practice. Current methodologies
often rely on black-box machine learning models, which, while accurate, lack
the transparency needed for clinician trust and real-world application. This
paper proposes the development of an innovative framework that bridges
Kolmogorov-Arnold Networks (KANs) and Evolutionary Game Theory (EGT) to address
these limitations. Inspired by the Kolmogorov-Arnold representation theorem,
KANs offer interpretable, edge-based neural architectures capable of modeling
complex biological systems with unprecedented adaptability. Their integration
into the EGT framework enables dynamic modeling of cancer progression and
treatment responses. By combining KAN's computational precision with EGT's
mechanistic insights, this hybrid approach promises to enhance predictive
accuracy, scalability, and clinical usability.
|
2501.07616
|
The Ingenuity Mars Helicopter Specified and Analyzed with the Real-time
Mode-aware Dataflow Model
|
eess.SY cs.SY
|
Ingenuity is an autonomous Cyber-Physical System (CPS) that has successfully
completed more than 70 flights over Mars between 2021 and 2024. Ensuring the
safety of its mission is paramount, as any failure could result in catastrophic
economic damage and significant financial losses. Dataflow Models of
Computation and Communication (DF MoCCs) serve as a formal framework for
specifying and analyzing the timing behavior of such CPSs. In particular, the
Real-time Mode-aware Dataflow (RMDF) model is highly suitable for specifying
and analyzing real-time, mode-dependent CPSs like Ingenuity. This paper
showcases the application of RMDF to the specification
and analysis of Ingenuity. We propose a dataflow specification of Ingenuity,
analyze its timing behavior, and provide a feasibility test. Finally, we
propose a plausible explanation of the timing anomaly that occurred during the
sixth flight of Ingenuity.
|
2501.07639
|
SafePowerGraph-LLM: Novel Power Grid Graph Embedding and Optimization
with Large Language Models
|
cs.AI
|
Efficiently solving Optimal Power Flow (OPF) problems in power systems is
crucial for operational planning and grid management. There is a growing need
for scalable algorithms capable of handling the increasing variability,
constraints, and uncertainties in modern power networks while providing
accurate and fast solutions. To address this, machine learning techniques,
particularly Graph Neural Networks (GNNs) have emerged as promising approaches.
This letter introduces SafePowerGraph-LLM, the first framework explicitly
designed for solving OPF problems using Large Language Models (LLM)s. The
proposed approach combines graph and tabular representations of power grids to
effectively query LLMs, capturing the complex relationships and constraints in
power systems. A new implementation of in-context learning and fine-tuning
protocols for LLMs is introduced, tailored specifically for the OPF problem.
SafePowerGraph-LLM demonstrates reliable performance using off-the-shelf LLMs.
Our study reveals the impact of LLM architecture, size, and fine-tuning and
demonstrates our framework's ability to handle realistic grid components and
constraints.
|
2501.07641
|
GPT as a Monte Carlo Language Tree: A Probabilistic Perspective
|
cs.CL
|
Large Language Models (LLMs), such as GPT, are considered to learn the latent
distributions within large-scale web-crawl datasets and accomplish natural
language processing (NLP) tasks by predicting the next token. However, this
mechanism of latent distribution modeling lacks quantitative understanding and
analysis. In this paper, we propose a novel perspective that any language
dataset can be represented by a Monte Carlo Language Tree (abbreviated as
``Data-Tree''), where each node denotes a token, each edge denotes a token
transition probability, and each sequence has a unique path. Any GPT-like
language model can also be flattened into another Monte Carlo Language Tree
(abbreviated as ``GPT-Tree''). Our experiments show that different GPT models
trained on the same dataset exhibit significant structural similarity in
GPT-Tree visualization, and larger models converge more closely to the
Data-Tree. More than 87\% of GPT output tokens can be recalled by the Data-Tree. These
findings may confirm that the reasoning process of LLMs is more likely to be
probabilistic pattern-matching rather than formal reasoning, as each model
inference seems to find a context pattern with maximum probability from the
Data-Tree. Furthermore, we provide deeper insights into issues such as
hallucination, Chain-of-Thought (CoT) reasoning, and token bias in LLMs.
|
2501.07643
|
A Step Toward Interpretability: Smearing the Likelihood
|
hep-ph cs.LG hep-ex stat.ML
|
The problem of interpretability of machine learning architecture in particle
physics has no agreed-upon definition, much less any proposed solution. We
present a first modest step toward these goals by proposing a definition and
corresponding practical method for isolation and identification of relevant
physical energy scales exploited by the machine. This is accomplished by
smearing or averaging over all input events that lie within a prescribed metric
energy distance of one another and correspondingly renders any quantity
measured on a finite, discrete dataset continuous over the dataspace. Within
this approach, we are able to explicitly demonstrate that (approximate) scaling
laws are a consequence of extreme value theory applied to analysis of the
distribution of the irreducible minimal distance over which a machine must
extrapolate given a finite dataset. As an example, we study quark versus gluon
jet identification, construct the smeared likelihood, and show that
discrimination power steadily increases as resolution decreases, indicating
that the true likelihood for the problem is sensitive to emissions at all
scales.
|
2501.07647
|
BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video
Representations
|
cs.CV cs.AI
|
Existing video generation models struggle to follow complex text prompts and
synthesize multiple objects, raising the need for additional grounding input
for improved controllability. In this work, we propose to decompose videos into
visual primitives - blob video representation, a general representation for
controllable video generation. Based on blob conditions, we develop a
blob-grounded video diffusion model named BlobGEN-Vid that allows users to
control object motions and fine-grained object appearance. In particular, we
introduce a masked 3D attention module that effectively improves regional
consistency across frames. In addition, we introduce a learnable module to
interpolate text embeddings so that users can control semantics in specific
frames and obtain smooth object transitions. We show that our framework is
model-agnostic and build BlobGEN-Vid based on both U-Net and DiT-based video
diffusion models. Extensive experimental results show that BlobGEN-Vid achieves
superior zero-shot video generation ability and state-of-the-art layout
controllability on multiple benchmarks. When combined with an LLM for layout
planning, our framework even outperforms proprietary text-to-video generators
in terms of compositional accuracy.
|
2501.07652
|
Finite Sample Identification of Partially Observed Bilinear Dynamical
Systems
|
cs.LG cs.SY eess.SY math.OC stat.ML
|
We consider the problem of learning a realization of a partially observed
bilinear dynamical system (BLDS) from noisy input-output data. Given a single
trajectory of input-output samples, we provide a finite time analysis for
learning the system's Markov-like parameters, from which a balanced realization
of the bilinear system can be obtained. Our bilinear system identification
algorithm learns the system's Markov-like parameters by regressing the outputs
to highly correlated, nonlinear, and heavy-tailed covariates. Moreover, the
stability of BLDS depends on the sequence of inputs used to excite the system.
These properties, unique to partially observed bilinear dynamical systems, pose
significant challenges to the analysis of our algorithm for learning the
unknown dynamics. We address these challenges and provide high probability
error bounds on our identification algorithm under a uniform stability
assumption. Our analysis provides insights into system theoretic quantities
that affect learning accuracy and sample complexity. Lastly, we perform
numerical experiments with synthetic data to reinforce these insights.
|
2501.07653
|
Large Language Models for Interpretable Mental Health Diagnosis
|
cs.AI cs.LO
|
We propose a clinical decision support system (CDSS) for mental health
diagnosis that combines the strengths of large language models (LLMs) and
constraint logic programming (CLP). Having a CDSS is important because of the
high complexity of diagnostic manuals used by mental health professionals and
the danger of diagnostic errors. Our CDSS is a software tool that uses an LLM
to translate diagnostic manuals to a logic program and solves the program using
an off-the-shelf CLP engine to query a patient's diagnosis based on the encoded
rules and provided data. By giving domain experts the opportunity to inspect
the LLM-generated logic program, and making modifications when needed, our CDSS
ensures that the diagnosis is not only accurate but also interpretable. We
experimentally compare it with two baseline approaches of using LLMs:
diagnosing patients using the LLM-only approach, and using the LLM-generated
logic program but without expert inspection. The results show that, while LLMs
are extremely useful in generating candidate logic programs, these programs
still require expert inspection and modification to guarantee faithfulness to
the official diagnostic manuals. Additionally, ethical concerns arise from the
direct use of patient data in LLMs, underscoring the need for a safer hybrid
approach like our proposed method.
|
2501.07663
|
Enhancing Talent Employment Insights Through Feature Extraction with LLM
Finetuning
|
cs.CL
|
This paper explores the application of large language models (LLMs) to
extract nuanced and complex job features from unstructured job postings. Using
a dataset of 1.2 million job postings provided by AdeptID, we developed a
robust pipeline to identify and classify variables such as remote work
availability, remuneration structures, educational requirements, and work
experience preferences. Our methodology combines semantic chunking,
retrieval-augmented generation (RAG), and fine-tuning DistilBERT models to
overcome the limitations of traditional parsing tools. By leveraging these
techniques, we achieved significant improvements in identifying variables often
mislabeled or overlooked, such as non-salary-based compensation and inferred
remote work categories. We present a comprehensive evaluation of our fine-tuned
models and analyze their strengths, limitations, and potential for scaling.
This work highlights the promise of LLMs in labor market analytics, providing a
foundation for more accurate and actionable insights into job data.
|
2501.07670
|
A Survey of Early Exit Deep Neural Networks in NLP
|
cs.LG cs.CL
|
Deep Neural Networks (DNNs) have grown increasingly large in size to achieve
state-of-the-art performance across a wide range of tasks. However, their high
computational requirements make them less suitable for resource-constrained
applications. Also, real-world datasets often consist of a mixture of easy and
complex samples, necessitating adaptive inference mechanisms that account for
sample difficulty. Early exit strategies offer a promising solution by enabling
adaptive inference, where simpler samples are classified using the initial
layers of the DNN, thereby accelerating the overall inference process. By
attaching classifiers at different layers, early exit methods not only reduce
inference latency but also improve model robustness against adversarial
attacks. This paper presents a comprehensive survey of early exit methods and
their applications in NLP.
|
2501.07674
|
CDS: Data Synthesis Method Guided by Cognitive Diagnosis Theory
|
cs.AI
|
Large Language Models (LLMs) have demonstrated outstanding capabilities
across various domains, but the increasing complexity of new challenges demands
enhanced performance and adaptability. Traditional benchmarks, although
comprehensive, often lack the granularity needed for detailed capability
analysis. This study introduces the Cognitive Diagnostic Synthesis (CDS)
method, which employs Cognitive Diagnosis Theory (CDT) for precise evaluation
and targeted enhancement of LLMs. By decomposing complex tasks into discrete
knowledge points, CDS accurately identifies and synthesizes data targeting
model weaknesses, thereby enhancing the model's performance. This framework
proposes a comprehensive pipeline driven by knowledge point evaluation,
synthesis, data augmentation, and filtering, which significantly improves the
model's mathematical and coding capabilities, achieving up to an 11.12%
improvement in optimal scenarios.
|
2501.07679
|
Constructing Set-Compositional and Negated Representations for
First-Stage Ranking
|
cs.IR
|
Set compositional and negated queries are crucial for expressing complex
information needs and enable the discovery of niche items like Books about
non-European monarchs. Despite the recent advances in LLMs, first-stage ranking
remains challenging due to the requirement of encoding documents and queries
independently from each other. This limitation calls for constructing
compositional query representations that encapsulate logical operations or
negations, and can be used to match relevant documents effectively. In the
first part of this work, we explore constructing such representations in a
zero-shot setting using vector operations between lexically grounded Learned
Sparse Retrieval (LSR) representations. Specifically, we introduce Disentangled
Negation that penalizes only the negated parts of a query, and a Combined
Pseudo-Term approach that enhances LSRs' ability to handle intersections. We
find that our zero-shot approach is competitive and often outperforms
retrievers fine-tuned on compositional data, highlighting certain limitations
of LSR and Dense Retrievers. Finally, we address some of these limitations and
improve LSRs' representation power for negation by allowing them to attribute
negative term scores and effectively penalize documents containing the negated
terms.
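The vector-operation idea can be illustrated with a hedged sketch over sparse, lexically grounded term-weight representations; all term weights, names, and the penalty factor below are invented for illustration and are not the paper's formulation:

```python
# Hedged sketch of negation via vector operations on sparse lexical
# (LSR-style) representations. All weights and the penalty factor are
# invented for illustration, not taken from the paper.

def score(query_weights, doc_weights):
    """Dot product between sparse term-weight vectors (dicts)."""
    return sum(w * doc_weights.get(t, 0.0) for t, w in query_weights.items())

def disentangled_negation(positive, negated, penalty=1.0):
    """Keep the positive sub-query's weights and attach negative weights
    only to the negated sub-query's terms, so only documents containing
    the negated terms are penalized."""
    combined = dict(positive)
    for t, w in negated.items():
        combined[t] = combined.get(t, 0.0) - penalty * w
    return combined

# Query: "books about monarchs" with "european" negated
positive = {"book": 1.2, "monarch": 1.5}
negated = {"european": 1.0}
q = disentangled_negation(positive, negated)

doc_eu = {"book": 1.0, "monarch": 1.0, "european": 1.0}
doc_other = {"book": 1.0, "monarch": 1.0, "asian": 0.8}
# the non-European document now outranks the European one:
# score(q, doc_other) = 2.7 > score(q, doc_eu) = 1.7
```

Because only the negated terms carry negative weights, documents that never mention them are scored exactly as before.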
|
2501.07681
|
Dataset Distillation as Pushforward Optimal Quantization
|
cs.LG cs.CV math.OC stat.ML
|
Dataset distillation aims to find a synthetic training set such that training
on the synthetic data achieves similar performance to training on real data,
with orders of magnitude less computational requirements. Existing methods can
be broadly categorized as either bi-level optimization problems that have
neural network training heuristics as the lower level problem, or disentangled
methods that bypass the bi-level optimization by matching distributions of
data. The latter method has the major advantages of speed and scalability in
terms of size of both training and distilled datasets. We demonstrate that when
equipped with an encoder-decoder structure, the empirically successful
disentangled methods can be reformulated as an optimal quantization problem,
where a finite set of points is found to approximate the underlying probability
measure by minimizing the expected projection distance. In particular, we link
existing disentangled dataset distillation methods to the classical optimal
quantization and Wasserstein barycenter problems, demonstrating consistency of
distilled datasets for diffusion-based generative priors. We propose a simple
extension of the state-of-the-art data distillation method D4M, achieving
better performance on the ImageNet-1K dataset with trivial additional
computation, and state-of-the-art performance in higher image-per-class
settings.
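The optimal-quantization view can be made concrete with Lloyd's algorithm, which finds a finite point set minimizing the expected squared distance from samples to their nearest quantization point; this is a generic sketch of the quantization problem, not the paper's D4M extension:

```python
import numpy as np

# Generic sketch of optimal quantization via Lloyd's algorithm: find k
# points approximating the underlying measure by minimizing the expected
# squared projection distance. Illustrative only; not the D4M pipeline.

def lloyd_quantize(samples, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center (the "projection")
        d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the centroid of its cell
        for j in range(k):
            pts = samples[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

# two well-separated Gaussian blobs standing in for a data distribution
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (200, 2)),
                  rng.normal(3.0, 0.1, (200, 2))])
centers = lloyd_quantize(data, k=2)   # one center converges to each blob
```

The fixed points of this iteration are exactly the stationary points of the expected projection distance, which is the sense in which a distilled set "quantizes" the data measure.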
|
2501.07688
|
C2PD: Continuity-Constrained Pixelwise Deformation for Guided Depth
Super-Resolution
|
cs.CV
|
Guided depth super-resolution (GDSR) has demonstrated impressive performance
across a wide range of domains, with numerous methods being proposed. However,
existing methods often treat depth maps as images, where shading values are
computed discretely, making them struggle to effectively restore the continuity
inherent in the depth map. In this paper, we propose a novel approach that
maximizes the utilization of spatial characteristics in depth, coupled with
human abstract perception of real-world substance, by transforming the GDSR
issue into deformation of a roughcast with ideal plasticity, which can be
deformed by force like a continuous object. Specifically, we first design a
cross-modal operation, Continuity-constrained Asymmetrical Pixelwise Operation
(CAPO), which can mimic the process of deforming an isovolumetrically flexible
object through external forces. Utilizing CAPO as the fundamental component, we
develop the Pixelwise Cross Gradient Deformation (PCGD), which is capable of
emulating operations on ideal plastic objects (without volume constraint).
Notably, our approach demonstrates state-of-the-art performance across four
widely adopted benchmarks for GDSR, with significant advantages in large-scale
tasks and generalizability.
|
2501.07689
|
Real-Time Outlier Connections Detection in Databases Network Traffic
|
cs.DB cs.SY eess.SY
|
The article describes a practical method for detecting outlier database
connections in real-time. Outlier connections are detected with a specified
level of confidence. The method is based on generalized security rules and a
simple but effective real-time machine learning mechanism. The described method
is non-intrusive to the database and does not depend on the type of database.
The method is used to proactively control access even before database
connection is established, minimize false positives, and maintain the required
response speed to detected database connection outliers. The capabilities of
the system are demonstrated with several examples of outliers in real-world
scenarios.
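The abstract does not disclose the detection rules or model, so the following is purely an assumed sketch of one way a real-time, confidence-parameterized outlier check on connection behavior could look (streaming per-profile statistics via Welford's update; all names and thresholds are hypothetical):

```python
from collections import defaultdict
import math

# Purely illustrative sketch (the article's actual rules and ML mechanism
# are not described in the abstract): a streaming detector flags outlier
# connections at a chosen confidence level using a running per-profile
# mean/variance (Welford's algorithm) of a connection feature, e.g.
# connections per minute.

class StreamingProfile:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def is_outlier(self, x, z=3.0):
        """z ~ 3 corresponds to ~99.7% confidence under normality."""
        if self.n < 10:              # too little history: allow and learn
            return False
        std = math.sqrt(self.m2 / (self.n - 1)) or 1e-9
        return abs(x - self.mean) / std > z

profiles = defaultdict(StreamingProfile)

def check_connection(user, host, rate, z=3.0):
    """Called before the connection is established; True means block."""
    p = profiles[(user, host)]
    flagged = p.is_outlier(rate, z)
    if not flagged:
        p.update(rate)               # learn only from non-flagged traffic
    return flagged

# warm up on normal traffic, then probe with a spike
normal_flags = [check_connection("app_user", "db-1", 5.0 + 0.1 * (i % 3))
                for i in range(50)]
spike_flag = check_connection("app_user", "db-1", 50.0)
```

Learning only from non-flagged traffic keeps a sustained attack from dragging the profile toward the anomalous rate, which is one simple way to keep false positives low while staying non-intrusive to the database.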
|
2501.07700
|
An Adaptive Collocation Point Strategy For Physics Informed Neural
Networks via the QR Discrete Empirical Interpolation Method
|
cs.LG cs.CE cs.NA math.NA
|
Physics-informed neural networks (PINNs) have gained significant attention
for solving forward and inverse problems related to partial differential
equations (PDEs). While advancements in loss functions and network
architectures have improved PINN accuracy, the impact of collocation point
sampling on their performance remains underexplored. Fixed sampling methods,
such as uniform random sampling and equispaced grids, can fail to capture
critical regions with high solution gradients, limiting their effectiveness for
complex PDEs. Adaptive methods, inspired by adaptive mesh refinement from
traditional numerical methods, address this by dynamically updating collocation
points during training but may overlook residual dynamics between updates,
potentially losing valuable information. To overcome this limitation, we
propose an adaptive collocation point selection strategy utilizing the QR
Discrete Empirical Interpolation Method (QR-DEIM), a reduced-order modeling
technique for efficiently approximating nonlinear functions. Our results on
benchmark PDEs, including the wave, Allen-Cahn, and Burgers' equations,
demonstrate that our QR-DEIM-based approach improves PINN accuracy compared to
existing methods, offering a promising direction for adaptive collocation point
strategies.
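The DEIM idea behind such a strategy can be sketched as follows; note this shows the classical greedy DEIM selection (the paper's variant uses a pivoted QR instead), and the residual snapshots are synthetic:

```python
import numpy as np

# Rough sketch of DEIM-style collocation point selection (classical
# greedy DEIM; the paper's QR-DEIM uses a pivoted QR instead, and the
# residual snapshots here are synthetic stand-ins for PINN residuals).

def deim_indices(U):
    """Greedy DEIM: given an n x m orthonormal basis U of residual
    snapshots over n candidate points, return m interpolation indices."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # interpolate column j at the chosen indices, then pick the
        # point where the interpolation residual is largest
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return idx

# synthetic "residual snapshots" with steep gradients near x = 0
x = np.linspace(-1.0, 1.0, 201)
snapshots = np.column_stack([np.tanh(k * x) for k in (5, 10, 20, 40)])
U = np.linalg.svd(snapshots, full_matrices=False)[0]  # POD basis
idx = deim_indices(U)
collocation_points = x[idx]
```

The selected indices concentrate where the snapshot basis is hardest to interpolate, which is exactly the behavior one wants from an adaptive collocation strategy near high-gradient regions.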
|
2501.07701
|
Active Learning Enhanced Surrogate Modeling of Jet Engines in JuliaSim
|
cs.CE
|
Surrogate models are effective tools for accelerated design of complex
systems. The result of a design optimization procedure using surrogate models
can be used to initialize an optimization routine using the full order system.
High accuracy of the surrogate model can be advantageous for fast convergence.
In this work, we present an active learning approach to produce a very
high-accuracy surrogate model of a turbofan jet engine that demonstrates 0.1\%
relative error for all quantities of interest. We contrast this with a
surrogate model produced using a more traditional brute-force data generation
approach.
|
2501.07705
|
Autonomous Electrochemistry Platform with Real-Time Normality Testing of
Voltammetry Measurements Using ML
|
cs.DC cs.RO
|
Electrochemistry workflows utilize various instruments and computing systems
to execute workflows consisting of electrocatalyst synthesis, testing and
evaluation tasks. The heterogeneity of the software and hardware of these
ecosystems makes it challenging to orchestrate a complete workflow from
production to characterization by automating its tasks. We propose an
autonomous electrochemistry computing platform for a multi-site ecosystem that
provides the services for remote experiment steering, real-time measurement
transfer, and AI/ML-driven analytics. We describe the integration of a mobile
robot and synthesis workstation into the ecosystem by developing custom
hub-networks and software modules to support remote operations over the
ecosystem's wireless and wired networks. We describe a workflow task for
generating I-V voltammetry measurements using a potentiostat, and a machine
learning framework to ensure their normality by detecting abnormal conditions
such as disconnected electrodes. We study a number of machine learning methods
for the underlying detection problem, including smooth, non-smooth, structural
and statistical methods, and their fusers. We present experimental results to
illustrate the effectiveness of this platform, and also validate the proposed
ML method by deriving its rigorous generalization equations.
|
2501.07711
|
Pedestrian Trajectory Prediction Based on Social Interactions Learning
With Random Weights
|
cs.CV cs.MM
|
Pedestrian trajectory prediction is a critical technology in the evolution of
self-driving cars toward complete artificial intelligence. Over recent years,
focusing on the trajectories of pedestrians to model their social interactions
has surged with great interest in more accurate trajectory predictions.
However, existing methods for modeling pedestrian social interactions rely on
pre-defined rules, struggling to capture non-explicit social interactions. In
this work, we propose a novel framework named DTGAN, which extends the
application of Generative Adversarial Networks (GANs) to graph sequence data,
with the primary objective of automatically capturing implicit social
interactions and achieving precise predictions of pedestrian trajectory. DTGAN
innovatively incorporates random weights within each graph to eliminate the
need for pre-defined interaction rules. We further enhance the performance of
DTGAN by exploring diverse task loss functions during adversarial training,
which yields improvements of 16.7\% and 39.3\% on metrics ADE and FDE,
respectively. The effectiveness and accuracy of our framework are verified on
two public datasets. The experimental results show that our proposed DTGAN
achieves superior performance and is well able to understand pedestrians'
intentions.
|
2501.07713
|
Testing Human-Hand Segmentation on In-Distribution and
Out-of-Distribution Data in Human-Robot Interactions Using a Deep Ensemble
Model
|
cs.CV cs.HC cs.LG cs.RO
|
Reliable detection and segmentation of human hands are critical for enhancing
safety and facilitating advanced interactions in human-robot collaboration.
Current research predominantly evaluates hand segmentation under
in-distribution (ID) data, which reflects the training data of deep learning
(DL) models. However, this approach fails to address out-of-distribution (OOD)
scenarios that often arise in real-world human-robot interactions. In this
study, we present a novel approach by evaluating the performance of pre-trained
DL models under both ID data and more challenging OOD scenarios. To mimic
realistic industrial scenarios, we designed a diverse dataset featuring simple
and cluttered backgrounds with industrial tools, varying numbers of hands (0 to
4), and hands with and without gloves. For OOD scenarios, we incorporated
unique and rare conditions such as finger-crossing gestures and motion blur
from fast-moving hands, addressing both epistemic and aleatoric uncertainties.
To ensure multiple points of view (PoVs), we utilized both egocentric cameras,
mounted on the operator's head, and static cameras to capture RGB images of
human-robot interactions. This approach allowed us to account for multiple
camera perspectives while also evaluating the performance of models trained on
existing egocentric datasets as well as static-camera datasets. For
segmentation, we used a deep ensemble model composed of UNet and RefineNet as
base learners. Performance evaluation was conducted using segmentation metrics
and uncertainty quantification via predictive entropy. Results revealed that
models trained on industrial datasets outperformed those trained on
non-industrial datasets, highlighting the importance of context-specific
training. Although all models struggled with OOD scenarios, those trained on
industrial datasets demonstrated significantly better generalization.
|
2501.07714
|
Koopman Meets Limited Bandwidth: Effect of Quantization on Data-Driven
Linear Prediction and Control of Nonlinear Systems
|
eess.SY cs.SY
|
Koopman-based lifted linear identification has been widely used for
data-driven prediction and model predictive control (MPC) of nonlinear systems.
It has found applications in flow-control, soft robotics, and unmanned aerial
vehicles (UAV). For autonomous systems, this system identification method works
by embedding the nonlinear system in a higher-dimensional linear space and
computing a finite-dimensional approximation of the corresponding Koopman
operator with the Extended Dynamic Mode Decomposition (EDMD) algorithm. EDMD is
a data-driven algorithm that estimates an approximate linear system by lifting
the state data-snapshots via nonlinear dictionary functions. For control
systems, EDMD is further modified to utilize both state and control
data-snapshots to estimate a lifted linear predictor with control input. This
article investigates how the estimation process is affected when the data is
quantized. Specifically, we examine the fundamental connection between
estimates of the linear predictor matrices obtained from unquantized data and
those from quantized data via modified EDMD. Furthermore, using the law of
large numbers, we demonstrate that, under a large data regime, the quantized
estimate can be considered a regularized version of the unquantized estimate.
We also explore the relationship between the two estimates in the finite data
regime. We further analyze the effect of nonlinear lifting functions on this
regularization due to quantization. The theory is validated through repeated
numerical experiments conducted on several control systems. The effect of
quantization on the MPC performance is also demonstrated.
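The EDMD estimation step described above can be sketched as follows; the cubic dictionary, toy dynamics, and uniform quantizer are illustrative choices, not the article's setup:

```python
import numpy as np

# Minimal EDMD sketch for an autonomous nonlinear system: lift state
# snapshots with a fixed dictionary and solve a least-squares problem
# for the approximate Koopman matrix. Dictionary, dynamics, and the
# uniform quantizer are illustrative assumptions.

def lift(x):
    """Dictionary of observables [x, x^2, x^3] (works elementwise)."""
    return np.array([x, x**2, x**3])

def edmd(X, Y):
    """Snapshot pairs with Y[i] = f(X[i]); returns K such that
    lift(Y) ~= K @ lift(X) in the least-squares sense."""
    PX, PY = lift(X), lift(Y)         # 3 x N lifted snapshot matrices
    return PY @ np.linalg.pinv(PX)    # least-squares Koopman estimate

f = lambda x: 0.9 * x - 0.1 * x**3    # toy nonlinear dynamics
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 500)
Y = f(X)

quantize = lambda x, step=0.05: step * np.round(x / step)
K = edmd(X, Y)                        # estimate from unquantized data
Kq = edmd(quantize(X), quantize(Y))   # estimate from quantized data
```

In the large-data regime the article argues the quantized estimate behaves like a regularized version of the unquantized one; in this toy setting the two matrices are correspondingly close.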
|
2501.07715
|
Analyzing the Role of the DSO in Electricity Trading of VPPs via a
Stackelberg Game Model
|
eess.SY cs.SY
|
The increasing penetration of distributed energy resources (DER) has sparked
interest in promoting their participation in the power market. Here we consider
a setting in which different virtual power plants (VPPs) with certain flexible
resources take part in electricity trading, either by direct participation in
the wholesale power market, or interfaced by the Distribution System Operator
(DSO). Our goal is to examine the role and influence of the DSO as a
stakeholder, for which we formulate a Stackelberg game via a bilevel
optimization model: the DSO maximizes profits at the upper level, while VPPs
minimize operating costs at the lower level. To solve this problem, we use the
Karush-Kuhn-Tucker optimality conditions of the convex lower-level problems to
obtain a single-level mixed-integer nonlinear program. The results show that
the role of the DSO as an intermediary agent leads to a decrease in operating
costs for the VPPs, while guaranteeing a profit for the DSO.
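Schematically, the bilevel model and its KKT-based single-level reformulation take the following generic form (the symbols are generic placeholders, not the paper's notation):

```latex
% Stackelberg game: the DSO (leader) sets prices \pi; each VPP
% (follower) minimizes its operating cost.
\begin{align*}
  \max_{\pi}\quad & \text{DSO profit}(\pi, x^{*})\\
  \text{s.t.}\quad & x^{*} \in \arg\min_{x}\ \bigl\{\, c(x;\pi) : g(x) \le 0,\ h(x) = 0 \,\bigr\}.
\end{align*}
% Because the lower level is convex, it can be replaced by its KKT
% conditions, yielding a single-level mixed-integer nonlinear program:
\begin{align*}
  & \nabla_{x} c(x;\pi) + \nabla g(x)^{\top}\mu + \nabla h(x)^{\top}\lambda = 0,\\
  & g(x) \le 0,\qquad h(x) = 0,\qquad \mu \ge 0,\qquad \mu_{i}\, g_{i}(x) = 0 \ \ \forall i.
\end{align*}
```

The complementarity conditions $\mu_i\, g_i(x) = 0$ are what introduce the integer (or disjunctive) structure when the reformulated problem is handed to a solver.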
|
2501.07718
|
Benchmarking Abstractive Summarisation: A Dataset of Human-authored
Summaries of Norwegian News Articles
|
cs.CL
|
We introduce a dataset of high-quality human-authored summaries of news
articles in Norwegian. The dataset is intended for benchmarking the abstractive
summarisation capabilities of generative language models. Each document in the
dataset is provided with three different candidate gold-standard summaries
written by native Norwegian speakers, and all summaries are provided in both of
the written variants of Norwegian -- Bokm{\aa}l and Nynorsk. The paper
describes details on the data creation effort as well as an evaluation of
existing open LLMs for Norwegian on the dataset. We also provide insights from
a manual human evaluation, comparing human-authored to model-generated
summaries. Our results indicate that the dataset provides a challenging LLM
benchmark for Norwegian summarisation capabilities.
|
2501.07719
|
Entailed Between the Lines: Incorporating Implication into NLI
|
cs.CL
|
Much of human communication depends on implication, conveying meaning beyond
literal words to express a wider range of thoughts, intentions, and feelings.
For models to better understand and facilitate human communication, they must
be responsive to the text's implicit meaning. We focus on Natural Language
Inference (NLI), a core tool for many language tasks, and find that
state-of-the-art NLI models and datasets struggle to recognize a range of cases
where entailment is implied, rather than explicit from the text. We formalize
implied entailment as an extension of the NLI task and introduce the Implied
NLI dataset (INLI) to help today's LLMs both recognize a broader variety of
implied entailments and to distinguish between implicit and explicit
entailment. We show how LLMs fine-tuned on INLI understand implied entailment
and can generalize this understanding across datasets and domains.
|
2501.07721
|
LLMic: Romanian Foundation Language Model
|
cs.CL
|
Recent advances in Large Language Models (LLMs) have demonstrated remarkable
capabilities across various tasks with commercial models leading the way. While
open models usually operate at a smaller scale, they maintain competitiveness
through specialization and fine-tuning. However, a significant challenge
persists: open models often underperform in low-resource languages due to
limited representation in the training corpus. In this paper, we present LLMic,
a bilingual foundation language model designed specifically for the Romanian
Language. We document the complete process of pretraining a foundation model
for a low-resource language, including corpus construction, architecture
selection, and hyper-parameter optimization. Our evaluation demonstrates that
LLMic can be specialized for tasks in the target language, achieving results
comparable to other much larger open models. We show that fine-tuning LLMic for
language translation after the initial pretraining phase outperforms existing
solutions in English-to-Romanian translation tasks. This opens the path for
efficient large-scale processing for the Romanian language community, using the
much smaller LLMic model.
|
2501.07723
|
ESURF: Simple and Effective EDU Segmentation
|
cs.CL cs.LG
|
Segmenting text into Elemental Discourse Units (EDUs) is a fundamental task
in discourse parsing. We present a new simple method for identifying EDU
boundaries, and hence segmenting them, based on lexical and character n-gram
features, using random forest classification. We show that the method, despite
its simplicity, outperforms other methods both for segmentation and within a
state of the art discourse parser. This indicates the importance of such
features for identifying basic discourse elements, pointing towards potentially
more training-efficient methods for discourse analysis.
|
2501.07726
|
Exploring the encoding of linguistic representations in the
Fully-Connected Layer of generative CNNs for Speech
|
cs.CL
|
Interpretability work on the convolutional layers of CNNs has primarily
focused on computer vision, but some studies also explore correspondences
between the latent space and the output in the audio domain. However, it has
not been thoroughly examined how acoustic and linguistic information is
represented in the fully connected (FC) layer that bridges the latent space and
convolutional layers. The current study presents the first exploration of how
the FC layer of CNNs for speech synthesis encodes linguistically relevant
information. We propose two techniques for exploration of the fully connected
layer. In Experiment 1, we use weight matrices as inputs into convolutional
layers. In Experiment 2, we manipulate the FC layer to explore how
symbolic-like representations are encoded in CNNs. We leverage the fact that
the FC layer outputs a feature map and that variable-specific weight matrices
are temporally structured to (1) demonstrate how the distribution of learned
weights varies between latent variables in systematic ways and (2) demonstrate
how manipulating the FC layer while holding constant subsequent model
parameters affects the output. We ultimately present an FC manipulation that
can output a single segment. Using this technique, we show that lexically
specific latent codes in generative CNNs (ciwGAN) have shared lexically
invariant sublexical representations in the FC-layer weights, showing that
ciwGAN encodes lexical information in a linguistically principled manner.
|
2501.07727
|
Stronger Than You Think: Benchmarking Weak Supervision on Realistic
Tasks
|
cs.LG
|
Weak supervision (WS) is a popular approach for label-efficient learning,
leveraging diverse sources of noisy but inexpensive weak labels to
automatically annotate training data. Despite its wide usage, WS and its
practical value are challenging to benchmark due to the many knobs in its
setup, including: data sources, labeling functions (LFs), aggregation
techniques (called label models), and end model pipelines. Existing evaluation
suites tend to be limited, focusing on particular components or specialized use
cases. Moreover, they often involve simplistic benchmark tasks or de-facto LF
sets that are suboptimally written, producing insights that may not generalize
to real-world settings. We address these limitations by introducing a new
benchmark, BOXWRENCH, designed to more accurately reflect real-world usages of
WS. This benchmark features tasks with (1) higher class cardinality and
imbalance, (2) notable domain expertise requirements, and (3) opportunities to
re-use LFs across parallel multilingual corpora. For all tasks, LFs are written
using a careful procedure aimed at mimicking real-world settings. In contrast
to existing WS benchmarks, we show that supervised learning requires
substantial amounts (1000+) of labeled examples to match WS in many settings.
|
2501.07729
|
Autoencoded UMAP-Enhanced Clustering for Unsupervised Learning
|
cs.LG
|
We propose a novel approach to unsupervised learning by constructing a
non-linear embedding of the data into a low-dimensional space followed by any
conventional clustering algorithm. The embedding promotes clusterability of the
data and comprises two mappings: the encoder of an autoencoder neural
network and the output of the UMAP algorithm. The autoencoder is trained with a
composite loss function that incorporates both a conventional data
reconstruction as a regularization component and a clustering-promoting
component built using spectral graph theory. The two embeddings and the
subsequent clustering are integrated into a three-stage unsupervised learning
framework, referred to as Autoencoded UMAP-Enhanced Clustering (AUEC). When
applied to MNIST data, AUEC significantly outperforms the state-of-the-art
techniques in terms of clustering accuracy.
|
2501.07730
|
Democratizing Text-to-Image Masked Generative Models with Compact
Text-Aware One-Dimensional Tokens
|
cs.CV
|
Image tokenizers form the foundation of modern text-to-image generative
models but are notoriously difficult to train. Furthermore, most existing
text-to-image models rely on large-scale, high-quality private datasets, making
them challenging to replicate. In this work, we introduce Text-Aware
Transformer-based 1-Dimensional Tokenizer (TA-TiTok), an efficient and powerful
image tokenizer that can utilize either discrete or continuous 1-dimensional
tokens. TA-TiTok uniquely integrates textual information during the tokenizer
decoding stage (i.e., de-tokenization), accelerating convergence and enhancing
performance. TA-TiTok also benefits from a simplified, yet effective, one-stage
training process, eliminating the need for the complex two-stage distillation
used in previous 1-dimensional tokenizers. This design allows for seamless
scalability to large datasets. Building on this, we introduce a family of
text-to-image Masked Generative Models (MaskGen), trained exclusively on open
data while achieving comparable performance to models trained on private data.
We aim to release both the efficient, strong TA-TiTok tokenizers and the
open-data, open-weight MaskGen models to promote broader access and democratize
the field of text-to-image masked generative models.
|
2501.07731
|
HyperQuery: Beyond Binary Link Prediction
|
cs.LG cs.SI
|
Groups with complex set intersection relations are a natural way to model a
wide array of data, from the formation of social groups to the complex protein
interactions which form the basis of biological life. One approach to
representing such higher order relationships is as a hypergraph. However,
efforts to apply machine learning techniques to hypergraph structured datasets
have been limited thus far. In this paper, we address the problem of link
prediction in knowledge hypergraphs as well as simple hypergraphs and develop a
novel, simple, and effective optimization architecture that addresses both
tasks. Additionally, we introduce a novel feature extraction technique using
node-level clustering, and we show how integrating data from node-level labels
can improve system performance. Our self-supervised approach achieves
significant improvement over state of the art baselines on several hyperedge
prediction and knowledge hypergraph completion benchmarks.
|
2501.07737
|
Multi-megabase scale genome interpretation with genetic language models
|
q-bio.GN cs.LG
|
Understanding how molecular changes caused by genetic variation drive disease
risk is crucial for deciphering disease mechanisms. However, interpreting
genome sequences is challenging because of the vast size of the human genome,
and because its consequences manifest across a wide range of cells, tissues and
scales -- spanning from molecular to whole organism level. Here, we present
Phenformer, a multi-scale genetic language model that learns to generate
mechanistic hypotheses as to how differences in genome sequence lead to
disease-relevant changes in expression across cell types and tissues directly
from DNA sequences of up to 88 million base pairs. Using whole genome
sequencing data from more than 150 000 individuals, we show that Phenformer
generates mechanistic hypotheses about disease-relevant cell and tissue types
that match literature better than existing state-of-the-art methods, while
using only sequence data. Furthermore, disease risk predictors enriched by
Phenformer show improved prediction performance and generalisation to diverse
populations. Accurate multi-megabase scale interpretation of whole genomes
without additional experimental data enables both a deeper understanding of
molecular mechanisms involved in disease and improved disease risk prediction
at the level of individuals.
|
2501.07740
|
Advancing Student Writing Through Automated Syntax Feedback
|
cs.CL
|
This study underscores the pivotal role of syntax feedback in augmenting the
syntactic proficiency of students. Recognizing the challenges faced by learners
in mastering syntactic nuances, we introduce a specialized dataset named
Essay-Syntax-Instruct designed to enhance the understanding and application of
English syntax among these students. Leveraging the capabilities of Large
Language Models (LLMs) such as GPT3.5-Turbo, Llama-2-7b-chat-hf,
Llama-2-13b-chat-hf, and Mistral-7B-Instruct-v0.2, this work embarks on a
comprehensive fine-tuning process tailored to the syntax improvement task.
Through meticulous evaluation, we demonstrate that the fine-tuned LLMs exhibit
a marked improvement in addressing syntax-related challenges, thereby serving
as a potent tool for students to identify and rectify their syntactic errors.
The findings not only highlight the effectiveness of the proposed dataset in
elevating the performance of LLMs for syntax enhancement but also illuminate a
promising path for utilizing advanced language models to support language
acquisition efforts. This research contributes to the broader field of language
learning technology by showcasing the potential of LLMs in facilitating the
linguistic development of students.
|
2501.07741
|
Concentration of Measure for Distributions Generated via Diffusion
Models
|
stat.ML cs.LG
|
We show via a combination of mathematical arguments and empirical evidence
that data distributions sampled from diffusion models satisfy a Concentration
of Measure Property saying that any Lipschitz $1$-dimensional projection of a
random vector is not too far from its mean with high probability. This implies
that such models are quite restrictive and gives an explanation for a fact
previously observed in arXiv:2410.14171 that conventional diffusion models
cannot capture "heavy-tailed" data (i.e. data $\mathbf{x}$ for which the norm
$\|\mathbf{x}\|_2$ does not possess a subgaussian tail) well. We then proceed
to train a generalized linear model using stochastic gradient descent (SGD) on
the diffusion-generated data for a multiclass classification task and observe
empirically that a Gaussian universality result holds for the test error.
In other words, the test error depends only on the first and second order
statistics of the diffusion-generated data in the linear setting. Results of
such forms are desirable because they allow one to assume the data itself is
Gaussian for analyzing performance of the trained classifier. Finally, we note
that current approaches to proving universality do not apply to this case as
the covariance matrices of the data tend to have vanishing minimum singular
values for the diffusion-generated data, while the current proofs assume that
this is not the case (see Subsection 3.4 for more details). This leaves
extending previous mathematical universality results as an intriguing open
question.
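The concentration property in question can be written schematically as follows (a standard dimension-free subgaussian form; the paper's precise constants and statement may differ):

```latex
% Concentration of Measure Property for diffusion-generated x ~ P:
% every 1-Lipschitz projection f : R^d -> R concentrates about its mean,
\[
  \mathbb{P}\bigl(\,\lvert f(\mathbf{x}) - \mathbb{E}[f(\mathbf{x})]\rvert \ge t\,\bigr)
  \;\le\; 2\exp\!\bigl(-c\,t^{2}\bigr) \qquad \text{for all } t > 0,
\]
% with c > 0 a constant. Since the norm \|\mathbf{x}\|_2 is itself
% 1-Lipschitz, its tail must be subgaussian, which is why heavy-tailed
% data cannot be captured by such models.
```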
|
2501.07742
|
Fixing the Scale and Shift in Monocular Depth For Camera Pose Estimation
|
cs.CV
|
Recent advances in monocular depth prediction have led to significantly
improved depth prediction accuracy. In turn, this enables various applications
to use such depth predictions. In this paper, we propose a novel framework for
estimating the relative pose between two cameras from point correspondences
with associated monocular depths. Since depth predictions are typically defined
up to an unknown scale and shift parameter, our solvers jointly estimate both
scale and shift parameters together with the camera pose. We derive efficient
solvers for three cases: (1) two calibrated cameras, (2) two uncalibrated
cameras with an unknown but shared focal length, and (3) two uncalibrated
cameras with unknown and different focal lengths. Experiments on synthetic and
real data, including experiments with depth maps estimated by 11 different
depth predictors, show the practical viability of our solvers. Compared to
prior work, our solvers achieve state-of-the-art results on two large-scale,
real-world datasets. The source code is available at
https://github.com/yaqding/pose_monodepth
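As a side sketch of the scale/shift ambiguity (not the paper's joint pose solvers), the two unknowns can be recovered by ordinary least squares whenever reference depths are available; all names and data below are illustrative:

```python
import numpy as np

# Side sketch (not the paper's solvers): monocular depth predictions are
# defined up to scale s and shift t, d_metric ~= s * d_pred + t. Given
# reference depths, s and t follow from a 2-unknown least-squares fit --
# the same two parameters the paper's solvers estimate jointly with pose.

def fit_scale_shift(d_pred, d_ref):
    A = np.column_stack([d_pred, np.ones_like(d_pred)])
    (s, t), *_ = np.linalg.lstsq(A, d_ref, rcond=None)
    return s, t

rng = np.random.default_rng(0)
d_pred = rng.uniform(1.0, 5.0, 100)   # predicted (relative) depths
d_ref = 2.0 * d_pred + 0.5            # metric depths: s = 2.0, t = 0.5
s, t = fit_scale_shift(d_pred, d_ref)
```

The paper's contribution is to fold these two unknowns into minimal solvers for relative pose, rather than fitting them against known reference depths as done here.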
|
2501.07743
|
The Reliability of Remotely Piloted Aircraft System Performance under
Communication Loss and Latency Uncertainties
|
eess.SY cs.SY
|
Mission-critical use of highly maneuverable Remotely Piloted Aircraft Systems
(RPAS) requires a thorough understanding of the reliability of their
communication systems. Investigations into system-level performance under
stochastic aviation communication conditions are critical for estimating
mission success rates and assessing the risks associated with integrating RPAS
into existing airspace, ensuring overall aviation safety. This study aims to
quantify the impact of communication latency and complete signal loss on the
mission completion performance of a highly maneuverable RPAS. The mission is
defined as a static waypoint tracking task in three-dimensional airspace. We
start with examining and deriving mathematical formulations of key reliability
metrics of Required Communication Performance (RCP). These stochastic factors
(i.e., communication availability and latency) are then embedded into flight
control simulations to examine the system behavior. Lastly, we generate
mission success rate and mission completion time envelopes through extensive
multiprocessing Monte Carlo simulations through high-performance computing. We
discover a drastic deterioration in flight performance as latency or loss of
availability erodes the stability margin. In addition, we propose a new
reliability metric, namely \textit{communicability}, which integrates three key
RCP metrics and helps in understanding the maximum latency tolerable by the
flight control. The procedure and results obtained from this research inform
engineers designing RPAS with a better trade-off between communication
capability and flight control performance. Future work includes exploring
alternative flight simulators (e.g., nonlinear dynamic inversion) with other
missions (e.g., dynamic waypoint following), or developing delay-compensated
optimal controls. An analysis of the stability margin is also desired for
theoretical verification.
|
2501.07744
|
CBS with Continuous-Time Revisit
|
cs.MA
|
In recent years, researchers introduced the Multi-Agent Path Finding in
Continuous Time (MAPFR) problem. Continuous-time Conflict-Based Search (CCBS),
a variant of CBS for discrete MAPF, aims to solve MAPFR with
completeness and optimality guarantees. However, CCBS overlooked the fact that
search algorithms only guarantee to terminate and return the optimal solution
after expanding a finite number of search nodes. In this paper, we show that CCBS is
incomplete, reveal the gaps in the existing implementation, demonstrate that
patching is non-trivial, and discuss the next steps.
|
2501.07746
|
A Heterogeneous Multimodal Graph Learning Framework for Recognizing User
Emotions in Social Networks
|
cs.SI cs.CL cs.CV
|
The rapid expansion of social media platforms has provided unprecedented
access to massive amounts of multimodal user-generated content. Comprehending
user emotions can provide valuable insights for improving communication and
understanding of human behaviors. Despite significant advancements in Affective
Computing, the diverse factors influencing user emotions in social networks
remain relatively understudied. Moreover, there is a notable lack of deep
learning-based methods for predicting user emotions in social networks, which
could be addressed by leveraging the extensive multimodal data available. This
work presents a novel formulation of personalized emotion prediction in social
networks based on heterogeneous graph learning. Building upon this formulation,
we design HMG-Emo, a Heterogeneous Multimodal Graph Learning Framework that
utilizes deep learning-based features for user emotion recognition.
Additionally, we include a dynamic context fusion module in HMG-Emo that is
capable of adaptively integrating the different modalities in social media
data. Through extensive experiments, we demonstrate the effectiveness of
HMG-Emo and verify the superiority of adopting a graph neural network-based
approach, which outperforms existing baselines that use rich hand-crafted
features. To the best of our knowledge, HMG-Emo is the first multimodal and
deep-learning-based approach to predict personalized emotions within online
social networks. Our work highlights the significance of exploiting advanced
deep learning techniques for less-explored problems in Affective Computing.
|
2501.07747
|
Scaling Up ESM2 Architectures for Long Protein Sequences Analysis: Long
and Quantized Approaches
|
cs.LG q-bio.QM
|
Various approaches utilizing Transformer architectures have achieved
state-of-the-art results in Natural Language Processing (NLP). Based on this
success, numerous architectures have been proposed for other types of data,
such as in biology, particularly for protein sequences. Notably among these are
the ESM2 architectures, pre-trained on billions of proteins, which form the
basis of various state-of-the-art approaches in the field. However, the ESM2
architectures have a limitation on input size, restricting inputs to 1,022
amino acids, which necessitates preprocessing techniques to handle
sequences longer than this limit. In this paper, we present the long and
quantized versions of the ESM2 architectures, doubling the input size limit to
2,048 amino acids.
|
2501.07750
|
Boosting Sclera Segmentation through Semi-supervised Learning with Fewer
Labels
|
cs.CV
|
Sclera segmentation is crucial for developing automatic eye-related medical
computer-aided diagnostic systems, as well as for personal identification and
verification, because the sclera contains distinct personal features. Deep
learning-based sclera segmentation has achieved significant success compared to
traditional methods that rely on hand-crafted features, primarily because it
can autonomously extract critical output-related features without the need to
consider potential physical constraints. However, achieving accurate sclera
segmentation using these methods is challenging due to the scarcity of
high-quality, fully labeled datasets, which depend on costly, labor-intensive
medical acquisition and expertise. To address this challenge, this paper
introduces a novel sclera segmentation framework that excels with limited
labeled samples. Specifically, we employ a semi-supervised learning method that
integrates domain-specific improvements and image-based spatial transformations
to enhance segmentation performance. Additionally, we have developed a
real-world eye diagnosis dataset to enrich the evaluation process. Extensive
experiments on our dataset and two additional public datasets demonstrate the
effectiveness and superiority of our proposed method, especially with
significantly fewer labeled samples.
|
2501.07751
|
Rethinking AI Cultural Evaluation
|
cs.AI cs.CY
|
As AI systems become more integrated into society, evaluating their capacity
to align with diverse cultural values is crucial for their responsible
deployment. Current evaluation methods predominantly rely on multiple-choice
question (MCQ) datasets. In this study, we demonstrate that MCQs are
insufficient for capturing the complexity of cultural values expressed in
open-ended scenarios. Our findings highlight significant discrepancies between
MCQ-based assessments and the values conveyed in unconstrained interactions.
Based on these findings, we recommend moving beyond MCQs to adopt more
open-ended, context-specific assessments that better reflect how AI models
engage with cultural values in realistic settings.
|
2501.07754
|
Universal Training of Neural Networks to Achieve Bayes Optimal
Classification Accuracy
|
cs.LG cs.CV cs.IT eess.IV eess.SP math.IT
|
This work invokes the notion of $f$-divergence to introduce a novel upper
bound on the Bayes error rate of a general classification task. We show that
the proposed bound can be computed by sampling from the output of a
parameterized model. Using this practical interpretation, we introduce the
Bayes optimal learning threshold (BOLT) loss whose minimization enforces a
classification model to achieve the Bayes error rate. We validate the proposed
loss for image and text classification tasks, considering MNIST, Fashion-MNIST,
CIFAR-10, and IMDb datasets. Numerical experiments demonstrate that models
trained with BOLT achieve performance on par with or exceeding that of
cross-entropy, particularly on challenging datasets. This highlights the
potential of BOLT in improving generalization.
|
2501.07755
|
Performance Optimization of Ratings-Based Reinforcement Learning
|
cs.LG cs.AI
|
This paper explores multiple optimization methods to improve the performance
of rating-based reinforcement learning (RbRL). RbRL, a method based on the idea
of human ratings, has been developed to infer reward functions in reward-free
environments for the subsequent policy learning via standard reinforcement
learning, which requires the availability of reward functions. Specifically,
RbRL minimizes a cross-entropy loss that quantifies the difference between
human ratings and estimated ratings derived from the inferred reward. Hence, a
low loss means a high degree of consistency between human ratings and estimated
ratings. Despite its simple form, RbRL has several hyperparameters and can be
sensitive to various factors. Therefore, it is critical to provide
comprehensive experiments to understand the impact of various hyperparameters
on the performance of RbRL. This paper is a work in progress, providing users
some general guidelines on how to select hyperparameters in RbRL.
|
2501.07761
|
Impatient Bandits: Optimizing for the Long-Term Without Delay
|
cs.LG cs.AI stat.ML
|
Increasingly, recommender systems are tasked with improving users' long-term
satisfaction. In this context, we study a content exploration task, which we
formalize as a bandit problem with delayed rewards. There is an apparent
trade-off in choosing the learning signal: waiting for the full reward to
become available might take several weeks, slowing the rate of learning,
whereas using short-term proxy rewards reflects the actual long-term goal only
imperfectly. First, we develop a predictive model of delayed rewards that
incorporates all information obtained to date. Rewards as well as shorter-term
surrogate outcomes are combined through a Bayesian filter to obtain a
probabilistic belief. Second, we devise a bandit algorithm that quickly learns
to identify content aligned with long-term success using this new predictive
model. We prove a regret bound for our algorithm that depends on the
\textit{Value of Progressive Feedback}, an information theoretic metric that
captures the quality of short-term leading indicators that are observed prior
to the long-term reward. We apply our approach to a podcast recommendation
problem, where we seek to recommend shows that users engage with repeatedly
over two months. We empirically validate that our approach significantly
outperforms methods that optimize for short-term proxies or rely solely on
delayed rewards, as demonstrated by an A/B test in a recommendation system that
serves hundreds of millions of users.
|