text | source
|---|---|
We study the influence of the antiferromagnetic order on the surface states
of topological insulators. We derive an effective Hamiltonian for these states,
taking into account the spatial structure of the antiferromagnetic order. We
obtain a typical (gapless) Dirac Hamiltonian for the surface states when the
surface of the sample is not perturbed. The gapless spectrum is protected by the
combination of time-reversal and half-translation symmetries. However, a shift
in the chemical potential of the surface layer opens a gap in the spectrum away
from the Fermi energy. Such a gap occurs only in systems with finite
antiferromagnetic order. We observe that the system topology remains unchanged
even for large values of the disorder. We calculate the spectrum using the
tight-binding model with different boundary conditions. In this case we get a
gap in the spectrum of the surface states. This discrepancy arises due to the
violation of the combined time-reversal symmetry. We compare our results with
experiments and density functional theory calculations.
|
http://arxiv.org/abs/2309.11216v2
|
This paper proposes a gradient descent based optimization method that relies
on automatic differentiation for the computation of gradients. The method uses
tools and techniques originally developed in the field of artificial neural
networks and applies them to power system simulations. It can be used as a
one-shot physics-informed machine learning approach for the identification of
uncertain power system simulation parameters. Additionally, it can optimize
parameters with respect to a desired system behavior. The paper focuses on
presenting the theoretical background and showing exemplary use-cases for both
parameter identification and optimization using a single machine infinite
busbar system. The results imply a generic applicability for a wide range of
problems.
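As a toy illustration of the underlying idea (not the paper's implementation), the sketch below identifies an uncertain damping coefficient of a classical single-machine swing equation by gradient descent, with gradients obtained via automatic differentiation through an unrolled simulation. The model, parameter values, and optimizer settings are assumptions chosen for the example.

```python
# Minimal sketch (not the paper's code): identify an uncertain damping
# coefficient D of a single-machine swing equation by gradient descent,
# using automatic differentiation through an unrolled Euler simulation.
import torch

H, P_m, P_max, dt, steps = 3.5, 0.8, 1.8, 0.02, 1000  # assumed constants

def simulate(D):
    delta = torch.tensor(1.2)          # rotor angle [rad], away from equilibrium
    omega = torch.tensor(0.0)          # speed deviation [p.u.]
    trace = []
    for _ in range(steps):
        P_e = P_max * torch.sin(delta)
        domega = (P_m - P_e - D * omega) / (2 * H)
        omega = omega + dt * domega
        delta = delta + dt * omega
        trace.append(delta)
    return torch.stack(trace)

reference = simulate(torch.tensor(1.2)).detach()    # "measured" trajectory

D = torch.tensor(0.3, requires_grad=True)           # initial guess
opt = torch.optim.Adam([D], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((simulate(D) - reference) ** 2)
    loss.backward()                                  # gradients via autodiff
    opt.step()
print(f"identified D = {D.item():.3f}")              # should move toward 1.2
```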
|
http://arxiv.org/abs/2309.16579v1
|
Estimating the probability of the binomial distribution is a basic problem,
which appears in almost all introductory statistics courses and is performed
frequently in various studies. In some cases, the parameter of interest is a
difference between two probabilities, and the current work studies the
construction of confidence intervals for this parameter when the sample size is
small. Our goal is to find the shortest confidence intervals under the
constraint of coverage probability being at least as large as a predetermined
level. For the two-sample case, there is no known algorithm that achieves this
goal, but different heuristic procedures have been suggested, and the present
work aims at finding optimal confidence intervals. In the one-sample case,
there is a known algorithm, presented by Blyth and Still (1983), that finds optimal confidence intervals. It is based on solving small and local optimization
problems and then using an inversion step to find the global optimum solution.
We show that this approach fails in the two-sample case and therefore, in order
to find optimal confidence intervals, one needs to solve a global optimization
problem, rather than small and local ones, which is computationally much
harder. We present and discuss the suitable global optimization problem. Using
the Gurobi package we find near-optimal solutions when the sample sizes are
smaller than 15, and we compare these solutions to some existing methods, both
approximate and exact. We find that the improvement in terms of lengths with
respect to the best competitor varies between 1.5\% and 5\% for different
parameters of the problem. Therefore, we recommend the use of the new
confidence intervals when both sample sizes are smaller than 15. Tables of the
confidence intervals are given in the Excel file in this link.
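To make the coverage constraint concrete, the sketch below computes the exact coverage probability of a simple Wald-type interval for the difference of two binomial proportions over a grid of parameter values; the Wald interval is only a stand-in for illustration and is not one of the optimal intervals constructed in the paper. Sample sizes and the grid are assumptions.

```python
# Minimal sketch: exact coverage probability of a Wald-type interval for
# p1 - p2 with small samples, illustrating the coverage constraint that the
# optimal intervals must satisfy (the Wald interval is only a stand-in here).
import numpy as np
from scipy.stats import binom, norm

n1, n2, alpha = 10, 10, 0.05
z = norm.ppf(1 - alpha / 2)

def wald_interval(x1, x2):
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

def coverage(p1, p2):
    cov = 0.0
    for x1 in range(n1 + 1):
        for x2 in range(n2 + 1):
            lo, hi = wald_interval(x1, x2)
            if lo <= p1 - p2 <= hi:
                cov += binom.pmf(x1, n1, p1) * binom.pmf(x2, n2, p2)
    return cov

# Coverage often dips below the nominal 95% level for small samples.
grid = np.linspace(0.05, 0.95, 19)
worst = min(coverage(p1, p2) for p1 in grid for p2 in grid)
print(f"minimum coverage over the grid: {worst:.3f}")
```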
|
http://arxiv.org/abs/2308.16650v3
|
This paper proposes a novel fuzzy cascaded Proportional-Derivative (PD)
controller for under-actuated single-link flexible joint manipulators. The
original flexible joint system is considered as two coupled $2^{nd}$-order
sub-systems. The proposed controller is composed of two cascaded PD controllers
and two fuzzy logic regulators (FLRs). The first (virtual) PD controller is
used to generate the desired control input that stabilizes the first $2^{nd}$-order sub-system. By solving this equation with the coupling terms treated as design variables, the reference signal for the second sub-system is generated. Then
through simple compensation design, together with the second PD controller, the
cascaded PD controller is derived. In order to further improve the performance,
two FLRs are implemented that adaptively tune the parameters of PD controllers.
Under natural assumptions, the cascaded fuzzy PD controller is proved to possess local asymptotic stability. All offline tuning processes are
completed data-efficiently by Bayesian Optimization. The results in simulation
illustrate the stability and validity of our proposed method. Moreover, the idea
of cascaded PD controller presented here may be extended as a novel control
method for other under-actuated systems, and the stability analysis renders a
new perspective towards the stability proof of all other fuzzy-enhanced PID
controllers.
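The sketch below illustrates the cascaded-PD idea (without the fuzzy logic regulators or the Bayesian tuning) on a toy two-mass flexible-joint model; all dynamics, parameters, and gains are assumptions for the example, not the controller tuned in the paper.

```python
# Minimal sketch of the cascaded-PD idea (fuzzy regulators omitted) on a toy
# flexible-joint model: link dynamics driven by the joint stiffness, motor
# dynamics driven by the input torque.  All parameters and gains are illustrative.
J_l, J_m, K, dt = 1.0, 0.5, 10.0, 0.001      # link/motor inertia, joint stiffness
kp1, kd1 = 30.0, 8.0                          # outer (virtual) PD gains
kp2, kd2 = 60.0, 10.0                         # inner PD gains
q_des = 1.0                                   # desired link angle [rad]

q = dq = th = dth = 0.0                       # link and motor states
for _ in range(int(5 / dt)):
    # Outer loop: desired coupling torque that stabilises the link sub-system.
    tau_des = kp1 * (q_des - q) - kd1 * dq
    # Solve the coupling K*(th - q) = tau_des for the motor-angle reference.
    th_ref = q + tau_des / K
    # Inner loop: PD on the motor sub-system tracks that reference.
    u = kp2 * (th_ref - th) - kd2 * dth
    # Integrate the two coupled 2nd-order sub-systems (explicit Euler).
    ddq = K * (th - q) / J_l
    ddth = (u - K * (th - q)) / J_m
    dq, q = dq + dt * ddq, q + dt * dq
    dth, th = dth + dt * ddth, th + dt * dth

print(f"final link angle: {q:.3f} rad (target {q_des})")
```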
|
http://arxiv.org/abs/2309.07474v1
|
This study focuses on the presence of (multi)fractal structures in confined
hadronic matter through the momentum distributions of mesons produced in
proton-proton collisions between 23 GeV and 63 GeV. The analysis demonstrates
that the $q$-exponential behaviour of the particle momentum distributions is
consistent with fractal characteristics, exhibiting fractal structures in
confined hadronic matter with features similar to those observed in the
deconfined quark-gluon plasma (QGP) regime. Furthermore, the systematic
analysis of meson production in hadronic collisions at energies below 1 TeV
suggests that specific fractal parameters are universal, independent of
confinement or deconfinement, while others may be influenced by the quark
content of the produced meson. These results pave the way for further research
exploring the implications of fractal structures on various physical
distributions and offer insights into the nature of the phase transition
between confined and deconfined regimes.
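For reference, the $q$-exponential (Tsallis-like) form commonly fitted to momentum spectra can be written down in a few lines; the parameter values below are purely illustrative and are not the fitted values reported in the analysis.

```python
# Minimal sketch of the q-exponential (Tsallis-like) form commonly fitted to
# particle momentum spectra; parameter values here are purely illustrative.
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential: reduces to exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    return np.power(1.0 + (1.0 - q) * x, 1.0 / (1.0 - q))

def pt_spectrum(pt, A=1.0, T=0.12, q=1.1):
    """q-exponential transverse-momentum distribution dN/dpT ~ A * e_q(-pT/T)."""
    return A * q_exp(-pt / T, q)

pt = np.linspace(0.0, 3.0, 7)        # GeV/c
print(pt_spectrum(pt))                # power-law-like tail for q > 1
```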
|
http://arxiv.org/abs/2308.16888v1
|
The goal of this article is to obtain a proof of the Main conjectures of
Iwasawa theory for rational elliptic curves over anticyclotomic extensions of
imaginary quadratic fields, under mild arithmetic assumptions, both in the case where the rational prime $p$ is good ordinary and in the case where it is supersingular.
|
http://arxiv.org/abs/2306.17784v1
|
We investigate the finite-time behavior of pair production from the vacuum by
a time-dependent Sauter pulsed electric field using the spinor quantum
electrodynamics (QED). In the adiabatic basis, the one-particle distribution
function in momentum space is determined by utilizing the exact analytical
solution of the Dirac equation. By examining the temporal behavior of the
one-particle distribution function and the momentum spectrum of created pairs
in the sub-critical field limit $(E_0 = 0.2E_c)$, we observe oscillatory
patterns in the longitudinal momentum spectrum (LMS) of particles at finite
times. These oscillations arise due to quantum interference effects resulting
from the dynamical tunneling. Furthermore, we derive an approximate and
simplified analytical expression for the distribution function at finite times,
which allows us to explain the origin and behavior of these oscillations.
Additionally, we discuss the role of the vacuum polarization function and its
counter-term in the oscillations of the LMS of the vacuum excitation. We also analyse the
transverse momentum spectrum (TMS).
|
http://arxiv.org/abs/2309.12079v3
|
We study Poisson valuations and provide their applications in solving
problems related to rigidity, automorphisms, Dixmier property, isomorphisms,
and embeddings of Poisson algebras and fields.
|
http://arxiv.org/abs/2309.05511v1
|
Neural fields, a category of neural networks trained to represent
high-frequency signals, have gained significant attention in recent years due
to their impressive performance in modeling complex 3D data, such as signed distance fields (SDFs) or radiance fields (NeRFs), via a single multi-layer perceptron
(MLP). However, despite the power and simplicity of representing signals with
an MLP, these methods still face challenges when modeling large and complex
temporal signals due to the limited capacity of MLPs. In this paper, we propose
an effective approach to address this limitation by incorporating temporal
residual layers into neural fields, dubbed ResFields, a novel class of
networks specifically designed to effectively represent complex temporal
signals. We conduct a comprehensive analysis of the properties of ResFields and
propose a matrix factorization technique to reduce the number of trainable
parameters and enhance generalization capabilities. Importantly, our
formulation seamlessly integrates with existing MLP-based neural fields and
consistently improves results across various challenging tasks: 2D video
approximation, dynamic shape modeling via temporal SDFs, and dynamic NeRF
reconstruction. Lastly, we demonstrate the practical utility of ResFields by
showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD
cameras of a lightweight capture system.
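The sketch below shows one plausible reading of a temporal residual layer with factorized weights, namely a linear layer whose weight receives a time-dependent low-rank residual; it is not necessarily the exact parameterization used in ResFields, and all dimensions are illustrative.

```python
# Minimal sketch (one plausible reading, not necessarily the paper's exact
# formulation): a linear layer whose weight gets a time-dependent low-rank
# residual, W(t) = W + sum_r v_r(t) * M_r, with per-frame coefficients v_r(t)
# and shared basis matrices M_r factorizing the temporal residual.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalResidualLinear(nn.Module):
    def __init__(self, in_dim, out_dim, num_frames, rank=8):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        # Factorized residual: per-frame coefficients and a shared weight basis.
        self.coeffs = nn.Parameter(torch.zeros(num_frames, rank))
        self.basis = nn.Parameter(0.01 * torch.randn(rank, out_dim, in_dim))

    def forward(self, x, frame_idx):
        # Residual weight for this frame: (rank,) x (rank, out, in) -> (out, in)
        delta_w = torch.einsum("r,roi->oi", self.coeffs[frame_idx], self.basis)
        return F.linear(x, self.base.weight + delta_w, self.base.bias)

layer = TemporalResidualLinear(in_dim=3, out_dim=64, num_frames=100)
y = layer(torch.randn(16, 3), frame_idx=7)   # batch of coordinates at frame 7
print(y.shape)                                # torch.Size([16, 64])
```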
|
http://arxiv.org/abs/2309.03160v5
|
Audio recognition in specialized areas such as birdsong and submarine
acoustics faces challenges in large-scale pre-training due to the limitations
in available samples imposed by sampling environments and specificity
requirements. While the Transformer model excels in audio recognition, its
dependence on vast amounts of data becomes restrictive in resource-limited
settings. Addressing this, we introduce the Audio Spectrogram Convolution
Attention (ASCA) based on CoAtNet, integrating a Transformer-convolution hybrid
architecture, novel network design, and attention techniques, further augmented
with data enhancement and regularization strategies. On the BirdCLEF2023 and
AudioSet (Balanced) datasets, ASCA achieved accuracies of 81.2% and 35.1%, respectively,
significantly outperforming competing methods. The unique structure of our
model enriches output, enabling generalization across various audio detection
tasks. Our code can be found at https://github.com/LeeCiang/ASCA.
|
http://arxiv.org/abs/2309.13373v1
|
We conduct a theoretical investigation into the impacts of local microwave
electric field frequency detuning, laser frequency detuning, and transit
relaxation rate on enhancing heterodyne Rydberg atomic receiver sensitivity. To
optimize the output signal amplitude given the input microwave signal, we
derive the steady-state solutions of the atomic density matrix. Numerical
results show that laser frequency detuning and local microwave electric field
frequency detuning can improve the system detection sensitivity, which can help
the system achieve extra sensitivity gain. It also shows that the heterodyne
Rydberg atomic receiver can detect weak microwave signals continuously over a
wide frequency range with the same sensitivity or even more sensitivity than
the resonance case. To evaluate the transit relaxation effect, a modified
Liouville equation is used. We find that the transit relaxation rate increases the time it takes to reach the steady state and decreases the detection sensitivity of the system.
|
http://arxiv.org/abs/2306.17790v1
|
We show a cancellation property for probabilistic choice. If distributions mu
+ rho and nu + rho are branching probabilistic bisimilar, then distributions mu
and nu are also branching probabilistic bisimilar. We do this in the setting of
a basic process language involving non-deterministic and probabilistic choice
and define branching probabilistic bisimilarity on distributions. Despite the
fact that the cancellation property is very elegant and concise, we failed to
provide a short and natural combinatorial proof. Instead we provide a proof
using metric topology. Our major lemma is that every distribution can be
unfolded into an equivalent stable distribution, where the topological
arguments are required to deal with uncountable branching.
|
http://arxiv.org/abs/2309.07306v1
|
Vision Transformers (ViTs) have become prominent models for solving various
vision tasks. However, the interpretability of ViTs has not kept pace with
their promising performance. While there has been a surge of interest in
developing {\it post hoc} solutions to explain ViTs' outputs, these methods do
not generalize to different downstream tasks and various transformer
architectures. Furthermore, if ViTs are not properly trained with the given
data and do not prioritize the region of interest, the {\it post hoc} methods
would be less effective. Instead of developing another {\it post hoc} approach,
we introduce a novel training procedure that inherently enhances model
interpretability. Our interpretability-aware ViT (IA-ViT) draws inspiration
from a fresh insight: both the class patch and image patches consistently
generate predicted distributions and attention maps. IA-ViT is composed of a
feature extractor, a predictor, and an interpreter, which are trained jointly
with an interpretability-aware training objective. Consequently, the
interpreter simulates the behavior of the predictor and provides a faithful
explanation through its single-head self-attention mechanism. Our comprehensive
experimental results demonstrate the effectiveness of IA-ViT in several image
classification tasks, with both qualitative and quantitative evaluations of
model performance and interpretability. Source code is available from:
https://github.com/qiangyao1988/IA-ViT.
|
http://arxiv.org/abs/2309.08035v1
|
Due to the rapid development of technology and the widespread usage of
smartphones, the number of mobile applications is exponentially growing.
Finding a suitable collection of apps that aligns with users' needs and
preferences can be challenging. However, mobile app recommender systems have
emerged as a helpful tool in simplifying this process. But there is a drawback
to employing app recommender systems: these systems need access to user data, which poses a serious privacy risk. While users seek accurate recommendations, they
do not want to compromise their privacy in the process. We address this issue
by developing SAppKG, an end-to-end user privacy-preserving knowledge graph
architecture for mobile app recommendation based on knowledge graph models such
as SAppKG-S and SAppKG-D, which utilize the interaction data and side
information of app attributes. We tested the proposed model on real-world data
from the Google Play app store, using precision, recall, mean average precision, and mean reciprocal rank. We found that the proposed model improved
results on all four metrics. We also compared the proposed model to baseline
models and found that it outperformed them on all four metrics.
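For reference, the four reported ranking metrics can be computed from per-user ranked lists as in the sketch below; the cutoff and toy data are assumptions for illustration, not the evaluation protocol of the paper.

```python
# Minimal sketch of the four reported ranking metrics (precision@k, recall@k,
# mean average precision, mean reciprocal rank) over per-user ranked lists.
# The cutoff k and the toy data below are illustrative only.
import numpy as np

def metrics_at_k(ranked, relevant, k=10):
    hits = [1 if item in relevant else 0 for item in ranked[:k]]
    precision = sum(hits) / k
    recall = sum(hits) / max(len(relevant), 1)
    # Average precision: mean of precision@i over the ranks i of each hit.
    num_hits, ap = 0, 0.0
    for i, h in enumerate(hits, start=1):
        if h:
            num_hits += 1
            ap += num_hits / i
    ap /= max(len(relevant), 1)
    # Reciprocal rank of the first hit.
    rr = next((1.0 / i for i, h in enumerate(hits, start=1) if h), 0.0)
    return precision, recall, ap, rr

users = [(["a", "b", "c", "d"], {"b", "d"}),
         (["x", "y", "z"], {"z"})]
scores = np.mean([metrics_at_k(r, rel, k=3) for r, rel in users], axis=0)
print(dict(zip(["P@3", "R@3", "MAP", "MRR"], scores.round(3))))
```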
|
http://arxiv.org/abs/2309.17115v1
|
Hierarchical reinforcement learning has been a compelling approach for
achieving goal-directed behavior over long sequences of actions. However, it
has been challenging to implement in realistic or open-ended environments. A
main challenge has been to find the right space of sub-goals over which to
instantiate a hierarchy. We present a novel approach where we use data from
humans solving these tasks to softly supervise the goal space for a set of long
range tasks in a 3D embodied environment. In particular, we use unconstrained
natural language to parameterize this space. This has two advantages: first, it
is easy to generate this data from naive human participants; second, it is
flexible enough to represent a vast range of sub-goals in human-relevant tasks.
Our approach outperforms agents that clone expert behavior on these tasks, as
well as HRL from scratch without this supervised sub-goal space. Our work
presents a novel approach to combining human expert supervision with the
benefits and flexibility of reinforcement learning.
|
http://arxiv.org/abs/2309.11564v1
|
We propose a novel framework for interactive class-agnostic object counting,
where a human user can interactively provide feedback to improve the accuracy
of a counter. Our framework consists of two main components: a user-friendly
visualizer to gather feedback and an efficient mechanism to incorporate it. In
each iteration, we produce a density map to show the current prediction result,
and we segment it into non-overlapping regions with an easily verifiable number
of objects. The user can provide feedback by selecting a region with obvious
counting errors and specifying the range for the estimated number of objects
within it. To improve the counting result, we develop a novel adaptation loss
to force the visual counter to output the predicted count within the
user-specified range. For effective and efficient adaptation, we propose a
refinement module that can be used with any density-based visual counter, and
only the parameters in the refinement module will be updated during adaptation.
Our experiments on two challenging class-agnostic object counting benchmarks,
FSCD-LVIS and FSC-147, show that our method can reduce the mean absolute error
of multiple state-of-the-art visual counters by roughly 30% to 40% with minimal
user input. Our project can be found at
https://yifehuang97.github.io/ICACountProjectPage/.
|
http://arxiv.org/abs/2309.05277v1
|
The optical frequency comb underpins a wide range of applications, from communication and metrology to sensing. Its development on a chip-scale platform -- the so-called soliton microcomb -- provides a promising path towards system
miniaturization and functionality integration via photonic integrated circuit
(PIC) technology. Although extensively explored in recent years, challenges
remain in key aspects of microcomb such as complex soliton initialization, high
threshold, low power efficiency, and limited comb reconfigurability. Here we
present an on-chip laser that directly outputs microcomb and resolves all these
challenges, with a distinctive mechanism created from synergetic interaction
among resonant electro-optic effect, optical Kerr effect, and optical gain
inside the laser cavity. Realized with integration between a III-V gain chip
and a thin-film lithium niobate (TFLN) PIC, the laser is able to directly emit
mode-locked microcomb on demand with robust turnkey operation inherently built
in, with individual comb linewidth down to 600 Hz, whole-comb frequency tuning
rate exceeding $\rm 2.4\times10^{17}$ Hz/s, and 100% utilization of optical
power fully contributing to comb generation. The demonstrated approach unifies architectural and operational simplicity, high-speed reconfigurability, and the multifunctional capability enabled by the TFLN PIC, opening a promising avenue towards on-demand generation of mode-locked microcombs that is expected to have a profound impact on broad applications.
|
http://arxiv.org/abs/2310.20157v1
|
Learning from Demonstration (LfD) enables robots to acquire new skills by
imitating expert demonstrations, allowing users to communicate their
instructions in an intuitive manner. Recent progress in LfD often relies on
kinesthetic teaching or teleoperation as the medium for users to specify the
demonstrations. Kinesthetic teaching requires physical handling of the robot,
while teleoperation demands proficiency with additional hardware. This paper
introduces an alternative paradigm for LfD called Diagrammatic Teaching.
Diagrammatic Teaching aims to teach robots novel skills by prompting the user
to sketch out demonstration trajectories on 2D images of the scene; these sketches are then synthesised into a generative model of motion trajectories in 3D task space.
Additionally, we present the Ray-tracing Probabilistic Trajectory Learning
(RPTL) framework for Diagrammatic Teaching. RPTL extracts time-varying
probability densities from the 2D sketches, applies ray-tracing to find
corresponding regions in 3D Cartesian space, and fits a probabilistic model of
motion trajectories to these regions. New motion trajectories, which mimic
those sketched by the user, can then be generated from the probabilistic model.
We empirically validate our framework both in simulation and on real robots,
which include a fixed-base manipulator and a quadruped-mounted manipulator.
|
http://arxiv.org/abs/2309.03835v3
|
The end of Dennard scaling and the slowdown of Moore's law led to a shift in
technology trends toward parallel architectures, particularly in HPC systems.
To continue providing performance benefits, HPC should embrace Approximate
Computing (AC), which trades application quality loss for improved performance.
However, existing AC techniques have not been extensively applied and evaluated
in state-of-the-art hardware architectures such as GPUs, the primary execution
vehicle for HPC applications today.
This paper presents HPAC-Offload, a pragma-based programming model that
extends OpenMP offload applications to support AC techniques, allowing portable
approximations across different GPU architectures. We conduct a comprehensive
performance analysis of HPAC-Offload across GPU-accelerated HPC applications,
revealing that AC techniques can significantly accelerate HPC applications
(1.64x for LULESH on AMD, 1.57x on NVIDIA) with minimal quality loss (0.1%). Our
analysis offers deep insights into the performance of GPU-based AC that guide
the future development of AC algorithms and systems for these architectures.
|
http://arxiv.org/abs/2308.16877v1
|
The most general tree-level boundary correlation functions of quantum fields
in inflationary spacetime involve multiple exchanges of massive states in the
bulk, which are technically difficult to compute due to the multi-layer nested
time integrals in the Schwinger-Keldysh formalism. On the other hand,
correlators with multiple massive exchanges are well motivated in cosmological
collider physics, with the original quasi-single-field inflation model as a
notable example. In this work, with the partial Mellin-Barnes representation,
we derive a simple rule, called family-tree decomposition, for directly writing
down analytical answers for arbitrary nested time integrals in terms of
multi-variable hypergeometric series. We present the derivation of this rule
together with many explicit examples. This result allows us to obtain
analytical expressions for general tree-level inflation correlators with
multiple massive exchanges. As an example, we present the full analytical
results for a range of tree correlators with two massive exchanges.
|
http://arxiv.org/abs/2309.10849v2
|
We develop a framework that allows one to describe the birational geometry of
Calabi-Yau pairs $(X,D)$. After establishing some general results for
Calabi-Yau pairs $(X,D)$ with mild singularities, we focus on the special case
when $X=\mathbb{P}^3$ and $D\subset \mathbb{P}^3$ is a quartic surface. We
investigate how the appearance of increasingly worse singularities on $D$
enriches the birational geometry of the pair $(\mathbb{P}^3, D)$.
|
http://arxiv.org/abs/2306.00207v2
|
Recent models for the inner structure of active galactic nuclei (AGN) aim at
connecting the outer region of the accretion disk with the broad-line region
and dusty torus through a radiatively accelerated, dusty outflow. Such an
outflow not only requires the outer disk to be dusty and so predicts disk sizes
beyond the self-gravity limit but also requires the presence of nuclear dust with
favourable properties. Here we investigate a large sample of type 1 AGN with
near-infrared (near-IR) cross-dispersed spectroscopy with the aim to constrain
the astrochemistry, location and geometry of the nuclear hot dust region.
Assuming thermal equilibrium for optically thin dust, we derive the
luminosity-based dust radius for different grain properties using our
measurement of the temperature. We combine our results with independent dust
radius measurements from reverberation mapping and interferometry and show that
large dust grains that can provide the necessary opacity for the outflow are
ubiquitous in AGN. Using our estimates of the dust covering factor, we
investigate the dust geometry using the effects of the accretion disk
anisotropy. A flared disk-like structure for the hot dust is favoured. Finally,
we discuss the implication of our results for the dust radius-luminosity plane.
|
http://arxiv.org/abs/2309.15931v1
|
We explore the collapsar scenario for long gamma-ray bursts by performing
axisymmetric neutrino-radiation magnetohydrodynamics simulations in full
general relativity for the first time. In this paper, we pay particular
attention to the outflow energy and the evolution of the black-hole spin. We
show that for a strong magnetic field with an aligned field configuration
initially given, a jet is launched by magnetohydrodynamical effects before the
formation of a disk and a torus, and after the jet launch, the matter accretion
onto the black hole is halted by the strong magnetic pressure, leading to the
spin-down of the black hole due to the Blandford-Znajek mechanism. The
spin-down timescale depends strongly on the magnetic-field strength initially
given because the magnetic-field strength on the black-hole horizon, which is
determined by the mass infall rate at the jet launch, depends strongly on the
initial condition, although the total jet-outflow energy appears to be huge
$>10^{53}$ erg depending only weakly on the initial field strength and
configuration. For the models in which the magnetic-field configuration is not
suitable for quick jet launch, a torus is formed and after a long-term
magnetic-field amplification, a jet can be launched. For this case, the matter
accretion onto the black hole continues even after the jet launch and
black-hole spin-down is not found. We also find that the jet launch is often
accompanied by a powerful explosion of the entire star, with an explosion energy of order $10^{52}$ erg driven by magnetohydrodynamical effects. We discuss the issue of the overproduced energy for the early-jet-launch models.
|
http://arxiv.org/abs/2309.12086v1
|
In this talk we review jet production in a large variety of collision systems
using the JETSCAPE event generator and Hybrid Hadronization. Hybrid
Hadronization combines quark recombination, applicable when distances between
partons in phase space are small, and string fragmentation appropriate for
dilute parton systems. It can therefore smoothly describe the transition from
very dilute parton systems like $e^++e^-$ to full $A+A$ collisions. We test
this picture by using JETSCAPE to generate jets in various systems. Comparison
to experimental data in $e^++e^-$ and $p+p$ collisions allows for a precise
tuning of vacuum baseline parameters in JETSCAPE and Hybrid Hadronization.
Proceeding to systems with jets embedded in a medium, we study in-medium
hadronization for jet showers. We quantify the effects of an ambient medium,
focusing in particular on the dependence on the collective flow and size of the
medium. Our results clarify the effects we expect from in-medium hadronization
of jets on observables like fragmentation functions, hadron chemistry and jet
shape.
|
http://arxiv.org/abs/2310.20631v3
|
We study geometric inequalities for the circumradius and diameter with
respect to general gauges, partly also involving the inradius and the Minkowski
asymmetry. There are a number of options for defining the diameter of a convex body that no longer coincide when we consider non-symmetric gauges. These definitions correspond to different symmetrizations of the gauge, i.e., means of the gauge
$C$ and its origin reflection $-C$.
|
http://arxiv.org/abs/2309.12092v2
|
While the majority of existing pre-trained models from code learn source code
features such as code tokens and abstract syntax trees, there are some other
works that focus on learning from compiler intermediate representations (IRs).
Existing IR-based models typically utilize IR features such as instructions,
control and data flow graphs (CDFGs), call graphs, etc. However, these methods
confuse variable nodes and instruction nodes in a CDFG and fail to distinguish
different types of flows, and the neural networks they use fail to capture
long-distance dependencies and have over-smoothing and over-squashing problems.
To address these weaknesses, we propose FAIR, a Flow type-Aware pre-trained
model for IR that involves employing (1) a novel input representation of IR
programs; (2) a Graph Transformer to address the over-smoothing, over-squashing, and long-range dependency problems; and (3) five pre-training tasks that we
specifically propose to enable FAIR to learn the semantics of IR tokens, flow
type information, and the overall representation of IR. Experimental results
show that FAIR can achieve state-of-the-art results on four code-related
downstream tasks.
|
http://arxiv.org/abs/2309.04828v1
|
One of the intrinsic drift velocity limits of the quantum Hall effect is the
collective magneto-exciton (ME) instability. It has been demonstrated in
bilayer graphene (BLG) using noise measurements. We reproduce this experiment
in monolayer graphene (MLG), and show that the same mechanism carries a direct
relativistic signature on the breakdown velocity. Based on theoretical
calculations of MLG- and BLG-ME spectra, we show that Doppler-induced
instabilities manifest for a ME phase velocity determined by a universal value
of the ME conductivity, set by the Hall conductance.
|
http://arxiv.org/abs/2302.14791v2
|
The primary bottleneck towards obtaining good recognition performance in IR
images is the lack of sufficient labeled training data, owing to the cost of
acquiring such data. Realizing that object detection methods for the RGB
modality are quite robust (at least for some commonplace classes, like person,
car, etc.), thanks to the giant training sets that exist, in this work we seek
to leverage cues from the RGB modality to scale object detectors to the IR
modality, while preserving model performance in the RGB modality. At the core of our method is a novel tensor decomposition method called TensorFact, which
splits the convolution kernels of a layer of a Convolutional Neural Network
(CNN) into low-rank factor matrices, with fewer parameters than the original
CNN. We first pretrain these factor matrices on the RGB modality, for which
plenty of training data are assumed to exist, and then augment only a few trainable parameters for training on the IR modality to avoid over-fitting, while encouraging them to capture cues complementary to those learned on the RGB modality. We validate our approach empirically by first assessing how
well our TensorFact decomposed network performs at the task of detecting
objects in RGB images vis-a-vis the original network and then look at how well
it adapts to IR images of the FLIR ADAS v1 dataset. For the latter, we train
models under scenarios that pose challenges stemming from data paucity. From
the experiments, we observe that: (i) TensorFact shows performance gains on RGB
images; (ii) further, this pre-trained model, when fine-tuned, outperforms a
standard state-of-the-art object detector on the FLIR ADAS v1 dataset by about
4% in terms of mAP 50 score.
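The sketch below illustrates the general idea of splitting a convolution kernel into low-rank factor matrices with fewer parameters, here via a truncated SVD of the flattened kernel; the actual TensorFact decomposition may differ in detail.

```python
# Minimal sketch of the general idea behind low-rank factorization of a
# convolution layer: flatten the 4D kernel to a matrix, take a truncated SVD,
# and keep two small factor matrices with fewer parameters than the original.
# The actual TensorFact decomposition in the paper may differ in detail.
import torch

def factorize_conv_weight(weight, rank):
    """weight: (out_ch, in_ch, kh, kw) -> A (out_ch, rank), B (rank, in_ch*kh*kw)."""
    out_ch = weight.shape[0]
    mat = weight.reshape(out_ch, -1)                 # (out_ch, in_ch*kh*kw)
    U, S, Vh = torch.linalg.svd(mat, full_matrices=False)
    A = U[:, :rank] * S[:rank]                       # (out_ch, rank)
    B = Vh[:rank]                                    # (rank, in_ch*kh*kw)
    return A, B

w = torch.randn(64, 32, 3, 3)
A, B = factorize_conv_weight(w, rank=16)
approx = (A @ B).reshape_as(w)

orig_params = w.numel()
fact_params = A.numel() + B.numel()
rel_err = torch.norm(w - approx) / torch.norm(w)
print(f"params: {orig_params} -> {fact_params}, relative error {rel_err:.3f}")
```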
|
http://arxiv.org/abs/2309.16592v1
|
We demonstrate an approach to two-dimensional electronic spectroscopy (2DES)
that combines the benefits of shot-to-shot detection at high-repetition rates
with the simplicity of a broadband white light continuum input and conventional
optical elements to generate phase-locked pump pulse pairs. We demonstrate this
through mutual synchronization between the laser repetition rate,
acousto-optical deflector (AOD), pump delay stage and the CCD line camera,
which allows rapid scanning of pump optical delay synchronously with the laser
repetition rate while the delay stage is moved at a constant velocity. The
resulting shot-to-shot detection scheme is repetition rate scalable and only
limited by the CCD line rate and the maximum stage velocity. Using this
approach, we demonstrate measurement of an averaged 2DES absorptive spectrum with as little as 1.2 seconds of continuous sample exposure per 2D spectrum. We
achieve a signal-to-noise ratio (SNR) of 6.8 for optical densities down to 0.05
with 11.6 seconds of averaging at 100 kHz laser repetition rate. Combining
rapid scanning of mechanical delay lines with shot-to-shot detection as
demonstrated here provides a viable alternative to acousto-optic pulse shaping
(AOPS) approaches that is repetition-rate scalable, has comparable throughput
and sensitivity, and minimizes sample exposure per 2D spectrum with promising
micro-spectroscopy applications.
|
http://arxiv.org/abs/2310.00293v1
|
Sparseness and robustness are two important properties for many machine
learning scenarios. In the present study, regarding the maximum correntropy
criterion (MCC) based robust regression algorithm, we investigate integrating the MCC method with the automatic relevance determination (ARD) technique in a
Bayesian framework, so that MCC-based robust regression could be implemented
with adaptive sparseness. To be specific, we use an inherent noise assumption
from the MCC to derive an explicit likelihood function, and realize the maximum
a posteriori (MAP) estimation with the ARD prior by variational Bayesian
inference. Compared to the existing robust and sparse L1-regularized MCC
regression, the proposed MCC-ARD regression can eradicate the troublesome
tuning for the regularization hyper-parameter which controls the regularization
strength. Further, MCC-ARD achieves better prediction performance and feature selection capability than L1-regularized MCC, as demonstrated by a noisy and
high-dimensional simulation study.
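The sketch below shows the maximum correntropy criterion itself, maximized over linear-regression weights by gradient ascent, which down-weights outliers; the paper's full MCC-ARD scheme (variational Bayesian inference with an ARD prior for sparsity) is more involved and is not reproduced here. All data and hyper-parameters are illustrative.

```python
# Minimal sketch of the maximum correntropy criterion (MCC) itself: a Gaussian
# kernel of the residuals maximized over linear-regression weights, which
# down-weights outliers.  The paper's MCC-ARD (variational Bayes with an ARD
# prior for sparsity) is more involved and is not reproduced here.
import torch

torch.manual_seed(0)
n, d, sigma = 200, 5, 1.0
X = torch.randn(n, d)
w_true = torch.tensor([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ w_true + 0.1 * torch.randn(n)
y[:10] += 8.0                                # a few gross outliers

w = torch.zeros(d, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    resid = y - X @ w
    correntropy = torch.mean(torch.exp(-resid**2 / (2 * sigma**2)))
    (-correntropy).backward()                # maximize correntropy
    opt.step()
print(w.detach())                            # should end up close to w_true despite outliers
```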
|
http://arxiv.org/abs/2302.00082v1
|
The purpose of this paper is to present the structure of the linear codes
over a finite field with q elements that have a permutation automorphism of
order m. These codes can be considered as generalized quasi-cyclic codes.
Quasi-cyclic codes and almost quasi-cyclic codes are discussed in detail,
presenting necessary and sufficient conditions under which linear codes with such
an automorphism are self-orthogonal, self-dual, or linear complementary dual.
|
http://arxiv.org/abs/2309.05288v1
|
Explainable Artificial Intelligence (XAI) models have recently attracted a
great deal of interest from a variety of application sectors. Despite
significant developments in this area, there are still no standardized methods
or approaches for understanding AI model outputs. A systematic and cohesive
framework is also increasingly necessary to incorporate new techniques like
discriminative and generative models to close the gap. This paper contributes
to the discourse on XAI by presenting an empirical evaluation based on a novel
framework: Sampling - Variational Auto Encoder (VAE) - Ensemble Anomaly
Detection (SVEAD). It is a hybrid architecture in which a VAE combined with ensemble stacking and SHapley Additive exPlanations is used for imbalanced classification. The findings reveal that combining ensemble stacking, VAE, and SHAP can not only lead to better model performance but also provide an easily
explainable framework. This work has used SHAP combined with Permutation
Importance and Individual Conditional Expectations to provide powerful interpretability of the model. The finding has important implications in the
real world, where the need for XAI is paramount to boost confidence in AI
applications.
|
http://arxiv.org/abs/2309.14385v1
|
Consider a stationary Poisson process of horospheres in a $d$-dimensional
hyperbolic space. The focus of this note is the total surface area these
random horospheres induce in a sequence of balls of growing radius $R$. The
main result is a quantitative, non-standard central limit theorem for these
random variables as the radius $R$ of the balls and the space dimension $d$
tend to infinity simultaneously.
|
http://arxiv.org/abs/2303.17827v2
|
Data normalization is an essential task when modeling a classification
system. When dealing with data streams, data normalization becomes especially
challenging since we may not know in advance the properties of the features,
such as their minimum/maximum values, and these properties may change over
time. We compare the accuracies generated by eight well-known distance
functions in data streams without normalization, with normalization based on the statistics of the first batch of data received, and with normalization based on the previous batch received. We argue that experimental protocols for streams that consider
the full stream as normalized are unrealistic and can lead to biased and poor
results. Our results indicate that using the original data stream without
applying normalization, and the Canberra distance, can be a good combination
when no information about the data stream is known beforehand.
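For reference, the Canberra distance used in the comparison is simple to state and compute; the sketch below applies it to raw (un-normalized) feature vectors, with purely illustrative data.

```python
# Minimal sketch of the Canberra distance used in the comparison, applied to
# raw (un-normalized) feature vectors as the paper recommends; the data below
# is purely illustrative.
import numpy as np

def canberra(x, y):
    """Canberra distance: sum_i |x_i - y_i| / (|x_i| + |y_i|), skipping 0/0 terms."""
    num = np.abs(x - y)
    den = np.abs(x) + np.abs(y)
    mask = den > 0
    return np.sum(num[mask] / den[mask])

query = np.array([5.0, 120.0, 0.3])
batch = np.array([[4.8, 110.0, 0.2],
                  [50.0, 10.0, 0.9]])
dists = [canberra(query, row) for row in batch]
print(dists)   # the nearer instance has the smaller Canberra distance
```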
|
http://arxiv.org/abs/2307.00106v2
|
Transformer-based approaches have advanced the recent development of multi-camera 3D detection in both academia and industry. In a vanilla transformer architecture,
queries are randomly initialised and optimised for the whole dataset, without
considering the differences among input frames. In this work, we propose to
leverage the predictions from an image backbone, which is often highly
optimised for 2D tasks, as priors to the transformer part of a 3D detection
network. The method works by (1) augmenting image feature maps with 2D priors, (2) sampling query locations via ray-casting along 2D box centroids, and (3) initialising query features with object-level image features. Experimental results show that 2D priors not only help the model converge
faster, but also largely improve the baseline approach by up to 12% in terms of
average precision.
|
http://arxiv.org/abs/2301.13592v1
|
We give a proof of linear inviscid damping and vorticity depletion for
non-monotonic shear flows with one critical point in a bounded periodic
channel. In particular, we obtain quantitative depletion rates for the
vorticity function without any symmetry assumptions.
|
http://arxiv.org/abs/2301.00288v2
|
A brief review is given of the author's recent achievements in classifying singular points of Poynting vector patterns in electromagnetic fields of complex configuration. The deep connection between the topological structure of the force-line pattern and the law of energy conservation, the symmetry of the problem, and the dimension of the space has been unveiled.
|
http://arxiv.org/abs/2310.20619v1
|
Space-air-ground integrated networks (SAGINs), which have emerged as an
expansion of terrestrial networks, provide flexible access, ubiquitous
coverage, high-capacity backhaul, and emergency/disaster recovery for mobile
users (MUs). While the massive benefits brought by SAGIN may improve the
quality of service, unauthorized access to SAGIN entities is potentially
dangerous. At present, conventional crypto-based authentication is facing
challenges, such as the inability to provide continuous and transparent
protection for MUs. In this article, we propose an AI-oriented two-phase
multi-factor authentication scheme (ATMAS) by introducing intelligence to
authentication. The satellite and network control center collaborate on
continuous authentication, while unique spatial-temporal features, including
service features and geographic features, are utilized to enhance the system
security. Our further security analysis and performance evaluations show that
ATMAS has proper security characteristics which can meet various security
requirements. Moreover, we shed light on lightweight and efficient
authentication mechanism design through a proper combination of
spatial-temporal factors.
|
http://arxiv.org/abs/2303.17833v1
|
We develop a framework for characterizing quantum temporal correlations in a
general temporal scenario, in which an initial quantum state is measured, sent
through a quantum channel, and finally measured again. This framework does not
make any assumptions about the system or the measurements; that is, it is
device-independent. It is versatile enough, however, to allow for the addition
of further constraints in a semi-device-independent setting. Our framework
serves as a natural tool for quantum certification in a temporal scenario when
the quantum devices involved are uncharacterized or partially characterized. It
can hence also be used for characterizing quantum temporal correlations when
one assumes an additional constraint of no-signalling in time, upper bounds on the involved systems' dimensions, rank constraints -- for which we
prove genuine quantum separations over local hidden variable models -- or
further linear constraints. We present a number of applications, including
bounding the maximal violation of temporal Bell inequalities, quantifying
temporal steerability, and bounding the maximum success probability in quantum random access codes.
|
http://arxiv.org/abs/2305.19548v3
|
We present DictaBERT, a new state-of-the-art pre-trained BERT model for
modern Hebrew, outperforming existing models on most benchmarks. Additionally,
we release three fine-tuned versions of the model, designed to perform three
specific foundational tasks in the analysis of Hebrew texts: prefix
segmentation, morphological tagging and question answering. These fine-tuned
models allow any developer to perform prefix segmentation, morphological
tagging and question answering of a Hebrew input with a single call to a
HuggingFace model, without the need to integrate any additional libraries or
code. In this paper we describe the details of the training as well as the
results on the different benchmarks. We release the models to the community,
along with sample code demonstrating their use. We release these models as part
of our goal to help further research and development in Hebrew NLP.
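A minimal usage sketch with the HuggingFace transformers library is shown below; the model identifier is an assumption for illustration, and the fine-tuned segmentation, tagging, and question-answering variants would be loaded analogously under their released names.

```python
# Minimal usage sketch via the HuggingFace transformers library.  The model
# identifier below is an assumption for illustration; consult the released
# models for the exact repository names of the base and fine-tuned variants.
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "dicta-il/dictabert"    # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

text = "..."                          # a Hebrew input sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)             # masked-LM logits for the base model
print(outputs.logits.shape)
```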
|
http://arxiv.org/abs/2308.16687v2
|
The surface-directed spinodal decomposition of a binary liquid confined inside a cylindrical pore is investigated using molecular dynamics simulations.
One component of the liquid wets the pore surface while the other remains
neutral. A variety of wetting conditions are studied. For the partial wetting
case, after an initial period of phase separation, the domains organize
themselves into plug-like structures and the system enters a metastable
state. Therefore, a complete phase separation is never achieved. Analysis of
domain growth and the structure factor suggests a one-dimensional growth dynamics for the partial wetting case. As the wetting interaction is increased
beyond a critical value, a transition from the plug-like to tube-like domain
formation is observed which corresponds to the full wetting morphology. Thus, a
complete phase separation is achieved as the wetting species moves towards the
pore surface and forms layers enclosing the non-wetting species residing around
the axis of the cylinder. The coarsening dynamics of both the species are
studied separately. The wetting species is found to follow a two-dimensional
domain growth dynamics with a growth exponent 1/2 in the viscous hydrodynamic
regime. This is substantiated by the Porod tail of the structure factor. On the other hand, the domain grows linearly with time for the non-wetting species. This suggests that the non-wetting species behaves akin to a
three-dimensional bulk system. An appropriate reasoning is presented to justify
the given observations.
|
http://arxiv.org/abs/2309.09511v1
|
High-level synthesis (HLS) enhances digital hardware design productivity
through a high abstraction level. Even though the HLS abstraction prevents
fine-grained manual register-transfer level (RTL) optimizations, it also
enables automatable optimizations that would be unfeasible or hard to automate
at RTL. Specifically, we propose a task-level multi-pumping methodology to
reduce resource utilization, particularly digital signal processors (DSPs),
while preserving the throughput of HLS kernels modeled as dataflow graphs
(DFGs) targeting field-programmable gate arrays. The methodology exploits the
HLS resource sharing to automatically insert the logic for reusing the same
functional unit for different operations. In addition, it relies on multi-clock DFGs to run the multi-pumped tasks at higher frequencies. The methodology
scales the pipeline initiation interval (II) and the clock frequency
constraints of resource-intensive tasks by a multi-pumping factor (M). The
looser II allows sharing the same resource among M different operations, while
the tighter clock frequency preserves the throughput. We verified that our
methodology opens a new Pareto front in the throughput and resource space by
applying it to open-source HLS designs using state-of-the-art commercial HLS
and implementation tools by Xilinx. The multi-pumped designs require up to 40%
fewer DSP resources at the same throughput as the original designs optimized
for performance (i.e., running at the maximum clock frequency) and achieve up
to 50% better throughput using the same DSPs as the original designs optimized
for resources with a single clock.
|
http://arxiv.org/abs/2310.00330v1
|
In this work, a null geometric approach to the Brown-York quasilocal
formalism is used to derive an integral law that describes the rate of change
of mass and/or radiative energy escaping through a dynamical horizon of a
non-stationary spacetime. The result thus obtained shows - in accordance with
previous results from the theory of dynamical horizons of Ashtekar et al. -
that the rate at which energy is transferred from the bulk to the boundary of
spacetime through the dynamical horizon becomes zero at equilibrium, where said
horizon becomes non-expanding and null. Moreover, it reveals previously
unrecognized quasilocal corrections to the Bondi mass-loss formula arising from
the combined variation of bulk and boundary components of the Brown-York
Hamiltonian, given in terms of a bulk-to-boundary inflow term akin to an
expression derived in an earlier paper by the author [huber2022remark]. For
clarity, this is discussed with reference to the Generalized Vaidya family of
spacetimes, for which derived integral expressions take a particularly simple
form.
|
http://arxiv.org/abs/2309.15138v1
|
Approximating differential operators defined on two-dimensional surfaces is
an important problem that arises in many areas of science and engineering. Over
the past ten years, localized meshfree methods based on generalized moving
least squares (GMLS) and radial basis function finite differences (RBF-FD) have
been shown to be effective for this task as they can give high orders of
accuracy at low computational cost, and they can be applied to surfaces defined
only by point clouds. However, there have yet to be any studies that perform a
direct comparison of these methods for approximating surface differential
operators (SDOs). The first purpose of this work is to fill that gap. For this
comparison, we focus on an RBF-FD method based on polyharmonic spline kernels
and polynomials (PHS+Poly) since they are most closely related to the GMLS
method. Additionally, we use a relatively new technique for approximating SDOs
with RBF-FD called the tangent plane method since it is simpler than previous
techniques and natural to use with PHS+Poly RBF-FD. The second purpose of this
work is to relate the tangent plane formulation of SDOs to the local coordinate
formulation used in GMLS and to show that they are equivalent when the tangent
space to the surface is known exactly. The final purpose is to use ideas from
the GMLS SDO formulation to derive a new RBF-FD method for approximating the
tangent space for a point cloud surface when it is unknown. For the numerical
comparisons of the methods, we examine their convergence rates for
approximating the surface gradient, divergence, and Laplacian as the point
clouds are refined for various parameter choices. We also compare their
efficiency in terms of accuracy per computational cost, both when including and
excluding setup costs.
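The sketch below assembles one PHS+Poly RBF-FD stencil on the tangent plane, producing weights that approximate the surface Laplacian at a point under the assumption that the tangent plane is known exactly; the kernel, polynomial degree, and stencil are illustrative choices rather than the exact setup of the numerical experiments.

```python
# Minimal sketch of one PHS+Poly RBF-FD stencil on the tangent plane: weights
# approximating the surface Laplacian at a point, assuming the tangent plane
# is known exactly.  The kernel (r^3), polynomial degree (2) and stencil are
# illustrative choices, not the exact setup used in the paper's experiments.
import numpy as np

def rbf_fd_laplacian_weights(local_xy):
    """local_xy: (n, 2) neighbor coords on the tangent plane, center at row 0."""
    n = local_xy.shape[0]
    r = np.linalg.norm(local_xy[:, None, :] - local_xy[None, :, :], axis=2)
    A = r**3                                          # PHS kernel phi(r) = r^3
    x, y = local_xy[:, 0], local_xy[:, 1]
    P = np.column_stack([np.ones(n), x, y, x**2, x*y, y**2])   # degree-2 poly
    # Right-hand side: Laplacian of each basis function evaluated at the center.
    center = local_xy[0]
    rc = np.linalg.norm(local_xy - center, axis=1)
    rhs_rbf = 9.0 * rc                                # 2D Laplacian of r^3
    rhs_poly = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 2.0])
    # Saddle-point system [[A, P], [P^T, 0]] enforcing polynomial reproduction.
    m = P.shape[1]
    M = np.block([[A, P], [P.T, np.zeros((m, m))]])
    w = np.linalg.solve(M, np.concatenate([rhs_rbf, rhs_poly]))
    return w[:n]                                      # weights for the n nodes

rng = np.random.default_rng(1)
pts = np.vstack([[0.0, 0.0], 0.1 * rng.standard_normal((12, 2))])
w = rbf_fd_laplacian_weights(pts)
# Sanity check: applying the weights to f(x, y) = x^2 + y^2 should give ~4.
print(np.dot(w, pts[:, 0]**2 + pts[:, 1]**2))
```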
|
http://arxiv.org/abs/2309.04035v1
|
We introduce the UT Campus Object Dataset (CODa), a mobile robot egocentric
perception dataset collected on the University of Texas Austin Campus. Our
dataset contains 8.5 hours of multimodal sensor data: synchronized 3D point
clouds and stereo RGB video from a 128-channel 3D LiDAR and two 1.25MP RGB
cameras at 10 fps; RGB-D videos from an additional 0.5MP sensor at 7 fps, and a
9-DOF IMU sensor at 40 Hz. We provide 58 minutes of ground-truth annotations
containing 1.3 million 3D bounding boxes with instance IDs for 53 semantic
classes, 5000 frames of 3D semantic annotations for urban terrain, and
pseudo-ground truth localization. We repeatedly traverse identical geographic
locations for a wide range of indoor and outdoor areas, weather conditions, and
times of the day. Using CODa, we empirically demonstrate that: 1) 3D object
detection performance in urban settings is significantly higher when trained
using CODa compared to existing datasets even when employing state-of-the-art
domain adaptation approaches, 2) sensor-specific fine-tuning improves 3D object
detection accuracy and 3) pretraining on CODa improves cross-dataset 3D object
detection performance in urban settings compared to pretraining on AV datasets.
Using our dataset and annotations, we release benchmarks for 3D object
detection and 3D semantic segmentation using established metrics. In the
future, the CODa benchmark will include additional tasks like unsupervised
object discovery and re-identification. We publicly release CODa on the Texas
Data Repository, pre-trained models, dataset development package, and
interactive dataset viewer on our website at https://amrl.cs.utexas.edu/coda.
We expect CODa to be a valuable dataset for research in egocentric 3D
perception and planning for autonomous navigation in urban environments.
|
http://arxiv.org/abs/2309.13549v2
|
We present some results about the irreducible representations appearing in
the exterior algebra $\Lambda \mathfrak{g}$, where $ \mathfrak{g}$ is a simple
Lie algebra over $\mathbb{C}$. For Lie algebras of type $B$, $C$ or $D$ we
prove that certain irreducible representations, associated to weights
characterized in a combinatorial way, appear as irreducible components of
$\Lambda \mathfrak{g}$. Moreover, we propose an analogue of a conjecture of
Kostant, about irreducibles appearing in the exterior algebra of the little
adjoint representation. Finally, we give some closed expressions, in type $B$,
$C$ and $D$, for generalized exponents of small representations that are
fundamental representations and we propose a generalization of some results of
De Concini, M\"oseneder Frajria, Procesi and Papi about the module of special
covariants of adjoint and little adjoint type.
|
http://arxiv.org/abs/2309.04753v1
|
The Quantum Materials group at Indian Institute of Technology Patna is
working on a range of topics relating to nanoelectronics, spintronics, clean energy, and memory design. The PI has past experience of working extensively with superconducting systems such as cuprates [1, 2], ruthenates [3], pnictides [4, 5], and thin-film heterostructures [6, 7], as well as magnetic recording media [8, 9]. In this report, we summarise the ongoing work in our group. We have explored a range of functional materials, including two-dimensional materials, oxides, topological insulators, and organic materials, using a combination of experimental and computational tools. Some of the useful
highlights are as follows: (a) tuning and control of the magnetic and electronic state of 2D magnetic materials with rapid enhancement in the Curie temperature, (b) design of single-electron-transistor-based nanosensors for the detection of biological species with single-molecule resolution, and (c) observation of non-volatile memory behaviour in hybrid structures made of perovskite materials and 2D hybrids. The results offer useful insight into the design of nanoelectronic architectures for diverse applications.
|
http://arxiv.org/abs/2310.00456v2
|
Google Translate has been prominent for language translation; however,
limited work has been done in evaluating the quality of translation when
compared to human experts. Sanskrit is one of the oldest written languages in the world. In 2022, the Sanskrit language was added to the Google Translate engine.
Sanskrit is known as the mother of languages such as Hindi and an ancient
source of the Indo-European group of languages. Sanskrit is the original
language for sacred Hindu texts such as the Bhagavad Gita. In this study, we
present a framework that evaluates the Google Translate for Sanskrit using the
Bhagavad Gita. We first publish a translation of the Bhagavad Gita in Sanskrit
using Google Translate. Our framework then compares Google Translate version of
Bhagavad Gita with expert translations using sentiment and semantic analysis
via BERT-based language models. Our results indicate that in terms of sentiment
and semantic analysis, there is low level of similarity in selected verses of
Google Translate when compared to expert translations. In the qualitative
evaluation, we find that Google Translate is unsuitable for the translation of
certain Sanskrit words and phrases due to its poetic nature, contextual
significance, metaphor and imagery. The mistranslations are not surprising
since the Bhagavad Gita is known as a difficult text not only to translate, but
also to interpret since it relies on contextual, philosophical and historical
information. Our framework lays the foundation for the automatic evaluation of Google Translate for other languages.
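The sketch below shows the kind of embedding-based semantic comparison such a framework relies on, using cosine similarity between sentence embeddings of a machine translation and an expert translation; the embedding model and example sentences are assumptions, and this is not the paper's exact pipeline.

```python
# Minimal sketch of the semantic-similarity component of such an evaluation
# (not the paper's exact pipeline): cosine similarity between sentence
# embeddings of a machine translation and an expert translation.  The model
# name and example sentences are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")       # assumed embedding model
machine = "You have a right to perform your duty, but not to the fruits."
expert = "Your right is to the action alone, never to its fruits."
emb = model.encode([machine, expert], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()
print(f"semantic similarity: {score:.2f}")            # low scores flag divergence
```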
|
http://arxiv.org/abs/2303.07201v1
|
Recommender systems are widely used to provide personalized recommendations
to users. Recent research has shown that recommender systems may be subject to
different types of biases, such as popularity bias, leading to an uneven
distribution of recommendation exposure among producer groups. To mitigate
this, producer-centered fairness re-ranking (PFR) approaches have been proposed
to ensure equitable recommendation utility across groups. However, these
approaches overlook the harm they may cause to within-group individuals
associated with colder items, which are items with few or no interactions.
This study reproduces previous PFR approaches and shows that they
significantly harm colder items, leading to a fairness gap for these items in
both advantaged and disadvantaged groups. Surprisingly, the unfair base recommendation models provided greater exposure opportunities to these individual cold items, even though at the group level they appeared to be
unfair. To address this issue, the study proposes an amendment to the PFR
approach that regulates the number of colder items recommended by the system.
This modification achieves a balance between accuracy and producer fairness
while optimizing the selection of colder items within each group, thereby
preventing or reducing harm to within-group individuals and augmenting the
novelty of all recommended items. The proposed method is able to register an
increase in sub-group fairness (SGF) from 0.3104 to 0.3782, 0.6156, and 0.9442
while also improving group-level fairness (GF) (112% and 37% with respect to
base models and traditional PFR). Moreover, the proposed method achieves these
improvements with minimal or no reduction in accuracy (or sometimes even an increase). We evaluate the proposed method on various recommendation datasets
and demonstrate promising results independent of the underlying model or
datasets.
|
http://arxiv.org/abs/2309.09277v2
|
The Sparse Identification of Nonlinear Dynamics (SINDy) algorithm can be
applied to stochastic differential equations to estimate the drift and the
diffusion function using data from a realization of the SDE. The SINDy
algorithm requires sample data from each of these functions, which are typically estimated numerically from the state data. We analyze the performance of
the previously proposed estimates for the drift and diffusion function to give
bounds on the error for finite data. However, since this algorithm only
converges as both the sampling frequency and the length of trajectory go to
infinity, obtaining approximations within a certain tolerance may be
infeasible. To combat this, we develop estimates with higher orders of accuracy
for use in the SINDy framework. For a given sampling frequency, these estimates
give more accurate approximations of the drift and diffusion functions, making
SINDy a far more feasible system identification method.
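The sketch below shows the standard first-order drift and diffusion estimates that feed into SINDy for SDEs, demonstrated on a simulated Ornstein-Uhlenbeck process with a plain least-squares fit standing in for the sparse regression step; the higher-order estimators developed in the paper are not reproduced.

```python
# Minimal sketch of the standard first-order drift/diffusion estimates used as
# inputs to SINDy for SDEs, demonstrated on a simulated Ornstein-Uhlenbeck
# process; a plain least-squares fit stands in for the sparse regression step.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 1e-3, 100_000
theta, sigma = 2.0, 0.5                     # dX = -theta*X dt + sigma dW
x = np.empty(n)
x[0] = 1.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

dx = np.diff(x)
drift_samples = dx / dt                     # pointwise drift estimates
diff_samples = dx**2 / dt                   # pointwise (squared) diffusion estimates

# Library of candidate functions evaluated on the state: [1, x, x^2].
X = np.column_stack([np.ones(n - 1), x[:-1], x[:-1] ** 2])
drift_coef, *_ = np.linalg.lstsq(X, drift_samples, rcond=None)
diff_coef, *_ = np.linalg.lstsq(X, diff_samples, rcond=None)
print("drift coefficients [1, x, x^2]:", drift_coef.round(2))   # approx [0, -2, 0]
print("sigma^2 estimate:", diff_coef[0].round(2))                # approx 0.25
```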
|
http://arxiv.org/abs/2306.17814v2
|
We introduce an $\tilde{\mathcal{A}}$-invariant for quasi-ordinary parameterizations and use it to describe quasi-ordinary surfaces with one generalized characteristic exponent admitting countable moduli.
|
http://arxiv.org/abs/2309.09263v2
|
We introduce a new Hopf algebra that operates on pairs of finite interval
partitions and permutations of equal length. This algebra captures vincular
patterns, which involve specifying both the permutation patterns and the
consecutive occurrence of values. Our motivation stems from linear functionals
that encode the number of occurrences of these patterns, and we show that they
behave well with respect to the operations of this Hopf algebra.
|
http://arxiv.org/abs/2306.17800v1
|
For heliumlike uranium, the energies of the singly-excited $1sns$, $1snp$,
and $1snd$ states with $n\leq 4$ and the probabilities of the one-photon
$1s3d\to 1s2p$, $1s3p\to 1s2s$, $1s3p\to 1s2p$ and $1s4d\to 1s2p$ transitions
are evaluated. The calculations are performed within the Breit approximation
using the configuration-interaction method in the basis of the Dirac-Fock-Sturm
orbitals. The QED corrections to the energy levels are calculated employing the
model-QED-operator approach. The nuclear recoil, frequency-dependent
Breit-interaction, nuclear polarization, and nuclear deformation corrections
are taken into account as well.
|
http://arxiv.org/abs/2302.14626v1
|
Data augmentation (DA) is widely used to improve the generalization of neural
networks by enforcing the invariances and symmetries to pre-defined
transformations applied to input data. However, a fixed augmentation policy may
have different effects on each sample at different training stages, yet
existing approaches cannot adapt the policy to each sample and to the model
being trained. In this paper, we propose Model Adaptive Data Augmentation
(MADAug) that jointly trains an augmentation policy network to teach the model
when to learn what. Unlike previous work, MADAug selects augmentation operators
for each input image by a model-adaptive policy varying between training
stages, producing a data augmentation curriculum optimized for better
generalization. In MADAug, we train the policy through a bi-level optimization
scheme, which aims to minimize a validation-set loss of a model trained using
the policy-produced data augmentations. We conduct an extensive evaluation of
MADAug on multiple image classification tasks and network architectures with
thorough comparisons to existing DA approaches. MADAug outperforms or is on par
with other baselines and exhibits better fairness: it brings improvement to all
classes and more to the difficult ones. Moreover, the policy learned by MADAug
shows better performance when transferred to fine-grained datasets. In addition, the
auto-optimized policy in MADAug gradually introduces increasing perturbations
and naturally forms an easy-to-hard curriculum.
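As a rough illustration of a per-sample, training-stage-dependent augmentation policy, the sketch below scores augmentation operators with a small policy network and updates it from a validation-loss signal via a REINFORCE-style gradient. This is a deliberate simplification for illustration only: the paper uses a bi-level optimization scheme, and all architectures and operators here are placeholders.

```python
# Illustrative sketch only (not the authors' exact bi-level scheme).
import torch, torch.nn as nn, torch.nn.functional as F

ops = [lambda x: x,                               # identity
       lambda x: torch.flip(x, dims=[-1]),        # horizontal flip
       lambda x: x + 0.1 * torch.randn_like(x)]   # additive noise

policy = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, len(ops)))
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt_m = torch.optim.SGD(model.parameters(), lr=0.1)
opt_p = torch.optim.Adam(policy.parameters(), lr=1e-3)

def train_step(x, y, x_val, y_val):
    logits = policy(x)                            # per-sample operator scores
    dist = torch.distributions.Categorical(logits=logits)
    choice = dist.sample()
    x_aug = torch.stack([ops[c](xi) for c, xi in zip(choice.tolist(), x)])

    loss = F.cross_entropy(model(x_aug), y)       # inner step: train the model
    opt_m.zero_grad(); loss.backward(); opt_m.step()

    with torch.no_grad():                         # outer signal: validation loss
        reward = -F.cross_entropy(model(x_val), y_val)
    pg_loss = -(dist.log_prob(choice) * reward).mean()
    opt_p.zero_grad(); pg_loss.backward(); opt_p.step()

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
xv, yv = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
train_step(x, y, xv, yv)
```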
|
http://arxiv.org/abs/2309.04747v2
|
Despite the success of large language models (LLMs) in various natural
language processing (NLP) tasks, the stored knowledge in these models may
inevitably be incomplete, out-of-date, or incorrect. This motivates the need to
utilize external knowledge to assist LLMs. Unfortunately, current methods for
incorporating external knowledge often require additional training or
fine-tuning, which can be costly and may not be feasible for LLMs. To address
this issue, we propose a novel post-processing approach, rethinking with
retrieval (RR), which retrieves relevant external knowledge based on the
decomposed reasoning steps obtained from the chain-of-thought (CoT) prompting.
This lightweight approach does not require additional training or fine-tuning
and is not limited by the input length of LLMs. We evaluate the effectiveness
of RR through extensive experiments with GPT-3 on three complex reasoning
tasks: commonsense reasoning, temporal reasoning, and tabular reasoning. Our
results show that RR can produce more faithful explanations and improve the
performance of LLMs.
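A hypothetical, highly simplified sketch of the post-processing idea: each decomposed reasoning step is matched against an external corpus, and the candidate answer whose steps are best supported is selected. The word-overlap retriever and the scoring function are toy stand-ins, not the paper's implementation.

```python
# Toy question assumed: "In which state did Joe Biden attend university?"
corpus = [
    "Joe Biden was born in 1942 in Scranton, Pennsylvania.",
    "The University of Delaware is located in Newark, Delaware.",
    "Joe Biden attended the University of Delaware.",
]

def retrieve(step, k=1):
    """Return the k corpus sentences with the largest word overlap."""
    def overlap(s):
        return len(set(step.lower().split()) & set(s.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def support(step):
    """Crude faithfulness score: fraction of step words found in evidence."""
    evidence = " ".join(retrieve(step)).lower().split()
    words = step.lower().split()
    return sum(w in evidence for w in words) / len(words)

def rethink(candidates):
    """candidates: {answer: [reasoning steps]} -> best-supported answer."""
    return max(candidates, key=lambda a: sum(map(support, candidates[a])))

candidates = {
    "Delaware": ["Joe Biden attended the University of Delaware.",
                 "The University of Delaware is in Delaware."],
    "Scranton": ["Joe Biden was born in Scranton.",
                 "Therefore he went to university in Scranton."],
}
print(rethink(candidates))   # picks the answer backed by retrieved evidence
```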
|
http://arxiv.org/abs/2301.00303v1
|
The Scintillating Bubble Chamber (SBC) collaboration is constructing a 10~kg
liquid argon (LAr) bubble chamber at SNOLAB, called SBC-SNOLAB, whose main
objective is the detection of dark matter. One of the most novel aspects of
SBC-SNOLAB is the scintillation system, consisting of LAr doped with on the
order of 10~ppm Xe, 48 FBK VUV silicon photomultipliers (SiPMs), the SiPM
electronics, two quartz jars, and liquid CF$_4$ used as a hydraulic fluid and
an additional source of scintillation photons. In contrast with traditional
single- or dual-phase scintillation experiments, the collected LAr
scintillation light is used to veto signals that involve the detection of at
least a single photoelectron.
These proceedings will describe in detail the current SBC-SNOLAB scintillation
system which includes the unique design considerations for SBC-SNOLAB that
limit the light collection efficiency and the electronics.
|
http://arxiv.org/abs/2310.00442v2
|
In this paper, we show that the divisor given by pairs $[C,\theta]$, where $C$
is a curve of genus 4 with a vanishing thetanull and $\theta$ is an ineffective
theta characteristic, is a rational variety. It also follows from our
construction that the analogous divisor in the Prym moduli space is rational.
|
http://arxiv.org/abs/2309.05459v1
|
Magnetic fields can be generated in cosmic string wakes due to the Biermann
mechanism in the presence of neutrino inhomogeneities. As the cosmic string
moves through the plasma the small magnetic field is amplified by the
turbulence in the plasma. Relativistic charged particles which cross the
magnetized wake of a cosmic string will therefore emit synchrotron radiation.
The opening angle of the cosmic string is very small and so the wake appears
like a relativistic jet. Assuming a homogeneous magnetic field in the wake of
the string, we obtain the synchrotron emission from non-thermal relativistic
electrons in the wake of the string. The emitted radiation has a broad peak and
spans a wide range of frequencies. We show that the spectrum can be mapped to
some of the unknown sources in different ranges of the currently available
catalogues.
|
http://arxiv.org/abs/2309.12643v2
|
We present durable implementations for two well known universal primitives --
CAS (compare-and-swap), and its ABA-free counter-part LLSC (load-linked,
store-conditional). All our implementations are: writable, meaning they support
a Write() operation; have constant time complexity per operation; allow for
dynamic joining, meaning newly created processes (a.k.a. threads) of arbitrary
names can join a protocol and access our implementations; and have adaptive
space complexities, meaning the space usage scales with the number of processes
$n$ that actually use the objects, as opposed to previous protocols, which are
designed for a maximum number of processes $N$. Our durable Writable-CAS
implementation, DuraCAS, requires $O(m + n)$ space to support $m$ objects that
get accessed by $n$ processes, improving on the state-of-the-art $O(m + N^2)$.
By definition, LLSC objects must store "contexts" in addition to object values.
Our Writable-LLSC implementation, DuraLL, requires $O(m + n + C)$ space, where
$C$ is the number of "contexts" stored across all the objects. While LLSC has
an advantage over CAS due to being ABA-free, the object definition seems to
require additional space usage. To address this trade-off, we define an
External Context (EC) variant of LLSC. Our EC Writable-LLSC implementation is
ABA-free and has a space complexity of just $O(m + n)$.
To our knowledge, we are the first to present durable CAS algorithms that
allow for dynamic joining, and our algorithms are the first to exhibit adaptive
space complexities. To our knowledge, we are the first to implement any type of
durable LLSC objects.
|
http://arxiv.org/abs/2302.00135v1
|
The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
|
http://arxiv.org/abs/2303.17760v2
|
We present a study analyzing the voting behavior of contributors, or vested
users, in Decentralized Autonomous Organizations (DAOs). We evaluate their
involvement in decision-making processes, discovering that in at least 7.54% of
all DAOs, contributors, on average, held the necessary majority to control
governance decisions. Furthermore, contributors have singularly decided at
least one proposal in 20.41% of DAOs. Notably, contributors tend to be
centrally positioned within the DAO governance ecosystem, suggesting the
presence of inner power circles. Additionally, we observed a tendency for
shifts in governance token ownership shortly before governance polls take place
in 1202 (14.81%) of 8116 evaluated proposals. Our findings highlight the
central role of contributors across a spectrum of DAOs, including Decentralized
Finance protocols. Our research also offers important empirical insights
pertinent to ongoing regulatory activities aimed at increasing the transparency
of DAO governance frameworks.
|
http://arxiv.org/abs/2309.14232v2
|
In the manuscript, we study the efficiency of pair creation by means of the
centrifugal mechanism. The strong magnetic field and the effects of rotation,
which always take place in Kerr-type black holes, guarantee the frozen-in
condition, leading to the generation of an exponentially amplifying
electrostatic field. This field, when reaching the Schwinger threshold, leads
to efficient pair production. The process has been studied for a wide range of
AGN luminosities and black hole masses, and it was found that the mechanism is
very efficient, indicating that for AGNs where centrifugal effects are
significant, the annihilation lines in the MeV range will be very strong.
|
http://arxiv.org/abs/2309.04021v1
|
The trust game, derived from an economics experiment, has recently attracted
interest in the field of evolutionary dynamics. In a recent version of the
evolutionary trust game, players adopt one of three strategies: investor,
trustworthy trustee, or untrustworthy trustee. Trustworthy trustees enhance and
share the investment with the investor, whereas untrustworthy trustees retain
the full amount, betraying the investor. Following this setup, we investigate a
two-player trust game, which is analytically feasible under weak selection. We
explore the evolution of trust in structured populations, factoring in four
strategy updating rules: pairwise comparison (PC), birth-death (BD), imitation
(IM), and death-birth (DB). Comparing structured populations with well-mixed
populations, we arrive at two main conclusions. First, in the absence of
untrustworthy trustees, there is a saddle point between investors and
trustworthy trustees, with collaboration thriving best in well-mixed
populations. The collaboration diminishes sequentially from DB to IM to PC/BD
updating rules in structured populations. Second, an invasion of untrustworthy
trustees makes this saddle point unstable and leads to the extinction of
investors. The 3-strategy system stabilizes at an equilibrium line where the
trustworthy and untrustworthy trustees coexist. The stability span of
trustworthy trustees is maximally extended under the PC and BD updating rules
in structured populations, while it decreases in a sequence from IM to DB
updating rules, with the well-mixed population being the least favorable. This
research thus adds an analytical lens to the evolution of trust in structured
populations.
|
http://arxiv.org/abs/2309.06636v3
|
With the increasing availability of large scale datasets, computational power
and tools like automatic differentiation and expressive neural network
architectures, sequential data are now often treated in a data-driven way, with
a dynamical model trained from the observation data. While neural networks are
often seen as uninterpretable black-box architectures, they can still benefit
from physical priors on the data and from mathematical knowledge. In this
paper, we use a neural network architecture which leverages the long-known
Koopman operator theory to embed dynamical systems in latent spaces where their
dynamics can be described linearly, enabling a number of appealing features. We
introduce methods that make it possible to train such a model for long-term
continuous reconstruction, even in difficult contexts where the data comes as
irregularly sampled time series. The potential for self-supervised learning is
also demonstrated, as we show the promising use of trained dynamical models as
priors for variational data assimilation techniques, with applications to e.g.
time series interpolation and forecasting.
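A minimal sketch of the Koopman-style architecture the paragraph refers to: an encoder maps states to a latent space, a learned linear operator advances the latent state, and a decoder maps back. The MLP sizes, the discrete-time formulation, and the loss terms are illustrative assumptions, not the authors' exact model.

```python
# Sketch of linear latent dynamics z_{t+1} = K z_t with a learned embedding.
import torch, torch.nn as nn

class KoopmanAE(nn.Module):
    def __init__(self, state_dim=2, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                 nn.Linear(64, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # Koopman matrix

    def forward(self, x, steps):
        z = self.enc(x)
        preds = []
        for _ in range(steps):
            z = self.K(z)                  # linear evolution in latent space
            preds.append(self.dec(z))
        return torch.stack(preds, dim=1)   # (batch, steps, state_dim)

model = KoopmanAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x0 = torch.randn(16, 2)                    # initial states (placeholder data)
traj = torch.randn(16, 5, 2)               # observed continuation (placeholder)
loss = nn.functional.mse_loss(model(x0, steps=5), traj) \
     + nn.functional.mse_loss(model.dec(model.enc(x0)), x0)   # reconstruction
opt.zero_grad(); loss.backward(); opt.step()
```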
|
http://arxiv.org/abs/2309.05317v3
|
The analysis of 3D point clouds has diverse applications in robotics, vision
and graphics. Processing them presents specific challenges since they are
naturally sparse, can vary in spatial resolution and are typically unordered.
Graph-based networks to abstract features have emerged as a promising
alternative to convolutional neural networks for their analysis, but these can
be computationally heavy as well as memory inefficient. To address these
limitations we introduce a novel Multi-level Graph Convolution Neural (MLGCN)
model, which uses Graph Neural Networks (GNN) blocks to extract features from
3D point clouds at specific locality levels. Our approach employs precomputed
graph KNNs, where each KNN graph is shared between GCN blocks inside a GNN
block, making it both efficient and effective compared to present models. We
demonstrate the efficacy of our approach on point cloud based object
classification and part segmentation tasks on benchmark datasets, showing that
it produces comparable results to those of state-of-the-art models while
requiring up to a thousand times fewer floating-point operations (FLOPs) and
having significantly reduced storage requirements. Thus, our MLGCN model could
be particularly relevant to point-cloud-based 3D shape analysis in industrial
applications when computing resources are scarce.
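The efficiency argument above rests on computing each KNN graph once and reusing it across graph-convolution blocks. The following toy sketch (NumPy, mean-aggregation blocks; not the authors' architecture) shows that sharing pattern.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbours of every point (self excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def gcn_block(features, neigh, weight):
    # mean aggregation over precomputed neighbours, then a linear map + ReLU
    agg = features[neigh].mean(axis=1)
    return np.maximum(0.0, (features + agg) @ weight)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))
neigh = knn_indices(pts, k=16)                 # computed once per point cloud
h = pts
for w in [rng.normal(size=(3, 32)) * 0.1, rng.normal(size=(32, 32)) * 0.1]:
    h = gcn_block(h, neigh, w)                 # same graph reused by every block
print(h.shape)                                 # (1024, 32)
```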
|
http://arxiv.org/abs/2303.17748v1
|
Van der Waals (vdW) heterostructures composed of two-dimensional (2D)
transition metal dichalcogenides (TMD) and vdW magnetic materials offer an
intriguing platform to functionalize valley and excitonic properties in
non-magnetic TMDs. Here, we report magneto-photoluminescence (PL)
investigations of monolayer (ML) MoSe$_2$ on the layered A-type
antiferromagnetic (AFM) semiconductor CrSBr under different magnetic field
orientations. Our results reveal a clear influence of the CrSBr magnetic order
on the optical properties of MoSe$_2$, such as an anomalous linear-polarization
dependence, changes of the exciton/trion energies, a magnetic-field dependence
of the PL intensities, and a valley $g$-factor with signatures of an asymmetric
magnetic proximity interaction. Furthermore, first principles calculations
suggest that MoSe$_2$/CrSBr forms a broken-gap (type-III) band alignment,
facilitating charge transfer processes. The work establishes that
antiferromagnetic-nonmagnetic interfaces can be used to control the valley and
excitonic properties of TMDs, relevant for the development of opto-spintronics
devices.
|
http://arxiv.org/abs/2309.03766v1
|
In scenarios where a single player cannot control other players, cooperative
AI is a recent technology that takes advantage of deep learning to assess
whether cooperation might occur. One main difficulty of this approach is that
it requires a certain level of consensus on the protocol (actions and rules),
at least from a majority of players. In our work, we study simulations
performed with the cooperative AI tool proposed in the context of the AI for
Global Climate Cooperation (AI4GCC) competition. We ran simulations with and
without the AI4GCC default negotiation, including with regions configured
slightly differently in terms of labor and/or technology growth. These first
results show that AI4GCC offers a promising cooperative framework for
experimenting with global warming mitigation. We also propose future work to
strengthen this framework.
|
http://arxiv.org/abs/2303.17990v1
|
The CDF, ATLAS, and LHCb collaborations have released measurements of the W
boson mass $m_W$ at $\sqrt{s}=1.96$, $7$, and $13$ TeV, respectively. The
measured values show a declining tendency, namely that $m_W$ decreases as the
collider energy increases. If this declining tendency is confirmed, it might be
a signal of the metric field at high-energy colliders. In this paper, we
propose a model to account for such a tendency and explore its properties.
|
http://arxiv.org/abs/2309.08633v2
|
Microgels are cross-linked, colloidal polymer networks with great potential
for stimuli-responsive release in drug-delivery applications, as their size in
the nanometer range allows them to pass human cell boundaries. For applications
with specified requirements regarding size, producing tailored microgels in a
continuous flow reactor is advantageous because the microgel properties can be
controlled tightly. However, no fully-specified mechanistic models are
available for continuous microgel synthesis, as the physical properties of the
included components are only studied partly. To address this gap and accelerate
tailor-made microgel development, we propose a data-driven optimization in a
hardware-in-the-loop approach to efficiently synthesize microgels with defined
sizes. We optimize the synthesis regarding conflicting objectives (maximum
production efficiency, minimum energy consumption, and the desired microgel
radius) by applying Bayesian optimization via the solver ``Thompson sampling
efficient multi-objective optimization'' (TS-EMO). We validate the optimization
using the deterministic global solver ``McCormick-based Algorithm for
mixed-integer Nonlinear Global Optimization'' (MAiNGO) and verify three
computed Pareto optimal solutions via experiments. The proposed framework can
be applied to other desired microgel properties and reactor setups, and has the
potential to make development efficient by minimizing the number of experiments
and the modelling effort needed.
|
http://arxiv.org/abs/2308.16724v1
|
Identifying the causes of a model's unfairness is an important yet relatively
unexplored task. We look into this problem through the lens of training data -
the major source of unfairness. We ask the following questions: How would the
unfairness of a model change if its training samples (1) were collected from a
different (e.g. demographic) group, (2) were labeled differently, or (3) whose
features were modified? In other words, we quantify the influence of training
samples on unfairness by counterfactually changing samples based on predefined
concepts, i.e. data attributes such as features, labels, and sensitive
attributes. Our framework not only can help practitioners understand the
observed unfairness and mitigate it by repairing their training data, but also
leads to many other applications, e.g. detecting mislabeling, fixing imbalanced
representations, and detecting fairness-targeted poisoning attacks.
|
http://arxiv.org/abs/2306.17828v2
|
We review the progress in modelling the galaxy population in hydrodynamical
simulations of the Lambda-CDM cosmogony. State-of-the-art simulations now
broadly reproduce the observed spatial clustering of galaxies, the
distributions of key characteristics such as mass, size and star formation
rate, and scaling relations connecting diverse properties to mass. Such
improvements engender confidence in the insight drawn from simulations. Many
important outcomes, however, particularly the properties of circumgalactic gas,
are sensitive to the details of the subgrid models used to approximate the
macroscopic effects of unresolved physics, such as feedback processes. We
compare the outcomes of leading simulation suites with observations and with
each other, to identify the enduring successes they have cultivated and the
outstanding challenges to be tackled with the next generation of models. Our
key conclusions are: 1) Realistic galaxies can be reproduced by calibrating the
ill-constrained parameters of subgrid feedback models. Feedback is dominated by
stars and by black holes in low mass and high mass galaxies, respectively; 2)
Adjusting or disabling the physical processes implemented in simulations can
elucidate their impact on observables, but outcomes can be degenerate; 3)
Similar galaxy populations can emerge in simulations with dissimilar subgrid
feedback implementations. However, these models generally predict markedly
different gas flow rates into, and out of, galaxies and their haloes. CGM
observations are thus a promising means of breaking this degeneracy and guiding
the development of new feedback models.
|
http://arxiv.org/abs/2309.17075v1
|
What is the impact of human-computer interaction research on industry? While
it is impossible to track all research impact pathways, the growing literature
on translational research impact measurement offers patent citations as one
measure of how industry recognizes and draws on research in its inventions. In
this paper, we perform a large-scale measurement study primarily of 70,000
patent citations to premier HCI research venues, tracing how HCI research is
cited in United States patents over the last 30 years. We observe that 20.1% of
papers from these venues, including 60--80% of papers at UIST and 13% of papers
in a broader dataset of SIGCHI-sponsored venues overall, are cited by patents
-- far greater than premier venues in science overall (9.7%) and NLP (11%).
However, the time lag between a patent and its paper citations is long (10.5
years) and getting longer, suggesting that HCI research and practice may not be
efficiently connected.
|
http://arxiv.org/abs/2301.13431v1
|
Detailed detector simulation is the major consumer of CPU resources at LHCb,
having used more than 90% of the total computing budget during Run 2 of the
Large Hadron Collider at CERN. As data is collected by the upgraded LHCb
detector during Run 3 of the LHC, larger requests for simulated data samples
are necessary, and will far exceed the pledged resources of the experiment,
even with existing fast simulation options. An evolution of technologies and
techniques to produce simulated samples is mandatory to meet the upcoming needs
of analysis to interpret signal versus background and measure efficiencies. In
this context, we propose Lamarr, a Gaudi-based framework designed to offer the
fastest solution for the simulation of the LHCb detector. Lamarr consists of a
pipeline of modules parameterizing both the detector response and the
reconstruction algorithms of the LHCb experiment. Most of the parameterizations
are made of Deep Generative Models and Gradient Boosted Decision Trees trained
on simulated samples or alternatively, where possible, on real data. Embedding
Lamarr in the general LHCb Gauss Simulation framework allows combining its
execution with any of the available generators in a seamless way. Lamarr has
been validated by comparing key reconstructed quantities with Detailed
Simulation. Good agreement of the simulated distributions is obtained with
a two-order-of-magnitude speed-up of the simulation phase.
|
http://arxiv.org/abs/2309.13213v1
|
This paper presents the electron and photon energy calibration obtained with
the ATLAS detector using 140 fb$^{-1}$ of LHC proton-proton collision data
recorded at $\sqrt{s}=13$ TeV between 2015 and 2018. Methods for the
measurement of electron and photon energies are outlined, along with the
current knowledge of the passive material in front of the ATLAS electromagnetic
calorimeter. The energy calibration steps are discussed in detail, with
emphasis on the improvements introduced in this paper. The absolute energy
scale is set using a large sample of $Z$-boson decays into electron-positron
pairs, and its residual dependence on the electron energy is used for the first
time to further constrain systematic uncertainties. The achieved calibration
uncertainties are typically 0.05% for electrons from resonant $Z$-boson decays,
0.4% at $E_\text{T}\sim 10$ GeV, and 0.3% at $E_\text{T}\sim 1$ TeV; for
photons at $E_\text{T}\sim 60$ GeV, they are 0.2% on average. This is more than
twice as precise as the previous calibration. The new energy calibration is
validated using $J/\psi \to ee$ and radiative $Z$-boson decays.
|
http://arxiv.org/abs/2309.05471v2
|
We present DictaLM, a large-scale language model tailored for Modern Hebrew.
Boasting 7B parameters, this model is predominantly trained on Hebrew-centric
data. As a commitment to promoting research and development in the Hebrew
language, we release both the foundation model and the instruct-tuned model
under a Creative Commons license. Concurrently, we introduce DictaLM-Rab,
another foundation model geared towards Rabbinic/Historical Hebrew. These
foundation models serve as ideal starting points for fine-tuning various
Hebrew-specific tasks, such as instruction, Q&A, sentiment analysis, and more.
This release represents a preliminary step, offering an initial Hebrew LLM
model for the Hebrew NLP community to experiment with.
|
http://arxiv.org/abs/2309.14568v1
|
When dealing with difficult inverse problems such as inverse rendering, using
Monte Carlo estimated gradients to optimise parameters can slow down
convergence due to variance. Averaging many gradient samples in each iteration
reduces this variance trivially. However, for problems that require thousands
of optimisation iterations, the computational cost of this approach rises
quickly.
We derive a theoretical framework for interleaving sampling and optimisation.
We update and reuse past samples with low-variance finite-difference estimators
that describe the change in the estimated gradients between each iteration. By
combining proportional and finite-difference samples, we continuously reduce
the variance of our novel gradient meta-estimators throughout the optimisation
process. We investigate how our estimator interlinks with Adam and derive a
stable combination.
We implement our method for inverse path tracing and demonstrate how our
estimator speeds up convergence on difficult optimisation tasks.
|
http://arxiv.org/abs/2309.15676v1
|
When mapping subnational health and demographic indicators, direct weighted
estimators of small area means based on household survey data can be unreliable
when data are limited. If survey microdata are available, unit level models can
relate individual survey responses to unit level auxiliary covariates and
explicitly account for spatial dependence and between area variation using
random effects. These models can produce estimators with improved precision,
but often neglect to account for the design of the surveys used to collect
data. Pseudo-Bayesian approaches incorporate sampling weights to address
informative sampling when using such models to conduct population inference but
credible sets based on the resulting pseudo-posterior distributions can be
poorly calibrated without adjustment. We outline a pseudo-Bayesian strategy for
small area estimation that addresses informative sampling and incorporates a
post-processing rescaling step that produces credible sets with close to
nominal empirical frequentist coverage rates. We compare our approach with
existing design-based and model-based estimators using real and simulated data.
|
http://arxiv.org/abs/2309.12119v1
|
The article summarizes the study performed in the context of the Deloitte
Quantum Climate Challenge in 2023. We present a hybrid quantum-classical method
for calculating Potential Energy Surface scans, which are essential for
designing Metal-Organic Frameworks for Direct Air Capture applications. The
primary objective of this challenge was to highlight the potential advantages
of employing quantum computing. To evaluate the performance of the model, we
conducted total energy calculations using various computing frameworks and
methods. The results demonstrate, at a small scale, the potential advantage of
quantum computing-based models. We aimed to define relevant classical computing
model references for method benchmarking. The most important benefits of using
the PISQ approach for hybrid quantum-classical computational model development
and assessment are demonstrated.
|
http://arxiv.org/abs/2309.05465v1
|
In the realm of modern service-oriented architecture, ensuring Quality of
Service (QoS) is of paramount importance. The ability to predict QoS values in
advance empowers users to make informed decisions. However, achieving accurate
QoS predictions in the presence of various issues and anomalies, including
outliers, data sparsity, grey-sheep instances, and cold-start scenarios,
remains a challenge. Current state-of-the-art methods often fall short when
addressing these issues simultaneously, resulting in performance degradation.
In this paper, we introduce a real-time QoS prediction framework (called ARRQP)
with a specific emphasis on improving resilience to anomalies in the data.
ARRQP utilizes the power of graph convolution techniques to capture intricate
relationships and dependencies among users and services, even when the data is
limited or sparse. ARRQP integrates both contextual information and
collaborative insights, enabling a comprehensive understanding of user-service
interactions. By utilizing robust loss functions, ARRQP effectively reduces the
impact of outliers during the model training. Additionally, we introduce a
sparsity-resilient grey-sheep detection method, which is subsequently treated
separately for QoS prediction. Furthermore, we address the cold-start problem
by emphasizing contextual features over collaborative features. Experimental
results on the benchmark WS-DREAM dataset demonstrate the framework's
effectiveness in achieving accurate and timely QoS predictions.
|
http://arxiv.org/abs/2310.02269v1
|
In this paper, we introduce SCALE, a collaborative framework that connects
compact Specialized Translation Models (STMs) and general-purpose Large
Language Models (LLMs) as one unified translation engine. By introducing
translation from the STM into triplet in-context demonstrations, SCALE unlocks
the refinement and pivoting abilities of the LLM, thus mitigating the language
bias of the LLM and the parallel-data bias of the STM, enhancing LLM speciality
without sacrificing generality, and facilitating continual learning without
expensive LLM
fine-tuning. Our comprehensive experiments show that SCALE significantly
outperforms both few-shot LLMs (GPT-4) and specialized models (NLLB) in
challenging low-resource settings. Moreover, in Xhosa to English translation,
SCALE experiences consistent improvement by a 4 BLEURT score without tuning LLM
and surpasses few-shot GPT-4 by 2.5 COMET score and 3.8 BLEURT score when
equipped with a compact model consisting of merely 600M parameters. SCALE could
also effectively exploit the existing language bias of LLMs by using an
English-centric STM as a pivot for translation between any language pairs,
outperforming few-shot GPT-4 by an average of 6 COMET points across eight
translation directions. Furthermore, we provide an in-depth analysis of SCALE's
robustness, translation characteristics, and latency costs, providing a solid
foundation for future studies exploring the potential synergy between LLMs and
more specialized, task-specific models.
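A small sketch of how the triplet in-context demonstrations described above might be assembled into a prompt: each demonstration pairs a source sentence with the STM's draft and a reference translation, and the test example supplies the source plus the STM draft for the LLM to refine. The template wording is an assumption, not the paper's.

```python
# Hypothetical prompt construction for STM-draft-then-LLM-refine translation.
def build_scale_prompt(demos, src, stm_draft):
    lines = []
    for d_src, d_draft, d_ref in demos:
        lines += [f"Source: {d_src}",
                  f"Draft translation: {d_draft}",
                  f"Refined translation: {d_ref}", ""]
    lines += [f"Source: {src}",
              f"Draft translation: {stm_draft}",
              "Refined translation:"]
    return "\n".join(lines)

demos = [("Molo, unjani?", "Hello, how are?", "Hello, how are you?")]
print(build_scale_prompt(demos, "Ndiyabulela kakhulu.", "I thank much."))
```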
|
http://arxiv.org/abs/2309.17061v1
|
The Gaussian process (GP) is a popular statistical technique for stochastic
function approximation and uncertainty quantification from data. GPs have been
adopted into the realm of machine learning in the last two decades because of
their superior prediction abilities, especially in data-sparse scenarios, and
their inherent ability to provide robust uncertainty estimates. Even so, their
performance highly depends on intricate customizations of the core methodology,
which often leads to dissatisfaction among practitioners when standard setups
and off-the-shelf software tools are being deployed. Arguably the most
important building block of a GP is the kernel function which assumes the role
of a covariance operator. Stationary kernels of the Mat\'ern class are used in
the vast majority of applied studies; poor prediction performance and
unrealistic uncertainty quantification are often the consequences.
Non-stationary kernels show improved performance but are rarely used due to
their more complicated functional form and the associated effort and expertise
needed to define and tune them optimally. In this perspective, we want to help
ML practitioners make sense of some of the most common forms of
non-stationarity for Gaussian processes. We show a variety of kernels in action
using representative datasets, carefully study their properties, and compare
their performances. Based on our findings, we propose a new kernel that
combines some of the identified advantages of existing kernels.
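To make the stationary/non-stationary distinction concrete, the sketch below builds a stationary Matérn-3/2 covariance matrix and a simple non-stationary Gibbs kernel whose lengthscale grows with the input. The lengthscale function is purely illustrative and is not the kernel proposed in the paper.

```python
import numpy as np

def matern32(x1, x2, ell=1.0, var=1.0):
    """Stationary Matern-3/2 kernel on 1-D inputs."""
    r = np.abs(x1[:, None] - x2[None, :]) / ell
    return var * (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def gibbs(x1, x2, ell_fn, var=1.0):
    """Non-stationary Gibbs kernel with input-dependent lengthscale ell_fn."""
    l1, l2 = ell_fn(x1)[:, None], ell_fn(x2)[None, :]
    pre = np.sqrt(2.0 * l1 * l2 / (l1**2 + l2**2))
    return var * pre * np.exp(-(x1[:, None] - x2[None, :])**2 / (l1**2 + l2**2))

x = np.linspace(0.0, 10.0, 200)
ell_fn = lambda x: 0.2 + 0.3 * x          # lengthscale grows with the input
K_stationary = matern32(x, x)
K_nonstationary = gibbs(x, x, ell_fn)
print(K_stationary.shape, K_nonstationary.shape)   # both (200, 200)
```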
|
http://arxiv.org/abs/2309.10068v2
|
With the proliferation of distributed energy resources (DERs) in the
distribution grid, it is a challenge to effectively control a large number of
DERs in a way that is resilient to communication and security disruptions,
while also providing online grid services such as voltage regulation and
virtual power plant (VPP) dispatch. To this end, a hybrid feedback-based
optimization algorithm along with a deep learning forecasting technique is
proposed to
specifically address the cyber-related issues. The online decentralized
feedback-based DER optimization control requires timely, accurate voltage
measurement from the grid. However, in practice such information may not be
received by the control center or even be corrupted. Therefore, the long
short-term memory (LSTM) deep learning algorithm is employed to forecast
delayed/missed/attacked messages with high accuracy. The IEEE 37-node feeder
with high penetration of PV systems is used to validate the efficiency of the
proposed hybrid algorithm. The results show that 1) the LSTM-forecasted lost
voltage can effectively improve the performance of the DER control algorithm in
the practical cyber-physical architecture; and 2) the LSTM forecasting strategy
outperforms other strategies of using previous message and skipping dual
parameter update.
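A minimal sketch (dimensions and architecture are assumptions) of the forecasting component described above: an LSTM predicts the next voltage measurement from a window of past ones, so that a delayed, missing, or corrupted message can be replaced by the forecast.

```python
import torch, torch.nn as nn

class VoltageForecaster(nn.Module):
    def __init__(self, n_nodes=37, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_nodes, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_nodes)

    def forward(self, v_hist):             # v_hist: (batch, window, n_nodes)
        out, _ = self.lstm(v_hist)
        return self.head(out[:, -1])       # one-step-ahead voltage forecast

model = VoltageForecaster()
v_hist = torch.rand(8, 12, 37)             # 12 past measurements, 37 nodes
v_next = model(v_hist)                     # used when the real message is lost
print(v_next.shape)                        # torch.Size([8, 37])
```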
|
http://arxiv.org/abs/2308.00152v1
|
The effective spin-1/2 antiferromagnetic Heisenberg-Ising chain materials,
ACo$_2$V$_2$O$_8$, A = Sr, Ba, are a rich source of exotic fundamental
phenomena and have been investigated for their model magnetic properties both
in zero and non-zero magnetic fields. Here we investigate a new member of the
family, namely PbCo$_2$V$_2$O$_8$. We synthesize powder and single crystal
samples of PbCo$_2$V$_2$O$_8$ and determine its magnetic structure using
neutron diffraction. Furthermore, the magnetic field/temperature phase diagrams
for magnetic field applied along the c, a, and [110] crystallographic
directions in the tetragonal unit cell are determined via magnetization and
heat capacity measurements. A complex series of phases and quantum phase
transitions are discovered that depend strongly on both the magnitude and
direction of the field. Our results show that PbCo$_2$V$_2$O$_8$ is an
effective spin-1/2 antiferromagnetic Heisenberg-Ising chain with properties
that are in general comparable to those of SrCo$_2$V$_2$O$_8$ and
BaCo$_2$V$_2$O$_8$. One interesting departure from the results of these related
compounds, however, is the discovery of a new field-induced phase for the field
direction $H\|$[110] which has not been previously observed.
|
http://arxiv.org/abs/2309.16419v1
|
The rise of computational power has led to unprecedented performance gains
for deep learning models. As more data becomes available and model
architectures become more complex, the need for more computational power
increases. On the other hand, since the introduction of Bitcoin as the first
cryptocurrency and the establishment of the concept of blockchain as a
distributed ledger, many variants and approaches have been proposed. However,
many of them have one thing in common, which is the Proof of Work (PoW)
consensus mechanism. PoW is mainly used to support the process of new block
generation. While PoW has proven its robustness, its main drawback is that it
requires a significant amount of processing power to maintain the security and
integrity of the blockchain. This is due to applying brute force to solve a
hashing puzzle. To put the available computational power to useful and
meaningful work while keeping the blockchain secure, many techniques have been
proposed, one of which is known as Proof of Deep Learning (PoDL). PoDL is a
consensus mechanism that uses the process of training a deep learning model as
proof of work to add new blocks to the blockchain. In this paper, we survey the
various approaches for PoDL. We discuss the different types of PoDL algorithms,
their advantages and disadvantages, and their potential applications. We also
discuss the challenges of implementing PoDL and future research directions.
|
http://arxiv.org/abs/2308.16730v1
|
Efficient transport and harvesting of excitation energy under low light
conditions is an important process in nature and quantum technologies alike.
Here we formulate a quantum optics perspective to excitation energy transport
in configurations of two-level quantum emitters with a particular emphasis on
efficiency and robustness against disorder. We study a periodic geometry of
emitter rings with subwavelength spacing, where collective electronic states
emerge due to near-field dipole-dipole interactions. The system gives rise to
collective subradiant states that are particularly suited to excitation
transport and are protected from energy disorder and radiative decoherence.
Comparing ring geometries with other configurations shows that the former
are more efficient in absorbing, transporting, and trapping incident light.
Because our findings are agnostic as to the specific choice of quantum
emitters, they indicate general design principles for quantum technologies with
superior photon transport properties and may elucidate potential mechanisms
underlying the highly efficient energy transport observed in natural
light-harvesting systems.
|
http://arxiv.org/abs/2309.11376v2
|
Research in model-based reinforcement learning has made significant progress
in recent years. Compared to single-agent settings, the exponential dimension
growth of the joint state-action space in multi-agent systems dramatically
increases the complexity of the environment dynamics, which makes it infeasible
to learn an accurate global model and thus necessitates the use of agent-wise
local models. However, during multi-step model rollouts, the prediction of one
local model can affect the predictions of other local models in the next step.
As a result, local prediction errors can be propagated to other localities and
eventually give rise to considerably large global errors. Furthermore, since
the models are generally used to predict for multiple steps, simply minimizing
one-step prediction errors regardless of their long-term effect on other models
may further aggravate the propagation of local errors. To this end, we propose
Models as AGents (MAG), a multi-agent model optimization framework that
reversely treats the local models as multi-step decision making agents and the
current policies as the dynamics during the model rollout process. In this way,
the local models are able to consider their multi-step mutual effects on each
other before making predictions. Theoretically, we show that the objective of
MAG is approximately equivalent to maximizing a lower bound of the true
environment return. Experiments on the challenging StarCraft II benchmark
demonstrate the effectiveness of MAG.
|
http://arxiv.org/abs/2303.17984v1
|
Detecting transphobia, homophobia, and various other forms of hate speech is
difficult. Signals can vary depending on factors such as language, culture,
geographical region, and the particular online platform. Here, we present a
joint multilingual (M-L) and language-specific (L-S) approach to homophobia and
transphobic hate speech detection (HSD). M-L models are needed to catch words,
phrases, and concepts that are less common or missing in a particular language
and subsequently overlooked by L-S models. Nonetheless, L-S models are better
situated to understand the cultural and linguistic context of the users who
typically write in a particular language. Here we construct a simple and
successful way to merge the M-L and L-S approaches through weight
interpolation in a way that is interpretable and data-driven. We
demonstrate our system on task A of the 'Shared Task on Homophobia/Transphobia
Detection in social media comments' dataset for homophobia and transphobic HSD.
Our system achieves the best results in three of five languages and achieves a
0.997 macro average F1-score on Malayalam texts.
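The merging step described above can be as simple as a convex combination of parameters. Below is a minimal sketch, assuming the two models share an architecture; the interpolation coefficient would be tuned on held-out data.

```python
import copy
import torch

def interpolate_weights(model_ml, model_ls, alpha=0.5):
    """Blend a multilingual and a language-specific model parameter-wise."""
    merged = copy.deepcopy(model_ls)
    sd_ml, sd_ls = model_ml.state_dict(), model_ls.state_dict()
    merged.load_state_dict({k: alpha * sd_ml[k] + (1.0 - alpha) * sd_ls[k]
                            for k in sd_ls})
    return merged

# toy example with identical architectures
m_ml = torch.nn.Linear(10, 2)
m_ls = torch.nn.Linear(10, 2)
merged = interpolate_weights(m_ml, m_ls, alpha=0.3)
print(merged.weight[0, :3])
```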
|
http://arxiv.org/abs/2309.13561v1
|
In many parts of the world, the use of vast amounts of data collected on
public roadways for autonomous driving has increased. In order to detect and
anonymize pedestrian faces and nearby car license plates in actual road-driving
scenarios, there is an urgent need for effective solutions. As more data is
collected, privacy concerns regarding it increase, including but not limited to
pedestrian faces and surrounding vehicle license plates. Normal and fisheye
cameras are the two common camera types that are typically mounted on
collection vehicles. Because of their complex distortion models, fisheye camera
images are deformed in contrast to regular images, which causes many deep
learning models to perform poorly on computer vision tasks. In this work, we
pay particular attention to protecting privacy while adhering to several laws
for fisheye camera photos taken by driverless vehicles. First, we suggest
a framework for extracting face and plate identification knowledge from several
teacher models. Our second suggestion is to transform both the image and the
label from a regular image to fisheye-like data using a varied and realistic
fisheye transformation. Finally, we run a test using the open-source PP4AV
dataset. The experimental findings demonstrated that our model outperformed
baseline methods when trained on data from autonomous vehicles, even when the
data were softly labeled. The implementation code is available on our GitHub:
https://github.com/khaclinh/FisheyePP4AV.
|
http://arxiv.org/abs/2309.03799v1
|
A cosmological scenario in which the onset of neutrino free streaming in the
early Universe is delayed until close to the epoch of matter-radiation equality
has been shown to provide a good fit to some cosmic microwave background (CMB)
data, while being somewhat disfavored by Planck CMB polarization data. To
clarify this situation, we investigate in this paper CMB-independent
constraints on this scenario from the Full Shape of the galaxy power spectrum.
Although this scenario predicts significant changes to the linear matter power
spectrum, we find that it can provide a good fit to the galaxy power spectrum
data. Interestingly, we show that the data display a modest preference for a
delayed onset of neutrino free streaming over the standard model of cosmology,
which is driven by the galaxy power spectrum data on mildly non-linear scales.
This conclusion is supported by both profile likelihood and Bayesian
exploration analyses, showing robustness of the results. Compared to the
standard cosmological paradigm, this scenario predicts a significant
suppression of structure on subgalactic scales. While our analysis relies on
the simplest cosmological representation of neutrino self-interactions, we
argue that this persistent - and somehow consistent - picture in which neutrino
free streaming is delayed motivates the exploration of particle models capable
of reconciling all CMB, large-scale structure, and laboratory data.
|
http://arxiv.org/abs/2309.03941v2
|
A major challenge with off-road autonomous navigation is the lack of maps or
road markings that can be used to plan a path for autonomous robots. Classical
path planning methods mostly assume a perfectly known environment without
accounting for the inherent perception and sensing uncertainty from detecting
terrain and obstacles in off-road environments. Recent work in computer vision
and deep neural networks has advanced the capability of terrain traversability
segmentation from raw images; however, the feasibility of using these noisy
segmentation maps for navigation and path planning has not been adequately
explored. To address this problem, this research proposes an uncertainty-aware
path planning method, URA*, which uses aerial images for autonomous navigation in
off-road environments. An ensemble convolutional neural network (CNN) model is
first used to perform pixel-level traversability estimation from aerial images
of the region of interest. The traversability predictions are represented as a
grid of traversal probability values. An uncertainty-aware planner is then
applied to compute the best path from a start point to a goal point given these
noisy traversal probability estimates. The proposed planner also incorporates
replanning techniques to allow rapid replanning during online robot operation.
The proposed method is evaluated on the Massachusetts Road Dataset, the
DeepGlobe dataset, as well as a dataset of aerial images from off-road proving
grounds at Mississippi State University. Results show that the proposed image
segmentation and planning methods outperform conventional planning algorithms
in terms of the quality and feasibility of the initial path, as well as the
quality of replanned paths.
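As a rough illustration of planning over noisy traversal probabilities, the sketch below runs a Dijkstra-style search on a probability grid with edge cost -log(p), so the returned path maximizes the product of traversal probabilities. This is not the authors' URA* planner, which additionally handles uncertainty estimates and replanning.

```python
import heapq
import numpy as np

def plan(prob, start, goal, eps=1e-6):
    """Shortest path on a grid where each cell has a traversal probability."""
    h, w = prob.shape
    cost = np.full((h, w), np.inf); cost[start] = 0.0
    prev, pq = {}, [(0.0, start)]
    while pq:
        c, (r, col) = heapq.heappop(pq)
        if (r, col) == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, col + dc
            if 0 <= nr < h and 0 <= nc < w:
                new_cost = c - np.log(max(prob[nr, nc], eps))
                if new_cost < cost[nr, nc]:
                    cost[nr, nc] = new_cost
                    prev[(nr, nc)] = (r, col)
                    heapq.heappush(pq, (new_cost, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]; path.append(node)
    return path[::-1]

rng = np.random.default_rng(1)
prob = rng.uniform(0.1, 1.0, size=(20, 20))   # CNN traversability estimates
print(plan(prob, (0, 0), (19, 19))[:5])       # first few cells of the path
```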
|
http://arxiv.org/abs/2309.08814v1
|
In this study, we investigate a recent finding based on strong lensing
observations, which suggests that the sub-halos observed in clusters exhibit
greater compactness compared to those predicted by $\Lambda$CDM simulations. To
address this discrepancy, we performed a comparative analysis of the
cumulative mass function of sub-halos and the
$M_{\text{sub}}$-$V_{\text{circ}}$ relation between observed clusters and 324
simulated clusters from The Three Hundred project, focusing on re-simulations
using GADGET-X and GIZMO-SIMBA baryonic models. The sub-halos' cumulative mass
function of the GIZMO-SIMBA simulated clusters agrees with observations, while
the GADGET-X simulations exhibit discrepancies in the lower sub-halo mass range
possibly due to its strong SuperNova feedback. Both GADGET-X and GIZMO-SIMBA
simulations demonstrate a redshift evolution of the sub-halo mass function and
the $V_{max}$ function, with slightly fewer sub-halos observed at lower
redshifts. Neither the GADGET-X nor the GIZMO-SIMBA (albeit a little closer)
simulated clusters' predictions for the $M_{\text{sub}}$-$V_{\text{circ}}$
relation align with the observational result. Further investigation of the
correlation between sub-halo/halo properties and the discrepancy in the
$M_{\text{sub}}$-$V_{\text{circ}}$ relation reveals that the sub-halo's
half-mass radius and galaxy stellar age, the baryon fraction and sub-halo
distance from the cluster's centre, as well as the halo relaxation state, play
important roles in this relation. Nevertheless, we think it remains challenging
to accurately reproduce the observed $M_{\text{sub}}$-$V_{\text{circ}}$
relation in our current hydrodynamic cluster simulations under the standard
$\Lambda$CDM cosmology.
|
http://arxiv.org/abs/2309.06187v1
|
Hydrogenated amorphous silicon (a-Si:H) is a material with an intrinsically
high radiation hardness that can be deposited on flexible substrates such as
Polyimide. Thanks to these properties, a-Si:H can be used for the production of
flexible sensors. a-Si:H sensors can be successfully utilized in dosimetry,
beam monitoring for particle physics (x-ray, electron, gamma-ray and proton
detection) and radiotherapy, radiation flux measurement for space applications
(study of solar energetic particles and stellar events) and neutron flux
measurements. In this paper we have studied the dosimetric x-ray response of
n-i-p diodes deposited on Polyimide. We measured the linearity of the
photocurrent response to x-rays versus dose-rate from which we have extracted
the dosimetric x-ray sensitivity at various bias voltages. In particular, low
bias voltage operation has been studied to assess the high energy efficiency of
this kind of sensor. A measurement of the stability of the x-ray response
versus time is presented. The effect of detector annealing has been studied.
Operation under bending at various bending radii is also shown.
|
http://arxiv.org/abs/2310.00495v1
|
We study a susceptible-infected-recovered (SIR) epidemic model on a network
of $n$ interacting subpopulations. We analyze the transient and asymptotic
behavior of the infection dynamics in each node of the network. In contrast to
the classical scalar epidemic SIR model, where the infection curve is known to
be unimodal (either always decreasing over time, or initially increasing until
reaching a peak and from then on monotonically decreasing and asymptotically
vanishing), we show the possible occurrence of multimodal infection curves in
the network SIR epidemic model with $n\ge2$ subpopulations. We then focus on
the special case of rank-$1$ interaction matrices, modeling subpopulations of
homogeneously mixing individuals with different activity rates, susceptibility
to the disease, and infectivity levels. For this special case, we find $n$
invariants of motion and provide an explicit expression for the limit
equilibrium point. We also determine necessary and sufficient conditions for
stability of the equilibrium points. We then establish an upper bound on the
number of changes of monotonicity of the infection curve at the single node
level and provide sufficient conditions for its multimodality. Finally, we
present some numerical results revealing that, in the case of interaction
matrices with rank larger than $1$, the single nodes' infection curves may
display multiple peaks.
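For concreteness, here is a minimal sketch of the network SIR dynamics discussed above, with the recovery rate normalized to one and an illustrative 2-node interaction matrix; the parameter values are placeholders and are not taken from the paper.

```python
# ds_i/dt = -s_i (A y)_i,  dy_i/dt = s_i (A y)_i - y_i  (recovery rate = 1)
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.5, 3.0],
              [0.2, 0.4]])                 # illustrative interaction matrix

def rhs(t, state):
    n = len(state) // 2
    s, y = state[:n], state[n:]
    force = A @ y                          # infection pressure on each node
    return np.concatenate([-s * force, s * force - y])

s0 = np.array([0.99, 0.999])
y0 = np.array([0.01, 0.001])
sol = solve_ivp(rhs, (0.0, 60.0), np.concatenate([s0, y0]), dense_output=True)
t = np.linspace(0.0, 60.0, 600)
y1 = sol.sol(t)[2]                         # infection curve of the first node
print("local maxima:", np.sum((y1[1:-1] > y1[:-2]) & (y1[1:-1] > y1[2:])))
```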
|
http://arxiv.org/abs/2309.14583v2
|
Navigating robots through unstructured terrains is challenging, primarily due
to the dynamic environmental changes. While humans adeptly navigate such
terrains by using context from their observations, creating a similar
context-aware navigation system for robots is difficult. The essence of the
issue lies in the acquisition and interpretation of context information, a task
complicated by the inherent ambiguity of human language. In this work, we
introduce LANCAR, which addresses this issue by combining a context translator
with reinforcement learning (RL) agents for context-aware locomotion. LANCAR
allows robots to comprehend context information through Large Language Models
(LLMs) sourced from human observers and convert this information into
actionable context embeddings. These embeddings, combined with the robot's
sensor data, provide a complete input for the RL agent's policy network. We
provide an extensive evaluation of LANCAR under different levels of context
ambiguity and compare it with alternative methods. The experimental results
showcase its superior generalizability and adaptability across different
terrains. Notably, LANCAR shows at least a 7.4% increase in episodic reward
over the best alternatives, highlighting its potential to enhance robotic
navigation in unstructured environments. More details and experiment videos
can be found at http://raaslab.org/projects/LLM_Context_Estimation/
|
http://arxiv.org/abs/2310.00481v3
|
The conditional generative adversarial rainfall model "cGAN" developed for
the UK \cite{Harris22} was trained to post-process ERA5 rainfall into an
ensemble and downscale it to 1 km resolution over three regions of the USA and the
UK. Relative to radar data (stage IV and NIMROD), the quality of the forecast
rainfall distribution was quantified locally at each grid point and between
grid points using the spatial correlation structure. Despite only having
information from a single lower quality analysis, the ensembles of post
processed rainfall produced were found to be competitive with IFS ensemble
forecasts with lead times of between 8 and 16 hours. Comparison to the original
cGAN trained on the UK using the IFS HRES forecast indicates that improved
training forecasts result in improved post-processing.
The cGAN models were additionally applied to the regions that they were not
trained on. Each model performed well in its own region, indicating that each
model is somewhat region specific. However, the model trained on the Washington
DC, Atlantic coast region achieved good scores across the USA and was
competitive over the UK. There are more rainfall events spread over that whole
region, so the improved scores might simply be due to increased data. A
model was therefore trained using data from all four regions, which then
outperformed the models trained locally.
|
http://arxiv.org/abs/2309.15689v1
|
Deep reinforcement learning agents for continuous control are known to
exhibit significant instability in their performance over time. In this work,
we provide a fresh perspective on these behaviors by studying the return
landscape: the mapping between a policy and a return. We find that popular
algorithms traverse noisy neighborhoods of this landscape, in which a single
update to the policy parameters leads to a wide range of returns. By taking a
distributional view of these returns, we map the landscape, characterizing
failure-prone regions of policy space and revealing a hidden dimension of
policy quality. We show that the landscape exhibits surprising structure by
finding simple paths in parameter space which improve the stability of a
policy. To conclude, we develop a distribution-aware procedure which finds such
paths, navigating away from noisy neighborhoods in order to improve the
robustness of a policy. Taken together, our results provide new insight into
the optimization, evaluation, and design of agents.
|
http://arxiv.org/abs/2309.14597v3
|
Electrostatic waves play a critical role in nearly every branch of plasma
physics from fusion to advanced accelerators, to astro, solar, and ionospheric
physics. The properties of planar electrostatic waves are fully determined by
the plasma conditions, such as density, temperature, ionization state, or
details of the distribution functions. Here we demonstrate that electrostatic
wavepackets structured with space-time correlations can have properties that
are independent of the plasma conditions. For instance, an appropriately
structured electrostatic wavepacket can travel at any group velocity, even
backward with respect to its phase fronts, while maintaining a localized energy
density. These linear, propagation-invariant wavepackets can be constructed
with or without orbital angular momentum by superposing natural modes of the
plasma and can be ponderomotively excited by space-time structured laser pulses
like the flying focus.
|
http://arxiv.org/abs/2309.06193v2
|
In this work, we introduce the concept of complex text style transfer tasks
and construct complex text datasets based on two widely applicable scenarios.
Our dataset is the first large-scale data set of its kind, with 700 rephrased
sentences and 1,000 sentences from the game Genshin Impact. While large
language models (LLM) have shown promise in complex text style transfer, they
have drawbacks such as data privacy concerns, network instability, and high
deployment costs. To address these issues, we explore the effectiveness of
small models (less than T5-3B) with implicit style pre-training through
contrastive learning. We also propose a method for automated evaluation of text
generation quality based on alignment with human evaluations using ChatGPT.
Finally, we compare our approach with existing methods and show that our model
achieves state-of-the-art performance among few-shot text style transfer models.
|
http://arxiv.org/abs/2309.10929v1
|
We establish rigorous estimates for the Hausdorff dimension of the spectra of
Laplacians associated to Sierpi\'nski lattices and infinite Sierpi\'nski
gaskets and other post-critically finite self-similar sets.
|
http://arxiv.org/abs/2308.00185v1
|
Federated Learning (FL) has been successfully adopted for distributed
training and inference of large-scale Deep Neural Networks (DNNs). However,
DNNs are characterized by an extremely large number of parameters, thus,
yielding significant challenges in exchanging these parameters among
distributed nodes and managing the memory. Although recent DNN compression
methods (e.g., sparsification, pruning) tackle such challenges, they do not
holistically consider an adaptively controlled reduction of parameter exchange
while maintaining high accuracy levels. We, therefore, contribute with a novel
FL framework (coined FedDIP), which combines (i) dynamic model pruning with
error feedback to eliminate redundant information exchange, which contributes
to significant performance improvement, with (ii) incremental regularization
that can achieve \textit{extreme} sparsity of models. We provide convergence
analysis of FedDIP and report on a comprehensive performance and comparative
assessment against state-of-the-art methods using benchmark data sets and DNN
models. Our results showcase that FedDIP not only controls the model sparsity
but efficiently achieves similar or better performance compared to other model
pruning methods adopting incremental regularization during distributed model
training. The code is available at: https://github.com/EricLoong/feddip.
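A simplified sketch (not the exact FedDIP algorithm) of the two ingredients named above on the client side: a magnitude-based mask sparsifies the update to be exchanged, and the pruned residual is kept as error feedback and re-injected in the next round.

```python
import numpy as np

class SparsifyingClient:
    def __init__(self, n_params, keep_ratio=0.1):
        self.residual = np.zeros(n_params)      # error feedback memory
        self.keep_ratio = keep_ratio

    def compress(self, update):
        corrected = update + self.residual      # re-inject previously pruned mass
        k = max(1, int(self.keep_ratio * corrected.size))
        mask = np.zeros_like(corrected)
        top = np.argpartition(np.abs(corrected), -k)[-k:]
        mask[top] = 1.0                         # keep only the largest entries
        sparse = corrected * mask
        self.residual = corrected - sparse      # remember what was pruned
        return sparse                           # transmitted to the server

client = SparsifyingClient(n_params=1000)
sparse_update = client.compress(np.random.randn(1000))
print(np.count_nonzero(sparse_update))          # ~100 of 1000 entries sent
```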
|
http://arxiv.org/abs/2309.06805v1
|