| publicationDate | title | abstract | id |
|---|---|---|---|
2019-06-27
|
Indications for Dzyaloshinskii-Moriya Interaction at the Pd/Fe Interface Studied by \textit{In Situ} Polarized Neutron Reflectometry
|
Using \textit{in situ} polarized neutron reflectometry, the depth-resolved
evolution of the magnetism and structure in a Pd/Fe/Pd trilayer thin film is
measured during growth. The initial film structure of Pd/Fe shows a small
proximity induced magnetism in the underlayer and a magnetization in the Fe
layer of $\approx1.6$\,$\mu_{\text{B}}$ per Fe atom, less than the expected
bulk value of $2.2$\,$\mu_{\text{B}}$. Deposition of the Pd capping layer
initially follows an island-like growth mode with subsequent coalescence. With
increasing Pd deposition the Fe moment and the proximity-induced magnetism in
the Pd capping layer decrease. After final deposition of the Pd capping layer,
the magnetic profile is structurally and magnetically symmetric across the Fe
layer, with magnetism induced in the Pd up to $0.92$\,nm from the interface.
Throughout the Pd deposition, the Pd/Fe/Pd trilayer structure becomes
increasingly symmetric, which points to a Dzyaloshinskii-Moriya
interaction as a likely cause of the observed magnetic behavior.
|
1906.11532v1
|
2019-07-01
|
Robust Formation of Ultrasmall Room-Temperature Néel Skyrmions in Amorphous Ferrimagnets from Atomistic Simulations
|
N\'eel skyrmions originate from the interfacial Dzyaloshinskii-Moriya
interaction (DMI). Recent studies have explored using thin-film ferromagnets
and ferrimagnets to host N\'eel skyrmions for spintronic applications. However,
it is unclear if ultrasmall (10 nm or less) skyrmions can ever be stabilized at
room temperature for practical use in high-density parallel racetrack memories.
While thicker films can improve stability, DMI decays rapidly away from the
interface. As such, spins far from the interface would experience
near-zero DMI, raising the questions of whether unrealistically large DMI is
needed to stabilize skyrmions, and whether skyrmions will collapse away
from the interface. To address these questions, we have employed atomistic
stochastic Landau-Lifshitz-Gilbert simulations to investigate skyrmions in
amorphous ferrimagnetic GdCo. It is revealed that a DMI significantly reduced
below that of Pt still suffices to stabilize ultrasmall skyrmions even in
films as thick as 15 nm. Moreover, skyrmions are found to retain a uniform
columnar shape across the film thickness despite the decaying DMI. Our results
show that increasing thickness and reducing DMI in GdCo can further reduce the
size of skyrmions at room temperature, which is crucial to improve the density
and energy efficiency in skyrmion based devices.
|
1907.00647v1
|
2019-07-03
|
Effect of Zeeman coupling on the Majorana vortex modes in iron-based topological superconductors
|
In the superconducting regime of FeTe$_{(1-x)}$Se$_x$, there exist two types
of vortices which are distinguished by the presence or absence of zero-energy
states in their core. To understand their origin, we examine the interplay of Zeeman
coupling and superconducting pairings in three-dimensional metals with band
inversion. Weak Zeeman fields are found to suppress the intra-orbital
spin-singlet pairing, known to localize the states at the ends of the vortices
on the surface. On the other hand, an orbital-triplet pairing is shown to be
stable against Zeeman interactions, but leads to delocalized zero-energy
Majorana modes which extend through the vortex. In contrast, the finite-energy
vortex modes remain localized at the vortex ends even when the pairing is of
orbital-triplet form. Phenomenologically, this manifests as an observed
disappearance of zero-bias peaks within the cores of topological vortices upon
increase of the applied magnetic field. The presence of magnetic impurities in
FeTe$_{(1-x)}$Se$_x$, which are attracted to the vortices, would lead to such
Zeeman-induced delocalization of Majorana modes in a fraction of vortices that
capture a large enough number of magnetic impurities. Our results provide an
explanation for the dichotomy between topological and non-topological vortices
recently observed in FeTe$_{(1-x)}$Se$_x$.
|
1907.02077v2
|
2019-07-10
|
Increasing Gender Diversity and Inclusion in Scientific Committees and Related Activities at STScI
|
We present a new initiative by the Women in Astronomy Forum at Space
Telescope Science Institute (STScI) to increase gender diversity and inclusion
in STScI's scientific committees and the activities they generate. This
initiative offers new and uniform guidelines on binary gender representation
goals for each committee and recommendations on how to achieve them in a
homogeneous way, as well as metrics and tools to track progress towards defined
goals. While the new guidelines presented in this paper focus on binary gender
representation, they can be adapted and implemented to support all minority
groups. By creating diverse committees and making them aware of, and trained
on, implicit bias, we expect to create a diverse outcome in the activities they
generate, which, in turn, will advance science further and faster.
|
1907.04880v1
|
2019-07-19
|
Sparse Recovery for Orthogonal Polynomial Transforms
|
In this paper we consider the following sparse recovery problem. We have
query access to a vector $\vx \in \R^N$ such that $\vhx = \vF \vx$ is
$k$-sparse (or nearly $k$-sparse) for some orthogonal transform $\vF$. The goal
is to output an approximation (in an $\ell_2$ sense) to $\vhx$ in sublinear
time. This problem has been well-studied in the special case that $\vF$ is the
Discrete Fourier Transform (DFT), and a long line of work has resulted in
sparse Fast Fourier Transforms that run in time $O(k \cdot \mathrm{polylog}
N)$. However, for transforms $\vF$ other than the DFT (or closely related
transforms like the Discrete Cosine Transform), the question is much less
settled.
In this paper we give sublinear-time algorithms---running in time $\poly(k
\log(N))$---for solving the sparse recovery problem for orthogonal transforms
$\vF$ that arise from orthogonal polynomials. More precisely, our algorithm
works for any $\vF$ that is an orthogonal polynomial transform derived from
Jacobi polynomials. The Jacobi polynomials are a large class of classical
orthogonal polynomials (and include Chebyshev and Legendre polynomials as
special cases), and show up extensively in applications like numerical analysis
and signal processing. One caveat of our work is that we require an assumption
on the sparsity structure of the sparse vector, although we note that vectors
with random support have this property with high probability.
Our approach is to give a very general reduction from the $k$-sparse sparse
recovery problem to the $1$-sparse sparse recovery problem that holds for any
flat orthogonal polynomial transform; then we solve this one-sparse recovery
problem for transforms derived from Jacobi polynomials.
|
1907.08362v1
|
2019-08-28
|
Interplay of spin and mass superfluidity in antiferromagnetic spin-1 BEC and bicirculation vortices
|
The paper investigates the coexistence and interplay of spin and mass
superfluidity in the antiferromagnetic spin-1 BEC. The hydrodynamical theory
describes the spin degree of freedom by equations similar to the
Landau-Lifshitz-Gilbert theory for a bipartite antiferromagnetic insulator. The
variables in the spin space are two subspins with absolute value $\hbar/2$,
which play the role of two sublattice spins in the antiferromagnetic
insulators. As in bipartite antiferromagnetic insulators, in the
antiferromagnetic spin-1 BEC there are two spin-wave modes: one is a gapless
Goldstone mode, the other is gapped. The Landau criterion shows that in the
limit of small total spin (two subspins are nearly antiparallel) the instability of
supercurrents starts from the gapped mode. In the opposite limit of large total
spin (two subspins are nearly parallel) the gapless modes become unstable
earlier than the gapped one. Mass and spin supercurrents decay via phase slips,
when vortices cross streamlines of supercurrent. The vortices participating in
phase slips are nonsingular bicirculation vortices. They are characterized by
two topological charges, which are winding numbers describing circulations of
two angles around the vortex axis. The winding numbers can be half-integer. A
particular example of a half-integer vortex is a half-quantum vortex with the
superfluid velocity circulation $h/2m$. But the superfluid velocity circulation
is not a topological charge, and in general the quantum of this circulation can
be continuously tuned from 0 to $h/2m$.
|
1908.10633v2
|
2019-09-23
|
The NASA Probe space mission concept, Cosmic Evolution Through UV Surveys (CETUS)
|
Cosmic Evolution Through UV Surveys (CETUS) is an all-UV space mission
concept that was selected and funded by NASA for study in 2017.
The main capabilities of CETUS that even Hubble doesn't have are: (1)
wide-field (17.4'x17.4') imaging and spectroscopy of astronomical sources with
<0.5'' resolution; (2) spectral sensitivity to UV radiation at wavelengths as
short as 1000 {\AA}; (3) near-UV multi-object slit spectroscopy; (4)
rapid-response UV spectroscopy and deep imaging of transients like GW 170817;
and (5) 23 times higher sensitivity to extended sources.
The main purposes of this CETUS Final Report are to describe the CETUS
scientific program and to demonstrate the maturity of its instrumentation,
which forms the basis of its estimated cost. While there are similarities of
this Final Report to that submitted to NASA in March 2019 by the Goddard Space
Flight Center, there are important differences including the following. *
Science. The science case has been refreshed, deepened, and expanded as a
result of ideas and recommendations expressed in the Astro2020 science white
papers. * Instrumentation. Detailed investigations, including a high-level
error budget for focus with implications for thermal management, target
acquisition in the MOS micro-shutter array, and contamination control, have
been carried out. * Mission Design. The spacecraft and mission operations
concepts as developed by NGIS Gilbert (formerly Orbital ATK), rather than the
output of Goddard's Mission Design Lab, have been adopted. * Technology.
Technology maturation plans have been updated.
|
1909.10437v1
|
2019-09-25
|
Towards an improved understanding of molecular evolution: the relative roles of selection, drift, and everything in between
|
A major goal of molecular evolutionary biology is to identify loci or regions
of the genome under selection versus those evolving in a neutral manner.
Correct identification allows accurate inference of the evolutionary process
and thus comprehension of historical and contemporary processes driving
phenotypic change and adaptation. A fundamental difficulty lies in
distinguishing sites targeted by selection from both sites linked to these
targets and sites fully independent of selection. These three categories of
sites necessitate attention in light of the debate over the relative importance
of selection versus neutrality and the neutral theory. Modern genomic insights
have proved that complex processes such as linkage, demography, and biased gene
conversion complicate our understanding of the role of neutral versus selective
processes in evolution. In this perspective, we first highlight the importance
of the genomic and (a)biotic context of new mutations to identify the targets
of natural selection. We then present mechanisms that may constrain the
evolution of genomes and bias the inference of selection. We discuss these
mechanisms within the two critical levels that they occur: the population level
and the molecular level. We highlight that they should be taken into account to
correctly distinguish sites across the genome subject to selective or
non-selective forces and stress that a major current field-wide goal is to
quantify the absolute importance of these mechanisms.
|
1909.11490v4
|
2019-10-08
|
Correlated fluctuations in spin orbit torque-coupled perpendicular nanomagnets
|
Low barrier nanomagnets have attracted a lot of research interest for their
use as sources of high quality true random number generation. More recently,
low barrier nanomagnets with tunable output have been shown to be a natural
hardware platform for unconventional computing paradigms such as probabilistic
spin logic. Efficient generation and tunability of high quality random bits is
critical for these novel applications. However, current spintronic random
number generators are based on superparamagnetic tunnel junctions (SMTJs) with
tunability obtained through spin transfer torque (STT), which unavoidably leads
to challenges in designing concatenated networks using these two-terminal
devices. The more recent development of utilizing spin orbit torque (SOT)
allows for a three-terminal device design, but can only tune in-plane
magnetization freely, which is not very energy efficient due to the need to
overcome a large demagnetization field. In this work, we experimentally
demonstrate, for the first time, a stochastic device with perpendicular magnetic
anisotropy (PMA) that is completely tunable by SOT without the aid of any
external magnetic field. Our measurements lead us to hypothesize that a tilted
anisotropy might be responsible for the observed tunability. We carry out
stochastic Landau-Lifshitz-Gilbert (sLLG) simulations to confirm our
experimental observation. Finally, we build an electrically coupled network of
two such stochastic nanomagnet based devices and demonstrate that finite
correlation or anti-correlation can be established between their output
fluctuations by a weak interconnection, despite having a large difference in
their natural fluctuation time scale. Simulations based on a newly developed
dynamical model for autonomous circuits composed of low barrier nanomagnets
show close agreement with the experimental results.
|
1910.03184v1
|
2019-10-09
|
Prophets, Secretaries, and Maximizing the Probability of Choosing the Best
|
Suppose a customer is faced with a sequence of fluctuating prices, such as
for airfare or a product sold by a large online retailer. Given distributional
information about what price they might face each day, how should they choose
when to purchase in order to maximize the likelihood of getting the best price
in retrospect? This is related to the classical secretary problem, but with
values drawn from known distributions. In their pioneering work, Gilbert and
Mosteller [\textit{J. Amer. Statist. Assoc. 1966}] showed that when the values
are drawn i.i.d., there is a thresholding algorithm that selects the best value
with probability approximately $0.5801$. However, the more general problem with
non-identical distributions has remained unsolved.
In this paper we provide an algorithm for the case of non-identical
distributions that selects the maximum element with probability $1/e$, and we
show that this is tight. We further show that if the observations arrive in a
random order, this barrier of $1/e$ can be broken using a static threshold
algorithm, and we show that our success probability is the best possible for
any single-threshold algorithm under random observation order. Moreover, we
prove that one can achieve a strictly better success probability using more
general multi-threshold algorithms, unlike the non-random-order case. Along the
way, we show that the best achievable success probability for the random-order
case matches that of the i.i.d.\ case, which is approximately $0.5801$, under a
"no-superstars" condition that no single distribution is very likely ex ante to
generate the maximum value. We also extend our results to the problem of
selecting one of the $k$ best values.
|
1910.03798v1
|
2019-10-24
|
Order and Information in the Patterns of Spinning Magnetic Micro-disks at the Air-water Interface
|
The application of the Shannon entropy to study the relationship between
information and structures has yielded insights into molecular and material
systems. However, the difficulty in directly observing and manipulating atoms
and molecules hampers the ability of these systems to serve as model systems
for further exploring the links between information and structures. Here, we
use, as a model experimental system, hundreds of spinning magnetic micro-disks
self-organizing at the air-water interface to generate various spatiotemporal
patterns with varying degrees of order. Using the neighbor distance as the
information-bearing variable, we demonstrate the links among information,
structure, and interactions. Most importantly, we establish a direct link
between information and structure without using explicit knowledge of
interactions. Finally, we show that the Shannon entropy by neighbor distances
is a powerful observable in characterizing structural changes. Our findings are
relevant for analyzing natural self-organizing systems and for designing
collective robots.
|
1910.11226v3
|
2019-11-15
|
A geometric look at MHD and the Braginsky dynamo
|
This paper considers magnetohydrodynamics (MHD) and some of its applications
from the perspective of differential geometry, considering the dynamics of an
ideal fluid flow and magnetic field on a general three-dimensional manifold,
equipped with a metric and an induced volume form. The benefit of this level of
abstraction is that it clarifies basic aspects of fluid dynamics such as how
certain quantities are transported, how they transform under the action of
mappings (for example the flow map between Lagrangian labels and Eulerian
positions), how conservation laws arise, and the origin of certain
approximations that preserve the mathematical structure of classical mechanics.
First, the governing equations for ideal MHD are derived in a general setting
by means of an action principle, and making use of Lie derivatives. The way in
which these equations transform under a pull back, by the map taking the
position of a fluid parcel to a background location, is detailed. This is then
used to parameterise Alfv\'en waves using concepts of pseudomomentum and
pseudofield, in parallel with the development of Generalised Lagrangian Mean
theory in hydrodynamics. Finally, non-ideal MHD is considered, with a sketch of
the development of the Braginsky $\alpha\omega$-dynamo in a general setting.
Expressions for the $\alpha$-tensor are obtained, including a novel geometric
formulation in terms of connection coefficients, and related to formulae found
elsewhere in the literature.
|
1911.06592v2
|
2019-11-17
|
Interfacial-Redox-Induced Tuning of Superconductivity in YBa$_{2}$Cu$_{3}$O$_{7-\delta}$
|
Solid-state ionic approaches for modifying ion distributions in getter/oxide
heterostructures offer exciting potential for controlling material properties.
Here we report a simple, scalable approach allowing for total control of the
superconducting transition in optimally doped YBa$_{2}$Cu$_{3}$O$_{7-{\delta}}$
(YBCO) films via a chemically-driven ionic migration mechanism. Using a thin Gd
capping layer of up to 20 nm deposited onto 100 nm thick epitaxial YBCO films,
oxygen is found to leach from deep within the YBCO. Progressive reduction of
the superconducting transition is observed, with complete suppression possible
for a sufficiently thick Gd layer. These effects arise from the combined impact
of redox-driven electron doping and modification of the YBCO microstructure due
to oxygen migration and depletion. This work demonstrates an effective ionic
control of superconductivity in oxides, an interface induced effect that goes
well into the quasi-bulk regime, opening up possibilities for electric field
manipulation.
|
1911.07275v1
|
2019-12-10
|
Integration of Neural Network-Based Symbolic Regression in Deep Learning for Scientific Discovery
|
Symbolic regression is a powerful technique that can discover analytical
equations that describe data, which can lead to explainable models and
generalizability outside of the training data set. In contrast, neural networks
have achieved amazing levels of accuracy on image recognition and natural
language processing tasks, but are often seen as black-box models that are
difficult to interpret and typically extrapolate poorly. Here we use a neural
network-based architecture for symbolic regression called the Equation Learner
(EQL) network and integrate it with other deep learning architectures such that
the whole system can be trained end-to-end through backpropagation. To
demonstrate the power of such systems, we study their performance on several
substantially different tasks. First, we show that the neural network can
perform symbolic regression and learn the form of several functions. Next, we
present an MNIST arithmetic task where a separate part of the neural network
extracts the digits. Finally, we demonstrate prediction of dynamical systems
where an unknown parameter is extracted through an encoder. We find that the
EQL-based architecture can extrapolate quite well outside of the training data
set compared to a standard neural network-based architecture, paving the way
for deep learning to be applied in scientific exploration and discovery.
|
1912.04825v2
|
2019-12-17
|
New search for mirror neutron regeneration
|
The possibility of relatively fast neutron oscillations into a mirror neutron
state is not excluded experimentally when a mirror magnetic field is
considered. Direct searches for the disappearance of neutrons into mirror
neutrons in a controlled magnetic field have previously been performed using
ultracold neutrons, with some anomalous results reported. We describe a
technique using cold neutrons to perform a disappearance and regeneration
search, which would allow us to unambiguously identify a possible oscillation
signal. An experiment using the existing General Purpose-Small Angle Neutron
Scattering instrument at the High Flux Isotope Reactor at Oak Ridge National
Laboratory will have the sensitivity to fully explore the parameter space of
prior ultracold neutron searches and confirm or refute previous claims of
observation. This instrument can also conclusively test the validity of
recently suggested oscillation-based explanations for the neutron lifetime
anomaly.
|
1912.08264v1
|
2020-01-06
|
Highly efficient spin orbit torque in Pt/Co/Ir multilayers with antiferromagnetic interlayer exchange coupling
|
We have studied the spin orbit torque (SOT) in Pt/Co/Ir multilayers with 3
repeats of the unit structure. As the system exhibits oscillatory interlayer
exchange coupling (IEC) with varying Ir layer thickness, we compare the SOT of
films when the Co layers are coupled ferromagnetically and
antiferromagnetically. SOT is evaluated using current induced shift of the
anomalous Hall resistance hysteresis loops. A relatively thick Pt layer,
serving as a seed layer to the multilayer, is used to generate spin current via
the spin Hall effect. In the absence of antiferromagnetic coupling, the SOT is
constant against the applied current density and the corresponding spin torque
efficiency (i.e. the effective spin Hall angle) is $\sim$0.09, in agreement
with previous reports. In contrast, for films with antiferromagnetic coupling,
the SOT increases with the applied current density and eventually saturates.
The SOT at saturation is a factor of $\sim$15 larger than that without the
antiferromagnetic coupling. The spin torque efficiency is $\sim$5 times larger
if we assume the net total magnetization is reduced by a factor of 3 due to the
antiferromagnetic coupling. Model calculations based on the
Landau-Lifshitz-Gilbert equation show that the presence of antiferromagnetic
coupling can increase the SOT, but the degree of enhancement is limited, in
this case, to a factor of 1.2-1.4. We thus consider that there are other
sources of SOT, possibly at the interfaces, which may account for the highly
efficient SOT in the uncompensated synthetic antiferromagnet (SAF) multilayers.
|
2001.01454v1
|
2019-11-24
|
Cybernetical Concepts for Cellular Automaton and Artificial Neural Network Modelling and Implementation
|
As a discipline, cybernetics has a long and rich history. Its first
generation not only had a worldwide span; in the area of computer modelling,
for example, its proponents, such as John von Neumann, Stanislaw Ulam, Warren
McCulloch and Walter Pitts, also came up with models and methods such as
cellular automata and artificial neural networks, which are still the
foundation of most modern modelling approaches. At the same time, cybernetics
also got the attention of philosophers, such as the Frenchman Gilbert Simondon,
who made use of cybernetical concepts in order to establish a metaphysics and a
natural philosophy of individuation, thereby giving cybernetics a philosophical
interpretation, which he baptised allagmatic. In this paper, we emphasise this
allagmatic theory by showing how Simondon's philosophical concepts can be used
to formulate a generic computer model or metamodel for complex systems
modelling and its implementation in program code, according to generic
programming. We also present how the developed allagmatic metamodel is capable
of building simple cellular automata and artificial neural networks.
|
2001.02037v3
|
2020-02-12
|
Competition between magnetic order and charge localization in Na$_2$IrO$_3$ thin crystal devices
|
Spin orbit assisted Mott insulators such as sodium iridate (Na$_2$IrO$_3$)
have been an important subject of study in recent years. In these
materials, the interplay of electronic correlations, spin-orbit coupling,
crystal field effects and a honeycomb arrangement of ions gives rise to
exciting ground states, predicted in the framework of the Kitaev model. The insulating character of
Na$_2$IrO$_3$ has hampered its integration to an electronic device, desirable
for applications, such as the manipulation of quasiparticles interesting for
topological quantum computing. Here we show, through electronic transport
measurements supported by Angle-Resolved Photoemission Spectroscopy (ARPES)
experiments, that electronic transport in Na$_2$IrO$_3$ is governed by variable
range hopping and is strongly dependent on the magnetic ordering transition
known for bulk Na$_2$IrO$_3$, as well as on external electric fields.
Electronic transport measurements allow us to deduce values for the
localization length and the density of states in our Na$_2$IrO$_3$ thin
crystal devices, offering an alternative approach to study insulating layered
materials.
|
2002.04785v1
|
2020-02-13
|
Electron Beam-Induced Nanopores in Bernal-Stacked Hexagonal Boron Nitride
|
Controlling the size and shape of nanopores in two-dimensional materials is a
key challenge in applications such as DNA sequencing, sieving, and quantum
emission in artificial atoms. We here investigate experimentally and
theoretically triangular vacancies in (unconventional) Bernal-stacked AB-h-BN
formed using a high-energy electron beam. Due to the geometric configuration of
AB-h-BN, triangular pores in different layers are aligned, and their sizes are
controlled by the duration of the electron irradiation. Interlayer covalent
bonding at the vacancy edge is not favored, as opposed to what occurs in the
more common AA'-stacked BN. A variety of monolayer, concentric and bilayer
pores in bilayer AB-h-BN are observed in high-resolution transmission electron
microscopy and characterized using ab initio simulations. Bilayer pores in
AB-h-BN are commonly formed, and grow without breaking the bilayer character.
Nanopores in AB-h-BN exhibit a wide range of electronic properties, ranging
from half-metallic to non-magnetic and magnetic semiconducting. Therefore,
because of the controllability of the pore size, the electronic structure is
also highly controllable in these systems, and can potentially be tuned for
particular applications.
|
2002.05795v3
|
2020-02-26
|
Effect of chemical substitution on the skyrmion phase in Cu$_2$OSeO$_3$
|
Magnetic skyrmions have been the focus of intense research due to their
unique qualities which result from their topological protections. Previous work
on Cu$_2$OSeO$_3$, the only known insulating multiferroic skyrmion material,
has shown that chemical substitution alters the skyrmion phase. We chemically
substitute Zn, Ag, and S into powdered Cu$_2$OSeO$_3$ to study the effect on
the magnetic phase diagram. In both the Ag and the S substitutions, we find
that the skyrmion phase is stabilized over a larger temperature range, as
determined via magnetometry and small-angle neutron scattering (SANS).
Meanwhile, while previous magnetometry characterization suggests two high
temperature skyrmion phases in the Zn-substituted sample, SANS reveals the high
temperature phase to be skyrmionic while we are unable to distinguish the other
from helical order. Overall, chemical substitution weakens helical and skyrmion
order as inferred from neutron scattering of the $|q| \approx 0.01$
\AA$^{-1}$ magnetic peak.
|
2002.11827v1
|
2020-03-10
|
Smart City IoT Services Creation through Large Scale Collaboration
|
Smart city solutions are often monolithically implemented, from sensor
data handling through to the provided services. The same challenges are
regularly faced by different developers for every new solution in a new city.
Expertise and know-how can be re-used and the effort shared. In this article we
present methodologies to minimize the effort of implementing new smart
city solutions and maximize the sharing of components. The final target is to
have a live technical community of smart city application developers. The
results of this activity come from the implementation of 35 city services in
27 cities across Europe and South Korea. To share efforts, we encourage
developers to devise applications using a modular approach. Single-function
components that are re-usable by other city services are packaged and published
as standalone components, named Atomic Services. We identify 15 atomic services
addressing smart city challenges in data analytics, data evaluation, data
integration, data validation, and visualization. Thirty-eight instances of the
atomic services are already operational in several smart city services. We detail in
this article, as atomic service examples, some data predictor components.
Furthermore, we describe real-world atomic services usage in the scenarios of
Santander and three Danish cities. The resulting atomic services also generate
a side market for smart city solutions, allowing expertise and know-how to be
re-used by different stakeholders.
|
2003.04843v1
|
2020-03-23
|
Low Power Unsupervised Anomaly Detection by Non-Parametric Modeling of Sensor Statistics
|
This work presents AEGIS, a novel mixed-signal framework for real-time
anomaly detection by examining sensor stream statistics. AEGIS utilizes Kernel
Density Estimation (KDE)-based non-parametric density estimation to generate a
real-time statistical model of the sensor data stream. The likelihood estimate
of the sensor data point can be obtained based on the generated statistical
model to detect outliers. We present CMOS Gilbert Gaussian cell-based design to
realize Gaussian kernels for KDE. For outlier detection, the decision boundary
is defined in terms of kernel standard deviation ($\sigma_{Kernel}$) and
likelihood threshold ($P_{Thres}$). We adopt a sliding window to update the
detection model in real time. We use a time-series dataset provided by Yahoo to
benchmark the performance of AEGIS. An F1-score higher than 0.87 is achieved by
optimizing parameters that are programmable in AEGIS, such as the length of the
sliding window and the decision thresholds. The discussed architecture is
designed in a 45 nm technology node, and our approach on average consumes
$\sim$75 $\mu$W of power at a sampling rate of 2 MHz while using the ten most
recent inlier samples for density estimation. The full version of this research
has been published in IEEE TVLSI.
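The detection rule described here — a sliding-window KDE with Gaussian kernels and a likelihood threshold — can be sketched in software. This is a minimal Python stand-in for the mixed-signal design; the window size, $\sigma_{Kernel}$, and $P_{Thres}$ values are illustrative, not the paper's.

```python
import numpy as np

def kde_likelihood(window, x, sigma):
    """Gaussian-kernel density estimate of point x from recent inlier samples."""
    k = np.exp(-0.5 * ((x - window) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return k.mean()

def detect_stream(stream, window_size=10, sigma=1.0, p_thres=0.01):
    """Flag outliers against a sliding-window KDE model of the stream."""
    window = list(stream[:window_size])          # bootstrap the model
    flags = [False] * window_size
    for x in stream[window_size:]:
        p = kde_likelihood(np.asarray(window), x, sigma)
        outlier = p < p_thres
        flags.append(outlier)
        if not outlier:                          # only inliers update the model
            window.pop(0)
            window.append(x)
    return flags
```

Updating the model only with inliers mirrors the abstract's use of recent inlier samples for density estimation, so a burst of anomalies cannot poison the statistical model.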
|
2003.10088v1
|
2020-03-30
|
Efficient nonparametric inference on the effects of stochastic interventions under two-phase sampling, with applications to vaccine efficacy trials
|
The advent and subsequent widespread availability of preventive vaccines has
altered the course of public health over the past century. Despite this
success, effective vaccines to prevent many high-burden diseases, including
HIV, have been slow to develop. Vaccine development can be aided by the
identification of immune response markers that serve as effective surrogates
for clinically significant infection or disease endpoints. However, measuring
immune response marker activity is often costly, which has motivated the usage
of two-phase sampling for immune response evaluation in clinical trials of
preventive vaccines. In such trials, the measurement of immunological markers
is performed on a subset of trial participants, where enrollment in this second
phase is potentially contingent on the observed study outcome and other
participant-level information. We propose nonparametric methodology for
efficiently estimating a counterfactual parameter that quantifies the impact of
a given immune response marker on the subsequent probability of infection.
Along the way, we fill in theoretical gaps pertaining to the asymptotic
behavior of nonparametric efficient estimators in the context of two-phase
sampling, including a multiple robustness property enjoyed by our estimators.
Techniques for constructing confidence intervals and hypothesis tests are
presented, and an open source software implementation of the methodology, the
txshift R package, is introduced. We illustrate the proposed techniques using
data from a recent preventive HIV vaccine efficacy trial.
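The paper's efficient estimators are beyond an abstract-sized sketch, but the basic idea of correcting for phase-two selection can be illustrated with a simple Hajek-style inverse-probability-weighted mean. This is an assumed toy, not the proposed methodology or the txshift implementation.

```python
import numpy as np

def ipw_mean(marker, in_phase2, pi):
    """Hajek-style mean of a marker measured only on the phase-two subsample.

    in_phase2: 0/1 indicator of selection into the second phase
    pi:        known probability of phase-two selection per participant
    Each measured unit is weighted by 1/pi, so it also stands in for the
    unmeasured units with the same selection probability.
    """
    marker = np.nan_to_num(np.asarray(marker, float))  # unmeasured values drop out
    w = np.asarray(in_phase2, float) / np.asarray(pi, float)
    return np.sum(w * marker) / np.sum(w)
```

With outcome-dependent selection, as in the trials discussed above, pi varies with the observed endpoint, and the weights undo the resulting bias.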
|
2003.13771v2
|
2020-04-05
|
Effects of the Affordable Care Act Dependent Coverage Mandate on Health Insurance Coverage for Individuals in Same-Sex Couples
|
A large body of research documents that the 2010 dependent coverage mandate
of the Affordable Care Act was responsible for significantly increasing health
insurance coverage among young adults. No prior research has examined whether
sexual minority young adults also benefitted from the dependent coverage
mandate, despite previous studies showing lower health insurance coverage among
sexual minorities and the fact that their higher likelihood of strained
relationships with their parents might predict a lower ability to use parental
coverage. Our estimates from the American Community Surveys using
difference-in-differences and event study models show that men in same-sex
couples age 21-25 were significantly more likely to have any health insurance
after 2010 compared to the associated change for slightly older 27 to
31-year-old men in same-sex couples. This increase is concentrated among
employer-sponsored insurance, and it is robust to permutations of time periods
and age groups. Effects for women in same-sex couples and men in different-sex
couples are smaller than the associated effects for men in same-sex couples.
These findings confirm the broad effects of expanded dependent coverage and
suggest that eliminating the federal dependent mandate could reduce health
insurance coverage among young adult sexual minorities in same-sex couples.
|
2004.02296v1
|
2020-04-07
|
A general framework for inference on algorithm-agnostic variable importance
|
In many applications, it is of interest to assess the relative contribution
of features (or subsets of features) toward the goal of predicting a response
-- in other words, to gauge the variable importance of features. Most recent
work on variable importance assessment has focused on describing the importance
of features within the confines of a given prediction algorithm. However, such
assessment does not necessarily characterize the prediction potential of
features, and may provide a misleading reflection of the intrinsic value of
these features. To address this limitation, we propose a general framework for
nonparametric inference on interpretable algorithm-agnostic variable
importance. We define variable importance as a population-level contrast
between the oracle predictiveness of all available features versus all features
except those under consideration. We propose a nonparametric efficient
estimation procedure that allows the construction of valid confidence
intervals, even when machine learning techniques are used. We also outline a
valid strategy for testing the null importance hypothesis. Through simulations,
we show that our proposal has good operating characteristics, and we illustrate
its use with data from a study of an antibody against HIV-1 infection.
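The population-level contrast defined above can be illustrated with a naive plug-in version: fit a predictor with all features and again without the features under consideration, and difference the test-set predictiveness. This sketch uses ordinary least squares on synthetic data; the paper's contribution — the nonparametric efficient estimator with valid confidence intervals — is not implemented here.

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination, the predictiveness measure used here."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

def fit_predict(X_tr, y_tr, X_te):
    """Ordinary least squares with an intercept column."""
    A_tr = np.column_stack([np.ones(len(X_tr)), X_tr])
    coef, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
    A_te = np.column_stack([np.ones(len(X_te)), X_te])
    return A_te @ coef

def plugin_importance(X, y, drop, n_train):
    """Contrast in test-set R^2: all features vs. all features except `drop`."""
    X_tr, X_te, y_tr, y_te = X[:n_train], X[n_train:], y[:n_train], y[n_train:]
    r2_full = r2(y_te, fit_predict(X_tr, y_tr, X_te))
    keep = [j for j in range(X.shape[1]) if j not in drop]
    r2_red = r2(y_te, fit_predict(X_tr[:, keep], y_tr, X_te[:, keep]))
    return r2_full - r2_red
```

Swapping in a machine learning model for `fit_predict` gives the algorithm-agnostic flavor, but the naive plug-in then generally needs the paper's debiasing to support inference.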
|
2004.03683v2
|
2020-04-15
|
Magic DIAMOND: Multi-Fascicle Diffusion Compartment Imaging with Tensor Distribution Modeling and Tensor-Valued Diffusion Encoding
|
Diffusion tensor imaging provides increased sensitivity to microstructural
tissue changes compared to conventional anatomical imaging but also presents
limited specificity. To tackle this problem, the DIAMOND model subdivides the
voxel content into diffusion compartments and draws from diffusion-weighted
data to estimate compartmental non-central matrix-variate Gamma distribution of
diffusion tensors, thereby resolving crossing fascicles while accounting for
their respective heterogeneity. Alternatively, tensor-valued diffusion encoding
defines new acquisition schemes tagging specific features of the intra-voxel
diffusion tensor distribution directly from the outcome of the measurement.
However, the impact of such schemes on estimating brain microstructural
features has only been studied in a handful of parametric single-fascicle
models. In this work, we derive a general Laplace transform for the non-central
matrix-variate Gamma distribution, which enables the extension of DIAMOND to
tensor-valued encoded data. We then evaluate this "Magic DIAMOND" model in
silico and in vivo on various combinations of tensor-valued encoded data.
Assessing uncertainty on parameter estimation via stratified bootstrap, we
investigate both voxel-based and fixel-based metrics by carrying out multi-peak
tractography. We show that our estimated metrics can be mapped along tracks
robustly across regions of fiber crossing, which opens new perspectives for
tractometry and microstructure mapping along specific white-matter tracts.
|
2004.07340v2
|
2020-04-16
|
Measuring Human and Economic Activity from Satellite Imagery to Support City-Scale Decision-Making during COVID-19 Pandemic
|
The COVID-19 outbreak forced governments worldwide to impose lockdowns and
quarantines to prevent virus transmission. As a consequence, there are
disruptions in human and economic activities all over the globe. The recovery
process is also expected to be rough. Economic activities impact social
behaviors, which leave signatures in satellite images that can be automatically
detected and classified. Satellite imagery can support the decision-making of
analysts and policymakers by providing a different kind of visibility into the
unfolding economic changes. In this work, we use a deep learning approach that
combines strategic location sampling and an ensemble of lightweight
convolutional neural networks (CNNs) to recognize specific elements in
satellite images that can be used to automatically compute economic indicators.
This CNN ensemble framework ranked third place in the US
Department of Defense xView challenge, the most advanced benchmark for object
detection in satellite images. We show the potential of our framework for
temporal analysis using the US IARPA Function Map of the World (fMoW) dataset.
We also show results on real examples of different sites before and after the
COVID-19 outbreak to illustrate different measurable indicators. Our code and
annotated high-resolution aerial scenes before and after the outbreak are
available on GitHub (https://github.com/maups/covid19-satellite-analysis).
|
2004.07438v4
|
2020-04-16
|
Subjectifying Objectivity: Delineating Tastes in Theoretical Quantum Gravity Research
|
Research in Theoretical Quantum Gravity has continued expansively even as it
has become detached from classic arbiters of research such as direct empirical
falsification. This makes it an interesting test case for social-scientific
theories of what motivates and mediates contemporary scientific research and
the nature of scientific objectivity. For our empirical investigation, we
conducted 50 semi-structured interviews with researchers in the rival camps of
String Theory and Loop Quantum Gravity, coded a subset for reoccurring themes,
and subjected the resulting data to statistical analysis. Theoretically, we
mobilize aspects of Daston and Galison's depiction of the scientific self and
its relation to epistemic virtues, Pierre Bourdieu's field-centered account of
social space, and Kantian notions of aesthetics in order to delineate the
subjective tastes and the related process of collective consensus-making in
contemporary quantum gravity research. We make two key contributions. First,
our analysis sheds light on the inner workings of the field by connecting its
internal epistemic struggles with relevant social-scientific theories. For
example, we are able to suggest an explanation for how one approach, String
Theory, has become so dominant. Second, our application of theories of social
reproduction to the substance of scientific inquiry merits some substantive
generalizations to Daston and Galison's framework. Most significantly, we
propose as an addendum to their progression the notion of objectivity through
intersubjectivity: objectivity obtained not through the suppression of the self
but by its (regulated) pluralistic expression and performance.
|
2004.07450v2
|
2020-04-22
|
Excitation of high-frequency magnon modes in magnetoelastic films by short strain pulses
|
Development of energy efficient techniques for generation of spin waves
(magnons) is important for implementation of low-dissipation spin-wave-based
logic circuits and memory elements. A promising approach to achieve this goal
is based on the injection of short strain pulses into ferromagnetic films with
a strong magnetoelastic coupling between spins and strains. Here we report
micromagnetoelastic simulations of the magnetization and strain dynamics
excited in Fe$_{81}$Ga$_{19}$ films by picosecond and nanosecond acoustic
pulses created in a GaAs substrate by a transducer subjected to an optical or
electrical impulse. The simulations performed via the numerical solution of the
coupled Landau-Lifshitz-Gilbert and elastodynamic equations show that the
injected strain pulse induces an inhomogeneous magnetization precession in the
ferromagnetic film. The precession lasts up to 1 ns and can be treated as a
superposition of magnon modes having the form of standing spin waves. For
Fe$_{81}$Ga$_{19}$ films with nanoscale thickness, up to seven (six) distinct
modes have been revealed under free-surface (pinning) magnetic boundary
conditions. Remarkably, magnon modes with frequencies over 1 THz can be excited
by acoustic pulses with an appropriate shape and duration in the films
subjected to a moderate external magnetic field. This finding shows that short
strain pulses represent a promising tool for the generation of THz spin waves
necessary for the implementation of high-speed magnonic devices.
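The Landau-Lifshitz-Gilbert dynamics at the core of such simulations can be illustrated for a single macrospin in a static field. This is a toy in reduced units, not the coupled micromagnetoelastic solver of the paper; the field, damping, and time-step values are arbitrary.

```python
import numpy as np

def llg_rhs(m, h_eff, alpha=0.01, gamma=1.0):
    """Landau-Lifshitz-Gilbert equation in explicit Landau-Lifshitz form:
    dm/dt = -gamma/(1+alpha^2) [ m x h + alpha * m x (m x h) ]."""
    pre = gamma / (1 + alpha ** 2)
    mxh = np.cross(m, h_eff)
    return -pre * (mxh + alpha * np.cross(m, mxh))

def integrate(m0, h_eff, dt=1e-3, steps=5000, alpha=0.1):
    """Heun (predictor-corrector) integration with renormalization of |m|."""
    m = np.asarray(m0, float)
    m /= np.linalg.norm(m)
    for _ in range(steps):
        k1 = llg_rhs(m, h_eff, alpha)
        mp = m + dt * k1
        k2 = llg_rhs(mp / np.linalg.norm(mp), h_eff, alpha)
        m = m + 0.5 * dt * (k1 + k2)
        m /= np.linalg.norm(m)   # LLG conserves |m|; enforce it numerically
    return m
```

A spin tilted away from the field precesses while the Gilbert damping term relaxes it toward the field axis, which is the behavior micromagnetoelastic solvers resolve per cell with strain-dependent effective fields.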
|
2004.10838v1
|
2020-04-23
|
Correlation-driven eightfold magnetic anisotropy in a two-dimensional oxide monolayer
|
Engineering magnetic anisotropy in two-dimensional systems has enormous
scientific and technological implications. The uniaxial anisotropy universally
exhibited by two-dimensional magnets has only two stable spin directions,
demanding 180 degrees spin switching between states. We demonstrate a novel
eightfold anisotropy in magnetic SrRuO3 monolayers by inducing a spin
reorientation in (SrRuO3)1/(SrTiO3)N superlattices, in which the magnetic easy
axis of Ru spins is transformed from uniaxial <001> direction (N = 1 and 2) to
eightfold <111> directions (N = 3, 4 and 5). This eightfold anisotropy enables
71 and 109 degrees spin switching in SrRuO3 monolayers, analogous to 71 and 109
degrees polarization switching in ferroelectric BiFeO3. First-principle
calculations reveal that increasing the SrTiO3 layer thickness induces an
emergent correlation-driven orbital ordering, tuning spin-orbit interactions
and reorienting the SrRuO3 monolayer easy axis. Our work demonstrates that
correlation effects can be exploited to substantially change spin-orbit
interactions, stabilizing unprecedented properties in two-dimensional magnets
and opening rich opportunities for low-power, multi-state device applications.
|
2004.10939v1
|
2020-04-27
|
Dynamic Predictions of Postoperative Complications from Explainable, Uncertainty-Aware, and Multi-Task Deep Neural Networks
|
Accurate prediction of postoperative complications can inform shared
decisions regarding prognosis, preoperative risk-reduction, and postoperative
resource use. We hypothesized that multi-task deep learning models would
outperform random forest models in predicting postoperative complications, and
that integrating high-resolution intraoperative physiological time series would
result in more granular and personalized health representations that would
improve prognostication compared to preoperative predictions. In a longitudinal
cohort study of 56,242 patients undergoing 67,481 inpatient surgical procedures
at a university medical center, we compared deep learning models with random
forests for predicting nine common postoperative complications using
preoperative, intraoperative, and perioperative patient data. Our study
indicated several significant results across experimental settings that suggest
the utility of deep learning for capturing more precise representations of
patient health for augmented surgical decision support. Multi-task learning
improved efficiency by reducing computational resources without compromising
predictive performance. Integrated gradients interpretability mechanisms
identified potentially modifiable risk factors for each complication. Monte
Carlo dropout methods provided a quantitative measure of prediction uncertainty
that has the potential to enhance clinical trust. Multi-task learning,
interpretability mechanisms, and uncertainty metrics demonstrated potential to
facilitate effective clinical implementation.
|
2004.12551v2
|
2020-05-08
|
Tree! I am no Tree! I am a Low Dimensional Hyperbolic Embedding
|
Given data, finding a faithful low-dimensional hyperbolic embedding of the
data is a key method by which we can extract hierarchical information or learn
representative geometric features of the data. In this paper, we explore a new
method for learning hyperbolic representations by taking a metric-first
approach. Rather than determining the low-dimensional hyperbolic embedding
directly, we learn a tree structure on the data. This tree structure can then
be used directly to extract hierarchical information, embedded into a
hyperbolic manifold using Sarkar's construction \cite{sarkar}, or used as a
tree approximation of the original metric. To this end, we present a novel fast
algorithm \textsc{TreeRep} such that, given a $\delta$-hyperbolic metric (for
any $\delta \geq 0$), the algorithm learns a tree structure that approximates
the original metric. In the case when $\delta = 0$, we show analytically that
\textsc{TreeRep} exactly recovers the original tree structure. We show
empirically that \textsc{TreeRep} is not only many orders of magnitude faster
than previously known algorithms, but also produces metrics with lower average
distortion and higher mean average precision than most previous algorithms for
learning hyperbolic embeddings, extracting hierarchical information, and
approximating metrics via tree metrics.
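TreeRep itself is more involved, but the $\delta$-hyperbolicity condition it operates under is easy to state in code via Gromov products. Below is a brute-force check with a fixed base point, which bounds $\delta$ up to a constant factor; the example metrics in the test are assumptions, not data from the paper.

```python
import numpy as np
from itertools import combinations

def gromov_product(d, x, y, w):
    """Gromov product (x|y)_w = (d(x,w) + d(y,w) - d(x,y)) / 2."""
    return (d[x, w] + d[y, w] - d[x, y]) / 2

def delta_hyperbolicity(d):
    """Smallest delta with (x|y)_w >= min((x|z)_w, (y|z)_w) - delta for all x, y, z.

    A metric is a tree metric exactly when delta = 0, the regime where
    TreeRep is claimed to recover the tree structure exactly.
    """
    n = d.shape[0]
    w = 0          # a fixed base point suffices up to a constant factor
    delta = 0.0
    for x, y, z in combinations(range(n), 3):
        gxy = gromov_product(d, x, y, w)
        gxz = gromov_product(d, x, z, w)
        gyz = gromov_product(d, y, z, w)
        # each product must dominate the min of the other two, up to delta
        for a, b, c in ((gxy, gxz, gyz), (gxz, gxy, gyz), (gyz, gxy, gxz)):
            delta = max(delta, min(b, c) - a)
    return delta
```

A path metric (a tree) yields delta = 0, while the shortest-path metric of a 4-cycle, which no tree realizes, yields a strictly positive delta.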
|
2005.03847v4
|
2020-07-08
|
On the production of He$^+$ of solar origin in the solar wind
|
Solar wind measurements in the heliosphere are predominantly comprised of
protons, alphas, and minor elements in a highly ionized state. The majority of
low charge states, such as He$^{+}$, measured in situ are often attributed to
pickup ions of non-solar origin. However, through inspection of the velocity
distribution functions of near Earth measurements, we find a small but
significant population of He$^+$ ions in the normal solar wind whose properties
indicate that it originated from the Sun and has evolved as part of the normal
solar wind. Current ionization models, largely governed by electron impact and
radiative ionization and recombination processes, underestimate this population
by several orders of magnitude. Therefore, to reconcile the singly ionized He
observed, we investigate recombination of solar He$^{2+}$ through charge
exchange with neutrals from circumsolar dust as a possible formation mechanism
of solar He$^{+}$. We present an empirical profile of the neutral density
required for charge exchange to become an effective vehicle for recombining
He$^{2+}$ to He$^{+}$ at the observed He$^{+}$ levels. We find that the
formation of He$^{+}$ is not only sensitive to the density of neutrals but also
to the inner boundary of the neutral distribution encountered along the solar
wind path. However, further observational constraints are necessary to confirm
that the interaction between solar $\alpha$ particles and dust neutrals is the
primary source of the He$^{+}$ observations.
|
2007.04402v2
|
2020-07-28
|
Towers and the first-order theory of hyperbolic groups
|
This paper is devoted to the first-order theory of torsion-free hyperbolic
groups. One of its purposes is to review some results and to provide precise
and correct statements and definitions, as well as some proofs and new results.
A key concept is that of a tower (Sela) or NTQ system
(Kharlampovich-Myasnikov). We discuss them thoroughly.
We state and prove a new general theorem which unifies several results in the
literature: elementarily equivalent torsion-free hyperbolic groups have
isomorphic cores (Sela); if $H$ is elementarily embedded in a torsion-free
hyperbolic group $G$, then $G$ is a tower over $H$ relative to $H$ (Perin);
free groups (Perin-Sklinos, Ould-Houcine), and more generally free products of
prototypes and free groups, are homogeneous.
The converse to Sela and Perin's results just mentioned is true. This follows
from the solution to Tarski's problem on elementary equivalence of free groups,
due independently to Sela and Kharlampovich-Myasnikov, which we treat as a
black box throughout the paper.
We present many examples and counterexamples, and we prove some new
model-theoretic results. We characterize prime models among torsion-free
hyperbolic groups, and minimal models among elementarily free groups. Using
Fra\"iss\'e's method, we associate to every torsion-free hyperbolic group $H$ a
unique homogeneous countable group $\mathcal{M}$ in which any hyperbolic group
$H'$ elementarily equivalent to $H$ has an elementary embedding.
In an appendix we give a complete proof of the fact, due to Sela, that towers
over a torsion-free hyperbolic group $H$ are $H$-limit groups.
|
2007.14148v1
|
2020-08-13
|
Prediction of magnetization dynamics in a reduced dimensional feature space setting utilizing a low-rank kernel method
|
We establish a machine learning model for the prediction of magnetization
dynamics as a function of the external field described by the
Landau-Lifshitz-Gilbert equation, the partial differential equation of motion
in micromagnetism. The model allows for fast and accurate determination of the
response to an external field which is illustrated by a thin-film standard
problem. The data-driven method internally reduces the dimensionality of the
problem by means of nonlinear model reduction for unsupervised learning. This
not only makes accurate prediction of the time steps possible, but also
decisively reduces complexity in the learning process where magnetization
states from simulated micromagnetic dynamics associated with different external
fields are used as input data. We use a truncated representation of kernel
principal components to describe the states between time predictions. The
method is capable of handling large training sample sets owing to a low-rank
approximation of the kernel matrix and an associated low-rank extension of
kernel principal component analysis and kernel ridge regression. The approach
entirely shifts computations into a reduced dimensional setting breaking down
the problem dimension from the thousands to the tens.
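The time-stepping idea — learn the map from one state to the next with kernel ridge regression — can be sketched in a few lines. This is plain RBF kernel ridge regression on a toy linear system; the low-rank kernel-PCA machinery described above and the micromagnetic states themselves are omitted, and all parameter values are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

class KernelRidgeStepper:
    """Kernel ridge regression of the one-step map x_{t+1} = f(x_t)."""

    def __init__(self, gamma=1.0, lam=1e-8):
        self.gamma, self.lam = gamma, lam

    def fit(self, X, Y):
        # dual-form ridge solution: (K + lam I) alpha = Y
        K = rbf_kernel(X, X, self.gamma)
        self.X = X
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), Y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.alpha
```

Iterating `predict` on its own output plays the role of time stepping; the low-rank approximation in the paper makes the `fit` step tractable for large training sets of high-dimensional magnetization states.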
|
2008.05986v3
|
2020-07-20
|
Artificial Intelligence is stupid and causal reasoning won't fix it
|
Artificial Neural Networks have reached Grandmaster and even super-human
performance across a variety of games: from those involving perfect-information
(such as Go) to those involving imperfect-information (such as Starcraft). Such
technological developments from AI-labs have ushered concomitant applications
across the world of business - where an AI brand tag is fast becoming
ubiquitous. A corollary of such widespread commercial deployment is that when
AI gets things wrong - an autonomous vehicle crashes; a chatbot exhibits racist
behaviour; automated credit scoring processes discriminate on gender etc. -
there are often significant financial, legal and brand consequences and the
incident becomes major news. As Judea Pearl sees it, the underlying reason for
such mistakes is that, 'all the impressive achievements of deep learning amount
to just curve fitting'. The key, Judea Pearl suggests, is to replace reasoning
by association with causal-reasoning - the ability to infer causes from
observed phenomena. It is a point that was echoed by Gary Marcus and Ernest
Davis in a recent piece for the New York Times: 'we need to stop building
computer systems that merely get better and better at detecting statistical
patterns in data sets - often using an approach known as Deep Learning - and
start building computer systems that from the moment of their assembly innately
grasp three basic concepts: time, space and causality'. In this paper,
foregrounding what in 1949 Gilbert Ryle termed a category mistake, I will offer
an alternative explanation for AI errors: it is not so much that AI machinery
cannot grasp causality, but that AI machinery - qua computation - cannot
understand anything at all.
|
2008.07371v1
|
2020-08-19
|
Dynamical decoupling in interacting systems: applications to signal-enhanced hyperpolarized readout
|
Methods that preserve coherence broadly impact all quantum information
processing and metrology applications. Dynamical decoupling methods accomplish
this by protecting qubits in noisy environments but are typically constrained
to the limit where the qubits themselves are non-interacting. Here we consider
the alternate regime wherein the inter-qubit couplings are of the same order as
dephasing interactions with the environment. We propose and demonstrate a
multi-pulse protocol that protects transverse spin states by suitable
Hamiltonian engineering of the inter-spin coupling while simultaneously
suppressing dephasing noise on the qubits. We benchmark the method on 13C
nuclear spin qubits in diamond, dipolar coupled to each other and embedded in a
noisy electronic spin bath, and hyperpolarized via optically pumped NV centers.
We observe effective state lifetimes of 13C nuclei $T_2^{\prime}\approx$2.5s at
room temperature, an extension of over 4700-fold over the conventional
$T_2^{\ast}$ free induction decay. The spins are continuously interrogated
during the applied quantum control, resulting in 13C NMR line narrowing and a
$>$500-fold boost in SNR due to the lifetime extension. Together with
hyperpolarization, spin interrogation is accelerated by $>10^{11}$ over
conventional 7T NMR. This work suggests strategies for the dynamical decoupling
of coupled qubit systems with applications in a variety of experimental
platforms.
|
2008.08323v1
|
2020-08-30
|
Microwave and spin transfer torque driven coherent control in ferromagnets
|
Coherent control is a method of manipulating the state of matter with
oscillatory electromagnetic radiation, relying on non-adiabatic interaction.
It is commonly applied in quantum information processing. This
technique is interesting in the context of ferromagnetic materials because of
the ability to combine it with spintronics for the purpose of fundamental spin
transport research, low-power information processing, and potentially future
quantum bit (Qubit) applications. In this work we address the theoretical
grounds of coherent manipulation in practical ferromagnetic systems. We study
electromagnetic radiation driven interaction that is enhanced in the presence
of spin polarized currents and map the conditions that allow coherent
manipulation for which Rabi oscillations take place. The role of the magnetic
anisotropy field is shown to act as an additional oscillatory driving field. We
discuss the Gilbert losses in the context of effective coherence decay rates
and show that it is possible to control these rates by application of a static
spin current. The case of coherent manipulation using oscillatory spin currents
that is free of radiation is discussed as well. Our work paves the way towards
spin current amplification as well as radiation-free coherent control schemes
that may potentially lead to novel Qubits that are robust and scalable.
|
2008.13139v3
|
2020-08-31
|
Philosophy-Guided Modelling and Implementation of Adaptation and Control in Complex Systems
|
Control was from its very beginning an important concept in cybernetics.
Later on, with the works of W. Ross Ashby, for example, biological concepts
such as adaptation were interpreted in the light of cybernetic systems theory.
Adaptation is the process by which a system is capable of regulating or
controlling itself in order to adapt to changes of its inner and outer
environment maintaining a homeostatic state. In earlier works we have developed
a system metamodel that on the one hand refers to cybernetic concepts such as
structure, operation, and system, and on the other to the philosophy of
individuation of Gilbert Simondon. The result is the so-called allagmatic
method that is capable of creating concrete models of systems such as
artificial neural networks and cellular automata starting from abstract
building blocks. In this paper, we add to our already existing method the
cybernetic concepts of control and especially adaptation. In regard to the
system metamodel, we rely again on philosophical theories, this time the
philosophy of organism of Alfred N. Whitehead. We show how these new
meta-theoretical concepts are described formally and how they are implemented
in program code. We also show what role they play in simple experiments. We
conclude that philosophical abstract concepts help to better understand the
process of creating computer models and their control and adaptation. In the
outlook we discuss how the allagmatic method needs to be extended in order to
cover the field of complex systems and Norbert Wiener's ideas on control.
|
2009.00110v4
|
2020-09-02
|
X-ray linear dichroic ptychography
|
Biominerals such as seashells, corals skeletons, bone, and enamel are
optically anisotropic crystalline materials with unique nano- and micro-scale
organization that translates into exceptional macroscopic mechanical
properties, providing inspiration for engineering new and superior biomimetic
structures. Here we use particles of Seriatopora aculeata coral skeleton as a
model and demonstrate, for the first time, x-ray linear dichroic ptychography.
We map the aragonite (CaCO3) crystal c-axis orientations in coral skeleton with
35 nm spatial resolution. Linear dichroic phase imaging at the O K-edge energy
shows strong polarization-dependent contrast and reveals the presence of both
narrow (< 35{\deg}) and wide (> 35{\deg}) c-axis angular spread in
sub-micrometer coral particles. These x-ray ptychography results were
corroborated using 4D scanning transmission electron nano-diffraction on the
same particles. Evidence of co-oriented but disconnected corallite sub-domains
indicates jagged crystal boundaries consistent with formation by amorphous
nanoparticle attachment. Looking forward, we anticipate that x-ray linear
dichroic ptychography can be applied to study nano-crystallites, interfaces,
nucleation and mineral growth of optically anisotropic materials with sub-ten
nanometers spatial resolution in three dimensions.
|
2009.01093v1
|
2020-09-18
|
The effect of the surface magnetic anisotropy of the neodymium atoms on the coercivity in the neodymium permanent magnet
|
The Nd permanent magnet (Nd$_{2}$Fe$_{14}$B) is an indispensable material
used in modern energy conversion devices. The realization of high coercivity at
finite temperatures is a burning issue. One of the important ingredients for
controlling the coercive force is the surface property of magnetic grains. It
has been reported by first-principles studies that the Nd atoms in the first
(001) surface layer facing the vacuum have in-plane anisotropy perpendicular to
the $c$ axis, which may decrease the coercivity. Focusing on the surface
anisotropy effect on the coercivity, we examine the coercivity at zero and
finite temperatures by using an atomistic model reflecting the lattice
structure of the Nd magnet with a stochastic Landau-Lifshitz-Gilbert equation
method. We study general three cases, in which the Nd atoms in surface layers
have (1) no anisotropy, (2) in-plane anisotropy, and (3) reinforced anisotropy
for two types of surfaces, (001) and (100) surfaces. We find that in contrast
to the zero-temperature case, due to the thermal fluctuation effect, the
modification of only the first surface layer has little effect on the
coercivity at finite temperatures. However, the modification of a few layers
results in significant effects. We discuss the details of the dependence of the
coercivity on temperature, type of surface, and modified layer depth, and also
the features of domain growth in magnetization reversal.
|
2009.08572v1
|
2020-09-18
|
Information- and Coding-Theoretic Analysis of the RLWE Channel
|
Several cryptosystems based on the \emph{Ring Learning with Errors} (RLWE)
problem have been proposed within the NIST post-quantum cryptography
standardization process, e.g., NewHope. Furthermore, there are systems like
Kyber which are based on the closely related MLWE assumption. Both previously
mentioned schemes result in a non-zero decryption failure rate (DFR). The
combination of encryption and decryption for these kinds of algorithms can be
interpreted as data transmission over a noisy channel. To the best of our
knowledge this paper is the first work that analyzes the capacity of this
channel. We show how to modify the encryption schemes such that the input
alphabets of the corresponding channels are increased. In particular, we
present lower bounds on their capacities which show that the transmission rate
can be significantly increased compared to standard proposals in the
literature. Furthermore, under the common assumption of stochastically
independent coefficient failures, we give lower bounds on achievable rates
based on both the Gilbert-Varshamov bound and concrete code constructions using
BCH codes. By means of our constructions, we can either increase the total
bitrate (by a factor of $1.84$ for Kyber and by a factor of $7$ for NewHope)
while guaranteeing the same DFR or for the same bitrate, we can significantly
reduce the DFR for all schemes considered in this work (e.g., for NewHope from
$2^{-216}$ to $2^{-12769}$).
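The Gilbert-Varshamov bound invoked here guarantees that a q-ary code of length n and minimum distance d exists with at least $q^n / V_q(n, d-1)$ codewords, where $V_q$ is the Hamming-ball volume. The computation is direct (generic bound only; the paper's DFR-specific BCH constructions are not reproduced):

```python
from math import comb, log

def hamming_ball(n, r, q=2):
    """V_q(n, r): number of words within Hamming distance r of a fixed word."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def gv_bound_size(n, d, q=2):
    """GV bound: some (n, M, d)_q code exists with M >= q^n / V_q(n, d-1)."""
    vol = hamming_ball(n, d - 1, q)
    return -(-q ** n // vol)            # ceiling division

def gv_rate(n, d, q=2):
    """Achievable rate log_q(M) / n guaranteed by the GV bound."""
    return log(gv_bound_size(n, d, q), q) / n
```

For example, the bound for n = 7, d = 3 only guarantees 5 binary codewords, while the Hamming(7,4) code achieves 16 — the GV bound certifies existence, not optimality, which is why concrete code constructions can beat it.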
|
2009.08681v3
|
2020-09-28
|
Precise control of $J_\mathrm{eff}=1/2$ magnetic properties in Sr$_2$IrO$_4$ epitaxial thin films by variation of strain and thin film thickness
|
We report on a comprehensive investigation of the effects of strain and film
thickness on the structural and magnetic properties of epitaxial thin films of
the prototypal $J_\mathrm{eff}=1/2$ compound Sr$_2$IrO$_4$ by advanced X-ray
scattering. We find that the Sr$_2$IrO$_4$ thin films can be grown fully
strained up to a thickness of 108 nm. By using X-ray resonant scattering, we
show that the out-of-plane magnetic correlation length is strongly dependent on
the thin film thickness, but independent of the strain state of the thin films.
This can be used as a finely tuned dial to adjust the out-of-plane magnetic
correlation length and transform the magnetic anisotropy from two-dimensional
(2D) to three-dimensional (3D) behavior by incrementing film thickness. These
results provide a clearer picture for the systematic control of the magnetic
degrees of freedom in epitaxial thin films of Sr$_2$IrO$_4$ and bring to light
the potential for a rich playground to explore the physics of $5d$-transition
metal compounds.
|
2009.13185v1
|
2020-10-03
|
WinterLab: Developing a low-cost, portable experiment platform to encourage engagement in the electronics lab
|
Encouraging student engagement is a key aim in any educational setting, and
allowing students the freedom to pursue their own methods of solving problems
through independent experimentation has been shown to markedly improve this. In
many contexts, however, allowing students this flexibility in their learning is
hampered by constraints of the material itself, such as in the electronics
laboratory, where expensive and bulky equipment confines the learning
environment to the laboratory room. Finding ourselves in the position of
teaching one such laboratory course at the undergraduate level, we sought to
encourage students to learn through independent investigation and the pursuit
of personal projects, by providing a more flexible and inquiry-based learning
environment and allowing them to take their measurement equipment -- and their
learning -- beyond the laboratory itself. We present this project as a case of
design both for and by students, with the lead designer undertaking the project
after attending the course in question, and pursuing its development as a
foundational step in their graduate career. We discuss the challenges and
opportunities we encountered over the course of the design and development
process, and the eventual key output of the project: a portable, low-cost,
integrated electronics experimentation platform called the Winterlab board.
|
2010.01426v2
|
2020-10-16
|
Hyperspectral interference tomography of nacre
|
Structural characterization of biologically formed materials is essential for
understanding biological phenomena and their environment, and generating new
bio-inspired engineering concepts. For example, nacre -- formed by mollusks in
the ocean -- encodes local environmental conditions throughout its formation
and has exceptional strength due to its nanoscale brick-and-mortar structure.
This layered structure, comprising transparent aragonite tablets bonded with an
ultra-thin organic polymer, also results in stunning interference colors.
Existing methods of structural characterization of nacre rely on some form of
cross-sectional analysis, such as scanning electron microscopy or
polarization-dependent imaging contrast (PIC) mapping. However, these
techniques are destructive and too time- and resource-intensive to analyze
large sample areas. Here we present an all-optical, rapid, and non-destructive
imaging technique -- hyperspectral interference tomography (HIT) -- to
spatially map the structural parameters of nacre and other disordered layered
materials. We combined hyperspectral imaging with optical-interference modeling
to infer the mean tablet thickness and disordering of nacre layers across
entire mollusk shells at various stages of development, observing a previously
unknown relationship between the growth of the mollusk and tablet thickness.
Our rapid, inexpensive, and nondestructive method can be readily applied to
in-field studies.
|
2010.08170v1
|
2020-11-03
|
Recent results for the Landau-Lifshitz equation
|
We give a survey on some recent results concerning the Landau-Lifshitz
equation, a fundamental nonlinear PDE with a strong geometric content,
describing the dynamics of the magnetization in ferromagnetic materials. We
revisit the Cauchy problem for the anisotropic Landau-Lifshitz equation,
without dissipation, for smooth solutions, and also in the energy space in
dimension one. We also examine two approximations of the Landau-Lifshitz
equation given by the Sine-Gordon and cubic Schr\"odinger
equations, arising in certain singular limits of strong easy-plane and
easy-axis anisotropy, respectively.
Concerning localized solutions, we review the orbital and asymptotic
stability problems for a sum of solitons in dimension one, exploiting the
variational nature of the solitons in the hydrodynamical framework.
Finally, we survey results concerning the existence, uniqueness and stability
of self-similar solutions (expanders and shrinkers) for the isotropic
Landau-Lifshitz equation with Gilbert term. Since expanders are associated with
a singular initial condition with a jump discontinuity, we also review their
well-posedness in spaces linked to the BMO space.
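The Landau-Lifshitz dynamics with Gilbert damping discussed above can be illustrated on a single macrospin. The sketch below uses dimensionless units, an arbitrarily chosen damping constant and field, and a simple explicit Euler step with renormalization; it shows the damped precession relaxing onto the effective-field axis:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def llg_step(m, h, alpha, dt):
    """One explicit Euler step of the dimensionless Landau-Lifshitz-Gilbert
    equation dm/dt = -(m x h + alpha * m x (m x h)) / (1 + alpha^2),
    followed by renormalization to keep |m| = 1."""
    mxh = cross(m, h)
    mxmxh = cross(m, mxh)
    pre = 1.0 / (1.0 + alpha * alpha)
    m = tuple(mi - pre * (a + alpha * b) * dt for mi, a, b in zip(m, mxh, mxmxh))
    n = math.sqrt(sum(mi * mi for mi in m))
    return tuple(mi / n for mi in m)

m = (1.0, 0.0, 0.0)          # start along x
h = (0.0, 0.0, 1.0)          # effective field along z
for _ in range(20000):
    m = llg_step(m, h, alpha=0.1, dt=0.01)
print(round(m[2], 3))         # damping drives m toward the field direction
```

Without the Gilbert term (alpha = 0) the spin would precess around the field indefinitely; the damping is what produces the relaxation.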
|
2011.01692v3
|
2020-11-10
|
The Virtual Goniometer: A new method for measuring angles on 3D models of fragmentary bone and lithics
|
The contact goniometer is a commonly used tool in lithic and
zooarchaeological analysis, despite suffering from a number of shortcomings due
to the physical interaction between the measuring implement, the object being
measured, and the individual taking the measurements. However, lacking a simple
and efficient alternative, researchers in a variety of fields continue to use
the contact goniometer to this day. In this paper, we present a new goniometric
method that we call the virtual goniometer, which takes angle measurements
virtually on a 3D model of an object. The virtual goniometer allows for rapid
data collection, and for the measurement of many angles that cannot be
physically accessed by a manual goniometer. We compare the intra-observer
variability of the manual and virtual goniometers, and find that the virtual
goniometer is far more consistent and reliable. Furthermore, the virtual
goniometer allows for precise replication of angle measurements, even among
multiple users, which is important for reproducibility of goniometric-based
research. The virtual goniometer is available as a plug-in in the open source
mesh processing packages Meshlab and Blender, making it easily accessible to
researchers exploring the potential for goniometry to improve archaeological
methods and address anthropological questions.
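At its core, a goniometric measurement on a 3D model reduces to the angle between two fitted plane normals. A toy version of that computation follows; the actual plug-in also handles patch selection and plane fitting on the mesh, which are not shown here:

```python
import math

def angle_between(n1, n2):
    """Angle in degrees between two surface normals, as a virtual
    goniometer would report between two fitted planar patches."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    c = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return math.degrees(math.acos(c))

print(round(angle_between((0, 0, 1), (1, 0, 1)), 1))  # 45.0
```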
|
2011.04898v2
|
2020-11-17
|
Competing energy scales in topological superconducting heterostructures
|
Artificially engineered topological superconductivity has emerged as a viable
route to create Majorana modes, exotic quasiparticles which have raised great
expectations for storing and manipulating information in topological quantum
computational schemes. The essential ingredients for their realization are spin
non-degenerate metallic states proximitized to an s-wave superconductor. In
this context, proximity-induced superconductivity in materials with a sizable
spin-orbit coupling has been heavily investigated in recent years. Although
there is convincing evidence that superconductivity may indeed be induced, it
has been difficult to elucidate its topological nature. In this work, we
systematically engineer an artificial topological superconductor by
progressively introducing superconductivity (Nb) into metals with strong
spin-orbital coupling (Pt) and 3D topological surface states (Bi2Te3). Through
a longitudinal study of the character of superconducting vortices within s-wave
superconducting Nb and proximity-coupled Nb/Pt and Nb/Bi2Te3, we detect the
emergence of a zero-bias peak that is directly linked to the presence of
topological surface states. Supported by a detailed theoretical model, our
results are rationalized in terms of competing energy trends which are found to
impose an upper limit to the size of the minigap separating Majorana and
trivial modes, its size being ultimately linked to fundamental materials
properties.
|
2011.08812v1
|
2020-12-01
|
Phase-field modeling of biomineralization in mollusks and corals: Microstructure vs. formation mechanism
|
While biological crystallization processes have been studied on the
microscale extensively, models addressing the mesoscale aspects of such
phenomena are rare. In this work, we investigate whether the phase-field theory
developed in materials science for describing complex polycrystalline
structures on the mesoscale can be meaningfully adapted to model
crystallization in biological systems. We demonstrate the abilities of the
phase-field technique by modeling a range of microstructures observed in
mollusk shells and coral skeletons, including granular, prismatic,
sheet/columnar nacre, and sprinkled spherulitic structures. We also compare two
possible micromechanisms of calcification: the classical route via ion-by-ion
addition from a fluid state and a non-classical route, crystallization of an
amorphous precursor deposited at the solidification front. We show that with
appropriate choice of the model parameters microstructures similar to those
found in biomineralized systems can be obtained along both routes, though the
timescale of the non-classical route appears to be more realistic. The
resemblance of the simulated and natural biominerals suggests that, underneath
the immense biological complexity observed in living organisms, the underlying
design principles for biological structures may be understood with simple math,
and simulated by phase-field theory.
|
2012.00666v1
|
2020-12-02
|
Symmetry of the Magnetoelastic Interaction of Rayleigh and Shear Horizontal Magnetoacoustic Waves in Nickel Thin Films on LiTaO$_3$
|
We study the interaction of Rayleigh and shear horizontal surface acoustic
waves (SAWs) with spin waves in thin Ni films on a piezoelectric LiTaO$_3$
substrate, which supports both SAW modes simultaneously. Because Rayleigh and
shear horizontal modes induce different strain components in the Ni thin films,
the symmetries of the magnetoelastic driving fields, of the magnetoelastic
response, and of the transmission nonreciprocity differ for both SAW modes. Our
experimental findings are well explained by a theoretical model based on a
modified Landau--Lifshitz--Gilbert approach. We show that the symmetries of the
magnetoelastic response driven by Rayleigh- and shear horizontal SAWs
complement each other, which makes it possible to excite spin waves for any
relative orientation of magnetization and SAW propagation direction and,
moreover, can be utilized to characterize surface strain components of unknown
acoustic wave modes.
|
2012.01055v2
|
2020-12-03
|
Localization of Malaria Parasites and White Blood Cells in Thick Blood Smears
|
Effectively determining malaria parasitemia is a critical aspect in assisting
clinicians to accurately determine the severity of the disease and provide
quality treatment. Microscopy applied to thick blood smears is the de
facto method for malaria parasitemia determination. However, manual
quantification of parasitemia is time consuming, laborious and requires
considerable trained expertise which is particularly inadequate in highly
endemic and low resourced areas. This study presents an end-to-end approach for
localisation and count of malaria parasites and white blood cells (WBCs) which
aid in the effective determination of parasitemia; the quantitative content of
parasites in the blood. On a dataset of slices of images of thick blood smears,
we build models to analyse the obtained digital images. To improve model
performance due to the limited size of the dataset, data augmentation was
applied. Our preliminary results show that our deep learning approach reliably
detects and returns a count of malaria parasites and WBCs with a high precision
and recall. We also evaluate our system against human experts and results
indicate a strong correlation between our deep learning model counts and the
manual expert counts (p=0.998 for parasites, p=0.987 for WBCs). This approach
could potentially be applied to support malaria parasitemia determination
especially in settings that lack sufficient microscopists.
|
2012.01994v1
|
2020-12-05
|
Age-Optimal Low-Power Status Update over Time-Correlated Fading Channel
|
In this paper, we consider transmission scheduling in a status update system,
where updates are generated periodically and transmitted over a Gilbert-Elliott
fading channel. The goal is to minimize the long-run average age of information
(AoI) at the destination under an average energy constraint. We consider two
practical cases to obtain channel state information (CSI): (i) \emph{without
channel sensing} and (ii) \emph{with delayed channel sensing}. For case (i),
the channel state is revealed when an ACK/NACK is received at the transmitter
following a transmission, but when no transmission occurs, the channel state is
not revealed. Thus, we have to design schemes that balance tradeoffs across
energy, AoI, channel exploration, and channel exploitation. The problem is
formulated as a constrained partially observable Markov decision process
problem (POMDP). To reduce algorithm complexity, we show that the optimal
policy is a randomized mixture of no more than two stationary deterministic
policies each of which is of a threshold-type in the belief on the channel. For
case (ii), (delayed) CSI is available at the transmitter via channel sensing.
In this case, the tradeoff is only between the AoI and energy consumption and
the problem is formulated as a constrained MDP. The optimal policy is shown to
have a similar structure as in case (i) but with an AoI associated threshold.
Finally, the performance of the proposed structure-aware algorithms is
evaluated numerically and compared with a Greedy policy.
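The Gilbert-Elliott channel underlying both cases is a two-state Markov chain with state-dependent erasure probabilities. A small simulator (transition probabilities below are illustrative, not taken from the paper) shows the long-run loss rate approaching the stationary bad-state probability p_gb / (p_gb + p_bg):

```python
import random

def gilbert_elliott(n, p_gb, p_bg, e_good=0.0, e_bad=1.0, seed=0):
    """Simulate n slots of a two-state Gilbert-Elliott channel.
    p_gb: P(good -> bad), p_bg: P(bad -> good).
    Returns a list of erasure indicators (1 = packet lost)."""
    rng = random.Random(seed)
    good = True
    erasures = []
    for _ in range(n):
        e = e_good if good else e_bad
        erasures.append(1 if rng.random() < e else 0)
        if good:
            good = rng.random() >= p_gb
        else:
            good = rng.random() < p_bg
    return erasures

loss = gilbert_elliott(100000, p_gb=0.1, p_bg=0.5)
print(round(sum(loss) / len(loss), 2))  # close to 0.1 / (0.1 + 0.5) ~ 0.17
```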
|
2012.02958v2
|
2020-11-30
|
Procode: the Swiss Multilingual Solution for Automatic Coding and Recoding of Occupations and Economic Activities
|
Objective. Epidemiological studies require data that are in alignment with
the classifications established for occupations or economic activities. The
classifications usually include hundreds of codes and titles. Manual coding of
raw data may result in misclassification and be time consuming. The goal was to
develop and test a web-tool, named Procode, for coding of free-texts against
classifications and recoding between different classifications. Methods. Three
text classifiers, i.e. Complement Naive Bayes (CNB), Support Vector Machine
(SVM) and Random Forest Classifier (RFC), were investigated using a k-fold
cross-validation. 30 000 free-texts with manually assigned classification codes
of French classification of occupations (PCS) and French classification of
activities (NAF) were available. For recoding, Procode integrated a workflow
that converts codes of one classification to another according to existing
crosswalks. Since this is a straightforward operation, only the recoding time
was measured. Results. Among the three investigated text classifiers, CNB
resulted in the best performance, where the classifier predicted accurately
57-81% and 63-83% classification codes for PCS and NAF, respectively. SVM led
to somewhat lower results (by 1-2%), while RFC coded accurately up to 30% of
the data. The coding operation required one minute per 10 000 records, while
the recoding was faster, i.e. 5-10 seconds. Conclusion. The algorithm
integrated in Procode showed satisfactory performance, since the tool had to
assign the right code from among 500-700 possibilities. Based on
the results, the authors decided to implement CNB in Procode. In future, if
another classifier shows a superior performance, an update will include the
required modifications.
|
2012.07521v1
|
2020-12-16
|
Dynamic clay microstructures emerge via ion complexation waves
|
Clays control carbon, water and nutrient transport in the lithosphere,
promote cloud formation and lubricate fault slip through interactions among
hydrated mineral interfaces. Clay mineral properties are difficult to model
because their structures are disordered, curved and dynamic. Consequently,
interactions at the clay mineral-aqueous interface have been approximated using
electric double layer models based on single crystals of mica and atomistic
simulations. We discover that waves of complexation dipoles at dynamically
curving interfaces create an emergent long-range force that drives exfoliation
and restacking over time- and length-scales that are not captured in existing
models. Curvature delocalizes electrostatic interactions in ways that
fundamentally differ from planar surfaces, altering the ratio of ions bound to
the convex and concave sides of a layer. Multiple-scattering reconstruction of
low-dose energy-filtered cryo electron tomography enabled direct imaging of ion
complexes and electrolyte distributions at hydrated and curved mineral
interfaces with {\aa}ngstrom resolution over micron length scales. Layers
exfoliate and restack abruptly and repeatedly over timescales that depend
strongly on the counterion identity, demonstrating that the strong coupling
between elastic, electrostatic and hydration forces in clays promotes collective
reorganization previously thought to be a feature only of active matter.
|
2012.09295v1
|
2020-12-17
|
Age-optimal Scheduling over Hybrid Channels
|
We consider the problem of minimizing the age of information when a source
can transmit status updates over two heterogeneous channels. Our work is
motivated by recent developments in 5G mmWave technology, where transmissions
may occur over an unreliable but fast (e.g., mmWave) channel or a slow reliable
(e.g., sub-6GHz) channel. The unreliable channel is modeled as a
time-correlated Gilbert-Elliot channel at a high rate when the channel is in
the 'ON' state. The reliable channel provides a deterministic but lower data
rate. The scheduling strategy determines the channel to be used for
transmission in each time slot, aiming to minimize the time-average age of
information (AoI). The optimal scheduling problem is formulated as a Markov
Decision Process (MDP), which is challenging to solve because super-modularity
does not hold in a part of the state space. We address this challenge and show
that a multi-dimensional threshold-type scheduling policy is optimal for
minimizing the age. By exploiting the structure of the MDP and analyzing the
discrete-time Markov chains (DTMCs) of the threshold-type policy, we devise a
low-complexity bisection algorithm to compute the optimal thresholds. We
compare different scheduling policies using numerical simulations.
|
2012.09403v6
|
2020-12-21
|
Variations on the Maiani-Testa approach and the inverse problem
|
We discuss a method to construct hadronic scattering and decay amplitudes
from Euclidean correlators, by combining the approach of a regulated inverse
Laplace transform with the work of Maiani and Testa. Revisiting the original
result, we observe that the key observation, i.e. that only threshold
scattering information can be extracted at large separations, can be understood
by interpreting the correlator as a spectral function, $\rho(\omega)$,
convolved with the Euclidean kernel, $e^{- \omega t}$, which is sharply peaked
at threshold. We therefore consider a modification in which a smooth step
function, equal to one above a target energy, is inserted in the spectral
decomposition. This can be achieved either through Backus-Gilbert-like methods
or more directly using the variational approach. The result is a shifted
resolution function, such that the large $t$ limit projects onto scattering or
decay amplitudes above threshold. The utility of this method is highlighted
through large $t$ expansions of both three- and four-point functions that
include leading terms proportional to the real and imaginary parts (separately)
of the target observable. This work also presents new results relevant for the
un-modified correlator at threshold, including expressions for extracting the
$N \pi$ scattering length from four-point functions and a new strategy to
organize the large $t$ expansion that exhibits better convergence than the
expansion in powers of $1/t$.
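The key observation, that the kernel $e^{-\omega t}$ weights the spectral function ever more sharply toward threshold as $t$ grows, is easy to see in a toy two-state model (the energy levels below are arbitrary, chosen only for illustration):

```python
import math

# Toy spectral function with two states, rho(w) = d(w - 1) + d(w - 1.5),
# giving the Euclidean correlator C(t) = e^{-t} + e^{-1.5 t}.
def corr(t):
    return math.exp(-t) + math.exp(-1.5 * t)

# Fraction of C(t) carried by the lowest (threshold) state:
for t in (1.0, 5.0, 20.0):
    frac = math.exp(-t) / corr(t)
    print(t, round(frac, 4))
```

At large $t$ the threshold state saturates the correlator, which is why only threshold scattering information survives in the unmodified Maiani-Testa setup.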
|
2012.11488v1
|
2021-01-13
|
PID passivity-based droop control of power converters: Large-signal stability, robustness and performance
|
We present a full review of PID passivity-based controllers (PBC) applied to
power electronic converters, discussing limitations, unprecedented merits and
potential improvements in terms of large-signal stability, robustness and
performance. We provide four main contributions. The nominal case is first
considered and it is shown, under the assumption of perfect knowledge of the
system parameters, that the PID-PBC is able to guarantee global exponential
stability of a desired operating point for any positive gains. Second, we
analyze robustness of the controller to parameters uncertainty for a specific
class of power converters, by establishing precise stability margins. Third, we
propose a modification of the controller by introducing a leakage, in order to
overcome some of the intrinsic performance and robustness limitations.
Interestingly, such controller can be interpreted at steady-state as a droop
between the input and the passive output, similar to traditional primary
controllers. Fourth, we robustify the design against saturation of the control
input via an appropriate monotone transformation of the controller. The
obtained results are thoroughly discussed and validated by simulations on two
relevant power applications: a dc/dc boost converter and an HVDC grid-connected
voltage source converter.
|
2101.05047v2
|
2021-02-15
|
Recent Developments in Blockchain Technology and their Impact on Energy Consumption
|
The enormous power consumption of Bitcoin has led to undifferentiated
discussions in science and practice about the sustainability of blockchain and
distributed ledger technology in general. However, blockchain technology is far
from homogeneous - not only with regard to its applications, which now go far
beyond cryptocurrencies and have reached businesses and the public sector, but
also with regard to its technical characteristics and, in particular, its power
consumption. This paper summarizes the status quo of the power consumption of
various implementations of blockchain technology, with special emphasis on the
recent 'Bitcoin Halving' and so-called 'zk-rollups'. We argue that although
Bitcoin and other proof-of-work blockchains do indeed consume a lot of power,
alternative blockchain solutions with significantly lower power consumption are
already available today, and new promising concepts are being tested that could
further reduce in particular the power consumption of large blockchain networks
in the near future. From this we conclude that although the criticism of
Bitcoin's power consumption is legitimate, it should not be used to derive an
energy problem of blockchain technology in general. In many cases in which
processes can be digitised or improved with the help of more energy-efficient
blockchain variants, one can even expect net energy savings.
|
2102.07886v1
|
2021-03-11
|
Toward the Next Generation of News Recommender Systems
|
This paper proposes a vision and research agenda for the next generation of
news recommender systems (RS), called the table d'hote approach. A table d'hote
(translates as host's table) meal is a sequence of courses that create a
balanced and enjoyable dining experience for a guest. Likewise, we believe news
RS should strive to create a similar experience for the users by satisfying the
news-diet needs of a user. While extant news RS considers criteria such as
diversity and serendipity, and RS bundles have been studied for other contexts
such as tourism, table d'hote goes further by ensuring the recommended articles
satisfy a diverse set of user needs in the right proportions and in a specific
order. In table d'hote, available articles need to be stratified based on the
different ways that news can create value for the reader, building from
theories and empirical research in journalism and user engagement. Using
theories and empirical research from communication on the uses and
gratifications (U&G) consumers derive from media, we define two main strata in
a table d'hote news RS, each with its own substrata: 1) surveillance, which
consists of information the user needs to know, and 2) serendipity, which are
the articles offering unexpected surprises. The diversity of the articles
according to the defined strata and the order of the articles within the list
of recommendations are also two important aspects of the table d'hote in order
to give the users the most effective reading experience. We propose our vision,
link it to the existing concepts in the RS literature, and identify challenges
for future research.
|
2103.06909v1
|
2021-03-16
|
Machine learning methods for the prediction of micromagnetic magnetization dynamics
|
Machine learning (ML) entered the field of computational micromagnetics only
recently. The main objective of these new approaches is the automatization of
solutions of parameter-dependent problems in micromagnetism such as fast
response curve estimation modeled by the Landau-Lifshitz-Gilbert (LLG)
equation. Data-driven models for the solution of time- and parameter-dependent
partial differential equations require high dimensional training
data-structures. ML in this case is by no means a straight-forward trivial
task, it needs algorithmic and mathematical innovation. Our work introduces
theoretical and computational conceptions of certain kernel and neural network
based dimensionality reduction approaches for efficient prediction of solutions
via the notion of low-dimensional feature space integration. We introduce
efficient treatment of kernel ridge regression and kernel principal component
analysis via low-rank approximation. A second line follows neural network (NN)
autoencoders as nonlinear data-dependent dimensional reduction for the training
data with focus on accurate latent space variable description suitable for a
feature space integration scheme. We verify and compare the methods numerically
on a NIST standard problem. The low-rank kernel method approach is fast and
surprisingly accurate, while the NN scheme can even exceed this level of
accuracy at the expense of significantly higher costs.
|
2103.09079v2
|
2021-03-18
|
Bounding the detection efficiency threshold in Bell tests using multiple copies of the maximally entangled two-qubit state carried by a single pair of particles
|
In this paper, we investigate the critical efficiency of detectors to observe
Bell nonlocality using multiple copies of the maximally entangled two-qubit
state carried by a single pair of particles, such as hyperentangled states, and
the product of Pauli measurements. It is known that in a
Clauser-Horne-Shimony-Holt (CHSH) Bell test the symmetric detection efficiency
of $82.84\%$ can be tolerated for the two-qubit maximally entangled state. We
beat this enigmatic threshold by entangling two particles with multiple degrees
of freedom. The obtained upper bounds of the symmetric detection efficiency
thresholds are $80.86\%$, $73.99\%$ and $69.29\%$ for two, three and four
copies of the two-qubit maximally entangled state, respectively. The number of
measurements and outcomes in the respective cases are 4, 8 and 16. To find the
improved thresholds, we use large-scale convex optimization tools, which allow
us to go significantly beyond state-of-the-art results. The proof is exact up
to three copies, while for four copies it is due to reliable numerical
computations. Specifically, we used linear programming to obtain the two-copy
threshold and the corresponding Bell inequality, and convex optimization based
on Gilbert's algorithm for three and four copies of the two-qubit state. We
show analytically that the symmetric detection efficiency threshold decays
exponentially with the number of copies of the two-qubit state. Our techniques
can also be applied to more general Bell nonlocality scenarios with more than
two parties.
|
2103.10413v2
|
2021-04-05
|
When Can Liquid Democracy Unveil the Truth?
|
In this paper, we investigate the so-called ODP-problem that has been
formulated by Caragiannis and Micha [10]. Here, we are in a setting with two
election alternatives out of which one is assumed to be correct. In ODP, the
goal is to organise the delegations in the social network in order to maximize
the probability that the correct alternative, referred to as ground truth, is
elected. While the problem is known to be computationally hard, we strengthen
existing hardness results by providing a novel strong approximation hardness
result: For any positive constant $C$, we prove that, unless $P=NP$, there is
no polynomial-time algorithm for ODP that achieves an approximation guarantee
of $\alpha \ge (\ln n)^{-C}$, where $n$ is the number of voters. The reduction
designed for this result uses poorly connected social networks in which some
voters suffer from misinformation. Interestingly, under certain hypotheses on
either the accuracies of voters or the connectivity of the network, we obtain a
polynomial-time $1/2$-approximation algorithm. This observation proves formally
that the connectivity of the social network is a key feature for the efficiency
of the liquid democracy paradigm. Lastly, we run extensive simulations and
observe that simple algorithms (working either in a centralized or
decentralized way) outperform direct democracy on a large class of instances.
Overall, our contributions yield new insights on the question in which
situations liquid democracy can be beneficial.
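As a baseline for the direct-democracy comparison in the simulations, the classical Condorcet jury computation gives the probability that a simple majority of independent voters recovers the ground truth (the accuracy value below is illustrative, not from the paper):

```python
from math import comb

def majority_correct(n, p):
    """Probability that a simple majority of n independent voters,
    each correct with probability p, selects the ground truth
    (n odd; the classical Condorcet jury setting)."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(round(majority_correct(11, 0.6), 3))  # roughly 0.75
```

As n grows this probability tends to 1 when p > 1/2, which is why direct democracy is a strong baseline and delegation only helps under the connectivity and accuracy conditions the paper identifies.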
|
2104.01828v1
|
2021-04-05
|
Floquet prethermalization with lifetime exceeding 90s in a bulk hyperpolarized solid
|
We report the observation of long-lived Floquet prethermal states in a bulk
solid composed of dipolar-coupled $^{13}$C nuclei in diamond at room
temperature. For precessing nuclear spins prepared in an initial transverse
state, we demonstrate pulsed spin-lock Floquet control that prevents their
decay over multiple-minute long periods. We observe Floquet prethermal
lifetimes $T_2'\approx$90.9s, extended >60,000-fold over the nuclear free
induction decay times. The spins themselves are continuously interrogated for
$\sim$10min, corresponding to the application of $\approx$5.8M control pulses.
The $^{13}$C nuclei are optically hyperpolarized by lattice Nitrogen Vacancy
(NV) centers; the combination of hyperpolarization and continuous spin readout
yields significant signal-to-noise in the measurements. This allows probing the
Floquet thermalization dynamics with unprecedented clarity. We identify four
characteristic regimes of the thermalization process, discerning short-time
transient processes leading to the prethermal plateau, and long-time system
heating towards infinite temperature. This work points to new opportunities
possible via Floquet control in networks of dilute, randomly distributed,
low-sensitivity nuclei. In particular, the combination of minutes-long
prethermal lifetimes and continuous spin interrogation opens avenues for
quantum sensors constructed from hyperpolarized Floquet prethermal nuclei.
|
2104.01988v2
|
2021-04-14
|
Generalized Simple Streaming Codes from MDS Codes
|
Streaming codes represent a packet-level FEC scheme for achieving reliable,
low-latency communication. In the literature on streaming codes, the
commonly-assumed Gilbert-Elliott channel model is replaced by a more
tractable, delay-constrained, sliding-window (DCSW) channel model that can
introduce either random or burst erasures. The known streaming codes that are
rate optimal over the DCSW channel model are constructed by diagonally
embedding a scalar block code across successive packets. These code
constructions have field size that is quadratic in the delay parameter $\tau$
and have a somewhat complex structure with an involved decoding procedure. This
led to the introduction of simple streaming (SS) codes in which diagonal
embedding is replaced by staggered-diagonal embedding (SDE). The SDE approach
reduces the impact of a burst of erasures and makes it possible to construct
near-rate-optimal streaming codes using Maximum Distance Separable (MDS) codes
having linear field size. The present paper takes this development one step
further, by retaining the staggered-diagonal feature, but permitting the
placement of more than one code symbol from a given scalar codeword within each
packet. These generalized, simple streaming codes allow us to improve upon the
rate of SS codes, while retaining the simplicity of working with MDS codes. We
characterize the maximum code rate of streaming codes under a constraint on the
number of contiguous packets over which symbols of the underlying scalar code
are dispersed. Such a constraint leads to simplified code construction and
reduced-complexity decoding.
|
2104.07005v1
|
2021-04-22
|
COVID-19 and Big Data: Multi-faceted Analysis for Spatio-temporal Understanding of the Pandemic with Social Media Conversations
|
COVID-19 has been devastating the world since the end of 2019 and has
continued to play a significant role in major national and worldwide events,
and consequently, the news. In its wake, it has left no life unaffected. Having
earned the world's attention, social media platforms have served as a vehicle
for the global conversation about COVID-19. In particular, many people have
used these sites in order to express their feelings, experiences, and
observations about the pandemic. We provide a multi-faceted analysis of
critical properties exhibited by these conversations on social media regarding
the novel coronavirus pandemic. We present a framework for analyzing, mining,
and tracking the critical content and characteristics of social media
conversations around the pandemic. Focusing on Twitter and Reddit, we have
gathered a large-scale dataset on COVID-19 social media conversations. Our
analyses cover tracking potential reports on virus acquisition, symptoms,
conversation topics, and language complexity measures through time and by
region across the United States. We also present a BERT-based model for
recognizing instances of hateful tweets in COVID-19 conversations, which
achieves a lower error rate than the previous state of the art. Our results
provide empirical validation for the effectiveness of our proposed framework
and further demonstrate that social media data can be efficiently leveraged to
provide public health experts with inexpensive but thorough insight over the
course of an outbreak.
|
2104.10807v1
|
2021-05-05
|
exoplanet: Gradient-based probabilistic inference for exoplanet data & other astronomical time series
|
"exoplanet" is a toolkit for probabilistic modeling of astronomical time
series data, with a focus on observations of exoplanets, using PyMC3 (Salvatier
et al., 2016). PyMC3 is a flexible and high-performance model-building language
and inference engine that scales well to problems with a large number of
parameters. "exoplanet" extends PyMC3's modeling language to support many of
the custom functions and probability distributions required when fitting
exoplanet datasets or other astronomical time series. While it has been used
for other applications, such as the study of stellar variability, the primary
purpose of "exoplanet" is the characterization of exoplanets or multiple star
systems using time-series photometry, astrometry, and/or radial velocity. In
particular, the typical use case would be to use one or more of these datasets
to place constraints on the physical and orbital parameters of the system, such
as planet mass or orbital period, while simultaneously taking into account the
effects of stellar variability.
|
2105.01994v2
|
2021-05-05
|
Elemental Abundances in M31: Gradients in the Giant Stellar Stream
|
We analyze existing measurements of [Fe/H] and [$\alpha$/Fe] for individual
red giant branch (RGB) stars in the Giant Stellar Stream (GSS) of M31 to
determine whether spatial abundance gradients are present. These measurements
were obtained from low- ($R \sim 3000$) and moderate- ($R \sim 6000$)
resolution Keck/DEIMOS spectroscopy using spectral synthesis techniques as part
of the Elemental Abundances in M31 survey. From a sample of 62 RGB stars
spanning the GSS at 17, 22, and 33 projected kpc, we measure a [Fe/H] gradient
of $-$0.018 $\pm$ 0.003 dex kpc$^{-1}$ and a negligible [$\alpha$/Fe] gradient
with M31-centric radius. We investigate GSS abundance patterns in the outer
halo using additional [Fe/H] and [$\alpha$/Fe] measurements for 6 RGB stars
located along the stream at 45 and 58 projected kpc. These abundances provide
tentative evidence that the trends in [Fe/H] and [$\alpha$/Fe] beyond 40 kpc in
the GSS are consistent with those within 33 kpc. We also compare the GSS
abundances to 65 RGB stars located along the possibly related Southeast (SE)
shelf substructure at 12 and 18 projected kpc. The abundances of the GSS and SE
shelf are consistent, supporting a common origin hypothesis, although this
interpretation may be complicated by the presence of [Fe/H] gradients in the
GSS. We discuss the abundance patterns in the context of photometric studies
from the literature and explore implications for the properties of the GSS
progenitor, suggesting that the high $\langle$[$\alpha$/Fe]$\rangle$ of the GSS
(+0.40 $\pm$ 0.05 dex) favors a major merger scenario for its formation.
|
2105.02339v1
|
2021-05-17
|
A Unified Adaptive Recoding Framework for Batched Network Coding
|
Batched network coding is a variation of random linear network coding which
has low computational and storage costs. In order to adapt to random
fluctuations in the number of erasures in individual batches, it is not optimal
to recode and transmit the same number of packets for all batches. Different
distributed optimization models, which are called adaptive recoding schemes,
were formulated for this purpose. The key component of these optimization
problems is the expected value of the rank distribution of a batch at the next
network node, which is also known as the expected rank. In this paper, we put
forth a unified adaptive recoding framework with an arbitrary recoding field
size. We show that the expected rank functions are concave when the packet loss
pattern is a stationary stochastic process, which covers, but is not limited
to, independent packet loss and the Gilbert-Elliott packet loss model. Under this
concavity assumption, we show that there always exists a solution which not
only can minimize the randomness on the number of recoded packets but also can
tolerate rank distribution errors due to inaccurate measurements or limited
precision of the machine. We provide an algorithm to obtain such an optimal
solution, and propose tuning schemes that can turn any feasible
solution into a desired optimal solution.
|
2105.07614v2
|
2021-05-18
|
Magnetic flux structuring of the quiet Sun internetwork. Center-to-limb analysis of solar-cycle variations
|
It is now well established that the quiet Sun contains in total more magnetic
flux than active regions and represents an important reservoir of magnetic
energy. But the nature and evolution of these fields remain largely unknown.
We investigate the solar-cycle and center-to-limb variations of magnetic-flux
structures at small scales in internetwork regions of the quiet Sun.
We used Hinode SOT/SP data from the irradiance program between 2008 and 2016.
Maps of the magnetic-flux density are derived from the center-of-gravity method
applied to the FeI 630.15 nm and FeI 630.25 nm lines. To correct the maps for
the instrumental smearing, we applied a deconvolution method based on a
principal component analysis of the line profiles and on a Richardson-Lucy
deconvolution of their coefficients. We then performed a spectral analysis of
the spatial fluctuations of the magnetic-flux density in 10'' x 10''
internetwork regions spanning a wide range of latitudes.
At low and mid latitudes the power spectra do not vary significantly with the
solar cycle. However, at solar maximum, for one scan in the activity belt showing
an enhanced network, a marginal increase in the power of the magnetic
fluctuations is observed at granular and larger scales in the internetwork. At
high latitudes, we observe variations at granular and larger scales where the
power decreases at solar maximum. At all latitudes the power of the
magnetic fluctuations at scales smaller than 0.5'' remains constant throughout
the solar cycle.
Our results favor a small-scale dynamo that operates in the internetwork, but
they show that the global dynamo also contributes to the internetwork fields.
|
2105.08657v1
|
2021-05-21
|
Hybrid Machine Learning for Scanning Near-field Optical Spectroscopy
|
The underlying physics behind an experimental observation often lacks a
simple analytical description. This is especially the case for scanning probe
microscopy techniques, where the interaction between the probe and the sample
is nontrivial. Realistic modeling to include the details of the probe is always
exponentially more difficult than its "spherical cow" counterparts. On the
other hand, a well-trained artificial neural network based on real data can
grasp the hidden correlation between the signal and sample properties. In this
work, we show that, via a combination of model calculation and experimental
data acquisition, a physics-infused hybrid neural network can predict the
tip-sample interaction in the widely used scattering-type scanning near-field
optical microscope. This hybrid network provides a long-sought solution for
accurate extraction of material properties from tip-specific raw data. The
methodology can be extended to other scanning probe microscopy techniques as
well as other data-oriented physical problems in general.
|
2105.10551v1
|
2021-05-26
|
Contention Resolution with Predictions
|
In this paper, we consider contention resolution algorithms that are
augmented with predictions about the network. We begin by studying the natural
setup in which the algorithm is provided a distribution defined over the
possible network sizes that predicts the likelihood of each size occurring. The
goal is to leverage the predictive power of this distribution to improve on
worst-case time complexity bounds. Using a novel connection between contention
resolution and information theory, we prove lower bounds on the expected time
complexity with respect to the Shannon entropy of the corresponding network
size random variable, for both the collision detection and no collision
detection assumptions. We then analyze upper bounds for these settings,
assuming now that the distribution provided as input might differ from the
actual distribution generating network sizes. We express their performance with
respect to both entropy and the statistical divergence between the two
distributions -- allowing us to quantify the cost of poor predictions. Finally,
we turn our attention to the related perfect advice setting, parameterized with
a length $b\geq 0$, in which all active processes in a given execution are
provided the best possible $b$ bits of information about their network. We
provide tight bounds on the speed-up possible with respect to $b$ for
deterministic and randomized algorithms, with and without collision detection.
These bounds provide a fundamental limit on the maximum power that can be
provided by any predictive model with a bounded output size.
|
2105.12706v1
|
2021-05-27
|
Balancing Static Vacuum Black Holes with Signed Masses in 4 and 5 Dimensions
|
We construct a new set of asymptotically flat, static vacuum solutions to the
Einstein equations in dimensions 4 and 5, which may be interpreted as a
superposition of positive and negative mass black holes. The resulting
spacetimes are axisymmetric in 4-dimensions and bi-axisymmetric in
5-dimensions, and are regular away from the negative mass singularities, for
instance conical singularities are absent along the axes. In 5-dimensions, the
topologies of signed mass black holes used in the construction may be either
spheres $S^3$ or rings $S^1 \times S^2$; in particular, the negative mass
static black ring solution is introduced. A primary observation that
facilitates the superposition is the fact that, in Weyl-Papapetrou coordinates,
negative mass singularities arise as overlapping singular support for a
particular type of Green's function. Furthermore, a careful analysis of conical
singularities along axes is performed, and formulas are obtained for their
propagation across horizons, negative mass singularities, and corners. The
methods are robust, and may be used to construct a multitude of further
examples. Lastly, we show that balancing does not occur between any two signed
mass black holes of the type studied here in 4 dimensions, while in 5
dimensions two-body balancing is possible.
|
2105.13260v2
|
2021-06-11
|
Inference for treatment-specific survival curves using machine learning
|
In the absence of data from a randomized trial, researchers often aim to use
observational data to draw causal inference about the effect of a treatment on
a time-to-event outcome. In this context, interest often focuses on the
treatment-specific survival curves; that is, the survival curves that would be
observed were the entire population under study assigned to receive the treatment or not.
Under certain causal conditions, including that all confounders of the
treatment-outcome relationship are observed, the treatment-specific survival
curve can be identified with a covariate-adjusted survival function. Several
estimators of this function have been proposed, including estimators based on
outcome regression, inverse probability weighting, and doubly robust
estimators. In this article, we propose a new cross-fitted doubly-robust
estimator that incorporates data-adaptive (e.g. machine learning) estimators of
the conditional survival functions. We establish conditions on the nuisance
estimators under which our estimator is consistent and asymptotically linear,
both pointwise and uniformly in time. We also propose a novel ensemble learner
for combining multiple candidate estimators of the conditional survival
functions. Notably, our methods and results accommodate events occurring in
discrete or continuous time (or both). We investigate the practical performance
of our methods using numerical studies and an application to the effect of a
surgical treatment to prevent metastases of parotid carcinoma on mortality.
|
2106.06602v1
|
2021-06-10
|
Hard Choices in Artificial Intelligence
|
As AI systems are integrated into high stakes social domains, researchers now
examine how to design and operate them in a safe and ethical manner. However,
the criteria for identifying and diagnosing safety risks in complex social
contexts remain unclear and contested. In this paper, we examine the vagueness
in debates about the safety and ethical behavior of AI systems. We show how
this vagueness cannot be resolved through mathematical formalism alone, instead
requiring deliberation about the politics of development as well as the context
of deployment. Drawing from a new sociotechnical lexicon, we redefine vagueness
in terms of distinct design challenges at key stages in AI system development.
The resulting framework of Hard Choices in Artificial Intelligence (HCAI)
empowers developers by 1) identifying points of overlap between design
decisions and major sociotechnical challenges; 2) motivating the creation of
stakeholder feedback channels so that safety issues can be exhaustively
addressed. As such, HCAI contributes to a timely debate about the status of AI
development in democratic societies, arguing that deliberation should be the
goal of AI Safety, not just the procedure by which it is ensured.
|
2106.11022v1
|
2021-06-30
|
A long-period substellar object exhibiting a single transit in Kepler
|
We report the detection of a single transit-like signal in the Kepler data of
the slightly evolved F star KIC4918810. The transit duration is ~45 hours, and
while the orbital period ($P\sim10$ years) is not well constrained, it is one
of the longest among companions known to transit. We calculate the size of the
transiting object to be $R_P = 0.910$ $R_J$. Objects of this size vary by
orders of magnitude in their densities, encompassing masses between that of
Saturn ($0.3$ $M_J$) and stars above the hydrogen-burning limit (~80 $M_J$).
Radial-velocity observations reveal that the companion is unlikely to be a
star. The mass posterior is bimodal, indicating a mass of either ~0.24 $M_J$ or
~26 $M_J$. Continued spectroscopic monitoring should either constrain the mass
to be planetary or detect the orbital motion, the latter of which would yield a
benchmark long-period brown dwarf with a measured mass, radius, and age.
|
2107.00027v1
|
2021-07-02
|
Scaling of Turbulent Viscosity and Resistivity: Extracting a Scale-dependent Turbulent Magnetic Prandtl Number
|
Turbulent viscosity $\nu_t$ and resistivity $\eta_t$ are perhaps the simplest
models for turbulent transport of angular momentum and magnetic fields,
respectively. The associated turbulent magnetic Prandtl number $Pr_t\equiv
\nu_t/\eta_t$ has been well recognized to determine the final magnetic
configuration of accretion disks. Here, we present an approach to determining
these ''effective transport'' coefficients acting at different length-scales
using coarse-graining and recent results on decoupled kinetic and magnetic
energy cascades [Bian & Aluie 2019]. By analyzing the kinetic and magnetic
energy cascades from a suite of high-resolution simulations, we show that our
definitions of $\nu_t$, $\eta_t$, and $Pr_t$ have power-law scalings in the
''decoupled range.'' We observe that $Pr_t\approx1 \text{~to~}2$ at the
smallest inertial-inductive scales, increasing to $\approx 5$ at the largest
scales. However, based on physical considerations, our analysis suggests that
$Pr_t$ has to become scale-independent and of order unity in the decoupled
range at sufficiently high Reynolds numbers (or grid-resolution), and that the
power-law scaling exponents of velocity and magnetic spectra become equal. In
addition to implications to astrophysical systems, the scale-dependent
turbulent transport coefficients offer a guide for large eddy simulation
modeling.
|
2107.00861v1
|
2021-07-24
|
Dual-Attention Enhanced BDense-UNet for Liver Lesion Segmentation
|
In this work, we propose a new segmentation network by integrating DenseUNet
and bidirectional LSTM together with attention mechanism, termed as
DA-BDense-UNet. DenseUNet allows learning enough diverse features and enhancing
the representative power of networks by regulating the information flow.
Bidirectional LSTM is responsible to explore the relationships between the
encoded features and the up-sampled features in the encoding and decoding
paths. Meanwhile, we introduce attention gates (AG) into DenseUNet to diminish
responses of unrelated background regions and magnify responses of salient
regions progressively. In addition, the attention in the bidirectional LSTM
takes into account the differing contributions of the encoded and up-sampled
features to segmentation improvement, and can in turn assign proper weights
to these two kinds of features. We conduct experiments on liver CT image data
sets collected from multiple hospitals, comparing our method with state-of-the-art
segmentation models. Experimental results indicate that our proposed method
DA-BDense-UNet achieves competitive performance in terms of Dice
coefficient, which demonstrates its effectiveness.
|
2107.11645v1
|
2021-08-03
|
Comparative study of magnetic properties of Mn$^{3+}$ magnetic clusters in GaN using classical and quantum mechanical approach
|
Currently, simulations of many-body quantum systems are known to be
computationally too demanding to be solved on classical computers. The main
problem is that the computation time and memory necessary for performing the
calculations usually grow exponentially with the number of particles $N$. An
efficient approach to simulate many-body quantum systems is the use of
classical approximation. However, it is known that, at least at low
temperatures, the allowed spin fluctuations in this approach are overestimated,
which results in enhanced thermal fluctuations. It is therefore timely and
important to assess the validity of the classical approximation. To this end,
in this work, we compare the results of numerical calculations of small
Mn$^{3+}$ magnetic clusters in GaN, where the Mn spins are treated classically
with those where they are treated quantum-mechanically (crystal field model).
In the first case, we solve the Landau-Lifshitz-Gilbert (LLG) equation that
describes the precessional dynamics of spins represented by classical vectors.
On the other hand, in the crystal field model, the state of Mn$^{3+}$ ion
($d^4$ configuration with $S=2$, $L=2$) is characterized by the set of orbital
and spin quantum numbers $|m_S, m_L\rangle$. Particular attention is paid to
using numerical parameters that ensure the same single-ion magnetic anisotropy in
both classical and quantum approximation. Finally, a detailed comparative study
of magnetization $\mathbf{M}(\mathbf{H}, T)$ as a function of the magnetic
field $\mathbf{H}$, temperature $T$, number of ions in a given cluster $N$ and
the strength of super-exchange interaction $J$, obtained from both approaches
will be presented.
|
2108.01474v1
|
2021-08-06
|
Performance trade-offs in cyber-physical control applications with multi-connectivity
|
Modern communication devices are often equipped with multiple wireless
communication interfaces with diverse characteristics. This enables exploiting
a form of multi-connectivity known as interface diversity to provide path
diversity with multiple communication interfaces. Interface diversity helps to
combat the problems suffered by single-interface systems due to error bursts in
the link, which are a consequence of temporal correlation in the wireless
channel. The length of an error burst is an essential performance indicator for
cyber-physical control applications with periodic traffic, as these define the
period in which the control link is unavailable. However, the available
interfaces must be correctly orchestrated to achieve an adequate trade-off
between latency, reliability, and energy consumption. This work investigates
how the packet error statistics from different interfaces impact the overall
latency-reliability characteristics and explores mechanisms to derive adequate
interface diversity policies. For this, we model the optimization problem as a
partially observable Markov Decision Process (POMDP), where the state of each
interface is determined by a Gilbert-Elliott model whose parameters are
estimated based on experimental measurement traces from LTE and Wi-Fi. Our
results show that the POMDP approach provides an all-round adaptable solution,
whose performance is only 0.1% below the absolute upper bound, dictated by the
optimal policy under the impractical assumption of full observability.
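The Gilbert-Elliott model used above to describe each interface is a two-state (good/bad) Markov chain whose bad state produces bursts of packet errors. A minimal sketch of a per-packet erasure simulation; the function name, parameter names, and defaults are illustrative and not drawn from the paper:

```python
import random

def simulate_gilbert_elliott(n, p_gb, p_bg, e_good, e_bad, seed=0):
    """Simulate packet erasures over a two-state Gilbert-Elliott channel.

    p_gb: P(good -> bad) transition; p_bg: P(bad -> good) transition;
    e_good / e_bad: per-packet erasure probability in each state.
    Returns a list of booleans (True = packet erased).
    """
    rng = random.Random(seed)
    state_bad = False  # start in the good state
    erasures = []
    for _ in range(n):
        # Erase the packet according to the current state's loss rate.
        e = e_bad if state_bad else e_good
        erasures.append(rng.random() < e)
        # Markov transition to the next state.
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        else:
            if rng.random() < p_gb:
                state_bad = True
    return erasures
```

Temporal correlation (and hence burst length) is controlled by how small `p_gb` and `p_bg` are; the POMDP above treats the current state as hidden and only partially observable through the erasure trace.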
|
2108.03035v1
|
2021-08-16
|
$Q$-ary non-overlapping codes: a generating function approach
|
Non-overlapping codes are a set of codewords in $\bigcup_{n \ge 2}
\mathbb{Z}_q^n$, where $\mathbb{Z}_q = \{0,1,\dots,q-1\}$, such that, the
prefix of each codeword is not a suffix of any codeword in the set, including
itself; and for variable-length codes, a codeword does not contain any other
codeword as a subword. In this paper, we investigate a generic method to
generalize binary codes to $q$-ary for $q > 2$, and analyze this generalization
on the two constructions given by Levenshtein (also by Gilbert; Chee, Kiah,
Purkayastha, and Wang) and Bilotta, respectively. The generalization on the
former construction gives large non-expandable fixed-length non-overlapping
codes whose size can be explicitly determined; the generalization on the latter
construction is the first attempt to generate $q$-ary variable-length
non-overlapping codes. More importantly, this generic method allows us to
utilize the generating function approach to analyze the cardinality of the
underlying $q$-ary non-overlapping codes. The generating function approach not
only enables us to derive new results, e.g., recurrence relations on their
cardinalities, new combinatorial interpretations for the constructions, and the
limit superior of their cardinalities for some special cases, but also greatly
simplifies the arguments for these results. Furthermore, we give an exact
formula for the number of fixed-length words that do not contain the codewords
in a variable-length non-overlapping code as subwords. This thereby solves an
open problem by Bilotta and induces a recursive upper bound on the maximum size
of variable-length non-overlapping codes.
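The defining property of a non-overlapping code stated above can be checked directly. A small sketch, assuming codewords are given as strings or tuples over $\{0,\dots,q-1\}$ (the function name is illustrative, not from the paper):

```python
def is_non_overlapping(codewords):
    """Check whether a set of q-ary codewords is non-overlapping:
    no proper, non-empty prefix of any codeword is a suffix of any
    codeword (including itself), and no codeword contains another
    as a proper subword (the variable-length condition)."""
    words = [tuple(w) for w in codewords]
    for u in words:
        for v in words:
            # Prefix-of-u must never equal suffix-of-v.
            for k in range(1, len(u)):
                if k <= len(v) and u[:k] == v[-k:]:
                    return False
            # u must not contain a shorter codeword v as a subword.
            if u != v and len(v) < len(u):
                for i in range(len(u) - len(v) + 1):
                    if u[i:i + len(v)] == v:
                        return False
    return True
```

For example, the singleton {110} passes both conditions, while {11} fails the self-overlap check (its prefix 1 is also its suffix).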
|
2108.06934v1
|
2021-08-17
|
Searching For or Reviewing Evidence Improves Crowdworkers' Misinformation Judgments and Reduces Partisan Bias
|
Can crowd workers be trusted to judge whether news-like articles circulating
on the Internet are misleading, or does partisanship and inexperience get in
the way? And can the task be structured in a way that reduces partisanship? We
assembled pools of both liberal and conservative crowd raters and tested three
ways of asking them to make judgments about 374 articles. In a no research
condition, they were just asked to view the article and then render a judgment.
In an individual research condition, they were also asked to search for
corroborating evidence and provide a link to the best evidence they found. In a
collective research condition, they were not asked to search, but instead to
review links collected from workers in the individual research condition. Both
research conditions reduced partisan disagreement in judgments. The individual
research condition was most effective at producing alignment with journalists'
assessments. In this condition, the judgments of a panel of sixteen or more
crowd workers were better than those of a panel of three expert journalists, as
measured by alignment with a held out journalist's ratings.
|
2108.07898v3
|
2021-08-23
|
The Multiverse: Logical Modularity for Proof Assistants
|
Proof assistants play a dual role as programming languages and logical
systems. As programming languages, proof assistants offer standard modularity
mechanisms such as first-class functions, type polymorphism and modules. As
logical systems, however, modularity is lacking, and understandably so:
incompatible reasoning principles -- such as univalence and uniqueness of
identity proofs -- can indirectly lead to logical inconsistency when used in a
given development, even when they appear to be confined to different modules.
The lack of logical modularity in proof assistants also hinders the adoption of
richer programming constructs, such as effects. We propose the multiverse, a
general type-theoretic approach to endow proof assistants with logical
modularity. The multiverse consists of multiple universe hierarchies that
statically describe the reasoning principles and effects available to define a
term at a given type. We identify sufficient conditions for this structuring to
modularly ensure that incompatible principles do not interfere, and to locally
restrict the power of dependent elimination when necessary. This extensible
approach generalizes the ad-hoc treatment of the sort of propositions in the
Coq proof assistant. We illustrate the power of the multiverse by describing
the inclusion of Coq-style propositions, the strict propositions of Gilbert et
al., the exceptional type theory of P\'edrot and Tabareau, and general
axiomatic extensions of the logic.
|
2108.10259v1
|
2021-08-27
|
Distributed Control and Optimization of DC Microgrids: A Port-Hamiltonian Approach
|
This article proposes a distributed secondary control scheme that drives a dc
microgrid to an equilibrium point where the generators share optimal currents,
and the weighted average of their voltages equals the nominal value. The scheme does
not rely on the electric system topology nor its specifications; it guarantees
plug-and-play design and functionality of the generators. First, the
incremental model of the microgrid system with constant impedance, current, and
power devices is shown to admit a port-Hamiltonian (pH) representation, and its
passive output is determined. The economic dispatch problem is then solved by
the Lagrange multipliers method; the Karush-Kuhn-Tucker conditions and weighted
average formation of voltages are then formulated as the control objectives. We
propose a control scheme that is based on the Control by Interconnection design
philosophy, where the consensus-based controller is viewed as a virtual pH
system to be interconnected with the physical one. We prove the regional
asymptotic stability of the closed-loop system using Lyapunov and LaSalle
theorems. Equilibrium analysis is also conducted based on the concepts of graph
theory and economic dispatch. Finally, the effectiveness of the presented
scheme for different case studies is validated with a test microgrid system,
simulated in both MATLAB/Simulink and OPAL-RT environments.
|
2108.12341v1
|
2021-10-23
|
Bootstrap percolation in random geometric graphs
|
Following Bradonji\'c and Saniee, we study a model of bootstrap percolation
on the Gilbert random geometric graph on the $2$-dimensional torus. In this
model, the expected number of vertices of the graph is $n$, and the expected
degree of a vertex is $a\log n$ for some fixed $a>1$. Each vertex is added with
probability $p$ to a set $A_0$ of initially infected vertices. Vertices
subsequently become infected if they have at least $ \theta a \log n $ infected
neighbours. Here $p, \theta \in [0,1]$ are taken to be fixed constants.
We show that if $\theta < (1+p)/2$, then a sufficiently large local outbreak
leads with high probability to the infection spreading globally, with all but
$o(n)$ vertices eventually becoming infected. On the other hand, for $ \theta >
(1+p)/2$, even if one adversarially infects every vertex inside a ball of
radius $O(\sqrt{\log n} )$, with high probability the infection will spread to
only $o(n)$ vertices beyond those that were initially infected.
In addition we give some bounds on the $(a, p, \theta)$ regions ensuring the
emergence of large local outbreaks or the existence of islands of vertices that
never become infected. We also give a complete picture of the (surprisingly
complex) behaviour of the analogous $1$-dimensional bootstrap percolation model
on the circle. Finally we raise a number of problems, and in particular make a
conjecture on an `almost no percolation or almost full percolation' dichotomy
which may be of independent interest.
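The model above is simple to simulate directly: sample points on the unit torus, connect pairs within the radius that gives expected degree $a\log n$, seed infections with probability $p$, and iterate the $\theta a \log n$ threshold rule. A round-based sketch under those assumptions; the function name and its $O(n^2)$ distance computation are illustrative and only suited to small $n$:

```python
import numpy as np

def bootstrap_percolation_rgg(n, a, p, theta, seed=0):
    """Bootstrap percolation on a Gilbert random geometric graph on the
    unit 2-torus. The connection radius r is chosen so that the expected
    degree is a*log(n), i.e. pi * r^2 * n = a * log(n). Vertices with at
    least theta*a*log(n) infected neighbours become infected. Returns
    the final fraction of infected vertices."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))
    r2 = a * np.log(n) / (np.pi * n)  # r^2 from pi r^2 n = a log n
    # Pairwise toroidal distances (wrap coordinates around the torus).
    d = np.abs(pts[:, None, :] - pts[None, :, :])
    d = np.minimum(d, 1.0 - d)
    adj = (d ** 2).sum(-1) <= r2
    np.fill_diagonal(adj, False)
    infected = rng.random(n) < p
    threshold = theta * a * np.log(n)
    while True:
        counts = (adj & infected).sum(axis=1)  # infected neighbours
        newly = (~infected) & (counts >= threshold)
        if not newly.any():
            return infected.mean()
        infected |= newly
```

Sweeping `theta` across $(1+p)/2$ for fixed $a$ and $p$ in such a simulation is one way to visualize the global-spread/local-containment dichotomy established above.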
|
2110.12166v1
|
2021-11-02
|
Orbital Dynamics and the Evolution of Planetary Habitability in the AU Mic System
|
The diversity of planetary systems that have been discovered are revealing
the plethora of possible architectures, providing insights into planet
formation and evolution. They also increase our understanding of system
parameters that may affect planetary habitability, and how such conditions are
influenced by initial conditions. The AU~Mic system is unique among known
planetary systems in that it is a nearby, young, multi-planet transiting
system. Such a young and well-characterized system provides an opportunity for
orbital dynamics and habitability studies of planets in the very early
stages of their evolution. Here, we calculate the evolution of the Habitable
Zone of the system through time, including the pre-main sequence phase that the
system currently resides in. We discuss the planetary atmospheric processes
occurring for an Earth-mass planet during this transitional period, and
provide calculations of the climate state convergence age for both volatile
rich and poor initial conditions. We present results of an orbital dynamical
analysis of the AU~Mic system that demonstrate the rapid eccentricity evolution
of the known planets, and show that terrestrial planets within the Habitable
Zone of the system can retain long-term stability. Finally, we discuss
follow-up observation prospects, detectability of possible Habitable Zone
planets, and how the AU Mic system may be used as a template for studies of
planetary habitability evolution.
|
2111.01816v1
|
2021-11-17
|
Privacy-preserving Federated Learning for Residential Short Term Load Forecasting
|
With high levels of intermittent power generation and dynamic demand
patterns, accurate forecasts for residential loads have become essential. Smart
meters can play an important role when making these forecasts as they provide
detailed load data. However, using smart meter data for load forecasting is
challenging due to data privacy requirements. This paper investigates how these
requirements can be addressed through a combination of federated learning and
privacy preserving techniques such as differential privacy and secure
aggregation. For our analysis, we employ a large set of residential load data
and simulate how different federated learning models and privacy preserving
techniques affect performance and privacy. Our simulations reveal that
combining federated learning and privacy preserving techniques can secure both
high forecasting accuracy and near-complete privacy. Specifically, we find that
such combinations enable a high level of information sharing while ensuring
privacy of both the processed load data and forecasting models. Moreover, we
identify and discuss challenges of applying federated learning, differential
privacy and secure aggregation for residential short-term load forecasting.
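As a rough illustration of how these ingredients fit together, the following sketch implements one round of federated averaging with a differentially private aggregate, in the spirit of DP-FedAvg. The clipping norm and noise multiplier are hypothetical illustration parameters, not values from the paper, and secure aggregation is not modelled here:

```python
import numpy as np

def dp_federated_round(global_model, client_updates, clip_norm=1.0,
                       noise_multiplier=1.0, rng=None):
    """One simplified round of federated averaging with differential privacy.

    Each client's update is clipped in L2 norm, the clipped updates are
    averaged, and calibrated Gaussian noise is added before applying the
    result to the global model (the standard DP-FedAvg recipe).
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append(u * scale)            # bound each client's influence
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(client_updates),
                       size=avg.shape)       # noise calibrated to sensitivity
    return global_model + avg + noise
```

With the noise multiplier set to zero this reduces to plain clipped federated averaging, which makes the privacy/accuracy trade-off explored in the paper easy to see: larger multipliers mean stronger privacy and noisier models.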
|
2111.09248v4
|
2021-11-30
|
The AiiDA-Spirit plugin for automated spin-dynamics simulations and multi-scale modelling based on first-principles calculations
|
Landau-Lifshitz-Gilbert (LLG) spin-dynamics calculations based on the
extended Heisenberg Hamiltonian are an important tool in computational materials
science involving magnetic materials. LLG simulations make it possible to bridge the gap
from expensive quantum mechanical calculations with small unit cells to large
supercells where the collective behavior of millions of spins can be studied.
In this work we present the AiiDA-Spirit plugin that connects the spin-dynamics
code Spirit to the AiiDA framework. AiiDA provides a Python interface that
facilitates performing high-throughput calculations while automatically
augmenting the calculations with metadata describing the data provenance
between calculations in a directed acyclic graph. The AiiDA-Spirit interface
thus provides an easy way for high-throughput spin-dynamics calculations. The
interface to the AiiDA infrastructure furthermore has the advantage that input
parameters for the extended Heisenberg model can be extracted from
high-throughput first-principles calculations including a proper treatment of
the data provenance that ensures reproducibility of the calculation results in
accordance with the FAIR principles. We describe the layout of the AiiDA-Spirit
plugin and demonstrate its capabilities using selected examples for LLG
spin-dynamics and Monte Carlo calculations. Furthermore, the integration with
first-principles calculations through AiiDA is demonstrated using the example of
$\gamma$-Fe, where the complex spin-spiral ground state is investigated.
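The plugin itself drives the Spirit code through AiiDA, and its API is not reproduced here. As a minimal illustration of the underlying physics, the sketch below integrates the normalized LLG equation for a single spin with explicit Euler steps; the damping constant, field, and step size are arbitrary illustration values:

```python
import numpy as np

def llg_step(m, b_field, dt, gamma=1.0, alpha=0.1):
    """One explicit Euler step of the (normalized) Landau-Lifshitz-Gilbert
    equation: precession about the effective field plus Gilbert-like damping
    that relaxes the moment toward the field direction."""
    precession = -gamma * np.cross(m, b_field)
    damping = -gamma * alpha * np.cross(m, np.cross(m, b_field))
    m_new = m + dt * (precession + damping)
    return m_new / np.linalg.norm(m_new)   # renormalize the unit spin
```

Starting from a tilted spin in a field along +z, repeated steps precess the moment around the field while the damping term relaxes it toward full alignment -- the single-spin version of the dynamics Spirit evolves for millions of coupled spins.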
|
2111.15229v1
|
2021-12-10
|
A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions
|
In a world of daily emerging scientific inquiry and discovery, the
prolific launch of machine learning across industries comes as little surprise
to those familiar with the potential of ML. Nor should the congruent
expansion of ethics-focused research that emerged in response to the issues of
bias and unfairness stemming from those very same applications. Fairness
research, which focuses on techniques to combat algorithmic bias, is now more
supported than ever before. A large portion of fairness research has gone toward
producing tools that machine learning practitioners can use to audit for bias
while designing their algorithms. Nonetheless, there is a lack of application
of these fairness solutions in practice. This systematic review provides an
in-depth summary of the algorithmic bias issues that have been defined and the
fairness solution space that has been proposed. Moreover, this review provides
an in-depth breakdown of the caveats to the solution space that have arisen
since their release and a taxonomy of needs that have been proposed by machine
learning practitioners, fairness researchers, and institutional stakeholders.
These needs have been organized and addressed to the parties most influential
to their implementation, which includes fairness researchers, organizations
that produce ML algorithms, and the machine learning practitioners themselves.
These findings can be used in the future to bridge the gap between
practitioners and fairness experts and inform the creation of usable fair ML
toolkits.
|
2112.05700v1
|
2021-12-12
|
Effect of Topological Non-hexagonal Rings and Stone-Wales Defects on the Vibrational Response of Single and Multi-Layer Ion Irradiated Graphene
|
The present study explores the observation of topological non-hexagonal rings
(NHR) and Stone-Wales (SW) defects by Raman experiments in both single-layer (SLG) and
multi-layer graphene (MLG) after they are irradiated with 100-300 eV Ar ions.
Although predicted by theoretical studies, here it is experimentally shown for
the first time that graphene SW/NHR defects have a signature in Raman. Broad
bandwidth of the pertinent Raman features suggests the presence of more than
one SW/NHR defect mode, in agreement with the DFT studies. Variations in the
SW/NHR related Raman mode intensities demonstrate the annihilation of these
topological defects at higher energies. Behavior of Raman allowed G and 2D
excitations, as well as the disorder-activated D, D' and G* lines, has also
been investigated in SLG and MLG. These indicate an evolution of defects in
graphene with ion irradiation, as well as presence of a transition state beyond
which the Raman modes are dominated by a rise in sp3 content. Correlation of
these aspects with the SW/NHR Raman provide significant insight into ion
induced evolution of graphene. The direct observation of SW/NHR defects by
Raman spectroscopy could be important in promoting exploration of rich
topological aspects of Graphene in various fields.
|
2112.06294v1
|
2021-12-16
|
Minimal blowing pressure allowing periodic oscillations in a model of bass brass instruments
|
In this study, an acoustic resonator -- a bass brass instrument -- with
multiple resonances coupled to an exciter -- the player's lips -- with one
resonance is modelled by a multidimensional dynamical system, and studied using
a continuation and bifurcation software. Bifurcation diagrams are explored with
respect to the blowing pressure, in particular with focus on the minimal
blowing pressure allowing stable periodic oscillations and the associated
frequency. The behaviour of the instrument is first studied close to a
(non-oscillating) equilibrium using linear stability analysis. This makes it possible to
determine the conditions under which an equilibrium destabilises and hence where
oscillating regimes can emerge (corresponding to a sound production). This
approach is useful to characterise the ease of playing of a brass instrument,
which is assumed here to be related -- as a first approximation -- to the
linear threshold pressure. In particular, the lower the threshold pressure, the
lower the physical effort the player has to make to play a note [Campbell et
al., 2021]. Cases are highlighted where periodic solutions in the bifurcation
diagrams are reached for blowing pressures below the value given by the linear
stability analysis. Thus, bifurcation diagrams allow a more in-depth analysis.
Particular attention is devoted to the first playing regime of bass brass
instruments (the pedal note and the ghost note of a tuba in particular), whose
behaviour qualitatively differs from a trombone to a euphonium for instance.
|
2112.08751v2
|
2021-12-20
|
Refined modelling of the radio SZ signal: kinematic terms, relativistic temperature corrections and anisotropies in the radio background
|
A significant cosmological radio background will inevitably lead to a radio
Sunyaev-Zeldovich (SZ) effect. In the simplest limit, the combined signal from
the scattered radio and cosmic microwave background exhibits a null at around
$\nu \simeq 735$ MHz. Here, we show that kinematic and relativistic temperature
corrections to this radio SZ signal are easily calculable. We treat both the
cluster and observer motion, and the scattering of anisotropies in the radio
background, highlighting how the spectrum of the radio SZ effect is affected in
each case. Although relativistic temperature corrections only enter at the
level of a few percent, our expressions allow high-precision modelling of these
terms. By measuring the SZ signal around the radio null, one is in principle
able to place constraints on the properties of a cosmological radio background.
A combination with standard SZ measurements from large cluster samples could
provide a promising avenue towards breaking degeneracies between different
contributions. Stacking analyses can reduce the effect of kinematic corrections
and dipolar anisotropies in the radio background, thereby providing a way to
constrain the redshift dependence of the average radio background. Our
qualitative discussion is meant to give an analytic understanding of the
various effects and also motivate further studies with the aim to obtain
quantitative forecasts of their observability. At this stage, a detection of
the corrections seems rather futuristic, but the advent of large SZ and X-ray
cluster samples could drastically improve our ability to disentangle various
effects.
|
2112.10666v2
|
2021-12-22
|
Conductive and convective heat transfer in inductive heating of subsea buried pipelines
|
Inductive heating with high-voltage cables reduces the risk of hydrate
formation by raising the temperature of the production fluid in pipelines.
Heating the pipeline results in losing a certain fraction of the heat to the
surrounding soil through conduction or convection-dominated flow through the
soil. However, the amount of heat lost in conduction versus convection and the
transition from conduction to convection-dominated heat loss remains unknown.
Soil permeability, temperature gradient between cable and mudline, and burial
depth influence the mode of heat transfer and the amount of heat lost. We study
the dominant mode of heat transfer in pipelines with inductive heating using 2D
Finite Difference analysis under different soil and environmental conditions.
Low permeability soils primarily exhibit conductive heat transfer, thus losing
minimum heat to the surrounding soil. In contrast, convective flow drives a
significant fraction of the heat away from the pipeline and towards the ground
surface for highly permeable soils, barely heating the fluid in the pipe. We
identify a critical Rayleigh-Darcy number of 1 as the controlling value
separating conduction and convection-dominated heat transfer. An increase in
burial depth deteriorates the heating efficiency in convection-dominated high
permeability soils, while it remains unaffected in conduction-dominated low
permeability soils.
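The critical-number criterion can be made concrete with a back-of-the-envelope calculation of the Rayleigh-Darcy number for porous-medium convection. The fluid property values below (water near 20 C) are generic assumptions for illustration, not the study's parameters:

```python
def rayleigh_darcy(permeability, delta_t, depth,
                   rho=1000.0,    # fluid density, kg/m^3 (assumed)
                   g=9.81,        # gravity, m/s^2
                   beta=2.1e-4,   # thermal expansion coeff., 1/K (assumed)
                   mu=1.0e-3,     # dynamic viscosity, Pa s (assumed)
                   alpha=1.4e-7): # thermal diffusivity, m^2/s (assumed)
    """Rayleigh-Darcy number for buoyant flow through a porous seabed:
    Ra_D = rho * g * beta * dT * K * H / (mu * alpha)."""
    return rho * g * beta * delta_t * permeability * depth / (mu * alpha)

def heat_transfer_mode(ra_d, critical=1.0):
    """Classify against the critical Rayleigh-Darcy number of 1."""
    return "convection-dominated" if ra_d > critical else "conduction-dominated"
```

With these values and a 30 K temperature difference over 1 m, a permeability of $10^{-14}$ m$^2$ falls well below the critical value of 1 (conduction-dominated), while $10^{-11}$ m$^2$ lies above it.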
|
2112.11826v1
|
2021-12-28
|
Phonon, Electron, and Magnon Excitations in Antiferromagnetic L1$_{0}$-type MnPt
|
Antiferromagnetic L1$_{0}$-type MnPt is a material with relatively simple
crystal and magnetic structure, recently attracting interest due to its high
N{\'{e}}el temperature and wide usage as a pinning layer in magnetic devices.
While it is experimentally well characterized, the theoretical understanding is
much less developed, in part due to the challenging accuracy requirements
dictated by the small underlying energy scales that govern magnetic ordering in
antiferromagnetic metals. In this work, we use density functional theory, the
Korringa-Kohn-Rostoker formalism, and a Heisenberg model to establish a
comprehensive theoretical description of antiferromagnetic L1$_{0}$-type MnPt,
along with accuracy limits, by thoroughly comparing to available literature
data. Our simulations show that the contribution of the magnetic dipole
interaction to the magnetocrystalline anisotropy energy of $K_{1}$=1.07$\times
10^{6}$\,J/m$^3$ is comparable in magnitude to the spin-orbit contribution.
Using our result for the magnetic susceptibility of $5.25\times10^{-4}$, a
lowest magnon frequency of about 2.02\,THz is predicted, confirming THz spin
dynamics in this material. From our data for electron, phonon, and magnon
dispersion we compute the individual contributions to the total heat capacity
and show that the dominant term at or above 2\,K arises from phonons. From the
Landau-Lifshitz-Gilbert equation, we compute a N\'{e}el temperature of
990--1070 K. Finally, we quantify the magnitude of the magneto-optical Kerr
effect generated by applying an external magnetic field. Our results provide
insight into the underlying physics, which is critical for a deep understanding
of fundamental limits of the time scale of spin dynamics, stability of the
magnetic ordering, and the possibility of magneto-optical detection of
collective spin motion.
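As a toy counterpart to the phonon contribution discussed above, the Debye model gives a one-integral approximation to the lattice heat capacity. This is a generic illustration only: the paper computes the heat capacity from the full phonon dispersion, and the Debye temperature here is a free parameter:

```python
import math

def debye_heat_capacity(t, theta_d, n_atoms=1.0, kb=1.380649e-23):
    """Debye-model phonon heat capacity (J/K):
    C = 9*N*kB*(T/Theta_D)^3 * integral_0^{Theta_D/T} x^4 e^x/(e^x-1)^2 dx.
    The integrand is rewritten as x^4 e^{-x}/(1-e^{-x})^2 to avoid overflow."""
    if t <= 0:
        return 0.0
    upper = theta_d / t
    steps = 10000
    dx = upper / steps
    integral = 0.0
    for i in range(1, steps + 1):
        x = (i - 0.5) * dx               # midpoint rule
        ex = math.exp(-x)
        integral += x**4 * ex / (1.0 - ex)**2 * dx
    return 9.0 * n_atoms * kb * (t / theta_d)**3 * integral
```

In the high-temperature limit this recovers the Dulong-Petit value of $3 N k_{\mathrm{B}}$, while at low temperature it falls off as $T^3$, which is why the phonon term can dominate the heat capacity already at a few kelvin.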
|
2112.13954v1
|
2022-01-22
|
Estimation and Hypothesis Testing of Strain-Specific Vaccine Efficacy with Missing Strain Types, with Applications to a COVID-19 Vaccine Trial
|
Statistical methods are developed for analysis of clinical and virus genetics
data from phase 3 randomized, placebo-controlled trials of vaccines against
novel coronavirus COVID-19. Vaccine efficacy (VE) of a vaccine to prevent
COVID-19 caused by one of finitely many genetic strains of SARS-CoV-2 may vary
by strain. The problem of assessing differential VE by viral genetics can be
formulated under a competing risks model where the endpoint is virologically
confirmed COVID-19 and the cause-of-failure is the infecting SARS-CoV-2
genotype. Strain-specific VE is defined as one minus the cause-specific hazard
ratio (vaccine/placebo). For the COVID-19 VE trials, the time to COVID-19 is
right-censored, and a substantial percentage of failure cases are missing the
infecting virus genotype. We develop estimation and hypothesis testing
procedures for strain-specific VE when the failure time is subject to right
censoring and the cause-of-failure is subject to missingness, focusing on $J
\ge 2$ discrete categorical unordered or ordered virus genotypes. The
stratified Cox proportional hazards model is used to relate the cause-specific
outcomes to explanatory variables. The inverse probability weighted
complete-case (IPW) estimator and the augmented inverse probability weighted
complete-case (AIPW) estimator are investigated. Hypothesis tests are developed
to assess whether the vaccine provides at least a specified level of efficacy
against some viral genotypes and whether VE varies across genotypes, adjusting
for covariates. The finite-sample properties of the proposed tests are studied
through simulations and are shown to have good performances. In preparation for
the real data analyses, the developed methods are applied to a pseudo dataset
mimicking the Moderna COVE trial.
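The idea behind the IPW complete-case estimator can be sketched in a few lines: failures with an observed genotype are up-weighted by the inverse probability that the genotype was ascertained, compensating for the missing cases. This toy version estimates a cause-specific proportion rather than the stratified Cox quantities used in the paper:

```python
import numpy as np

def ipw_cause_proportion(cause, observed, p_observed, target_cause):
    """Inverse-probability-weighted estimate of the fraction of failures
    attributable to `target_cause` when some causes are missing at random.

    cause:      array of cause-of-failure labels (ignored where observed == 0,
                since those entries get zero weight)
    observed:   1 if the cause was ascertained, 0 if missing
    p_observed: probability of ascertainment for each failure
    """
    cause = np.asarray(cause)
    observed = np.asarray(observed, dtype=float)
    w = observed / np.asarray(p_observed, dtype=float)   # IPW weights
    return np.sum(w * (cause == target_cause)) / np.sum(w)
```

With full ascertainment the weights are all 1 and this is just the complete-case proportion; when half the cases in one stratum are missing, doubling the weight of the observed half restores an unbiased estimate under the missing-at-random assumption.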
|
2201.08946v1
|
2022-01-30
|
OverChain: Building a robust overlay with a blockchain
|
Blockchains use peer-to-peer networks for disseminating information among
peers, but these networks currently do not have any provable guarantees for
desirable properties such as Byzantine fault tolerance, good connectivity and
small diameter. This is not just a theoretical problem, as recent works have
exploited unsafe peer connection policies and weak network synchronization to
mount partitioning attacks on Bitcoin. Cryptocurrency blockchains are safety
critical systems, so we need principled algorithms to maintain their networks.
Our key insight is that we can leverage the blockchain itself to share
information among the peers, and thus simplify the network maintenance process.
Given that the peers have restricted computational resources, and at most a
constant fraction of them are Byzantine, we provide communication-efficient
protocols to maintain a hypercubic network for blockchains, where peers can
join and leave over time. Interestingly, we discover that our design can
\emph{recover} from substantial adversarial failures. Moreover, these
properties hold despite significant churn.
A key contribution is a secure mechanism for joining the network that uses
the blockchain to help new peers to contact existing peers. Furthermore, by
examining how peers join the network, i.e., the "bootstrapping service," we
give a lower bound showing that (within log factors) our network tolerates the
maximum churn rate possible. In fact, we can give a lower bound on churn for
any fully distributed service that requires connectivity.
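The hypercubic topology itself is easy to sketch: with $2^d$ peers labelled by $d$-bit ids, each peer connects to the $d$ peers whose ids differ from its own in exactly one bit. This illustrates only the static topology, not the paper's join/leave or recovery protocols:

```python
def hypercube_neighbors(peer_id, dim):
    """Neighbours of a peer in a dim-dimensional hypercube overlay:
    flip each of the dim bits of the peer's id in turn."""
    return [peer_id ^ (1 << i) for i in range(dim)]
```

Peer 5 (binary 101) in a 3-cube, for instance, is adjacent to peers 4, 7, and 1, so every peer maintains only logarithmically many links while the network diameter stays at $d$.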
|
2201.12809v1
|
2022-02-04
|
Three-axis torque investigation of interfacial exchange coupling in a NiFe/CoO bilayer micromagnetic disk
|
Micrometer diameter bilayers of NiFe (permalloy, Py) and cobalt oxide (CoO)
deposited on nanomechanical resonators were used to investigate exchange bias
effects. The mechanical compliances of two resonator axes were enhanced by
severing one torsion arm, resulting in a unique three-axis resonator that
responds resonantly to torques generated by a three-axis RF field. Our
technique permits simultaneous measurement of three orthogonal torque
components. Measurements of the anisotropies associated with interfacial
exchange coupling effects have been made. At cryogenic temperatures,
observations of shifted linear hysteresis loops confirmed the presence of
exchange bias from the Py/CoO interface. An in-plane rotating DC bias field was
used to probe in-plane anisotropies through the out-of-plane torque. Training
effects in the rotational hysteresis data were observed and showed that
features due to interfacial coupling did not diminish despite
substantial training of the unidirectional anisotropy. The data from the
rotational hysteresis loops were fit with parameters from a macrospin solution
to the Landau-Lifshitz-Gilbert equation. Each parameter of the exchange bias
model accounts for specific features of the rotational loop.
|
2202.02386v1
|
2022-02-11
|
Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems
|
In the long term, reinforcement learning (RL) is considered by many AI
theorists to be the most promising path to artificial general intelligence.
This places RL practitioners in a position to design systems that have never
existed before and lack prior documentation in law and policy. Public agencies
could intervene on complex dynamics that were previously too opaque to
deliberate about, and long-held policy ambitions would finally be made
tractable. In this whitepaper we illustrate this potential and how it might be
technically enacted in the domains of energy infrastructure, social media
recommender systems, and transportation. Alongside these unprecedented
interventions come new forms of risk that exacerbate the harms already
generated by standard machine learning tools. We correspondingly present a new
typology of risks arising from RL design choices, falling under four
categories: scoping the horizon, defining rewards, pruning information, and
training multiple agents. Rather than allowing RL systems to unilaterally
reshape human domains, policymakers need new mechanisms for the rule of reason,
foreseeability, and interoperability that match the risks these systems pose.
We argue that criteria for these choices may be drawn from emerging subfields
within antitrust, tort, and administrative law. It will then be possible for
courts, federal and state agencies, and non-governmental organizations to play
more active roles in RL specification and evaluation. Building on the "model
cards" and "datasheets" frameworks proposed by Mitchell et al. and Gebru et
al., we argue the need for Reward Reports for AI systems. Reward Reports are
living documents for proposed RL deployments that demarcate design choices.
|
2202.05716v1
|
2022-02-22
|
Entropy-driven order in an array of nanomagnets
|
Long-range ordering is typically associated with a decrease in entropy. Yet,
it can also be driven by increasing entropy in certain special cases. We
demonstrate that artificial spin ice arrays of single-domain nanomagnets can be
designed to produce entropy-driven order. We focus on the tetris artificial
spin ice structure, a highly frustrated array geometry with a zero-point Pauli
entropy, which is formed by selectively creating regular vacancies on the
canonical square ice lattice. We probe thermally active tetris artificial spin
ice both experimentally and through simulations, measuring the magnetic moments
of the individual nanomagnets. We find two-dimensional magnetic ordering in one
subset of these moments, which we demonstrate to be induced by disorder (i.e.,
increased entropy) in another subset of the moments. In contrast with other
entropy-driven systems, the discrete degrees of freedom in tetris artificial
spin ice are binary and are both designable and directly observable at the
microscale, and the entropy of the system is precisely calculable in
simulations. This example, in which the system's interactions and ground state
entropy are well-defined, expands the experimental landscape for the study of
entropy-driven ordering.
|
2202.11010v1
|
2022-03-30
|
Kinematics and Metallicity of Red Giant Branch Stars in the Northeast Shelf of M31
|
We obtained Keck/DEIMOS spectra of 556 individual red giant branch stars in 4
spectroscopic fields spanning $13-31$ projected kpc along the Northeast (NE)
shelf of M31. We present the first detection of a complete wedge pattern in the
space of projected M31-centric radial distance versus line-of-sight velocity
for this feature, which includes the returning stream component of the shelf.
This wedge pattern agrees with expectations of a tidal shell formed in a radial
merger and provides strong evidence in favor of predictions of Giant Stellar
Stream (GSS) formation models in which the NE shelf originates from the second
orbital wrap of the tidal debris. The observed concentric wedge patterns of the
NE, West (W), and Southeast (SE) shelves corroborate this interpretation
independently of the models. We do not detect a kinematical signature in the NE
shelf region corresponding to an intact progenitor core, favoring GSS formation
models in which the progenitor is completely disrupted. The shelf's photometric
metallicity distribution implies that it is dominated by tidal material, as
opposed to the phase-mixed stellar halo or the disk. The metallicity
distribution ([Fe/H]$_{\rm phot}$ = $-0.42$ $\pm$ $0.01$) also matches the GSS,
and consequently the W and SE shelves, further supporting a direct physical
association between the tidal features.
|
2203.16675v1
|
2022-04-06
|
Stability and Safety through Event-Triggered Intermittent Control with Application to Spacecraft Orbit Stabilization
|
In systems where the ability to actuate is a scarce resource, e.g.,
spacecrafts, it is desirable to only apply a given controller in an
intermittent manner--with periods where the controller is on and periods where
it is off. Motivated by the event-triggered control paradigm, where
state-dependent triggers are utilized in a sample-and-hold context, we
generalize this concept to include state triggers where the controller is off
thereby creating a framework for intermittent control. Our approach utilizes
certificates--either Lyapunov or barrier functions--to design intermittent
trigger laws that guarantee stability or safety; the controller is turned on
for the period for which is beneficial with regard to the certificate, and
turned off until a performance threshold is reached. The main result of this
paper is that the intermittent controller scheme guarantees (set) stability
when Lyapunov functions are utilized, and safety (forward set invariance) in
the setting of barrier functions. As a result, our trigger designs can leverage
the intermittent nature of the actuator, and at the same time, achieve the task
of stabilization or safety. We further demonstrate the application and benefits
of intermittent control in the context of the spacecraft orbit stabilization
problem.
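A scalar toy example shows the on/off certificate logic: for an unstable system $\dot{x} = a x + u$, the feedback $u = -kx$ is switched on when the Lyapunov function $V(x) = x^2$ crosses an upper trigger and off once it has decayed below a lower threshold. All gains, thresholds, and step sizes here are hypothetical illustration values, not the paper's spacecraft design:

```python
def simulate_intermittent(x0, a=0.5, k=2.0, v_on=1.0, v_off=0.1,
                          dt=0.001, steps=20000):
    """Sketch of Lyapunov-triggered intermittent control for dx/dt = a*x + u.

    The controller u = -k*x is switched ON when V(x) = x^2 grows past v_on
    and OFF once V has decayed below v_off, so actuation is used only
    intermittently while V (and hence the state) stays bounded.
    """
    x, on = x0, False
    history = []
    for _ in range(steps):
        v = x * x
        if not on and v >= v_on:
            on = True          # trigger: certificate threshold reached
        elif on and v <= v_off:
            on = False         # release: performance threshold met
        u = -k * x if on else 0.0
        x += dt * (a * x + u)  # explicit Euler step
        history.append(x)
    return history
```

The state then cycles between the two level sets of $V$, remaining bounded while the actuator is active only part of the time -- the scarce-actuation behaviour the trigger design is meant to deliver.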
|
2204.03110v1
|