| publicationDate | title | abstract | id |
|---|---|---|---|
2023-12-24
|
Sphaleron rate as an inverse problem: a novel lattice approach
|
We compute the sphaleron rate on the lattice. We adopt a novel strategy based
on the extraction of the spectral density via a modified version of the
Backus-Gilbert method from finite-lattice-spacing and finite-smoothing-radius
Euclidean topological charge density correlators. The physical sphaleron rate
is computed by performing controlled continuum limit and zero-smoothing
extrapolations both in pure gauge and, for the first time, in full QCD.
|
2312.15468v1
|
1999-12-17
|
Expectations For an Interferometric Sunyaev-Zel'dovich Effect Survey for Galaxy Clusters
|
Non-targeted surveys for galaxy clusters using the Sunyaev-Zel'dovich effect
(SZE) will yield valuable information on both cosmology and evolution of the
intra-cluster medium (ICM). The redshift distribution of detected clusters will
constrain cosmology, while the properties of the discovered clusters will be
important for studies of the ICM and galaxy formation. Estimating survey yields
requires a detailed model for both cluster properties and the survey strategy.
We address this by making mock observations of galaxy clusters in cosmological
hydrodynamical simulations. The mock observatory consists of an interferometric
array of ten 2.5 m diameter telescopes, operating at a central frequency of 30
GHz with a bandwidth of 8 GHz. We find that clusters with a mass above $2.5
\times 10^{14} h_{50}^{-1} M_\odot$ will be detected at any redshift, with the
exact limit showing a very modest redshift dependence. Using a Press-Schechter
prescription for evolving the number densities of clusters with redshift, we
determine that such a survey should find hundreds of galaxy clusters per year,
many at high redshifts and relatively low mass -- an important regime uniquely
accessible to SZE surveys. Currently favored cosmological models predict
roughly 25 clusters per square degree.
|
9912364v2
|
2000-02-17
|
K-Band Spectroscopy of an Obscured Massive Stellar Cluster in the Antennae Galaxies (NGC 4038/4039) with NIRSPEC
|
We present infrared spectroscopy of the Antennae Galaxies (NGC 4038/4039)
with NIRSPEC at the W. M. Keck Observatory. We imaged the star clusters in the
vicinity of the southern nucleus (NGC 4039) in 0.39" seeing in K-band using
NIRSPEC's slit-viewing camera. The brightest star cluster revealed in the
near-IR (M_K(0) = -17.9) is insignificant optically, but coincident with the
highest surface brightness peak in the mid-IR (12-18 um) ISO image presented by
Mirabel et al (1998). We obtained high signal-to-noise 2.03-2.45 um spectra of
the nucleus and the obscured star cluster at R = 1900.
The cluster is very young (age ~ 4 Myr), massive (M ~ 16E6 M_sun), and
compact (density ~ 115 M_sun pc^(-3) within a 32 pc half-light radius),
assuming a Salpeter IMF (0.1-100 M_sun). Its hot stars have a radiation field
characterized by T_eff ~ 39,000 K, and they ionize a compact HII region with
n_e ~ 10^4 cm^(-3). The stars are deeply embedded in gas and dust (A_V = 9-10
mag), and their strong FUV field powers a clumpy photodissociation region with
densities n_H > 10^5 cm^(-3) on scales of ~ 200 pc, radiating L{H_2 1-0 S(1)}=
9600 L_sun.
|
0002357v1
|
2003-02-20
|
The Reionization History at High Redshifts II: Estimating the Optical Depth to Thomson Scattering from CMB Polarization
|
In light of the recent inference of a high optical depth to Thomson
scattering, tau, from the WMAP data we investigate the effects of extended
periods of partial ionization and ask if the value of tau inferred by assuming
a single sharp transition is an unbiased estimate. We construct and consider
several representative ionization models and evaluate their signatures in the
CMB. If tau is estimated with a single sharp transition we show that there can
be a significant bias in the derived value (and therefore a bias in sigma8 as
well). For WMAP noise levels the bias in tau is smaller than the statistical
uncertainty, but for Planck or a cosmic variance limited experiment the tau
bias could be much larger than the statistical uncertainties. This bias can be
reduced in the ionization models we consider by fitting a slightly more
complicated ionization history, such as a two-step ionization process. Assuming
this two-step process we find the Planck satellite can simultaneously determine
the initial redshift of reionization to +-2 and tau to +-0.01. Uncertainty about
the ionization history appears to provide a limit of about 0.005 on how well
tau can be estimated from CMB polarization data, much better than expected from
WMAP but significantly worse than expected from cosmic-variance limits.
|
0302404v2
|
2007-02-27
|
The Sunyaev-Zeldovich Background
|
The cosmic background due to the Sunyaev-Zeldovich (SZ) effect is expected to
be the largest signal at mm and cm wavelengths at a resolution of a few
arcminutes. We investigate some simple statistics of SZ maps and their scaling
with the normalization of the matter power spectrum, sigma_8, as well as the
effects of the unknown physics of the intracluster medium on these statistics.
We show that the SZ background itself significantly limits SZ cluster
searches, with the onset of confusion occurring around 10^{14} h^{-1} solar
masses in a cosmology-dependent way, where confusion is defined as typical
errors in recovered flux larger than 20%. The confusion limit corresponds to
the mass at which there are roughly ten clusters per square degree, with this
number nearly independent of cosmology and cluster gas physics. Typical errors
grow quickly as lower mass objects are included in the catalog.
We also point out that there is nothing in particular about the rms of the
filtered map that makes it especially well-suited for capturing aspects of the
SZ effect, and other indicators of the one-point SZ probability distribution
function are at least as well suited for the task. For example, the full width
at half maximum of the one point probability distribution has a field-to-field
scatter that is about 60% that of the rms.
The simplest statistics of SZ maps are largely unaffected by cluster physics
such as preheating, although the impact of preheating is clear by eye in the
maps. Studies aimed at learning about the physics of the intracluster medium
will apparently require more specialized statistical indicators.
|
0702727v1
|
1998-01-23
|
An Analytical Construction of the SRB Measures for Baker-type Maps
|
For a class of dynamical systems, called the axiom-A systems, Sinai, Ruelle
and Bowen showed the existence of an invariant measure (SRB measure) weakly
attracting the temporal average of any initial distribution that is absolutely
continuous with respect to the Lebesgue measure. Recently, the SRB measures
were found to be related to the nonequilibrium stationary state distribution
functions for thermostated or open systems. Despite the importance of these
SRB measures, it is difficult to handle them analytically because they are
often singular functions. In this article, for three kinds of Baker-type maps,
the SRB measures are analytically constructed with the aid of a functional
equation, which was proposed by de Rham in order to deal with a class of
singular functions. We first briefly review the properties of singular
functions including those of de Rham. Then, the Baker-type maps are described,
one of which is non-conservative but time reversible, the second has a
Cantor-like invariant set, and the third is a model of a simple chemical
reaction $R \leftrightarrow I \leftrightarrow P$. For the second example, the
cases with and without escape are considered. For the last example, we consider
the reaction processes in a closed system and in an open system under a flux
boundary condition. In all cases, we show that the evolution equation of the
distribution functions partially integrated over the unstable direction is very
similar to de Rham's functional equation and, employing this analogy, we
explicitly construct the SRB measures.
|
9801031v2
|
1998-04-08
|
Entropy Production : From Open Volume Preserving to Dissipative Systems
|
We generalize Gaspard's method for computing the \epsilon-entropy production
rate in Hamiltonian systems to dissipative systems with attractors considered
earlier by T\'el, Vollmer, and Breymann. This approach leads to a natural
definition of a coarse grained Gibbs entropy which is extensive, and which can
be expressed in terms of the SRB measures and volumes of the coarse graining
sets which cover the attractor. One can also study the entropy and entropy
production as functions of the degree of resolution of the coarse graining
process, and examine the limit as the coarse graining size approaches zero. We
show that this definition of the Gibbs entropy leads to a positive rate of
irreversible entropy production for reversible dissipative systems. We apply
the method to the case of a two dimensional map, based upon a model considered
by Vollmer, T\'el and Breymann, that is a deterministic version of a
biased-random walk. We treat both volume preserving and dissipative versions of
the basic map, and make a comparison between the two cases. We discuss the
\epsilon-entropy production rate as a function of the size of the coarse
graining cells for these biased-random walks and, for an open system with flux
boundary conditions, show regions of exponential growth and decay of the rate
of entropy production as the size of the cells decreases. This work describes
in some detail the relation between the results of Gaspard, those of T\'el,
Vollmer and Breymann, and those of Ruelle, on entropy production in various
systems described by Anosov or Anosov-like maps.
|
9804009v2
|
1998-07-23
|
A priori bounds for co-dimension one isometric embeddings
|
We prove a priori bounds for the trace of the second fundamental form of a
$C^4$ isometric embedding into $R^{n+1}$ of a metric $g$ of non-negative
sectional curvature on $S^n$, in terms of the scalar curvature, and the
diameter of $g$. These estimates give a bound on the extrinsic geometry in
terms of intrinsic quantities. They generalize estimates originally obtained by
Weyl for the case $n=2$ and positive curvature, and then by P. Guan and the
first author for non-negative curvature and $n=2$. Using $C^{2,\alpha}$
interior estimates of Evans and Krylov for concave fully nonlinear elliptic
partial differential equations, these bounds allow us to obtain the following
convergence theorem: For any $\epsilon>0$, the set of metrics of non-negative
sectional curvature and scalar curvature bounded below by $\epsilon$ which are
isometrically embedable in Euclidean space $R^{n+1}$ is closed in the H\"older
space $C^{4,\alpha}$, $0<\alpha<1$. These results are obtained in an effort to
understand the following higher dimensional version of the Weyl embedding
problem which we propose: \emph{Suppose that $g$ is a smooth metric of
non-negative sectional curvature and positive scalar curvature on $S^n$ which
is locally isometrically embeddable in $R^{n+1}$. Does $(S^n,g)$ then admit a
smooth global isometric embedding into $R^{n+1}$?}
|
9807130v1
|
2002-07-02
|
Active and Passive Fields in Turbulent Transport: the Role of Statistically Preserved Structures
|
We have recently proposed that the statistics of active fields (which affect
the velocity field itself) in well-developed turbulence are also dominated by
the Statistically Preserved Structures of auxiliary passive fields which are
advected by the same velocity field. The Statistically Preserved Structures are
eigenmodes of eigenvalue 1 of an appropriate propagator of the decaying
(unforced) passive field, or equivalently, the zero modes of a related
operator. In this paper we investigate further this surprising finding via two
examples, one akin to turbulent convection in which the temperature is the
active scalar, and the other akin to magneto-hydrodynamics in which the
magnetic field is the active vector. In the first example, all the even
correlation functions of the active and passive fields exhibit identical
scaling behavior. The second example appears at first sight to be a
counter-example: the statistical objects of the active and passive fields have
entirely different scaling exponents. We demonstrate nevertheless that the
Statistically Preserved Structures of the passive vector dominate again the
statistics of the active field, except that due to a dynamical conservation law
the amplitude of the leading zero mode cancels exactly. The active vector is
then dominated by the sub-leading zero mode of the passive vector. Our work
thus suggests that the statistical properties of active fields in turbulence
can be understood with the same generality as those of passive fields.
|
0207005v1
|
2001-06-07
|
Secrecy, Computational Loads and Rates in Practical Quantum Cryptography
|
A number of questions associated with practical implementations of quantum
cryptography systems having to do with unconditional secrecy, computational
loads and effective secrecy rates in the presence of perfect and imperfect
sources are discussed. The different types of unconditional secrecy, and their
relationship to general communications security, are discussed in the context
of quantum cryptography. In order to actually carry out a quantum cryptography
protocol it is necessary that sufficient computational resources be available
to perform the various processing steps, such as sifting, error correction,
privacy amplification and authentication. We display the full computer machine
instruction requirements needed to support a practical quantum cryptography
implementation. We carry out a numerical comparison of system performance
characteristics for implementations that make use of either weak coherent
sources of light or perfect single photon sources, for eavesdroppers making
individual attacks on the quantum channel characterized by different levels of
technological capability. We find that, while in some circumstances it is best
to employ perfect single photon sources, in other situations it is preferable
to utilize weak coherent sources. In either case the secrecy level of the final
shared cipher is identical, with the relevant distinguishing figure-of-merit
being the effective throughput rate.
|
0106043v2
|
2001-08-02
|
Privacy Amplification in Quantum Key Distribution: Pointwise Bound versus Average Bound
|
In order to be practically useful, quantum cryptography must not only provide
a guarantee of secrecy, but it must provide this guarantee with a useful,
sufficiently large throughput value. The standard result of generalized privacy
amplification yields an upper bound only on the average value of the mutual
information available to an eavesdropper. Unfortunately this result by itself
is inadequate for cryptographic applications. A naive application of the
standard result leads one to incorrectly conclude that an acceptable upper
bound on the mutual information has been achieved. It is the pointwise value of
the bound on the mutual information, associated with the use of some specific
hash function, that corresponds to actual implementations. We provide a fully
rigorous mathematical derivation that shows how to obtain a cryptographically
acceptable upper bound on the actual, pointwise value of the mutual
information. Unlike the bound on the average mutual information, the value of
the upper bound on the pointwise mutual information and the number of bits by
which the secret key is compressed are specified by two different parameters,
and the actual realization of the bound in the pointwise case is necessarily
associated with a specific failure probability. The constraints amongst these
parameters, and the effect of their values on the system throughput, have not
been previously analyzed. We show that the necessary shortening of the key
dictated by the cryptographically correct, pointwise bound, can still produce
viable throughput rates that will be useful in practice.
|
0108013v1
|
2008-03-27
|
Assessing surrogate endpoints in vaccine trials with case-cohort sampling and the Cox model
|
Assessing immune responses to study vaccines as surrogates of protection
plays a central role in vaccine clinical trials. Motivated by three ongoing or
pending HIV vaccine efficacy trials, we consider such surrogate endpoint
assessment in a randomized placebo-controlled trial with case-cohort sampling
of immune responses and a time to event endpoint. Based on the principal
surrogate definition under the principal stratification framework proposed by
Frangakis and Rubin [Biometrics 58 (2002) 21--29] and adapted by Gilbert and
Hudgens (2006), we introduce estimands that measure the value of an immune
response as a surrogate of protection in the context of the Cox proportional
hazards model. The estimands are not identified because the immune response to
vaccine is not measured in placebo recipients. We formulate the problem as a
Cox model with missing covariates, and employ novel trial designs for
predicting the missing immune responses and thereby identifying the estimands.
The first design utilizes information from baseline predictors of the immune
response, and bridges their relationship in the vaccine recipients to the
placebo recipients. The second design provides a validation set for the
unmeasured immune responses of uninfected placebo recipients by immunizing them
with the study vaccine after trial closeout. A maximum estimated likelihood
approach is proposed for estimation of the parameters. Simulated data examples
are given to evaluate the proposed designs and study their properties.
|
0803.3919v1
|
2008-06-13
|
The Formation and Evolution of Massive Stellar Clusters in IC 4662
|
We present a multiwavelength study of the formation of massive stellar
clusters, their emergence from cocoons of gas and dust, and their feedback on
surrounding matter. Using data that span from radio to optical wavelengths,
including Spitzer and Hubble ACS observations, we examine the population of
young star clusters in the central starburst region of the irregular Wolf-Rayet
galaxy IC 4662. We model the radio-to-IR spectral energy distributions of
embedded clusters to determine the properties of their HII regions and dust
cocoons (sizes, masses, densities, temperatures), and use near-IR and optical
data with mid-IR spectroscopy to constrain the properties of the embedded
clusters themselves (mass, age, extinction, excitation, abundance). The two
massive star-formation regions in IC 4662 are excited by stellar populations
with ages of ~ 4 million years and masses of ~ 3 x 10^5 M_sun (assuming a
Kroupa IMF). They have high excitation and sub-solar abundances, and they may
actually be comprised of several massive clusters rather than the single
monolithic massive compact objects known as Super Star Clusters (SSCs). Mid-IR
spectra reveal that these clusters have very high extinctions, A_V ~ 20-25 mag,
and that the dust in IC 4662 is well-mixed with the emitting gas, not in a
foreground screen.
|
0806.2302v1
|
2009-01-28
|
Searching for Main-Belt Comets Using the Canada-France-Hawaii Telescope Legacy Survey
|
The Canada-France-Hawaii Telescope Legacy Survey, specifically the Very Wide
segment of data, is used to search for possible main-belt comets. In the first
data set, 952 separate objects with asteroidal orbits within the main-belt are
examined using a three-level technique. First, the full-width-half-maximum of
each object is compared to stars of similar magnitude, to look for evidence of
a coma. Second, the brightness profiles of each object are compared with three
stars of the same magnitude, which are nearby on the image to ensure any
extended profile is not due to imaging variations. Finally, the star profiles
are subtracted from the asteroid profile and the residuals are compared with
the background using an unpaired T-test. No objects in this survey show
evidence of cometary activity. The second survey includes 11438 objects in the
main-belt, which are examined visually. One object, an unknown comet, is found
to show cometary activity. Its motion is consistent with being a main-belt
asteroid, but the observed arc is too short for a definitive orbit calculation.
No other body in this survey shows evidence of cometary activity. Upper limits
of the number of weakly and strongly active main-belt comets are derived to be
630+/-77 and 87+/-28, respectively. These limits are consistent with those
expected from asteroid collisions. In addition, data extracted from the
Canada-France-Hawaii Telescope image archive of main-belt comet 176P/LINEAR is
presented.
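The third detection level — subtracting the star profiles from the asteroid profile and comparing the residuals to the background with an unpaired T-test — can be sketched numerically. The Python fragment below is illustrative only: the data, the `shows_coma` helper, and the significance threshold are invented here, not taken from the survey's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unpaired (Welch) t statistic between two samples.
def welch_t(a, b):
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Hypothetical third-level test: subtract the mean star profile from the
# asteroid profile and flag a coma only if the residuals show a
# significant flux excess over the background.
def shows_coma(asteroid_profile, star_profiles, background, t_crit=3.0):
    residual = asteroid_profile - star_profiles.mean(axis=0)
    return welch_t(residual, background) > t_crit

stars = rng.normal(100.0, 1.0, size=(3, 50))   # three comparison star profiles
inactive = rng.normal(100.0, 1.0, size=50)     # point-source asteroid
active = inactive + 2.0                        # same profile plus a faint coma
background = rng.normal(0.0, 1.0, size=200)    # sky pixels

print(shows_coma(inactive, stars, background))
print(shows_coma(active, stars, background))
```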
|
0901.4511v1
|
2009-10-02
|
Spectroscopic Observations of New Oort Cloud Comet 2006 VZ13 and Four Other Comets
|
Spectral data are presented for comets 2006 VZ13 (LINEAR), 2006 K4 (NEAT),
2006 OF2 (Broughton), 2P/Encke, and 93P/Lovas I, obtained with the Cerro-Tololo
Inter-American Observatory 1.5-m telescope in August 2007. Comet 2006 VZ13 is a
new Oort cloud comet and shows strong lines of CN (3880 angstroms), the Swan
band sequence for C_2 (4740, 5160, and 5630 angstroms), C_3 (4056 angstroms),
and other faint species. Lines are also identified in the spectra of the other
comets. Flux measurements of the CN, C_2 (Delta v = +1,0), and C_3 lines are
recorded for each comet and production rates and ratios are derived. When
considering the comets as a group, there is a correlation of C_2 and C_3
production with CN, but there is no conclusive evidence that the production
rate ratios depend on heliocentric distance. The continuum is also measured,
and the dust production and dust-to-gas ratios are calculated. There is a
general trend, for the group of comets, between the dust-to-gas ratio and
heliocentric distance, but it does not depend on dynamical age or class. Comet
2006 VZ13 is determined to be in the carbon-depleted (or Tempel 1 type) class.
|
0910.0416v1
|
2009-12-01
|
Approximate Sparse Recovery: Optimizing Time and Measurements
|
An approximate sparse recovery system consists of parameters $k,N$, an
$m$-by-$N$ measurement matrix, $\Phi$, and a decoding algorithm, $\mathcal{D}$.
Given a vector, $x$, the system approximates $x$ by $\widehat x
=\mathcal{D}(\Phi x)$, which must satisfy $\| \widehat x - x\|_2\le C \|x -
x_k\|_2$, where $x_k$ denotes the optimal $k$-term approximation to $x$. For
each vector $x$, the system must succeed with probability at least 3/4. Among
the goals in designing such systems are minimizing the number $m$ of
measurements and the runtime of the decoding algorithm, $\mathcal{D}$.
In this paper, we give a system with $m=O(k \log(N/k))$
measurements--matching a lower bound, up to a constant factor--and decoding
time $O(k\log^c N)$, matching a lower bound up to $\log(N)$ factors.
We also consider the encode time (i.e., the time to multiply $\Phi$ by $x$),
the time to update measurements (i.e., the time to multiply $\Phi$ by a
1-sparse $x$), and the robustness and stability of the algorithm (adding noise
before and after the measurements). Our encode and update times are optimal up
to $\log(N)$ factors.
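The error guarantee above can be illustrated on toy data. The sketch below uses a dense Gaussian $\Phi$ and a naive correlate-and-threshold decoder as hypothetical stand-ins (the paper's actual construction achieves sublinear decoding time), and checks the $\ell_2/\ell_2$ bound with a deliberately loose constant:

```python
import numpy as np

rng = np.random.default_rng(1)

N, k = 256, 4
m = int(4 * k * np.log(N / k))              # m = O(k log(N/k)) measurements
Phi = rng.normal(size=(m, N)) / np.sqrt(m)  # hypothetical Gaussian Phi

def best_k_term(v, k):
    """Optimal k-term approximation: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def decode(y, k):
    """Naive decoder D: correlate y with the columns of Phi, then threshold."""
    return best_k_term(Phi.T @ y, k)

x = np.zeros(N)
x[[3, 40, 100, 200]] = [5.0, -4.0, 3.0, 2.0]  # k-sparse signal
x += 0.05 * rng.normal(size=N)                # plus a small tail

x_hat = decode(Phi @ x, k)
err = np.linalg.norm(x_hat - x)
tail = np.linalg.norm(x - best_k_term(x, k))
print(bool(err <= 10 * tail))                 # l2/l2 guarantee, loose C = 10
```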
|
0912.0229v1
|
2010-04-07
|
Concatenated quantum codes can attain the quantum Gilbert-Varshamov bound
|
A family of quantum codes of increasing block length with positive rate is
asymptotically good if the ratio of its distance to its block length approaches
a positive constant. The asymptotic quantum Gilbert-Varshamov (GV) bound states
that there exist $q$-ary quantum codes of sufficiently long block length $N$
having fixed rate $R$ with distance at least $N H^{-1}_{q^2}((1-R)/2)$, where
$H_{q^2}$ is the $q^2$-ary entropy function. For $q < 7$, only random quantum
codes are known to asymptotically attain the quantum GV bound. However, random
codes have little structure. In this paper, we generalize the classical result
of Thommesen to the quantum case, thereby demonstrating the existence of
concatenated quantum codes that can asymptotically attain the quantum GV bound.
The outer codes are quantum generalized Reed-Solomon codes, and the inner codes
are random independently chosen stabilizer codes, where the rates of the inner
and outer codes lie in a specified feasible region.
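The relative-distance bound $H^{-1}_{q^2}((1-R)/2)$ can be evaluated numerically. A minimal sketch, assuming only the standard $q$-ary entropy function and inverting it by bisection on its increasing branch $[0, 1-1/q]$:

```python
import math

def entropy_q(x, q):
    """q-ary entropy H_q(x) = x log_q(q-1) - x log_q x - (1-x) log_q(1-x)."""
    if x == 0.0:
        return 0.0
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))

def inv_entropy_q(y, q):
    """Invert H_q by bisection on [0, 1 - 1/q], where it is increasing."""
    lo, hi = 0.0, 1.0 - 1.0 / q
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if entropy_q(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q, R = 2, 0.5                              # e.g. qubit codes at rate 1/2
delta = inv_entropy_q((1 - R) / 2, q * q)  # GV relative distance delta
print(abs(entropy_q(delta, q * q) - (1 - R) / 2) < 1e-9)  # round-trip check
```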
|
1004.1127v6
|
2010-09-02
|
Stable and unstable regimes in higher-dimensional convex billiards with cylindrical shape
|
We introduce a class of convex, higher-dimensional billiard models which
generalise stadium billiards. These models correspond to the free motion of a
point-particle in a region bounded by cylinders cut by planes. They are
motivated by models of particles interacting via a string-type mechanism, and
confined by hard walls. The combination of these elements may give rise to a
defocusing mechanism, similar to that in two dimensions, which allows large
chaotic regions in phase space. The remaining part of phase space is associated
with marginally stable behaviour. In fact periodic orbits in these systems
generically come in continuous parametric families, associated with a pair of
parabolic eigen-directions: the periodic orbits are unstable in the presence of
a defocusing mechanism, but marginally stable otherwise. By performing the
stability analysis of families of periodic orbits at a nonlinear level, we
establish the conditions under which families are nonlinearly stable or
unstable. As a result, we identify regions in the parameter space of the models
which admit non-linearly stable oscillations in the form of whispering gallery
modes. Where no families of periodic orbits are stable, the billiards are
completely chaotic, i.e.\ the Lyapunov exponents of the billiard map are
non-zero.
|
1009.0337v1
|
2011-08-29
|
Magnetization Dynamics, Throughput and Energy Dissipation in a Universal Multiferroic Nanomagnetic Logic Gate with Fan-in and Fan-out
|
The switching dynamics of a multiferroic nanomagnetic NAND gate with
fan-in/fan-out is simulated by solving the Landau-Lifshitz-Gilbert (LLG)
equation while neglecting thermal fluctuation effects. The gate and logic wires
are implemented with dipole-coupled 2-phase (magnetostrictive/piezoelectric)
multiferroic elements that are clocked with electrostatic potentials of ~50 mV
applied to the piezoelectric layer generating 10 MPa stress in the
magnetostrictive layers for switching. We show that a pipeline bit throughput
rate of ~ 0.5 GHz is achievable with proper magnet layout and sinusoidal
four-phase clocking. The gate operation is completed in 2 ns with a latency of
4 ns. The total (internal + external) energy dissipated for a single gate
operation at this throughput rate is found to be only ~ 1000 kT in the gate and
~3000 kT in the 12-magnet array comprising two input and two output wires for
fan-in and fan-out. This makes it respectively 3 and 5 orders of magnitude more
energy-efficient than complementary-metal-oxide-semiconductor-transistor (CMOS)
based and spin-transfer-torque-driven nanomagnet based NAND gates. Finally, we
show that the dissipation in the external clocking circuit can always be
reduced asymptotically to zero using increasingly slow adiabatic clocking, such
as by designing the RC time constant to be 3 orders of magnitude smaller than
the clocking period. However, the internal dissipation in the device must
remain and cannot be eliminated if we want to perform fault-tolerant classical
computing.
Keywords: Nanomagnetic logic, multiferroics, straintronics and spintronics,
Landau-Lifshitz-Gilbert equation.
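As a minimal illustration of the underlying dynamics, the sketch below integrates the Landau-Lifshitz-Gilbert equation for a single macrospin relaxing toward an effective field. All parameter values here are invented for illustration; the actual simulation couples many dipole-coupled multiferroic elements under stress clocking.

```python
import numpy as np

# Explicit form of the LLG equation for a unit magnetization m in an
# effective field H (gamma: gyromagnetic ratio, alpha: Gilbert damping):
#   dm/dt = -gamma/(1 + alpha^2) * [ m x H + alpha * m x (m x H) ]
def llg_relax(m, H, gamma=1.0, alpha=0.5, dt=1e-3, steps=10000):
    pref = -gamma / (1.0 + alpha * alpha)
    for _ in range(steps):
        mxH = np.cross(m, H)
        m = m + dt * pref * (mxH + alpha * np.cross(m, mxH))
        m /= np.linalg.norm(m)   # keep |m| = 1 (simple Euler + renormalize)
    return m

m0 = np.array([1.0, 0.0, 0.1])
m0 /= np.linalg.norm(m0)             # start nearly in-plane
H = np.array([0.0, 0.0, 1.0])        # effective field along +z (toy units)
m_final = llg_relax(m0, H)
print(m_final[2] > 0.99)             # Gilbert damping aligns m with the field
```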
|
1108.5758v1
|
2011-09-15
|
Evolutionary state of the Orion Belt stars and archaeoastronomical implications
|
In the present work we evaluate the evolutionary state of the Orion Belt
stars, an asterism of great importance to the ancient Egyptians, finding that,
when the pyramids were built, the brightness of the three stars of the Belt was
practically the same as today. This non-trivial result has important
implications in the framework of the so-called Orion Correlation Theory, a
controversial theory proposed by Bauval and Gilbert (1994), according to which
a perfect coincidence would exist between the disposition of the three stars of
the Orion Belt and that of the main Giza pyramids, so that the latter would
represent the monumental reproduction on the ground of that important asterism.
|
1109.3284v2
|
2012-07-31
|
Surface Acoustic Wave-Driven Ferromagnetic Resonance in Nickel Thin Films: Theory and Experiment
|
We present an extensive experimental and theoretical study of surface
acoustic wave-driven ferromagnetic resonance. In a first modeling approach
based on the Landau-Lifshitz-Gilbert equation, we derive expressions for the
magnetization dynamics upon magnetoelastic driving that are used to calculate
the absorbed microwave power upon magnetic resonance as well as the spin
current density generated by the precessing magnetization in the vicinity of a
ferromagnet/normal metal interface. In a second modeling approach, we deal with
the backaction of the magnetization dynamics on the elastic wave by solving the
elastic wave equation and the Landau-Lifshitz-Gilbert equation
selfconsistently, obtaining analytical solutions for the acoustic wave phase
shift and attenuation. We compare both modeling approaches with the complex
forward transmission of a LiNbO$_3$/Ni surface acoustic wave hybrid device
recorded experimentally as a function of the external magnetic field
orientation and magnitude, rotating the field within three different planes and
employing three different surface acoustic wave frequencies. We find
quantitative agreement of the experimentally observed power absorption and
surface acoustic wave phase shift with our modeling predictions using one set
of parameters for all field configurations and frequencies.
|
1208.0001v1
|
2012-09-27
|
Vortex Lattices in the Superconducting Phases of Doped Topological Insulators and Heterostructures
|
Majorana fermions are predicted to play a crucial role in condensed matter
realizations of topological quantum computation. These heretofore undiscovered
quasiparticles have been predicted to exist at the cores of vortex excitations
in topological superconductors and in heterostructures of superconductors and
materials with strong spin-orbit coupling. In this work we examine topological
insulators with bulk s-wave superconductivity in the presence of a
vortex-lattice generated by a perpendicular magnetic field. Using
self-consistent Bogoliubov-de Gennes calculations, we confirm that, beyond the
semi-classical weak-pairing limit, the Majorana vortex states appear as the
chemical potential is tuned from either side of the band edge, so long as
the density of states is sufficient for superconductivity to form. Further, we
demonstrate that the previously predicted vortex phase transition survives
beyond the semi-classical limit. At chemical potential values smaller than the
critical chemical potential, the vortex lattice modes hybridize within the top
and bottom surfaces giving rise to a dispersive low-energy mid-gap band. As the
chemical potential is increased, the Majorana states become more localized
within a single surface but spread into the bulk toward the opposite surface.
Eventually, when the chemical potential is sufficiently high in the bulk bands,
the Majorana modes can tunnel between surfaces, until a critical point
is reached at which modes on opposite surfaces can freely tunnel and annihilate,
leading to the topological phase transition previously studied in the work of
Hosur et al.
|
1209.6373v1
|
2013-04-23
|
L2/L2-foreach sparse recovery with low risk
|
In this paper, we consider the "foreach" sparse recovery problem with failure
probability $p$. The goal is to design a distribution over $m \times
N$ matrices $\Phi$ and a decoding algorithm $\mathcal{A}$ such that for every
$\mathbf{x}\in\mathbb{R}^N$, we have the following error guarantee with probability at least
$1-p$: \[\|\mathbf{x}-\mathcal{A}(\Phi\mathbf{x})\|_2\le C\|\mathbf{x}-\mathbf{x}_k\|_2,\]
where $C$ is a constant (ideally arbitrarily close to 1) and $\mathbf{x}_k$ is
the best $k$-sparse approximation of $\mathbf{x}$.
Much of the sparse recovery or compressive sensing literature has focused on
the case of either $p = 0$ or $p = \Omega(1)$. We initiate the study of this
problem for the entire range of failure probability. Our two main results are
as follows: \begin{enumerate} \item We prove a lower bound on $m$, the number
of measurements, of $\Omega(k\log(N/k)+\log(1/p))$ for $2^{-\Theta(N)}\le p <1$.
Cohen, Dahmen, and DeVore \cite{CDD2007:NearOptimall2l2} prove that this bound
is tight. \item We prove nearly matching upper bounds for \textit{sub-linear}
time decoding. Previous such results addressed only $p = \Omega(1)$.
\end{enumerate}
Our results and techniques lead to the following corollaries: (i) the first
ever sub-linear time decoding $\ell_2/\ell_2$ "forall" sparse recovery system that
requires a $\log^{\gamma}{N}$ extra factor (for some $\gamma<1$) over the
optimal $O(k\log(N/k))$ number of measurements, and (ii) extensions of Gilbert
et al. \cite{GHRSW12:SimpleSignals} results for information-theoretically
bounded adversaries.
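To make the guarantee above concrete, here is a minimal NumPy sketch (illustrative only, not from the paper) of the quantity $\|\mathbf{x}-\mathbf{x}_k\|_2$ that the recovery error is compared against:

```python
import numpy as np

def best_k_sparse(x, k):
    """Best k-sparse approximation x_k: keep the k largest-magnitude
    entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

x = np.array([3.0, -1.0, 0.5, 2.0])
x_k = best_k_sparse(x, 2)
tail_error = np.linalg.norm(x - x_k)  # the ||x - x_k||_2 term in the guarantee
```

An $\ell_2/\ell_2$ scheme must return an estimate within $C$ times this tail error, with probability at least $1-p$ over the draw of $\Phi$.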
|
1304.6232v1
|
2013-11-28
|
Starbugs: all-singing, all-dancing fibre positioning robots
|
Starbugs are miniature piezoelectric 'walking' robots with the ability to
simultaneously position many optical fibres across a telescope's focal plane.
Their simple design incorporates two piezoceramic tubes to form a pair of
concentric 'legs' capable of taking individual steps of a few microns, yet with
the capacity to move a payload several millimetres per second. The Australian
Astronomical Observatory has developed this technology to enable fast and
accurate field reconfigurations without the inherent limitations of more
traditional positioning techniques, such as the 'pick and place' robotic arm.
We report on our recent successes in demonstrating Starbug technology, driven
principally by R&D efforts for the planned MANIFEST (many instrument
fibre-system) facility for the Giant Magellan Telescope. Significant
performance gains have resulted from improvements to the Starbug system,
including i) the use of a vacuum to attach Starbugs to the underside of a
transparent field plate, ii) optimisation of the control electronics, iii) a
simplified mechanical design with high sensitivity piezo actuators, and iv) the
construction of a dedicated laboratory 'test rig'. A method of reliably
rotating Starbugs in steps of several arcminutes has also been devised, which
integrates with the pre-existing x-y movement directions and offers greater
flexibility while positioning. We present measured performance data from a
prototype system of 10 Starbugs under full closed-loop control, at field
plate angles of 0-90 degrees.
|
1311.7371v1
|
2014-02-05
|
Magnetization dynamics: path-integral formalism for the stochastic Landau-Lifshitz-Gilbert equation
|
We construct a path-integral representation of the generating functional for
the dissipative dynamics of a classical magnetic moment as described by the
stochastic generalization of the Landau-Lifshitz-Gilbert equation proposed by
Brown, with the possible addition of spin-torque terms. In the process of
constructing this functional in the Cartesian coordinate system, we critically
revisit this stochastic equation. We present it in a form that accommodates
any discretization scheme thanks to the inclusion of a drift term. The
generalized equation ensures the conservation of the magnetization modulus and
the approach to the Gibbs-Boltzmann equilibrium in the absence of non-potential
and time-dependent forces. The drift term vanishes only if the mid-point
Stratonovich prescription is used. We next reset the problem in the more
natural spherical coordinate system. We show that the noise transforms
non-trivially to spherical coordinates acquiring a non-vanishing mean value in
this coordinate system, a fact that has been often overlooked in the
literature. We next construct the generating functional formalism in this
system of coordinates for any discretization prescription. The functional
formalism in Cartesian or spherical coordinates should serve as a starting
point to study different aspects of the out-of-equilibrium dynamics of magnets.
Extensions to colored noise, micro-magnetism and disordered problems are
straightforward.
|
1402.1200v2
|
2014-10-17
|
The fixed irreducible bridge ensemble for self-avoiding walks
|
We define a new ensemble for self-avoiding walks in the upper half-plane, the
fixed irreducible bridge ensemble, by considering self-avoiding walks in the
upper half-plane up to their $n$-th bridge height, $Y_n$, scaling the walk
by $1/Y_n$ to obtain a curve in the unit strip, and then taking $n\to\infty$.
We then conjecture a relationship between this ensemble and SLE in the unit
strip from $0$ to a fixed point along the upper boundary of the strip,
integrated over the conjectured exit density of self-avoiding walk spanning a
strip in the scaling limit. We conjecture that there exists a positive constant
$\sigma$ such that $n^{-\sigma}Y_n$ converges in distribution to that of a
stable random variable as $n\to\infty$. Then the conjectured relationship
between the fixed irreducible bridge scaling limit and SLE can be described
as follows: If one takes a SAW considered up to $Y_n$ and scales by $1/Y_n$ and
then weights the walk by $Y_n$ to an appropriate power, then in the limit
$n\to\infty$, one should obtain a curve from the scaling limit of the
self-avoiding walk spanning the unit strip. In addition to a heuristic
derivation, we provide numerical evidence to support the conjecture and give
estimates for the boundary scaling exponent.
|
1410.4796v1
|
2014-11-20
|
Type II Seesaw Higgsology and LEP/LHC constraints
|
In the {\sl type II seesaw} model, if spontaneous violation of the lepton
number conservation prevails over that of explicit violation, a rich Higgs
sector phenomenology is expected to arise with light scalar states having mixed
charged-fermiophobic/neutrinophilic properties. We study the constraints on
these light CP-even ($h^0$) and CP-odd ($A^0$) states from LEP exclusion
limits, combined with the so far established limits and properties of the
$125-126$~GeV ${\cal H}$ boson discovered at the LHC. We show that, apart from
a fine-tuned region of the parameter space, masses in the $\sim 44$ to $80$ GeV
range escape from the LEP limits if the vacuum expectation value of the Higgs
triplet is $\lesssim {\cal O}(10^{-3})$ GeV, which lies comfortably in the region
for 'natural' generation of Majorana neutrino masses within this model. In the
lower part of the scalar mass spectrum the decay channels ${\cal H} \to h^0
h^0, A^0 A^0$ lead predominantly to heavy flavor plus missing energy or to
totally invisible Higgs decays, mimicking dark matter signatures without a dark
matter candidate. Exclusion limits at the percent level of these
(semi-)invisible decay channels would be needed, together with stringent bounds
on the (doubly-)charged states, to constrain significantly this scenario. We
also revisit complementary constraints from ${\cal H} \to \gamma \gamma$ and
${\cal H} \to Z \gamma$ channels on the (doubly-)charged scalar sector of the
model, pinpointing non-sensitivity regions, and carry out a likelihood study
for the theoretically allowed couplings in the scalar potential.
|
1411.5645v1
|
2015-01-11
|
Epidemic Threshold of an SIS Model in Dynamic Switching Networks
|
In this paper, we analyze dynamic switching networks, wherein the networks
switch arbitrarily among a set of topologies. For this class of dynamic
networks, we derive an epidemic threshold, considering the SIS epidemic model.
First, an epidemic probabilistic model is developed assuming independence
between states of nodes. We identify the conditions under which the epidemic
dies out by linearizing the underlying dynamical system and analyzing its
asymptotic stability around the origin. The concept of joint spectral radius is
then used to derive the epidemic threshold, which is later validated using
several networks (Watts-Strogatz, Barabasi-Albert, MIT reality mining graphs,
Regular, and Gilbert). A simplified version of the epidemic threshold is
proposed for undirected networks. Moreover, in the case of static networks, the
derived epidemic threshold is shown to match conventional analytical results.
Then, analytical results for the epidemic threshold of dynamic networks are
proved to be applicable to periodic networks. For dynamic regular networks, we
demonstrate that the epidemic threshold is identical to the epidemic threshold
for static regular networks. An upper bound for the epidemic spread probability
in dynamic Gilbert networks is also derived and verified using simulation.
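As an illustrative sketch (not the paper's code), the static-network special case of the threshold can be checked numerically: for an SIS process with infection rate beta and curing rate delta, the epidemic dies out when beta/delta is below the reciprocal of the spectral radius of the adjacency matrix. The Gilbert graph here is $G(n,p)$:

```python
import numpy as np

def gilbert_adjacency(n, p, rng):
    """Sample a Gilbert random graph G(n, p): each edge present
    independently with probability p (undirected, no self-loops)."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(float)

def sis_threshold_static(adj):
    """Static-network SIS threshold: the epidemic dies out when
    beta/delta < 1/rho(A), with rho(A) the spectral radius."""
    rho = np.max(np.abs(np.linalg.eigvalsh(adj)))
    return 1.0 / rho

rng = np.random.default_rng(0)
A = gilbert_adjacency(200, 0.05, rng)
tau = sis_threshold_static(A)
# For a switching network {A_1, ..., A_K}, the joint spectral radius of the
# linearized update matrices (1 - delta) I + beta A_k takes the place of
# rho(A); computing it exactly is hard, so bounds are used in practice.
```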
|
1501.02472v2
|
2015-04-29
|
Entropy measures as geometrical tools in the study of cosmology
|
Classical chaos is often characterized as exponential divergence of nearby
trajectories. In many interesting cases these trajectories can be identified
with geodesic curves. We define here the entropy by $S = \ln \chi (x)$ with
$\chi(x)$ being the distance between two nearby geodesics. We derive an
equation for the entropy which, by transformation to a Riccati-type equation,
becomes similar to the Jacobi equation. We further show that the geodesic
equation for a null geodesic in a doubly warped spacetime leads to the same
entropy equation. By applying a Robertson-Walker metric for a flat
three-dimensional Euclidean space expanding as a function of time, we again
reach the entropy equation, stressing the connection between the chosen entropy
measure and time. We finally turn to the Raychaudhuri equation for expansion,
which is also a Riccati equation similar to the transformed entropy equation.
Those Riccati-type equations have solutions of the same form as the Jacobi
equation. The Raychaudhuri equation can be transformed to a harmonic oscillator
equation, and it has been shown that the geodesic deviation equation of Jacobi
is essentially equivalent to that of a harmonic oscillator. The Raychaudhuri
equations are strong geometrical tools in the study of General Relativity and
Cosmology. We suggest a refined entropy measure applicable in Cosmology and
defined by the average deviation of the geodesics in a congruence.
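As a sketch of the transformation described above (assuming the standard two-dimensional Jacobi equation $\chi'' + K\chi = 0$ for the geodesic separation $\chi(s)$, with $K$ the sectional curvature), setting $S = \ln\chi$ gives

```latex
% S = ln(chi)  =>  S' = chi'/chi  and  S'' = chi''/chi - (chi'/chi)^2.
% Substituting the Jacobi equation chi'' = -K chi yields, with u = S',
\[
  u' + u^{2} + K = 0 , \qquad u \equiv S' = \frac{\chi'}{\chi},
\]
% a Riccati-type equation of the same form as the entropy equation
% discussed in the abstract.
```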
|
1504.07855v2
|
2015-06-24
|
Ebb: A DSL for Physical Simulation on CPUs and GPUs
|
Designing programming environments for physical simulation is challenging
because simulations rely on diverse algorithms and geometric domains. These
challenges are compounded when we try to run efficiently on heterogeneous
parallel architectures. We present Ebb, a domain-specific language (DSL) for
simulation, that runs efficiently on both CPUs and GPUs. Unlike previous DSLs,
Ebb uses a three-layer architecture to separate (1) simulation code, (2)
definition of data structures for geometric domains, and (3) runtimes
supporting parallel architectures. Different geometric domains are implemented
as libraries that use a common, unified, relational data model. By structuring
the simulation framework in this way, programmers implementing simulations can
focus on the physics and algorithms for each simulation without worrying about
their implementation on parallel computers. Because the geometric domain
libraries are all implemented using a common runtime based on relations, new
geometric domains can be added as needed, without specifying the details of
memory management, mapping to different parallel architectures, or having to
expand the runtime's interface.
We evaluate Ebb by comparing it to several widely used simulations,
demonstrating comparable performance to hand-written GPU code where available,
and surpassing existing CPU performance optimizations by up to 9$\times$ when
no GPU code exists.
|
1506.07577v3
|
2016-04-27
|
Scoping of material response under DEMO neutron irradiation: comparison with fission and influence of nuclear library selection
|
Predictions of material activation inventories will be a key input to
virtually all aspects of the operation, safety and environmental assessment of
future fusion nuclear plants. Additionally, the neutron-induced transmutation
(change) of material composition (inventory) with time, and the creation and
evolution of configurational damage from atomic displacements, require precise
quantification because they can lead to significant changes in material
properties, and thus influence reactor-component lifetime. A comprehensive
scoping study has been performed to quantify the activation, transmutation
(depletion and build-up) and immediate damage response under neutron
irradiation for all naturally occurring elements from hydrogen to bismuth. The
resulting database provides a global picture of the response of a material,
covering the majority of nuclear technological space, but focussing
specifically on typical conditions expected for a demonstration fusion power
plant (DEMO). Results from fusion are compared against typical fission
conditions for selected fusion relevant materials, demonstrating that the
latter cannot be relied upon to give accurate scalable experimental predictions
of material response in a future fusion reactor. Results from different nuclear
data libraries are also compared, highlighting the variations and deficiencies.
|
1604.08496v1
|
2016-05-23
|
Beyond the Interface Limit: Structural and Magnetic Depth Profiles of Voltage-Controlled Magneto-Ionic Heterostructures
|
Electric-field control of magnetism provides a promising route towards
ultralow power information storage and sensor technologies. The effects of
magneto-ionic motion have so far been prominently featured in the direct
modification of interface chemical and physical characteristics. Here we
demonstrate magnetoelectric coupling moderated by voltage-driven oxygen
migration beyond the interface limit in relatively thick AlOx/GdOx/Co (15 nm)
films. Oxygen migration and its ramifications on the Co magnetization are
quantitatively mapped with polarized neutron reflectometry under thermal and
electro-thermal conditionings. The depth-resolved profiles uniquely identify
interfacial and bulk behaviors and a semi-reversible suppression and recovery
of the magnetization. Magnetometry measurements show that the conditioning
changes the microstructure so as to disrupt long-range ferromagnetic ordering,
resulting in an additional magnetically soft phase. X-ray spectroscopy confirms
electric field induced changes in the Co oxidation state but not in the Gd,
suggesting that the GdOx transmits oxygen but does not source or sink it. These
results together provide crucial insight into controlling magnetic
heterostructures via magneto-ionic motion, not only at the interface, but also
throughout the bulk of the films.
|
1605.07209v1
|
2016-06-02
|
RankSign: an efficient signature algorithm based on the rank metric
|
In this paper we propose a new approach to code-based signatures that makes
use in particular of rank metric codes. When the classical approach consists in
finding the unique preimage of a syndrome through a decoding algorithm, we
propose to introduce the notion of mixed decoding of erasures and errors for
building signature schemes. In that case the difficult problem becomes, as is
the case in lattice-based cryptography, finding a preimage of weight above the
Gilbert-Varshamov bound (case where many solutions occur) rather than finding a
unique preimage of weight below the Gilbert-Varshamov bound. The paper
describes RankSign: a new signature algorithm for the rank metric based on a
new mixed algorithm for decoding erasures and errors for the recently
introduced Low Rank Parity Check (LRPC) codes. We explain how it is possible
(depending on choices of parameters) to obtain a full decoding algorithm which
is able to find a preimage of reasonable rank weight for any random syndrome
with very high probability. We study the semantic security of our signature
algorithm and show how it is possible to reduce the unforgeability to direct
attacks on the public matrix, so that no information leaks through signatures.
Finally, we give several examples of parameters for our scheme, some of which
with public key of size $11,520$ bits and signature of size $1728$ bits.
Moreover the scheme can be very fast for small base fields.
|
1606.00629v2
|
2016-09-09
|
Image and Video Mining through Online Learning
|
Within the field of image and video recognition, the traditional approach is
a dataset split into fixed training and test partitions. However, the labelling
of the training set is time-consuming, especially as datasets grow in size and
complexity. Furthermore, this approach is not applicable to the home user, who
wants to intuitively group their media without tirelessly labelling the
content. Our interactive approach is able to iteratively cluster classes of
images and video. Our approach is based around the concept of an image
signature which, unlike a standard bag of words model, can express
co-occurrence statistics as well as symbol frequency. We efficiently compute
metric distances between signatures despite their inherent high dimensionality
and provide discriminative feature selection, to allow common and distinctive
elements to be identified from a small set of user labelled examples. These
elements are then accentuated in the image signature to increase similarity
between examples and pull correct classes together. By repeating this process
in an online learning framework, the accuracy of similarity increases
dramatically despite labelling only a few training examples. To demonstrate
that the approach is agnostic to media type and features used, we evaluate on
three image datasets (15 scene, Caltech101 and FG-NET), a mixed text and image
dataset (ImageTag), a dataset used in active learning (Iris) and on three
action recognition datasets (UCF11, KTH and Hollywood2). On the UCF11 video
dataset, the accuracy is 86.7% despite using only 90 labelled examples from a
dataset of over 1200 videos, instead of the standard 1122 training videos. The
approach is both scalable and efficient, with a single iteration over the full
UCF11 dataset of around 1200 videos taking approximately 1 minute on a standard
desktop machine.
|
1609.02770v2
|
2016-11-17
|
Stashing the stops in multijet events at the LHC
|
While the presence of a light stop is increasingly disfavored by the
experimental limits set on R-parity conserving scenarios, the naturalness of
supersymmetry could still be safely concealed in the more challenging final
states predicted by the existence of non-null R-parity violating couplings.
Although R-parity violating signatures are extensively looked for at the Large
Hadron Collider, these searches always assume 100\% branching ratios for the
direct decays of supersymmetric particles into Standard Model ones. In this
paper we scrutinize the implications of relaxing this assumption by focusing on
one motivated scenario where the lightest stop is heavier than a chargino and a
neutralino. Considering a class of R-parity baryon number violating couplings,
we show on general grounds that while the direct decay of the stop into
Standard Model particles is dominant for large values of these couplings,
smaller values give rise, instead, to the dominance of a plethora of longer
decay chains and richer final states that have not yet been analyzed at the
LHC, thus weakening the impact of the present experimental stop mass limits. We
characterize the case for R-parity baryon number violating couplings in the
$10^{-7} - 10^{-1}$ range, in two different benchmark point scenarios within
the model-independent setting of the low-energy phenomenological Minimal
Supersymmetric Standard Model. We identify the different relevant experimental
signatures, estimate the corresponding proton--proton cross sections at
$\sqrt{s}=14$ TeV and discuss signal versus background issues.
|
1611.05850v2
|
2017-02-18
|
Inf-sup stable finite-element methods for the Landau--Lifshitz--Gilbert and harmonic map heat flow equation
|
In this paper we propose and analyze a finite element method for both the
harmonic map heat and Landau--Lifshitz--Gilbert equation, the time variable
remaining continuous. Our starting point is to set out a unified saddle point
approach for both problems in order to impose the unit sphere constraint at the
nodes since the only polynomial function satisfying the unit sphere constraint
everywhere are constants. A proper inf-sup condition is proved for the Lagrange
multiplier leading to the well-posedness of the unified formulation. \emph{A
priori} energy estimates are shown for the proposed method.
When time integrations are combined with the saddle point finite element
approximation some extra elaborations are required in order to ensure both
\emph{a priori} energy estimates for the director or magnetization vector
depending on the model and an inf-sup condition for the Lagrange multiplier.
This is due to the fact that the unit length at the nodes is not satisfied in
general when a time integration is performed. We will carry out a linear Euler
time-stepping method and a non-linear Crank--Nicolson method. The latter is
solved by using the former as a non-linear solver.
|
1702.05588v2
|
2017-06-15
|
Generalized Voltage-based State-Space Modelling of Modular Multilevel Converters with Constant Equilibrium in Steady-State
|
This paper demonstrates that the sum and difference of the upper and lower
arm voltages are suitable variables for deriving a generalized state-space
model of an MMC which settles at a constant equilibrium in steady-state
operation, while including the internal voltage and current dynamics. The
presented modelling approach allows for separating the multiple frequency
components appearing within the MMC as a first step of the model derivation, to
avoid variables containing multiple frequency components in steady-state. On
this basis, it is shown that Park transformations at three different
frequencies ($+\omega$, $-2\omega$ and $+3\omega$) can be applied for deriving
a model formulation where all state-variables will settle at constant values in
steady-state, corresponding to an equilibrium point of the model. The resulting
model accurately captures the internal current and voltage dynamics of a
three-phase MMC, independently of how the control system is implemented. The
main advantage of this model formulation is that it can be linearised, allowing
for eigenvalue-based analysis of the MMC dynamics. Furthermore, the model can
be utilized for control system design by multi-variable methods requiring any
stable equilibrium to be defined by a fixed operating point. Time-domain
simulations in comparison to an established average model of the MMC, as well
as results from a detailed simulation model of an MMC with 400 sub-modules per
arm, are presented as verification of the validity and accuracy of the
developed model.
|
1706.04959v1
|
2017-11-07
|
Global Properties of M31's Stellar Halo from the SPLASH Survey: III. Measuring the Stellar Velocity Dispersion Profile
|
We present the velocity dispersion of red giant branch (RGB) stars in M31's
halo, derived by modeling the line of sight velocity distribution of over 5000
stars in 50 fields spread throughout M31's stellar halo. The dataset was
obtained as part of the SPLASH (Spectroscopic and Photometric Landscape of
Andromeda's Stellar Halo) Survey, and covers projected radii of 9 to 175 kpc
from M31's center. All major structural components along the line of sight in
both the Milky Way (MW) and M31 are incorporated in a Gaussian Mixture Model,
including all previously identified M31 tidal debris features in the observed
fields. The probability an individual star is a constituent of M31 or the MW,
based on a set of empirical photometric and spectroscopic diagnostics, is
included as a prior probability in the mixture model. The velocity dispersion
of stars in M31's halo is found to decrease only mildly with projected radius,
from 108 km/s in the innermost radial bin (8.2 to 14.1 kpc) to $\sim 80$ to 90
km/s at projected radii of $\sim 40$ to 130 kpc, and can be parameterized with
a power-law of slope $-0.12\pm 0.05$. The quoted uncertainty on the power-law
slope reflects only the precision of the method, although other sources of
uncertainty we consider contribute negligibly to the overall error budget.
|
1711.02700v1
|
2017-12-19
|
Efficient implementations of the Multivariate Decomposition Method for approximating infinite-variate integrals
|
In this paper we focus on efficient implementations of the Multivariate
Decomposition Method (MDM) for approximating integrals of $\infty$-variate
functions. Such $\infty$-variate integrals occur for example as expectations in
uncertainty quantification. Starting with the anchored decomposition $f =
\sum_{\mathfrak{u}\subset\mathbb{N}} f_\mathfrak{u}$, where the sum is over all
finite subsets of $\mathbb{N}$ and each $f_\mathfrak{u}$ depends only on the
variables $x_j$ with $j\in\mathfrak{u}$, our MDM algorithm approximates the
integral of $f$ by first truncating the sum to some `active set' and then
approximating the integral of the remaining functions $f_\mathfrak{u}$
term-by-term using Smolyak or (randomized) quasi-Monte Carlo (QMC) quadratures.
The anchored decomposition allows us to compute $f_\mathfrak{u}$ explicitly by
function evaluations of $f$. Given the specification of the active set and
theoretically derived parameters of the quadrature rules, we exploit structures
in both the formula for computing $f_\mathfrak{u}$ and the quadrature rules to
develop computationally efficient strategies to implement the MDM in various
scenarios. In particular, we avoid repeated function evaluations at the same
point. We provide numerical results for a test function to demonstrate the
effectiveness of the algorithm.
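The anchored terms can indeed be computed from point evaluations of $f$ alone. A minimal sketch (illustrative, not the paper's implementation; anchor at $0$, with $f$ taking a dict of the active variables):

```python
import itertools

def anchored_term(f, u, x):
    """f_u(x) via inclusion-exclusion: sum over subsets v of u of
    (-1)^{|u|-|v|} times f evaluated with variables in v set from x and
    all remaining variables held at the anchor (here 0)."""
    u = tuple(u)
    total = 0.0
    for r in range(len(u) + 1):
        for v in itertools.combinations(u, r):
            total += (-1) ** (len(u) - len(v)) * f({j: x[j] for j in v})
    return total
```

For example, with $f(x) = x_1 + x_1 x_2$ and anchor $0$, this yields $f_{\{1\}}(x) = x_1$ and $f_{\{1,2\}}(x) = x_1 x_2$, so only the terms whose $\mathfrak{u}$ lies in the active set contribute to the MDM approximation.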
|
1712.06782v3
|
2018-05-24
|
Impact of thermal fluctuations on transport in antiferromagnetic semimetals
|
Recent demonstrations on manipulating antiferromagnetic (AF) order have
triggered a growing interest in antiferromagnetic metals (AFMs), and potential
high-density spintronic applications demand further improvements in the
anisotropic magnetoresistance (AMR). The antiferromagnetic semimetals (AFS) are
newly discovered materials that possess massless Dirac fermions that are
protected by the crystalline symmetries. In this material, a reorientation of
the AF order may break the underlying symmetries and induce a finite energy
gap. As such, the possible phase transition from the semimetallic to the
insulating phase offers a wide range of resistance values, ensuring a large AMR. To
further understand the robustness of the phase transition, we study thermal
fluctuations of the AF order in AFS at a finite temperature. For macroscopic
samples, we find that the thermal fluctuations effectively decrease the
magnitude of the AF order by renormalizing the effective Hamiltonian. Our
finding suggests that the insulating phase exhibits a gap narrowing at elevated
temperatures, which leads to a substantial decrease in AMR. We also examine
spatially correlated thermal fluctuations for microscopic samples by solving
the microscopic Landau-Lifshitz-Gilbert equation, finding a qualitative
difference in the gap narrowing in the insulating phase. In both cases, the
semimetallic phase shows a minimal change in its transmission spectrum,
illustrating the robustness of the symmetry-protected states in AFS. Our
finding may serve as a guideline for estimating and maximizing AMR of the AFS
samples at elevated temperatures.
|
1805.09826v1
|
2018-05-29
|
An exact solution for choosing the largest measurement from a sample drawn from an uniform distribution
|
In "Recognizing the Maximum of a Sequence", Gilbert and Mosteller analyze a
full information game where n measurements from a uniform distribution are
drawn and a player (knowing n) must decide at each draw whether or not to
choose that draw. The goal is to maximize the probability of choosing the draw
that corresponds to the maximum of the sample. In their calculations of the
optimal strategy, the optimal probability and the asymptotic probability, they
assume that after a draw x the probability that the next i numbers are all
smaller than x is $x^i$; but this fails to recognize that continuing the game
(not choosing a draw because it is lower than a cutoff and waiting for the next
draw) conditions the distribution of the following i numbers such that their
expected maximum is higher than i/(i+1). The problem is now redefined with each
draw leading to a win, a false positive loss, a false negative loss and a
continuation. An exact formula for these probabilities is deduced, both for the
general case of n-1 different indifference numbers (assuming 0 as the last
cutoff) and the particular case of the same indifference number for all cutoffs
but the last. An approximation is found that preserves the main characteristics
of the optimal solution (slow decay of win probability, quick decay of false
positives and linear decay of false negatives). This new solution and the
original Gilbert and Mosteller formula are compared against simulations, and
their asymptotic behavior is studied.
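A hedged Monte Carlo sketch of the game (illustrative only; the cutoffs below follow a naive rule, accepting a running maximum when it is more likely than not to survive the remaining draws, and are not Gilbert and Mosteller's optimal indifference numbers):

```python
import random

def play(n, cutoffs, rng):
    """One game: draw n U(0,1) values; accept draw i if it is a running
    maximum and exceeds cutoffs[i]. Win iff the accepted draw is the
    overall maximum of the sample."""
    xs = [rng.random() for _ in range(n)]
    best = max(xs)
    seen = 0.0
    for i, x in enumerate(xs):
        if x > seen and x >= cutoffs[i]:
            return x == best
        seen = max(seen, x)
    return False  # never accepted a draw: counts as a loss

def win_rate(n, cutoffs, trials=20000, seed=1):
    rng = random.Random(seed)
    return sum(play(n, cutoffs, rng) for _ in range(trials)) / trials

n = 10
# Naive cutoff: accept x at position i only if x**(n-1-i) >= 1/2, i.e. all
# remaining draws fall below x with probability at least one half.
cutoffs = [0.5 ** (1.0 / (n - 1 - i)) if i < n - 1 else 0.0 for i in range(n)]
rate = win_rate(n, cutoffs)
```

Comparing such simulated win rates against the closed-form probabilities is exactly the kind of check the abstract describes.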
|
1805.11556v1
|
2018-06-28
|
From clusters to queries: exploiting uncertainty in the modularity landscape of complex networks
|
Uncovering latent community structure in complex networks is a field that has
received an enormous amount of attention. Unfortunately, whilst potentially
very powerful, unsupervised methods for uncovering labels based on topology
alone has been shown to suffer from several difficulties. For example, the
search space for many module extraction approaches, such as the modularity
maximisation algorithm, appears to be extremely glassy, with many high valued
solutions that lack any real similarity to one another. However, in this paper
we argue that this is not a flaw with the modularity maximisation algorithm
but, rather, information that can be used to aid the context specific
classification of functional relationships between vertices. Formally, we
present an approach for generating a high value modularity consensus space for
a network, based on the ensemble space of locally optimal modular partitions.
We then use this approach to uncover latent relationships, given small query
sets. The methods developed in this paper are applied to biological and social
datasets with ground-truth label data, using a small number of examples used as
seed sets to uncover relationships. When tested on both real and synthetic
datasets our method is shown to achieve high levels of classification accuracy
in a context specific manner, with results comparable to random walk with
restart methods.
|
1806.10904v1
|
2018-07-05
|
Veloce Rosso: Australia's new precision radial velocity spectrograph
|
Veloce is an ultra-stable fibre-fed R4 echelle spectrograph for the 3.9 m
Anglo-Australian Telescope. The first channel to be commissioned, Veloce
'Rosso', utilises multiple low-cost design innovations to obtain Doppler
velocities for Sun-like and M-dwarf stars at <1 m/s precision. The spectrograph
has an asymmetric white-pupil format with a 100-mm beam diameter, delivering
R>75,000 spectra over a 580-950 nm range for the Rosso channel. Simultaneous
calibration is provided by a single-mode pulsed laser frequency comb in tandem
with a traditional arc lamp. A bundle of 19 object fibres provides a 2.4" field
of view for full sampling of stellar targets from the AAT site. Veloce is
housed in dual environmental enclosures that maintain positive air pressure at
a stability of +/-0.3 mbar, with a thermal stability of +/-0.01 K on the
optical bench. We present a technical overview and early performance data from
Australia's next major spectroscopic machine.
|
1807.01938v1
|
2018-07-19
|
Generalized Metric Repair on Graphs
|
Many modern data analysis algorithms either assume that or are considerably
more efficient if the distances between the data points satisfy a metric. These
algorithms include metric learning, clustering, and dimensionality reduction.
Because real data sets are noisy, the similarity measures often fail to satisfy
a metric. For this reason, Gilbert and Jain [11] and Fan, et al. [8] introduce
the closely related problems of $\textit{sparse metric repair}$ and
$\textit{metric violation distance}$. The goal of each problem is to repair as
few distances as possible to ensure that the distances between the data points
satisfy a metric. We generalize these problems so as to no longer require all
the distances between the data points. That is, we consider a weighted graph
$G$ with corrupted weights w and our goal is to find the smallest number of
modifications to the weights so that the resulting weighted graph distances
satisfy a metric. This problem is a natural generalization of the sparse metric
repair problem and is more flexible as it takes into account different
relationships amongst the input data points. As in previous work, we
distinguish amongst the types of repairs permitted (decrease, increase, and
general repairs). We focus on the increase and general versions and establish
hardness results and show the inherent combinatorial structure of the problem.
We then show that if we restrict to the case when $G$ is a chordal graph, then
the problem is fixed parameter tractable. We also present several classes of
approximation algorithms. These include and improve upon previous metric repair
algorithms for the special case when $G = K_n$.
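The object being repaired can be checked cheaply: an edge weight is inconsistent with a metric exactly when it exceeds the shortest-path distance between its endpoints through the rest of the graph. A sketch of that violation check (not a repair algorithm) via Floyd-Warshall; `violated_edges` is an illustrative helper name.

```python
def shortest_paths(n, edges):
    """All-pairs shortest paths (Floyd-Warshall) on an undirected weighted graph."""
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def violated_edges(n, edges, eps=1e-9):
    """Edges whose weight exceeds the graph distance between their endpoints,
    i.e. the violations a metric repair must remove."""
    d = shortest_paths(n, edges)
    return [(u, v) for u, v, w in edges if w > d[u][v] + eps]

# Triangle with one corrupted weight: the heavy edge breaks the metric.
print(violated_edges(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 10.0)]))  # → [(0, 2)]
```

Note that only cycles in $G$ constrain the weights, which is why the problem's structure (e.g. chordality) matters for tractability.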
|
1807.07619v1
|
2018-10-17
|
Precipitating Ordered Skyrmion Lattices from Helical Spaghetti
|
Magnetic skyrmions have been the focus of intense research due to their
potential applications in ultra-high density data and logic technologies, as
well as for the unique physics arising from their antisymmetric exchange term
and topological protections. In this work we prepare a chiral jammed state in
chemically disordered (Fe, Co)Si consisting of a combination of
randomly-oriented magnetic helices, labyrinth domains, rotationally disordered
skyrmion lattices and/or isolated skyrmions. Using small angle neutron
scattering (SANS), we demonstrate a symmetry-breaking magnetic field sequence
which disentangles the jammed state, resulting in an ordered, oriented skyrmion
lattice. The same field sequence was performed on a sample of powdered Cu2OSeO3
and again yielded an ordered, oriented skyrmion lattice, despite the relatively
non-interacting nature of the grains. Micromagnetic simulations confirm the
promotion of a preferred skyrmion lattice orientation after field treatment,
independent of the initial configuration, suggesting this effect may be
universally applicable. Energetics extracted from the simulations suggest that
approaching a magnetic hard axis causes the moments to diverge away from the
magnetic field, increasing the Dzyaloshinskii-Moriya energy, followed
subsequently by a lattice re-orientation. The ability to facilitate an emergent
ordered magnetic lattice with long-range orientation in a variety of materials
despite overwhelming internal disorder enables the study of skyrmions even in
imperfect powdered or polycrystalline systems and greatly improves the ability
to rapidly screen candidate skyrmion materials.
|
1810.07631v1
|
2018-11-09
|
Post-randomization Biomarker Effect Modification in an HIV Vaccine Clinical Trial
|
While the HVTN 505 trial showed no overall efficacy of the tested vaccine to
prevent HIV infection over placebo, previous studies, biological theories, and
the finding that immune response markers strongly correlated with infection in
vaccine recipients generated the hypothesis that a qualitative interaction
occurred. This hypothesis can be assessed with statistical methods for studying
treatment effect modification by an intermediate response variable (i.e.,
principal stratification effect modification (PSEM) methods). However,
available PSEM methods make untestable structural risk assumptions, such that
assumption-lean versions of PSEM methods are needed in order to surpass the
high bar of evidence to demonstrate a qualitative interaction. Fortunately, the
survivor average causal effect (SACE) literature is replete with
assumption-lean methods that can be readily adapted to the PSEM application for
the special case of a binary intermediate response variable. We map this
adaptation, opening up a host of new PSEM methods for a binary intermediate
variable measured via two-phase sampling, for a dichotomous or failure time
final outcome and including or excluding the SACE monotonicity assumption. The
new methods support that the vaccine partially protected vaccine recipients
with a high polyfunctional CD8+ T cell response, an important new insight for
the HIV vaccine field.
|
1811.03930v1
|
2019-03-22
|
Natural reward as the fundamental macroevolutionary force
|
Darwin's theory of evolution by natural selection does not predict long-term
progress or advancement, nor does it provide a useful way to define or
understand these concepts. Nevertheless, the history of life is marked by major
trends that appear progressive, and seemingly more advanced forms of life have
appeared. To reconcile theory and fact, evolutionists have proposed novel
theories that extend natural selection to levels and time frames not justified
by the original structure of Darwin's theory. To extend evolutionary theory
without violating the most basic tenets of Darwinism, I here identify a
separate struggle and an alternative evolutionary force. Owing to the abundant
free energy in our universe, there is a struggle for supremacy that naturally
rewards those that are first to invent novelties that allow exploitation of
untapped resources. This natural reward comes in form of a temporary monopoly,
which is granted to those who win a competitive race to innovate. By analogy to
human economies, natural selection plays the role of nature's inventor,
gradually fashioning inventions to the situation at hand, while natural reward
plays the role of nature's entrepreneur, choosing which inventions to first
disseminate to large markets. Natural reward leads to progress through a
process of invention-conquest macroevolution, in which the dual forces of
natural selection and natural reward create and disseminate major innovations.
Over vast time frames, natural reward drives the advancement of life by a
process of extinction-replacement megaevolution that releases constraints on
progress and increases the innovativeness of life.
|
1903.09567v1
|
2019-07-15
|
Entanglement-assisted Quantum Codes from Algebraic Geometry Codes
|
Quantum error correcting codes play the role of suppressing noise and
decoherence in quantum systems by introducing redundancy. Some strategies can
be used to improve the parameters of these codes. For example, entanglement can
provide a way for quantum error correcting codes to achieve higher rates than
the one obtained via the traditional stabilizer formalism. Such codes are
called entanglement-assisted quantum (QUENTA) codes. In this paper, we use
algebraic geometry codes to construct several families of QUENTA codes via the
Euclidean and the Hermitian construction. Two of the families created have
maximal entanglement and have quantum Singleton defect equal to zero or one.
Comparing the other families with the codes with the respective quantum
Gilbert-Varshamov bound, we show that our codes have a rate that surpasses that
bound. At the end, asymptotically good towers of linear complementary dual
codes are used to obtain asymptotically good families of maximal entanglement
QUENTA codes. Furthermore, a simple comparison with the quantum
Gilbert-Varshamov bound demonstrates that using our construction it is possible
to create an asymptotically good family of QUENTA codes that exceeds this bound.
|
1907.06357v2
|
2019-09-06
|
Parameter identification for the Landau-Lifshitz-Gilbert equation in Magnetic Particle Imaging
|
Magnetic particle imaging (MPI) is a tracer-based technique for medical
imaging where the tracer consists of iron oxide nanoparticles. The key idea is
to measure the particle response to a temporally changing external magnetic
field to compute the spatial concentration of the tracer inside the object. A
decent mathematical model demands a data-driven computation of the system
function which does not only describe the measurement geometry but also encodes
the interaction of the particles with the external magnetic field. The physical
model of this interaction is given by the Landau-Lifshitz-Gilbert (LLG)
equation. The determination of the system function can be seen as an inverse
problem of its own which can be interpreted as a calibration problem for MPI.
In this contribution the calibration problem is formulated as an inverse
parameter identification problem for the LLG equation. We give a detailed
analysis of the direct as well as the inverse problem in an all-at-once as well
as in a reduced setting. The analytical results yield a deeper understanding of
inverse problems connected to the LLG equation and provide a starting point for
the development of robust numerical solution methods in MPI.
|
1909.02912v1
|
2019-11-06
|
Automated Left Ventricle Dimension Measurement in 2D Cardiac Ultrasound via an Anatomically Meaningful CNN Approach
|
Two-dimensional echocardiography (2DE) measurements of left ventricle (LV)
dimensions are highly significant markers of several cardiovascular diseases.
These measurements are often used in clinical care despite suffering from large
variability between observers. This variability is due to the challenging
nature of accurately finding the correct temporal and spatial location of
measurement endpoints in ultrasound images. These images often contain fuzzy
boundaries and varying reflection patterns between frames. In this work, we
present a convolutional neural network (CNN) based approach to automate 2DE LV
measurements. Treating the problem as a landmark detection problem, we propose
a modified U-Net CNN architecture to generate heatmaps of likely coordinate
locations. To improve the network performance we use anatomically meaningful
heatmaps as labels and train with a multi-component loss function. Our network
achieves 13.4%, 6%, and 10.8% mean percent error on intraventricular septum
(IVS), LV internal dimension (LVID), and LV posterior wall (LVPW) measurements
respectively. The design outperforms other networks and matches or approaches
intra-analyser expert error.
|
1911.02448v1
|
2019-11-12
|
Linear-mode avalanche photodiode arrays for low-noise near-infrared imaging in space
|
Astronomical observations often require the detection of faint signals in the
presence of noise, and the near-infrared regime is no exception. In particular,
where the application has short exposure time constraints, we are frequently
and unavoidably limited by the read noise of a system. A recent and
revolutionary development in detector technology is that of linear-mode
avalanche photodiode (LmAPD) arrays. By the introduction of a signal
multiplication region within the device, effective read noise can be reduced to
<0.2 e-, enabling the detection of very small signals at frame rates of up to 1
kHz. This is already impacting ground-based astronomy in high-speed
applications such as wavefront sensing and fringe tracking, but has not yet
been exploited for scientific space missions. We present the current status of
a collaboration with Leonardo MW - creators of the 'SAPHIRA' LmAPD array - as
we work towards the first in-orbit demonstration of a SAPHIRA device in 'Emu',
a hosted payload on the International Space Station. The Emu mission will fully
benefit from the 'noiseless' gains offered by LmAPD technology as it produces a
time delay integration photometric sky survey at 1.4 microns, using compact
readout electronics developed at the Australian National University. This is
just one example of a use case that could not be achieved with conventional
infrared sensors.
|
1911.04684v1
|
2020-03-17
|
Maximizing Influence-based Group Shapley Centrality
|
One key problem in network analysis is the so-called influence maximization
problem, which consists in finding a set $S$ of at most $k$ seed users, in a
social network, maximizing the spread of information from $S$. This paper
studies a related but slightly different problem: We want to find a set $S$ of
at most $k$ seed users that maximizes the spread of information, when $S$ is
added to an already pre-existing - but unknown - set of seed users $T$. We
consider such scenario to be very realistic. Assume a central entity wants to
spread a piece of news, while having a budget to influence $k$ users. This
central authority may know that some users are already aware of the information
and are going to spread it anyhow. The identity of these users is, however,
completely unknown. We model this optimization problem using the Group Shapley
value, a well-founded concept from cooperative game theory. While the standard
influence maximization problem is easy to approximate within a factor
$1-1/e-\epsilon$ for any $\epsilon>0$, assuming common computational complexity
conjectures, we obtain strong hardness of approximation results for the problem
at hand in this paper. Maybe most prominently, we show that it cannot be
approximated within $1/n^{o(1)}$ under the Gap Exponential Time Hypothesis.
Hence, it is unlikely to achieve anything better than a polynomial factor
approximation. Nevertheless, we show that a greedy algorithm can achieve a
factor of $\frac{1-1/e}{k}-\epsilon$ for any $\epsilon>0$, showing that not all
is lost in settings where $k$ is bounded.
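For context, the classical greedy baseline that the $\frac{1-1/e}{k}-\epsilon$ guarantee adapts can be sketched with Monte Carlo estimates of independent-cascade spread. This is a sketch of the standard algorithm under assumed parameters (edge probability `p`, trial count), not the Group Shapley variant studied in the paper.

```python
import random

def spread(graph, seeds, p, rng):
    """One independent-cascade simulation; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, p=0.5, trials=200, seed=0):
    """Greedy marginal-gain seed selection with Monte Carlo spread estimates."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for u in sorted(nodes - set(chosen)):
            gain = sum(spread(graph, chosen + [u], p, rng)
                       for _ in range(trials)) / trials
            if gain > best_gain:
                best, best_gain = u, gain
        chosen.append(best)
    return chosen

# On a star graph the hub is the best single seed.
print(greedy_seeds({0: [1, 2, 3, 4]}, k=1))  # → [0]
```

In the paper's setting the unknown pre-existing seed set $T$ enters the objective, which is what breaks the submodularity guarantees this baseline relies on.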
|
2003.07966v1
|
2020-04-24
|
Single-electron operation of a silicon-CMOS 2x2 quantum dot array with integrated charge sensing
|
The advanced nanoscale integration available in silicon complementary
metal-oxide-semiconductor (CMOS) technology provides a key motivation for its
use in spin-based quantum computing applications. Initial demonstrations of
quantum dot formation and spin blockade in CMOS foundry-compatible devices are
encouraging, but results are yet to match the control of individual electrons
demonstrated in university-fabricated multi-gate designs. We show here that the
charge state of quantum dots formed in a CMOS nanowire device can be sensed by
using floating gates to electrostatically couple it to a remote single electron
transistor (SET) formed in an adjacent nanowire. By biasing the nanowire and
gates of the remote SET with respect to the nanowire hosting the quantum dots,
we controllably form ancillary quantum dots under the floating gates, thus
enabling the demonstration of independent control over charge transitions in a
quadruple (2x2) quantum dot array. This device overcomes the limitations
associated with measurements based on tunnelling transport through the dots and
permits the sensing of all charge transitions, down to the last electron in
each dot. We use effective mass theory to investigate the necessary
optimization of the device parameters in order to achieve the tunnel rates
required for spin-based quantum computation.
|
2004.11558v1
|
2020-08-22
|
Measurement of magnetic fields using the voltage generated by a vibrating wire
|
A vibrating wire may be used as an instrument with a variety of applications,
one of which is the measurement of magnetic fields. Often, the magnetic fields
are determined by measuring the amplitude of the wire vibration under the
action of a Lorentz force. Though generally adequate, this approach may be
inconvenient in certain circumstances. One of these occurs when it is necessary
to measure the amplitude of high-frequency vibration, as the amplitude is
expected to decrease linearly with frequency, and thus becomes harder to
measure. Another example may be found in situations where the sensor must
operate over a wide range of vibration frequencies. In this case the sensor
will be unresponsive to specific frequencies of wire vibration, which are
determined by the placement of the sensor. This means that for the instrument
to be robust, the sensor must be precisely mobile, or multiple sensors must be
used.
Here a technique which may be used to supplement the displacement sensor is
described. This technique makes use of the voltage generated by the motion of
the wire in the magnetic field under measurement. It is predicted that the
technique may be more suitable for measurements requiring high frequency
vibration, and is sensitive to all frequencies of vibration. Measurements of a
magnetic field obtained using this technique are compared to those found using
only a displacement sensor, and the benefits and drawbacks of the technique are
discussed.
|
2008.09898v1
|
2020-11-25
|
Domain wall motion in axially symmetric spintronic nanowires
|
This article is concerned with the dynamics of magnetic domain walls (DWs) in
nanowires as solutions to the classical Landau-Lifschitz-Gilbert equation
augmented by a typically non-variational Slonczewski term for spin-torque
effects. Taking applied field and spin-polarization as the primary parameters,
we study dynamic stability as well as selection mechanisms analytically and
numerically in an axially symmetric setting. Concerning the stability of the
DWs' asymptotic states, we distinguish the bistable (both stable) and the
monostable (one unstable, one stable) parameter regime. In the bistable regime,
we extend known stability results of an explicit family of precessing solutions
and identify a relation of applied field and spin-polarization for standing
DWs. We verify that this family is convectively unstable into the monostable
regime, thus forming so-called pushed fronts, before turning absolutely
unstable. In the monostable regime, we present explicit formulas for the
so-called absolute spectrum of more general matrix operators. This allows us to
relate translation and rotation symmetries to the position of the singularities
of the pointwise Green's function. Thereby, we determine the linear selection
mechanism for the asymptotic velocity and frequency of DWs and corroborate
these by long-time numerical simulations. All these results include the axially
symmetric Landau-Lifschitz-Gilbert equation.
|
2012.01343v1
|
2020-12-08
|
Sparse Correspondence Analysis for Contingency Tables
|
Since the introduction of the lasso in regression, various sparse methods
have been developed in an unsupervised context like sparse principal component
analysis (s-PCA), sparse canonical correlation analysis (s-CCA) and sparse
singular value decomposition (s-SVD). These sparse methods combine feature
selection and dimension reduction. One advantage of s-PCA is to simplify the
interpretation of the (pseudo) principal components since each one is expressed
as a linear combination of a small number of variables. The disadvantages lie
on the one hand in the difficulty of choosing the number of non-zero
coefficients in the absence of a well established criterion and on the other
hand in the loss of orthogonality for the components and/or the loadings. In
this paper we propose sparse variants of correspondence analysis (CA) for large
contingency tables like document-term matrices used in text mining, together
with pPMD, a deflation technique derived from projected deflation in s-PCA. We
use the fact that CA is a double weighted PCA (for rows and columns) or a
weighted SVD, as well as a canonical correlation analysis of indicator
variables. Applying s-CCA or s-SVD allows us to sparsify both row and column
weights. The user may tune the level of sparsity of rows and columns and
optimize it according to some criterion, and even decide that no sparsity is
needed for rows (or columns) by relaxing one sparsity constraint. The latter is
equivalent to applying s-PCA to matrices of row (or column) profiles.
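The s-SVD building block can be illustrated by a rank-1 penalized decomposition: alternate power iterations with soft-thresholding of the singular vectors. This is a simplified, unweighted sketch (the paper's method additionally applies CA's row and column weights); the penalty parameters are illustrative.

```python
def soft(x, t):
    """Elementwise soft-thresholding, the lasso-style sparsity operator."""
    return [max(abs(xi) - t, 0.0) * (1.0 if xi >= 0 else -1.0) for xi in x]

def normalize(x):
    s = sum(xi * xi for xi in x) ** 0.5
    return [xi / s for xi in x] if s > 0 else x

def sparse_rank1(X, lam_u=0.0, lam_v=0.3, iters=50):
    """Rank-1 sparse SVD by alternating soft-thresholded power iterations."""
    n, m = len(X), len(X[0])
    v = normalize([1.0] * m)
    u = [0.0] * n
    for _ in range(iters):
        u = normalize(soft([sum(X[i][j] * v[j] for j in range(m))
                            for i in range(n)], lam_u))
        v = normalize(soft([sum(X[i][j] * u[i] for i in range(n))
                            for j in range(m)], lam_v))
    return u, v

# Signal lives in the first two columns; the sparse right vector zeroes the rest.
X = [[2.0, 2.0, 0.1, 0.1] for _ in range(4)]
u, v = sparse_rank1(X)
print([round(vj, 3) for vj in v])  # → [0.707, 0.707, 0.0, 0.0]
```

Setting `lam_v=0.0` recovers an ordinary power iteration, which is one way to see the sparsity level as a tunable knob rather than a fixed criterion.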
|
2012.04271v1
|
2020-12-27
|
Vacuum Stability Conditions for Higgs Potentials with $SU(2)_L$ Triplets
|
Tree-level dynamical stability of scalar field potentials in renormalizable
theories can in principle be expressed in terms of positivity conditions on
quartic polynomial structures. However, these conditions cannot always be cast
in a fully analytical resolved form, involving only the couplings and being
valid for all field directions. In this paper we consider such forms in three
physically motivated models involving $SU(2)$ triplet scalar fields: the
Type-II seesaw model, the Georgi-Machacek model, and a generalized two-triplet
model. A detailed analysis of the latter model allows us to establish the full set
of necessary and sufficient boundedness from below conditions. These can serve
as a guide, together with unitarity and vacuum structure constraints, for
consistent phenomenological (tree-level) studies. They also provide a seed for
improved loop-level conditions, and encompass in particular the leading ones
for the more specific Georgi-Machacek case. Incidentally, we present complete
proofs of various properties and also derive general positivity conditions on
quartic polynomials that are equivalent but much simpler than the ones used in
the literature.
|
2012.13947v2
|
2021-03-25
|
Phases of Small Worlds: A Mean Field Formulation
|
A network is said to have the properties of a small world if a suitably
defined average distance between any two nodes is proportional to the logarithm
of the number of nodes, $N$. In this paper, we present a novel derivation of
the small-world property for Gilbert-Erd\"os-Renyi random networks. We employ a
mean field approximation that permits the analytic derivation of the
distribution of shortest paths that exhibits logarithmic scaling away from the
phase transition, inferable via a suitably interpreted order parameter. We
begin by framing the problem in generality with a formal generating functional
for undirected weighted random graphs with arbitrary disorder, recovering the
result that the free energy associated with an ensemble of Gilbert graphs
corresponds to a system of non-interacting fermions identified with the edge
states. We then present a mean field solution for this model and extend it to
more general realizations of network randomness. For a two family class of
stochastic block models that we refer to as dimorphic networks, which allow for
links within the different families to be drawn from two independent discrete
probability distributions, we find the mean field approximation maps onto a
spin chain combinatorial problem and again yields useful approximate analytic
expressions for mean path lengths. Dimorphic networks exhibit a richer phase
structure, where distinct small world regimes separate in analogy to the
spinodal decomposition of a fluid. We find that it is possible to induce small
world behavior in sub-networks that by themselves would not be in the
small-world regime.
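The small-world property itself is easy to probe numerically: in a Gilbert $G(N,p)$ graph above the transition, mean BFS distances grow roughly like $\log N$. A stdlib sketch with an assumed mean degree of 8; this is a sanity check, not the paper's mean field calculation.

```python
import math
import random
from collections import deque

def gilbert_graph(n, p, rng):
    """Gilbert G(n, p) random graph: each possible edge present independently."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def mean_distance(adj, src):
    """Mean BFS distance from src to every node it can reach."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    reached = [d for node, d in dist.items() if node != src]
    return sum(reached) / len(reached) if reached else 0.0

rng = random.Random(1)
for n in (200, 400, 800):
    adj = gilbert_graph(n, 8.0 / n, rng)             # mean degree ~ 8
    hub = max(range(n), key=lambda i: len(adj[i]))   # surely in the giant component
    print(n, round(mean_distance(adj, hub), 2), round(math.log(n), 2))
```

Doubling $N$ adds roughly a constant to the mean distance, the logarithmic scaling that defines the small-world regime.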
|
2103.14001v2
|
2021-05-04
|
Evaluating Metrics for Standardized Benchmarking of Remote Presence Systems
|
To reduce the need for business-related air travel and its associated energy
consumption and carbon footprint, the U.S. Department of Energy's ARPA-E is
supporting a research project called SCOTTIE - Systematic Communication
Objectives and Telecommunications Technology Investigations and Evaluations.
SCOTTIE tests virtual and augmented reality platforms in a functional
comparison with face-to-face (FtF) interactions to derive travel replacement
thresholds for common industrial training scenarios. The primary goal of Study
1 is to match the communication effectiveness and learning outcomes obtained
from a FtF control using virtual reality (VR) training scenarios in which a
local expert with physical equipment trains a remote apprentice without
physical equipment immediately present. This application scenario is
commonplace in industrial settings where access to expensive equipment and
materials is limited and a number of apprentices must travel to a central
location in order to undergo training. Supplying an empirically validated
virtual training alternative constitutes a readily adoptable use-case for
businesses looking to reduce time and monetary expenditures associated with
travel. The three virtual presence technologies were strategically selected
for feasibility, relatively low cost, business relevance, and potential for
impact through transition. The authors suggest
that the results of this study might generalize to the challenge of virtual
conferences.
|
2105.01772v1
|
2021-07-12
|
Partially Concatenated Calderbank-Shor-Steane Codes Achieving the Quantum Gilbert-Varshamov Bound Asymptotically
|
In this paper, we utilize a concatenation scheme to construct new families of
quantum error correction codes achieving the quantum Gilbert-Varshamov (GV)
bound asymptotically. We concatenate alternant codes with any linear code
achieving the classical GV bound to construct Calderbank-Shor-Steane (CSS)
codes. We show that the concatenated code can achieve the quantum GV bound
asymptotically and can approach the Hashing bound for asymmetric Pauli
channels. By combining Steane's enlargement construction of CSS codes, we derive
a family of enlarged stabilizer codes achieving the quantum GV bound for
enlarged CSS codes asymptotically. As applications, we derive two families of
fast encodable and decodable CSS codes with parameters
$\mathscr{Q}_1=[[N,\Omega(\sqrt{N}),\Omega( \sqrt{N})]],$ and
$\mathscr{Q}_2=[[N,\Omega(N/\log N),\Omega(N/\log N)/\Omega(\log N)]].$ We show
that $\mathscr{Q}_1$ can be encoded very efficiently by circuits of size $O(N)$
and depth $O(\sqrt{N})$. For an input error syndrome, $\mathscr{Q}_1$ can
correct any adversarial error of weight up to half the minimum distance bound
in $O(N)$ time. $\mathscr{Q}_1$ can also be decoded in parallel in
$O(\sqrt{N})$ time by using $O(\sqrt{N})$ classical processors. For an input
error syndrome, we prove that $\mathscr{Q}_2$ can correct a linear number of
${X}$-errors with high probability and an almost linear number of ${Z}$-errors
in $O(N )$ time. Moreover, $\mathscr{Q}_2$ can be decoded in parallel in
$O(\log(N))$ time by using $O(N)$ classical processors.
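Numerically, the asymptotic quantum GV bound for CSS codes is commonly written as $R \ge 1 - 2h(\delta)$ with $h$ the binary entropy, giving a zero-rate threshold near relative distance $\delta \approx 0.11$. A small sketch under that assumed standard form (the paper's precise bound may differ in detail):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def gv_rate_css(delta):
    """Assumed asymptotic quantum GV rate for CSS codes: 1 - 2 h(delta)."""
    return 1.0 - 2.0 * h2(delta)

# The achievable rate stays positive up to a relative distance near 0.11.
print(round(gv_rate_css(0.05), 3), round(gv_rate_css(0.11), 4))
```

A concatenated family "achieving the bound asymptotically" means its rate-distance pairs approach this curve as the block length $N$ grows.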
|
2107.05174v2
|
2021-07-12
|
Assessment of Immune Correlates of Protection via Controlled Vaccine Efficacy and Controlled Risk
|
Immune correlates of protection (CoPs) are immunologic biomarkers accepted as
a surrogate for an infectious disease clinical endpoint and thus can be used
for traditional or provisional vaccine approval. To study CoPs in randomized,
placebo-controlled trials, correlates of risk (CoRs) are first assessed in
vaccine recipients. This analysis does not assess causation, as a CoR may fail
to be a CoP. We propose a causal CoP analysis that estimates the controlled
vaccine efficacy curve across biomarker levels $s$, $CVE(s)$, equal to one
minus the ratio of the controlled-risk curve $r_C(s)$ at $s$ and placebo risk,
where $r_C(s)$ is causal risk if all participants are assigned vaccine and the
biomarker is set to $s$. The criterion for a useful CoP is wide variability of
$CVE(s)$ in $s$. Moreover, estimation of $r_C(s)$ is of interest in itself,
especially in studies without a placebo arm. For estimation of $r_C(s)$,
measured confounders can be adjusted for by any regression method that
accommodates missing biomarkers, to which we add sensitivity analysis to
quantify robustness of CoP evidence to unmeasured confounding. Application to
two harmonized phase 3 trials supports that 50% neutralizing antibody titer has
value as a controlled vaccine efficacy CoP for virologically confirmed dengue
(VCD): in CYD14 the point estimate (95% confidence interval) for $CVE(s)$
accounting for measured confounders and building in conservative margin for
unmeasured confounding increases from 29.6% (95% CI 3.5 to 45.9) at titer 1:36
to 78.5% (95% CI 67.9 to 86.8) at titer 1:1200; these estimates are 17.4% (95%
CI -14.4 to 36.5) and 84.5% (95% CI 79.6 to 89.1) for CYD15.
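The estimand itself is a one-liner once $r_C(s)$ and the placebo risk are in hand; the statistical work lies in estimating $r_C(s)$ with confounding adjustment. A sketch with illustrative (made-up) numbers, not trial data:

```python
def controlled_ve(r_c_s, placebo_risk):
    """Controlled vaccine efficacy: CVE(s) = 1 - r_C(s) / placebo risk."""
    return 1.0 - r_c_s / placebo_risk

# Illustrative: controlled risk at marker level s is half the placebo risk.
print(controlled_ve(0.05, 0.10))  # → 0.5
```

The CoP criterion in the abstract then amounts to `controlled_ve` varying widely as the biomarker level $s$ is swept.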
|
2107.05734v1
|
2021-07-23
|
Efficient nonparametric estimation of the covariate-adjusted threshold-response function, a support-restricted stochastic intervention
|
Identifying a biomarker or treatment-dose threshold that marks a specified
level of risk is an important problem, especially in clinical trials. This
risk, viewed as a function of thresholds and possibly adjusted for covariates,
we call the threshold-response function. Extending the work of Donovan, Hudgens
and Gilbert (2019), we propose a nonparametric efficient estimator for the
covariate-adjusted threshold-response function, which utilizes machine learning
and Targeted Minimum-Loss Estimation (TMLE). We additionally propose a more
general estimator, based on sequential regression, that also applies when there
is outcome missingness. We show that the threshold-response for a given
threshold may be viewed as the expected outcome under a stochastic intervention
where all participants are given a treatment dose above the threshold. We prove
the estimator is efficient and characterize its asymptotic distribution. A
method to construct simultaneous 95% confidence bands for the
threshold-response function and its inverse is given. Furthermore, we discuss
how to adjust our estimator when the treatment or biomarker is
missing-at-random, as is the case in clinical trials with biased sampling
designs, using inverse-probability-weighting. The methods are assessed in a
diverse set of simulation settings with rare outcomes and cumulative
case-control sampling. The methods are employed to estimate neutralizing
antibody thresholds for virologically confirmed dengue risk in the CYD14 and
CYD15 dengue vaccine trials.
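As a point of reference only, the unadjusted plug-in for the threshold-response, the mean outcome among participants whose dose exceeds the threshold, can be written in a few lines; the paper's TMLE estimator adds covariate adjustment, machine learning, and missingness handling that this sketch omits.

```python
def naive_threshold_response(doses, outcomes, v):
    """Unadjusted plug-in: mean outcome among participants with dose >= v."""
    above = [y for a, y in zip(doses, outcomes) if a >= v]
    return sum(above) / len(above)

# Made-up data: the outcome rate drops once the marker clears the threshold.
doses = [10, 20, 40, 80, 160]
infected = [1, 1, 0, 0, 0]
print(naive_threshold_response(doses, infected, 40))  # → 0.0
```

Sweeping `v` over the observed marker range traces out the (unadjusted) threshold-response curve whose efficient, adjusted counterpart the paper estimates.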
|
2107.11459v2
|
2021-10-15
|
The radio SZ effect as a probe of the cosmological radio background
|
If there is a substantial cosmological radio background, there should be a
radio Sunyaev-Zeldovich (SZ) effect that goes along with it. The radio
background Comptonization leads to a slight photon excess at all wavelengths,
while Comptonization of the CMB at low frequencies leads to a decrement. For
levels of the radio background consistent with observations, these effects
cancel each other around $\nu\simeq 735~$MHz, with an excess at lower
frequencies and a decrement at higher frequencies. Assuming a purely
cosmological origin of the observed ARCADE radio excess, at $\nu \lesssim
20\,{\rm GHz}$ the signal scales as $\Delta T / T_{\rm CMB}\simeq 2\,y\left[
(\nu/735\,{\rm MHz})^{-2.59}-1\right]$ with frequency and the Compton-$y$
parameter of the cluster. For a typical cluster, the total radio SZ signal is
at the level of $\Delta T\simeq 1\,{\rm mK}$ around the null, with a steep
scaling towards radio frequencies. This is above current raw sensitivity limits
for many radio facilities at these wavelengths, providing a unique way to
confirm the cosmological origin of the ARCADE excess and probe its properties
(e.g., redshift dependence and isotropy). We also give an expression to compute
the radio-analogue of the kinematic SZ effect, highlighting that this might
provide a new tool to probe large-scale velocity fields and the cosmic
evolution of the radio background.
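The quoted scaling can be evaluated directly to locate the null and the sign change; a small sketch of the abstract's formula (valid for $\nu \lesssim 20$ GHz), with an illustrative Compton-$y$:

```python
def radio_sz_dT_over_T(nu_mhz, y):
    """Fractional radio SZ signal from the quoted scaling:
    Delta T / T_CMB = 2 y [ (nu / 735 MHz)^(-2.59) - 1 ]."""
    return 2.0 * y * ((nu_mhz / 735.0) ** -2.59 - 1.0)

y = 1e-4  # illustrative Compton-y for a typical cluster
for nu in (300.0, 735.0, 5000.0):  # excess below the null, zero at it, decrement above
    print(nu, radio_sz_dT_over_T(nu, y))
```

The steep $\nu^{-2.59}$ dependence is what makes the signal grow rapidly towards low radio frequencies.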
|
2110.08373v1
|
2021-10-20
|
No Transits of Proxima Centauri Planets in High-Cadence TESS Data
|
Proxima Centauri is our nearest stellar neighbor and one of the most
well-studied stars in the sky. In 2016, a planetary companion was detected
through radial velocity measurements. Proxima Centauri b has a minimum mass of
1.3 Earth masses and orbits with a period of 11.2 days at 0.05 AU from its
stellar host, and resides within the star's Habitable Zone. While recent work
has shown that Proxima Centauri b likely does not transit, given the value of
potential atmospheric observations via transmission spectroscopy of the closest
possible Habitable Zone planet, we reevaluate the possibility that Proxima
Centauri b is a transiting exoplanet using data from the Transiting Exoplanet
Survey Satellite (TESS). We use three sectors (Sectors 11, 12, and 38 at
2-minute cadence) of observations from TESS to search for planets. Proxima
Centauri is an extremely active M5.5 star, emitting frequent white-light
flares; we employ a novel method that includes modeling the stellar activity in
our planet search algorithm. We do not detect any planet signals. We inject
synthetic transiting planets into the TESS data and use this analysis to show that

Proxima Centauri b cannot be a transiting exoplanet with a radius larger than
0.4 R$_\oplus$. Moreover, we show that it is unlikely that any Habitable Zone
planets larger than Mars transit Proxima Centauri.
|
2110.10702v2
|
2021-12-20
|
Analysis of preintegration followed by quasi-Monte Carlo integration for distribution functions and densities
|
In this paper, we analyse a method for approximating the distribution
function and density of a random variable that depends in a non-trivial way on
a possibly high number of independent random variables, each with support on
the whole real line. Starting with the integral formulations of the
distribution and density, the method involves smoothing the original integrand
by preintegration with respect to one suitably chosen variable, and then
applying a suitable quasi-Monte Carlo (QMC) method to compute the integral of
the resulting smoother function. Interpolation is then used to reconstruct the
distribution or density on an interval. The preintegration technique is a
special case of conditional sampling, a method that has previously been applied
to a wide range of problems in statistics and computational finance. In
particular, the pointwise approximation studied in this work is a specific case
of the conditional density estimator previously considered in L'Ecuyer et al.,
arXiv:1906.04607. Our theory provides a rigorous regularity analysis of the
preintegrated function, which is then used to show that the errors of the
pointwise and interpolated estimators can both achieve nearly first-order
convergence. Numerical results support the theory.
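A toy illustration of the preintegration-then-QMC idea (not the paper's estimator): approximate F(t) = P(X1 + X2 + X3 <= t) for i.i.d. standard normals by integrating out X1 analytically, which yields the smooth conditional CDF Phi(t - x2 - x3), and then averaging that smooth function over a 2D low-discrepancy (Halton) point set mapped to normals. All names and the Halton choice are illustrative assumptions.

```python
# Toy preintegration + quasi-Monte Carlo sketch (illustrative only).
from statistics import NormalDist

N = NormalDist()  # standard normal: provides .cdf and .inv_cdf

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def F_hat(t, n=4096):
    """Preintegrated QMC estimate of P(X1 + X2 + X3 <= t)."""
    total = 0.0
    for i in range(1, n + 1):
        x2 = N.inv_cdf(halton(i, 2))   # map QMC point to N(0, 1)
        x3 = N.inv_cdf(halton(i, 3))
        total += N.cdf(t - x2 - x3)    # smooth preintegrated factor
    return total / n

# Reference: X1 + X2 + X3 ~ N(0, 3), so F(t) = Phi(t / sqrt(3)).
print(F_hat(1.0), NormalDist(0, 3 ** 0.5).cdf(1.0))
```

Because the preintegrated integrand is smooth, the QMC average converges much faster than plain Monte Carlo sampling of the indicator function would.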
|
2112.10308v5
|
2021-12-21
|
Exponential decay of intersection volume with applications on list-decodability and Gilbert-Varshamov type bound
|
We give some natural sufficient conditions for balls in a metric space to
have small intersection. Roughly speaking, this happens when the metric space
is (i) expanding and (ii) well-spread, and (iii) a certain random variable on
the boundary of a ball has a small tail. As applications, we show that the
volume of intersection of balls in Hamming, Johnson spaces and symmetric groups
decay exponentially as their centers drift apart. To verify condition (iii), we
prove some large deviation inequalities `on a slice' for functions with
Lipschitz conditions.
We then use these estimates on intersection volumes to
$\bullet$ obtain a sharp lower bound on list-decodability of random $q$-ary
codes, confirming a conjecture of Li and Wootters; and
$\bullet$ improve the classical bound of Levenshtein from 1971 on constant
weight codes by a factor linear in dimension, resolving a problem raised by
Jiang and Vardy.
Our probabilistic point of view also offers a unified framework to obtain
improvements on other Gilbert--Varshamov type bounds, giving conceptually
simple and calculation-free proofs for $q$-ary codes, permutation codes, and
spherical codes. Another consequence is a counting result on the number of
codes, showing ampleness of large codes.
|
2112.11274v2
|
2021-12-22
|
Preintegration is not smoothing when monotonicity fails
|
Preintegration is a technique for high-dimensional integration over
$d$-dimensional Euclidean space, which is designed to reduce an integral whose
integrand contains kinks or jumps to a $(d-1)$-dimensional integral of a smooth
function. The resulting smoothness allows efficient evaluation of the
$(d-1)$-dimensional integral by a Quasi-Monte Carlo or Sparse Grid method. The
technique is similar to conditional sampling in statistical contexts, but the
intention is different: in conditional sampling the aim is to reduce the
variance, rather than to achieve smoothness. Preintegration involves an initial
integration with respect to one well chosen real-valued variable. Griebel, Kuo,
Sloan [Math. Comp. 82 (2013), 383--400] and Griewank, Kuo, Le\"ovey, Sloan [J.
Comput. Appl. Maths. 344 (2018), 259--274] showed that the resulting
$(d-1)$-dimensional integrand is indeed smooth under appropriate conditions,
including a key assumption -- the integrand of the smooth function underlying
the kink or jump is strictly monotone with respect to the chosen special
variable when all other variables are held fixed. The question addressed in
this paper is whether this monotonicity property with respect to one well
chosen variable is necessary. We show here that the answer is essentially yes,
in the sense that without this property the resulting $(d-1)$-dimensional
integrand is generally not smooth, having square-root or other singularities.
|
2112.11621v1
|
2021-12-30
|
A causal inference framework for spatial confounding
|
Recently, addressing spatial confounding has become a major topic in spatial
statistics. However, the literature has provided conflicting definitions, and
many proposed definitions do not address the issue of confounding as it is
understood in causal inference. We define spatial confounding as the existence
of an unmeasured causal confounder with a spatial structure. We present a
causal inference framework for nonparametric identification of the causal
effect of a continuous exposure on an outcome in the presence of spatial
confounding. We propose double machine learning (DML), a procedure in which
flexible models are used to regress both the exposure and outcome variables on
confounders to arrive at a causal estimator with favorable robustness
properties and convergence rates, and we prove that this approach is consistent
and asymptotically normal under spatial dependence. As far as we are aware,
this is the first approach to spatial confounding that does not rely on
restrictive parametric assumptions (such as linearity, effect homogeneity, or
Gaussianity) for both identification and estimation. We demonstrate the
advantages of the DML approach analytically and in simulations. We apply our
methods and reasoning to a study of the effect of fine particulate matter
exposure during pregnancy on birthweight in California.
|
2112.14946v7
|
2022-01-20
|
Accurate modeling of grazing transits using umbrella sampling
|
Grazing transits present a special problem for statistical studies of
exoplanets. Even though grazing planetary orbits are rare (due to geometric
selection effects), for many low to moderate signal-to-noise cases, a
significant fraction of the posterior distribution is nonetheless consistent
with a grazing geometry. A failure to accurately model grazing transits can
therefore lead to biased inferences even for cases where the planet is not
actually on a grazing trajectory. With recent advances in stellar
characterization, the limiting factor for many scientific applications is now
the quality of available transit fits themselves, and so the time is ripe to
revisit the transit fitting problem. In this paper, we model exoplanet transits
using a novel application of umbrella sampling and a geometry-dependent
parameter basis that minimizes covariances between transit parameters. Our
technique splits the transit fitting problem into independent Monte Carlo
sampling runs for the grazing, non-grazing, and transition regions of the
parameter space, which we then recombine into a single joint posterior
probability distribution using a robust weighting scheme. Our method can be
trivially parallelized and so requires no increase in the wall clock time
needed for computations. Most importantly, our method produces accurate
estimates of exoplanet properties for both grazing and non-grazing orbits,
yielding more robust results than standard methods for many common star-planet
configurations.
|
2201.08350v1
|
2022-04-22
|
Reward Reports for Reinforcement Learning
|
Building systems that are good for society in the face of complex societal
effects requires a dynamic approach. Recent approaches to machine learning (ML)
documentation have demonstrated the promise of discursive frameworks for
deliberation about these complexities. However, these developments have been
grounded in a static ML paradigm, leaving the role of feedback and
post-deployment performance unexamined. Meanwhile, recent work in reinforcement
learning has shown that the effects of feedback and optimization objectives on
system behavior can be wide-ranging and unpredictable. In this paper we sketch
a framework for documenting deployed and iteratively updated learning systems,
which we call Reward Reports. Taking inspiration from various contributions to
the technical literature on reinforcement learning, we outline Reward Reports
as living documents that track updates to design choices and assumptions behind
what a particular automated system is optimizing for. They are intended to
track dynamic phenomena arising from system deployment, rather than merely
static properties of models or data. After presenting the elements of a Reward
Report, we discuss a concrete example: Meta's BlenderBot 3 chatbot. Several
others for game-playing (DeepMind's MuZero), content recommendation
(MovieLens), and traffic control (Project Flow) are included in the appendix.
|
2204.10817v3
|
2022-05-29
|
Generalized Stochastic Matching
|
In this paper, we generalize the recently studied Stochastic Matching problem
to more accurately model a significant medical process, kidney exchange, and
several other applications. Until now, the Stochastic Matching problem has been
studied as follows: given a graph G = (V, E), each edge is
included in the realized sub-graph of G mutually independently with probability
p_e, and the goal is to find a degree-bounded sub-graph Q of G that has an
expected maximum matching that approximates the expected maximum matching of
the realized sub-graph. This model does not account for the possibility of vertex
dropouts, which can be found in several applications, e.g. in kidney exchange
when donors or patients opt out of the exchange process as well as in online
freelancing and online dating when online profiles are found to be fake. Thus,
we will study a more generalized model of Stochastic Matching in which vertices
and edges are both realized independently with some probabilities p_v, p_e,
respectively, which more accurately fits important applications than the
previously studied model.
We will discuss the first algorithms and analysis for this generalization of
the Stochastic Matching model and prove that they achieve good approximation
ratios. In particular, we show that the approximation factor of a natural
algorithm for this problem is at least $0.6568$ in unweighted graphs, and $1/2
+ \epsilon$ in weighted graphs for some constant $\epsilon > 0$. We further
improve our result for unweighted graphs to $2/3$ using edge degree constrained
subgraphs (EDCS).
|
2205.14717v1
|
2022-07-25
|
Spin-transfer and spin-orbit torques in the Landau-Lifshitz-Gilbert equation
|
Dynamic simulations of spin-transfer and spin-orbit torques are increasingly
important for a wide range of spintronic devices including magnetic random
access memory, spin-torque nano-oscillators and electrical switching of
antiferromagnets. Here we present a computationally efficient method for the
implementation of spin-transfer and spin-orbit torques within the
Landau-Lifshitz-Gilbert equation used in micromagnetic and atomistic
simulations. We consolidate and simplify the varying terminology of different
kinds of torques into a physical action and physical origin that clearly shows
the common action of spin torques while separating their different physical
origins. Our formalism introduces the spin torque as an effective magnetic
field, greatly simplifying the numerical implementation and aiding the
interpretation of results. The strength of the effective spin torque field
unifies the action of the spin torque and subsumes the details of experimental
effects such as interface resistance and spin Hall angle into a simple
transferable number between numerical simulations. We present a series of
numerical tests demonstrating the mechanics of generalised spin torques in a
range of spintronic devices. This revised approach to modelling spin-torque
effects in numerical simulations enables faster simulations and a more direct
way of interpreting the results, and thus it is also suitable to be used in
direct comparisons with experimental measurements or in a modelling tool that
takes experimental values as input.
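The effective-field formulation described above can be sketched with a macrospin toy model: any spin-torque contribution is simply folded into the effective field H, and the magnetisation then evolves under the standard explicit LLG form dm/dt = -gamma/(1+alpha^2) [ m x H + alpha m x (m x H) ]. This is a generic illustration, not the authors' code; units and parameter values are arbitrary.

```python
# Macrospin LLG sketch with a torque folded into the effective field.
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def llg_rhs(m, H, gamma=1.0, alpha=0.1):
    """Explicit LLG right-hand side with Gilbert damping alpha."""
    pre = -gamma / (1.0 + alpha ** 2)
    mxH = cross(m, H)
    mxmxH = cross(m, mxH)
    return tuple(pre * (mxH[i] + alpha * mxmxH[i]) for i in range(3))

def step(m, H, dt):
    """One explicit Euler step followed by renormalisation to |m| = 1."""
    d = llg_rhs(m, H)
    m = tuple(m[i] + dt * d[i] for i in range(3))
    n = math.sqrt(sum(c * c for c in m))
    return tuple(c / n for c in m)

# Field along z; a spin-torque term would be added to H_eff here.
H_eff = (0.0, 0.0, 1.0)
m = (1.0, 0.0, 0.0)
for _ in range(20000):
    m = step(m, H_eff, 1e-3)
print(m)  # damping drives m toward the field direction (0, 0, 1)
```

Treating the torque as an extra effective-field term means the integrator above needs no structural change, which is the practical simplification the abstract emphasises.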
|
2207.12071v2
|
2022-08-03
|
On ergodic invariant measures for the stochastic Landau-Lifschitz-Gilbert equation in 1D
|
We establish existence of an ergodic invariant measure on
$H^1(D,\mathbb{R}^3)\cap L^2(D,\mathbb{S}^2)$ for the stochastic
Landau-Lifschitz-Gilbert equation on a bounded one dimensional interval $D$.
The conclusion is achieved by employing the classical Krylov-Bogoliubov
theorem. In contrast to other equations, verifying the hypothesis of the
Krylov-Bogoliubov theorem is not a standard procedure. We employ rough paths
theory to show that the semigroup associated to the equation has the Feller
property in $H^1(D,\mathbb{R}^3)\cap L^2(D,\mathbb{S}^2)$. It does not seem
possible to achieve the same conclusion by the classical Stratonovich calculus.
On the other hand, we employ the classical Stratonovich calculus to prove the
tightness hypothesis. The Krein-Milman theorem implies existence of an ergodic
invariant measure. In case of spatially constant noise, we show that there
exists a unique Gibbs invariant measure and we establish the qualitative
behaviour of the unique stationary solution. In the absence of the anisotropic
energy and for a spatially constant noise, we are able to provide a path-wise
long-time behaviour result: in particular, every solution synchronises with a
spherical Brownian motion and is recurrent for large times.
|
2208.02136v2
|
2023-01-25
|
The Benchmark M Dwarf Eclipsing Binary CM Draconis With TESS: Spots, Flares and Ultra-Precise Parameters
|
A gold standard for the study of M dwarfs is the eclipsing binary CM
Draconis. It is rare because it is bright ($J_{\rm mag}=8.5$) and contains twin
fully convective stars on an almost perfectly edge-on orbit. Both masses and
radii were previously measured to better than $1\%$ precision, amongst the best
known. We use 15 sectors of data from the Transiting Exoplanet Survey Satellite
(TESS) to show that CM Draconis is the gift that keeps on giving. Our paper has
three main components. First, we present updated parameters, with radii and
masses constrained to previously unheard-of precisions of $\approx 0.06\%$ and
$\approx 0.12\%$, respectively. Second, we discover strong and variable spot
modulation, suggestive of spot clustering and an activity cycle on the order of
$\approx 4$ years. Third, we discover 163 flares. We find a relationship
between the spot modulation and flare rate, with flares more likely to occur
when the stars appear brighter. This may be due to a positive correlation
between flares and the occurrence of bright spots (plages). The flare rate is
surprisingly not reduced during eclipse, but one flare may show evidence of
being occulted. We suggest the flares may be preferentially polar, which has
positive implications for the habitability of planets orbiting M dwarfs.
|
2301.10858v2
|
2023-02-23
|
Beyond Bias and Compliance: Towards Individual Agency and Plurality of Ethics in AI
|
AI ethics is an emerging field with multiple, competing narratives about how
to best solve the problem of building human values into machines. Two major
approaches are focused on bias and compliance, respectively. But neither of
these ideas fully encompasses ethics: using moral principles to decide how to
act in a particular situation. Our method posits that the way data is labeled
plays an essential role in the way AI behaves, and therefore in the ethics of
machines themselves. The argument combines a fundamental insight from ethics
(i.e. that ethics is about values) with our practical experience building and
scaling machine learning systems. We want to build AI that is actually ethical
by first addressing foundational concerns: how to build good systems, how to
define what is good in relation to system architecture, and who should provide
that definition.
Building ethical AI creates a foundation of trust between a company and the
users of that platform. But this trust is unjustified unless users experience
the direct value of ethical AI. Until users have real control over how
algorithms behave, something is missing in current AI solutions. This causes
massive distrust in AI, and apathy towards AI ethics solutions. The scope of
this paper is to propose an alternative path that allows for the plurality of
values and the freedom of individual expression. Both are essential for
realizing true moral character.
|
2302.12149v1
|
2023-04-03
|
Three-Dimensional Structure of Hybrid Magnetic Skyrmions Determined by Neutron Scattering
|
Magnetic skyrmions are topologically protected chiral spin textures which
present opportunities for next-generation magnetic data storage and logic
information technologies. The topology of these structures originates in the
geometric configuration of the magnetic spins - more generally described as the
structure. While the skyrmion structure is most often depicted using a 2D
projection of the three-dimensional structure, recent works have emphasized the
role of all three dimensions in determining the topology and their response to
external stimuli. In this work, grazing-incidence small-angle neutron
scattering and polarized neutron reflectometry are used to determine the
three-dimensional structure of hybrid skyrmions. The structure of the hybrid
skyrmions, which includes a combination of N\'eel-like and Bloch-like
components along their length, is expected to contribute significantly to their
notable stability, which persists even under ambient conditions. To interpret the neutron
scattering data, micromagnetic simulations of the hybrid skyrmions were
performed, and the corresponding diffraction patterns were determined using a
Born approximation transformation. The converged magnetic profile reveals the
magnetic structure along with the skyrmion depth profile, including the
thickness of the Bloch and N\'eel segments and the diameter of the core.
|
2304.01369v2
|
2023-05-18
|
Towards Intersectional Moderation: An Alternative Model of Moderation Built on Care and Power
|
Shortcomings of current models of moderation have driven policy makers,
scholars, and technologists to speculate about alternative models of content
moderation. While alternative models provide hope for the future of online
spaces, they can fail without proper scaffolding. Community moderators are
routinely confronted with similar issues and have therefore found creative ways
to navigate these challenges. Learning more about the decisions these
moderators make, the challenges they face, and where they are successful can
provide valuable insight into how to ensure alternative moderation models are
successful.
In this study, I perform a collaborative ethnography with moderators of
r/AskHistorians, a community that uses an alternative moderation model,
highlighting the importance of accounting for power in moderation. Drawing from
Black feminist theory, I call this "intersectional moderation." I focus on
three controversies emblematic of r/AskHistorians' alternative model of
moderation: a disagreement over a moderation decision; a collaboration to fight
racism on Reddit; and a period of intense turmoil and its impact on policy.
Through this evidence I show how volunteer moderators navigated multiple layers
of power through care work. To ensure the successful implementation of
intersectional moderation, I argue that designers should support
decision-making processes and policy makers should account for the impact of
the sociotechnical systems in which moderators work.
|
2305.11250v1
|
2023-06-08
|
Environmental Considerations in the age of Space Exploration: the Conservation and Protection of Non-Earth Environments
|
This document is an abbreviated version of the law review, led by Alexander
Q. Gilbert, entitled: "Major Federal Actions Significantly Affecting the
Quality of the Space Environment: Applying NEPA to Federal and Federally
Authorized Outer Space Activities." Here, we discuss the future of the space
environment, and how it is increasingly becoming a human environment with
regard to continued robotic and human presence in orbit, planned and proposed
robotic and human presence on bodies such as the Moon and Mars, planned space
mining projects, the increased use of low-Earth orbit for communications
satellites, and other human uses of space. As such, we must evaluate and
protect these environments just as we do on Earth. In order to prioritize
mitigating the threat of contamination, avoiding conflict, and promoting
sustainability in space, all to ensure that actors maintain equal and safe
access to space, we propose applying the National Environmental Policy Act, or
NEPA, to space missions. We put forward three examples of environmental best
practices for those involved in space missions to consider: adopting a
precautionary and communicative structure before, during, and after off-world
missions; preparing environmental impact statements; and maintaining transparency about
tools that may impact the environment (including radioisotope power sources,
plans in case of vehicle loss or loss of trajectory, and others). For
additional discussion related to potential space applications of NEPA, NEPA's
statutory text, and NEPA's relation to space law and judicial precedent for
space, we recommend reading the full law review.
|
2306.05594v1
|
2023-07-13
|
Accurate and efficient photo-eccentric transit modeling
|
A planet's orbital eccentricity is fundamental to understanding the present
dynamical state of a system and is a relic of its formation history. There is
high scientific value in measuring eccentricities of Kepler and TESS planets
given the sheer size of these samples and the diversity of their planetary
systems. However, Kepler and TESS lightcurves typically only permit robust
determinations of planet-to-star radius ratio $r$, orbital period $P$, and
transit mid-point $t_0$. Three other orbital properties, including impact
parameter $b$, eccentricity $e$, and argument of periastron $\omega$, are more
challenging to measure because they are all encoded in the lightcurve through
subtle effects on a single observable -- the transit duration $T_{14}$. In
Gilbert, MacDougall, & Petigura (2022), we showed that a five-parameter transit
description $\{P, t_0, r, b, T_{14}\}$ naturally yields unbiased measurements
of $r$ and $b$. Here, we build upon our previous work and introduce an accurate
and efficient prescription to measure $e$ and $\omega$. We validate this
approach through a suite of injection-and-recovery experiments. Our method
agrees with previous approaches that use a seven-parameter transit description
$\{P, t_0, r, b, \rho_\star, e, \omega\}$ which explicitly fits the
eccentricity vector and mean stellar density. The five-parameter method is
simpler than the seven-parameter method and is "future-proof" in that posterior
samples can be quickly reweighted (via importance sampling) to accommodate
updated priors and updated stellar properties. This method thus circumvents the
need for an expensive reanalysis of the raw photometry, offering a streamlined
path toward large-scale population analyses of eccentricity from transit
surveys.
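The reweighting step the abstract calls "future-proof" is the generic importance-sampling identity: posterior samples obtained under an old prior pi_old can be retargeted to a new prior pi_new with weights w_i proportional to pi_new(x_i)/pi_old(x_i), avoiding a full re-fit. A minimal sketch, with invented toy numbers standing in for a fitted parameter (e.g. a stellar property):

```python
# Generic prior-reweighting sketch; all numbers are illustrative.
import random
from statistics import NormalDist

random.seed(0)
# Stand-in "posterior" samples of some parameter, fitted under pi_old.
samples = [random.gauss(1.4, 0.3) for _ in range(20000)]

pi_old = NormalDist(1.4, 0.3)   # prior used in the original fit
pi_new = NormalDist(1.2, 0.1)   # updated prior (e.g. better stellar data)

w = [pi_new.pdf(x) / pi_old.pdf(x) for x in samples]
wsum = sum(w)
mean_new = sum(wi * xi for wi, xi in zip(w, samples)) / wsum
print(f"reweighted mean: {mean_new:.3f}")  # pulled toward the new prior
```

When the new prior is much narrower than the old one, the effective sample size shrinks, so in practice one would monitor the weight distribution before trusting the reweighted posterior.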
|
2307.07070v1
|
2023-09-01
|
A decoupled, convergent and fully linear algorithm for the Landau--Lifshitz--Gilbert equation with magnetoelastic effects
|
We consider the coupled system of the Landau--Lifshitz--Gilbert equation and
the conservation of linear momentum law to describe magnetic processes in
ferromagnetic materials including magnetoelastic effects in the small-strain
regime. For this nonlinear system of time-dependent partial differential
equations, we present a decoupled integrator based on first-order finite
elements in space and an implicit one-step method in time. We prove
unconditional convergence of the sequence of discrete approximations towards a
weak solution of the system as the mesh size and the time-step size go to zero.
Compared to previous numerical works on this problem, for our method, we prove
a discrete energy law that mimics that of the continuous problem and, passing
to the limit, yields an energy inequality satisfied by weak solutions.
Moreover, our method does not employ a nodal projection to impose the unit
length constraint on the discrete magnetisation, so that the stability of the
method does not require weakly acute meshes. Furthermore, our integrator and
its analysis hold for a more general setting, including body forces and
traction, as well as a more general representation of the magnetostrain.
Numerical experiments underpin the theory and showcase the applicability of the
scheme for the simulation of the dynamical processes involving magnetoelastic
materials at submicrometer length scales.
|
2309.00605v2
|
2023-11-09
|
Skyrmion-Excited Spin Wave Fractal Network
|
Magnetic skyrmions exhibit unique, technologically relevant pseudo-particle
behaviors which arise from their topological protection, including
well-defined, three-dimensional dynamic modes that occur at microwave
frequencies. During dynamic excitation, spin waves are ejected into the
interstitial regions between skyrmions, creating the magnetic equivalent of a
turbulent sea. However, since the spin waves in these systems have a
well-defined length scale, and the skyrmions are on an ordered lattice, ordered
structures from spin wave interference can precipitate from the chaos. This
work uses small angle neutron scattering (SANS) to capture the dynamics in
hybrid skyrmions and investigate the spin wave structure. Performing
simultaneous ferromagnetic resonance and SANS, the diffraction pattern shows a
large increase in low-angle scattering intensity which is present only in the
resonance condition. This scattering pattern is best fit using a mass fractal
model, which suggests the spin waves form a long-range fractal network. The
fractal structure is constructed of fundamental units whose size encodes
the spin wave emissions and which are constrained by the skyrmion lattice. These
results offer critical insights into the nanoscale dynamics of skyrmions,
identify a new dynamic spin wave fractal structure, and demonstrate SANS as a
unique tool to probe high-speed dynamics.
|
2311.05469v1
|
2023-12-08
|
Analysis of the magnetization control problem for the 2D evolutionary Landau-Lifshitz-Gilbert equation
|
The magnetization control problem for the Landau-Lifshitz-Gilbert (LLG)
equation $m_t= m \times (\Delta m +u)- m \times (m \times (\Delta m +u)),\
(x,t) \in \Omega\times (0,T] $ with zero Neumann boundary data on a
two-dimensional bounded domain $\Omega$ is studied when the control energy $u$
is applied on the effective field. First, we show the existence of a weak
solution, and the magnetization vector field $m$ satisfies an energy
inequality. If a weak solution $m$ obeys the condition that $\nabla m\in
L^4(0,T;L^4(\Omega)),$ then we show that it is a regular solution. The
classical cost functional is modified by incorporating
$L^4(0,T;L^4(\Omega))$-norm of $\nabla m$ so that a rigorous study of the
optimal control problem is established. Then, we justify the existence of an
optimal control and derive first-order necessary optimality conditions using
an adjoint problem approach. We establish the continuous dependency and
Fr\'echet differentiability of the control-to-state and control-to-costate
operators and show the Lipschitz continuity of their Fr\'echet derivatives.
Using these results, we derive a local second-order sufficient optimality
condition when a control belongs to a critical cone.
another remarkable global optimality condition posed only in terms of the
adjoint state associated with the control problem.
|
2312.05165v1
|
2024-01-05
|
Solutions to the Landau-Lifshitz-Gilbert equation in the frequency space: Discretization schemes for the dynamic-matrix approach
|
The dynamic matrix method addresses the Landau-Lifshitz-Gilbert (LLG)
equation in the frequency domain by transforming it into an eigenproblem.
Subsequent numerical solutions are derived from the eigenvalues and
eigenvectors of the dynamic matrix. In this work we explore discretization
methods needed to obtain a matrix representation of the dynamic operator, a
fundamental counterpart of the dynamic matrix. Our approach opens a new set of
linear algebra tools for the dynamic matrix method and exposes the
approximations and limitations intrinsic to it. Moreover, our discretization
algorithms can be applied to various discretization schemes, extending beyond
micromagnetism problems. We present some application examples, including a
technique to obtain the dynamic matrix directly from the magnetic free energy
function of an ensemble of macrospins, and an algorithmic method to calculate
numerical micromagnetic kernels, including plane wave kernels. We also show how
to exploit symmetries and reduce the numerical size of micromagnetic
dynamic-matrix problems by a change of basis. This procedure significantly
reduces the size of the dynamic matrix by several orders of magnitude while
maintaining high numerical precision. Additionally, we calculate analytical
approximations for the dispersion relations in magnonic crystals. This work
contributes to the understanding of the current magnetization dynamics methods,
and could help the development and formulation of novel analytical and
numerical methods for solving the LLG equation within the frequency domain.
|
2401.02933v2
|
1995-10-27
|
Radiation Damping and Quantum Excitation for Longitudinal Charged Particle Dynamics in the Thermal Wave Model
|
On the basis of the recently proposed {\it Thermal Wave Model (TWM) for
particle beams}, we give a description of the longitudinal charged particle
dynamics in circular accelerating machines by taking into account both
radiation damping and quantum excitation (stochastic effect), in the presence
of an RF potential well. The longitudinal dynamics is governed by a 1-D
Schr\"{o}dinger-like equation for a complex wave function whose squared modulus
gives the longitudinal bunch density profile. In this framework, the
appropriate {\it r.m.s. emittance} scaling law, due to the damping effect, is
naturally recovered, and the asymptotic equilibrium condition for the bunch
length, due to the competition between quantum excitation (QE) and radiation
damping (RD), is found. This result opens the possibility to apply the TWM,
already tested for protons, to electrons, for which QE and RD are very
important.
|
9510004v1
|
1994-02-04
|
Constraints on Models of Galaxy Formation from the Evolution of Damped Ly$α$ Absorption Systems
|
There is accumulating observational evidence suggesting that damped
Ly$\alpha$ absorption systems are the progenitors of present-day spiral
galaxies. We use the observed properties of these systems to place constraints
on the history of star formation in galactic disks, and on cosmological
theories of structure formation in the universe. We show that the observed
increase in $\Omega_{HI}$ contributed by damped Ly$\alpha$ systems at high
redshift implies that star formation must have been considerably less efficient
in the past. We also show that the data can constrain cosmological models in
which structure forms at late epochs. A mixed dark matter (MDM) model with
$\Omega_{\nu}=0.3$ is unable to reproduce the mass densities of cold gas seen
at high redshift, even in the absence of any star formation. We show that at
redshifts greater than 3, this model predicts that the total baryonic mass
contained in dark matter halos with circular velocities $V_c > 35$ km s$^{-1}$
is less than the observed mass of HI in damped systems. At these redshifts, the
photo-ionizing background would prevent gas from dissipating and collapsing to
form high column density systems in halos smaller than 35 km s$^{-1}$. MDM
models are thus ruled out by the observations.
|
9402015v1
|
1999-02-11
|
The HI Column Density Distribution Function at z=0: the Connection to Damped Ly alpha Statistics
|
We present a measurement of the HI column density distribution function,
f(N), at the present epoch for column densities log N > 20 cm^-2. These high
column densities compare to those measured in damped Ly alpha lines seen in
absorption against background quasars. Although observationally rare, it
appears that the bulk of the neutral gas in the Universe is associated with
these damped Ly alpha systems. In order to obtain a good anchor point at z=0 we
determine f(N) in the local Universe by using 21cm synthesis observations of a
complete sample of spiral galaxies. We show that f(N) for damped Ly alpha
systems has changed significantly from high z to the present and that change is
greatest for the highest column densities. The measurements indicate that low
surface brightness galaxies make a minor contribution to the cross section for
HI, especially for log N > 21 cm^-2.
|
9902171v1
|
2000-10-27
|
Planetary Torques as the Viscosity of Protoplanetary Disks
|
We revisit the idea that density-wave wakes of planets drive accretion in
protostellar disks. The effects of many small planets can be represented as a
viscosity if the wakes damp locally, but the viscosity is proportional to the
damping length. Damping occurs mainly by shocks even for earth-mass planets.
The excitation of the wake follows from standard linear theory including the
torque cutoff. We use this as input to an approximate but quantitative
nonlinear theory based on Burgers' equation for the subsequent propagation and
shock. Shock damping is indeed local but weakly so. If all metals in a
minimum-mass solar nebula are invested in planets of a few earth masses each,
dimensionless viscosities alpha of order dex(-4) to dex(-3) result. We
compare this with observational constraints. Such small planets would have
escaped detection in radial-velocity surveys and could be ubiquitous. If so,
then the similarity of the observed lifetime of T Tauri disks to the
theoretical timescale for assembling a rocky planet may be fate rather than
coincidence.
|
0010576v1
|
2000-12-27
|
Constraining Dark Matter candidates from structure formation
|
We show that collisional damping of adiabatic primordial fluctuations yields
constraints on the possible range of mass and interaction rates of Dark Matter
particles. Our analysis relies on a general classification of Dark Matter
candidates, that we establish independently of any specific particle theory or
model. From a relation between the collisional damping scale and the Dark
Matter interaction rate, we find that Dark Matter candidates must have
cross-sections at decoupling smaller than $ 10^{-33} \frac{m_{dm}}{1 MeV} cm^2$
with photons and $10^{-37} \frac{m_{dm}}{1 MeV} cm^2$ with neutrinos, to
explain the observed primordial structures of $10^9$ solar masses. These
damping constraints are particularly relevant for Warm Dark Matter candidates.
They also leave open less explored regions of parameter space corresponding to
particles having rather high interaction rates with species other than
neutrinos and photons.
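As an aside, the quoted bounds scale linearly with the candidate mass. A minimal sketch (the function names are ours, not the paper's) evaluating the two constraints:

```python
# Sketch (not the paper's code): the quoted upper bounds on the Dark Matter
# cross-section at decoupling scale linearly with the candidate mass m_dm
# expressed in MeV:
#   sigma_photon   < 1e-33 * (m_dm / 1 MeV) cm^2
#   sigma_neutrino < 1e-37 * (m_dm / 1 MeV) cm^2


def sigma_max_photon_cm2(m_dm_mev: float) -> float:
    """Upper bound on the DM-photon cross-section, in cm^2."""
    return 1e-33 * m_dm_mev


def sigma_max_neutrino_cm2(m_dm_mev: float) -> float:
    """Upper bound on the DM-neutrino cross-section, in cm^2."""
    return 1e-37 * m_dm_mev


if __name__ == "__main__":
    # A 1 GeV (= 1000 MeV) candidate, for example: bounds of ~1e-30 and
    # ~1e-34 cm^2 respectively.
    print(sigma_max_photon_cm2(1000.0))
    print(sigma_max_neutrino_cm2(1000.0))
```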
|
0012504v2
|
2001-07-26
|
The Contribution of HI-Rich Galaxies to the Damped Absorber Population at z=0
|
We present a study of HI-rich galaxies in the local universe selected from
blind emission-line surveys. These galaxies represent the emission-line
counterparts of local damped Lyman-alpha systems. We find that the HI
cross-section of galaxies is drawn from a large range of galaxy masses below
M_star; 66% of the area comes from galaxies in the range 8.5 < log M_star <
9.7. Both because of the low-mass galaxy contribution, and because of the range
of galaxy types and luminosities at any given HI mass, the galaxies
contributing to the HI cross-section are not exclusively L_star spirals, as is
often expected. The optical and near infrared counterparts of these galaxies
cover a range of types (from spirals to irregulars), luminosities (from L_star
to <0.01 L_star), and surface brightnesses. The range of optical and near
infrared properties as well as the kinematics for this population are
consistent with the properties for the low-z damped Lyman-alpha absorbers. We
also show that the number of HI-rich galaxies in the local universe does not
preclude evolution of the low-z damped absorber population, but it is
consistent with no evolution.
|
0107495v1
|
2003-11-17
|
Cosmic Ray Scattering by Compressible Magnetohydrodynamic Turbulence
|
Recent advances in understanding of magnetohydrodynamic (MHD) turbulence call
for substantial revisions in the picture of cosmic ray transport. In this paper
we use recently obtained scaling laws for MHD modes to calculate the scattering
frequency for cosmic rays in the ISM. We consider gyroresonance with MHD modes
(Alfvenic, slow and fast) and transit-time damping (TTD) by fast modes. We
provide calculations of cosmic ray scattering for various phases of the
interstellar medium with realistic interstellar turbulence driving that is
consistent with the velocity dispersions observed in diffuse gas. We account
for the turbulence cutoff arising from both collisional and collisionless
damping. We obtain analytical expressions for diffusion coefficients that enter
Fokker-Planck equation describing cosmic ray evolution. We calculate the
scattering rate and parallel spatial diffusion coefficients of cosmic rays for
both Alfvenic and fast modes. We conclude that fast modes provide the dominant
contribution to cosmic ray scattering for typical interstellar conditions, in
spite of the fact that fast modes are subject to damping. We show that the
efficiency of the scattering depends on the plasma beta since it determines the
damping of the fast modes. We also show that the streaming instability is
modified in the presence of turbulence.
|
0311369v1
|
2003-11-17
|
Wave damping by MHD turbulence and its effect upon cosmic ray propagation in the ISM
|
Cosmic rays scatter off magnetic irregularities (Alfven waves) with which
they are resonant, that is waves of wavelength comparable to their gyroradii.
These waves may be generated either by the cosmic rays themselves, if they
stream faster than the Alfven speed, or by sources of MHD turbulence. Waves
excited by streaming cosmic rays are ideally shaped for scattering, whereas the
scattering efficiency of MHD turbulence is severely diminished by its
anisotropy. We show that MHD turbulence has an indirect effect on cosmic ray
propagation by acting as a damping mechanism for cosmic ray generated waves.
The hot (``coronal'') phase of the interstellar medium is the best candidate
location for cosmic ray confinement by scattering from self-generated waves. We
relate the streaming velocity of cosmic rays to the rate of turbulent
dissipation in this medium, for the case in which turbulent damping is the
dominant damping mechanism. We conclude that cosmic rays with up to 10^2 GeV
could not stream much faster than the Alfven speed, but that 10^6 GeV cosmic
rays would stream unimpeded by self-generated waves unless the coronal gas were
remarkably turbulence-free.
|
0311400v1
|
2004-10-25
|
Constraints on Dark Matter interactions from structure formation: Damping lengths
|
(Shortened) Weakly Interacting Massive Particles are often said to be the
best Dark Matter candidates. Studies have shown however that rather large Dark
Matter-photon or Dark Matter-baryon interactions could be allowed by cosmology.
Here we address the question of the role of the Dark Matter interactions in
more detail to determine at which extent Dark Matter has to be necessarily
weakly interacting. To this purpose, we compute the collisional damping (and
free-streaming) lengths of generic interacting Dark Matter candidates and
compare them to the scale of the smallest primordial structures known to exist
in the Universe. We obtain necessary conditions that any candidate must
satisfy. We point out the existence of new Dark Matter scenarios and exhibit
new damping regimes. For example, an interacting candidate may exhibit damping
similar to that of collisionless Warm Dark Matter particles. The main
difference is due to the Dark Matter coupling to interacting (or even
freely-propagating) species. Our approach yields a general classification of
Dark Matter candidates which extends the definitions of the usual Cold, Warm
and Hot Dark Matter scenarios when interactions, weak or strong, are
considered.
|
0410591v1
|
2005-10-10
|
Collisional dissipation of Alfvén waves in a partially ionised solar chromosphere
|
Certain regions of the solar atmosphere are at sufficiently low temperatures
to be only partially ionised. The lower chromosphere contains neutral atoms,
the existence of which greatly increases the efficiency of the damping of waves
due to collisional friction momentum transfer. More specifically the Cowling
conductivity can be up to 12 orders of magnitude smaller than the Spitzer
value, so that the main damping mechanism in this region is due to the
collisions between neutrals and positive ions. Using values for the gas density
and temperature as functions of height taken from the VAL C model of the quiet
Sun, an estimate is made of the dependence of the Cowling conductivity on
height and magnetic field strength. Using both analytic and numerical
approaches the passage of Alfven waves over a wide spectrum through this
partially ionised region is investigated. Estimates of the efficiency of this
region in the damping of Alfven waves are made and compared for both
approaches. We find that Alfven waves with frequencies above 0.6 Hz are
completely damped, while frequencies below 0.01 Hz are unaffected.
|
0510265v1
|
2006-04-10
|
The Nearby Damped Lyman-alpha Absorber SBS 1543+593: A Large HI Envelope in a Gas-Rich Galaxy Group
|
We present a Very Large Array (VLA) HI 21cm map and optical observations of
the region around one of the nearest damped Lyman-alpha absorbers beyond the
local group, SBS 1543+593. Two previously uncataloged galaxies have been
discovered and a redshift has been determined for a third. All three of these
galaxies are at the redshift of SBS 1543+593 and are ~185 kpc from the damped
Lyman-alpha absorber. We discuss the HI and optical properties of SBS 1543+593
and its newly identified neighbors. Both SBS 1543+593 and Dwarf 1 have baryonic
components that are dominated by neutral gas -- unusual for damped Lyman-alpha
absorbers for which only ~5% of the HI cross-section originates in such
strongly gas-dominated systems. What remains unknown is whether low-mass
gas-rich groups are common around gas-rich galaxies in the local universe
and whether the low star-formation rate in these systems is indicative of a
young system or a stable, slowly evolving system. We discuss these evolutionary
scenarios and future prospects for answering these questions.
|
0604220v1
|
2006-08-02
|
SINS of Viscosity Damped Turbulence
|
The problems with explaining Small Ionized and Neutral Structures (SINS)
by appealing to turbulence stem from the inefficiency of the Kolmogorov
cascade in creating large fluctuations at sufficiently small scales. However,
other types of cascades are possible. When magnetic turbulence in a fluid with
viscosity much larger than resistivity reaches the viscous damping scale, the
turbulence does not vanish. Instead, it enters a new regime. Viscosity-damped
turbulence produces fluctuations on small scales. Magnetic fields sheared by
the motions of eddies not damped by viscosity create small-scale filaments
that are confined by the external plasma pressure. This creates small-scale
density fluctuations. In addition, extended current sheets create even
stronger density gradients that accompany field reversals in the plane
perpendicular to the mean magnetic field. Those can be responsible for SINS
formation. This scenario is applicable to partially ionized gas. More studies
of reconnection in the viscosity-dominated regime are necessary to better
understand the extent to which the magnetic reversals can compress the gas.
|
0608046v3
|
1998-01-13
|
Comparative Study of the Adiabatic Evolution of a Nonlinear Damped Oscillator and a Hamiltonian Generalized Nonlinear Oscillator
|
In this paper we study to what extent the canonical equivalence and the
identity of the geometric phases of dissipative and conservative linear
oscillators, established in a preceding paper, can be generalized to nonlinear
ones. Considering first the 1-D quartic generalized oscillator, we determine,
by means of a perturbative time-dependent technique of reduction to normal
forms, the canonical transformations which lead to the adiabatic invariant of
the system and to the first-order nonlinear correction to its Hannay angle.
Then, applying the same transformations to the 1-D quartic damped oscillator,
we show that this oscillator is canonically equivalent to the linear
generalized harmonic oscillator for finite values of the damping parameter
(which implies no correction to the linear Hannay angle) whereas, in an
appropriate weak damping limit, it becomes equivalent to the quartic
generalized oscillator (which implies a nonlinear correction to this angle).
|
9801017v1
|
1995-03-20
|
Quasiparticle damping in two-dimensional superconductors with unconventional pairing.
|
We calculate the damping of excitations due to four-fermionic interaction in
the case of two-dimensional superconductor with nodes in the spectrum. At zero
temperature and low frequencies it reveals gapless $\omega^3$ behavior at the
nodal points. With the frequency increasing the crossover to the normal-state
regimes appears. At high frequencies the damping strongly depends on details of
a normal-state spectrum parametrization. Two important particular cases such as
the models of almost free and tight-binding electrons are studied explicitly
and the characteristic scales are expressed through the model-free parameters
of the spectrum at the nodal points. The possibility of crossover in
temperature dependence of damping in the superconducting phase is discussed.
|
9503112v1
|
1996-01-09
|
Relaxation of Collective Excitations in LJ-13 Cluster
|
We have performed classical molecular dynamics simulation of $Ar_{13}$
cluster to study the behavior of collective excitations. In the solid ``phase''
of the cluster, the collective oscillation of the monopole mode can be well
fitted to a damped harmonic oscillator. The parameters of the equivalent
damped harmonic oscillator -- the damping coefficient, spring constant, time
period of oscillation and the mass of the oscillator -- all show a sharp
change in behavior at a kinetic temperature of about 7.0 K. This marks yet another
characteristic temperature of the system, a temperature $T_s$ below which
collective excitations are very stable, and at higher temperatures the single
particle excitations cause the damping of the collective oscillations. We argue
that so long as the cluster remains confined within the global potential energy
minimum the collective excitations do not decay; and once the cluster comes out
of this well, the local potential energy minima pockets act as single particle
excitation channels in destroying the collective motion. The effect is manifest
in almost all the physical observables of the cluster.
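The damped-harmonic-oscillator form fitted to the monopole mode can be sketched as follows; the parameter values are illustrative placeholders, not taken from the simulation:

```python
import math


def damped_oscillation(t, amplitude, gamma, omega, phase=0.0):
    """x(t) = A * exp(-gamma * t) * cos(omega * t + phase),
    the functional form fitted to the monopole-mode oscillation."""
    return amplitude * math.exp(-gamma * t) * math.cos(omega * t + phase)


# Illustrative parameters only: below the characteristic temperature T_s the
# damping coefficient gamma is small (stable collective motion); above T_s,
# single-particle excitation channels make gamma grow sharply.
x0 = damped_oscillation(0.0, amplitude=1.0, gamma=0.1, omega=2.0)
print(x0)  # 1.0 at t = 0, since exp(0) = cos(0) = 1
```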
|
9601026v2
|
1997-10-14
|
Damping of Hydrodynamic Modes in a Trapped Bose Gas above the Bose-Einstein Transition Temperature
|
We calculate the damping of low-lying collective modes of a trapped Bose gas
in the hydrodynamic regime, and show that this comes solely from the shear
viscosity, since the contributions from bulk viscosity and thermal conduction
vanish. The hydrodynamic expression for the damping diverges due to the failure
of hydrodynamics in the outer parts of the cloud, and we take this into account
by a physically motivated cutoff procedure. Our analysis of available
experimental data indicates that higher densities than have yet been achieved
are necessary for investigating hydrodynamic modes above the Bose-Einstein
transition temperature.
|
9710130v2
|
1997-12-24
|
Thermal dephasing and the echo effect in a confined Bose-Einstein condensate
|
It is shown that thermal fluctuations of the normal component induce
dephasing -- reversible damping of the low energy collective modes of a
confined Bose-Einstein condensate. The dephasing rate is calculated for the
isotropic oscillator trap, where Landau damping is expected to be suppressed.
This rate is characterized by a steep temperature dependence, and it is weakly
amplitude dependent.
In the limit of large numbers of bosons forming the condensate, the rate
approaches zero. However, for the numbers employed by the JILA group, the
calculated value of the rate is close to the experimental one. We suggest that
the reversible nature of the damping caused by thermal dephasing in the
isotropic trap can be tested by the echo effect. The reversible nature of
Landau damping is also discussed, and the possibility of observing the echo effect in an
anisotropic trap is considered as well. The parameters of the echo are
calculated in the weak echo limit for the isotropic trap. Results of the
numerical simulations of the echo are also presented.
|
9712287v1
|