publicationDate | title | abstract | id |
|---|---|---|---|
2021-05-05 | Elemental Abundances in M31: Gradients in the Giant Stellar Stream | We analyze existing measurements of [Fe/H] and [$\alpha$/Fe] for individual
red giant branch (RGB) stars in the Giant Stellar Stream (GSS) of M31 to
determine whether spatial abundance gradients are present. These measurements
were obtained from low- ($R \sim 3000$) and moderate- ($R \sim 6000$)
resolution Keck/DEIMOS spectroscopy using spectral synthesis techniques as part
of the Elemental Abundances in M31 survey. From a sample of 62 RGB stars
spanning the GSS at 17, 22, and 33 projected kpc, we measure a [Fe/H] gradient
of $-$0.018 $\pm$ 0.003 dex kpc$^{-1}$ and negligible [$\alpha$/Fe] gradient
with M31-centric radius. We investigate GSS abundance patterns in the outer
halo using additional [Fe/H] and [$\alpha$/Fe] measurements for 6 RGB stars
located along the stream at 45 and 58 projected kpc. These abundances provide
tentative evidence that the trends in [Fe/H] and [$\alpha$/Fe] beyond 40 kpc in
the GSS are consistent with those within 33 kpc. We also compare the GSS
abundances to 65 RGB stars located along the possibly related Southeast (SE)
shelf substructure at 12 and 18 projected kpc. The abundances of the GSS and SE
shelf are consistent, supporting a common origin hypothesis, although this
interpretation may be complicated by the presence of [Fe/H] gradients in the
GSS. We discuss the abundance patterns in the context of photometric studies
from the literature and explore implications for the properties of the GSS
progenitor, suggesting that the high $\langle$[$\alpha$/Fe]$\rangle$ of the GSS
(+0.40 $\pm$ 0.05 dex) favors a major merger scenario for its formation. | 2105.02339v1 |
2021-05-17 | A Unified Adaptive Recoding Framework for Batched Network Coding | Batched network coding is a variation of random linear network coding which
has low computational and storage costs. In order to adapt to random
fluctuations in the number of erasures in individual batches, it is not optimal
to recode and transmit the same number of packets for all batches. Different
distributed optimization models, which are called adaptive recoding schemes,
were formulated for this purpose. The key component of these optimization
problems is the expected value of the rank distribution of a batch at the next
network node, which is also known as the expected rank. In this paper, we put
forth a unified adaptive recoding framework with an arbitrary recoding field
size. We show that the expected rank functions are concave when the packet loss
pattern is a stationary stochastic process, which covers, but is not limited
to, independent packet loss and the Gilbert-Elliott packet loss model. Under this
concavity assumption, we show that there always exists a solution which not
only can minimize the randomness in the number of recoded packets but also can
tolerate rank distribution errors due to inaccurate measurements or limited
precision of the machine. We provide an algorithm to obtain such an optimal
solution, and propose tuning schemes that can turn any feasible
solution into a desired optimal solution. | 2105.07614v2 |
2021-05-21 | Hybrid Machine Learning for Scanning Near-field Optical Spectroscopy | The underlying physics behind an experimental observation often lacks a
simple analytical description. This is especially the case for scanning probe
microscopy techniques, where the interaction between the probe and the sample
is nontrivial. Realistic modeling that includes the details of the probe is always
exponentially more difficult than its "spherical cow" counterparts. On the
other hand, a well-trained artificial neural network based on real data can
grasp the hidden correlation between the signal and sample properties. In this
work, we show that, via a combination of model calculation and experimental
data acquisition, a physics-infused hybrid neural network can predict the
tip-sample interaction in the widely used scattering-type scanning near-field
optical microscope. This hybrid network provides a long-sought solution for
accurate extraction of material properties from tip-specific raw data. The
methodology can be extended to other scanning probe microscopy techniques as
well as other data-oriented physical problems in general. | 2105.10551v1 |
2021-05-26 | Contention Resolution with Predictions | In this paper, we consider contention resolution algorithms that are
augmented with predictions about the network. We begin by studying the natural
setup in which the algorithm is provided a distribution defined over the
possible network sizes that predicts the likelihood of each size occurring. The
goal is to leverage the predictive power of this distribution to improve on
worst-case time complexity bounds. Using a novel connection between contention
resolution and information theory, we prove lower bounds on the expected time
complexity with respect to the Shannon entropy of the corresponding network
size random variable, for both the collision detection and no collision
detection assumptions. We then analyze upper bounds for these settings,
assuming now that the distribution provided as input might differ from the
actual distribution generating network sizes. We express their performance with
respect to both entropy and the statistical divergence between the two
distributions -- allowing us to quantify the cost of poor predictions. Finally,
we turn our attention to the related perfect advice setting, parameterized with
a length $b\geq 0$, in which all active processes in a given execution are
provided the best possible $b$ bits of information about their network. We
provide tight bounds on the speed-up possible with respect to $b$ for
deterministic and randomized algorithms, with and without collision detection.
These bounds provide a fundamental limit on the maximum power that can be
provided by any predictive model with a bounded output size. | 2105.12706v1 |
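The lower bounds above are stated in terms of the Shannon entropy of the network-size random variable. As a minimal sketch of that quantity (the size distribution below is entirely hypothetical, not taken from the paper):

```python
# Shannon entropy of a predicted distribution over network sizes.
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a discrete distribution p (zero entries are ignored)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Hypothetical prediction: size 2^k with geometrically decaying probability.
sizes = [2 ** k for k in range(1, 9)]
probs = np.array([2.0 ** -k for k in range(1, 9)])
probs /= probs.sum()  # normalize
print(f"H(N) = {shannon_entropy(probs):.3f} bits over sizes {sizes}")
```

A sharply peaked prediction has low entropy, which is exactly the regime in which entropy-based bounds promise savings over worst-case guarantees.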
2021-05-27 | Balancing Static Vacuum Black Holes with Signed Masses in 4 and 5 Dimensions | We construct a new set of asymptotically flat, static vacuum solutions to the
Einstein equations in dimensions 4 and 5, which may be interpreted as a
superposition of positive and negative mass black holes. The resulting
spacetimes are axisymmetric in 4-dimensions and bi-axisymmetric in
5-dimensions, and are regular away from the negative mass singularities; for
instance, conical singularities are absent along the axes. In 5-dimensions, the
topologies of signed mass black holes used in the construction may be either
spheres $S^3$ or rings $S^1 \times S^2$; in particular, the negative mass
static black ring solution is introduced. A primary observation that
facilitates the superposition is the fact that, in Weyl-Papapetrou coordinates,
negative mass singularities arise as overlapping singular support for a
particular type of Green's function. Furthermore, a careful analysis of conical
singularities along axes is performed, and formulas are obtained for their
propagation across horizons, negative mass singularities, and corners. The
methods are robust, and may be used to construct a multitude of further
examples. Lastly, we show that balancing does not occur between any two signed
mass black holes of the type studied here in 4 dimensions, while in 5
dimensions two-body balancing is possible. | 2105.13260v2 |
2021-06-11 | Inference for treatment-specific survival curves using machine learning | In the absence of data from a randomized trial, researchers often aim to use
observational data to draw causal inference about the effect of a treatment on
a time-to-event outcome. In this context, interest often focuses on the
treatment-specific survival curves; that is, the survival curves were the
entire population under study to be assigned to receive the treatment or not.
Under certain causal conditions, including that all confounders of the
treatment-outcome relationship are observed, the treatment-specific survival
curve can be identified with a covariate-adjusted survival function. Several
estimators of this function have been proposed, including estimators based on
outcome regression, inverse probability weighting, and doubly robust
estimators. In this article, we propose a new cross-fitted doubly-robust
estimator that incorporates data-adaptive (e.g. machine learning) estimators of
the conditional survival functions. We establish conditions on the nuisance
estimators under which our estimator is consistent and asymptotically linear,
both pointwise and uniformly in time. We also propose a novel ensemble learner
for combining multiple candidate estimators of the conditional survival
functions. Notably, our methods and results accommodate events occurring in
discrete or continuous time (or both). We investigate the practical performance
of our methods using numerical studies and an application to the effect of a
surgical treatment to prevent metastases of parotid carcinoma on mortality. | 2106.06602v1 |
2021-06-10 | Hard Choices in Artificial Intelligence | As AI systems are integrated into high stakes social domains, researchers now
examine how to design and operate them in a safe and ethical manner. However,
the criteria for identifying and diagnosing safety risks in complex social
contexts remain unclear and contested. In this paper, we examine the vagueness
in debates about the safety and ethical behavior of AI systems. We show how
this vagueness cannot be resolved through mathematical formalism alone, instead
requiring deliberation about the politics of development as well as the context
of deployment. Drawing from a new sociotechnical lexicon, we redefine vagueness
in terms of distinct design challenges at key stages in AI system development.
The resulting framework of Hard Choices in Artificial Intelligence (HCAI)
empowers developers by 1) identifying points of overlap between design
decisions and major sociotechnical challenges; 2) motivating the creation of
stakeholder feedback channels so that safety issues can be exhaustively
addressed. As such, HCAI contributes to a timely debate about the status of AI
development in democratic societies, arguing that deliberation should be the
goal of AI Safety, not just the procedure by which it is ensured. | 2106.11022v1 |
2021-06-30 | A long-period substellar object exhibiting a single transit in Kepler | We report the detection of a single transit-like signal in the Kepler data of
the slightly evolved F star KIC4918810. The transit duration is ~45 hours, and
while the orbital period ($P\sim10$ years) is not well constrained, it is one
of the longest among companions known to transit. We calculate the size of the
transiting object to be $R_P = 0.910$ $R_J$. Objects of this size vary by
orders of magnitude in their densities, encompassing masses between that of
Saturn ($0.3$ $M_J$) and stars above the hydrogen-burning limit (~80 $M_J$).
Radial-velocity observations reveal that the companion is unlikely to be a
star. The mass posterior is bimodal, indicating a mass of either ~0.24 $M_J$ or
~26 $M_J$. Continued spectroscopic monitoring should either constrain the mass
to be planetary or detect the orbital motion, the latter of which would yield a
benchmark long-period brown dwarf with a measured mass, radius, and age. | 2107.00027v1 |
2021-08-03 | Comparative study of magnetic properties of Mn$^{3+}$ magnetic clusters in GaN using classical and quantum mechanical approaches | Currently, simulations of many-body quantum systems are known to be
computationally too demanding to be solved on classical computers. The main
problem is that the computation time and memory necessary for performing the
calculations usually grow exponentially with the number of particles $N$. An
efficient approach to simulate many-body quantum systems is the use of
classical approximation. However, it is known that at least at low
temperatures, the allowed spin fluctuations in this approach are overestimated,
which results in enhanced thermal fluctuations. It is therefore timely and
important to assess the validity of the classical approximation. To this end,
in this work, we compare the results of numerical calculations of small
Mn$^{3+}$ magnetic clusters in GaN, where the Mn spins are treated classically
with those where they are treated quantum-mechanically (crystal field model).
In the first case, we solve the Landau-Lifshitz-Gilbert (LLG) equation that
describes the precessional dynamics of spins represented by classical vectors.
On the other hand, in the crystal field model, the state of Mn$^{3+}$ ion
($d^4$ configuration with $S=2$, $L=2$) is characterized by the set of orbital
and spin quantum numbers $|m_S, m_L\rangle$. Particular attention is paid to using
numerical parameters that ensure the same single-ion magnetic anisotropy in
both the classical and quantum approximations. Finally, a detailed comparative study
of magnetization $\mathbf{M}(\mathbf{H}, T)$ as a function of the magnetic
field $\mathbf{H}$, temperature $T$, number of ions in a given cluster $N$ and
the strength of super-exchange interaction $J$, obtained from both approaches
will be presented. | 2108.01474v1 |
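For intuition about the classical side of this comparison, the following is a minimal sketch of single-spin Landau-Lifshitz-Gilbert dynamics: dimensionless units, explicit Euler with renormalization, and an illustrative easy-axis anisotropy and field. It is not the authors' simulation code.

```python
# Toy LLG integrator for one classical spin with uniaxial anisotropy along z.
import numpy as np

def llg_step(m, h_eff, alpha=0.1, dt=1e-3):
    """One step of dm/dt = -(m x h + alpha * m x (m x h)) / (1 + alpha^2)."""
    mxh = np.cross(m, h_eff)
    dmdt = -(mxh + alpha * np.cross(m, mxh)) / (1.0 + alpha ** 2)
    m = m + dt * dmdt
    return m / np.linalg.norm(m)  # renormalize to keep |m| = 1

m = np.array([np.sin(0.3), 0.0, np.cos(0.3)])  # spin tilted from the easy axis
K = 1.0                                        # anisotropy constant (illustrative)
H = np.array([0.0, 0.0, 0.5])                  # applied field (illustrative)
for _ in range(20_000):
    h_eff = H + 2.0 * K * m[2] * np.array([0.0, 0.0, 1.0])  # Zeeman + anisotropy
    m = llg_step(m, h_eff)
print("relaxed moment:", m)  # Gilbert damping drives m toward the easy axis
```

The compactness of this classical description, compared with diagonalizing a crystal-field Hamiltonian over $|m_S, m_L\rangle$ states, is precisely the computational appeal weighed against accuracy in the abstract above.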
2021-08-06 | Performance trade-offs in cyber-physical control applications with multi-connectivity | Modern communication devices are often equipped with multiple wireless
communication interfaces with diverse characteristics. This enables exploiting
a form of multi-connectivity known as interface diversity to provide path
diversity with multiple communication interfaces. Interface diversity helps to
combat the problems suffered by single-interface systems due to error bursts in
the link, which are a consequence of temporal correlation in the wireless
channel. The length of an error burst is an essential performance indicator for
cyber-physical control applications with periodic traffic, as it defines the
period in which the control link is unavailable. However, the available
interfaces must be correctly orchestrated to achieve an adequate trade-off
between latency, reliability, and energy consumption. This work investigates
how the packet error statistics from different interfaces impact the overall
latency-reliability characteristics and explores mechanisms to derive adequate
interface diversity policies. For this, we model the optimization problem as a
partially observable Markov Decision Process (POMDP), where the state of each
interface is determined by a Gilbert-Elliott model whose parameters are
estimated based on experimental measurement traces from LTE and Wi-Fi. Our
results show that the POMDP approach provides an all-round adaptable solution,
whose performance is only 0.1% below the absolute upper bound, dictated by the
optimal policy under the impractical assumption of full observability. | 2108.03035v1 |
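For readers unfamiliar with the Gilbert-Elliott model referenced above, here is a minimal sketch of its two-state (good/bad) Markov loss process. The transition and loss probabilities are illustrative placeholders, not the values estimated from the paper's LTE and Wi-Fi traces.

```python
# Two-state Gilbert-Elliott packet-loss model: bursty losses from a hidden
# Markov chain that alternates between a "good" and a "bad" channel state.
import numpy as np

def gilbert_elliott_trace(n, p_gb=0.02, p_bg=0.3, e_good=0.01, e_bad=0.5, seed=0):
    """Return a boolean loss trace of length n (True = packet lost)."""
    rng = np.random.default_rng(seed)
    state_bad = False
    losses = np.empty(n, dtype=bool)
    for i in range(n):
        losses[i] = rng.random() < (e_bad if state_bad else e_good)
        # Markov transition between the good and bad states.
        state_bad = (rng.random() >= p_bg) if state_bad else (rng.random() < p_gb)
    return losses

trace = gilbert_elliott_trace(100_000)
print(f"average loss rate: {trace.mean():.3f}")  # losses arrive in bursts
```

The temporal correlation produced by this chain is what makes error-burst length, rather than the average loss rate alone, the relevant performance indicator for periodic control traffic.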
2021-08-16 | $Q$-ary non-overlapping codes: a generating function approach | Non-overlapping codes are a set of codewords in $\bigcup_{n \ge 2}
\mathbb{Z}_q^n$, where $\mathbb{Z}_q = \{0,1,\dots,q-1\}$, such that the
prefix of each codeword is not a suffix of any codeword in the set, including
itself; and for variable-length codes, a codeword does not contain any other
codeword as a subword. In this paper, we investigate a generic method to
generalize binary codes to $q$-ary for $q > 2$, and analyze this generalization
on the two constructions given by Levenshtein (also by Gilbert; Chee, Kiah,
Purkayastha, and Wang) and Bilotta, respectively. The generalization on the
former construction gives large non-expandable fixed-length non-overlapping
codes whose size can be explicitly determined; the generalization on the latter
construction is the first attempt to generate $q$-ary variable-length
non-overlapping codes. More importantly, this generic method allows us to
utilize the generating function approach to analyze the cardinality of the
underlying $q$-ary non-overlapping codes. The generating function approach not
only enables us to derive new results, e.g., recurrence relations on their
cardinalities, new combinatorial interpretations for the constructions, and the
limit superior of their cardinalities for some special cases, but also greatly
simplifies the arguments for these results. Furthermore, we give an exact
formula for the number of fixed-length words that do not contain the codewords
in a variable-length non-overlapping code as subwords. This thereby solves an
open problem by Bilotta and induces a recursive upper bound on the maximum size
of variable-length non-overlapping codes. | 2108.06934v1 |
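The defining property above is easy to check by brute force. A minimal sketch follows; the example codewords are illustrative and not drawn from the constructions analyzed in the paper.

```python
# Brute-force check of the non-overlapping property for a set of q-ary
# codewords given as strings over {0, ..., q-1}.
def is_non_overlapping(code):
    code = list(code)
    for u in code:
        for v in code:
            # No nonempty proper prefix of u may equal a suffix of v
            # (including the case v == u).
            if any(u[:k] == v[-k:] for k in range(1, min(len(u), len(v)))):
                return False
            # Variable-length condition: no codeword is a subword of another.
            if u != v and u in v:
                return False
    return True

print(is_non_overlapping({"10", "20"}))   # True: a tiny ternary (q = 3) example
print(is_non_overlapping({"10", "110"}))  # False: "10" occurs inside "110"
```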
2021-08-17 | Searching For or Reviewing Evidence Improves Crowdworkers' Misinformation Judgments and Reduces Partisan Bias | Can crowd workers be trusted to judge whether news-like articles circulating
on the Internet are misleading, or do partisanship and inexperience get in
the way? And can the task be structured in a way that reduces partisanship? We
assembled pools of both liberal and conservative crowd raters and tested three
ways of asking them to make judgments about 374 articles. In a no research
condition, they were just asked to view the article and then render a judgment.
In an individual research condition, they were also asked to search for
corroborating evidence and provide a link to the best evidence they found. In a
collective research condition, they were not asked to search, but instead to
review links collected from workers in the individual research condition. Both
research conditions reduced partisan disagreement in judgments. The individual
research condition was most effective at producing alignment with journalists'
assessments. In this condition, the judgments of a panel of sixteen or more
crowd workers were better than those of a panel of three expert journalists, as
measured by alignment with a held out journalist's ratings. | 2108.07898v3 |
2021-08-23 | The Multiverse: Logical Modularity for Proof Assistants | Proof assistants play a dual role as programming languages and logical
systems. As programming languages, proof assistants offer standard modularity
mechanisms such as first-class functions, type polymorphism and modules. As
logical systems, however, modularity is lacking, and understandably so:
incompatible reasoning principles -- such as univalence and uniqueness of
identity proofs -- can indirectly lead to logical inconsistency when used in a
given development, even when they appear to be confined to different modules.
The lack of logical modularity in proof assistants also hinders the adoption of
richer programming constructs, such as effects. We propose the multiverse, a
general type-theoretic approach to endow proof assistants with logical
modularity. The multiverse consists of multiple universe hierarchies that
statically describe the reasoning principles and effects available to define a
term at a given type. We identify sufficient conditions for this structuring to
modularly ensure that incompatible principles do not interfere, and to locally
restrict the power of dependent elimination when necessary. This extensible
approach generalizes the ad-hoc treatment of the sort of propositions in the
Coq proof assistant. We illustrate the power of the multiverse by describing
the inclusion of Coq-style propositions, the strict propositions of Gilbert et
al., the exceptional type theory of P\'edrot and Tabareau, and general
axiomatic extensions of the logic. | 2108.10259v1 |
2021-08-27 | Distributed Control and Optimization of DC Microgrids: A Port-Hamiltonian Approach | This article proposes a distributed secondary control scheme that drives a dc
microgrid to an equilibrium point where the generators share optimal currents,
and their voltages have a weighted average equal to the nominal value. The scheme does
not rely on the electric system topology or its specifications; it guarantees
plug-and-play design and functionality of the generators. First, the
incremental model of the microgrid system with constant impedance, current, and
power devices is shown to admit a port-Hamiltonian (pH) representation, and its
passive output is determined. The economic dispatch problem is then solved by
the Lagrange multipliers method; the Karush-Kuhn-Tucker conditions and weighted
average formation of voltages are then formulated as the control objectives. We
propose a control scheme that is based on the Control by Interconnection design
philosophy, where the consensus-based controller is viewed as a virtual pH
system to be interconnected with the physical one. We prove the regional
asymptotic stability of the closed-loop system using Lyapunov and LaSalle
theorems. Equilibrium analysis is also conducted based on the concepts of graph
theory and economic dispatch. Finally, the effectiveness of the presented
scheme for different case studies is validated with a test microgrid system,
simulated in both MATLAB/Simulink and OPAL-RT environments. | 2108.12341v1 |
2021-10-23 | Bootstrap percolation in random geometric graphs | Following Bradonji\'c and Saniee, we study a model of bootstrap percolation
on the Gilbert random geometric graph on the $2$-dimensional torus. In this
model, the expected number of vertices of the graph is $n$, and the expected
degree of a vertex is $a\log n$ for some fixed $a>1$. Each vertex is added with
probability $p$ to a set $A_0$ of initially infected vertices. Vertices
subsequently become infected if they have at least $ \theta a \log n $ infected
neighbours. Here $p, \theta \in [0,1]$ are taken to be fixed constants.
We show that if $\theta < (1+p)/2$, then a sufficiently large local outbreak
leads with high probability to the infection spreading globally, with all but
$o(n)$ vertices eventually becoming infected. On the other hand, for $ \theta >
(1+p)/2$, even if one adversarially infects every vertex inside a ball of
radius $O(\sqrt{\log n} )$, with high probability the infection will spread to
only $o(n)$ vertices beyond those that were initially infected.
In addition we give some bounds on the $(a, p, \theta)$ regions ensuring the
emergence of large local outbreaks or the existence of islands of vertices that
never become infected. We also give a complete picture of the (surprisingly
complex) behaviour of the analogous $1$-dimensional bootstrap percolation model
on the circle. Finally we raise a number of problems, and in particular make a
conjecture on an `almost no percolation or almost full percolation' dichotomy
which may be of independent interest. | 2110.12166v1 |
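A toy reproduction of this process (not the authors' code) can be run with scipy's periodic KD-tree standing in for the 2-dimensional torus; the parameters below are illustrative, with theta chosen below the (1+p)/2 threshold so the infection should spread.

```python
# Bootstrap percolation on a Gilbert random geometric graph on the unit torus.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
n, a, p, theta = 20_000, 2.0, 0.1, 0.2        # illustrative parameters
N = rng.poisson(n)                            # Poisson number of vertices
pts = rng.random((N, 2))
r = np.sqrt(a * np.log(n) / (np.pi * n))      # radius giving expected degree a*log(n)
tree = cKDTree(pts, boxsize=1.0)              # boxsize=1.0 -> periodic torus metric
nbrs = tree.query_ball_point(pts, r)
threshold = theta * a * np.log(n)             # infection threshold per vertex

infected = rng.random(N) < p                  # initial seed set A_0
changed = True
while changed:                                # synchronous update rounds
    counts = np.array([infected[nb].sum() for nb in nbrs])
    newly = (~infected) & (counts >= threshold)
    changed = bool(newly.any())
    infected |= newly
print(f"final infected fraction: {infected.mean():.3f}")
```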
2021-11-02 | Orbital Dynamics and the Evolution of Planetary Habitability in the AU Mic System | The diversity of planetary systems that have been discovered are revealing
the plethora of possible architectures, providing insights into planet
formation and evolution. They also increase our understanding of system
parameters that may affect planetary habitability, and how such conditions are
influenced by initial conditions. The AU~Mic system is unique among known
planetary systems in that it is a nearby, young, multi-planet transiting
system. Such a young and well characterized system provides an opportunity to
study orbital dynamical and habitability studies for planets in the very early
stages of their evolution. Here, we calculate the evolution of the Habitable
Zone of the system through time, including the pre-main sequence phase that the
system currently resides in. We discuss the planetary atmospheric processes
occurring for an Earth-mass planet during this transitionary period, and
provide calculations of the climate state convergence age for both volatile
rich and poor initial conditions. We present results of an orbital dynamical
analysis of the AU~Mic system that demonstrate the rapid eccentricity evolution
of the known planets, and show that terrestrial planets within the Habitable
Zone of the system can retain long-term stability. Finally, we discuss
follow-up observation prospects, detectability of possible Habitable Zone
planets, and how the AU Mic system may be used as a template for studies of
planetary habitability evolution. | 2111.01816v1 |
2021-11-17 | Privacy-preserving Federated Learning for Residential Short Term Load Forecasting | With high levels of intermittent power generation and dynamic demand
patterns, accurate forecasts for residential loads have become essential. Smart
meters can play an important role when making these forecasts as they provide
detailed load data. However, using smart meter data for load forecasting is
challenging due to data privacy requirements. This paper investigates how these
requirements can be addressed through a combination of federated learning and
privacy preserving techniques such as differential privacy and secure
aggregation. For our analysis, we employ a large set of residential load data
and simulate how different federated learning models and privacy preserving
techniques affect performance and privacy. Our simulations reveal that
combining federated learning and privacy preserving techniques can secure both
high forecasting accuracy and near-complete privacy. Specifically, we find that
such combinations enable a high level of information sharing while ensuring
privacy of both the processed load data and forecasting models. Moreover, we
identify and discuss challenges of applying federated learning, differential
privacy and secure aggregation for residential short-term load forecasting. | 2111.09248v4 |
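As a rough illustration of how these pieces combine, the sketch below runs federated averaging over synthetic per-client linear forecasting models, with update clipping and Gaussian noise in the spirit of differential privacy. The model, clip norm, and noise scale are placeholder choices, not the paper's configuration.

```python
# Federated averaging with clipped, noised client updates (simplified DP flavor).
import numpy as np

rng = np.random.default_rng(0)

def local_update(w_global, X, y, lr=0.01, epochs=5):
    """A few local least-squares gradient steps; returns the model delta."""
    w = w_global.copy()
    for _ in range(epochs):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w - w_global

def dp_fedavg_round(w_global, clients, clip=1.0, noise_mult=0.5):
    deltas = []
    for X, y in clients:
        d = local_update(w_global, X, y)
        d *= min(1.0, clip / (np.linalg.norm(d) + 1e-12))   # clip update norm
        deltas.append(d)
    avg = np.mean(deltas, axis=0)
    # Gaussian noise scaled to the clipped sensitivity (illustrative only).
    avg += rng.normal(0.0, noise_mult * clip / len(clients), size=avg.shape)
    return w_global + avg

d = 8
w_true = rng.normal(size=d)
clients = [(X, X @ w_true + 0.1 * rng.normal(size=50))
           for X in (rng.normal(size=(50, d)) for _ in range(20))]  # 20 "households"

w = np.zeros(d)
for _ in range(30):
    w = dp_fedavg_round(w, clients)
print("parameter error:", np.linalg.norm(w - w_true))
```

The server never sees raw load data, only clipped and noised model updates; that is the basic accuracy/privacy trade-off the paper's simulations quantify.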
2021-11-30 | The AiiDA-Spirit plugin for automated spin-dynamics simulations and multi-scale modelling based on first-principles calculations | Landau-Lifshitz-Gilbert (LLG) spin-dynamics calculations based on the
extended Heisenberg Hamiltonian are an important tool in computational materials
science involving magnetic materials. LLG simulations make it possible to bridge the gap
from expensive quantum mechanical calculations with small unit cells to large
supercells where the collective behavior of millions of spins can be studied.
In this work we present the AiiDA-Spirit plugin that connects the spin-dynamics
code Spirit to the AiiDA framework. AiiDA provides a Python interface that
facilitates performing high-throughput calculations while automatically
augmenting the calculations with metadata describing the data provenance
between calculations in a directed acyclic graph. The AiiDA-Spirit interface
thus provides an easy way to perform high-throughput spin-dynamics calculations. The
interface to the AiiDA infrastructure furthermore has the advantage that input
parameters for the extended Heisenberg model can be extracted from
high-throughput first-principles calculations including a proper treatment of
the data provenance that ensures reproducibility of the calculation results in
accordance with the FAIR principles. We describe the layout of the AiiDA-Spirit
plugin and demonstrate its capabilities using selected examples for LLG
spin-dynamics and Monte Carlo calculations. Furthermore, the integration with
first-principles calculations through AiiDA is demonstrated using the example of
$\gamma$-Fe, where the complex spin-spiral ground state is investigated. | 2111.15229v1 |
2021-12-10 | A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions | In a world of daily emerging scientific inquisition and discovery, the
prolific launch of machine learning across industries comes as little surprise
to those familiar with the potential of ML. Nor should the congruent
expansion of ethics-focused research that emerged as a response to issues of
bias and unfairness that stemmed from those very same applications. Fairness
research, which focuses on techniques to combat algorithmic bias, is now more
supported than ever before. A large portion of fairness research has gone to
producing tools that machine learning practitioners can use to audit for bias
while designing their algorithms. Nonetheless, there is a lack of application
of these fairness solutions in practice. This systematic review provides an
in-depth summary of the algorithmic bias issues that have been defined and the
fairness solution space that has been proposed. Moreover, this review provides
an in-depth breakdown of the caveats to the solution space that have arisen
since their release and a taxonomy of needs that have been proposed by machine
learning practitioners, fairness researchers, and institutional stakeholders.
These needs have been organized and addressed to the parties most influential
to their implementation, which includes fairness researchers, organizations
that produce ML algorithms, and the machine learning practitioners themselves.
These findings can be used in the future to bridge the gap between
practitioners and fairness experts and inform the creation of usable fair ML
toolkits. | 2112.05700v1 |
2021-12-12 | Effect of Topological Non-hexagonal Rings and Stone-Wales Defects on the Vibrational Response of Single and Multi-Layer Ion Irradiated Graphene | The present study explores the observation of topological non-hexagonal rings
(NHR) and Stone-Wales (SW) defects by Raman experiments in both single- (SLG) and
multi-layer graphene (MLG) after irradiation with 100-300 eV Ar ions.
Although predicted by theoretical studies, here it is experimentally shown for
the first time that graphene SW/NHR defects have a signature in Raman. Broad
bandwidth of the pertinent Raman features suggests the presence of more than
one SW/NHR defect mode, in agreement with the DFT studies. Variations in the
SW/NHR related Raman mode intensities demonstrate the annihilation of these
topological defects at higher energies. Behavior of Raman allowed G and 2D
excitations, as well as the disorder-activated D, D' and G* lines, has also
been investigated in SLG and MLG. These indicate an evolution of defects in
graphene with ion irradiation, as well as presence of a transition state beyond
which the Raman modes are dominated by a rise in sp3 content. Correlation of
these aspects with the SW/NHR Raman provides significant insight into
ion-induced evolution of graphene. The direct observation of SW/NHR defects by
Raman spectroscopy could be important in promoting exploration of rich
topological aspects of Graphene in various fields. | 2112.06294v1 |
2021-12-16 | Minimal blowing pressure allowing periodic oscillations in a model of bass brass instruments | In this study, an acoustic resonator -- a bass brass instrument -- with
multiple resonances coupled to an exciter -- the player's lips -- with one
resonance is modelled by a multidimensional dynamical system, and studied using
a continuation and bifurcation software. Bifurcation diagrams are explored with
respect to the blowing pressure, in particular with focus on the minimal
blowing pressure allowing stable periodic oscillations and the associated
frequency.The behaviour of the instrument is first studied close to a (non
oscillating) equilibrium using linear stability analysis. This allows to
determine the conditions at which an equilibrium destabilises and as such where
oscillating regimes can emerge (corresponding to a sound production). This
approach is useful to characterise the ease of playing of a brass instrument,
which is assumed here to be related -- as a first approximation -- to the
linear threshold pressure. In particular, the lower the threshold pressure, the
lower the physical effort the player has to make to play a note [Campbell et
al., 2021]. Cases are highlighted where periodic solutions in the bifurcation
diagrams are reached for blowing pressures below the value given by the linear
stability analysis. Thus, bifurcation diagrams allow a more in-depth analysis.
Particular attention is devoted to the first playing regime of bass brass
instruments (the pedal note and the ghost note of a tuba in particular), whose
behaviour qualitatively differs from a trombone to a euphonium for instance. | 2112.08751v2 |
2021-12-20 | Refined modelling of the radio SZ signal: kinematic terms, relativistic temperature corrections and anisotropies in the radio background | A significant cosmological radio background will inevitably lead to a radio
Sunyaev-Zeldovich (SZ) effect. In the simplest limit, the combined signal from
the scattered radio and cosmic microwave background exhibits a null at around
$\nu \simeq 735$ MHz. Here, we show that kinematic and relativistic temperature
corrections to this radio SZ signal are easily calculable. We treat both the
cluster and observer motion, and the scattering of anisotropies in the radio
background, highlighting how the spectrum of the radio SZ effect is affected in
each case. Although relativistic temperature corrections only enter at the
level of a few percent, our expressions allow high-precision modelling of these
terms. By measuring the SZ signal around the radio null, one is in principle
able to place constraints on the properties of a cosmological radio background.
A combination with standard SZ measurements from large cluster samples could
provide a promising avenue towards breaking degeneracies between different
contributions. Stacking analyses can reduce the effect of kinematic corrections
and dipolar anisotropies in the radio background, thereby providing a way to
constrain the redshift dependence of the average radio background. Our
qualitative discussion is meant to give an analytic understanding of the
various effects and also motivate further studies with the aim to obtain
quantitative forecasts of their observability. At this stage, a detection of
the corrections seems rather futuristic, but the advent of large SZ and X-ray
cluster samples could drastically improve our ability to disentangle various
effects. | 2112.10666v2 |
2021-12-22 | Conductive and convective heat transfer in inductive heating of subsea buried pipelines | Inductive heating with high-voltage cables reduces the risk of hydrate
formation by raising the temperature of the production fluid in pipelines.
Heating the pipeline results in losing a certain fraction of the heat to the
surrounding soil through conduction or convection-dominated flow through the
soil. However, the amount of heat lost in conduction versus convection and the
transition from conduction to convection-dominated heat loss remains unknown.
Soil permeability, temperature gradient between cable and mudline, and burial
depth influence the mode of heat transfer and the amount of heat lost. We study
the dominant mode of heat transfer in pipelines with inductive heating using 2D
Finite Difference analysis under different soil and environmental conditions.
Low permeability soils primarily exhibit conductive heat transfer, thus losing
minimum heat to the surrounding soil. In contrast, convective flow drives a
significant fraction of the heat away from the pipeline and towards the ground
surface for highly permeable soils, barely heating the fluid in the pipe. We
identify a critical Rayleigh-Darcy number of 1 as the controlling value
separating conduction and convection-dominated heat transfer. An increase in
burial depth deteriorates the heating efficiency in convection-dominated high
permeability soils, while it remains unaffected in conduction-dominated low
permeability soils. | 2112.11826v1 |
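The controlling dimensionless group can be evaluated directly. A minimal sketch using the standard porous-medium definition Ra = ρ g β ΔT K H / (μ α); the property values are generic seawater/soil placeholders rather than the paper's cases.

```python
# Rayleigh-Darcy number for buoyancy-driven flow in a porous medium.
def rayleigh_darcy(K, dT, H, rho=1025.0, g=9.81, beta=2.1e-4,
                   mu=1.0e-3, alpha=1.4e-7):
    """K: permeability [m^2], dT: cable-to-mudline temperature rise [K],
    H: burial depth [m]; fluid/thermal properties are placeholder values."""
    return rho * g * beta * dT * K * H / (mu * alpha)

for K in (1e-14, 1e-12, 1e-10):  # low- to high-permeability soils
    Ra = rayleigh_darcy(K=K, dT=30.0, H=1.5)
    regime = "convection" if Ra > 1.0 else "conduction"
    print(f"K = {K:.0e} m^2 -> Ra = {Ra:.3g} ({regime}-dominated)")
```

With these placeholder values the critical Ra = 1 is crossed between K = 10^-12 and 10^-10 m^2, mirroring the qualitative permeability dependence described above.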
2021-12-28 | Phonon, Electron, and Magnon Excitations in Antiferromagnetic L1$_{0}$-type MnPt | Antiferromagnetic L1$_{0}$-type MnPt is a material with relatively simple
crystal and magnetic structure, recently attracting interest due to its high
N{\'{e}}el temperature and wide usage as a pinning layer in magnetic devices.
While it is experimentally well characterized, the theoretical understanding is
much less developed, in part due to the challenging accuracy requirements
dictated by the small underlying energy scales that govern magnetic ordering in
antiferromagnetic metals. In this work, we use density functional theory, the
Korringa-Kohn-Rostoker formalism, and a Heisenberg model to establish a
comprehensive theoretical description of antiferromagnetic L1$_{0}$-type MnPt,
along with accuracy limits, by thoroughly comparing to available literature
data. Our simulations show that the contribution of the magnetic dipole
interaction to the magnetocrystalline anisotropy energy of $K_{1}$=1.07$\times
10^{6}$\,J/m$^3$ is comparable in magnitude to the spin-orbit contribution.
Using our result for the magnetic susceptibility of $5.25\times10^{-4}$, a
lowest magnon frequency of about 2.02\,THz is predicted, confirming THz spin
dynamics in this material. From our data for electron, phonon, and magnon
dispersion we compute the individual contributions to the total heat capacity
and show that the dominant term at or above 2\,K arises from phonons. From the
Landau-Lifshitz-Gilbert equation, we compute a N\'{e}el temperature of
990--1070 K. Finally, we quantify the magnitude of the magneto-optical Kerr
effect generated by applying an external magnetic field. Our results provide
insight into the underlying physics, which is critical for a deep understanding
of fundamental limits of the time scale of spin dynamics, stability of the
magnetic ordering, and the possibility of magneto-optical detection of
collective spin motion. | 2112.13954v1 |
2022-01-22 | Estimation and Hypothesis Testing of Strain-Specific Vaccine Efficacy with Missing Strain Types, with Applications to a COVID-19 Vaccine Trial | Statistical methods are developed for analysis of clinical and virus genetics
data from phase 3 randomized, placebo-controlled trials of vaccines against
novel coronavirus COVID-19. Vaccine efficacy (VE) of a vaccine to prevent
COVID-19 caused by one of finitely many genetic strains of SARS-CoV-2 may vary
by strain. The problem of assessing differential VE by viral genetics can be
formulated under a competing risks model where the endpoint is virologically
confirmed COVID-19 and the cause-of-failure is the infecting SARS-CoV-2
genotype. Strain-specific VE is defined as one minus the cause-specific hazard
ratio (vaccine/placebo). For the COVID-19 VE trials, the time to COVID-19 is
right-censored, and a substantial percentage of failure cases are missing the
infecting virus genotype. We develop estimation and hypothesis testing
procedures for strain-specific VE when the failure time is subject to right
censoring and the cause-of-failure is subject to missingness, focusing on $J
\ge 2$ discrete categorical unordered or ordered virus genotypes. The
stratified Cox proportional hazards model is used to relate the cause-specific
outcomes to explanatory variables. The inverse probability weighted
complete-case (IPW) estimator and the augmented inverse probability weighted
complete-case (AIPW) estimator are investigated. Hypothesis tests are developed
to assess whether the vaccine provides at least a specified level of efficacy
against some viral genotypes and whether VE varies across genotypes, adjusting
for covariates. The finite-sample properties of the proposed tests are studied
through simulations and are shown to have good performance. In preparation for
the real data analyses, the developed methods are applied to a pseudo dataset
mimicking the Moderna COVE trial. | 2201.08946v1 |
2022-01-30 | OverChain: Building a robust overlay with a blockchain | Blockchains use peer-to-peer networks for disseminating information among
peers, but these networks currently do not have any provable guarantees for
desirable properties such as Byzantine fault tolerance, good connectivity and
small diameter. This is not just a theoretical problem, as recent works have
exploited unsafe peer connection policies and weak network synchronization to
mount partitioning attacks on Bitcoin. Cryptocurrency blockchains are safety
critical systems, so we need principled algorithms to maintain their networks.
Our key insight is that we can leverage the blockchain itself to share
information among the peers, and thus simplify the network maintenance process.
Given that the peers have restricted computational resources, and at most a
constant fraction of them are Byzantine, we provide communication-efficient
protocols to maintain a hypercubic network for blockchains, where peers can
join and leave over time. Interestingly, we discover that our design can
\emph{recover} from substantial adversarial failures. Moreover, these
properties hold despite significant churn.
A key contribution is a secure mechanism for joining the network that uses
the blockchain to help new peers to contact existing peers. Furthermore, by
examining how peers join the network, i.e., the "bootstrapping service," we
give a lower bound showing that (within log factors) our network tolerates the
maximum churn rate possible. In fact, we can give a lower bound on churn for
any fully distributed service that requires connectivity. | 2201.12809v1 |
2022-02-04 | Three-axis torque investigation of interfacial exchange coupling in a NiFe/CoO bilayer micromagnetic disk | Micrometer diameter bilayers of NiFe (permalloy, Py) and cobalt oxide (CoO)
deposited on nanomechanical resonators were used to investigate exchange bias
effects. The mechanical compliances of two resonator axes were enhanced by
severing one torsion arm, resulting in a unique three-axis resonator that
responds resonantly to torques generated by a three-axis RF field. Our
technique permits simultaneous measurement of three orthogonal torque
components. Measurements of the anisotropies associated with interfacial
exchange coupling effects have been made. At cryogenic temperatures,
observations of shifted linear hysteresis loops confirmed the presence of
exchange bias from the Py/CoO interface. An in-plane rotating DC bias field was
used to probe in-plane anisotropies through the out-of-plane torque. Training
effects in the rotational hysteresis data were observed and showed that
features due to interfacial coupling did not diminish irrespective of
substantial training of the unidirectional anisotropy. The data from the
rotational hysteresis loops were fit with parameters from a macrospin solution
to the Landau-Lifshitz-Gilbert equation. Each parameter of the exchange bias
model accounts for specific features of the rotational loop. | 2202.02386v1 |
2022-02-11 | Choices, Risks, and Reward Reports: Charting Public Policy for Reinforcement Learning Systems | In the long term, reinforcement learning (RL) is considered by many AI
theorists to be the most promising path to artificial general intelligence.
This places RL practitioners in a position to design systems that have never
existed before and lack prior documentation in law and policy. Public agencies
could intervene on complex dynamics that were previously too opaque to
deliberate about, and long-held policy ambitions would finally be made
tractable. In this whitepaper we illustrate this potential and how it might be
technically enacted in the domains of energy infrastructure, social media
recommender systems, and transportation. Alongside these unprecedented
interventions come new forms of risk that exacerbate the harms already
generated by standard machine learning tools. We correspondingly present a new
typology of risks arising from RL design choices, falling under four
categories: scoping the horizon, defining rewards, pruning information, and
training multiple agents. Rather than allowing RL systems to unilaterally
reshape human domains, policymakers need new mechanisms for the rule of reason,
foreseeability, and interoperability that match the risks these systems pose.
We argue that criteria for these choices may be drawn from emerging subfields
within antitrust, tort, and administrative law. It will then be possible for
courts, federal and state agencies, and non-governmental organizations to play
more active roles in RL specification and evaluation. Building on the "model
cards" and "datasheets" frameworks proposed by Mitchell et al. and Gebru et
al., we argue the need for Reward Reports for AI systems. Reward Reports are
living documents for proposed RL deployments that demarcate design choices. | 2202.05716v1 |
2022-02-22 | Entropy-driven order in an array of nanomagnets | Long-range ordering is typically associated with a decrease in entropy. Yet,
it can also be driven by increasing entropy in certain special cases. We
demonstrate that artificial spin ice arrays of single-domain nanomagnets can be
designed to produce entropy-driven order. We focus on the tetris artificial
spin ice structure, a highly frustrated array geometry with a zero-point Pauli
entropy, which is formed by selectively creating regular vacancies on the
canonical square ice lattice. We probe thermally active tetris artificial spin
ice both experimentally and through simulations, measuring the magnetic moments
of the individual nanomagnets. We find two-dimensional magnetic ordering in one
subset of these moments, which we demonstrate to be induced by disorder (i.e.,
increased entropy) in another subset of the moments. In contrast with other
entropy-driven systems, the discrete degrees of freedom in tetris artificial
spin ice are binary and are both designable and directly observable at the
microscale, and the entropy of the system is precisely calculable in
simulations. This example, in which the system's interactions and ground state
entropy are well-defined, expands the experimental landscape for the study of
entropy-driven ordering. | 2202.11010v1 |
2022-03-30 | Kinematics and Metallicity of Red Giant Branch Stars in the Northeast Shelf of M31 | We obtained Keck/DEIMOS spectra of 556 individual red giant branch stars in 4
spectroscopic fields spanning $13-31$ projected kpc along the Northeast (NE)
shelf of M31. We present the first detection of a complete wedge pattern in the
space of projected M31-centric radial distance versus line-of-sight velocity
for this feature, which includes the returning stream component of the shelf.
This wedge pattern agrees with expectations of a tidal shell formed in a radial
merger and provides strong evidence in favor of predictions of Giant Stellar
Stream (GSS) formation models in which the NE shelf originates from the second
orbital wrap of the tidal debris. The observed concentric wedge patterns of the
NE, West (W), and Southeast (SE) shelves corroborate this interpretation
independently of the models. We do not detect a kinematical signature in the NE
shelf region corresponding to an intact progenitor core, favoring GSS formation
models in which the progenitor is completely disrupted. The shelf's photometric
metallicity distribution implies that it is dominated by tidal material, as
opposed to the phase-mixed stellar halo or the disk. The metallicity
distribution ([Fe/H]$_{\rm phot}$ = $-0.42$ $\pm$ $0.01$) also matches the GSS,
and consequently the W and SE shelves, further supporting a direct physical
association between the tidal features. | 2203.16675v1 |
2022-04-06 | Stability and Safety through Event-Triggered Intermittent Control with Application to Spacecraft Orbit Stabilization | In systems where the ability to actuate is a scarce resource, e.g.,
spacecrafts, it is desirable to only apply a given controller in an
intermittent manner--with periods where the controller is on and periods where
it is off. Motivated by the event-triggered control paradigm, where
state-dependent triggers are utilized in a sample-and-hold context, we
generalize this concept to include state triggers where the controller is off
thereby creating a framework for intermittent control. Our approach utilizes
certificates--either Lyapunov or barrier functions--to design intermittent
trigger laws that guarantee stability or safety; the controller is turned on
for the period during which it is beneficial with regard to the certificate, and
turned off until a performance threshold is reached. The main result of this
paper is that the intermittent controller scheme guarantees (set) stability
when Lyapunov functions are utilized, and safety (forward set invariance) in
the setting of barrier functions. As a result, our trigger designs can leverage
the intermittent nature of the actuator, and at the same time, achieve the task
of stabilization or safety. We further demonstrate the application and benefits
of intermittent control in the context of the spacecraft orbit stabilization
problem. | 2204.03110v1 |
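The trigger logic itself is compact. Below is a toy sketch on a scalar unstable system (not the paper's spacecraft model): the controller switches on when a Lyapunov function V(x) = x^2/2 exceeds an upper trigger and off once V is driven below a performance threshold, giving intermittent actuation with a bounded state.

```python
# Lyapunov-certificate-based intermittent control of dx/dt = 0.5*x + u.
def simulate(x0=2.0, dt=1e-3, T=20.0, V_on=1.0, V_off=0.1):
    x, on, on_steps = x0, False, 0
    steps = int(T / dt)
    for _ in range(steps):
        V = 0.5 * x * x
        if not on and V >= V_on:
            on = True                # certificate degraded: turn controller on
        elif on and V <= V_off:
            on = False               # performance threshold met: turn it off
        u = -3.0 * x if on else 0.0  # simple stabilizing feedback when on
        x += dt * (0.5 * x + u)      # forward-Euler step of the dynamics
        on_steps += on
    return x, on_steps / steps

x_final, duty = simulate()
print(f"final state {x_final:+.3f}, actuator duty cycle {duty:.1%}")
```

The state settles into the band defined by the two thresholds while the actuator is active only a small fraction of the time, which is the resource-saving behaviour the paper formalizes with Lyapunov and barrier certificates.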
2022-04-19 | Higher-order modulations in the skyrmion-lattice phase of Cu$_2$OSeO$_3$ | Using small angle neutron scattering, we have investigated higher-order peaks
in the skyrmion-lattice phase of Cu$_2$OSeO$_3$, in which two different
skyrmion lattices, SkX1 and SkX2, are known to form. For each skyrmion-lattice
phase, we observed two sets of symmetrically inequivalent peaks at the
higher-order-reflection positions with the indices $(110)$ and $(200)$. Under
the condition where the SkX1 and SkX2 coexist, we confirmed the absence of the
scattering at $\mathbf{Q}$ positions combining reflections from the two phases,
indicating a significantly weak double-scattering component. Detailed analysis
of the peak profile, as well as the temperature and magnetic-field dependence
of the peak intensity, also supports the intrinsic higher-order modulation
rather than the parasitic double scattering. The two higher-order modulations
show contrasting magnetic-field dependence; the former $(110)$ increases as the
field is increased, whereas the latter $(200)$ decreases. This indicates that,
in Cu$_2$OSeO$_3$, skyrmions are weakly distorted, and the distortion is
field-dependent in a way that the dominant higher-order modulation switches
from $(110)$ to $(200)$ under field. Monte Carlo simulations under sweeping
external magnetic field qualitatively reproduce the observed magnetic-field
dependence, and suggest that the higher-order modulations correspond to the
superlattices of weak swirlings appearing in the middle of the original
triangular-latticed skyrmions. | 2204.08614v1 |
2022-04-19 | Emu: A Case Study for TDI-like Imaging for Infrared Observation from Space | A wide-field zenith-looking telescope operating in a mode similar to
Time-Delay-Integration (TDI) or drift scan imaging can perform an infrared sky
survey without active pointing control, but it requires a high-speed, low-noise
infrared detector. Operating from a hosted payload platform on the
International Space Station (ISS), the Emu space telescope employs the
paradigm-changing properties of the Leonardo SAPHIRA electron avalanche
photodiode array to provide powerful new observations of cool stars at the
critical water absorption wavelength (1.4 $\mu$m) largely inaccessible to
ground-based telescopes due to the Earth's own atmosphere. Cool stars,
especially those of spectral-type M, are important probes across contemporary
astrophysics, from the formation history of the Galaxy to the formation of
rocky exoplanets. Main sequence M-dwarf stars are the most abundant stars in
the Galaxy and evolved M-giant stars are some of the most distant stars that
can be individually observed. The Emu sky survey will deliver critical stellar
properties of these cool stars by inferring oxygen abundances via measurement
of the water absorption band strength at 1.4 $\mu$m. Here we present the
TDI-like imaging capability of Emu mission, its science objectives, instrument
details and simulation results. | 2204.08713v2 |
2022-05-05 | Photon emissivity of the quark-gluon plasma: a lattice QCD analysis of the transverse channel | We present results for the thermal photon emissivity of the quark-gluon
plasma derived from spatially transverse vector correlators computed in lattice
QCD at a temperature of 250 MeV. The analysis of the spectral functions,
performed at fixed spatial momentum, is based on continuum-extrapolated
correlators obtained with two flavours of dynamical Wilson fermions. We compare
the next-to-leading order perturbative QCD correlators, as well as the ${\cal
N}=4$ supersymmetric Yang-Mills correlators at infinite coupling, to the
correlators from lattice QCD and find them to lie within $\sim10\%$ of each
other. We then refine the comparison, performing it at the level of filtered
spectral functions obtained model-independently via the Backus-Gilbert method.
Motivated by these studies, for frequencies $\omega\lesssim2.5\,$GeV we use fit
ans\"atze to the spectral functions that perform well when applied to mock data
generated from the NLO QCD or from the strongly-coupled SYM spectral functions,
while the high-frequency part, $\omega\gtrsim 2.5\,$GeV, is matched to NLO QCD.
We compare our results for the photon emissivity to our previous analysis of a
different vector channel at the same temperature. We obtain the most stringent
constraint at photon momenta around $k\simeq0.8\,$GeV, for which we find a
differential photon emission rate per unit volume of $d\Gamma_\gamma/d^3k =
(\alpha_{\rm em}/(\exp(k/T)-1))\times (2.2 \pm 0.8 ) \times 10^{-3}\,{\rm
GeV}$. | 2205.02821v1 |
2022-05-17 | Highlighting relations between Wave-particle duality, Uncertainty principle, Phase space and Microstates | Wave-particle duality is often considered as the modern answer to the problem
of the nature of light after more than 2000 years of questioning. It is also
the answer given by quantum physics concerning the nature of matter particles
and any other radiations. The main objective of this work is to analyze the
relations that exist between this concept of wave-particle duality, the
uncertainty principle and the concepts of phase space and microstates
considered in statistical mechanics. It is mainly highlighted that while the
concepts of phase space and microstates were already introduced in classical
physics before the discovery of the wave-particle duality, a correct
understanding of them cannot be achieved without the use of the concept of
quantum phase space and phase space representation of quantum mechanics which
are directly related to the uncertainty principle. The possibility of using
these concepts of quantum phase space and phase space representations of
quantum mechanics to help in a deeper description of the wave-particle duality
and in the study of some current issues related to foundational problems of
quantum mechanics like quantum decoherence and the measurement problem is also
discussed. | 2205.08538v4 |
2022-05-26 | New Explicit Good Linear Sum-Rank-Metric Codes | Sum-rank-metric codes have wide applications in universal error correction,
multishot network coding, space-time coding and the construction of partial-MDS
codes for repair in distributed storage. Fundamental properties of
sum-rank-metric codes have been studied and some explicit or probabilistic
constructions of good sum-rank-metric codes have been proposed. In this paper
we give three simple constructions of explicit linear sum-rank-metric codes. In
the finite-length regime, numerous larger linear sum-rank-metric codes with the
same minimum sum-rank distances as the previous constructed codes can be
derived from our constructions. For example several better linear
sum-rank-metric codes over ${\bf F}_q$ with small block sizes and the matrix
size $2 \times 2$ are constructed for $q=2, 3, 4$ by applying our construction
to the presently known best linear codes. Asymptotically our constructed
sum-rank-metric codes are close to the Gilbert-Varshamov-like bound on
sum-rank-metric codes for some parameters. Finally we construct a linear MSRD
code over an arbitrary finite field ${\bf F}_q$ with various square matrix
sizes $n_1, n_2, \ldots, n_t$ satisfying $n_i \geq n_{i+1}^2+\cdots+n_t^2$ ,
$i=1, 2, \ldots, t-1$, for any given minimum sum-rank distance. There is no
restriction on the block lengths $t$ and parameters $N=n_1+\cdots+n_t$ of these
linear MSRD codes from the sizes of the fields ${\bf F}_q$. | 2205.13087v8 |
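For intuition about the Gilbert-Varshamov-like benchmark mentioned above, here is a minimal sketch of the classical Hamming-metric GV bound; the sum-rank-metric version replaces Hamming balls with sum-rank balls, so this is an analogy only.

```python
# Classical Gilbert-Varshamov bound: a q-ary code of length n and minimum
# distance d with at least q^n / V_q(n, d-1) codewords always exists, where
# V_q(n, r) is the volume of a Hamming ball of radius r.
from math import comb

def hamming_ball_volume(q, n, r):
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def gv_lower_bound(q, n, d):
    """Code size guaranteed to exist by the Gilbert-Varshamov argument."""
    vol = hamming_ball_volume(q, n, d - 1)
    return -(-(q ** n) // vol)  # ceil(q^n / volume)

print(gv_lower_bound(q=2, n=10, d=3))   # a binary code at least this large exists
print(gv_lower_bound(q=4, n=12, d=5))
```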
2022-06-17 | Multi-scale Super-resolution Magnetic Resonance Spectroscopic Imaging with Adjustable Sharpness | Magnetic Resonance Spectroscopic Imaging (MRSI) is a valuable tool for
studying metabolic activities in the human body, but the current applications
are limited to low spatial resolutions. The existing deep learning-based MRSI
super-resolution methods require training a separate network for each upscaling
factor, which is time-consuming and memory inefficient. We tackle this
multi-scale super-resolution problem using a Filter Scaling strategy that
modulates the convolution filters based on the upscaling factor, such that a
single network can be used for various upscaling factors. Observing that each
metabolite has distinct spatial characteristics, we also modulate the network
based on the specific metabolite. Furthermore, our network is conditioned on
the weight of adversarial loss so that the perceptual sharpness of the
super-resolved metabolic maps can be adjusted within a single network. We
incorporate these network conditionings using a novel Multi-Conditional Module.
The experiments were carried out on a 1H-MRSI dataset from 15 high-grade glioma
patients. Results indicate that the proposed network achieves the best
performance among several multi-scale super-resolution methods and can provide
super-resolved metabolic maps with adjustable sharpness. | 2206.08984v1 |
2022-06-20 | How to Assess Trustworthy AI in Practice | This report is a methodological reflection on
Z-Inspection$^{\small{\circledR}}$. Z-Inspection$^{\small{\circledR}}$ is a
holistic process used to evaluate the trustworthiness of AI-based technologies
at different stages of the AI lifecycle. It focuses, in particular, on the
identification and discussion of ethical issues and tensions through the
elaboration of socio-technical scenarios. It uses the European Union's
High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI. This
report
illustrates for both AI researchers and AI practitioners how the EU HLEG
guidelines for trustworthy AI can be applied in practice. We share the lessons
learned from conducting a series of independent assessments to evaluate the
trustworthiness of AI systems in healthcare. We also share key recommendations
and practical suggestions on how to ensure a rigorous trustworthy AI assessment
throughout the lifecycle of an AI system. | 2206.09887v2 |
2022-06-23 | LRPC codes with multiple syndromes: near ideal-size KEMs without ideals | We introduce a new rank-based key encapsulation mechanism (KEM) with public
key and ciphertext sizes around 3.5 Kbytes each, for 128 bits of security,
without using ideal structures. Such structures make it possible to compress
objects, but they give reductions to specific problems whose security is
potentially weaker than that of unstructured problems. To the best of our
knowledge, our scheme improves in size upon all existing unstructured
post-quantum lattice- or code-based algorithms such as FrodoKEM or Classic
McEliece. Our technique, whose efficiency relies on properties of the rank
metric, is to build upon existing Low Rank Parity Check (LRPC) code-based KEMs
and to send multiple syndromes in one ciphertext, allowing us to reduce the
parameters while still obtaining an acceptable
decoding failure rate. Our system relies on the hardness of the Rank Support
Learning problem, a well-known variant of the Rank Syndrome Decoding problem.
The gain in parameters is enough to significantly close the gap between ideal
and non-ideal constructions. It enables us to choose an error weight close to
the rank Gilbert-Varshamov bound, a relatively harder zone for algebraic
attacks. We also give a version of our KEM that keeps an ideal structure and
roughly halves the bandwidth compared to previous versions of LRPC KEMs
submitted to the NIST with a Decoding Failure Rate (DFR) of
$2^{-128}$. | 2206.11961v1 |
2022-07-08 | Rate-Optimal Streaming Codes Over the Three-Node Decode-And-Forward Relay Network | In this paper, we study the three-node Decode-and-Forward (D&F) relay network
subject to random and burst packet erasures. The source wishes to transmit an
infinite stream of packets to the destination via the relay. The three-node D&F
relay network is constrained by a decoding delay of T packets, i.e., the packet
transmitted by the source at time i must be decoded by the destination by time
i+T. For the individual channels from source to relay and relay to destination,
we assume a delay-constrained sliding-window (DCSW) based packet-erasure model
that can be viewed as a tractable approximation to the commonly-accepted
Gilbert-Elliott channel model. Under this model, any time window of width w
contains either at most a random erasures or else an erasure burst of length at
most b (b >= a). Thus the source-relay and relay-destination channels are
modeled as (a_1, b_1, w_1, T_1) and (a_2, b_2, w_2, T_2) DCSW channels. We
first derive an upper bound on the capacity of the three-node D&F relay
network. We then show that the upper bound is tight for the parameter regime
max{b_1, b_2} | (T - b_1 - b_2 - max{a_1, a_2} + 1), with a_1 = a_2 or
b_1 = b_2, by constructing streaming
codes achieving the bound. The code construction requires field size linear in
T, and has decoding complexity equivalent to that of decoding an MDS code. | 2207.04025v2 |
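
As one reading of the DCSW erasure model above, the following Python sketch checks whether a given erasure pattern is admissible for an (a, b, w) channel; the function name and example patterns are illustrative only:

```python
def dcsw_admissible(erasures, a, b, w):
    """erasures: boolean list, True = packet erased. Every sliding window of
    width w must contain either at most `a` (possibly scattered) erasures,
    or a single contiguous erasure burst of length at most `b`."""
    n = len(erasures)
    for start in range(n - w + 1):
        window = erasures[start:start + w]
        positions = [i for i, e in enumerate(window) if e]
        if len(positions) <= a:
            continue
        # Otherwise the erasures must form one contiguous burst of length <= b.
        burst_len = positions[-1] - positions[0] + 1
        if burst_len != len(positions) or burst_len > b:
            return False
    return True

# A burst of 3 is fine for (a=1, b=3, w=5); two scattered erasures are not.
print(dcsw_admissible([True, True, True, False, False, False], a=1, b=3, w=5))  # True
print(dcsw_admissible([True, False, True, False, False, False], a=1, b=3, w=5)) # False
```
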
2022-07-12 | Diversity of ghost notes in tubas, euphoniums and saxhorns | The ghost note is a natural note which can be played exclusively on bass
brass instruments with a predominantly-expanding bore profile such as tubas,
euphoniums or saxhorns. It stands between the pedal note (the lowest natural
note playable, or first regime) and the instrument's second regime. However,
while the interval between the pedal note and the second regime remains close
to an octave regardless of the instrument, the interval between the pedal note
and the ghost note varies from a minor third to a perfect fourth. References
about this note are very scarce, and it is not commonly known among tuba
players. This study shows that an elementary brass model describing the player
coupled to the
instrument is capable of bringing both the ghost and the pedal note to light.
Here, we adopt a dynamical systems point of view and perform a bifurcation
analysis using numerical continuation software. The numerical results
provided in terms of frequency intervals between pedal note and ghost note are
compared with frequency intervals experimentally inferred from recordings of
seven different types of tuba, each of them being played by two professional
tuba players. | 2207.05395v3 |
2022-07-20 | Flow-based Visual Quality Enhancer for Super-resolution Magnetic Resonance Spectroscopic Imaging | Magnetic Resonance Spectroscopic Imaging (MRSI) is an essential tool for
quantifying metabolites in the body, but the low spatial resolution limits its
clinical applications. Deep learning-based super-resolution methods provided
promising results for improving the spatial resolution of MRSI, but the
super-resolved images are often blurry compared to the experimentally-acquired
high-resolution images. Attempts have been made with generative adversarial
networks to improve the visual image quality. In this work, we consider another
type of generative model, the flow-based model, of which the training is more
stable and interpretable compared to the adversarial networks. Specifically, we
propose a flow-based enhancer network to improve the visual quality of
super-resolution MRSI. Different from previous flow-based models, our enhancer
network incorporates anatomical information from additional image modalities
(MRI) and uses a learnable base distribution. In addition, we impose a guide
loss and a data-consistency loss to encourage the network to generate images
with high visual quality while maintaining high fidelity. Experiments on a
1H-MRSI dataset acquired from 25 high-grade glioma patients indicate that our
enhancer network outperforms the adversarial networks and the baseline
flow-based methods. Our method also allows visual quality adjustment and
uncertainty estimation. | 2207.10181v1 |
2022-07-24 | Contention Resolution for Coded Radio Networks | Randomized backoff protocols, such as exponential backoff, are a powerful
tool for managing access to a shared resource, often a wireless communication
channel (e.g., [1]). For a wireless device to transmit successfully, it uses a
backoff protocol to ensure exclusive access to the channel. Modern radios,
however, do not need exclusive access to the channel to communicate; in
particular, they have the ability to receive useful information even when more
than one device transmits at the same time. These capabilities have now been
exploited for many years by systems that rely on interference cancellation,
physical layer network coding and analog network coding to improve efficiency.
For example, Zigzag decoding [56] demonstrated how a base station can decode
messages sent by multiple devices simultaneously.
In this paper, we address the following question: Can we design a backoff
protocol that is better than exponential backoff when exclusive channel access
is not required? We define the Coded Radio Network Model, which generalizes
traditional radio network models (e.g., [30]). We then introduce the Decodable
Backoff Algorithm, a randomized backoff protocol that achieves an optimal
throughput of $1-o(1)$. (Throughput $1$ is optimal, as simultaneous reception
does not increase the channel capacity.) The algorithm breaks the constant
throughput lower bound for traditional radio networks [47-49], showing the
power of these new hardware capabilities. | 2207.11824v1 |
2022-07-25 | Control of dephasing in spin qubits during coherent transport in silicon | One of the key pathways towards scalability of spin-based quantum computing
systems lies in achieving long-range interactions between electrons and
increasing their inter-connectivity. Coherent spin transport is one of the most
promising strategies to achieve this architectural advantage. Experimental
results have previously demonstrated high fidelity transportation of spin
qubits between two quantum dots in silicon and identified possible sources of
error. In this theoretical study, we investigate these errors and analyze the
impact of tunnel coupling, magnetic field and spin-orbit effects on the spin
transfer process. The interplay between these effects gives rise to double dot
configurations that include regimes of enhanced decoherence that should be
avoided for quantum information processing. These conclusions permit us to
extrapolate previous experimental conclusions and rationalize the future design
of large scale quantum processors. | 2207.11865v2 |
2022-07-29 | Orthogonal Spin Current Injected Magnetic Tunnel Junction for Convolutional Neural Networks | We propose that a spin Hall effect driven magnetic tunnel junction device can
be engineered to provide a continuous change in the resistance across it when
injected with orthogonal spin currents. Using this concept, we develop a hybrid
device-circuit simulation platform to design a network that realizes multiple
functionalities of a convolutional neural network. At the atomistic level, we
use the Keldysh non-equilibrium Green's function technique that is coupled
self-consistently with the stochastic Landau-Lifshitz-Gilbert-Slonczewski
equations, which in turn is coupled with the HSPICE circuit simulator. We
demonstrate the simultaneous functionality of the proposed network to evaluate
the rectified linear unit and max-pooling functionalities. We present a
detailed power and error analysis of the designed network against the thermal
stability factor of the free ferromagnets. Our results show that there exists a
non-trivial power-error trade-off in the proposed network, which enables an
energy-efficient network design based on unstable free ferromagnets with
reliable outputs. The static power of the proposed ReLU circuit is $0.56\mu W$,
whereas the energy cost of a nine-input rectified linear unit-max-pooling
network with an unstable free ferromagnet ($\Delta=15$) is $3.4pJ$ in the
worst-case scenario. We also rationalize the magnetization stability of the
proposed device by analyzing the vanishing torque gradient points. | 2207.14603v3 |
2022-08-09 | Good locally repairable codes via propagation rules | In classical coding theory, it is common to construct new codes via
propagation rules. There are various propagation rules to construct classical
block codes. However, propagation rules have not been extensively explored for
constructions of locally repairable codes. In this paper, we introduce a few
propagation rules to construct good locally repairable codes. To our surprise,
these simple propagation rules produce a few interesting results. Firstly, by
concatenating a locally repairable code as an inner code with a classical block
code as an outer code, we obtain quite a few dimension-optimal binary locally
repairable codes. Secondly, from this concatenation, we explicitly build a
family of locally repairable codes that exceeds the Zyablov-type bound.
Thirdly, by a lengthening propagation rule that adds some rows and columns from
a parity-check matrix of a given linear code, we are able to produce a family
of dimension-optimal binary locally repairable codes from the extended Hamming
codes, and to convert a classical maximum distance separable (MDS) code into a
Singleton-optimal locally repairable code. Furthermore, via the lengthening
propagation rule, we greatly simplify the construction of a family of locally
repairable codes in \cite[Theorem 5]{MX20} that breaks the asymptotic
Gilbert-Varshamov bound. In addition, we make use of three other propagation
rules to produce more dimension-optimal binary locally repairable codes.
Finally, one of the phenomena we observe in this paper is that some trivial
propagation rules in classical block codes do not hold anymore for locally
repairable codes. | 2208.04484v1 |
2022-08-10 | Forward volume magnetoacoustic spin wave excitation with micron-scale spatial resolution | The interaction between surface acoustic waves (SAWs) and spin waves (SWs) in
a piezoelectric-magnetic thin film heterostructure yields potential for the
realization of novel microwave devices and applications in magnonics. In the
present work, we characterize magnetoacoustic waves in three adjacent magnetic
micro-stripes made from CoFe+Ga, CoFe, and CoFe+Pt with a single pair of
tapered interdigital transducers (TIDTs). The magnetic micro-stripes were
deposited by focused electron beam-induced deposition (FEBID) and focused ion
beam-induced deposition (FIBID) direct-writing techniques. The transmission
characteristics of the TIDTs are leveraged to selectively address the
individual micro-stripes. Here, the external magnetic field is continuously
rotated out of the plane of the magnetic thin film and the forward volume SW
geometry is probed with the external magnetic field along the film normal. Our
experimental findings are well explained by an extended phenomenological model
based on a modified Landau-Lifshitz-Gilbert approach that considers SWs with
nonzero wave vectors. Magnetoelastic excitation of forward volume SWs is
possible because of the vertical shear strain $\varepsilon_{xz}$ of the
Rayleigh-type SAW. | 2208.05205v1 |
2022-08-29 | Programmable photonic integrated meshes for modular generation of optical entanglement links | Large-scale generation of quantum entanglement between individually
controllable qubits is at the core of quantum computing, communications, and
sensing. Modular architectures of remotely-connected quantum technologies have
been proposed for a variety of physical qubits, with demonstrations reported in
atomic and all-photonic systems. However, an open challenge in these
architectures lies in constructing high-speed and high-fidelity reconfigurable
photonic networks for optically-heralded entanglement among target qubits. Here
we introduce a programmable photonic integrated circuit (PIC), realized in a
piezo-actuated silicon nitride (SiN)-in-oxide CMOS-compatible process, that
implements an N x N Mach-Zehnder mesh (MZM) capable of high-speed execution of
linear optical transformations. The visible-spectrum photonic integrated mesh
is programmed to generate optical connectivity on up to N = 8 inputs for a
range of optically-heralded entanglement protocols. In particular, we
experimentally demonstrated optical connections between 16 independent pairwise
mode couplings through the MZM, with optical transformation fidelities
averaging 0.991 +/- 0.0063. The PIC's reconfigurable optical connectivity
suffices for the production of 8-qubit resource states as building blocks of
larger topological cluster states for quantum computing. Our programmable PIC
platform enables the fast and scalable optical switching technology necessary
for network-based quantum information processors. | 2208.13911v1 |
2022-09-15 | Almost Ramanujan Expanders from Arbitrary Expanders via Operator Amplification | We give an efficient algorithm that transforms any bounded degree expander
graph into another that achieves almost optimal (namely, near-quadratic, $d
\leq 1/\lambda^{2+o(1)}$) trade-off between (any desired) spectral expansion
$\lambda$ and degree $d$. Furthermore, the algorithm is local: every vertex can
compute its new neighbors as a subset of its original neighborhood of radius
$O(\log(1/\lambda))$. The optimal quadratic trade-off is known as the Ramanujan
bound, so our construction gives almost Ramanujan expanders from arbitrary
expanders.
The locality of the transformation preserves structural properties of the
original graph, and thus has many consequences. Applied to Cayley graphs, our
transformation shows that any expanding finite group has almost Ramanujan
expanding generators. Similarly, one can obtain almost optimal explicit
constructions of quantum expanders, dimension expanders, monotone expanders,
etc., from existing (suboptimal) constructions of such objects. Another
consequence is a "derandomized" random walk on the original (suboptimal)
expander with almost optimal convergence rate. Our transformation also applies
when the degree is not bounded or the expansion is not constant.
We obtain our results by a generalization of Ta-Shma's technique in his
breakthrough paper [STOC 2017], used to obtain explicit almost optimal binary
codes. Specifically, our spectral amplification extends Ta-Shma's analysis of
bias amplification from scalars to matrices of arbitrary dimension in a very
natural way. Curiously, while Ta-Shma's explicit bias amplification
derandomizes a well-known probabilistic argument (underlying the
Gilbert--Varshamov bound), there seems to be no known probabilistic (or other
existential) way of achieving our explicit ("high-dimensional") spectral
amplification. | 2209.07024v1 |
2022-09-15 | An analytical study of the MHD clamshell instability on a sphere | This paper studies the instability of two-dimensional magnetohydrodynamic
(MHD) systems on a sphere using analytical methods. The underlying flow
consists of a zonal differential rotation, and a toroidal magnetic field is
present. Semicircle rules that prescribe the possible domain of the wave
velocity in the complex plane for general flow and field profiles are derived.
The paper then sets out an analytical study of the `clamshell instability',
which features field lines on the two hemispheres tilting in opposite
directions (Cally 2001, Sol. Phys. vol. 199, pp. 231--249). An asymptotic
solution for the instability problem is derived for the limit of weak shear of
the zonal flow, via the method of matched asymptotic expansions. It is shown
that when the zonal flow is solid body rotation, there exists a neutral mode
that tilts the magnetic field lines, referred to as the `tilting mode'. A weak
shear of the zonal flow excites the critical layer of the tilting mode, which
reverses the tilting direction to form the clamshell pattern and induces the
instability. The asymptotic solution provides insights into properties of the
instability for a range of flow and field profiles. A remarkable feature is
that the magnetic field affects the instability only through its local
behaviour in the critical layer. | 2209.07349v1 |
2022-09-15 | $\tilde{O}(n+\mathrm{poly}(k))$-time Algorithm for Bounded Tree Edit Distance | Computing the edit distance of two strings is one of the most basic problems
in computer science and combinatorial optimization. Tree edit distance is a
natural generalization of edit distance in which the task is to compute a
measure of dissimilarity between two (unweighted) rooted trees with node
labels. Perhaps the most notable recent application of tree edit distance is in
NoSQL big databases, such as MongoDB, where each row of the database is a JSON
document represented as a labeled rooted tree, and finding dissimilarity
between two rows is a basic operation. Until recently, the fastest algorithm
for tree edit distance ran in cubic time (Demaine, Mozes, Rossman, Weimann;
TALG'10); however, Mao (FOCS'21) broke the cubic barrier for the tree edit
distance problem using fast matrix multiplication.
Given a parameter $k$ as an upper bound on the distance, an $O(n+k^2)$-time
algorithm for edit distance has been known since the 1980s due to the works of
Myers (Algorithmica'86) and Landau and Vishkin (JCSS'88). The existence of an
$\tilde{O}(n+\mathrm{poly}(k))$-time algorithm for tree edit distance has been
posed as an open question, e.g., by Akmal and Jin (ICALP'21), who gave a
state-of-the-art $\tilde{O}(nk^2)$-time algorithm. In this paper, we answer
this question positively. | 2209.07524v1 |
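
For intuition about bounded-distance computation, here is a band-restricted dynamic program for string (not tree) edit distance; it runs in O(nk) time rather than the O(n + k^2) of Myers and of Landau and Vishkin cited above, but it illustrates how an upper bound k prunes the DP table:

```python
def banded_edit_distance(s, t, k):
    """Return the edit distance of s and t if it is <= k, else None.
    Only DP cells within the diagonal band |i - j| <= k are computed."""
    n, m = len(s), len(t)
    if abs(n - m) > k:
        return None
    INF = k + 1
    prev = [j if j <= k else INF for j in range(m + 1)]
    for i in range(1, n + 1):
        curr = [INF] * (m + 1)
        if i <= k:
            curr[0] = i
        lo, hi = max(1, i - k), min(m, i + k)
        for j in range(lo, hi + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j - 1] + cost,  # substitute or match
                          prev[j] + 1,         # delete from s
                          curr[j - 1] + 1)     # insert into s
        prev = curr
    return prev[m] if prev[m] <= k else None

print(banded_edit_distance("kitten", "sitting", 3))  # 3
print(banded_edit_distance("kitten", "sitting", 2))  # None: distance exceeds 2
```
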
2022-09-23 | Multiplexed control of spin quantum memories in a photonic circuit | A central goal in many quantum information processing applications is a
network of quantum memories that can be entangled with each other while being
individually controlled and measured with high fidelity. This goal has
motivated the development of programmable photonic integrated circuits (PICs)
with integrated spin quantum memories using diamond color center spin-photon
interfaces. However, this approach introduces a challenge in the microwave
control of individual spins within closely packed registers. Here, we present a
quantum-memory-integrated photonics platform capable of (i) the integration of
multiple diamond color center spins into a cryogenically compatible, high-speed
programmable PIC platform; (ii) selective manipulation of individual spin
qubits addressed via tunable magnetic field gradients; and (iii) simultaneous
control of multiple qubits using numerically optimized microwave pulse shaping.
The combination of localized optical control, enabled by the PIC platform,
together with selective spin manipulation opens the path to scalable quantum
networks on intra-chip and inter-chip platforms. | 2209.11853v2 |
2022-09-26 | A detailed star formation history for the extremely diffuse Andromeda XIX dwarf galaxy | We present deep imaging of the ultra-diffuse Andromeda XIX dwarf galaxy from
the Advanced Camera for Surveys on the Hubble Space Telescope, which resolves its
stellar populations to below the oldest main sequence turn-off. We derive a
full star formation history for the galaxy using MATCH, and find no evidence of
star formation in the past 8 Gyr. We calculate a quenching time of
$\tau_{90}=9.7\pm0.2$~Gyr, suggesting Andromeda~XIX ceased forming stars very
early on. This early quenching, combined with its extremely large half-light
radius, low density dark matter halo and lower than expected metallicity make
it a unique galaxy within the Local Group and raises questions about how it
formed. The early quenching time allows us to rule out feedback from bursty
star formation as a means to explain its diffuse stellar population and low
density dark matter halo. We find that the extended stellar population, low
density halo and star formation could be explained by either tidal interactions
(such as tidal shocking) or by late dry mergers, with the latter also
explaining its low metallicity. Proper motions and detailed abundances would
allow us to distinguish between these two scenarios. | 2209.12912v1 |
2022-10-06 | Scalable photonic integrated circuits for programmable control of atomic systems | Advances in laser technology have driven discoveries in atomic, molecular,
and optical (AMO) physics and emerging applications, from quantum computers
with cold atoms or ions, to quantum networks with solid-state color centers.
This progress is motivating the development of a new generation of
"programmable optical control" systems, characterized by criteria (C1) visible
(VIS) and near-infrared (IR) wavelength operation, (C2) large channel counts
extensible beyond 1000s of individually addressable atoms, (C3) high intensity
modulation extinction and (C4) repeatability compatible with low gate errors,
and (C5) fast switching times. Here, we address these challenges by introducing
an atom control architecture based on VIS-IR photonic integrated circuit (PIC)
technology. Based on a complementary metal-oxide-semiconductor (CMOS)
fabrication process, this Atom-control PIC (APIC) technology meets the system
requirements (C1)-(C5). As a proof of concept, we demonstrate a 16-channel
silicon nitride based APIC with (5.8$\pm$0.4) ns response times and -30 dB
extinction ratio at a wavelength of 780 nm. This work demonstrates the
suitability of PIC technology for quantum control, opening a path towards
scalable quantum information processing based on optically-programmable atomic
systems. | 2210.03100v2 |
2022-10-10 | Andreev processes in mesoscopic multi-terminal graphene Josephson junctions | There is growing interest in using multi-terminal Josephson junctions (MTJJs)
as a platform to artificially emulate topological phases and to investigate
complex superconducting mechanisms such as quartet and multiplet Cooper
pairings. Current experimental signatures in MTJJs have led to conflicting
interpretations of the salient features. In this work, we report a
collaborative experimental and theoretical investigation of graphene-based
four-terminal Josephson junctions. We observe resonant features in the
differential resistance maps that resemble those ascribed to multiplet Cooper
pairings. To understand these features, we model our junctions using a circuit
network of coupled two-terminal resistively and capacitively shunted junctions
(RCSJs). Under appropriate bias current, the model predicts that a current
flowing between two diagonal terminals in a four-terminal geometry may be
represented as a sinusoidal function of a weighted sum of the superconducting
phases. We show that starting from a semi-classical model with diffusive
current-phase relations, the MTJJ effectively emulates a general form of the
expected current-phase relation for multiplet Cooper pairings. Our study
therefore suggests that differential resistance measurements alone are
insufficient to conclusively distinguish resonant Andreev reflection processes
from semi-classical circuit-network effects. | 2210.04408v3 |
2022-10-10 | Infrared Remote Sensing Using Low Noise Avalanche Photodiode Detector | For a remote sensing optical payload to achieve a Ground Sampling Distance of
~ 10-30 m, a critical problem is platform-induced motion blur. While forward
motion compensation can reduce this apparent transit speed, it comes at the expense of a
more challenging satellite attitude control system and induces a variable
observation/illumination angle. This relative motion can be frozen out by
simply reading the sensor system at a frame rate that matches the ground
resolution element's pixel crossing time. Achieving high resolution with this
Time-Delay Integration (TDI)-like approach requires high-speed, and hence
near-"zero" readout noise, detector arrays to avoid swamping the observed
signal. This requires associated control electronics for fast frame readout and
a direct interface with smart Artificial Intelligence (AI) onboard processing.
With this technique, the platform freezes out its movement with respect to the
ground,
reducing the demands placed on the attitude control systems, which can
otherwise be difficult to implement on a small satellite platform. Here we
report the Australian National University's OzFuel mission which applies this
technical solution to deliver high ground resolution via high frame rate
imaging. OzFuel is built around the Leonardo SAPHIRA Mercury Cadmium Telluride
linear mode electron avalanche photodiode (LMeAPD) detector and the in-house
developed Rosella electronics control system. The mission will deliver an
integrated sensor system in a suite of Short-Wave Infrared (SWIR) passbands
dedicated to monitoring the flammability of Eucalypt trees. The OzFuel mission
concept focuses on the application of SWIR remote sensing data to deliver a
strategic evaluation of fuel loads and moisture content in the bushfire-prone
Australian environment. | 2210.04770v1 |
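
A back-of-the-envelope sketch of the frame-rate requirement implied by the pixel-crossing-time argument above; the ~7 km/s ground-track speed is a typical low-Earth-orbit figure and is our assumption, not a number from the mission description:

```python
# Read the sensor once per ground-resolution-element crossing time.
ground_speed_m_s = 7000.0          # assumed LEO ground-track speed
for gsd_m in (10.0, 20.0, 30.0):   # ground sampling distances from the text
    crossing_time_s = gsd_m / ground_speed_m_s
    frame_rate_hz = 1.0 / crossing_time_s
    print(f"GSD {gsd_m:4.0f} m -> pixel crossing {crossing_time_s * 1e3:.2f} ms, "
          f"frame rate {frame_rate_hz:6.0f} Hz")
# GSD 10 m -> ~1.43 ms crossing, ~700 Hz; GSD 30 m -> ~4.3 ms, ~233 Hz.
```
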
2022-10-17 | On construction of quantum codes with dual-containing quasi-cyclic codes | One of the main objectives of quantum error-correction theory is to construct
quantum codes with optimal parameters and properties. In this paper, we propose
a class of 2-generator quasi-cyclic codes and study their applications in the
construction of quantum codes over small fields. Firstly, some sufficient
conditions for these 2-generator quasi-cyclic codes to be dual-containing
concerning Hermitian inner product are determined. Then, we utilize these
Hermitian dual-containing quasi-cyclic codes to produce quantum codes via the
famous Hermitian construction. Moreover, we present a lower bound on the
minimum distance of these quasi-cyclic codes, which is helpful to construct
quantum codes with larger lengths and dimensions. As computational results,
many new quantum codes that exceed the quantum Gilbert-Varshamov bound are
constructed over $F_q$, where $q$ is $2,3,4,5$. In particular, 16 binary
quantum codes raise the lower bound on the minimum distance in Grassl's table
\cite{Grassl:codetables}. In nonbinary cases, many quantum codes are new or
have better parameters than those in the literature. | 2210.08716v1 |
2022-10-18 | Intense γ-photon and high-energy electron production by neutron irradiation: effects of nuclear excitations on reactor materials | The effects of neutron irradiation on materials are often interpreted in
terms of atomic recoils, initiated by neutron impacts and producing crystal
lattice defects. In addition, there is a remarkable two-step process, strongly
pronounced in the medium-weight and heavy elements. This process involves the
generation of energetic {\gamma} photons in nonelastic collisions of neutrons
with atomic nuclei, achieved via capture and inelastic reactions. Subsequently,
high-energy electrons are excited through the scattering of {\gamma} photons by
the atomic electrons. We derive and validate equations enabling a fast and
robust evaluation of photon and electron fluxes produced by the neutrons in the
bulk of materials. The two-step n-{\gamma}-e scattering creates a
nonequilibrium dynamically fluctuating steady-state population of high-energy
electrons, with the spectra of photon and electron energies extending well into
the mega-electron-volt range. This stimulates vacancy diffusion through
electron-triggered atomic recoils, primarily involving vacancy-impurity
dissociation, even if thermal activation is ineffective. Tungsten converts the
energy of fusion or fission neutrons into a flux of {\gamma} radiation at a
conversion efficiency approaching 99%, with implications for structural
materials, superconductors, and insulators, as well as phenomena like
corrosion, and helium and hydrogen isotope retention. | 2210.09667v2 |
2022-11-06 | A framework for leveraging machine learning tools to estimate personalized survival curves | The conditional survival function of a time-to-event outcome subject to
censoring and truncation is a common target of estimation in survival analysis.
This parameter may be of scientific interest and also often appears as a
nuisance in nonparametric and semiparametric problems. In addition to classical
parametric and semiparametric methods (e.g., based on the Cox proportional
hazards model), flexible machine learning approaches have been developed to
estimate the conditional survival function. However, many of these methods are
either implicitly or explicitly targeted toward risk stratification rather than
overall survival function estimation. Others apply only to discrete-time
settings or require inverse probability of censoring weights, which can be as
difficult to estimate as the outcome survival function itself. Here, we employ
a decomposition of the conditional survival function in terms of observable
regression models in which censoring and truncation play no role. This allows
application of an array of flexible regression and classification methods
rather than only approaches that explicitly handle the complexities inherent to
survival data. We outline estimation procedures based on this decomposition,
empirically assess their performance, and demonstrate their use on data from an
HIV vaccine trial. | 2211.03031v4 |
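
One concrete instance of the decomposition idea, in discrete time: expand the data into person-period rows, fit a classifier for the hazard, and take a cumulative product for the survival curve. This Python sketch (using numpy and scikit-learn on synthetic data) illustrates the general approach, not the authors' exact estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, horizon = 500, 10
x = rng.normal(size=n)
true_time = rng.geometric(p=1 / (3 + 2 * (x > 0)))   # covariate-dependent event times
cens = rng.integers(1, horizon + 1, size=n)          # random censoring times
time = np.minimum(np.minimum(true_time, cens), horizon)
event = (true_time <= cens) & (true_time <= horizon)

# Person-period expansion: one row per subject per period at risk.
rows, labels = [], []
for xi, ti, ei in zip(x, time, event):
    for t in range(1, ti + 1):
        rows.append([xi, t])
        labels.append(1 if (ei and t == ti) else 0)
clf = LogisticRegression().fit(np.array(rows), np.array(labels))

def survival_curve(xi):
    # S(t | x) = prod_{u <= t} (1 - h(u | x)), hazards from the classifier.
    hazards = clf.predict_proba([[xi, t] for t in range(1, horizon + 1)])[:, 1]
    return np.cumprod(1 - hazards)

print(np.round(survival_curve(1.0), 2))
```
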
2022-11-14 | High-resolution single-shot spiral diffusion-weighted imaging at 7T using expanded encoding with compressed sensing | Purpose: The expanded encoding model incorporates spatially- and time-varying
field perturbations for correction during reconstruction. So far, these
reconstructions have used the conjugate gradient method with early stopping
used as implicit regularization. However, this approach is likely suboptimal
for low-SNR cases like diffusion or high-resolution MRI. Here, we investigate
the extent to which l1-wavelet regularization, or equivalently compressed sensing
(CS), combined with expanded encoding improves trade-offs between spatial
resolution, readout time and SNR for single-shot spiral diffusion-weighted
imaging at 7T. The reconstructions were performed using our open-source
GPU-enabled reconstruction toolbox, MatMRI, that allows inclusion of the
different components of the expanded encoding model, with or without CS.
Methods: In vivo accelerated single-shot spirals were acquired with five
acceleration factors (2-6) and three in-plane spatial resolutions (1.5, 1.3,
and 1.1 mm). From the in vivo reconstructions, we estimated diffusion tensors
and computed fractional anisotropy maps. Then, simulations were used to
quantitatively investigate and validate the impact of CS-based regularization
on image quality when compared to a known ground truth. Results: In vivo
reconstructions revealed improved image quality with retainment of small
features when CS was used. Simulations showed that the joint use of the
expanded encoding model and CS improves accuracy of image reconstructions
(reduced mean-squared error) over the range of acceleration factors
investigated. Conclusion: The expanded encoding model and CS regularization are
complementary tools for single-shot spiral diffusion MRI, which enables both
higher spatial resolutions and higher acceleration factors. | 2211.07532v1 |
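
The l1-regularized reconstruction at the core of compressed sensing can be sketched with plain ISTA iterations; here A is a generic linear operator and the sparsifying transform is the identity for brevity, whereas the paper's expanded encoding model would supply a field-perturbation-aware encoding operator:

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))               # underdetermined measurement operator
x_true = np.zeros(100); x_true[[5, 17, 60]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = ista(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))   # should recover the support [5, 17, 60]
```
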
2022-11-17 | On universal butterfly and antisymmetric magnetoresistances | Butterfly magnetoresistance (BMR) and antisymmetric magnetoresistance (ASMR)
are about a butterfly-cross curve and a curve with one peak and one valley when
a magnetic field is swept up and down along a fixed direction. Other than the
parallelogram-shaped magnetoresistance-curve (MR-curve) often observed in
magnetic memory devices, BMR and ASMR are two ubiquitous types of MR-curves
observed in diversified magnetic systems, including van der Waals materials,
strongly correlated systems, and traditional magnets. Here, we reveal the
general principles and the picture behind the BMR and the ASMR that do not
depend on the detailed mechanisms of magnetoresistance: 1) The systems exhibit
hysteresis loops, common for most magnetic materials with coercivities. 2) The
magnetoresistance of the magnetic structures in a large positive magnetic field
and in a large negative magnetic field is approximately the same. With the
generalized Ohm's law in magnetic materials, these principles explain why most
BMR appears in the longitudinal resistance measurements and is very rare in the
Hall resistance measurements. Simple toy models, in which the
Landau-Lifshitz-Gilbert equation governs magnetization, are used to demonstrate
the principles and explain the appearance and disappearance of BMR in various
experiments. Our finding provides a simple picture to understand
magnetoresistance-related experiments. | 2211.09369v1 |
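
A minimal macrospin version of such a toy model: integrating a dimensionless Landau-Lifshitz-Gilbert equation with a uniaxial easy axis while sweeping the field reproduces the hysteresis loop behind principle 1). All parameter values here are illustrative:

```python
import numpy as np

def llg_step(m, h_eff, alpha=0.5, dt=0.01):
    # dm/dt = -m x h_eff - alpha * m x (m x h_eff), dimensionless LLG
    mxh = np.cross(m, h_eff)
    m = m + dt * (-mxh - alpha * np.cross(m, mxh))
    return m / np.linalg.norm(m)

def relax(m, h_ext, k_anis=1.0, steps=5000):
    for _ in range(steps):
        # Applied field along z plus the uniaxial easy-axis field; the tiny
        # transverse component avoids the unstable exact-pole equilibrium.
        h_eff = np.array([0.01, 0.0, h_ext + 2.0 * k_anis * m[2]])
        m = llg_step(m, h_eff)
    return m

m = np.array([0.1, 0.0, 1.0]); m /= np.linalg.norm(m)
for h in [3, 1.5, 0, -1.5, -3, -1.5, 0, 1.5, 3]:
    m = relax(m, h)
    print(f"H = {h:+.1f} -> m_z = {m[2]:+.2f}")  # same H, different m_z on the two branches
```
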
2022-12-22 | Photon production rate from Transverse-Longitudinal ($T-L$) mesonic correlator on the lattice | Thermal photons from the QGP provide important information about the
interaction among plasma constituents. The photon production rate from a
thermally equilibrated system is proportional to the transverse spectral
function $\rho_T(\omega=|\vec k|, \vec k)$. One can also calculate the photon
production rate from the difference between $\rho_T(\omega,\vec k)$
(transverse) and $\rho_L(\omega,\vec k)$ (longitudinal) projections, as
$\rho_L$ vanishes at the photon point. Because the UV part of $\rho_T-\rho_L$
is suppressed, the corresponding Euclidean correlator receives most of its
contribution from the IR part. We calculate the $T\!-\!L$ correlator on
$N_f=2+1$ flavour HISQ configurations with $m_l=m_s/5$ at a temperature of about
$1.15\,T_{pc}$ (220 MeV). We have used two ans\"{a}tze for the spectral
function: 1) A polynomial connected to the UV region consistent with OPE
expansion and 2) a hydro-inspired spectral function. We have also applied the
Backus-Gilbert method to estimate the spectral function. All these different
approaches are combined to estimate the photon production rate. | 2212.11509v2 |
2023-01-12 | Incremental Dead State Detection in Logarithmic Time | Identifying live and dead states in an abstract transition system is a
recurring problem in formal verification; for example, it arises in our recent
work on efficiently deciding regex constraints in SMT. However,
state-of-the-art graph algorithms for maintaining reachability information
incrementally (that is, as states are visited and before the entire state space
is explored) assume that new edges can be added from any state at any time,
whereas in many applications, outgoing edges are added from each state as it is
explored. To formalize the latter situation, we propose guided incremental
digraphs (GIDs), incremental graphs which support labeling closed states
(states which will not receive further outgoing edges). Our main result is that
dead state detection in GIDs is solvable in $O(\log m)$ amortized time per edge
for $m$ edges, improving upon $O(\sqrt{m})$ per edge due to Bender, Fineman,
Gilbert, and Tarjan (BFGT) for general incremental directed graphs.
We introduce two algorithms for GIDs: one establishing the logarithmic time
bound, and a second algorithm to explore a lazy heuristics-based approach. To
enable an apples-to-apples experimental comparison, we implemented both
algorithms, two simpler baselines, and the state-of-the-art BFGT baseline using
a common directed graph interface in Rust. Our evaluation shows $110$-$530$x
speedups over BFGT for the largest input graphs over a range of graph classes,
random graphs, and graphs arising from regex benchmarks. | 2301.05308v2 |
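
A naive sketch of the GID interface under one simple reading ("dead" means every reachable state is closed); the dead-state check below is a full graph search, nowhere near the paper's O(log m) amortized bound, and serves only to pin down the semantics:

```python
from collections import defaultdict

class GuidedIncrementalDigraph:
    def __init__(self):
        self.succ = defaultdict(set)
        self.closed = set()

    def add_edge(self, u, v):
        assert u not in self.closed, "closed states receive no new outgoing edges"
        self.succ[u].add(v)

    def close(self, u):
        self.closed.add(u)

    def is_dead(self, u):
        # DFS: u is live iff it can reach some open (not-yet-closed) state.
        stack, seen = [u], {u}
        while stack:
            s = stack.pop()
            if s not in self.closed:
                return False
            for t in self.succ[s] - seen:
                seen.add(t)
                stack.append(t)
        return True

g = GuidedIncrementalDigraph()
g.add_edge("a", "b"); g.close("a")
print(g.is_dead("a"))   # False: "b" is still open
g.close("b")
print(g.is_dead("a"))   # True: everything reachable is closed
```
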
2023-02-07 | Computational capability for physical reservoir computing using a spin-torque oscillator with two free layers | A numerical analysis on the computational capability of physical reservoir
computing utilizing a spin-torque oscillator with two free layers is reported.
Conventional spintronics devices usually consist of two ferromagnets, where the
direction of magnetization in one layer, called the free layer, can move while
that of the other, the reference layer, is fixed. Recently, however, devices
with two free layers, where the reference layer is replaced by another free
layer, have been developed for various practical applications. Adding another
free layer drastically changes the dynamical response of the device through the
couplings via the spin-transfer effect and the dipole magnetic field. A
numerical simulation of the Landau-Lifshitz-Gilbert equation and statistical
analyses of the Lyapunov exponent and the synchronization index reveal the
appearance of an amplitude-modulated oscillation and chaos in the oscillators
with two free layers. Such complex dynamics qualitatively change the
computational capability of physical reservoir computing because the
computational resource is the dynamics of the physical system. An evaluation of the
short-term memory capacity clarifies that oscillators with two free layers have
a larger capacity than those of conventional oscillators. An enhancement in
capacity near the edge of the echo state property, i.e., the boundary between zero
and finite synchronization index, is also found. | 2302.03769v1 |
2023-02-13 | Ultra-bright single photon source based on an atomically thin material | Solid-state single photon sources are central building blocks in quantum
communication networks and on-chip quantum information processing. Atomically
thin crystals were established as possible candidates to emit non-classical
states of light, however, the performance of monolayer-based single photon
sources has so far been lacking behind state-of-the-art devices based on volume
crystals. Here, we implement a single photon source based on an atomically thin
sheet of WSe2 coupled to a spectrally tunable optical cavity. It is
characterized by a high single photon purity with a $g^{(2)}(0)$ value as low
as $4.7 \pm 0.7 \%$ and a record-high first lens brightness of linearly
polarized photons as large as $65 \pm 4 \%$. Interestingly, the high
performance of our devices allows us to observe genuine quantum interference
phenomena in a Hong-Ou-Mandel experiment. Our results demonstrate that open
cavities and two-dimensional materials constitute an excellent platform for
ultra-bright quantum light sources: the unique properties of such
two-dimensional materials and the versatility of open cavities open an
inspiring avenue for novel quantum optoelectronic devices. | 2302.06340v1 |
2023-02-21 | A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT | Prompt engineering is an increasingly important skill set needed to converse
effectively with large language models (LLMs), such as ChatGPT. Prompts are
instructions given to an LLM to enforce rules, automate processes, and ensure
specific qualities (and quantities) of generated output. Prompts are also a
form of programming that can customize the outputs and interactions with an
LLM. This paper describes a catalog of prompt engineering techniques presented
in pattern form that have been applied to solve common problems when conversing
with LLMs. Prompt patterns are a knowledge transfer method analogous to
software patterns since they provide reusable solutions to common problems
faced in a particular context, i.e., output generation and interaction when
working with LLMs. This paper provides the following contributions to research
on prompt engineering that apply LLMs to automate software development tasks.
First, it provides a framework for documenting patterns for structuring prompts
to solve a range of problems so that they can be adapted to different domains.
Second, it presents a catalog of patterns that have been applied successfully
to improve the outputs of LLM conversations. Third, it explains how prompts can
be built from multiple patterns and illustrates prompt patterns that benefit
from combination with other prompt patterns. | 2302.11382v1 |
2023-03-11 | Power efficient ReLU design for neuromorphic computing using spin Hall effect | We demonstrate a magnetic tunnel junction injected with spin Hall current to
exhibit linear rotation of magnetization of the free-ferromagnet using only the
spin current. Using the linear resistance change of the MTJ, we devise a
circuit for the rectified linear activation (ReLU) function of the artificial
neuron. We explore the role of different spin Hall effect (SHE) heavy metal
layers on the power consumption of the ReLU circuit. We benchmark the power
consumption of the ReLU circuit with different SHE layers by defining a new
parameter called the spin Hall power factor. It combines the spin Hall angle,
resistivity, and thickness of the heavy metal layer, which translates to the
power consumption of the different SHE layers during spin-orbit
switching/rotation of the free FM. We employ a hybrid spintronics-CMOS
simulation framework that couples Keldysh non-equilibrium Green's function
formalism with Landau-Lifshitz-Gilbert-Slonczewski equations and the HSPICE
circuit simulator to account for diverse physics of spin-transport and the CMOS
elements in our proposed ReLU design. We also demonstrate the robustness of the
proposed ReLU circuit against thermal noise and non-trivial power-error
trade-off that enables the use of an unstable free-ferromagnet for
energy-efficient design. Using the proposed circuit, we evaluate the
performance of the convolutional neural network for MNIST datasets and
demonstrate comparable classification accuracies to the ideal ReLU with an
energy consumption of 75 $pJ$ per sample. | 2303.06463v1 |
2023-03-28 | Optimal Scheduling Policies for Remote Estimation of Autoregressive Markov Processes over Time-Correlated Fading Channel | We consider the problem of transmission scheduling for the remote estimation
of a discrete-time autoregressive Markov process that is driven by white
Gaussian noise. A sensor observes this process and then decides either to
encode the current state of this process into a data packet and attempt to
transmit it to the estimator over an unreliable wireless channel, modeled as a
Gilbert-Elliott channel, or to send no update. Each transmission attempt
consumes $\lambda$ units of transmission power, and the remote estimator is
assumed to be linear. The channel state is revealed only via the feedback
(ACK\slash NACK) of a transmission, and hence the channel state is not revealed
if no transmission occurs. The goal of the scheduler is to minimize the
expected value of an infinite-horizon cumulative discounted cost, in which the
instantaneous cost is composed of the following two quantities: (i) squared
estimation error, (ii) transmission power. We show that this problem can
equivalently be posed as a partially observable Markov decision process
(POMDP), in which the scheduler maintains a belief about the current state of
the channel, and makes decisions on the basis of the current value of the
estimation error, and the belief state. We then show that the optimal policy is
of threshold type, i.e., for each value of the estimation error $e$, there is a
threshold $b^*(e)$ such that when the error is equal to $e$, it is
optimal to transmit only when the current belief state is greater than
$b^*(e)$. | 2303.16285v1 |
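
A minimal sketch of the belief bookkeeping such a scheduler could maintain, assuming for illustration that transmissions succeed exactly when the Gilbert-Elliott channel is in its Good state; the transition probabilities and the threshold map b*(e) below are placeholders, not values from the paper:

```python
P_GG, P_BG = 0.9, 0.3   # illustrative P(Good -> Good) and P(Bad -> Good)

def belief_update(b, transmitted, ack=None):
    """b: current probability that the channel is in the Good state."""
    if not transmitted:
        return b * P_GG + (1 - b) * P_BG   # no feedback: prior propagation only
    # Feedback reveals the state at transmission time; propagate one step.
    return P_GG if ack else P_BG

def should_transmit(error, belief, threshold):
    # Threshold structure from the paper: for error e, transmit iff the
    # belief exceeds b*(e); `threshold` stands in for that map.
    return belief > threshold(error)

b = 0.5
b = belief_update(b, transmitted=False);          print(round(b, 3))  # 0.6
b = belief_update(b, transmitted=True, ack=True); print(round(b, 3))  # 0.9
print(should_transmit(error=2.0, belief=b, threshold=lambda e: 0.8 / (1 + e)))  # True
```
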
2023-04-14 | Study on Soft Robotic Pinniped Locomotion | Legged locomotion is a highly promising but under-researched subfield within
the field of soft robotics. The compliant limbs of soft-limbed robots offer
numerous benefits, including the ability to regulate impacts, tolerate falls,
and navigate through tight spaces. These robots have the potential to be used
for various applications, such as search and rescue, inspection, surveillance,
and more. The state-of-the-art still faces many challenges, including limited
degrees of freedom, a lack of diversity in gait trajectories, insufficient limb
dexterity, and limited payload capabilities. To address these challenges, we
develop a modular soft-limbed robot that can mimic the locomotion of pinnipeds.
By using a modular design approach, we aim to create a robot that has improved
degrees of freedom, gait trajectory diversity, limb dexterity, and payload
capabilities. We derive a complete floating-base kinematic model of the
proposed robot and use it to generate and experimentally validate a variety of
locomotion gaits. Results show that the proposed robot is capable of
replicating these gaits effectively. We compare the locomotion trajectories
under different gait parameters against our modeling results to demonstrate the
validity of our proposed gait models. | 2304.06945v1 |
2023-04-19 | Local object crop collision network for efficient simulation of non-convex objects in GPU-based simulators | Our goal is to develop an efficient contact detection algorithm for
large-scale GPU-based simulation of non-convex objects. Current GPU-based
simulators such as IsaacGym and Brax must trade-off speed with fidelity,
generality, or both when simulating non-convex objects. Their main issue lies
in contact detection (CD): existing CD algorithms, such as
Gilbert-Johnson-Keerthi (GJK), must trade off their computational speed with
accuracy, which becomes expensive as the number of collisions among non-convex
objects increases. We propose a data-driven approach for CD, whose accuracy
depends only on the quality and quantity of offline dataset rather than online
computation time. Unlike GJK, our method inherently has a uniform computational
flow, which facilitates efficient GPU usage based on advanced compilers such as
XLA (Accelerated Linear Algebra). Further, we offer a data-efficient solution
by learning the patterns of colliding local crop object shapes, rather than
global object shapes, which are harder to learn. We demonstrate that our approach
improves the efficiency of existing CD methods by a factor of 5-10 for
non-convex objects with comparable accuracy. Using the previous work on contact
resolution for a neural-network-based contact detector, we integrate our CD
algorithm into the open-source GPU-based simulator, Brax, and show that we can
improve the efficiency over IsaacGym and generality over standard Brax. We
highly recommend the videos of our simulator included in the supplementary
materials. | 2304.09439v2 |
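
The primitive GJK iterates is the support function of the Minkowski difference; a minimal sketch for convex point clouds (the names and data are ours):

```python
import numpy as np

def support(points, d):
    """Vertex of conv(points) farthest in direction d."""
    return points[np.argmax(points @ d)]

def minkowski_support(A, B, d):
    # Support point of A - B in direction d; GJK iterates this primitive to
    # test whether the origin lies in A - B (i.e., whether A and B intersect).
    return support(A, d) - support(B, -d)

A = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 1.0], [3.0, 1.0], [1.0, 3.0]])   # overlaps A
d = np.array([1.0, 0.0])
print(minkowski_support(A, B, d))   # support of A - B along +x: [ 1. -1.]
```
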
2023-04-25 | Semantic Compression With Large Language Models | The rise of large language models (LLMs) is revolutionizing information
retrieval, question answering, summarization, and code generation tasks.
However, in addition to confidently presenting factually inaccurate information
at times (known as "hallucinations"), LLMs are also inherently limited by the
number of input and output tokens that can be processed at once, making them
potentially less effective on tasks that require processing a large set or
continuous stream of information. A common approach to reducing the size of
data is through lossless or lossy compression. Yet, in some cases it may not be
strictly necessary to perfectly recover every detail from the original data, as
long as a requisite level of semantic precision or intent is conveyed.
This paper presents three contributions to research on LLMs. First, we
present the results from experiments exploring the viability of approximate
compression using LLMs, focusing specifically on GPT-3.5 and GPT-4 via ChatGPT
interfaces. Second, we investigate and quantify the capability of LLMs to
compress text and code, as well as to recall and manipulate compressed
representations of prompts. Third, we present two novel metrics -- Exact
Reconstructive Effectiveness (ERE) and Semantic Reconstruction Effectiveness
(SRE) -- that quantify the level of preserved intent between text compressed
and decompressed by the LLMs we studied. Our initial results indicate that
GPT-4 can effectively compress and reconstruct text while preserving the
semantic essence of the original text, providing a path to leverage
$\sim$5$\times$ more tokens than present limits allow. | 2304.12512v1 |
2023-04-28 | Optimal majority rules and quantitative Condorcet properties of setwise Kemeny voting schemes | The important Kemeny problem, which consists of computing median consensus
rankings of an election with respect to the Kemeny voting rule, admits
important applications in biology and computational social choice and was
generalized recently via an interesting setwise approach by Gilbert et al. Our
first results establish optimal quantitative extensions of the Unanimity
property and the well-known $3/4$-majority rule of Betzler et al. for the
classical Kemeny median problem. Moreover, by elaborating an exhaustive list of
quantified axiomatic properties (such as the Condorcet and Smith criteria, the
$5/6$-majority rule, etc.) of the $3$-wise Kemeny rule, in which not only
pairwise comparisons but also the discordance between the winners of subsets of
three candidates is taken into account, we come to the conclusion that the
$3$-wise Kemeny voting scheme induced by the $3$-wise Kendall-tau distance
presents interesting advantages in comparison with the classical Kemeny rule.
For example, it satisfies several improved manipulation-proof properties. Since
the $3$-wise Kemeny problem is NP-hard, our results also provide some of the
first useful space reduction techniques by determining the relative orders of
pairs of alternatives. Our work suggests similar interesting properties of
higher setwise Kemeny voting schemes, which justify and compensate for their
higher computational cost relative to the classical Kemeny scheme. | 2304.14980v1 |
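
For reference, the classical pairwise Kendall-tau distance underlying the Kemeny rule can be sketched in a few lines; the 3-wise variant discussed above additionally accounts for discordance on the winners of 3-candidate subsets, which is not shown here:

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """r1, r2: rankings given as lists of candidates, best first.
    Returns the number of candidate pairs ordered oppositely."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

print(kendall_tau(["a", "b", "c"], ["c", "a", "b"]))  # 2 discordant pairs
```
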
2023-05-25 | Packaging code for reproducible research in the public sector | The effective and ethical use of data to inform decision-making offers huge
value to the public sector, especially when delivered by transparent,
reproducible, and robust data processing workflows. One way that governments
are unlocking this value is through making their data publicly available,
allowing more people and organisations to derive insights. However, open data
is not enough in many cases: publicly available datasets need to be accessible
in an analysis-ready form from popular data science tools, such as R and
Python, for them to realise their full potential.
This paper explores ways to maximise the impact of open data with reference
to a case study of packaging code to facilitate reproducible analysis. We
present the jtstats project, which consists of R and Python packages for
importing, processing, and visualising large and complex datasets representing
journey times, for many modes and purposes at multiple geographic levels,
released by the UK Department of Transport. jtstats shows how domain specific
packages can enable reproducible research within the public sector and beyond,
saving duplicated effort and reducing the risks of errors from repeated
analyses. We hope that the jtstats project inspires others, particularly those
in the public sector, to add value to their data sets by making them more
accessible. | 2305.16205v1 |
2023-05-25 | COMPLETE: A flagship mission for complete understanding of 3D coronal magnetic energy release | COMPLETE is a flagship mission concept combining broadband spectroscopic
imaging and comprehensive magnetography from multiple viewpoints around the Sun
to enable tomographic reconstruction of 3D coronal magnetic fields and
associated dynamic plasma properties, which provide direct diagnostics of
energy release. COMPLETE re-imagines the paradigm for solar remote-sensing
observations through purposefully co-optimized detectors distributed on
multiple spacecraft that operate as a single observatory, linked by a
comprehensive data/model assimilation strategy to unify individual observations
into a single physical framework. We describe COMPLETE's science goals,
instruments, and mission implementation. With targeted investment by NASA,
COMPLETE is feasible for launch in 2032 to observe around the maximum of Solar
Cycle 26. | 2305.16533v1 |
2023-05-25 | Magnetic Energy Powers the Corona: How We Can Understand its 3D Storage & Release | The coronal magnetic field is the prime driver behind many as-yet unsolved
mysteries: solar eruptions, coronal heating, and the solar wind, to name a few.
It is, however, still poorly observed and understood. We highlight key
questions related to magnetic energy storage, release, and transport in the
solar corona, and their relationship to these important problems. We advocate
for new and multi-point co-optimized measurements, sensitive to magnetic field
and other plasma parameters, spanning from optical to $\gamma$-ray wavelengths,
to bring closure to these long-standing and fundamental questions. We discuss
how our approach can fully describe the 3D magnetic field, embedded plasma,
particle energization, and their joint evolution to achieve these objectives. | 2305.17146v1 |
2023-05-27 | Optimization's Neglected Normative Commitments | Optimization is offered as an objective approach to resolving complex,
real-world decisions involving uncertainty and conflicting interests. It drives
business strategies as well as public policies and, increasingly, lies at the
heart of sophisticated machine learning systems. A paradigm used to approach
potentially high-stakes decisions, optimization relies on abstracting the real
world to a set of decision(s), objective(s) and constraint(s). Drawing from the
modeling process and a range of actual cases, this paper describes the
normative choices and assumptions that are necessarily part of using
optimization. It then identifies six emergent problems that may be neglected:
1) Misspecified values can yield optimizations that omit certain imperatives
altogether or incorporate them incorrectly as a constraint or as part of the
objective, 2) Problematic decision boundaries can lead to faulty modularity
assumptions and feedback loops, 3) Failing to account for multiple agents'
divergent goals and decisions can lead to policies that serve only certain
narrow interests, 4) Mislabeling and mismeasurement can introduce bias and
imprecision, 5) Faulty use of relaxation and approximation methods,
unaccompanied by formal characterizations and guarantees, can severely impede
applicability, and 6) Treating optimization as a justification for action,
without specifying the necessary contextual information, can lead to ethically
dubious or faulty decisions. Suggestions are given to further understand and
curb the harms that can arise when optimization is used wrongfully. | 2305.17465v2 |
2023-05-30 | Hardness of Approximation in PSPACE and Separation Results for Pebble Games | We consider the pebble game on DAGs with bounded fan-in introduced in
[Paterson and Hewitt '70] and the reversible version of this game in [Bennett
'89], and study the question of how hard it is to decide exactly or
approximately the number of pebbles needed for a given DAG in these games. We
prove that the problem of deciding whether $s$~pebbles suffice to reversibly
pebble a DAG $G$ is PSPACE-complete, as was previously shown for the standard
pebble game in [Gilbert, Lengauer and Tarjan '80]. Via two different graph
product constructions we then strengthen these results to establish that both
standard and reversible pebbling space are PSPACE-hard to approximate to within
any additive constant. To the best of our knowledge, these are the first
hardness of approximation results for pebble games in an unrestricted setting
(even for polynomial time). Also, since [Chan '13] proved that reversible
pebbling is equivalent to the games in [Dymond and Tompa '85] and [Raz and
McKenzie '99], our results apply to the Dymond--Tompa and Raz--McKenzie games
as well, and from the same paper it follows that resolution depth is
PSPACE-hard to determine up to any additive constant. We also obtain a
multiplicative logarithmic separation between reversible and standard pebbling
space. This improves on the additive logarithmic separation previously known
and could plausibly be tight, although we are not able to prove this. We leave
as an interesting open problem whether our additive hardness of approximation
result could be strengthened to a multiplicative bound if the computational
resources are decreased from polynomial space to the more common setting of
polynomial time. | 2305.19104v1 |
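As an aside for readers new to pebbling, the decision problem above ("do $s$ pebbles suffice?") can be made concrete as a brute-force search over pebbling configurations. A minimal sketch for the standard black pebble game on a toy DAG, assuming the usual place/remove rules; the exponential state space is consistent with the hardness results discussed above (node names and graph encoding are illustrative):

```python
from collections import deque

def can_pebble(preds, target, s):
    """Decide whether `s` pebbles suffice in the standard black pebble game.

    `preds` maps each node to the tuple of its direct predecessors
    (empty for sources). BFS over pebbling configurations; feasible
    only for tiny DAGs, since the state space is exponential.
    """
    nodes = list(preds)
    start = frozenset()
    seen = {start}
    queue = deque([start])
    while queue:
        conf = queue.popleft()
        if target in conf:
            return True
        moves = []
        # Place a pebble on any node whose predecessors are all pebbled.
        if len(conf) < s:
            moves += [conf | {v} for v in nodes
                      if v not in conf and all(p in conf for p in preds[v])]
        # Remove any pebble (always allowed in the standard game).
        moves += [conf - {v} for v in conf]
        for nxt in moves:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# A small pyramid DAG: two sources feeding one output node.
preds = {"a": (), "b": (), "c": ("a", "b")}
print(can_pebble(preds, "c", 3))  # True
print(can_pebble(preds, "c", 2))  # False: both inputs must hold pebbles
```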
2023-06-01 | Every Bit Counts in Consensus | Consensus enables $n$ processes to agree on a common valid $L$-bit value,
despite $t < n/3$ processes being faulty and acting arbitrarily. A long line of
work has been dedicated to improving the worst-case communication complexity of
consensus in partial synchrony. This has recently culminated in a worst-case
word complexity of $O(n^2)$. However, the worst-case bit complexity of the best
solution is still $O(n^2 L + n^2 \kappa)$ (where $\kappa$ is the security
parameter), far from the $\Omega(n L + n^2)$ lower bound. The gap is
significant given the practical use of consensus primitives, where values
typically consist of batches of large size ($L > n$).
This paper shows how to narrow the aforementioned gap while achieving optimal
linear latency. Namely, we present a new algorithm, DARE (Disperse, Agree,
REtrieve), that improves upon the $O(n^2 L)$ term via a novel dispersal
primitive. DARE achieves $O(n^{1.5} L + n^{2.5} \kappa)$ bit complexity, an
effective $\sqrt{n}$-factor improvement over the state-of-the-art (when
$L > n \kappa$). Moreover, we show that employing heavier cryptographic
primitives, namely STARK proofs, allows us to devise DARE-Stark, a version of
DARE which achieves the near-optimal bit complexity of
$O(n L + n^2 \mathrm{poly}(\kappa))$. Both DARE and DARE-Stark achieve optimal
$O(n)$ latency. | 2306.00431v2 |
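As a quick numerical illustration of the regime claimed above, the sketch below compares the leading terms of the three bounds, ignoring hidden constants; the parameter values are assumptions chosen only to show the rough $\sqrt{n}$-factor gap when $L > n\kappa$:

```python
# Illustrative only: compare the leading terms of the bit-complexity
# bounds cited above, ignoring hidden constants. n = processes,
# L = value length in bits, kappa = security parameter (assumed values).
n, kappa = 1000, 256
for L in (n * kappa, 10 * n * kappa):
    prior = n**2 * L + n**2 * kappa      # state of the art: O(n^2 L + n^2 kappa)
    dare = n**1.5 * L + n**2.5 * kappa   # DARE: O(n^1.5 L + n^2.5 kappa)
    lower = n * L + n**2                 # Omega(n L + n^2) lower bound
    print(f"L={L:>10}: prior/DARE = {prior/dare:.1f}, DARE/lower = {dare/lower:.1f}")
```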
2023-06-12 | Accountability Infrastructure: How to implement limits on platform optimization to protect population health | Attention capitalism has generated design processes and product development
decisions that prioritize platform growth over all other considerations. To the
extent limits have been placed on these incentives, interventions have
primarily taken the form of content moderation. While moderation is important
for what we call "acute harms," societal-scale harms -- such as negative
effects on mental health and social trust -- require new forms of institutional
transparency and scientific investigation, which we group under the term
accountability infrastructure.
This is not a new problem. In fact, there are many conceptual lessons and
implementation approaches for accountability infrastructure within the history
of public health. After reviewing these insights, we reinterpret the societal
harms generated by technology platforms through reference to public health. To
that end, we present a novel mechanism design framework and practical
measurement methods for that framework. The proposed approach is iterative and
built into the product design process, and is applicable for both
internally-motivated (i.e. self regulation by companies) and
externally-motivated (i.e. government regulation) interventions for a range of
societal problems, including mental health.
We aim to help shape a research agenda of principles for the design of
mechanisms around problem areas on which there is broad consensus and a firm
base of support. We offer constructive examples and discussion of potential
implementation methods related to these topics, as well as several new data
illustrations for potential effects of exposure to online content. | 2306.07443v1 |
2023-06-16 | Microlayer in nucleate boiling seen as Landau-Levich film with dewetting and evaporation | Both experimental and theoretical studies on the microscale and fast physical
phenomena occurring during the growth of vapor bubbles in nucleate pool boiling
are reported. The focus is on the liquid film of micrometric thickness
(``microlayer'') that can form between the heater and the liquid-vapor
interface of a bubble on the millisecond time scale. The microlayer strongly
affects the macroscale heat transfer and is thus important to understand. It
is shown that the microlayer can be seen as the Landau-Levich film deposited by
the receding bubble foot edge as the bubble grows. The microlayer
profile measured with white-light interferometry, the temperature distribution
over the heater, and the bubble shape were observed with synchronized
high-speed cameras. The microlayer consists of two regions: a ridge near the
contact line followed by a longer and flatter part. The ridge could not be
measured because of the intrinsic limitation of interferometry, which is
analyzed. The simulations show that the ridge grows over time due to the
collection of liquid as the contact line recedes, and the theoretical dynamics
of this process agrees with the experiment. The flatter part of the microlayer
exhibits a bump whose physical origin is explained. | 2306.09838v1 |
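For orientation, the classical Landau-Levich law for a plate withdrawn from a bath predicts a deposited film thickness $h \approx 0.945\,\ell_c\,\mathrm{Ca}^{2/3}$; a sketch with water-like parameters near boiling (all values illustrative, and the prefactor for a receding bubble foot may differ from the plate-withdrawal case):

```python
import math

# Classical Landau-Levich deposition law: h = 0.945 * l_c * Ca^(2/3),
# with capillary length l_c = sqrt(sigma / (rho * g)) and capillary
# number Ca = mu * U / sigma. Values are illustrative assumptions.
sigma, rho, mu, g = 0.059, 958.0, 2.8e-4, 9.81  # water near 100 C (approx.)
U = 0.1                                          # receding speed, m/s (assumed)
l_c = math.sqrt(sigma / (rho * g))
Ca = mu * U / sigma
h = 0.945 * l_c * Ca ** (2.0 / 3.0)
print(f"Ca = {Ca:.2e}, film thickness h = {h * 1e6:.1f} micrometers")
```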
2023-06-20 | High frequency oscillations in spin-torque nano oscillator due to bilinear coupling | Exchange coupling in an interfacial context is crucial for spin-torque nano
oscillator (STNO) devices, which consist of a non-magnetic spacer alloyed with
a ferromagnetic material. Currently, investigations on the dynamics of the free
layer magnetization and frequency enhancement in the STNO with bilinear
coupling are still being actively pursued. In the present work, we investigate
the dynamics of the STNO in the presence of bilinear coupling but in the
absence of an external magnetic field by analyzing the associated
Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation, and consequently the impact
of the bilinear coupling on the dynamics of the magnetization of the free layer
is studied. It is observed that the frequency of the oscillations in the
magnetization component along the direction of the pinned layer polarization
can be enhanced above 300 GHz by positive bilinear coupling and up to around 30
GHz by negative bilinear coupling. We further reveal a transition from in-plane
to out-of-plane precession for both positive and negative bilinear couplings.
We also analyze the switching of the magnetization for different values of
current and bilinear coupling. Our detailed investigations of the STNO with
bilinear coupling point toward the possibility of high-frequency devices
controlled by the applied current and bilinear coupling in the absence of a
magnetic field. | 2306.11415v1 |
2023-06-20 | Convolutional neural networks for large-scale dynamical modeling of itinerant magnets | Complex spin textures in itinerant electron magnets hold promise for
next-generation memory and information technology. The long-ranged and often
frustrated electron-mediated spin interactions in these materials give rise to
intriguing localized spin structures such as skyrmions. Yet, simulations of
magnetization dynamics for such itinerant magnets are computationally difficult
due to the need for repeated solutions of the electronic structure problem. We
present a convolutional neural network (CNN) model to accurately and
efficiently predict the electron-induced magnetic torques acting on local
spins. Importantly, as the convolutional operations with a fixed kernel
(receptive field) size naturally take advantage of the locality principle for
many-electron systems, CNN offers a scalable machine learning approach to spin
dynamics. We apply our approach to enable large-scale dynamical simulations of
skyrmion phases in itinerant spin systems. By incorporating the CNN model into
Landau-Lifshitz-Gilbert dynamics, our simulations successfully reproduce the
relaxation process of the skyrmion phase and stabilize a skyrmion lattice in
larger systems. The CNN model also allows us to compute the effective receptive
fields, thus providing a systematic and unbiased method for determining the
locality of the original electron models. | 2306.11833v1 |
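A hypothetical sketch of such a fully convolutional torque predictor, not the paper's exact architecture: circular padding encodes periodic boundaries, and the stacked small kernels give the fixed receptive field that the abstract ties to the locality principle:

```python
import torch
import torch.nn as nn

class TorqueCNN(nn.Module):
    """Sketch of a fully convolutional torque predictor (architecture
    hypothetical). Input: local spin field as a (batch, 3, H, W) tensor;
    output: predicted torque per site. Circular padding respects periodic
    boundary conditions, and stacked 3x3 kernels give a fixed, finite
    receptive field, reflecting the locality principle noted above."""
    def __init__(self, width=32, depth=4):
        super().__init__()
        layers, c_in = [], 3
        for _ in range(depth):
            layers += [nn.Conv2d(c_in, width, 3, padding=1,
                                 padding_mode="circular"), nn.ReLU()]
            c_in = width
        layers.append(nn.Conv2d(width, 3, 1))  # 3 torque components per site
        self.net = nn.Sequential(*layers)

    def forward(self, spins):
        return self.net(spins)

spins = torch.randn(8, 3, 48, 48)
spins = spins / spins.norm(dim=1, keepdim=True)  # normalize to unit spins
torques = TorqueCNN()(spins)
print(torques.shape)  # torch.Size([8, 3, 48, 48])
```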
2023-06-29 | Relaxed Local Correctability from Local Testing | We construct the first asymptotically good relaxed locally correctable codes
with polylogarithmic query complexity, bringing the upper bound polynomially
close to the lower bound of Gur and Lachish (SICOMP 2021). Our result follows
from showing that a high-rate locally testable code can boost the block length
of a smaller relaxed locally correctable code, while preserving the correcting
radius and incurring only a modest additive cost in rate and query complexity.
We use the locally testable code's tester to check if the amount of corruption
in the input is low; if so, we can "zoom-in" to a suitable substring of the
input and recurse on the smaller code's local corrector. Hence, iterating this
operation with a suitable family of locally testable codes due to Dinur, Evra,
Livne, Lubotzky, and Mozes (STOC 2022) yields asymptotically good codes with
relaxed local correctability, arbitrarily large block length, and
polylogarithmic query complexity.
Our codes asymptotically inherit the rate and distance of any locally
testable code used in the final invocation of the operation. Therefore, our
framework also yields nonexplicit relaxed locally correctable codes with
polylogarithmic query complexity that have rate and distance approaching the
Gilbert-Varshamov bound. | 2306.17035v2 |
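For reference, the binary Gilbert-Varshamov bound mentioned above guarantees codes of rate $R \ge 1 - H_2(\delta)$ at relative distance $\delta < 1/2$; a quick evaluation sketch:

```python
import math

def h2(x):
    """Binary entropy function H_2(x) in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def gv_rate(delta):
    """Gilbert-Varshamov bound for binary codes of relative distance
    delta < 1/2: codes of rate R >= 1 - H_2(delta) exist."""
    return 1.0 - h2(delta)

for delta in (0.05, 0.1, 0.2):
    print(f"delta = {delta}: achievable rate >= {gv_rate(delta):.3f}")
```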
2023-07-13 | Words are not Wind -- How Joint Commitment and Reputation Solve Social Dilemmas, without Repeated Interactions or Enforcement by Third Parties | Joint commitment was argued to "make our social world" (Gilbert, 2014) and to
separate us from other primates. 'Joint' entails that neither of us promises
anything, unless the other promises as well. When we need to coordinate for the
best mutual outcome, any commitment is beneficial. However, when we are tempted
to free-ride (i.e. in social dilemmas), commitment serves no obvious purpose.
We show that a reputation system, which judges action in social dilemmas only
after joint commitment, can prevent free-riding. Keeping commitments builds
trust. We can selectively enter joint commitments with trustworthy individuals
to ensure their cooperation (since they will now be judged). We simply do not
commit to cooperate with those we do not trust, and hence can freely defect
without losing the trust of others. This principle might be the reason for
pointedly public joint commitments, such as marriage. It is especially relevant
to our evolutionary past, in which no mechanisms existed to enforce commitments
reliably and impartially (e.g. via a powerful and accountable government). Much
research from anthropology, philosophy and psychology made the assumption that
past collaborations were mutually beneficial and offered few possibilities to
free-ride, an assumption for which there is little support. Our evolutionary
game theory approach proves that this assumption is not necessary, because
free-riding could have been dealt with via joint commitments and reputation. | 2307.06898v1 |
2023-07-18 | Multi-Stage Cable Routing through Hierarchical Imitation Learning | We study the problem of learning to perform multi-stage robotic manipulation
tasks, with applications to cable routing, where the robot must route a cable
through a series of clips. This setting presents challenges representative of
complex multi-stage robotic manipulation scenarios: handling deformable
objects, closing the loop on visual perception, and handling extended behaviors
consisting of multiple steps that must be executed successfully to complete the
entire task. In such settings, learning individual primitives for each stage
that succeed with a high enough rate to perform a complete temporally extended
task is impractical: if each stage must be completed successfully and has a
non-negligible probability of failure, the likelihood of successful completion
of the entire task becomes negligible. Therefore, successful controllers for
such multi-stage tasks must be able to recover from failure and compensate for
imperfections in low-level controllers by smartly choosing which controllers to
trigger at any given time, retrying, or taking corrective action as needed. To
this end, we describe an imitation learning system that uses vision-based
policies trained from demonstrations at both the lower (motor control) and the
upper (sequencing) level, present a system for instantiating this method to
learn the cable routing task, and perform evaluations showing strong performance
in generalizing to very challenging clip placement variations. Supplementary
videos, datasets, and code can be found at
https://sites.google.com/view/cablerouting. | 2307.08927v5 |
2023-07-20 | Fallout from U.S. atmospheric nuclear tests in New Mexico and Nevada (1945-1962) | One hundred and one atmospheric nuclear weapon tests were conducted between
1945 and 1962 in the United States, resulting in widespread dispersion of
radioactive fallout, and leading to environmental contamination and population
exposures. Accurate assessment of the extent of fallout from nuclear weapon
tests has been challenging in the United States and elsewhere, due to limited
monitoring and data accessibility. Here we address this deficit by combining
U.S. government data, high-resolution reanalyzed historical weather fields, and
atmospheric transport modeling to reconstruct radionuclide deposition across
the contiguous United States, with 10-kilometer spatial and one-hour temporal
resolution for five days following detonation, from all 94 atmospheric tests
detonated in New Mexico and Nevada with fission yields sufficient to generate
mushroom clouds. Our analysis also includes deposition estimates for 10 days
following the detonation of Trinity, the first ever nuclear weapon test, on
July 16, 1945. We identify locations where radionuclide deposition
significantly exceeded levels in areas covered by the U.S. Radiation Exposure
Compensation Act (RECA). These findings include deposition in all 48 contiguous
U.S. states. They provide an opportunity for re-evaluating the public health
and environmental implications from atmospheric nuclear testing. Finally, our
findings also speak to debates about marking the beginning of the Anthropocene
with nuclear weapons fallout. Our deposition estimates indicate that direct
fallout from Trinity, a plutonium device, reached Crawford Lake in Canada, the
proposed "golden spike" site marking the beginning of the Anthropocene epoch,
starting on July 20, 1945. | 2307.11040v1 |
2023-07-23 | Characterizing non-Markovian Quantum Process by Fast Bayesian Tomography | To push gate performance to levels beyond the thresholds for quantum error
correction, it is important to characterize the error sources occurring on
quantum gates. However, the characterization of non-Markovian error poses a
challenge to current quantum process tomography techniques. Fast Bayesian
Tomography (FBT) is a self-consistent gate set tomography protocol that can be
bootstrapped from earlier characterization knowledge and be updated in
real-time with arbitrary gate sequences. Here we demonstrate how FBT allows for
the characterization of key non-Markovian error processes. We introduce two
experimental protocols for FBT to diagnose the non-Markovian behavior of
two-qubit systems on silicon quantum dots. To increase the efficiency and
scalability of the experiment-analysis loop, we develop an online FBT software
stack. To reduce experiment cost and analysis time, we also introduce a native
readout method and warm boot strategy. Our results demonstrate that FBT is a
useful tool for probing non-Markovian errors that can be detrimental to the
ultimate realization of fault-tolerant operation in quantum computing. | 2307.12452v2 |
2023-07-27 | Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback | Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems. | 2307.15217v2 |
2023-08-03 | Predicting Ki67, ER, PR, and HER2 Statuses from H&E-stained Breast Cancer Images | Despite the advances in machine learning and digital pathology, it is not yet
clear if machine learning methods can accurately predict molecular information
merely from histomorphology. In a quest to answer this question, we built a
large-scale dataset (185538 images) with reliable measurements for Ki67, ER,
PR, and HER2 statuses. The dataset is composed of mirrored images of H\&E and
corresponding images of immunohistochemistry (IHC) assays (Ki67, ER, PR, and
HER2). These images are mirrored through registration. To increase reliability,
individual pairs were inspected and discarded if artifacts were present (tissue
folding, bubbles, etc.). Measurements for Ki67, ER and PR were determined by
calculating H-Score from image analysis. HER2 measurement is based on binary
classification: 0 and 1+ (IHC scores representing a negative subset) vs 3+ (IHC
score positive subset). Cases with IHC equivocal score (2+) were excluded. We
show that a standard ViT-based pipeline can achieve prediction performances
around 90% in terms of Area Under the Curve (AUC) when trained with a proper
labeling protocol. Finally, we shed light on the ability of the trained
classifiers to localize relevant regions, which encourages future work to
improve the localizations. Our proposed dataset is publicly available:
https://ihc4bc.github.io/ | 2308.01982v1 |
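The H-score referenced above is a standard IHC summary statistic; a minimal sketch of its usual definition (the paper's exact image-analysis pipeline is not specified in the abstract):

```python
def h_score(pct_weak, pct_moderate, pct_strong):
    """H-score used in IHC image analysis: weight the percentage of
    cells at each staining intensity (1+, 2+, 3+). Range: 0 to 300."""
    assert 0 <= pct_weak + pct_moderate + pct_strong <= 100
    return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

# Example: 20% weakly, 30% moderately, 10% strongly stained cells.
print(h_score(20, 30, 10))  # 110
```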
2023-08-06 | Unravelling metallic contaminants in complex polyimide heterostructures using deep ultraviolet spectroscopic ellipsometry | Metallic contaminants in complex heterostructures are an important topic due to
their significant roles in determining physical properties as well as device
performance. Heterostructures of polyimide via on Al pad and Cu redistribution
layer (RDL) on polyimide have shown exotic properties and are important for
advanced semiconductor packaging systems. One main problem is significant
leakage current variations, which affect the performance of the devices, yet
the origin is far from understood. Furthermore, metal contaminations would
occur at the buried interfaces and it is particularly challenging to probe
them. Until now, the electronic and optical properties of complex polyimide
heterostructures and the roles of metallic contaminants, especially in the deep
ultraviolet (DUV) have not been studied extensively. Herewith, using
spectroscopic ellipsometry (SE) in a broad DUV range supported with
finite-difference time-domain (FDTD) calculations, we determine optical
properties of contaminants with various concentrations and reveal their
influence on device performance of under-bump vias and redistribution layer
(RDL) architectures. The complex dielectric function shows varying
contamination levels and different metals responsible for chip performance.
Metallic contaminants are found embedded within 50 nm in the polyimide and
different metals are distinguishable with varying concentrations, in agreement
with contact measurements in highly complex structures. Our result shows the
potency of spectroscopic ellipsometry in the DUV and paves the way for
non-destructive, advanced quality control and metrology applications in
integrated advanced electronics packaging systems. | 2308.03015v1 |
2023-08-14 | Nanoelectromechanical control of spin-photon interfaces in a hybrid quantum system on chip | Atom-like defects or color centers (CCs) in nanostructured diamond are a
leading platform for optically linked quantum technologies, with recent
advances including memory-enhanced quantum communication, multi-node quantum
networks, and spin-mediated generation of photonic cluster states. Scaling to
practically useful applications motivates architectures meeting the following
criteria: C1 individual optical addressing of spin qubits; C2 frequency tuning
of CC spin-dependent optical transitions; C3 coherent spin control in CC ground
states; C4 active photon routing; C5 scalable manufacturability; and C6 low
on-chip power dissipation for cryogenic operations. However, no architecture
meeting C1-C6 has thus far been demonstrated. Here, we introduce a hybrid
quantum system-on-chip (HQ-SoC) architecture that simultaneously achieves
C1-C6. Key to this advance is the realization of piezoelectric strain control
of diamond waveguide-coupled tin vacancy centers to meet C2 and C3, with
ultra-low power dissipation necessary for C6. The DC response of our device
allows emitter transition tuning by over 20 GHz, while the large frequency
range (exceeding 2 GHz) enables low-power AC control. We show acoustic
manipulation of integrated tin vacancy spins and estimate single-phonon
coupling rates over 1 kHz in the resolved sideband regime. Combined with
high-speed optical routing with negligible static hold power, this HQ-SoC
platform opens the path to scalable single-qubit control with optically
mediated entangling gates. | 2308.07161v1 |
2023-08-23 | MOFO: MOtion FOcused Self-Supervision for Video Understanding | Self-supervised learning (SSL) techniques have recently produced outstanding
results in learning visual representations from unlabeled videos. Despite the
importance of motion in supervised learning techniques for action recognition,
SSL methods often do not explicitly consider motion information in videos. To
address this issue, we propose MOFO (MOtion FOcused), a novel SSL method for
focusing representation learning on the motion area of a video, for action
recognition. MOFO automatically detects motion areas in videos and uses these
to guide the self-supervision task. We use a masked autoencoder which randomly
masks out a high proportion of the input sequence; we force a specified
percentage of the inside of the motion area to be masked and the remainder from
outside. We further incorporate motion information into the finetuning step to
emphasise motion in the downstream task. We demonstrate that our motion-focused
innovations can significantly boost the performance of the currently leading
SSL method (VideoMAE) for action recognition. Our method improves the recent
self-supervised Vision Transformer (ViT), VideoMAE, by achieving +2.6%, +2.1%,
+1.3% accuracy on Epic-Kitchens verb, noun and action classification,
respectively, and +4.7% accuracy on Something-Something V2 action
classification. Our proposed approach significantly improves the performance of
the current SSL method for action recognition, indicating the importance of
explicitly encoding motion in SSL. | 2308.12447v2 |
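A toy sketch of the masking policy described above, with parameter names and values as assumptions rather than the paper's settings:

```python
import numpy as np

def motion_focused_mask(n_tokens, motion_idx, mask_ratio=0.9, inside_frac=0.75,
                        rng=np.random.default_rng(0)):
    """Sketch of the masking policy described above: mask a high overall
    fraction of tokens, forcing a specified share of the masked tokens to
    come from inside the motion area and the rest from outside."""
    motion_idx = np.asarray(motion_idx)
    outside_idx = np.setdiff1d(np.arange(n_tokens), motion_idx)
    n_mask = int(mask_ratio * n_tokens)
    n_inside = min(int(inside_frac * n_mask), len(motion_idx))
    n_outside = min(n_mask - n_inside, len(outside_idx))
    mask = np.zeros(n_tokens, dtype=bool)
    mask[rng.choice(motion_idx, n_inside, replace=False)] = True
    mask[rng.choice(outside_idx, n_outside, replace=False)] = True
    return mask

mask = motion_focused_mask(196, motion_idx=range(40, 90))
print(mask.sum(), mask[40:90].sum())  # total masked, masked inside motion area
```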
2023-08-25 | Thermal effect on microwave pulse driven magnetization switching of Stoner particle | Recently it has been demonstrated that the cosine chirp microwave pulse
(CCMP) is capable of achieving fast and energy-efficient magnetization reversal
of a nanoparticle at zero temperature. Here, we investigate the effect of
finite temperature $T$ on CCMP-driven magnetization reversal within the
framework of the stochastic Landau-Lifshitz-Gilbert equation. At finite
temperature, we obtain fast and energy-efficient CCMP-driven reversal and
hence estimate the maximal temperature, $T_{max}$ at which the magnetization
reversal is valid. $T_{max}$ increases with increasing nanoparticle
cross-sectional area/shape anisotropy up to a certain value, and afterward
$T_{max}$ decreases with further increase of the cross-sectional area/shape
anisotropy. This is because the demagnetization/shape-anisotropy field opposes
the magnetocrystalline anisotropy, i.e., it reduces the energy barrier
separating the two stable states. For smaller cross-sectional
area/shape anisotropy, the controlling parameters of the CCMP show a decreasing
trend with temperature. We also find that with increasing easy-plane shape
anisotropy, the required initial frequency of the CCMP is significantly
reduced. For larger nanoparticle volumes, the CCMP parameters remain constant
over a wide temperature range, as desired for device applications. Therefore,
the above findings might be useful for realizing fast and energy-efficient
CCMP-driven magnetization reversal under realistic conditions. | 2308.13124v1 |
2023-09-04 | Impact of electrostatic crosstalk on spin qubits in dense CMOS quantum dot arrays | Quantum processors based on integrated nanoscale silicon spin qubits are a
promising platform for highly scalable quantum computation. Current CMOS spin
qubit processors consist of dense gate arrays to define the quantum dots,
making them susceptible to crosstalk from capacitive coupling between a dot and
its neighbouring gates. Small but sizeable spin-orbit interactions can transfer
this electrostatic crosstalk to the spin g-factors, creating a dependence of
the Larmor frequency on the electric field created by gate electrodes
positioned even tens of nanometers apart. By studying the Stark shift from tens
of spin qubits measured in nine different CMOS devices, we developed a
theoretical framework that explains how electric fields couple to the spin of
the electrons in increasingly complex arrays, including those electric
fluctuations that limit qubit dephasing times $T_2^*$. The results will aid in
the design of robust strategies to scale CMOS quantum technology. | 2309.01849v1 |
2023-09-05 | Connectivity and interference in device-to-device networks in Poisson-Voronoi cities | To study the overall connectivity in device-to-device networks in cities, we
incorporate a signal-to-interference-plus-noise connectivity model into a
Poisson-Voronoi tessellation model representing the streets of a city. Relays
are located at crossroads (or street intersections), whereas (user) devices are
scattered along streets. Between any two adjacent relays, we assume data can be
transmitted either directly between the relays or through users, given they
share a common street. Our simulation results reveal that the network
connectivity is ensured when the density of users (on the streets) exceeds a
certain critical value. But then the network connectivity disappears when the
user density exceeds a second critical value. The intuition is that for longer
streets, where direct relay-to-relay communication is not possible, users are
needed to transmit data between relays, but with too many users the
interference becomes too strong, eventually reducing the overall network
connectivity. This observation on the user density evokes previous results
based on another wireless network model, where transmitter-receivers were
scattered across the plane. This effect disappears when interference is removed
from the model, giving a variation of the classic Gilbert model and recalling
the lesson that neglecting interference in such network models can give overly
optimistic results. For physically reasonable model parameters, we show that
crowded streets (with more than six users on a typical street) lead to a sudden
drop in connectivity. We also give numerical results outlining a relationship
between the user density and the strength of any interference reduction
techniques. | 2309.02137v2 |
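To make the SINR connectivity criterion concrete, a minimal sketch under a singular power-law path-loss model (all parameter values are illustrative assumptions, not those of the paper):

```python
import numpy as np

def sinr(receiver, transmitter, interferers, power=1.0, noise=1e-9, alpha=3.0):
    """Sketch of the SINR at `receiver` from `transmitter`, with all other
    active nodes treated as interferers. Singular path-loss model
    r^(-alpha); parameter values are illustrative assumptions."""
    def rx_power(src):
        r = np.linalg.norm(np.subtract(receiver, src))
        return power * r ** (-alpha)
    interference = sum(rx_power(i) for i in interferers)
    return rx_power(transmitter) / (noise + interference)

# A link "exists" when the SINR exceeds a decoding threshold tau.
tau = 0.1
print(sinr((0, 0), (10, 0), [(35, 5), (-40, 20)]) > tau)  # True
```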
2023-09-16 | On non-expandable cross-bifix-free codes | A cross-bifix-free code of length $n$ over $\mathbb{Z}_q$ is defined as a
non-empty subset of $\mathbb{Z}_q^n$ satisfying that the prefix set of each
codeword is disjoint from the suffix set of every codeword. Cross-bifix-free
codes have found important applications in digital communication systems. One
of the main research problems on cross-bifix-free codes is to construct
cross-bifix-free codes as large as possible in size. Recently, Wang and Wang
introduced a family of cross-bifix-free codes $S_{I,J}^{(k)}(n)$, which is a
generalization of the classical cross-bifix-free codes studied early by
Levenshtein, Gilbert and Chee {\it et al.}. It is known that $S_{I,J}^{(k)}(n)$
is nearly optimal in size and $S_{I,J}^{(k)}(n)$ is non-expandable if $k=n-1$
or $1\leq k<n/2$. In this paper, we first show that $S_{I,J}^{(k)}(n)$ is
non-expandable if and only if $k=n-1$ or $1\leq k<n/2$, thereby improving the
results in [Chee {\it et al.}, IEEE-TIT, 2013] and [Wang and Wang, IEEE-TIT,
2022]. We then construct a new family of cross-bifix-free codes
$U^{(t)}_{I,J}(n)$ to expand $S_{I,J}^{(k)}(n)$ such that the resulting larger
code $S_{I,J}^{(k)}(n)\bigcup U^{(t)}_{I,J}(n)$ is a non-expandable
cross-bifix-free code whenever $S_{I,J}^{(k)}(n)$ is expandable. Finally, we
present an explicit formula for the size of $S_{I,J}^{(k)}(n)\bigcup
U^{(t)}_{I,J}(n)$. | 2309.08915v1 |
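The defining property is easy to check directly for small sets; a sketch following the definition in the first sentence of the abstract:

```python
def is_cross_bifix_free(code):
    """Check the defining property from the abstract: the set of proper
    nonempty prefixes of codewords is disjoint from the set of proper
    nonempty suffixes."""
    prefixes = {v[:k] for v in code for k in range(1, len(v))}
    suffixes = {w[len(w) - k:] for w in code for k in range(1, len(w))}
    return prefixes.isdisjoint(suffixes)

print(is_cross_bifix_free({"00101", "00111"}))  # True
print(is_cross_bifix_free({"00101", "00110"}))  # False: "0" is a prefix and a suffix
```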
2023-09-21 | Real-time feedback protocols for optimizing fault-tolerant two-qubit gate fidelities in a silicon spin system | Recently, several groups have demonstrated two-qubit gate fidelities in
semiconductor spin qubit systems above 99%. Achieving this regime of
fault-tolerant compatible high fidelities is nontrivial and requires exquisite
stability and precise control over the different qubit parameters over an
extended period of time. This can be done by efficiently calibrating qubit
control parameters against different sources of micro- and macroscopic noise.
Here, we present several single- and two-qubit parameter feedback protocols,
optimised for and implemented in state-of-the-art fast FPGA hardware.
Furthermore, we use wavelet-based analysis on the collected feedback data to
gain insight into the different sources of noise in the system. Scalable
feedback is an outstanding challenge and the presented implementation and
analysis gives insight into the benefits and drawbacks of qubit parameter
feedback, as feedback related overhead increases. This work demonstrates a
pathway towards robust qubit parameter feedback and systematic noise analysis,
crucial for mitigation strategies towards systematic high-fidelity qubit
operation compatible with quantum error correction protocols. | 2309.12541v1 |
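As a generic illustration of qubit parameter feedback (not the paper's FPGA implementation), a toy proportional tracking loop for a drifting qubit frequency:

```python
import numpy as np

# Toy sketch: a drifting Larmor frequency is tracked by repeatedly
# estimating the detuning from noisy measurements and applying a
# proportional correction to the control frequency. All values assumed.
rng = np.random.default_rng(0)
f_qubit, f_ctrl, gain = 0.0, 0.0, 0.5
errors = []
for step in range(200):
    f_qubit += 0.01 * rng.standard_normal()                             # random-walk drift
    detuning_est = (f_qubit - f_ctrl) + 0.005 * rng.standard_normal()   # noisy estimate
    f_ctrl += gain * detuning_est                                       # proportional update
    errors.append(f_qubit - f_ctrl)
print(f"rms tracking error: {np.std(errors):.4f}")
```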
2023-09-21 | Spatio-temporal correlations of noise in MOS spin qubits | In quantum computing, characterising the full noise profile of qubits can aid
the efforts towards increasing coherence times and fidelities by creating error
mitigating techniques specific to the type of noise in the system, or by
completely removing the sources of noise. Spin qubits in MOS quantum dots are
exposed to noise originated from the complex glassy behaviour of two-level
fluctuators, leading to non-trivial correlations between qubit properties both
in space and time. With recent engineering progress, large amounts of data are
being collected in typical spin qubit device experiments, and it is beneficial
to explore data analysis options inspired by fields of research experienced in
managing large data sets; examples include astrophysics, finance, and climate
science. Here, we propose and demonstrate wavelet-based analysis
techniques to decompose signals into both frequency and time components to gain
a deeper insight into the sources of noise in our systems. We apply the
analysis to a long feedback experiment performed on a state-of-the-art
two-qubit system in a pair of SiMOS quantum dots. The observed correlations
serve to identify common microscopic causes of noise, as well as to elucidate
pathways for multi-qubit operation with a more scalable feedback system. | 2309.12542v2 |
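A minimal sketch of the wavelet decomposition idea using PyWavelets, on a synthetic signal standing in for a qubit-parameter record (the wavelet choice and signal are assumptions, not the paper's):

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic record: slow drift + short high-frequency burst + white noise.
rng = np.random.default_rng(1)
t = np.arange(4096)
signal = 0.5 * np.sin(2 * np.pi * t / 2048)           # slow drift
signal[2000:2100] += np.sin(2 * np.pi * t[:100] / 8)  # short burst
signal += 0.1 * rng.standard_normal(t.size)           # white noise floor

# Multi-level discrete wavelet transform separates frequency bands
# while retaining time localization of the burst.
coeffs = pywt.wavedec(signal, "db4", level=6)
labels = ["approx"] + [f"detail {len(coeffs) - i}" for i in range(1, len(coeffs))]
for label, c in zip(labels, coeffs):
    print(f"{label:>9}: mean power {np.mean(c**2):.4f}")
```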
2023-09-29 | Glioma subtype classification from histopathological images using in-domain and out-of-domain transfer learning: An experimental study | We provide in this paper a comprehensive comparison of various transfer
learning strategies and deep learning architectures for computer-aided
classification of adult-type diffuse gliomas. We evaluate the generalizability
of out-of-domain ImageNet representations for a target domain of
histopathological images, and study the impact of in-domain adaptation using
self-supervised and multi-task learning approaches for pretraining the models
using the medium-to-large scale datasets of histopathological images. A
semi-supervised learning approach is furthermore proposed, where the fine-tuned
models are utilized to predict the labels of unannotated regions of the whole
slide images (WSI). The models are subsequently retrained using the
ground-truth labels and weak labels determined in the previous step, providing
superior performance in comparison to standard in-domain transfer learning with
balanced accuracy of 96.91% and F1-score 97.07%, and minimizing the
pathologist's efforts for annotation. Finally, we provide a visualization tool
working at WSI level which generates heatmaps that highlight tumor areas; thus,
providing insights to pathologists concerning the most informative parts of the
WSI. | 2309.17223v1 |
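A sketch of the standard out-of-domain baseline such studies compare against: fine-tuning an ImageNet-pretrained backbone on subtype labels (the class count, backbone, and training details here are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn
from torchvision import models

n_subtypes = 3  # e.g. adult-type diffuse glioma subtypes (assumed)
# Start from ImageNet weights and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, n_subtypes)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

patches = torch.randn(4, 3, 224, 224)  # stand-in for histopathology tiles
labels = torch.tensor([0, 1, 2, 1])
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
print(float(loss))
```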
2023-10-13 | Midpoint geometric integrators for inertial magnetization dynamics | We consider the numerical solution of the inertial version of the
Landau-Lifshitz-Gilbert equation (iLLG), which describes high-frequency
nutation on top of magnetization precession due to angular momentum relaxation.
The iLLG equation defines a higher-order nonlinear dynamical system with very
different nature compared to the classical LLG equation, requiring twice as
many degrees of freedom for space-time discretization. It exhibits essential
conservation properties, namely magnetization amplitude preservation,
magnetization projection conservation, and a balance equation for generalized
free energy, leading to a Lyapunov structure (i.e. the free energy is a
decreasing function of time) when the external magnetic field is constant in
time. We propose two second-order numerical schemes for integrating the iLLG
dynamics over time, both based on the implicit midpoint rule. The first scheme
unconditionally preserves all the conservation properties, making it the
preferred choice for simulating inertial magnetization dynamics. However, it
implies doubling the number of unknowns, necessitating significant changes in
numerical micromagnetic codes and increasing computational costs especially for
spatially inhomogeneous dynamics simulations. To address this issue, we present
a second time-stepping method that retains the same computational cost as the
implicit midpoint rule for classical LLG dynamics while unconditionally
preserving magnetization amplitude and projection. Special quasi-Newton
techniques are developed for solving the nonlinear system of equations required
at each time step due to the implicit nature of both time-steppings. The
numerical schemes are validated on analytical solution for macrospin terahertz
frequency response and the effectiveness of the second scheme is demonstrated
with full micromagnetic simulation of inertial spin waves propagation in a
magnetic thin-film. | 2310.09043v1 |
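To see why the implicit midpoint rule preserves the magnetization amplitude, note that every term of the LLG right-hand side is orthogonal to $m$, so evaluating it at the midpoint gives $(m_{n+1}-m_n)\cdot(m_{n+1}+m_n)=0$, hence $|m_{n+1}|=|m_n|$. A sketch for the classical (non-inertial) LLG equation, solved by fixed-point iteration (units and parameters are assumptions):

```python
import numpy as np

def llg_rhs(m, h_eff, alpha=0.1, gamma=1.0):
    """Right-hand side of the classical LLG equation in explicit
    (Landau-Lifshitz) form; every term is orthogonal to m."""
    mxh = np.cross(m, h_eff)
    return -gamma / (1 + alpha**2) * (mxh + alpha * np.cross(m, mxh))

def midpoint_step(m, h_eff, dt, tol=1e-12, max_iter=50):
    """One implicit-midpoint step, solved by fixed-point iteration.
    Because the RHS evaluated at the midpoint is orthogonal to the
    midpoint itself, |m| is preserved to round-off, the property
    highlighted above. A sketch for classical LLG, not the full
    inertial (iLLG) system."""
    m_new = m.copy()
    for _ in range(max_iter):
        mid = 0.5 * (m + m_new)
        m_next = m + dt * llg_rhs(mid, h_eff)
        if np.linalg.norm(m_next - m_new) < tol:
            break
        m_new = m_next
    return m_new

m = np.array([1.0, 0.0, 0.0])
h_eff = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    m = midpoint_step(m, h_eff, dt=0.05)
print(np.linalg.norm(m))  # stays 1.0 up to round-off
```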