| publicationDate | title | abstract | id |
|---|---|---|---|
2022-04-19
|
Higher-order modulations in the skyrmion-lattice phase of Cu$_2$OSeO$_3$
|
Using small angle neutron scattering, we have investigated higher-order peaks
in the skyrmion-lattice phase of Cu$_2$OSeO$_3$, in which two different
skyrmion lattices, SkX1 and SkX2, are known to form. For each skyrmion-lattice
phase, we observed two sets of symmetrically inequivalent peaks at the
higher-order-reflection positions with the indices $(110)$ and $(200)$. Under
the condition where the SkX1 and SkX2 coexist, we confirmed the absence of the
scattering at $\mathbf{Q}$ positions combining reflections from the two phases,
indicating a negligibly small double-scattering component. Detailed analysis
of the peak profile, as well as the temperature and magnetic-field dependence
of the peak intensity, also supports the intrinsic higher-order modulation
rather than the parasitic double scattering. The two higher-order modulations
show contrasting magnetic-field dependence: the $(110)$ intensity increases as
the field is increased, whereas the $(200)$ intensity decreases. This indicates
that, in Cu$_2$OSeO$_3$, skyrmions are weakly distorted, and the distortion is
field-dependent in such a way that the dominant higher-order modulation
switches from $(110)$ to $(200)$ under field. Monte Carlo simulations under a
swept external magnetic field qualitatively reproduce the observed
magnetic-field dependence, and suggest that the higher-order modulations
correspond to superlattices of weak swirls appearing in the middle of the
original triangular-lattice skyrmions.
|
2204.08614v1
|
2022-04-19
|
Emu: A Case Study for TDI-like Imaging for Infrared Observation from Space
|
A wide-field zenith-looking telescope operating in a mode similar to
Time-Delay-Integration (TDI) or drift scan imaging can perform an infrared sky
survey without active pointing control, but it requires a high-speed, low-noise
infrared detector. Operating from a hosted payload platform on the
International Space Station (ISS), the Emu space telescope employs the
paradigm-changing properties of the Leonardo SAPHIRA electron avalanche
photodiode array to provide powerful new observations of cool stars at the
critical water absorption wavelength (1.4 $\mu$m) largely inaccessible to
ground-based telescopes due to the Earth's own atmosphere. Cool stars,
especially those of spectral-type M, are important probes across contemporary
astrophysics, from the formation history of the Galaxy to the formation of
rocky exoplanets. Main sequence M-dwarf stars are the most abundant stars in
the Galaxy and evolved M-giant stars are some of the most distant stars that
can be individually observed. The Emu sky survey will deliver critical stellar
properties of these cool stars by inferring oxygen abundances via measurement
of the water absorption band strength at 1.4 $\mu$m. Here we present the
TDI-like imaging capability of the Emu mission, its science objectives,
instrument details, and simulation results.
|
2204.08713v2
|
2022-05-05
|
Photon emissivity of the quark-gluon plasma: a lattice QCD analysis of the transverse channel
|
We present results for the thermal photon emissivity of the quark-gluon
plasma derived from spatially transverse vector correlators computed in lattice
QCD at a temperature of 250 MeV. The analysis of the spectral functions,
performed at fixed spatial momentum, is based on continuum-extrapolated
correlators obtained with two flavours of dynamical Wilson fermions. We compare
the next-to-leading order perturbative QCD correlators, as well as the ${\cal
N}=4$ supersymmetric Yang-Mills correlators at infinite coupling, to the
correlators from lattice QCD and find them to lie within $\sim10\%$ of each
other. We then refine the comparison, performing it at the level of filtered
spectral functions obtained model-independently via the Backus-Gilbert method.
Motivated by these studies, for frequencies $\omega\lesssim2.5\,$GeV we use fit
ans\"atze to the spectral functions that perform well when applied to mock data
generated from the NLO QCD or from the strongly-coupled SYM spectral functions,
while the high-frequency part, $\omega\gtrsim 2.5\,$GeV, is matched to NLO QCD.
We compare our results for the photon emissivity to our previous analysis of a
different vector channel at the same temperature. We obtain the most stringent
constraint at photon momenta around $k\simeq0.8\,$GeV, for which we find a
differential photon emission rate per unit volume of $d\Gamma_\gamma/d^3k =
(\alpha_{\rm em}/(\exp(k/T)-1))\times (2.2 \pm 0.8 ) \times 10^{-3}\,{\rm
GeV}$.
|
2205.02821v1
|
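As a quick numerical illustration of the quoted result, the differential emission rate at the momentum of tightest constraint can be evaluated directly from the formula in the abstract (using the paper's central value $2.2 \times 10^{-3}$ GeV; the temperature and momentum are those stated above):

```python
import math

# dGamma/d^3k = (alpha_em / (exp(k/T) - 1)) * (2.2 +/- 0.8) * 1e-3 GeV,
# evaluated at the central value, k = 0.8 GeV, T = 250 MeV.
ALPHA_EM = 1 / 137.035999  # fine-structure constant
T = 0.250                  # temperature in GeV
k = 0.8                    # photon momentum in GeV

bose = 1.0 / math.expm1(k / T)   # Bose-Einstein factor exp(k/T) - 1, inverted
rate = ALPHA_EM * bose * 2.2e-3  # central value of the rate, in GeV
print(f"dGamma/d^3k ~ {rate:.2e} GeV")
```

This works out to a few times $10^{-7}$ GeV, showing how strongly the Bose factor suppresses the rate at $k/T \simeq 3.2$.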
2022-05-17
|
Highlighting relations between Wave-particle duality, Uncertainty principle, Phase space and Microstates
|
Wave-particle duality is often considered as the modern answer to the problem
of the nature of light after more than 2000 years of questioning. It is also
the answer given by quantum physics concerning the nature of matter particles
and any other radiations. The main objective of this work is to analyze the
relations between this concept of wave-particle duality, the uncertainty
principle, and the concepts of phase space and microstates
considered in statistical mechanics. It is mainly highlighted that while the
concepts of phase space and microstates were already introduced in classical
physics before the discovery of the wave-particle duality, a correct
understanding of them cannot be achieved without the use of the concept of
quantum phase space and phase space representation of quantum mechanics which
are directly related to the uncertainty principle. The possibility of using
these concepts of quantum phase space and phase space representations of
quantum mechanics to help in a deeper description of the wave-particle duality
and in the study of some current issues related to foundational problems of
quantum mechanics like quantum decoherence and the measurement problem is also
discussed.
|
2205.08538v4
|
2022-05-26
|
New Explicit Good Linear Sum-Rank-Metric Codes
|
Sum-rank-metric codes have wide applications in universal error correction,
multishot network coding, space-time coding and the construction of partial-MDS
codes for repair in distributed storage. Fundamental properties of
sum-rank-metric codes have been studied and some explicit or probabilistic
constructions of good sum-rank-metric codes have been proposed. In this paper
we give three simple constructions of explicit linear sum-rank-metric codes. In
the finite-length regime, numerous larger linear sum-rank-metric codes with the
same minimum sum-rank distances as previously constructed codes can be derived
from our constructions. For example, several better linear sum-rank-metric
codes over ${\bf F}_q$ with small block sizes and matrix size $2 \times 2$ are
constructed for $q=2, 3, 4$ by applying our construction to the presently known
best linear codes. Asymptotically, our constructed
sum-rank-metric codes are close to the Gilbert-Varshamov-like bound on
sum-rank-metric codes for some parameters. Finally we construct a linear MSRD
code over an arbitrary finite field ${\bf F}_q$ with various square matrix
sizes $n_1, n_2, \ldots, n_t$ satisfying $n_i \geq n_{i+1}^2+\cdots+n_t^2$,
$i=1, 2, \ldots, t-1$, for any given minimum sum-rank distance. The field size
$q$ imposes no restriction on the number of blocks $t$ or on the total length
$N=n_1+\cdots+n_t$ of these linear MSRD codes.
|
2205.13087v8
|
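The matrix-size condition $n_i \geq n_{i+1}^2+\cdots+n_t^2$ from the abstract is easy to check programmatically; a minimal sketch (the example size tuples are illustrative, not taken from the paper):

```python
def msrd_sizes_ok(sizes):
    """Check n_i >= n_{i+1}^2 + ... + n_t^2 for i = 1, ..., t-1."""
    for i in range(len(sizes) - 1):
        if sizes[i] < sum(n * n for n in sizes[i + 1:]):
            return False
    return True

print(msrd_sizes_ok([30, 5, 2, 1]))  # True: 30 >= 25+4+1, 5 >= 4+1, 2 >= 1
print(msrd_sizes_ok([4, 2, 2]))      # False: 4 < 4+4
```

Note how quickly the condition forces the leading matrix sizes to grow, which is why the absence of a field-size restriction on $t$ and $N$ is the notable feature here.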
2022-06-17
|
Multi-scale Super-resolution Magnetic Resonance Spectroscopic Imaging with Adjustable Sharpness
|
Magnetic Resonance Spectroscopic Imaging (MRSI) is a valuable tool for
studying metabolic activities in the human body, but the current applications
are limited to low spatial resolutions. The existing deep learning-based MRSI
super-resolution methods require training a separate network for each upscaling
factor, which is time-consuming and memory inefficient. We tackle this
multi-scale super-resolution problem using a Filter Scaling strategy that
modulates the convolution filters based on the upscaling factor, such that a
single network can be used for various upscaling factors. Observing that each
metabolite has distinct spatial characteristics, we also modulate the network
based on the specific metabolite. Furthermore, our network is conditioned on
the weight of adversarial loss so that the perceptual sharpness of the
super-resolved metabolic maps can be adjusted within a single network. We
incorporate these network conditionings using a novel Multi-Conditional Module.
The experiments were carried out on a 1H-MRSI dataset from 15 high-grade glioma
patients. Results indicate that the proposed network achieves the best
performance among several multi-scale super-resolution methods and can provide
super-resolved metabolic maps with adjustable sharpness.
|
2206.08984v1
|
2022-06-20
|
How to Assess Trustworthy AI in Practice
|
This report is a methodological reflection on
Z-Inspection$^{\small{\circledR}}$. Z-Inspection$^{\small{\circledR}}$ is a
holistic process used to evaluate the trustworthiness of AI-based technologies
at different stages of the AI lifecycle. It focuses, in particular, on the
identification and discussion of ethical issues and tensions through the
elaboration of socio-technical scenarios. It uses the European Union's
High-Level Expert Group (EU HLEG) guidelines for trustworthy AI. This report
illustrates for both AI researchers and AI practitioners how the EU HLEG
guidelines for trustworthy AI can be applied in practice. We share the lessons
learned from conducting a series of independent assessments to evaluate the
trustworthiness of AI systems in healthcare. We also share key recommendations
and practical suggestions on how to ensure a rigorous trustworthy AI assessment
throughout the life-cycle of an AI system.
|
2206.09887v2
|
2022-06-23
|
LRPC codes with multiple syndromes: near ideal-size KEMs without ideals
|
We introduce a new rank-based key encapsulation mechanism (KEM) with public
key and ciphertext sizes around 3.5 Kbytes each, for 128 bits of security,
without using ideal structures. Such structures make it possible to compress
objects, but they give reductions to specific problems whose security is
potentially weaker than that of unstructured problems. To the best of our
knowledge, our scheme improves in size upon all existing unstructured
post-quantum lattice- or code-based
algorithms such as FrodoKEM or Classic McEliece. Our technique, whose
efficiency relies on properties of rank metric, is to build upon existing Low
Rank Parity Check (LRPC) code-based KEMs and to send multiple syndromes in one
ciphertext, allowing us to reduce the parameters while still obtaining an acceptable
decoding failure rate. Our system relies on the hardness of the Rank Support
Learning problem, a well-known variant of the Rank Syndrome Decoding problem.
The gain on parameters is enough to significantly close the gap between ideal
and non-ideal constructions. It enables us to choose an error weight close to
the rank Gilbert-Varshamov bound, a zone that is comparatively hard for
algebraic attacks. We also give a version of our KEM that keeps an ideal
structure and roughly halves the bandwidth compared to previous versions of
LRPC KEMs submitted to NIST, with a Decoding Failure Rate (DFR) of
$2^{-128}$.
|
2206.11961v1
|
2022-07-08
|
Rate-Optimal Streaming Codes Over the Three-Node Decode-And-Forward Relay Network
|
In this paper, we study the three-node Decode-and-Forward (D&F) relay network
subject to random and burst packet erasures. The source wishes to transmit an
infinite stream of packets to the destination via the relay. The three-node D&F
relay network is constrained by a decoding delay of T packets, i.e., the packet
transmitted by the source at time i must be decoded by the destination by time
i+T. For the individual channels from source to relay and relay to destination,
we assume a delay-constrained sliding-window (DCSW) based packet-erasure model
that can be viewed as a tractable approximation to the commonly-accepted
Gilbert-Elliott channel model. Under this model, any time window of width w
contains either at most a random erasures or else an erasure burst of length at
most b (b >= a). Thus the source-relay and relay-destination channels are modeled as
(a_1, b_1, w_1, T_1) and (a_2, b_2, w_2, T_2) DCSW channels. We first derive an
upper bound on the capacity of the three-node D&F relay network. We then show
that the upper bound is tight for the parameter regime max{b_1, b_2} |
(T - b_1 - b_2 - max{a_1, a_2} + 1) with a_1 = a_2 or b_1 = b_2, by constructing streaming
codes achieving the bound. The code construction requires field size linear in
T, and has decoding complexity equivalent to that of decoding an MDS code.
|
2207.04025v2
|
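The sliding-window erasure model described in the abstract admits a direct admissibility checker; a minimal sketch, assuming a binary erasure pattern (1 = erased) and the reading that each window must contain either at most `a` scattered erasures or one contiguous burst of length at most `b`:

```python
def dcsw_admissible(erasures, a, b, w):
    """True if every length-w window contains at most `a` erasures,
    or its erasures form a single contiguous burst of length <= b."""
    n = len(erasures)
    for start in range(n - w + 1):
        window = erasures[start:start + w]
        pos = [i for i, e in enumerate(window) if e]
        if len(pos) <= a:
            continue                        # few enough random erasures
        span = pos[-1] - pos[0] + 1         # extent of erasures in window
        if not (span == len(pos) and span <= b):
            return False                    # neither case holds
    return True

# A burst of 3 is fine for (a=1, b=3, w=5); two scattered erasures are not.
print(dcsw_admissible([1, 1, 1, 0, 0, 0, 0, 0], 1, 3, 5))  # True
print(dcsw_admissible([1, 0, 1, 0, 0], 1, 3, 5))           # False
```

This is only a model sketch for intuition; the capacity results in the paper concern codes over such channels, not pattern checking.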
2022-07-12
|
Diversity of ghost notes in tubas, euphoniums and saxhorns
|
The ghost note is a natural note which can be played exclusively on bass
brass instruments with a predominantly-expanding bore profile such as tubas,
euphoniums or saxhorns. It stands between the pedal note-the lowest natural
note playable, or first regime-and the instrument's second regime. However, if
the interval between the pedal note and the second regime remains close to an
octave regardless of the instrument, the interval between the pedal note and
the ghost note vary from a minor third to a perfect fourth. References about
this note are very scarce, and it is not commonly known among tuba players.This
study shows that an elementary brass model describing the player coupled to the
instrument is capable of bringing both the ghost and the pedal note to light.
Here, we adopt a dynamical systems point of view and perform a bifurcation
analysis using a software of numerical continuation. The numerical results
provided in terms of frequency intervals between pedal note and ghost note are
compared with frequency intervals experimentally inferred from recordings of
seven different types of tuba, each of them being played by two professional
tuba players.
|
2207.05395v3
|
2022-07-20
|
Flow-based Visual Quality Enhancer for Super-resolution Magnetic Resonance Spectroscopic Imaging
|
Magnetic Resonance Spectroscopic Imaging (MRSI) is an essential tool for
quantifying metabolites in the body, but the low spatial resolution limits its
clinical applications. Deep learning-based super-resolution methods provided
promising results for improving the spatial resolution of MRSI, but the
super-resolved images are often blurry compared to the experimentally-acquired
high-resolution images. Attempts have been made with the generative adversarial
networks to improve the image visual quality. In this work, we consider another
type of generative model, the flow-based model, of which the training is more
stable and interpretable compared to the adversarial networks. Specifically, we
propose a flow-based enhancer network to improve the visual quality of
super-resolution MRSI. Different from previous flow-based models, our enhancer
network incorporates anatomical information from additional image modalities
(MRI) and uses a learnable base distribution. In addition, we impose a guide
loss and a data-consistency loss to encourage the network to generate images
with high visual quality while maintaining high fidelity. Experiments on a
1H-MRSI dataset acquired from 25 high-grade glioma patients indicate that our
enhancer network outperforms the adversarial networks and the baseline
flow-based methods. Our method also allows visual quality adjustment and
uncertainty estimation.
|
2207.10181v1
|
2022-07-24
|
Contention Resolution for Coded Radio Networks
|
Randomized backoff protocols, such as exponential backoff, are a powerful
tool for managing access to a shared resource, often a wireless communication
channel (e.g., [1]). For a wireless device to transmit successfully, it uses a
backoff protocol to ensure exclusive access to the channel. Modern radios,
however, do not need exclusive access to the channel to communicate; in
particular, they have the ability to receive useful information even when more
than one device transmits at the same time. These capabilities have now been
exploited for many years by systems that rely on interference cancellation,
physical layer network coding and analog network coding to improve efficiency.
For example, Zigzag decoding [56] demonstrated how a base station can decode
messages sent by multiple devices simultaneously.
In this paper, we address the following question: can we design a backoff
protocol that is better than exponential backoff when exclusive channel access
is not required? We define the Coded Radio Network Model, which generalizes
traditional radio network models (e.g., [30]). We then introduce the Decodable
Backoff Algorithm, a randomized backoff protocol that achieves an optimal
throughput of $1-o(1)$. (Throughput $1$ is optimal, as simultaneous reception
does not increase the channel capacity.) The algorithm breaks the constant
throughput lower bound for traditional radio networks [47-49], showing the
power of these new hardware capabilities.
|
2207.11824v1
|
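The exponential-backoff baseline that the paper improves on can be illustrated with a toy slotted simulation; this sketch uses simplified success/collision rules and an arbitrary window cap, and is not the paper's Decodable Backoff Algorithm:

```python
import random

def backoff_simulation(n_devices, seed=1):
    """Toy slotted binary exponential backoff.

    Each pending device transmits in a slot with probability 1/w, where
    w is its contention window; a slot with exactly one transmitter is a
    success, and on a collision every transmitter doubles its window.
    """
    rng = random.Random(seed)
    windows = [1.0] * n_devices
    pending = set(range(n_devices))
    slots = 0
    while pending:
        slots += 1
        senders = [d for d in pending if rng.random() < 1.0 / windows[d]]
        if len(senders) == 1:            # exclusive access: success
            pending.discard(senders[0])
        elif len(senders) > 1:           # collision: everyone backs off
            for d in senders:
                windows[d] = min(windows[d] * 2, 1024.0)
    return slots

# Exclusive access means n successes need at least n slots, which is
# exactly the constraint that coded (simultaneous) reception removes.
print(backoff_simulation(8))
```

Under exclusive access, throughput is bounded by one success per slot, and classical lower bounds make even constant throughput unattainable; this is the gap the Decodable Backoff Algorithm closes.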
2022-07-25
|
Control of dephasing in spin qubits during coherent transport in silicon
|
One of the key pathways towards scalability of spin-based quantum computing
systems lies in achieving long-range interactions between electrons and
increasing their inter-connectivity. Coherent spin transport is one of the most
promising strategies to achieve this architectural advantage. Experimental
results have previously demonstrated high fidelity transportation of spin
qubits between two quantum dots in silicon and identified possible sources of
error. In this theoretical study, we investigate these errors and analyze the
impact of tunnel coupling, magnetic field and spin-orbit effects on the spin
transfer process. The interplay between these effects gives rise to double dot
configurations that include regimes of enhanced decoherence that should be
avoided for quantum information processing. These conclusions permit us to
extrapolate previous experimental conclusions and rationalize the future design
of large scale quantum processors.
|
2207.11865v2
|
2022-07-29
|
Orthogonal Spin Current Injected Magnetic Tunnel Junction for Convolutional Neural Networks
|
We propose that a spin Hall effect driven magnetic tunnel junction device can
be engineered to provide a continuous change in the resistance across it when
injected with orthogonal spin currents. Using this concept, we develop a hybrid
device-circuit simulation platform to design a network that realizes multiple
functionalities of a convolutional neural network. At the atomistic level, we
use the Keldysh non-equilibrium Green's function technique that is coupled
self-consistently with the stochastic Landau-Lifshitz-Gilbert-Slonczewski
equations, which in turn is coupled with the HSPICE circuit simulator. We
demonstrate the simultaneous functionality of the proposed network to evaluate
the rectified linear unit and max-pooling functionalities. We present a
detailed power and error analysis of the designed network against the thermal
stability factor of the free ferromagnets. Our results show that there exists a
non-trivial power-error trade-off in the proposed network, which enables an
energy-efficient network design based on unstable free ferromagnets with
reliable outputs. The static power of the proposed ReLU circuit is 0.56
$\mu$W, whereas the energy cost of a nine-input rectified linear
unit-max-pooling network with an unstable free ferromagnet ($\Delta=15$) is
3.4 pJ in the worst-case scenario. We also rationalize the magnetization stability of the
proposed device by analyzing the vanishing torque gradient points.
|
2207.14603v3
|
2022-08-09
|
Good locally repairable codes via propagation rules
|
In classical coding theory, it is common to construct new codes via
propagation rules. There are various propagation rules to construct classical
block codes. However, propagation rules have not been extensively explored for
constructions of locally repairable codes. In this paper, we introduce a few
propagation rules to construct good locally repairable codes. To our surprise,
these simple propagation rules produce a few interesting results. Firstly, by
concatenating a locally repairable code as an inner code with a classical block
code as an outer code, we obtain quite a few dimension-optimal binary locally
repairable codes. Secondly, from this concatenation, we explicitly build a
family of locally repairable codes that exceeds the Zyablov-type bound.
Thirdly, by a lengthening propagation rule that adds some rows and columns from
a parity-check matrix of a given linear code, we are able to produce a family
of dimension-optimal binary locally repairable codes from the extended Hamming
codes, and to convert a classical maximum distance separable (MDS) code into a
Singleton-optimal locally repairable code. Furthermore, via the lengthening
propagation rule, we greatly simplify the construction of a family of locally
repairable codes in \cite[Theorem 5]{MX20} that breaks the asymptotic
Gilbert-Varshamov bound. In addition, we make use of three other propagation
rules to produce more dimension-optimal binary locally repairable codes.
Finally, one of the phenomena we observe in this paper is that some trivial
propagation rules for classical block codes no longer hold for locally
repairable codes.
|
2208.04484v1
|
2022-08-10
|
Forward volume magnetoacoustic spin wave excitation with micron-scale spatial resolution
|
The interaction between surface acoustic waves (SAWs) and spin waves (SWs) in
a piezoelectric-magnetic thin film heterostructure yields potential for the
realization of novel microwave devices and applications in magnonics. In the
present work, we characterize magnetoacoustic waves in three adjacent magnetic
micro-stripes made from CoFe+Ga, CoFe, and CoFe+Pt with a single pair of
tapered interdigital transducers (TIDTs). The magnetic micro-stripes were
deposited by focused electron beam-induced deposition (FEBID) and focused ion
beam-induced deposition (FIBID) direct-writing techniques. The transmission
characteristics of the TIDTs are leveraged to selectively address the
individual micro-stripes. Here, the external magnetic field is continuously
rotated out of the plane of the magnetic thin film and the forward volume SW
geometry is probed with the external magnetic field along the film normal. Our
experimental findings are well explained by an extended phenomenological model
based on a modified Landau-Lifshitz-Gilbert approach that considers SWs with
nonzero wave vectors. Magnetoelastic excitation of forward volume SWs is
possible because of the vertical shear strain $\varepsilon_{xz}$ of the
Rayleigh-type SAW.
|
2208.05205v1
|
2022-08-29
|
Programmable photonic integrated meshes for modular generation of optical entanglement links
|
Large-scale generation of quantum entanglement between individually
controllable qubits is at the core of quantum computing, communications, and
sensing. Modular architectures of remotely-connected quantum technologies have
been proposed for a variety of physical qubits, with demonstrations reported in
atomic and all-photonic systems. However, an open challenge in these
architectures lies in constructing high-speed and high-fidelity reconfigurable
photonic networks for optically-heralded entanglement among target qubits. Here
we introduce a programmable photonic integrated circuit (PIC), realized in a
piezo-actuated silicon nitride (SiN)-in-oxide CMOS-compatible process, that
implements an N x N Mach-Zehnder mesh (MZM) capable of high-speed execution of
linear optical transformations. The visible-spectrum photonic integrated mesh
is programmed to generate optical connectivity on up to N = 8 inputs for a
range of optically-heralded entanglement protocols. In particular, we
experimentally demonstrated optical connections between 16 independent pairwise
mode couplings through the MZM, with optical transformation fidelities
averaging 0.991 +/- 0.0063. The PIC's reconfigurable optical connectivity
suffices for the production of 8-qubit resource states as building blocks of
larger topological cluster states for quantum computing. Our programmable PIC
platform enables the fast and scalable optical switching technology necessary
for network-based quantum information processors.
|
2208.13911v1
|
2022-09-15
|
Almost Ramanujan Expanders from Arbitrary Expanders via Operator Amplification
|
We give an efficient algorithm that transforms any bounded degree expander
graph into another that achieves almost optimal (namely, near-quadratic, $d
\leq 1/\lambda^{2+o(1)}$) trade-off between (any desired) spectral expansion
$\lambda$ and degree $d$. Furthermore, the algorithm is local: every vertex can
compute its new neighbors as a subset of its original neighborhood of radius
$O(\log(1/\lambda))$. The optimal quadratic trade-off is known as the Ramanujan
bound, so our construction gives almost Ramanujan expanders from arbitrary
expanders.
The locality of the transformation preserves structural properties of the
original graph, and thus has many consequences. Applied to Cayley graphs, our
transformation shows that any expanding finite group has almost Ramanujan
expanding generators. Similarly, one can obtain almost optimal explicit
constructions of quantum expanders, dimension expanders, monotone expanders,
etc., from existing (suboptimal) constructions of such objects. Another
consequence is a "derandomized" random walk on the original (suboptimal)
expander with almost optimal convergence rate. Our transformation also applies
when the degree is not bounded or the expansion is not constant.
We obtain our results by a generalization of Ta-Shma's technique in his
breakthrough paper [STOC 2017], used to obtain explicit almost optimal binary
codes. Specifically, our spectral amplification extends Ta-Shma's analysis of
bias amplification from scalars to matrices of arbitrary dimension in a very
natural way. Curiously, while Ta-Shma's explicit bias amplification
derandomizes a well-known probabilistic argument (underlying the
Gilbert--Varshamov bound), there seems to be no known probabilistic (or other
existential) way of achieving our explicit ("high-dimensional") spectral
amplification.
|
2209.07024v1
|
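To see why the quadratic trade-off is called the Ramanujan bound: a $d$-regular Ramanujan graph has normalized second eigenvalue $\lambda = 2\sqrt{d-1}/d \approx 2/\sqrt{d}$, so $d \approx 4/\lambda^2$, matching $d \leq 1/\lambda^{2+o(1)}$ up to a constant. A minimal numerical illustration:

```python
import math

def ramanujan_lambda(d):
    """Normalized spectral expansion 2*sqrt(d-1)/d of a d-regular
    Ramanujan graph (second eigenvalue divided by the degree d)."""
    return 2.0 * math.sqrt(d - 1) / d

for d in (4, 64, 1024):
    lam = ramanujan_lambda(d)
    # 1/lam^2 = d^2 / (4(d-1)) ~ d/4: degree is quadratic in 1/lambda
    print(d, round(lam, 4), round(1 / lam**2, 2))
```

The table this prints shows $1/\lambda^2$ tracking $d/4$, i.e., the degree needed for a given expansion grows quadratically in $1/\lambda$, which is the trade-off the transformation in the paper almost achieves starting from arbitrary expanders.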
2022-09-15
|
An analytical study of the MHD clamshell instability on a sphere
|
This paper studies the instability of two-dimensional magnetohydrodynamic
(MHD) systems on a sphere using analytical methods. The underlying flow
consists of a zonal differential rotation and a toroidal magnetic field is
present. Semicircle rules that prescribe the possible domain of the wave
velocity in the complex plane for general flow and field profiles are derived.
The paper then sets out an analytical study of the `clamshell instability',
which features field lines on the two hemispheres tilting in opposite
directions (Cally 2001, Sol. Phys. vol. 199, pp. 231--249). An asymptotic
solution for the instability problem is derived for the limit of weak shear of
the zonal flow, via the method of matched asymptotic expansions. It is shown
that when the zonal flow is solid body rotation, there exists a neutral mode
that tilts the magnetic field lines, referred to as the `tilting mode'. A weak
shear of the zonal flow excites the critical layer of the tilting mode, which
reverses the tilting direction to form the clamshell pattern and induces the
instability. The asymptotic solution provides insights into properties of the
instability for a range of flow and field profiles. A remarkable feature is
that the magnetic field affects the instability only through its local
behaviour in the critical layer.
|
2209.07349v1
|
2022-09-15
|
$\tilde{O}(n+\mathrm{poly}(k))$-time Algorithm for Bounded Tree Edit Distance
|
Computing the edit distance of two strings is one of the most basic problems
in computer science and combinatorial optimization. Tree edit distance is a
natural generalization of edit distance in which the task is to compute a
measure of dissimilarity between two (unweighted) rooted trees with node
labels. Perhaps the most notable recent application of tree edit distance is in
NoSQL big databases, such as MongoDB, where each row of the database is a JSON
document represented as a labeled rooted tree, and finding dissimilarity
between two rows is a basic operation. Until recently, the fastest algorithm
for tree edit distance ran in cubic time (Demaine, Mozes, Rossman, Weimann;
TALG'10); however, Mao (FOCS'21) broke the cubic barrier for the tree edit
distance problem using fast matrix multiplication.
Given a parameter $k$ as an upper bound on the distance, an $O(n+k^2)$-time
algorithm for edit distance has been known since the 1980s due to the works of
Myers (Algorithmica'86) and Landau and Vishkin (JCSS'88). The existence of an
$\tilde{O}(n+\mathrm{poly}(k))$-time algorithm for tree edit distance has been
posed as an open question, e.g., by Akmal and Jin (ICALP'21), who gave a
state-of-the-art $\tilde{O}(nk^2)$-time algorithm. In this paper, we answer
this question positively.
|
2209.07524v1
|
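The classical $O(n+k^2)$ idea for string edit distance that the abstract references restricts the dynamic program to a diagonal band of width $2k+1$; a minimal banded-DP sketch in that spirit (this illustrates the string case only, not the paper's tree algorithm, and the Myers/Landau-Vishkin algorithms are more refined than this):

```python
def banded_edit_distance(a, b, k):
    """Edit distance of strings a and b if it is at most k, else None.

    Only DP cells within the diagonal band |i - j| <= k are computed,
    since any cell outside the band already costs more than k.
    """
    n, m = len(a), len(b)
    if abs(n - m) > k:
        return None            # distance is at least |n - m|
    INF = k + 1                # any value > k behaves as "too far"
    prev = [j if j <= k else INF for j in range(m + 1)]
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        if i <= k:
            cur[0] = i         # delete the first i characters of a
        for j in range(max(1, i - k), min(m, i + k) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j - 1] + cost,   # substitute / match
                         prev[j] + 1,          # delete from a
                         cur[j - 1] + 1)       # insert into a
        prev = cur
    return prev[m] if prev[m] <= k else None

print(banded_edit_distance("kitten", "sitting", 3))  # 3
print(banded_edit_distance("kitten", "sitting", 2))  # None
```

The open question the paper answers is whether this kind of $k$-bounded speedup, long known for strings, extends to tree edit distance.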
2022-09-23
|
Multiplexed control of spin quantum memories in a photonic circuit
|
A central goal in many quantum information processing applications is a
network of quantum memories that can be entangled with each other while being
individually controlled and measured with high fidelity. This goal has
motivated the development of programmable photonic integrated circuits (PICs)
with integrated spin quantum memories using diamond color center spin-photon
interfaces. However, this approach introduces a challenge in the microwave
control of individual spins within closely packed registers. Here, we present a
quantum-memory-integrated photonics platform capable of (i) the integration of
multiple diamond color center spins into a cryogenically compatible, high-speed
programmable PIC platform; (ii) selective manipulation of individual spin
qubits addressed via tunable magnetic field gradients; and (iii) simultaneous
control of multiple qubits using numerically optimized microwave pulse shaping.
The combination of localized optical control, enabled by the PIC platform,
together with selective spin manipulation opens the path to scalable quantum
networks on intra-chip and inter-chip platforms.
|
2209.11853v2
|
2022-09-26
|
A detailed star formation history for the extremely diffuse Andromeda XIX dwarf galaxy
|
We present deep imaging of the ultra-diffuse Andromeda XIX dwarf galaxy from
the Advanced Camera for Surveys on the Hubble Space Telescope, which resolves its
stellar populations to below the oldest main sequence turn-off. We derive a
full star formation history for the galaxy using MATCH, and find no evidence of
star formation in the past 8 Gyr. We calculate a quenching time of
$\tau_{90}=9.7\pm0.2$~Gyr, suggesting Andromeda~XIX ceased forming stars very
early on. This early quenching, combined with its extremely large half-light
radius, low-density dark matter halo, and lower-than-expected metallicity,
makes it a unique galaxy within the Local Group and raises questions about how it
formed. The early quenching time allows us to rule out feedback from bursty
star formation as a means to explain its diffuse stellar population and low
density dark matter halo. We find that the extended stellar population, low
density halo and star formation could be explained by either tidal interactions
(such as tidal shocking) or by late dry mergers, with the latter also
explaining its low metallicity. Proper motions and detailed abundances would
allow us to distinguish between these two scenarios.
|
2209.12912v1
|
2022-10-06
|
Scalable photonic integrated circuits for programmable control of atomic systems
|
Advances in laser technology have driven discoveries in atomic, molecular,
and optical (AMO) physics and emerging applications, from quantum computers
with cold atoms or ions, to quantum networks with solid-state color centers.
This progress is motivating the development of a new generation of
"programmable optical control" systems, characterized by criteria (C1) visible
(VIS) and near-infrared (IR) wavelength operation, (C2) large channel counts
extensible beyond 1000s of individually addressable atoms, (C3) high intensity
modulation extinction and (C4) repeatability compatible with low gate errors,
and (C5) fast switching times. Here, we address these challenges by introducing
an atom control architecture based on VIS-IR photonic integrated circuit (PIC)
technology. Based on a complementary metal-oxide-semiconductor (CMOS)
fabrication process, this Atom-control PIC (APIC) technology meets the system
requirements (C1)-(C5). As a proof of concept, we demonstrate a 16-channel
silicon nitride based APIC with (5.8$\pm$0.4) ns response times and -30 dB
extinction ratio at a wavelength of 780 nm. This work demonstrates the
suitability of PIC technology for quantum control, opening a path towards
scalable quantum information processing based on optically-programmable atomic
systems.
|
2210.03100v2
|
2022-10-10
|
Andreev processes in mesoscopic multi-terminal graphene Josephson junctions
|
There is growing interest in using multi-terminal Josephson junctions (MTJJs)
as a platform to artificially emulate topological phases and to investigate
complex superconducting mechanisms such as quartet and multiplet Cooper
pairings. Current experimental signatures in MTJJs have led to conflicting
interpretations of the salient features. In this work, we report a
collaborative experimental and theoretical investigation of graphene-based
four-terminal Josephson junctions. We observe resonant features in the
differential resistance maps that resemble those ascribed to multiplet Cooper
pairings. To understand these features, we model our junctions using a circuit
network of coupled two-terminal resistively and capacitively shunted junctions
(RCSJs). Under appropriate bias current, the model predicts that a current
flowing between two diagonal terminals in a four-terminal geometry may be
represented as a sinusoidal function of a weighted sum of the superconducting
phases. We show that starting from a semi-classical model with diffusive
current-phase relations, the MTJJ effectively emulates a general form of the
expected current-phase relation for multiplet Cooper pairings. Our study
therefore suggests that differential resistance measurements alone are
insufficient to conclusively distinguish resonant Andreev reflection processes
from semi-classical circuit-network effects.
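To make the circuit-network picture concrete: a single overdamped two-terminal RCSJ in normalized units already captures the switch between the phase-locked (zero-voltage) and running (finite-voltage) states that the coupled network builds on. The sketch below is a generic textbook limit, not the authors' four-terminal model; the bias current and integration horizon are illustrative choices.

```python
import math

def rcsj_dc_voltage(i_bias, t_total=2000.0, dt=1e-3):
    """Time-averaged voltage of a single overdamped RCSJ junction.

    Normalized units: dphi/dt = i - sin(phi). The DC voltage is the
    time-averaged phase velocity, analytically sqrt(i^2 - 1) for i > 1
    and zero below the critical current (i < 1).
    """
    phi = 0.0
    for _ in range(int(t_total / dt)):
        phi += dt * (i_bias - math.sin(phi))
    return phi / t_total

# Above the critical current the junction runs: V ~ sqrt(i^2 - 1)
v_running = rcsj_dc_voltage(2.0)
# Below it, the phase locks and the DC voltage vanishes
v_locked = rcsj_dc_voltage(0.5, t_total=200.0)
```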
|
2210.04408v3
|
2022-10-10
|
Infrared Remote Sensing Using Low Noise Avalanche Photodiode Detector
|
For a remote sensing optical payload to achieve a Ground Sampling Distance of
~ 10-30 m, a critical problem is platform-induced motion blur. While forward
motion compensation can reduce this transit speed, it comes at the expense of a
more challenging satellite attitude control system and induces a variable
observation/illumination angle. This relative motion can be frozen out by
simply reading the sensor system at a frame rate that matches the ground
resolution element's pixel crossing time. To achieve high resolution using this
Time-Delay Integration (TDI)-like approach requires high speed and hence near
"zero" readout noise detector arrays to avoid swamping the observed signal.
This requires associated control electronics for fast frame readout and a
direct interface with onboard smart Artificial Intelligence (AI) processing.
With this technique, the platform freezes out its movement with respect to the
ground, reducing the demands placed on the attitude control systems, which can
otherwise be difficult to implement on a small satellite platform. Here we
report the Australian National University's OzFuel mission which applies this
technical solution to deliver high ground resolution via high frame rate
imaging. OzFuel is built around the Leonardo SAPHIRA Mercury Cadmium Telluride
linear mode electron avalanche photodiode (LMeAPD) detector and the in-house
developed Rosella electronics control system. The mission will deliver an
integrated sensor system in a suite of Short-Wave Infrared (SWIR) passbands
dedicated to monitoring the flammability of Eucalypt trees. The OzFuel mission
concept focuses on the application of SWIR remote sensing data to deliver a
strategic evaluation of fuel loads and moisture content in the bushfire-prone
Australian environment.
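The frame-rate requirement behind the TDI-like approach reduces to a simple relation: one frame per ground-resolution-element crossing time. The numbers below are illustrative low-Earth-orbit values, not OzFuel's actual specification.

```python
def tdi_frame_rate(ground_speed_m_s, gsd_m):
    """Frame rate (Hz) that matches one frame to each ground-resolution-
    element crossing time, freezing platform motion without forward
    motion compensation."""
    return ground_speed_m_s / gsd_m

# Illustrative values: ~7 km/s ground-track speed at a 14 m GSD
# requires a 500 Hz frame rate.
rate = tdi_frame_rate(7000.0, 14.0)
```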
|
2210.04770v1
|
2022-10-17
|
On construction of quantum codes with dual-containing quasi-cyclic codes
|
One of the main objectives of quantum error-correction theory is to construct
quantum codes with optimal parameters and properties. In this paper, we propose
a class of 2-generator quasi-cyclic codes and study their applications in the
construction of quantum codes over small fields. Firstly, some sufficient
conditions for these 2-generator quasi-cyclic codes to be dual-containing
concerning Hermitian inner product are determined. Then, we utilize these
Hermitian dual-containing quasi-cyclic codes to produce quantum codes via the
famous Hermitian construction. Moreover, we present a lower bound on the
minimum distance of these quasi-cyclic codes, which is helpful to construct
quantum codes with larger lengths and dimensions. As computational results,
many new quantum codes that exceed the quantum Gilbert-Varshamov bound are
constructed over $F_q$, where $q$ is $2,3,4,5$. In particular, 16 binary
quantum codes raise the lower bound on the minimum distance in Grassl's table
\cite{Grassl:codetables}. In nonbinary cases, many quantum codes are new or
have better parameters than those in the literature.
|
2210.08716v1
|
2022-10-18
|
Intense γ-photon and high-energy electron production by neutron irradiation: effects of nuclear excitations on reactor materials
|
The effects of neutron irradiation on materials are often interpreted in
terms of atomic recoils, initiated by neutron impacts and producing crystal
lattice defects. In addition, there is a remarkable two-step process, strongly
pronounced in the medium-weight and heavy elements. This process involves the
generation of energetic γ photons in nonelastic collisions of neutrons
with atomic nuclei, achieved via capture and inelastic reactions. Subsequently,
high-energy electrons are excited through the scattering of γ photons by
the atomic electrons. We derive and validate equations enabling a fast and
robust evaluation of photon and electron fluxes produced by the neutrons in the
bulk of materials. The two-step n-γ-e scattering creates a
nonequilibrium dynamically fluctuating steady-state population of high-energy
electrons, with the spectra of photon and electron energies extending well into
the mega-electron-volt range. This stimulates vacancy diffusion through
electron-triggered atomic recoils, primarily involving vacancy-impurity
dissociation, even if thermal activation is ineffective. Tungsten converts the
energy of fusion or fission neutrons into a flux of γ radiation at a
conversion efficiency approaching 99%, with implications for structural
materials, superconductors, and insulators, as well as for phenomena like
corrosion and helium and hydrogen isotope retention.
|
2210.09667v2
|
2022-11-06
|
A framework for leveraging machine learning tools to estimate personalized survival curves
|
The conditional survival function of a time-to-event outcome subject to
censoring and truncation is a common target of estimation in survival analysis.
This parameter may be of scientific interest and also often appears as a
nuisance in nonparametric and semiparametric problems. In addition to classical
parametric and semiparametric methods (e.g., based on the Cox proportional
hazards model), flexible machine learning approaches have been developed to
estimate the conditional survival function. However, many of these methods are
either implicitly or explicitly targeted toward risk stratification rather than
overall survival function estimation. Others apply only to discrete-time
settings or require inverse probability of censoring weights, which can be as
difficult to estimate as the outcome survival function itself. Here, we employ
a decomposition of the conditional survival function in terms of observable
regression models in which censoring and truncation play no role. This allows
application of an array of flexible regression and classification methods
rather than only approaches that explicitly handle the complexities inherent to
survival data. We outline estimation procedures based on this decomposition,
empirically assess their performance, and demonstrate their use on data from an
HIV vaccine trial.
|
2211.03031v4
|
2022-11-14
|
High-resolution single-shot spiral diffusion-weighted imaging at 7T using expanded encoding with compressed sensing
|
Purpose: The expanded encoding model incorporates spatially- and time-varying
field perturbations for correction during reconstruction. So far, these
reconstructions have used the conjugate gradient method with early stopping
used as implicit regularization. However, this approach is likely suboptimal
for low-SNR cases like diffusion or high-resolution MRI. Here, we investigate
the extent that l1-wavelet regularization, or equivalently compressed sensing
(CS), combined with expanded encoding improves trade-offs between spatial
resolution, readout time and SNR for single-shot spiral diffusion-weighted
imaging at 7T. The reconstructions were performed using our open-source
GPU-enabled reconstruction toolbox, MatMRI, that allows inclusion of the
different components of the expanded encoding model, with or without CS.
Methods: In vivo accelerated single-shot spirals were acquired with five
acceleration factors (2-6) and three in-plane spatial resolutions (1.5, 1.3,
and 1.1 mm). From the in vivo reconstructions, we estimated diffusion tensors
and computed fractional anisotropy maps. Then, simulations were used to
quantitatively investigate and validate the impact of CS-based regularization
on image quality when compared to a known ground truth. Results: In vivo
reconstructions revealed improved image quality with retainment of small
features when CS was used. Simulations showed that the joint use of the
expanded encoding model and CS improves accuracy of image reconstructions
(reduced mean-squared error) over the range of acceleration factors
investigated. Conclusion: The expanded encoding model and CS regularization are
complementary tools for single-shot spiral diffusion MRI, which enables both
higher spatial resolutions and higher acceleration factors.
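The l1-wavelet (CS) regularization referred to above is typically enforced by a soft-thresholding proximal step on the wavelet coefficients inside an iterative reconstruction. The following is a generic sketch of that operator, not MatMRI's actual implementation; the threshold value is a free regularization parameter.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrink coefficient magnitudes
    toward zero by lam, zeroing anything smaller. Works for complex MRI
    coefficients by shrinking the magnitude."""
    mag = np.abs(x)
    scale = np.maximum(mag - lam, 0.0) / np.where(mag > 0, mag, 1.0)
    return x * scale
```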
|
2211.07532v1
|
2022-11-17
|
On universal butterfly and antisymmetric magnetoresistances
|
Butterfly magnetoresistance (BMR) and antisymmetric magnetoresistance (ASMR)
refer, respectively, to a butterfly-shaped crossing curve and a curve with one
peak and one valley obtained when a magnetic field is swept up and down along a
fixed direction. Other than the
parallelogram-shaped magnetoresistance-curve (MR-curve) often observed in
magnetic memory devices, BMR and ASMR are two ubiquitous types of MR-curves
observed in diversified magnetic systems, including van der Waals materials,
strongly correlated systems, and traditional magnets. Here, we reveal the
general principles and the picture behind the BMR and the ASMR that do not
depend on the detailed mechanisms of magnetoresistance: 1) The systems exhibit
hysteresis loops, common for most magnetic materials with coercivities. 2) The
magnetoresistance of the magnetic structures in a large positive magnetic field
and in a large negative magnetic field is approximately the same. With the
generalized Ohm's law in magnetic materials, these principles explain why most
BMR appears in the longitudinal resistance measurements and is very rare in the
Hall resistance measurements. Simple toy models, in which the
Landau-Lifshitz-Gilbert equation governs magnetization, are used to demonstrate
the principles and explain the appearance and disappearance of BMR in various
experiments. Our finding provides a simple picture to understand
magnetoresistance-related experiments.
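The toy models mentioned above integrate the Landau-Lifshitz-Gilbert equation for the magnetization. A minimal macrospin sketch (explicit Landau-Lifshitz form, dimensionless units; the damping constant, field, and step size are illustrative, not taken from the paper) looks like this:

```python
import numpy as np

def llg_step(m, h, alpha=0.1, gamma=1.0, dt=0.01):
    """One explicit step of the Landau-Lifshitz-Gilbert equation in
    Landau-Lifshitz form, renormalized to keep |m| = 1."""
    prec = np.cross(m, h)     # precession torque m x H
    damp = np.cross(m, prec)  # damping torque m x (m x H)
    m = m - dt * gamma / (1.0 + alpha**2) * (prec + alpha * damp)
    return m / np.linalg.norm(m)

# A tilted moment precesses and relaxes toward a field along +z
m = np.array([1.0, 0.0, 0.1]) / np.linalg.norm([1.0, 0.0, 0.1])
h = np.array([0.0, 0.0, 1.0])
for _ in range(10000):
    m = llg_step(m, h)
```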
|
2211.09369v1
|
2022-12-22
|
Photon production rate from Transverse-Longitudinal ($T-L$) mesonic correlator on the lattice
|
Thermal photons from the QGP provide important information about the
interaction among plasma constituents. The photon production rate from a
thermally equilibrated system is proportional to the transverse spectral
function $\rho_T(\omega=|\vec k|, \vec k)$. One can also calculate the photon
production rate from the difference between $\rho_T(\omega,\vec k)$
(transverse) and $\rho_L(\omega,\vec k)$ (longitudinal) projections, as
$\rho_L$ vanishes on the photon point. Because the UV part of $\rho_T-\rho_L$
is suppressed, the corresponding Euclidean correlator receives most of its
contribution from the IR part. We calculate the $T\!-\!L$ correlator on
$N_f=2+1$ flavour HISQ configurations with $m_l=m_s/5$ at temperature of about
$1.15\,T_{pc}$ (220 MeV). We have used two ansätze for the spectral
function: 1) A polynomial connected to the UV region consistent with OPE
expansion and 2) a hydro-inspired spectral function. We have also applied the
Backus-Gilbert method to estimate the spectral function. All these different
approaches are combined to estimate the photon production rate.
|
2212.11509v2
|
2023-01-12
|
Incremental Dead State Detection in Logarithmic Time
|
Identifying live and dead states in an abstract transition system is a
recurring problem in formal verification; for example, it arises in our recent
work on efficiently deciding regex constraints in SMT. However,
state-of-the-art graph algorithms for maintaining reachability information
incrementally (that is, as states are visited and before the entire state space
is explored) assume that new edges can be added from any state at any time,
whereas in many applications, outgoing edges are added from each state as it is
explored. To formalize the latter situation, we propose guided incremental
digraphs (GIDs), incremental graphs which support labeling closed states
(states which will not receive further outgoing edges). Our main result is that
dead state detection in GIDs is solvable in $O(\log m)$ amortized time per edge
for $m$ edges, improving upon $O(\sqrt{m})$ per edge due to Bender, Fineman,
Gilbert, and Tarjan (BFGT) for general incremental directed graphs.
We introduce two algorithms for GIDs: one establishing the logarithmic time
bound, and a second that explores a lazy, heuristics-based approach. To
enable an apples-to-apples experimental comparison, we implemented both
algorithms, two simpler baselines, and the state-of-the-art BFGT baseline using
a common directed graph interface in Rust. Our evaluation shows $110$-$530$x
speedups over BFGT for the largest input graphs over a range of graph classes,
random graphs, and graphs arising from regex benchmarks.
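To fix intuition for the problem being solved: taking "dead" to mean a closed state that cannot reach any open (not-yet-closed) state, a batch answer is a reverse BFS from the open states. This naive fixpoint is only a specification sketch under that assumed definition, not the paper's O(log m) incremental algorithm.

```python
from collections import defaultdict, deque

def dead_states(edges, states, closed):
    """Batch dead-state check for a GID snapshot.

    A state is live iff it can reach an open (not-yet-closed) state;
    everything else is dead. Reverse BFS seeded from the open states.
    """
    rev = defaultdict(list)
    for u, v in edges:
        rev[v].append(u)
    live = {s for s in states if s not in closed}
    queue = deque(live)
    while queue:
        v = queue.popleft()
        for u in rev[v]:
            if u not in live:
                live.add(u)
                queue.append(u)
    return set(states) - live
```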
|
2301.05308v2
|
2023-01-23
|
Correction of high-order phase variation effects in dynamic field monitoring
|
Purpose: Field monitoring measures field perturbations, which can be
accounted for during image reconstructions. In certain field monitoring
environments, significant phase deviations can arise far from isocenter due to
the finite extent of the gradient and/or main magnet. This can degrade the
accuracy of field dynamics when field probes are placed near or outside the
diameter spherical volume of the gradient coils and/or main magnet, leading to
corrupted image quality. The objective of this work was to develop a correction
algorithm that reduces errors from highly nonlinear phase variations at distant
field probes in field dynamic fits. Methods: The algorithm is split into three
components. Component one fits phase coefficients one spatial order at a time,
while the second implements a weighted least squares solution based on probe
distance. After initial fitting, component three calculates phase residuals and
removes the phase for distant probes before re-fitting. Two healthy volunteers
were scanned on a head-only 7T MRI using diffusion-weighted single-shot spiral
and EPI sequences and field monitoring was performed. Images were reconstructed
with and without phase coefficient correction and compared qualitatively.
Results: The algorithm was able to correct corrupted field dynamics, resulting
in image quality improvements. Significant artefact reduction was observed when
correcting higher order fits, especially for diffusion weighted images.
Stepwise fitting provided the most correction benefit, which was marginally
improved when adding weighted least squares and phase residual corrections.
Conclusion: The proposed algorithm can mitigate effects of phase errors in
field monitoring, providing improved reliability of field dynamic
characterization.
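The weighted least squares step in component two has a standard closed form: solve the weighted normal equations, with weights chosen to down-weight distant probes. The sketch below shows only that generic solve; the weighting scheme (e.g., inverse probe distance) and design matrix are assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_least_squares(A, y, w):
    """Solve min_beta || sqrt(w) * (A @ beta - y) ||^2 via the weighted
    normal equations (A^T W A) beta = A^T W y, where W = diag(w).
    Larger w entries mean more trusted observations (e.g., near probes)."""
    Aw = A * w[:, None]                     # row-scale A by the weights
    return np.linalg.solve(A.T @ Aw, Aw.T @ y)
```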
|
2301.09726v1
|
2023-02-07
|
Computational capability for physical reservoir computing using a spin-torque oscillator with two free layers
|
A numerical analysis on the computational capability of physical reservoir
computing utilizing a spin-torque oscillator with two free layers is reported.
Conventional spintronics devices usually consist of two ferromagnets, where the
direction of magnetization in one layer, called the free layer, can move while
that of the other, the reference layer, is fixed. Recently, however, devices
with two free layers, where the reference layer is replaced by another free
layer, have been developed for various practical applications. Adding another
free layer drastically changes the dynamical response of the device through the
couplings via the spin-transfer effect and the dipole magnetic field. A
numerical simulation of the Landau-Lifshitz-Gilbert equation and statistical
analyses of the Lyapunov exponent and the synchronization index reveal the
appearance of an amplitude-modulated oscillation and chaos in the oscillators
with two free layers. Such complex dynamics qualitatively change the
computational capability of physical reservoir computing because the
computational resource is dynamics of the physical system. An evaluation of the
short-term memory capacity clarifies that oscillators with two free layers have
a larger capacity than those of conventional oscillators. An enhancement in
capacity near the edge of echo state property, i.e., the boundary between zero
and finite synchronization index, is also found.
|
2302.03769v1
|
2023-02-13
|
Ultra-bright single photon source based on an atomically thin material
|
Solid-state single photon sources are central building blocks in quantum
communication networks and on-chip quantum information processing. Atomically
thin crystals were established as possible candidates to emit non-classical
states of light; however, the performance of monolayer-based single photon
sources has so far lagged behind state-of-the-art devices based on volume
crystals. Here, we implement a single photon source based on an atomically thin
sheet of WSe2 coupled to a spectrally tunable optical cavity. It is
characterized by a high single photon purity with a $g^{(2)}(0)$ value as low
as $4.7 \pm 0.7 \%$ and a record-high first lens brightness of linearly
polarized photons as large as $65 \pm 4 \%$. Interestingly, the high
performance of our devices allows us to observe genuine quantum interference
phenomena in a Hong-Ou-Mandel experiment. Our results demonstrate that open
cavities and two-dimensional materials constitute an excellent platform for
ultra-bright quantum light sources: the unique properties of such
two-dimensional materials and the versatility of open cavities open an
inspiring avenue for novel quantum optoelectronic devices.
|
2302.06340v1
|
2023-02-21
|
A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT
|
Prompt engineering is an increasingly important skill set needed to converse
effectively with large language models (LLMs), such as ChatGPT. Prompts are
instructions given to an LLM to enforce rules, automate processes, and ensure
specific qualities (and quantities) of generated output. Prompts are also a
form of programming that can customize the outputs and interactions with an
LLM. This paper describes a catalog of prompt engineering techniques presented
in pattern form that have been applied to solve common problems when conversing
with LLMs. Prompt patterns are a knowledge transfer method analogous to
software patterns since they provide reusable solutions to common problems
faced in a particular context, i.e., output generation and interaction when
working with LLMs. This paper provides the following contributions to research
on prompt engineering that apply LLMs to automate software development tasks.
First, it provides a framework for documenting patterns for structuring prompts
to solve a range of problems so that they can be adapted to different domains.
Second, it presents a catalog of patterns that have been applied successfully
to improve the outputs of LLM conversations. Third, it explains how prompts can
be built from multiple patterns and illustrates prompt patterns that benefit
from combination with other prompt patterns.
|
2302.11382v1
|
2023-03-11
|
Power efficient ReLU design for neuromorphic computing using spin Hall effect
|
We demonstrate a magnetic tunnel junction injected with spin Hall current to
exhibit linear rotation of magnetization of the free-ferromagnet using only the
spin current. Using the linear resistance change of the MTJ, we devise a
circuit for the rectified linear activation (ReLU) function of the artificial
neuron. We explore the role of different spin Hall effect (SHE) heavy metal
layers on the power consumption of the ReLU circuit. We benchmark the power
consumption of the ReLU circuit with different SHE layers by defining a new
parameter called the spin Hall power factor. It combines the spin Hall angle,
resistivity, and thickness of the heavy metal layer, which translates to the
power consumption of the different SHE layers during spin-orbit
switching/rotation of the free FM. We employ a hybrid spintronics-CMOS
simulation framework that couples Keldysh non-equilibrium Green's function
formalism with Landau-Lifshitz-Gilbert-Slonzewski equations and the HSPICE
circuit simulator to account for diverse physics of spin-transport and the CMOS
elements in our proposed ReLU design. We also demonstrate the robustness of the
proposed ReLU circuit against thermal noise and non-trivial power-error
trade-off that enables the use of an unstable free-ferromagnet for
energy-efficient design. Using the proposed circuit, we evaluate the
performance of the convolutional neural network for MNIST datasets and
demonstrate comparable classification accuracies to the ideal ReLU with an
energy consumption of 75 pJ per sample.
|
2303.06463v1
|
2023-03-28
|
Optimal Scheduling Policies for Remote Estimation of Autoregressive Markov Processes over Time-Correlated Fading Channel
|
We consider the problem of transmission scheduling for the remote estimation
of a discrete-time autoregressive Markov process that is driven by white
Gaussian noise. A sensor observes this process, and then decides to either
encode the current state of this process into a data packet and attempts to
transmit it to the estimator over an unreliable wireless channel modeled as a
Gilbert-Elliott channel, or does not send any update. Each transmission attempt
consumes $\lambda$ units of transmission power, and the remote estimator is
assumed to be linear. The channel state is revealed only via the feedback
(ACK\slash NACK) of a transmission, and hence the channel state is not revealed
if no transmission occurs. The goal of the scheduler is to minimize the
expected value of an infinite-horizon cumulative discounted cost, in which the
instantaneous cost is composed of the following two quantities: (i)~squared
estimation error, (ii) transmission power. We show that this problem can
equivalently be posed as a partially observable Markov decision process
(POMDP), in which the scheduler maintains a belief about the current state of
the channel, and makes decisions on the basis of the current value of the
estimation error and the belief state. We then show that the optimal policy is
of threshold type, i.e., for each value of the estimation error $e$, there is a
threshold $b^*(e)$ such that when the error is equal to $e$, it is
optimal to transmit only when the current belief state is greater than
$b^*(e)$.
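Two pieces of the scheduler above are easy to sketch: the belief prediction when no transmission (and hence no feedback) occurs is a one-step Markov update on P(channel is Good), and the threshold policy compares that belief against a threshold function of the error. The Gilbert-Elliott transition probabilities and the threshold function below are hypothetical placeholders, not values derived in the paper.

```python
def predict_belief(p_good, p_gg, p_bg):
    """One-step Markov prediction of P(channel = Good) when no
    transmission occurs: p_gg = P(G -> G), p_bg = P(B -> G)."""
    return p_good * p_gg + (1.0 - p_good) * p_bg

def transmit(error, belief, b_star):
    """Threshold-type policy: transmit iff the belief exceeds b*(e).
    b_star is a hypothetical threshold function, not derived here."""
    return belief > b_star(error)
```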
|
2303.16285v1
|
2023-04-14
|
Study on Soft Robotic Pinniped Locomotion
|
Legged locomotion is a highly promising but under-researched subfield within
the field of soft robotics. The compliant limbs of soft-limbed robots offer
numerous benefits, including the ability to regulate impacts, tolerate falls,
and navigate through tight spaces. These robots have the potential to be used
for various applications, such as search and rescue, inspection, surveillance,
and more. The state-of-the-art still faces many challenges, including limited
degrees of freedom, a lack of diversity in gait trajectories, insufficient limb
dexterity, and limited payload capabilities. To address these challenges, we
develop a modular soft-limbed robot that can mimic the locomotion of pinnipeds.
By using a modular design approach, we aim to create a robot that has improved
degrees of freedom, gait trajectory diversity, limb dexterity, and payload
capabilities. We derive a complete floating-base kinematic model of the
proposed robot and use it to generate and experimentally validate a variety of
locomotion gaits. Results show that the proposed robot is capable of
replicating these gaits effectively. We compare the locomotion trajectories
under different gait parameters against our modeling results to demonstrate the
validity of our proposed gait models.
|
2304.06945v1
|
2023-04-19
|
Local object crop collision network for efficient simulation of non-convex objects in GPU-based simulators
|
Our goal is to develop an efficient contact detection algorithm for
large-scale GPU-based simulation of non-convex objects. Current GPU-based
simulators such as IsaacGym and Brax must trade off speed with fidelity,
generality, or both when simulating non-convex objects. Their main issue lies
in contact detection (CD): existing CD algorithms, such as
Gilbert-Johnson-Keerthi (GJK), must trade off their computational speed with
accuracy, which becomes expensive as the number of collisions among non-convex
objects increases. We propose a data-driven approach for CD, whose accuracy
depends only on the quality and quantity of offline dataset rather than online
computation time. Unlike GJK, our method inherently has a uniform computational
flow, which facilitates efficient GPU usage based on advanced compilers such as
XLA (Accelerated Linear Algebra). Further, we offer a data-efficient solution
by learning the patterns of colliding local crop object shapes, rather than
global object shapes which are harder to learn. We demonstrate our approach
improves the efficiency of existing CD methods by a factor of 5-10 for
non-convex objects with comparable accuracy. Using the previous work on contact
resolution for a neural-network-based contact detector, we integrate our CD
algorithm into the open-source GPU-based simulator, Brax, and show that we can
improve the efficiency over IsaacGym and generality over standard Brax. We
highly recommend the videos of our simulator included in the supplementary
materials.
|
2304.09439v2
|
2023-04-25
|
Semantic Compression With Large Language Models
|
The rise of large language models (LLMs) is revolutionizing information
retrieval, question answering, summarization, and code generation tasks.
However, in addition to confidently presenting factually inaccurate information
at times (known as "hallucinations"), LLMs are also inherently limited by the
number of input and output tokens that can be processed at once, making them
potentially less effective on tasks that require processing a large set or
continuous stream of information. A common approach to reducing the size of
data is through lossless or lossy compression. Yet, in some cases it may not be
strictly necessary to perfectly recover every detail from the original data, as
long as a requisite level of semantic precision or intent is conveyed.
This paper presents three contributions to research on LLMs. First, we
present the results from experiments exploring the viability of approximate
compression using LLMs, focusing specifically on GPT-3.5 and GPT-4 via ChatGPT
interfaces. Second, we investigate and quantify the capability of LLMs to
compress text and code, as well as to recall and manipulate compressed
representations of prompts. Third, we present two novel metrics -- Exact
Reconstructive Effectiveness (ERE) and Semantic Reconstruction Effectiveness
(SRE) -- that quantify the level of preserved intent between text compressed
and decompressed by the LLMs we studied. Our initial results indicate that
GPT-4 can effectively compress and reconstruct text while preserving the
semantic essence of the original text, providing a path to leverage
$\sim$5$\times$ more tokens than present limits allow.
|
2304.12512v1
|
2023-04-28
|
Optimal majority rules and quantitative Condorcet properties of setwise Kemeny voting schemes
|
The important Kemeny problem, which consists of computing median consensus
rankings of an election with respect to the Kemeny voting rule, admits
important applications in biology and computational social choice and was
generalized recently via an interesting setwise approach by Gilbert et al. Our
first results establish optimal quantitative extensions of the Unanimity
property and the well-known $3/4$-majority rule of Betzler et al. for the
classical Kemeny median problem. Moreover, by elaborating an exhaustive list of
quantified axiomatic properties (such as the Condorcet and Smith criteria, the
$5/6$-majority rule, etc.) of the $3$-wise Kemeny rule where not only pairwise
comparisons but also the discordance between the winners of subsets of three
candidates are taken into account, we conclude that the
$3$-wise Kemeny voting scheme induced by the $3$-wise Kendall-tau distance
presents interesting advantages in comparison with the classical Kemeny rule.
For example, it satisfies several improved manipulation-proof properties. Since
the $3$-wise Kemeny problem is NP-hard, our results also provide some of the
first useful space reduction techniques by determining the relative orders of
pairs of alternatives. Our work suggests similar interesting properties of
higher setwise Kemeny voting schemes, which justify and compensate for their
higher computational cost compared with the classical Kemeny scheme.
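For reference, the classical distance underlying the Kemeny rule can be stated in a few lines: the pairwise Kendall-tau distance counts candidate pairs ranked in opposite orders. This sketch covers only the classical pairwise case; the 3-wise variant discussed above additionally scores discordance over triples and is not shown.

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Classical (pairwise) Kendall-tau distance between two complete
    rankings over the same candidates: the number of candidate pairs
    ordered oppositely in the two rankings."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)
```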
|
2304.14980v1
|
2023-05-25
|
Packaging code for reproducible research in the public sector
|
The effective and ethical use of data to inform decision-making offers huge
value to the public sector, especially when delivered by transparent,
reproducible, and robust data processing workflows. One way that governments
are unlocking this value is through making their data publicly available,
allowing more people and organisations to derive insights. However, open data
is not enough in many cases: publicly available datasets need to be accessible
in an analysis-ready form from popular data science tools, such as R and
Python, for them to realise their full potential.
This paper explores ways to maximise the impact of open data with reference
to a case study of packaging code to facilitate reproducible analysis. We
present the jtstats project, which consists of R and Python packages for
importing, processing, and visualising large and complex datasets representing
journey times, for many modes and purposes at multiple geographic levels,
released by the UK Department for Transport. jtstats shows how domain-specific
packages can enable reproducible research within the public sector and beyond,
saving duplicated effort and reducing the risks of errors from repeated
analyses. We hope that the jtstats project inspires others, particularly those
in the public sector, to add value to their data sets by making them more
accessible.
|
2305.16205v1
|
2023-05-25
|
COMPLETE: A flagship mission for complete understanding of 3D coronal magnetic energy release
|
COMPLETE is a flagship mission concept combining broadband spectroscopic
imaging and comprehensive magnetography from multiple viewpoints around the Sun
to enable tomographic reconstruction of 3D coronal magnetic fields and
associated dynamic plasma properties, which provide direct diagnostics of
energy release. COMPLETE re-imagines the paradigm for solar remote-sensing
observations through purposefully co-optimized detectors distributed on
multiple spacecraft that operate as a single observatory, linked by a
comprehensive data/model assimilation strategy to unify individual observations
into a single physical framework. We describe COMPLETE's science goals,
instruments, and mission implementation. With targeted investment by NASA,
COMPLETE is feasible for launch in 2032 to observe around the maximum of Solar
Cycle 26.
|
2305.16533v1
|
2023-05-25
|
Magnetic Energy Powers the Corona: How We Can Understand its 3D Storage & Release
|
The coronal magnetic field is the prime driver behind many as-yet unsolved
mysteries: solar eruptions, coronal heating, and the solar wind, to name a few.
It is, however, still poorly observed and understood. We highlight key
questions related to magnetic energy storage, release, and transport in the
solar corona, and their relationship to these important problems. We advocate
for new and multi-point co-optimized measurements, sensitive to magnetic field
and other plasma parameters, spanning from optical to $\gamma$-ray wavelengths,
to bring closure to these long-standing and fundamental questions. We discuss
how our approach can fully describe the 3D magnetic field, embedded plasma,
particle energization, and their joint evolution to achieve these objectives.
|
2305.17146v1
|
2023-05-27
|
Optimization's Neglected Normative Commitments
|
Optimization is offered as an objective approach to resolving complex,
real-world decisions involving uncertainty and conflicting interests. It drives
business strategies as well as public policies and, increasingly, lies at the
heart of sophisticated machine learning systems. A paradigm used to approach
potentially high-stakes decisions, optimization relies on abstracting the real
world to a set of decision(s), objective(s) and constraint(s). Drawing from the
modeling process and a range of actual cases, this paper describes the
normative choices and assumptions that are necessarily part of using
optimization. It then identifies six emergent problems that may be neglected:
1) Misspecified values can yield optimizations that omit certain imperatives
altogether or incorporate them incorrectly as a constraint or as part of the
objective, 2) Problematic decision boundaries can lead to faulty modularity
assumptions and feedback loops, 3) Failing to account for multiple agents'
divergent goals and decisions can lead to policies that serve only certain
narrow interests, 4) Mislabeling and mismeasurement can introduce bias and
imprecision, 5) Faulty use of relaxation and approximation methods,
unaccompanied by formal characterizations and guarantees, can severely impede
applicability, and 6) Treating optimization as a justification for action,
without specifying the necessary contextual information, can lead to ethically
dubious or faulty decisions. Suggestions are given to further understand and
curb the harms that can arise when optimization is used wrongfully.
|
2305.17465v2
|
2023-05-30
|
Hardness of Approximation in PSPACE and Separation Results for Pebble Games
|
We consider the pebble game on DAGs with bounded fan-in introduced in
[Paterson and Hewitt '70] and the reversible version of this game in [Bennett
'89], and study the question of how hard it is to decide exactly or
approximately the number of pebbles needed for a given DAG in these games. We
prove that the problem of deciding whether $s$~pebbles suffice to reversibly
pebble a DAG $G$ is PSPACE-complete, as was previously shown for the standard
pebble game in [Gilbert, Lengauer and Tarjan '80]. Via two different graph
product constructions we then strengthen these results to establish that both
standard and reversible pebbling space are PSPACE-hard to approximate to within
any additive constant. To the best of our knowledge, these are the first
hardness of approximation results for pebble games in an unrestricted setting
(even for polynomial time). Also, since [Chan '13] proved that reversible
pebbling is equivalent to the games in [Dymond and Tompa '85] and [Raz and
McKenzie '99], our results apply to the Dymond--Tompa and Raz--McKenzie games
as well, and from the same paper it follows that resolution depth is
PSPACE-hard to determine up to any additive constant. We also obtain a
multiplicative logarithmic separation between reversible and standard pebbling
space. This improves on the additive logarithmic separation previously known
and could plausibly be tight, although we are not able to prove this. We leave
as an interesting open problem whether our additive hardness of approximation
result could be strengthened to a multiplicative bound if the computational
resources are decreased from polynomial space to the more common setting of
polynomial time.
|
2305.19104v1
|
2023-06-01
|
Every Bit Counts in Consensus
|
Consensus enables n processes to agree on a common valid L-bit value, despite
t < n/3 processes being faulty and acting arbitrarily. A long line of work has
been dedicated to improving the worst-case communication complexity of
consensus in partial synchrony. This has recently culminated in the worst-case
word complexity of O(n^2). However, the worst-case bit complexity of the best
solution is still O(n^2 L + n^2 kappa) (where kappa is the security parameter),
far from the \Omega(n L + n^2) lower bound. The gap is significant given the
practical use of consensus primitives, where values typically consist of
batches of large size (L > n).
This paper shows how to narrow the aforementioned gap while achieving optimal
linear latency. Namely, we present a new algorithm, DARE (Disperse, Agree,
REtrieve), that improves upon the O(n^2 L) term via a novel dispersal
primitive. DARE achieves O(n^{1.5} L + n^{2.5} kappa) bit complexity, an
effective sqrt{n}-factor improvement over the state-of-the-art (when L > n
kappa). Moreover, we show that employing heavier cryptographic primitives,
namely STARK proofs, allows us to devise DARE-Stark, a version of DARE which
achieves the near-optimal bit complexity of O(n L + n^2 poly(kappa)). Both DARE
and DARE-Stark achieve optimal O(n) latency.
|
2306.00431v2
|
2023-06-12
|
Accountability Infrastructure: How to implement limits on platform optimization to protect population health
|
Attention capitalism has generated design processes and product development
decisions that prioritize platform growth over all other considerations. To the
extent limits have been placed on these incentives, interventions have
primarily taken the form of content moderation. While moderation is important
for what we call "acute harms," societal-scale harms -- such as negative
effects on mental health and social trust -- require new forms of institutional
transparency and scientific investigation, which we group under the term
accountability infrastructure.
This is not a new problem. In fact, there are many conceptual lessons and
implementation approaches for accountability infrastructure within the history
of public health. After reviewing these insights, we reinterpret the societal
harms generated by technology platforms through reference to public health. To
that end, we present a novel mechanism design framework and practical
measurement methods for that framework. The proposed approach is iterative and
built into the product design process, and is applicable for both
internally-motivated (i.e. self-regulation by companies) and
externally-motivated (i.e. government regulation) interventions for a range of
societal problems, including mental health.
We aim to help shape a research agenda of principles for the design of
mechanisms around problem areas on which there is broad consensus and a firm
base of support. We offer constructive examples and discussion of potential
implementation methods related to these topics, as well as several new data
illustrations for potential effects of exposure to online content.
|
2306.07443v1
|
2023-06-16
|
Microlayer in nucleate boiling seen as Landau-Levich film with dewetting and evaporation
|
Both experimental and theoretical studies on the microscale and fast physical
phenomena occurring during the growth of vapor bubbles in nucleate pool boiling
are reported. The focus is on the liquid film of micrometric thickness
(``microlayer'') that can form between the heater and the liquid-vapor
interface of a bubble on the millisecond time scale. The microlayer strongly
affects the macroscale heat transfer and is thus important to be understood. It
is shown that the microlayer can be seen as the Landau-Levich film deposited by
the bubble foot edge during its receding when the bubble grows. The microlayer
profile measured with white-light interferometry, the temperature distribution
over the heater, and the bubble shape were observed with synchronized
high-speed cameras. The microlayer consists of two regions: a ridge near the
contact line followed by a longer and flatter part. The ridge could not be
measured because of the intrinsic limitation of interferometry, which is
analyzed. The simulations show that the ridge grows over time due to the
collection of liquid as the contact line recedes, the theoretical dynamics of which agrees
with the experiment. The flatter part of the microlayer is bumped and its
physical origin is explained.
|
2306.09838v1
|
2023-06-20
|
High frequency oscillations in spin-torque nano oscillator due to bilinear coupling
|
Exchange coupling in an interfacial context is crucial for spin-torque nano
oscillator (STNO) that consists of a non-magnetic spacer which is alloyed with
a ferromagnetic material. Currently, investigations on the dynamics of the free
layer magnetization and frequency enhancement in the STNO with bilinear
coupling are still being actively pursued. In the present work, we investigate
the dynamics of the STNO in the presence of bilinear coupling but in the
absence of an external magnetic field by analyzing the associated
Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation, and consequently the impact
of the bilinear coupling on the dynamics of the magnetization of the free layer
is studied. It is observed that the frequency of the oscillations in the
magnetization component along the direction of the pinned layer polarization
can be enhanced above 300 GHz by positive bilinear coupling and up to around 30
GHz by negative bilinear coupling. We further reveal a transition from in-plane
to out-of-plane precession for both positive and negative bilinear couplings.
We also analyze the switching of the magnetization for different values of
current and bilinear coupling. Our detailed investigations of STNO with
bilinear coupling aim at the possibilities of high-frequency devices by
considering the applied current and bilinear coupling in the absence of a
magnetic field.
|
2306.11415v1
|
2023-06-20
|
Convolutional neural networks for large-scale dynamical modeling of itinerant magnets
|
Complex spin textures in itinerant electron magnets hold promises for
next-generation memory and information technology. The long-ranged and often
frustrated electron-mediated spin interactions in these materials give rise to
intriguing localized spin structures such as skyrmions. Yet, simulations of
magnetization dynamics for such itinerant magnets are computationally difficult
due to the need for repeated solutions to the electronic structure problems. We
present a convolutional neural network (CNN) model to accurately and
efficiently predict the electron-induced magnetic torques acting on local
spins. Importantly, as the convolutional operations with a fixed kernel
(receptive field) size naturally take advantage of the locality principle for
many-electron systems, CNN offers a scalable machine learning approach to spin
dynamics. We apply our approach to enable large-scale dynamical simulations of
skyrmion phases in itinerant spin systems. By incorporating the CNN model into
Landau-Lifshitz-Gilbert dynamics, our simulations successfully reproduce the
relaxation process of the skyrmion phase and stabilize a skyrmion lattice in
larger systems. The CNN model also allows us to compute the effective receptive
fields, thus providing a systematic and unbiased method for determining the
locality of the original electron models.
|
2306.11833v1
|
2023-06-29
|
Relaxed Local Correctability from Local Testing
|
We construct the first asymptotically good relaxed locally correctable codes
with polylogarithmic query complexity, bringing the upper bound polynomially
close to the lower bound of Gur and Lachish (SICOMP 2021). Our result follows
from showing that a high-rate locally testable code can boost the block length
of a smaller relaxed locally correctable code, while preserving the correcting
radius and incurring only a modest additive cost in rate and query complexity.
We use the locally testable code's tester to check if the amount of corruption
in the input is low; if so, we can "zoom-in" to a suitable substring of the
input and recurse on the smaller code's local corrector. Hence, iterating this
operation with a suitable family of locally testable codes due to Dinur, Evra,
Livne, Lubotzky, and Mozes (STOC 2022) yields asymptotically good codes with
relaxed local correctability, arbitrarily large block length, and
polylogarithmic query complexity.
Our codes asymptotically inherit the rate and distance of any locally
testable code used in the final invocation of the operation. Therefore, our
framework also yields nonexplicit relaxed locally correctable codes with
polylogarithmic query complexity that have rate and distance approaching the
Gilbert-Varshamov bound.
|
2306.17035v2
|
2023-07-13
|
Words are not Wind -- How Joint Commitment and Reputation Solve Social Dilemmas, without Repeated Interactions or Enforcement by Third Parties
|
Joint commitment was argued to "make our social world" (Gilbert, 2014) and to
separate us from other primates. 'Joint' entails that neither of us promises
anything, unless the other promises as well. When we need to coordinate for the
best mutual outcome, any commitment is beneficial. However, when we are tempted
to free-ride (i.e. in social dilemmas), commitment serves no obvious purpose.
We show that a reputation system, which judges action in social dilemmas only
after joint commitment, can prevent free-riding. Keeping commitments builds
trust. We can selectively enter joint commitments with trustworthy individuals
to ensure their cooperation (since they will now be judged). We simply do not
commit to cooperate with those we do not trust, and hence can freely defect
without losing the trust of others. This principle might be the reason for
pointedly public joint commitments, such as marriage. It is especially relevant
to our evolutionary past, in which no mechanisms existed to enforce commitments
reliably and impartially (e.g. via a powerful and accountable government). Much
research from anthropology, philosophy and psychology has assumed that past
collaborations were mutually beneficial and offered little opportunity to
free-ride, an assumption for which there is little support. Our evolutionary
game theory approach proves that this assumption is not necessary, because
free-riding could have been dealt with through joint commitments and reputation.
|
2307.06898v1
|
2023-07-18
|
Multi-Stage Cable Routing through Hierarchical Imitation Learning
|
We study the problem of learning to perform multi-stage robotic manipulation
tasks, with applications to cable routing, where the robot must route a cable
through a series of clips. This setting presents challenges representative of
complex multi-stage robotic manipulation scenarios: handling deformable
objects, closing the loop on visual perception, and handling extended behaviors
consisting of multiple steps that must be executed successfully to complete the
entire task. In such settings, learning individual primitives for each stage
that succeed with a high enough rate to perform a complete temporally extended
task is impractical: if each stage must be completed successfully and has a
non-negligible probability of failure, the likelihood of successful completion
of the entire task becomes negligible. Therefore, successful controllers for
such multi-stage tasks must be able to recover from failure and compensate for
imperfections in low-level controllers by smartly choosing which controllers to
trigger at any given time, retrying, or taking corrective action as needed. To
this end, we describe an imitation learning system that uses vision-based
policies trained from demonstrations at both the lower (motor control) and the
upper (sequencing) level, present a system for instantiating this method to
learn the cable routing task, and perform evaluations showing great performance
in generalizing to very challenging clip placement variations. Supplementary
videos, datasets, and code can be found at
https://sites.google.com/view/cablerouting.
|
2307.08927v5
|
2023-07-20
|
Fallout from U.S. atmospheric nuclear tests in New Mexico and Nevada (1945-1962)
|
One hundred and one atmospheric nuclear weapon tests were conducted between
1945 and 1962 in the United States, resulting in widespread dispersion of
radioactive fallout, and leading to environmental contamination and population
exposures. Accurate assessment of the extent of fallout from nuclear weapon
tests has been challenging in the United States and elsewhere, due to limited
monitoring and data accessibility. Here we address this deficit by combining
U.S. government data, high-resolution reanalyzed historical weather fields, and
atmospheric transport modeling to reconstruct radionuclide deposition across
the contiguous United States, with 10-kilometer spatial and one-hour temporal
resolution for five days following detonation, from all 94 atmospheric tests
detonated in New Mexico and Nevada with fission yields sufficient to generate
mushroom clouds. Our analysis also includes deposition estimates for 10 days
following the detonation of Trinity, the first ever nuclear weapon test, on
July 16, 1945. We identify locations where radionuclide deposition
significantly exceeded levels in areas covered by the U.S. Radiation Exposure
Compensation Act (RECA). These findings include deposition in all 48 contiguous
U.S. states. They provide an opportunity for re-evaluating the public health
and environmental implications from atmospheric nuclear testing. Finally, our
findings also speak to debates about marking the beginning of the Anthropocene
with nuclear weapons fallout. Our deposition estimates indicate that direct
fallout from Trinity, a plutonium device, reached Crawford Lake in Canada, the
proposed "golden spike" site marking the beginning of the Anthropocene epoch,
starting on July 20, 1945.
|
2307.11040v1
|
2023-07-23
|
Characterizing non-Markovian Quantum Process by Fast Bayesian Tomography
|
To push gate performance to levels beyond the thresholds for quantum error
correction, it is important to characterize the error sources occurring on
quantum gates. However, the characterization of non-Markovian error poses a
challenge to current quantum process tomography techniques. Fast Bayesian
Tomography (FBT) is a self-consistent gate set tomography protocol that can be
bootstrapped from earlier characterization knowledge and be updated in
real-time with arbitrary gate sequences. Here we demonstrate how FBT allows for
the characterization of key non-Markovian error processes. We introduce two
experimental protocols for FBT to diagnose the non-Markovian behavior of
two-qubit systems on silicon quantum dots. To increase the efficiency and
scalability of the experiment-analysis loop, we develop an online FBT software
stack. To reduce experiment cost and analysis time, we also introduce a native
readout method and warm boot strategy. Our results demonstrate that FBT is a
useful tool for probing non-Markovian errors that can be detrimental to the
ultimate realization of fault-tolerant operation in quantum computing.
|
2307.12452v2
|
2023-07-27
|
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
|
Reinforcement learning from human feedback (RLHF) is a technique for training
AI systems to align with human goals. RLHF has emerged as the central method
used to finetune state-of-the-art large language models (LLMs). Despite this
popularity, there has been relatively little public work systematizing its
flaws. In this paper, we (1) survey open problems and fundamental limitations
of RLHF and related methods; (2) overview techniques to understand, improve,
and complement RLHF in practice; and (3) propose auditing and disclosure
standards to improve societal oversight of RLHF systems. Our work emphasizes
the limitations of RLHF and highlights the importance of a multi-faceted
approach to the development of safer AI systems.
|
2307.15217v2
|
2023-08-03
|
Predicting Ki67, ER, PR, and HER2 Statuses from H&E-stained Breast Cancer Images
|
Despite the advances in machine learning and digital pathology, it is not yet
clear if machine learning methods can accurately predict molecular information
merely from histomorphology. In a quest to answer this question, we built a
large-scale dataset (185538 images) with reliable measurements for Ki67, ER,
PR, and HER2 statuses. The dataset is composed of mirrored images of H\&E and
corresponding images of immunohistochemistry (IHC) assays (Ki67, ER, PR, and
HER2). These images are mirrored through registration. To increase reliability,
individual pairs were inspected and discarded if artifacts were present (tissue
folding, bubbles, etc). Measurements for Ki67, ER and PR were determined by
calculating H-Score from image analysis. HER2 measurement is based on binary
classification: 0 and 1+ (IHC scores representing the negative subset) vs 3+
(IHC score representing the positive subset). Cases with an equivocal IHC score
(2+) were excluded. We
show that a standard ViT-based pipeline can achieve prediction performances
around 90% in terms of Area Under the Curve (AUC) when trained with a proper
labeling protocol. Finally, we shed light on the ability of the trained
classifiers to localize relevant regions, which encourages future work to
improve the localizations. Our proposed dataset is publicly available:
https://ihc4bc.github.io/
|
2308.01982v1
|
2023-08-06
|
Unravelling metallic contaminants in complex polyimide heterostructures using deep ultraviolet spectroscopic ellipsometry
|
Metallic contaminants in complex heterostructures are important topics due to
their significant roles in determining physical properties as well as device
performance. Heterostructures of polyimide via on Al pad and Cu redistribution
layer (RDL) on polyimide have shown exotic properties and are important for
advanced semiconductor packaging systems. One main problem is significant
leakage current variations, which affect the performance of the devices, yet
the origin is far from understood. Furthermore, metal contamination can occur
at buried interfaces, and it is particularly challenging to probe
them. Until now, the electronic and optical properties of complex polyimide
heterostructures and the roles of metallic contaminants, especially in the deep
ultraviolet (DUV), have not been studied extensively. Here, using
spectroscopic ellipsometry (SE) in a broad DUV range supported with
finite-difference time-domain (FDTD) calculations, we determine optical
properties of contaminants with various concentrations and reveal their
influence on device performance of under-bump vias and redistribution layer
(RDL) architectures. The complex dielectric function shows varying
contamination levels and different metals responsible for chip performance.
Metallic contaminants are found embedded within 50 nm in the polyimide and
different metals are distinguishable with varying concentrations, in agreement
with contact measurements in highly complex structures. Our result shows the
potency of spectroscopic ellipsometry in the DUV and paves the way for
non-destructive, advanced quality control and metrology applications in
integrated advanced electronics packaging systems.
|
2308.03015v1
|
2023-08-14
|
Nanoelectromechanical control of spin-photon interfaces in a hybrid quantum system on chip
|
Atom-like defects or color centers (CCs) in nanostructured diamond are a
leading platform for optically linked quantum technologies, with recent
advances including memory-enhanced quantum communication, multi-node quantum
networks, and spin-mediated generation of photonic cluster states. Scaling to
practically useful applications motivates architectures meeting the following
criteria: C1 individual optical addressing of spin qubits; C2 frequency tuning
of CC spin-dependent optical transitions; C3 coherent spin control in CC ground
states; C4 active photon routing; C5 scalable manufacturability; and C6 low
on-chip power dissipation for cryogenic operations. However, no architecture
meeting C1-C6 has thus far been demonstrated. Here, we introduce a hybrid
quantum system-on-chip (HQ-SoC) architecture that simultaneously achieves
C1-C6. Key to this advance is the realization of piezoelectric strain control
of diamond waveguide-coupled tin vacancy centers to meet C2 and C3, with
ultra-low power dissipation necessary for C6. The DC response of our device
allows emitter transition tuning by over 20 GHz, while the large frequency
range (exceeding 2 GHz) enables low-power AC control. We show acoustic
manipulation of integrated tin vacancy spins and estimate single-phonon
coupling rates over 1 kHz in the resolved sideband regime. Combined with
high-speed optical routing with negligible static hold power, this HQ-SoC
platform opens the path to scalable single-qubit control with optically
mediated entangling gates.
|
2308.07161v1
|
2023-08-23
|
MOFO: MOtion FOcused Self-Supervision for Video Understanding
|
Self-supervised learning (SSL) techniques have recently produced outstanding
results in learning visual representations from unlabeled videos. Despite the
importance of motion in supervised learning techniques for action recognition,
SSL methods often do not explicitly consider motion information in videos. To
address this issue, we propose MOFO (MOtion FOcused), a novel SSL method for
focusing representation learning on the motion area of a video, for action
recognition. MOFO automatically detects motion areas in videos and uses these
to guide the self-supervision task. We use a masked autoencoder which randomly
masks out a high proportion of the input sequence; we force a specified
percentage of the inside of the motion area to be masked and the remainder from
outside. We further incorporate motion information into the finetuning step to
emphasise motion in the downstream task. We demonstrate that our motion-focused
innovations can significantly boost the performance of the currently leading
SSL method (VideoMAE) for action recognition. Our method improves the recent
self-supervised Vision Transformer (ViT), VideoMAE, by achieving +2.6%, +2.1%,
+1.3% accuracy on Epic-Kitchens verb, noun and action classification,
respectively, and +4.7% accuracy on Something-Something V2 action
classification. Our proposed approach significantly improves the performance of
the current SSL method for action recognition, indicating the importance of
explicitly encoding motion in SSL.
|
2308.12447v2
|
2023-08-25
|
Thermal effect on microwave pulse driven magnetization switching of Stoner particle
|
Recently it has been demonstrated that the cosine chirp microwave pulse
(CCMP) is capable of achieving fast and energy-efficient magnetization reversal
of a nanoparticle at zero temperature. Here we investigate the effect of finite
temperature $T$ on the CCMP-driven magnetization reversal within the
framework of the stochastic Landau-Lifshitz-Gilbert equation. At finite
temperature we still obtain fast and energy-efficient CCMP-driven reversal, and
hence estimate the maximal temperature $T_{max}$ at which the magnetization
reversal remains valid. $T_{max}$ increases with increasing nanoparticle
cross-sectional area/shape anisotropy up to a certain value, and decreases with
any further increase. This is because the demagnetization/shape-anisotropy
field opposes the magnetocrystalline anisotropy, i.e., it reduces the energy
barrier separating the two stable states. For smaller cross-sectional
area/shape anisotropy, the controlling parameters of the CCMP show a decreasing
trend with temperature. We also find that with increasing easy-plane
shape anisotropy, the required initial frequency of the CCMP is significantly
reduced. For larger nanoparticle volumes, the parameters of the CCMP remain
constant over a wide range of temperatures, which is desirable for device
applications. These findings might therefore be useful for realizing
CCMP-driven fast and energy-efficient magnetization reversal under realistic
conditions.
|
2308.13124v1
|
2023-09-04
|
Impact of electrostatic crosstalk on spin qubits in dense CMOS quantum dot arrays
|
Quantum processors based on integrated nanoscale silicon spin qubits are a
promising platform for highly scalable quantum computation. Current CMOS spin
qubit processors consist of dense gate arrays to define the quantum dots,
making them susceptible to crosstalk from capacitive coupling between a dot and
its neighbouring gates. Small but sizeable spin-orbit interactions can transfer
this electrostatic crosstalk to the spin g-factors, creating a dependence of
the Larmor frequency on the electric field created by gate electrodes
positioned even tens of nanometers apart. By studying the Stark shift from tens
of spin qubits measured in nine different CMOS devices, we developed a
theoretical framework that explains how electric fields couple to the spin of
the electrons in increasingly complex arrays, including those electric
fluctuations that limit qubit dephasing times $T_2^*$. The results will aid in
the design of robust strategies to scale CMOS quantum technology.
|
2309.01849v1
|
2023-09-05
|
Connectivity and interference in device-to-device networks in Poisson-Voronoi cities
|
To study the overall connectivity in device-to-device networks in cities, we
incorporate a signal-to-interference-plus-noise connectivity model into a
Poisson-Voronoi tessellation model representing the streets of a city. Relays
are located at crossroads (or street intersections), whereas (user) devices are
scattered along streets. Between any two adjacent relays, we assume data can be
transmitted either directly between the relays or through users, given they
share a common street. Our simulation results reveal that the network
connectivity is ensured when the density of users (on the streets) exceeds a
certain critical value. But then the network connectivity disappears when the
user density exceeds a second critical value. The intuition is that for longer
streets, where direct relay-to-relay communication is not possible, users are
needed to transmit data between relays, but with too many users the
interference becomes too strong, eventually reducing the overall network
connectivity. This observation on the user density evokes previous results
based on another wireless network model, where transmitter-receivers were
scattered across the plane. This effect disappears when interference is removed
from the model, giving a variation of the classic Gilbert model and recalling
the lesson that neglecting interference in such network models can give overly
optimistic results. For physically reasonable model parameters, we show that
crowded streets (with more than six users on a typical street) lead to a sudden
drop in connectivity. We also give numerical results outlining a relationship
between the user density and the strength of any interference reduction
techniques.
|
2309.02137v2
|
2023-09-16
|
On non-expandable cross-bifix-free codes
|
A cross-bifix-free code of length $n$ over $\mathbb{Z}_q$ is defined as a
non-empty subset of $\mathbb{Z}_q^n$ satisfying that the prefix set of each
codeword is disjoint from the suffix set of every codeword. Cross-bifix-free
codes have found important applications in digital communication systems. One
of the main research problems on cross-bifix-free codes is to construct
cross-bifix-free codes as large as possible in size. Recently, Wang and Wang
introduced a family of cross-bifix-free codes $S_{I,J}^{(k)}(n)$, which is a
generalization of the classical cross-bifix-free codes studied early by
Levenshtein, Gilbert and Chee {\it et al.}. It is known that $S_{I,J}^{(k)}(n)$
is nearly optimal in size and $S_{I,J}^{(k)}(n)$ is non-expandable if $k=n-1$
or $1\leq k<n/2$. In this paper, we first show that $S_{I,J}^{(k)}(n)$ is
non-expandable if and only if $k=n-1$ or $1\leq k<n/2$, thereby improving the
results in [Chee {\it et al.}, IEEE-TIT, 2013] and [Wang and Wang, IEEE-TIT,
2022]. We then construct a new family of cross-bifix-free codes
$U^{(t)}_{I,J}(n)$ to expand $S_{I,J}^{(k)}(n)$ such that the resulting larger
code $S_{I,J}^{(k)}(n)\bigcup U^{(t)}_{I,J}(n)$ is a non-expandable
cross-bifix-free code whenever $S_{I,J}^{(k)}(n)$ is expandable. Finally, we
present an explicit formula for the size of $S_{I,J}^{(k)}(n)\bigcup
U^{(t)}_{I,J}(n)$.
|
2309.08915v1
|
2023-09-21
|
Real-time feedback protocols for optimizing fault-tolerant two-qubit gate fidelities in a silicon spin system
|
Recently, several groups have demonstrated two-qubit gate fidelities in
semiconductor spin qubit systems above 99%. Achieving this regime of
fault-tolerant compatible high fidelities is nontrivial and requires exquisite
stability and precise control over the different qubit parameters over an
extended period of time. This can be done by efficiently calibrating qubit
control parameters against different sources of micro- and macroscopic noise.
Here, we present several single- and two-qubit parameter feedback protocols,
optimised for and implemented in state-of-the-art fast FPGA hardware.
Furthermore, we use wavelet-based analysis on the collected feedback data to
gain insight into the different sources of noise in the system. Scalable
feedback is an outstanding challenge and the presented implementation and
analysis gives insight into the benefits and drawbacks of qubit parameter
feedback, as feedback related overhead increases. This work demonstrates a
pathway towards robust qubit parameter feedback and systematic noise analysis,
crucial for mitigation strategies towards systematic high-fidelity qubit
operation compatible with quantum error correction protocols.
|
2309.12541v1
|
2023-09-21
|
Spatio-temporal correlations of noise in MOS spin qubits
|
In quantum computing, characterising the full noise profile of qubits can aid
the efforts towards increasing coherence times and fidelities by creating error
mitigating techniques specific to the type of noise in the system, or by
completely removing the sources of noise. Spin qubits in MOS quantum dots are
exposed to noise originated from the complex glassy behaviour of two-level
fluctuators, leading to non-trivial correlations between qubit properties both
in space and time. With recent engineering progress, large amounts of data are
being collected in typical spin qubit device experiments, and it is beneficial
to explore data analysis options inspired by fields of research that are
experienced in managing large data sets; examples include astrophysics, finance
and climate science. Here, we propose and demonstrate wavelet-based analysis
techniques to decompose signals into both frequency and time components to gain
a deeper insight into the sources of noise in our systems. We apply the
analysis to a long feedback experiment performed on a state-of-the-art
two-qubit system in a pair of SiMOS quantum dots. The observed correlations
serve to identify common microscopic causes of noise, as well as to elucidate
pathways for multi-qubit operation with a more scalable feedback system.
|
2309.12542v2
|
2023-09-29
|
Glioma subtype classification from histopathological images using in-domain and out-of-domain transfer learning: An experimental study
|
We provide in this paper a comprehensive comparison of various transfer
learning strategies and deep learning architectures for computer-aided
classification of adult-type diffuse gliomas. We evaluate the generalizability
of out-of-domain ImageNet representations for a target domain of
histopathological images, and study the impact of in-domain adaptation using
self-supervised and multi-task learning approaches for pretraining the models
using the medium-to-large scale datasets of histopathological images. A
semi-supervised learning approach is furthermore proposed, where the fine-tuned
models are utilized to predict the labels of unannotated regions of the whole
slide images (WSI). The models are subsequently retrained using the
ground-truth labels and weak labels determined in the previous step, providing
superior performance in comparison to standard in-domain transfer learning with
balanced accuracy of 96.91% and F1-score 97.07%, and minimizing the
pathologist's efforts for annotation. Finally, we provide a visualization tool
working at WSI level which generates heatmaps that highlight tumor areas; thus,
providing insights to pathologists concerning the most informative parts of the
WSI.
|
2309.17223v1
|
2023-10-13
|
Midpoint geometric integrators for inertial magnetization dynamics
|
We consider the numerical solution of the inertial version of
Landau-Lifshitz-Gilbert equation (iLLG), which describes high-frequency
nutation on top of magnetization precession due to angular momentum relaxation.
The iLLG equation defines a higher-order nonlinear dynamical system with very
different nature compared to the classical LLG equation, requiring twice as
many degrees of freedom for space-time discretization. It exhibits essential
conservation properties, namely magnetization amplitude preservation,
magnetization projection conservation, and a balance equation for generalized
free energy, leading to a Lyapunov structure (i.e. the free energy is a
decreasing function of time) when the external magnetic field is constant in
time. We propose two second-order numerical schemes for integrating the iLLG
dynamics over time, both based on implicit midpoint rule. The first scheme
unconditionally preserves all the conservation properties, making it the
preferred choice for simulating inertial magnetization dynamics. However, it
implies doubling the number of unknowns, necessitating significant changes in
numerical micromagnetic codes and increasing computational costs especially for
spatially inhomogeneous dynamics simulations. To address this issue, we present
a second time-stepping method that retains the same computational cost as the
implicit midpoint rule for classical LLG dynamics while unconditionally
preserving magnetization amplitude and projection. Special quasi-Newton
techniques are developed for solving the nonlinear system of equations required
at each time step due to the implicit nature of both time-steppings. The
numerical schemes are validated on analytical solution for macrospin terahertz
frequency response and the effectiveness of the second scheme is demonstrated
with full micromagnetic simulation of inertial spin waves propagation in a
magnetic thin-film.
|
2310.09043v1
|
2023-10-28
|
Einstein-de Haas torque as a discrete spectroscopic probe allows nanomechanical measurement of a magnetic resonance
|
The Einstein-de Haas (EdH) effect is a fundamental, mechanical consequence of
any temporal change of magnetism in an object. EdH torque results from
conserving the object's total angular momentum: the angular momenta of all the
specimen's magnetic moments, together with its mechanical angular momentum.
Although the EdH effect is usually small and difficult to observe, it increases
in magnitude with detection frequency. We explore the frequency-dependence of
EdH torque for a thin film permalloy microstructure by employing a ladder of
flexural beam modes (with five distinct resonance frequencies spanning from 3
to 208 MHz) within a nanocavity optomechanical torque sensor via magnetic
hysteresis curves measured at mechanical resonances. At low DC fields the
gyrotropic resonance of a magnetic vortex spin texture overlaps the 208 MHz
mechanical mode. The massive EdH mechanical torques arising from this
co-resonance yield a fingerprint of vortex core pinning and depinning in the
sample. The experimental results are discussed in relation to mechanical
torques predicted from both macrospin (at high DC magnetic field) and
finite-difference solutions to the Landau-Lifshitz-Gilbert (LLG) equation. A
global fit of the LLG solutions to the frequency-dependent data reveals a
statistically significant discrepancy between the experimentally observed and
simulated torque phase behaviours at spin texture transitions that can be
reduced through the addition of a time constant to the conversion between
magnetic cross-product torque and mechanical torque, constrained by experiment
to be in the range of 0.5 - 4 ns.
|
2310.18546v2
|
2023-10-31
|
Ensemble models outperform single model uncertainties and predictions for operator-learning of hypersonic flows
|
High-fidelity computational simulations and physical experiments of
hypersonic flows are resource intensive. Training scientific machine learning
(SciML) models on limited high-fidelity data offers one approach to rapidly
predict behaviors for situations that have not been seen before. However,
high-fidelity data is itself in limited quantity to validate all outputs of the
SciML model in unexplored input space. As such, an uncertainty-aware SciML
model is desired. The SciML model's output uncertainties could then be used to
assess the reliability and confidence of the model's predictions. In this
study, we extend a DeepONet using three different uncertainty quantification
mechanisms: mean-variance estimation, evidential uncertainty, and ensembling.
The uncertainty-aware DeepONet models are trained and evaluated on the
hypersonic flow around a blunt cone object with data generated via
computational fluid dynamics over a wide range of Mach numbers and altitudes.
We find that ensembling outperforms the other two uncertainty models in terms
of minimizing error and calibrating uncertainty in both interpolative and
extrapolative regimes.
|
2311.00060v2
|
2023-11-11
|
Double-Free-Layer Stochastic Magnetic Tunnel Junctions with Synthetic Antiferromagnets
|
Stochastic magnetic tunnel junctions (sMTJ) using low-barrier nanomagnets
have shown promise as fast, energy-efficient, and scalable building blocks for
probabilistic computing. Despite recent experimental and theoretical progress,
sMTJs exhibiting the ideal characteristics necessary for probabilistic bits
(p-bit) are still lacking. Ideally, the sMTJs should have (a) voltage bias
independence preventing read disturbance (b) uniform randomness in the
magnetization angle between the free layers, and (c) fast fluctuations without
requiring external magnetic fields while being robust to magnetic field
perturbations. Here, we propose a new design satisfying all of these
requirements, using double-free-layer sMTJs with synthetic antiferromagnets
(SAF). We evaluate the proposed sMTJ design with experimentally benchmarked
spin-circuit models accounting for transport physics, coupled with the
stochastic Landau-Lifshitz-Gilbert equation for magnetization dynamics. We find
that the use of low-barrier SAF layers reduces dipolar coupling, achieving
uncorrelated fluctuations at zero magnetic field that survive up to diameters
exceeding $D\approx 100$ nm if the nanomagnets can be made thin enough
($\approx 1$-$2$ nm). The double-free-layer structure retains bias-independence
and the circular nature of the nanomagnets provides near-uniform randomness
with fast fluctuations. Combining our full sMTJ model with advanced transistor
models, we estimate the energy to generate a random bit as $\approx$ 3.6 fJ,
with fluctuation rates of $\approx$ 3.3 GHz per p-bit. Our results will guide
the experimental development of superior stochastic magnetic tunnel junctions
for large-scale and energy-efficient probabilistic computation for problems
relevant to machine learning and artificial intelligence.
|
2311.06642v2
|
2023-11-14
|
Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators
|
Extensive efforts in automated approaches for content moderation have been
focused on developing models to identify toxic, offensive, and hateful content
with the aim of lightening the load for moderators. Yet, it remains uncertain
whether improvements on those tasks have truly addressed moderators' needs in
accomplishing their work. In this paper, we surface gaps between past research
efforts that have aimed to provide automation for aspects of content moderation
and the needs of volunteer content moderators, regarding identifying violations
of various moderation rules. To do so, we conduct a model review on Hugging
Face to reveal the availability of models to cover various moderation rules and
guidelines from three exemplar forums. We further put state-of-the-art LLMs to
the test, evaluating how well these models perform in flagging violations of
platform rules from one particular forum. Finally, we conduct a user survey
study with volunteer moderators to gain insight into their perspectives on
useful moderation models. Overall, we observe a non-trivial gap, as missing
developed models and LLMs exhibit moderate to low performance on a significant
portion of the rules. Moderators' reports provide guides for future work on
developing moderation assistant models.
|
2311.07879v2
|
2023-11-14
|
All Byzantine Agreement Problems are Expensive
|
Byzantine agreement, arguably the most fundamental problem in distributed
computing, operates among n processes, out of which t < n can exhibit arbitrary
failures. The problem states that all correct (non-faulty) processes must
eventually decide (termination) the same value (agreement) from a set of
admissible values defined by the proposals of the processes (validity).
Depending on the exact version of the validity property, Byzantine agreement
comes in different forms, from Byzantine broadcast to strong and weak
consensus, to modern variants of the problem introduced in today's blockchain
systems. Regardless of the specific flavor of the agreement problem, its
communication cost is a fundamental metric whose improvement has been the focus
of decades of research. The Dolev-Reischuk bound, one of the most celebrated
results in distributed computing, proved 40 years ago that, at least for
Byzantine broadcast, no deterministic solution can do better than Omega(t^2)
exchanged messages in the worst case. Since then, it remained unknown whether
the quadratic lower bound extends to seemingly weaker variants of Byzantine
agreement. This paper answers the question in the affirmative, closing this
long-standing open problem. Namely, we prove that any non-trivial agreement
problem requires Omega(t^2) messages to be exchanged in the worst case. To
prove the general lower bound, we determine the weakest Byzantine agreement
problem and show, via a novel indistinguishability argument, that it incurs
Omega(t^2) exchanged messages.
|
2311.08060v2
|
2023-11-21
|
Nonparametric variable importance for time-to-event outcomes with application to prediction of HIV infection
|
In survival analysis, complex machine learning algorithms have been
increasingly used for predictive modeling. Given a collection of features
available for inclusion in a predictive model, it may be of interest to
quantify the relative importance of a subset of features for the prediction
task at hand. In particular, in HIV vaccine trials, participant baseline
characteristics are used to predict the probability of infection over the
intended follow-up period, and investigators may wish to understand how much
certain types of predictors, such as behavioral factors, contribute toward
overall predictiveness. Time-to-event outcomes such as time to infection are
often subject to right censoring, and existing methods for assessing variable
importance are typically not intended to be used in this setting. We describe a
broad class of algorithm-agnostic variable importance measures for prediction
in the context of survival data. We propose a nonparametric efficient
estimation procedure that incorporates flexible learning of nuisance
parameters, yields asymptotically valid inference, and enjoys
double-robustness. We assess the performance of our proposed procedure via
numerical simulations and analyze data from the HVTN 702 study to inform
enrollment strategies for future HIV vaccine trials.
|
2311.12726v2
|
2023-11-29
|
Atmospheric Escape From Three Terrestrial Planets in the L 98-59 System
|
A critically important process affecting the climate evolution and potential
habitability of an exoplanet is atmospheric escape, in which high-energy
radiation from a star drives the escape of hydrogen atoms and other light
elements from a planet's atmosphere. L 98-59 is a benchmark system for studying
such atmospheric processes, with three transiting terrestrial-size planets
receiving Venus-like instellations (4-25 S$_\oplus$) from their M3 host star.
We use the VPLanet model to simulate the evolution of the L 98-59 system and
the atmospheric escape of its inner three small planets, given different
assumed initial water quantities. We find that, regardless of their initial
water content, all three planets accumulate significant quantities of oxygen
due to efficient water photolysis and hydrogen loss. All three planets also
receive enough XUV flux to drive rapid water loss, which considerably affects
their developing climates and atmospheres. Even in scenarios of low initial
water content, our results suggest that the James Webb Space Telescope (JWST)
will be sensitive to observations of retained oxygen on the L 98-59 planets in
its future scheduled observations, with planets b and c being the most likely
targets to possess an extended atmosphere. Our results constrain the
atmospheric evolution of these small rocky planets, and they provide context
for current and future observations of the L 98-59 system to generalize our
understanding of multi-terrestrial planet systems.
|
2312.00062v1
|
2023-12-03
|
Heisenberg machines with programmable spin-circuits
|
We show that we can harness two recent experimental developments to build a
compact hardware emulator for the classical Heisenberg model in statistical
physics. The first is the demonstration of spin-diffusion lengths in excess of
microns in graphene even at room temperature. The second is the demonstration
of low barrier magnets (LBMs) whose magnetization can fluctuate rapidly even at
sub-nanosecond rates. Using experimentally benchmarked circuit models, we show
that an array of LBMs driven by an external current source has a steady-state
distribution corresponding to a classical system with an energy function of the
form $E = -1/2\sum_{i,j} J_{ij} (\hat{m}_i \cdot \hat{m}_j)$. This may seem
surprising for a non-equilibrium system but we show that it can be justified by
a Lyapunov function corresponding to a system of coupled
Landau-Lifshitz-Gilbert (LLG) equations. The Lyapunov function we construct
describes LBMs interacting through the spin currents they inject into the spin
neutral substrate. We suggest ways to tune the coupling coefficients $J_{ij}$
so that it can be used as a hardware solver for optimization problems involving
continuous variables represented by vector magnetizations, similar to the role
of the Ising model in solving optimization problems with binary variables.
Finally, we implement a Heisenberg AND gate based on a network of three coupled
stochastic LLG equations, illustrating the concept of probabilistic computing
with a programmable Heisenberg model.
|
2312.01477v1
|
2023-12-05
|
A complex-projected Rayleigh quotient iteration for targeting interior eigenvalues
|
We introduce a new Projected Rayleigh Quotient Iteration aimed at improving
the convergence behaviour of classic Rayleigh Quotient iteration (RQI) by
incorporating approximate information about the target eigenvector at each
step. While classic RQI exhibits local cubic convergence for Hermitian
matrices, its global behaviour can be unpredictable, whereby it may converge to
an eigenvalue far away from the target, even when started with accurate initial
conditions. This problem is exacerbated when the eigenvalues are closely
spaced. The key idea of the new algorithm is at each step to add a
complex-valued projection to the original matrix (that depends on the current
eigenvector approximation), such that the unwanted eigenvalues are lifted into
the complex plane while the target stays close to the real line, thereby
increasing the spacing between the target eigenvalue and the rest of the
spectrum. Making better use of the eigenvector approximation leads to more
robust convergence behaviour and the new method converges reliably to the
correct target eigenpair for a significantly wider range of initial vectors
than does classic RQI. We prove that the method converges locally cubically and
we present several numerical examples demonstrating the improved global
convergence behaviour. In particular, we apply it to compute eigenvalues in a
band-gap spectrum of a Sturm-Liouville operator used to model photonic crystal
fibres, where the target and unwanted eigenvalues are closely spaced. The
examples show that the new method converges to the desired eigenpair even when
the eigenvalue spacing is very small, often succeeding when classic RQI fails.
|
2312.02847v2
|
2023-12-14
|
On statistical zonostrophic instability and the effect of magnetic fields
|
Zonal flows are mean flows in the east-west direction, which are ubiquitous
on planets, and can be formed through 'zonostrophic instability': within
turbulence or random waves, a weak large-scale zonal flow can grow
exponentially to become prominent. In this paper, we study the statistical
behaviour of the zonostrophic instability and the effect of magnetic fields. We
use a stochastic white noise forcing to drive random waves, and study the
growth of a mean flow in this random system. The dispersion relation for the
growth rate of the expectation of the mean flow is derived, and properties of
the instability are discussed. In the limits of weak and strong magnetic
diffusivity, the dispersion relation reduces to manageable expressions, which
provide clear insights into the effect of the magnetic field and scaling laws
for the threshold of instability. The magnetic field mainly plays a stabilising
role and thus impedes the formation of the zonal flow, but under certain
conditions it can also have destabilising effects. Numerical simulation of the
stochastic flow is performed to confirm the theory. Results indicate that the
magnetic field can significantly increase the randomness of the zonal flow. It
is found that the zonal flow of an individual realisation may behave very
differently from the expectation. For weak magnetic diffusivity and moderate
magnetic field strengths, this leads to considerable variation of the outcome,
that is, whether zonostrophic instability takes place or not in individual
realisations.
|
2312.08905v1
|
2023-12-19
|
Towards a theta correspondence in families for type II dual pairs
|
Let $R$ be a commutative $\mathbb{Z}[1/p]$-algebra, let $m \leq n$ be
positive integers, and let $G_n=\text{GL}_n(F)$ and $G_m=\text{GL}_m(F)$ where
$F$ is a $p$-adic field. The Weil representation is the smooth $R[G_n\times
G_m]$-module $C_c^{\infty}(\text{Mat}_{n\times m}(F),R)$ with the action
induced by matrix multiplication. When $R=\mathbb{C}$ or is any algebraically
closed field of banal characteristic compared to $G_n$ and $G_m$, the local
theta correspondence holds by the work of Howe and M\'inguez. At the level of
supercuspidal support, we interpret the theta correspondence as a morphism of
varieties $\theta_R$, which we describe as an explicit closed immersion. For
arbitrary $R$, we construct a canonical ring homomorphism $\theta^\#_{R} :
\mathfrak{Z}_{R}(G_n)\to \mathfrak{Z}_{R}(G_m)$ that controls the action of the
center $\mathfrak{Z}_{R}(G_n)$ of the category of smooth $R[G_n]$-modules on
the Weil representation. We use the rank filtration of the Weil representation
to first obtain $\theta_{\mathbb{Z}[1/p]}^\#$, then obtain $\theta^\#_R$ for
arbitrary $R$ by proving $\mathfrak{Z}_R(G_n)$ is compatible with scalar
extension. In particular, the map $\text{Spec}(\mathfrak{Z}_R(G_m))\to
\text{Spec}(\mathfrak{Z}_R(G_n))$ induced by $\theta_R^\#$ recovers $\theta_R$
in the $R=\mathbb{C}$ case and in the banal case. We use gamma factors to prove
$\theta_R^\#$ is surjective for any $R$. Finally, we describe $\theta^\#_R$ in
terms of the moduli space of Langlands parameters and use this description to
give an alternative proof of surjectivity in the tamely ramified case.
|
2312.12031v1
|
2023-12-19
|
Microscopic theory of current-induced skyrmion transport and its application in disordered spin textures
|
Magnetic skyrmions hold great promise for realizing compact and stable memory
devices that can be manipulated at very low energy costs via electronic current
densities. In this work, we extend a recently introduced method to describe
classical skyrmion textures coupled to dynamical itinerant electrons. In this
scheme, the electron dynamics is described via nonequilibrium Green's functions
(NEGF) within the generalized Kadanoff-Baym ansatz, and the classical spins are
treated via the Landau-Lifshitz-Gilbert equation. The framework is here
extended to open systems, by the introduction of a non-interacting
approximation to the collision integral of NEGF. This, in turn, allows us to
perform computations of the real-time response of skyrmions to electronic
currents in large quantum systems coupled to electronic reservoirs, which
exhibit a linear scaling in the number of time steps. We use this approach to
investigate how electronic spin currents and dilute spin disorder affect
skyrmion transport and the skyrmion Hall drift. Our results show that the
skyrmion dynamics is sensitive to the specific form of spin disorder, such that
different disorder configurations lead to qualitatively different skyrmion
trajectories for the same applied bias. This sensitivity arises from the local
spin dynamics around the magnetic impurities, a feature that is expected not to
be well captured by phenomenological or spin-only descriptions. At the same
time, our findings illustrate the potential of engineering microscopic impurity
patterns to steer skyrmion trajectories.
|
2312.12201v1
|
2024-01-09
|
Characterization of two fast-turnaround dry dilution refrigerators for scanning probe microscopy
|
Low-temperature scanning probe microscopes (SPMs) are critical for the study
of quantum materials and quantum information science. Due to the rising costs
of helium, cryogen-free cryostats have become increasingly desirable. However,
they typically suffer from comparatively worse vibrations than cryogen-based
systems, necessitating the understanding and mitigation of vibrations for SPM
applications. Here we demonstrate the construction of two cryogen-free dilution
refrigerator SPMs with minimal modifications to the factory default and we
systematically characterize their vibrational performance. We measure the
absolute vibrations at the microscope stage with geophones, and use both
microwave impedance microscopy and a scanning single electron transistor to
independently measure tip-sample vibrations. Additionally, we implement
customized filtering and thermal anchoring schemes, and characterize the
cooling power at the scanning stage and the tip electron temperature. This work
serves as a reference to researchers interested in cryogen-free SPMs, as such
characterization is not standardized in the literature or available from
manufacturers.
|
2401.04373v1
|
2024-01-11
|
Micromagnetic simulations of the size dependence of the Curie temperature in ferromagnetic nanowires and nanolayers
|
We solve the Landau-Lifshitz-Gilbert equation in the finite-temperature
regime, where thermal fluctuations are modeled by a random magnetic field whose
variance is proportional to the temperature. By rescaling the temperature
proportionally to the computational cell size $\Delta x$ ($T \to T\,\Delta
x/a_{\text{eff}}$, where $a_{\text{eff}}$ is the lattice constant) [M. B. Hahn,
J. Phys. Comm., 3:075009, 2019], we obtain Curie temperatures $T_{\text{C}}$
that are in line with the experimental values for cobalt, iron and nickel. For
finite-sized objects such as nanowires (1D) and nanolayers (2D), the Curie
temperature varies with the smallest size $d$ of the system. We show that the
difference between the computed finite-size $T_{\text{C}}$ and the bulk
$T_{\text{C}}$ follows a power-law of the type: $(\xi_0/d)^\lambda$, where
$\xi_0$ is the correlation length at zero temperature, and $\lambda$ is a
critical exponent. We obtain values of $\xi_0$ in the nanometer range, also in
accordance with other simulations and experiments. The computed critical
exponent is close to $\lambda=2$ for all considered materials and geometries.
This is the expected result for a mean-field approach, but slightly larger than
the values observed experimentally.
|
2401.05722v1
|
2024-01-24
|
How AI Ideas Affect the Creativity, Diversity, and Evolution of Human Ideas: Evidence From a Large, Dynamic Experiment
|
Exposure to large language model output is rapidly increasing. How will
seeing AI-generated ideas affect human ideas? We conducted an experiment (800+
participants, 40+ countries) where participants viewed creative ideas that were
from ChatGPT or prior experimental participants and then brainstormed their own
idea. We varied the number of AI-generated examples (none, low, or high
exposure) and if the examples were labeled as 'AI' (disclosure). Our dynamic
experiment design -- ideas from prior participants in an experimental condition
are used as stimuli for future participants in the same experimental condition
-- mimics the interdependent process of cultural creation: creative ideas are
built upon prior ideas. Hence, we capture the compounding effects of having
LLMs 'in the culture loop'. We find that high AI exposure (but not low AI
exposure) did not affect the creativity of individual ideas but did increase
the average amount and rate of change of collective idea diversity. AI made
ideas different, not better. There were no main effects of disclosure. We also
found that self-reported creative people were less influenced by knowing an
idea was from AI, and that participants were more likely to knowingly adopt AI
ideas when the task was difficult. Our findings suggest that introducing AI
ideas into society may increase collective diversity but not individual
creativity.
|
2401.13481v1
|
2024-01-31
|
Multimaterial Inkjet Printing of Mechanochromic Materials
|
Inkjet printing technology achieves the precise deposition of liquid-phase
materials via the digitally controlled formation of picoliter-sized droplets.
Beyond graphical printing, inkjet printing has been employed for the deposition
of separated drops on surfaces or the formation of continuous layers, which
makes it possible to construct material gradients or periodic features that provide
enhanced functionalities. Here, we explore the use of multinozzle,
drop-on-demand piezoelectric inkjet technology for the manufacturing of
mechanochromic materials, i.e., materials that change their color or
fluorescence in response to mechanical deformation. To accomplish this,
suitable polyurethane polymers of differing hardness grades were tested with a
range of organic solvents to formulate low-viscosity, inkjet-printable
solutions. Following their rheological characterization, two solutions
comprised of "soft" and "hard" polyurethanes were selected for in-depth study.
The solutions were imbibed with a mechanochromic additive to yield fluorescent
inks, which were either dropcast onto polymeric substrates or printed to form
checkerboard patterns of alternating hardness using a lab-built, multimaterial
inkjet platform. Fluorescence imaging and spectroscopy were used to identify
different hardness grades in the dropcast and printed materials, as well as to
monitor the responses of these gradient materials to mechanical deformation.
The insights gained in this study are expected to facilitate the development of
inkjet-printable, mechanochromic polymer materials for a wide range of
applications.
|
2401.17758v2
|
2024-01-11
|
Resonant inelastic x-ray scattering in warm-dense Fe compounds beyond the SASE FEL resolution limit
|
Resonant inelastic x-ray scattering (RIXS) is a widely used spectroscopic
technique, providing access to the electronic structure and dynamics of atoms,
molecules, and solids. However, RIXS requires a narrow bandwidth x-ray probe to
achieve high spectral resolution. The challenges in delivering an energetic
monochromated beam from an x-ray free electron laser (XFEL) thus limit its use
in few-shot experiments, including for the study of high energy density
systems. Here we demonstrate that by correlating the measurements of the
self-amplified spontaneous emission (SASE) spectrum of an XFEL with the RIXS
signal, using a dynamic kernel deconvolution with a neural surrogate, we can
achieve electronic structure resolutions substantially higher than those
normally afforded by the bandwidth of the incoming x-ray beam. We further show
how this technique allows us to discriminate between the valence structures of
Fe and Fe$_2$O$_3$, and provides access to temperature measurements as well as
M-shell binding energies estimates in warm-dense Fe compounds.
|
2402.00039v1
|
2024-02-08
|
Trustful Coopetitive Infrastructures for the New Space Exploration Era
|
In the new space economy, space agencies, large enterprises, and start-ups
aim to launch space multi-robot systems (MRS) for various in-situ resource
utilization (ISRU) purposes, such as mapping, soil evaluation, and utility
provisioning. However, these stakeholders' competing economic interests may
hinder effective collaboration on a centralized digital platform. To address
this issue, neutral and transparent infrastructures could facilitate
coordination and value exchange among heterogeneous space MRS. While related
work has expressed legitimate concerns about the technical challenges
associated with blockchain use in space, we argue that weighing its potential
economic benefits against its drawbacks is necessary. This paper presents a
novel architectural framework and a comprehensive set of requirements for
integrating blockchain technology in MRS, aiming to enhance coordination and
data integrity in space exploration missions. We explored distributed ledger
technology (DLT) to design a non-proprietary architecture for heterogeneous MRS
and validated the prototype in a simulated lunar environment. Analyses of
our implementation suggest global ISRU efficiency improvements for map
exploration compared with a corresponding group of individually acting robots,
and indicate that fostering a coopetitive environment may provide additional
revenue opportunities for stakeholders.
|
2402.06014v1
|
2024-02-08
|
Designing Trustful Cooperation Ecosystems is Key to the New Space Exploration Era
|
In the emerging space economy, autonomous robotic missions with specialized
goals such as mapping and mining are gaining traction, with agencies and
enterprises increasingly investing resources. Multirobot systems (MRS) research
has provided many approaches to establish control and communication layers to
facilitate collaboration from a technical perspective, such as granting more
autonomy to heterogeneous robotic groups through auction-based interactions in
mesh networks. However, stakeholders' competing economic interests often
prevent them from cooperating within a proprietary ecosystem. Related work
suggests that distributed ledger technology (DLT) might serve as a mechanism
for enterprises to coordinate workflows and trade services to explore space
resources through a transparent, reliable, non-proprietary digital platform. We
challenge this perspective by pointing to the core technical weaknesses of
blockchains, in particular, increased energy consumption, low throughput, and
full transparency through redundancy. Our objective is to advance the
discussion in a direction where the benefits of DLT from an economic
perspective are weighted against the drawbacks from a technical perspective. We
finally present a possible DLT-driven heterogeneous MRS for map exploration to
study the opportunities for economic collaboration and competitiveness.
|
2402.06036v1
|
2024-02-19
|
Density estimation for elliptic PDE with random input by preintegration and quasi-Monte Carlo methods
|
In this paper, we apply quasi-Monte Carlo (QMC) methods with an initial
preintegration step to estimate cumulative distribution functions and
probability density functions in uncertainty quantification (UQ). The
distribution and density functions correspond to a quantity of interest
involving the solution to an elliptic partial differential equation (PDE) with
a lognormally distributed coefficient and a normally distributed source term.
There is extensive previous work on using QMC to compute expected values in UQ,
which have proven very successful in tackling a range of different PDE
problems. However, the use of QMC for density estimation applied to UQ problems
will be explored here for the first time. Density estimation presents a more
difficult challenge compared to computing the expected value due to
discontinuities present in the integral formulations of both the distribution
and density. Our strategy is to use preintegration to eliminate the
discontinuity by integrating out a carefully selected random parameter, so that
QMC can be used to approximate the remaining integral. First, we establish
regularity results for the PDE quantity of interest that are required for
smoothing by preintegration to be effective. We then show that an $N$-point
lattice rule can be constructed for the integrands corresponding to the
distribution and density, such that after preintegration the QMC error is of
order $\mathcal{O}(N^{-1+\epsilon})$ for arbitrarily small $\epsilon>0$. This
is the same rate achieved for computing the expected value of the quantity of
interest. Numerical results are presented to reaffirm our theory.
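The preintegration idea described above (integrate out one well-chosen variable analytically so the discontinuous indicator becomes smooth, then apply a rank-1 lattice rule to the rest) can be sketched on a toy quantity of interest, not the PDE one: for $Q = w_1 + w_2^2 + w_3^2$ with i.i.d. standard normals, integrating out $w_1$ gives the smooth integrand $\Phi(t - w_2^2 - w_3^2)$. The lattice size and generating vector below are a standard 2D Fibonacci choice, assumed for illustration.

```python
import numpy as np
from statistics import NormalDist

# Hedged toy of preintegration + QMC: the discontinuous indicator 1{Q <= t}
# is smoothed by integrating out w1 analytically,
#   P(Q <= t | w2, w3) = Phi(t - w2**2 - w3**2),
# and the remaining 2D integral uses a randomly shifted rank-1 lattice rule
# (Fibonacci lattice: N = 610, z = (1, 377)).
nd = NormalDist()

def cdf_estimate(t, n=610, z=(1, 377), shift=(0.123, 0.456)):
    i = np.arange(n)
    u = np.stack([(i * z[0] / n + shift[0]) % 1.0,
                  (i * z[1] / n + shift[1]) % 1.0], axis=1)   # lattice points in [0,1)^2
    w = [[nd.inv_cdf(x) for x in row] for row in u]           # map to normals
    smooth = [nd.cdf(t - w2 * w2 - w3 * w3) for w2, w3 in w]  # preintegrated integrand
    return float(np.mean(smooth))

print(cdf_estimate(2.0))
```

Because the preintegrated integrand is smooth, the lattice rule converges at close to first order, which is the mechanism behind the $\mathcal{O}(N^{-1+\epsilon})$ rate claimed for the PDE problem.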
|
2402.11807v1
|
2024-02-29
|
Magnon spectrum of altermagnets: Time-dependent matrix product states vs. linearized Holstein-Primakoff calculations unravelling spontaneous magnon decay
|
The energy-momentum dispersion of magnons, viewed as noninteracting and
infinitely long-lived quasiparticles describing collective low-energy
excitations of magnetic materials, is often presented as sharp bands obtained
from the effective quantum spin Hamiltonian, after being simplified via
linearized Holstein-Primakoff (HP) transformations. However, magnons are prone
to many-body interactions with other quasiparticles which can lead to their
spontaneous decay. The magnon-magnon interactions could affect newly classified
altermagnets. On the other hand, sharp bands of noninteracting chiral magnons
in RuO2, as the canonical example of altermagnets, have been very recently
predicted. Here, we employ nonperturbative numerically (quasi)exact quantum
many-body calculations, via time-dependent matrix product states (TDMPS), to
obtain the magnon spectral function of RuO2. These calculations produce a broadened
magnon dispersion that overlaps with the sharp bands of linearized HP theory only
at the edges/center of the Brillouin zone and deviates substantially otherwise.
Artificially making the exchange interactions within the two sublattices of RuO2 closer
in value forces these two spectra to overlap, thereby explaining the origin of
the failure of linearized HP theory. Such features translate into the
difference between their respective density of states, which we also compute
and which could be tested by Raman scattering experiments. Finally, we employ
popular Landau-Lifshitz-Gilbert (LLG) equation-based classical atomistic spin
dynamics (ASD) simulations to obtain dynamical structure factor and extract
magnon spectrum from it at finite temperature. Despite including magnon-magnon
interactions via nonlinearity of LLG equation, ASD simulations cannot fully
match the TDMPS-computed magnon spectrum due to nonclassical effects harbored
by altermagnets.
|
2402.19433v1
|
2024-03-07
|
Controllable Skyrmion Islands in a Moiré Magnet
|
Antiferromagnetic (AFM) skyrmions have been in the spotlight as ideal
topological magnetic bits. Although topologically protected, they do
not exhibit the skyrmion Hall effect, unlike their ferromagnetic counterparts.
AFM skyrmions are therefore considered to offer better control of skyrmion motion
due to the absence of the skyrmion Magnus effect. In this work, we propose a
possible realization of controllable AFM skyrmions in a twisted Moir\'e magnet.
The tunability of Moir\'e materials is not only a good platform for the
provision of rich phases, but also for the stabilization of skyrmion phase. We
investigate the ground state of twisted bilayer AFM system by solving the
Landau-Lifshitz-Gilbert equation in a continuum model. We show that the AFM
skyrmions are stabilized even in the absence of the external/dipolar magnetic
field, as a consequence of the interplay of interlayer coupling,
Dzyaloshinskii-Moriya (DM) interaction and Ising anisotropy. More
interestingly, due to the magnetoelectric effect, the application of an
external electric field locally stabilizes the skyrmions in the twisted bilayer
AFM systems, even in the absence of DM interaction. It also allows the skyrmion
helicity to change continuously when both the DM interaction and an electric
field are present. We show the phase diagram with respect to the strength of
interlayer coupling, the DM interaction and an electric field. Our results
suggest the possibility of using AFM skyrmions as stable, controllable
topological magnetic bits.
|
2403.04208v1
|
2024-03-08
|
A Data Augmentation Pipeline to Generate Synthetic Labeled Datasets of 3D Echocardiography Images using a GAN
|
Due to privacy issues and the limited number of publicly available labeled
datasets in the domain of medical imaging, we propose an image generation
pipeline to synthesize 3D echocardiographic images with corresponding ground
truth labels, to alleviate the need for data collection and for laborious and
error-prone human labeling of images for subsequent Deep Learning (DL) tasks.
The proposed method utilizes detailed anatomical segmentations of the heart as
ground truth label sources. This initial dataset is combined with a second
dataset made up of real 3D echocardiographic images to train a Generative
Adversarial Network (GAN) to synthesize realistic 3D cardiovascular Ultrasound
images paired with ground truth labels. To generate the synthetic 3D dataset,
the trained GAN uses high resolution anatomical models from Computed Tomography
(CT) as input. A qualitative analysis of the synthesized images showed that the
main structures of the heart are well delineated and closely follow the labels
obtained from the anatomical models. To assess the usability of these synthetic
images for DL tasks, segmentation algorithms were trained to delineate the left
ventricle, left atrium, and myocardium. A quantitative analysis of the 3D
segmentations given by the models trained with the synthetic images indicated
the potential use of this GAN approach to generate 3D synthetic data, use the
data to train DL models for different clinical tasks, and therefore tackle the
problem of scarcity of 3D labeled echocardiography datasets.
|
2403.05384v1
|
2024-03-10
|
Dynamical generation of skyrmion and bimeron crystals by a circularly polarized electric field in frustrated magnets
|
A skyrmion crystal (SkX) has attracted much attention in condensed matter
physics, since topologically nontrivial structures induce fascinating physical
phenomena. The SkXs have been experimentally observed in a variety of
materials, where the Zeeman coupling to the static magnetic field plays an
important role in the formation of the SkXs. In this study, we theoretically
propose another route to generate the SkXs by using a circularly polarized
electric field. We investigate a non-equilibrium steady state in a classical
frustrated Heisenberg magnet under the circularly polarized electric field,
where the electric field is coupled to the electric polarization via the
spin-current mechanism. By numerically solving the Landau-Lifshitz-Gilbert
equation at zero temperature, we show that the electric field radiation
generates a SkX with a high topological number in the high-frequency regime,
where the sign of the skyrmion number is fixed to be negative (positive) under
the left (right) circularly polarized field. An intense electric field melts
these SkXs and generates isolated skyrmions. By adopting the high-frequency
expansion in the Floquet formalism, we clarify that the microscopic origin lies
in effective electric-field-induced three-spin interactions. Furthermore, we find
that the electric field radiation generates another type of SkXs, a bimeron
crystal, in the low-frequency regime. Our results provide a way to generate the
SkXs and control the topology by the circularly polarized electric field.
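The Landau-Lifshitz-Gilbert time stepping underlying such simulations can be sketched in miniature. The toy below evolves a single classical spin in a static field rather than the driven frustrated lattice of the paper; the damping constant, field, and step size are assumed values chosen only to make the relaxation visible.

```python
import numpy as np

# Minimal LLG sketch for one classical spin in a static field B (toy model):
#   dm/dt = -gamma/(1+alpha^2) * [ m x B + alpha * m x (m x B) ]
# Explicit Euler stepping with renormalization to keep |m| = 1.
gamma, alpha = 1.0, 0.1
B = np.array([0.0, 0.0, 1.0])
m = np.array([1.0, 0.0, 0.0])    # start perpendicular to the field
dt, steps = 0.01, 20000

for _ in range(steps):
    mxB = np.cross(m, B)
    dm = -gamma / (1 + alpha**2) * (mxB + alpha * np.cross(m, mxB))
    m = m + dt * dm
    m /= np.linalg.norm(m)       # spin length is conserved by LLG

print(m)  # damping drives m toward the field direction (0, 0, 1)
```

The full simulations replace the static field with site-dependent effective fields from exchange, the spin-current electric-field coupling, and anisotropy, but the per-spin update has the same structure.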
|
2403.06118v1
|
2024-03-12
|
Flexible Non-intrusive Dynamic Instrumentation for WebAssembly
|
A key strength of managed runtimes over hardware is the ability to gain
detailed insight into the dynamic execution of programs with instrumentation.
Analyses such as code coverage, execution frequency, tracing, and debugging,
are all made easier in a virtual setting. As a portable, low-level bytecode,
WebAssembly offers inexpensive in-process sandboxing with high performance. Yet
to date, Wasm engines have not offered much insight into executing programs,
supporting at best bytecode-level stepping and basic source maps, but no
instrumentation capabilities. In this paper, we show the first non-intrusive
dynamic instrumentation system for WebAssembly in the open-source Wizard
Research Engine. Our innovative design offers a flexible, complete hierarchy of
instrumentation primitives that support building high-level, complex analyses
in terms of low-level, programmable probes. In contrast to emulation or machine
code instrumentation, injecting probes at the bytecode level increases
expressiveness and vastly simplifies the implementation by reusing the engine's
JIT compiler, interpreter, and deoptimization mechanism rather than building
new ones. Wizard supports both dynamic instrumentation insertion and removal
while providing consistency guarantees, which is key to composing multiple
analyses without interference. We detail a fully-featured implementation in a
high-performance multi-tier Wasm engine, show novel optimizations specifically
designed to minimize instrumentation overhead, and evaluate performance
characteristics under load from various analyses. This design is well-suited
for production engine adoption as probes can be implemented to have no impact
on production performance when not in use.
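The probe model described above, with analyses attaching callbacks that observe each bytecode without altering interpreter semantics, can be caricatured in a toy stack machine. The class and callback signature below are hypothetical illustrations, not Wizard's actual API.

```python
from collections import Counter

# Toy stack-machine with non-intrusive probes (hypothetical API): an analysis
# registers a callback that fires before each instruction, so counting,
# tracing, or coverage can be layered on without touching the dispatch logic.
class Interp:
    def __init__(self):
        self.probes = []                 # callbacks: fn(pc, op, stack)

    def run(self, code):
        stack, pc = [], 0
        while pc < len(code):
            op, *args = code[pc]
            for probe in self.probes:    # observation only; no semantic effect
                probe(pc, op, stack)
            if op == "push":
                stack.append(args[0])
            elif op == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "jmp_if_pos":
                if stack.pop() > 0:
                    pc = args[0]
                    continue
            pc += 1
        return stack

hot = Counter()
vm = Interp()
vm.probes.append(lambda pc, op, stack: hot.update([pc]))  # frequency analysis
result = vm.run([("push", 2), ("push", 3), ("add",)])
print(result, dict(hot))  # [5] {0: 1, 1: 1, 2: 1}
```

An engine-level implementation would instead splice probes into JIT-compiled or interpreted code paths, which is what lets them cost nothing when no probe is installed.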
|
2403.07973v1
|
2024-03-13
|
Highly confined epsilon-near-zero- and surface-phonon polaritons in SrTiO3 membranes
|
Recent theoretical studies have suggested that transition metal perovskite
oxide membranes can enable surface phonon polaritons in the infrared range with
low loss and much stronger subwavelength confinement than bulk crystals. Such
modes, however, have not been experimentally observed so far. Here, using a
combination of far-field Fourier-transform infrared (FTIR) spectroscopy and
near-field synchrotron infrared nanospectroscopy (SINS) imaging, we study the
phonon-polaritons in a 100 nm thick freestanding crystalline membrane of SrTiO3
transferred on metallic and dielectric substrates. We observe a
symmetric-antisymmetric mode splitting giving rise to epsilon-near-zero and
Berreman modes as well as highly confined (by a factor of 10) propagating
phonon polaritons, both of which result from the deep-subwavelength thickness
of the membranes. Theoretical modeling based on the analytical finite-dipole
model and numerical finite-difference methods fully corroborate the
experimental results. Our work reveals the potential of oxide membranes as a
promising platform for infrared photonics and polaritonics.
|
2403.08500v1
|
2024-03-18
|
Lattice QCD estimates of thermal photon production from the QGP
|
Thermal photons produced in heavy-ion collision experiments are an important
observable for understanding quark-gluon plasma (QGP). The thermal photon rate
from the QGP at a given temperature can be calculated from the spectral
function of the vector current correlator. Extraction of the spectral function
from the lattice correlator is known to be an ill-conditioned problem, as there
is no unique solution for a spectral function for a given lattice correlator
with statistical errors. The vector current correlator, on the other hand,
receives a large ultraviolet contribution from the vacuum, which makes the
extraction of the thermal photon rate from this channel difficult. We therefore
consider the difference between the transverse and longitudinal parts of the
spectral function, which captures only the thermal contribution to the current
correlator and simplifies the reconstruction significantly. The lattice
correlator is calculated for light quarks in quenched QCD at $T=470~$MeV ($\sim
1.5\, T_c$), as well as in 2+1 flavor QCD at $T=220~$MeV ($\sim 1.2 \, T_{pc}$)
with $m_{\pi}=320$ MeV. In order to quantify the non-perturbative effects, the
lattice correlator is compared with the corresponding
$\text{NLO}+\text{LPM}^{\text{LO}}$ estimate of the correlator. The reconstruction
of the spectral function is performed in several different frameworks, ranging
from physics-informed models of the spectral function to more general models in
the Backus-Gilbert method and Gaussian Process regression. We find that the
resulting photon rates agree within errors.
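The ill-posed character of extracting a spectral function from a Euclidean correlator can be illustrated with a Tikhonov-regularized toy inversion, a simpler cousin of the Backus-Gilbert and Gaussian-process methods named above. The kernel and mock spectral peak below are invented for the sketch.

```python
import numpy as np

# Toy spectral reconstruction: G(tau) = Int domega K(tau, omega) rho(omega).
# Many rho's fit a noisy G, so a regularizer must select one; here we use
# Tikhonov (ridge) regularization as the simplest stand-in.
omega = np.linspace(0.01, 10.0, 200)
tau = np.linspace(0.05, 1.0, 30)
domega = omega[1] - omega[0]
K = np.exp(-np.outer(tau, omega)) * domega       # assumed toy kernel
rho_true = np.exp(-((omega - 3.0) ** 2) / 0.5)   # mock spectral peak at omega = 3
G = K @ rho_true + 1e-6 * np.random.default_rng(0).standard_normal(len(tau))

lam = 1e-4                                       # regularization strength
rho_fit = np.linalg.solve(K.T @ K + lam * np.eye(len(omega)), K.T @ G)
print("peak location:", omega[np.argmax(rho_fit)])
```

The reconstruction reproduces the data well but its detailed shape depends on the regularization, which is why the abstract compares several independent frameworks and checks that the resulting rates agree.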
|
2403.11647v1
|
2024-03-20
|
Optimal Risk-Sensitive Scheduling Policies for Remote Estimation of Autoregressive Markov Processes
|
We design scheduling policies that minimize a risk-sensitive cost criterion
for a remote estimation setup. Since risk-sensitive cost objective takes into
account not just the mean value of the cost, but also higher order moments of
its probability distribution, the resulting policy is robust to changes in the
underlying system's parameters. The setup consists of a sensor that observes a
discrete-time autoregressive Markov process, and at each time $t$ decides
whether or not to transmit its observations to a remote estimator using an
unreliable wireless communication channel after encoding these observations
into data packets. We model the communication channel as a Gilbert-Elliott
channel \cite{10384144}. The sensor probes the channel \cite{laourine2010betting}
and hence knows the channel state at each time $t$ before making a scheduling
decision. The scheduler has to minimize the expected value of the exponential
of the finite-horizon cumulative cost, which is the sum of the following two
quantities: (i) the cumulative transmission power consumed, and (ii) the cumulative
squared estimator error. We pose this dynamic optimization problem as a Markov
decision process (MDP), in which the system state at time $t$ is composed of
(i) the instantaneous error $\Delta(t):= x(t)-a\hat{x}(t-1)$, where
$x(t),\hat{x}(t-1)$ are the system state and the estimate at time $t,t-1$
respectively, and (ii) the channel state $c(t)$. We show that there exists an
optimal policy that has a threshold structure, i.e., at each time $t$, for each
possible channel state $c$, there is a threshold $\Delta^*(c)$ such that if the
current channel state is $c$, then it transmits only when the error $\Delta(t)$
exceeds $\Delta^*(c)$.
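The threshold structure described above can be sketched with a toy simulation. The channel parameters, thresholds, power price, and risk-sensitivity parameter below are invented, not the paper's optimal values; the point is only the shape of the policy (transmit when the innovation error exceeds a channel-dependent threshold) and the risk-sensitive objective.

```python
import numpy as np

# Toy remote-estimation loop with a channel-state-dependent threshold policy
# over a Gilbert-Elliott channel (state 0 = good, 1 = bad). All numbers are
# assumed for illustration.
rng = np.random.default_rng(1)
a, horizon, runs = 0.9, 100, 2000
p_stay = {0: 0.9, 1: 0.8}          # channel-state persistence
p_success = {0: 0.95, 1: 0.3}      # packet delivery probability per state
threshold = {0: 0.5, 1: 1.5}       # transmit less eagerly on the bad channel
tx_power, theta = 1.0, 0.05        # power price and risk-sensitivity

costs = []
for _ in range(runs):
    x, xhat, c, cost = 0.0, 0.0, 0, 0.0
    for _ in range(horizon):
        x = a * x + rng.standard_normal()      # AR(1) source
        delta = x - a * xhat                   # innovation error Delta(t)
        transmit = abs(delta) > threshold[c]   # threshold policy
        delivered = transmit and rng.random() < p_success[c]
        xhat = x if delivered else a * xhat
        cost += tx_power * transmit + (x - xhat) ** 2
        c = c if rng.random() < p_stay[c] else 1 - c
    costs.append(cost)

# Risk-sensitive objective: (1/theta) * log E[exp(theta * cost)].
risk_sensitive = np.log(np.mean(np.exp(theta * np.array(costs)))) / theta
print(f"risk-sensitive cost: {risk_sensitive:.1f}, mean cost: {np.mean(costs):.1f}")
```

By Jensen's inequality the risk-sensitive value is never below the mean cost; the gap grows with the variance of the cost, which is what makes the resulting policies robust to parameter changes.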
|
2403.13898v1
|
2024-03-27
|
The Correlations of Scene Complexity, Workload, Presence, and Cybersickness in a Task-Based VR Game
|
This investigation examined the relationships among scene complexity,
workload, presence, and cybersickness in virtual reality (VR) environments.
Numerous factors can influence the overall VR experience, and existing research
on this matter is not yet conclusive, warranting further investigation. In this
between-subjects experimental setup, 44 participants engaged in the Pendulum
Chair game, with half exposed to a simple scene with lower optic flow and lower
familiarity, and the remaining half to a complex scene characterized by higher
optic flow and greater familiarity. The study measured the dependent variables
workload, presence, and cybersickness and analyzed their correlations.
Equivalence testing was also used to compare the simple and complex
environments. Results revealed that despite the visible differences between the
environments, within the 10% boundaries of the maximum possible value for
workload and presence, and 13.6% of the maximum SSQ value, a statistically
significant equivalence was observed between the simple and complex scenes.
Additionally, a moderate, negative correlation emerged between workload and SSQ
scores. The findings suggest two key points: (1) the nature of the task can
mitigate the impact of scene complexity factors such as optic flow and
familiarity, and (2) the correlation between workload and cybersickness may
vary, showing either a positive or negative relationship.
|
2403.19019v1
|
2024-03-28
|
Long-range Phase Coherence and Tunable Second Order $φ_0$-Josephson Effect in a Dirac Semimetal $1T-PtTe_2$
|
Superconducting diode effects have recently attracted much attention for
their potential applications in superconducting logic circuits. Several
mechanisms such as magneto-chiral effects, finite momentum Cooper pairing,
asymmetric edge currents have been proposed to give rise to a supercurrent
diode effect in different materials. In this work, we establish the presence of
a large intrinsic Josephson diode effect in a type-II Dirac semimetal
$1T-PtTe_2$ facilitated by its helical spin-momentum locking and distinguish it
from other extrinsic effects. The magnitude of the Josephson diode effect is
shown to be directly correlated to the large second-harmonic component of the
supercurrent that is induced by the significant contribution of the topological
spin-momentum locked states that promote coherent Andreev processes in the
junction. We denote such junctions, where the relative phase between the two
harmonics corresponding to charge transfers of $2e$ and $4e$ can be tuned by a
magnetic field, as second order ${\phi}_0$-junctions. The direct correspondence
between the second harmonic supercurrent component and the diode effect in
$1T-PtTe_2$ junctions makes topological semimetals with high transparency an
ideal platform to study and implement the Josephson diode effect, while also
enabling further research on higher order supercurrent transport in Josephson
junctions.
|
2403.19445v1
|