package utils

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// FilesToFileList writes the absolute path of each file in files (joined
// against root, with duplicates skipped) to a temporary file, one path per
// line, and returns the open file handle.
func FilesToFileList(files []string, root string) (*os.File, error) {
	tmpfile, err := ioutil.TempFile("", "updated")
	if err != nil {
		return nil, err
	}
	seen := make(map[string]bool)
	for _, path := range files {
		absPath := filepath.Join(root, path)
		if seen[absPath] {
			continue
		}
		if _, err := tmpfile.Write([]byte(absPath + "\n")); err != nil {
			tmpfile.Close()
			os.Remove(tmpfile.Name())
			return nil, err
		}
		seen[absPath] = true
	}
	// Remember: this is an open filehandle and you need to remember
	// to close it or hilarity will ensue...
	return tmpfile, nil
}
Source: https://math.meta.stackexchange.com/questions/linked/19135?sort=hot&page=2 (retrieved 2020-03-29)

974 views

### Congratulations robjohn for getting into the 100k club. [duplicate]

We truly appreciate your constant presence and insightful hints in the Mathematics chat room. Your dedication to learning is inspiring. Thank you also, for all your hard work as a moderator.

1k views

### Congratulations, Ross Millikan! [duplicate]

Many congratulations to Ross Millikan on reaching 100000 reputation points. Very well deserved.

786 views

### Congratulations to Robert Israel! [duplicate]

With Robert, we now have two rows of 100k members complete. Thanks for all your contributions... $\hskip2.3in$

1k views

### Congratulations, joriki! [duplicate]

Congratulations to joriki for getting $100,001>100,000$ reputation! Thanks for your amazing answers! You've helped a countless number of people in the M.SE community. I will wish you, on the behalf ...

1k views

### Congratulations to André! [duplicate]

Congratulations to André Nicolas $300 K$ edition!!!! Just writing this message to congratulate André for being the first user in MSE to reach 300 K.

2k views

### Congratulations, amWhy hit 100k! [duplicate]

Congratulations to @amWhy for hitting the 100k mark today. As far as I know, she is the first woman to also reach this amazing milestone and we are lucky that she is a participant here!

785 views

### Congratulations once again, André!!! [duplicate]

From today André Nicolas is the top MSE user. Congratulations André!

1k views

### Congratulations, lab bhattacharjee! [duplicate]

Congratulations lab bhattacharjee on reaching 100k reputation! We all appreciate your continued efforts on Math.SE.

6k views

### History of Math.StackExchange

This thread is used to record significant events in the life of Math.StackExchange. What should be recorded. Creation of the site (proposed, beta tested, graduated). Technological innovations like ...

1k views

### Are congratulatory posts off-topic on meta? Or only some of them?

Posts commenting on some achievements of math.SE users have been posted quite often on meta. Usually it was about a user reaching some reputation milestone, such as 100k or 200k. (I also remember a post ...

1k views

### Let's make one big, happy "Congratulations!" thread.

Update: The proposal in question has been implemented. We recently had a discussion about the celebration tag that raised some important issues, but did not result in a consensus. I'd like to pitch ...

1k views

### Congratulations (again), Daniel Fischer! [closed]

No, this is not a belated post about the moderator election results. Instead, I noticed that Daniel Fischer has very recently topped 100K in reputation. And he accomplished that in one year, six ...
# jest (C++14 unit test framework)

jest is a sane and minimal (header-only) C++ unit test framework that uses a template-based approach to registering tests. Absolutely no macros are used or needed for test writing, and the whole API can be described in the following example:

```cpp
#include <stdexcept>
#include <jest/jest.hpp>

/* Step 1: Define a group type and object. */
struct ex_1{ };
using ex_1_group = jest::group<ex_1>;
ex_1_group const ex_1_obj{ "example" };

/* Step 2: Specialize for your group and all tests you'd like. */
namespace jest
{
  template <> template <>
  void ex_1_group::test<0>() /* Tests are numbered; the order does not matter. */
  {
    int const i{};
    float const f{ 3.14 };
    expect(i == f);
  }

  template <> template <> /* Double template bit here is required. */
  void ex_1_group::test<1>()
  {
    int const i{};
    float const f{ 3.14 };
    float const f2{ f * 2.0f };
    expect_equal(i, f, f2); /* Variadic; compares all permutations of pairs. */
  }

  template <> template <>
  void ex_1_group::test<2>()
  { fail("woops"); }

  template <> template <>
  void ex_1_group::test<3>()
  { fail(); }

  template <> template <>
  void ex_1_group::test<4>()
  { expect_equal(0, 0.0f, 0x00, 0000, 0b000); }

  template <> template <>
  void ex_1_group::test<5>()
  { expect_almost_equal(3.140000f, 3.140001f); } /* Using combined tolerance. */

  template <> template <>
  void ex_1_group::test<28>() /* Test numbers do not need to be sequential. */
  { expect_equal("string", "String"); }

  template <> template <>
  void ex_1_group::test<29>()
  {
    expect_exception<std::runtime_error>([] /* Variadic; any number of exception types. */
    { throw std::runtime_error{""}; });
  }
}

/* Step 3: Create a worker which will run the tests. */
int main()
{
  jest::worker const j{};
  return j();
}
```

Possible output:

```
running group 'example'
  test 0 failure: failed 'unexpected' (false)
  test 1 failure: failed 'not equal' (0, 3.14)
  test 2 failure: failed 'explicit fail' ("woops")
  test 3 failure: failed 'explicit fail' ("")
  test 4 success
  test 28 failure: failed 'not equal' ("string", "String")
  test 29 success
finished group 'example'
5/7 test(s) failed
```

### What the hell is `template <> template <>`?

You're specializing a member function of `jest::group`, which is parameterized on your test type. It also inherits from your test type, giving direct access to your test type's member variables from within `jest::group::test`.

An example of where I use this group-specific data is for testing the output of certain functions to stdout. Add a `std::stringstream` to the test data, redirect `std::cout` to it, and now you can check its contents for each test. Example:

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <sstream>
#include <string>
#include <jest/jest.hpp>

/* My test type is output, which has a stringstream. */
struct output
{
  output()
  { std::cout.rdbuf(out.rdbuf()); }

  std::stringstream out; /* This will be accessible in each test. */
};
using output_group = jest::group<output>;
output_group const output_obj{ "output" };

namespace jest
{
  namespace detail
  {
    void speak()
    { std::cout << "BARK"; }

    void yell(std::string const &s)
    {
      std::transform(std::begin(s), std::end(s),
                     std::ostream_iterator<char>(std::cout),
                     [](char const c)
                     { return std::toupper(c); });
    }
  }

  template <> template <>
  void output_group::test<0>()
  {
    /* Here, I can access `out`, a member variable of my test type. */
    detail::speak();
    expect_equal(out.str(), "BARK");
    out.str("");
  }

  template <> template <>
  void output_group::test<1>()
  {
    detail::yell("testing is the best");
    expect_equal(out.str(), "TESTING IS THE BEST");
    out.str("");
  }
}

int main()
{
  jest::worker const j{};
  return j();
}
```

Possible output:

```
running group 'output'
  test 0 success
  test 1 success
finished group 'output'
all 2 tests passed
```

### Installation

Since jest is a header-only library, simply copy over the contents of `include` to your project, or, better yet, add jest as a submodule and introduce `jest/include` to your header search paths. Full installation can also be achieved by using `./configure && make install`; see the `configure` script for prefix options.
---
layout: post
microblog: true
audio:
photo:
date: 2009-06-28 18:00:00 -0600
guid: http://craigmcclellan.micro.blog/2009/06/29/t2388456709.html
---

I am excited to be setting up on a significantly larger stage than last week.
\section{Introduction} \label{sec:intro} In developing a system of {\it programmable matter}, one hopes to create a material or substance that can utilize user input or stimuli from its environment to change its physical properties in a programmable fashion. Many such systems have been proposed (e.g., smart materials, synthetic cells, and modular and swarm robotics) and each attempts to perform tasks subject to domain-specific capabilities and constraints. In this paper, we are interested in {\it active programmable matter}, where the energy input takes place directly at the scale of each active (matter) particle and allows for self-propelled movement\footnote{As opposed to passive programmable matter systems such as DNA computing and tile self-assembly.}~\cite{Ramaswamy2010}. We investigate how such a system can achieve {\it directed locomotion}, wherein the individual particles move together as a collective in a desired direction. Specifically, we consider active programmable matter ensembles composed of particles that individually are incapable of locomotion. However, when constrained to remain in close proximity to other particles, we show that the overall ensemble can generate movement. Moreover, external stimuli that introduce asymmetries into the system with regard to individual particle activity can be used to produce a mode of directed displacement, either towards or away from a light source, known as {\it phototaxing}. We investigate this phenomenon through both experimental and theoretical models. We show, in Section~\ref{sec:physical}, that phototaxing emerges in testbed experiments on a constrained collection of {\it smarticles} (that we call a {\it supersmarticle}). A smarticle is a small, 3-link, planar robot, developed by Goldman's group, equipped with sensing abilities but incapable of rotating or displacing individually. A supersmarticle is a collection of smarticles enclosed by an unanchored rigid ring. 
One can think of a supersmarticle as a ``robot made of robots'' which achieves capabilities greater than any individual smarticle; phototaxing is one such capability. To investigate phototaxing from a theoretical perspective, we utilize previous work on {\it self-organizing particle systems}, which abstractly describes programmable matter as a collection of simple computational elements ({\it particles}) with limited memory that each execute fully distributed, local, and asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination (e.g.,~\cite{Derakhshandeh2015}). Recent work applying stochastic approaches to algorithms for self-organizing particle systems has yielded surprisingly fruitful results, producing local algorithms that are robust, nearly oblivious, and truly decentralized. This approach was initially applied to develop an algorithm for {\it compression} in self-organizing particle systems under the assumptions of the geometric amoebot model~\cite{Cannon2016}. To solve the compression problem, particles gather as tightly together as possible, as in a sphere or its equivalent in the presence of some underlying geometry. This phenomenon is observed in natural systems (e.g., fire ants forming floating rafts~\cite{Mlot2011}). \subsection{Our Results} In this paper, we demonstrate how one can create a phototaxing particle ensemble by giving an algorithm for an abstract particle system under the amoebot model in which phototaxing is observed. It is achieved, rather remarkably, with just one very subtle modification to the compression algorithm: particles become more (or less) active when they sense light. In Section~\ref{sec:alg}, we formally prove that phototaxing occurs for systems of two and three particles; we also present simulation results of our algorithms for much larger systems that demonstrate the same behavior. 
We note that in the amoebot model, unlike smarticles, individual particles are capable of movement, but this will be undirected regardless of how active they become in response to a light source. In contrast, we show that groups of particles can achieve directed displacement in response to light in the theoretical model, similar to the smarticles. Both the physical and theoretical systems we consider have three components: (1) individual particles move regularly with no sense of direction, (2) there is a constraint ensuring the particles remain in close proximity to one another, and (3) particles' activity changes in response to light. In both cases, these basic requirements suffice to produce phototaxing. Perhaps the most surprising result is that phototaxing can be achieved without all particles knowing the direction of the light source; the occlusion of light by individual particles suffices for the ensemble as a whole to ``know'' where the light is and move accordingly, entirely via local distributed algorithms. We posit that more generally, many other systems with these three features should also be phototactic. The remainder of this paper is organized as follows: In Section~\ref{sec:related}, we present a brief overview of related work. In Section~\ref{sec:prelims}, we describe the physical smarticles and the theoretical abstractions that we will use in this paper. Section~\ref{sec:physical} presents our experimental testbed results on phototactic supersmarticles, which were the inspiration for the theoretical and simulation analysis on an abstraction of smarticle ensembles that we present in Section~\ref{sec:alg}. We present our concluding remarks, including directions for future work, in Section~\ref{sec:conclude}. 
\subsection{Related Work} \label{sec:related} When examining the recently proposed and realized systems of programmable matter, one can distinguish between \emph{passive} (e.g., \cite{Cheung2011,Woods2013,Angluin2006}) and \emph{active} (e.g.,~\cite{Cieliebak2012,Rubenstein2014,Chazelle2009,Yim2007}) systems. Our work falls within the latter, which distinguishes itself from passive systems due to self-propelled motion at the particle level. Examples of active programmable matter systems include \emph{swarm robotics}, various other models of modular robotics, and the \emph{amoebot model}, which defines our computational framework (detailed in Section~\ref{subsec:amoebot}). Swarm robotics systems usually involve a collection of autonomous robots that move freely in space with limited sensing and communication ranges. These systems can perform a variety of tasks including gathering (e.g.,~\cite{Cieliebak2012}), shape formation (e.g.,~\cite{Rubenstein2014}), and imitating the collective behavior of natural systems (e.g.,~\cite{Chazelle2009}); however, the individual robots are more complex and have more powerful computational capabilities than those we consider. \emph{Modular self-reconfigurable robotic systems} focus on motion planning and control of kinematic robots to achieve dynamic morphology~(e.g., \cite{Yim2007}). The \emph{nubot model}~\cite{Woods2013-nubot} seeks to provide a formal framework for rigorous algorithmic study of molecular programming systems. In our physical experiments, our supersmarticles achieve phototaxing by changing the behavior of individual smarticles in response to light, making some of them inactive. We believe that the inactive smarticles can be approximated as a loose extension of the boundary, and one whose collision model is softer than the normal rigid boundary. This is consistent with previous work done with randomly diffusing self-propelled particles in~\cite{Dauchot2017,Kardar2015}. 
These studies investigated systems of self-propelled active particles enclosed in a boundary. The boundary's perimeter was divided into two sections, each composed of distinct materials, one half with a softer potential and the other half a more rigid potential. They found the applied pressure on the soft boundary from the particles was larger than on the more rigid boundary. We utilize this emergent response to physical interactions, previously shown in simulation, in our experiments to generate directed motion from a collection of individually non-motile robotic units. \section{Preliminaries} \label{sec:prelims} We begin by describing both the physical smarticles and the theoretical abstractions. \subsection{Smarticles} \label{subsec:smarticles} In order to explore emergent phenomena that result from collections of entities with limited mobility and sensing, we developed what we are calling ``smarticles.'' {\it Smarticles}, or {\it smart particles}, are small $14 \times 2.5 \times 3$ cm robots which can change their shape in situ, but are incapable of rotating or displacing individually. Each smarticle is a three-link, two-revolute-joint planar robot where only the center link is in contact with the ground. Each smarticle consists of two Power HD-1440A MicroServos, a MEMS analog omnidirectional microphone, two photoresistors, a current sensing resistor, and a re-programmable Arduino Pro Mini 328-3.3V/8MHz, which handles the ADC and servo control. The two servos control the smarticle's two outer links, allowing the smarticle to fully explore its two-dimensional configuration space. The microphone and the pair of photoresistors represent two channels through which we can send basic commands: using varying frequency ranges of sound or controlling levels of light. The current sensing resistor detects current draw from the servos, and thus the torque they are experiencing, allowing each smarticle to sense its own stress state. 
The links of the smarticles were 3D printed, ensuring uniform construction between all smarticles. Each smarticle is capable of performing predefined shape changes in the joint space. When we place a collection of smarticles inside an unanchored ring, we call this a supersmarticle. \begin{figure} \centering \includegraphics[width=175pt]{smartExpPic.pdf} \caption{(a) A supersmarticle composed of 5 individual smarticles. A single smarticle, as viewed from the (b) front and (c) rear.} \label{fig:expPic} \end{figure} \subsection{The geometric amoebot model} \label{subsec:amoebot} In order to study smarticle systems from a more formal perspective, we turn to self-organizing particle systems, which abstract away from specific instantiations of programmable matter to a more general model. This approach describes programmable matter as a collection of simple computational elements ({\it particles}) with limited memory that each execute fully distributed, local, asynchronous algorithms to solve system-wide problems of movement, configuration, and coordination. In the {\it geometric amoebot model}, space is modeled as the infinite triangular lattice $\Gamma$ (Fig.~\ref{fig:model}a). Each particle occupies a distinct lattice point and can move along lattice edges. Each particle is anonymous, and there is no shared coordinate system or global sense of orientation. Particles interact only if they are {\it neighbors}, that is, if they occupy adjacent vertices of the lattice. Every particle has a constant-size, local memory which both it and its neighbors are able to read from and write to for communication. Due to the limitation of constant-size memory, particles cannot know the total number of particles in the system or any estimate of this quantity. We assume that any conflicts (of movement or shared memory writes) are resolved arbitrarily. Full model details can be found in~\cite{Cannon2016}. 
\begin{figure} \centering \begin{subfigure}{.45\columnwidth} \centering \begin{tikzpicture}[scale=0.6] \clip (0.5,-0.25) rectangle (5.5,3.25); \foreach \i in {0,...,10} \draw[black,line width=.5pt] (\i*1.732050808 / 2,-5)--(\i*1.732050808 / 2,5); \foreach \i in {-10,...,10} { \draw[black,line width=.5pt] (0,\i)--(5*1.732050808,\i + 5); \draw[black,line width=.5pt] (0,\i)--(5*1.732050808,\i - 5); } \foreach \i in {0,2,...,10} \foreach \j in {-5,...,5} \draw[fill] (\i*1.732050808 / 2,\j) circle (0.13); \foreach \i in {1,3,...,10} \foreach \j in {-4.5,...,4.5} \draw[fill] (\i*1.732050808 / 2,\j) circle (0.13); \end{tikzpicture} \caption{} \label{fig:modelgrid} \end{subfigure}% \begin{subfigure}{.45\columnwidth} \centering \includegraphics[scale=0.45]{light_ex.pdf} \caption{} \label{fig:modelheadlabel} \end{subfigure} \vspace{-3mm} \caption{(a) A section of the triangular lattice $\Gamma$. (b) An example particle system with some point light sources broadcasting upward along lattice lines; the particles that sense the light are outlined, while all others are occluded.} \label{fig:model} \end{figure} For phototaxing, we furthermore assume that each particle can sense light, and that particles can occlude that light. More specifically, we consider point light sources that broadcast along rays in $\Gamma$. The light from a source is sensed only by the first particle in the lattice line along which the light is shining, and not by any other particles that may be in that lattice line (Fig.~\ref{fig:model}b). 
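To make the occlusion rule concrete, here is a minimal Python sketch. The coordinate convention is our own illustrative assumption, not part of the model's formal definition: each particle is a (line, depth) pair, light broadcasts along each lattice line from depth zero, and only the particle nearest the source in each line senses the light.

```python
def lit_particles(positions):
    """Given a set of (line, depth) lattice coordinates, with light
    broadcasting along each line from depth 0 outward, return the
    particles that sense the light: the first particle in each line.
    All particles behind it in the same line are occluded."""
    nearest = {}
    for line, depth in positions:
        if line not in nearest or depth < nearest[line]:
            nearest[line] = depth
    return {(l, d) for l, d in positions if nearest[l] == d}

# Three lattice lines; exactly one particle per occupied line is lit.
system = {(0, 0), (0, 1), (1, 0), (1, 2), (2, 1)}
print(sorted(lit_particles(system)))  # [(0, 0), (1, 0), (2, 1)]
```

This is the mechanism that later lets activity differ across the ensemble: lit particles can change their behavior, while occluded ones cannot tell where the light is.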
\subsection{Compression} \label{subsec:compression} Our local distributed algorithm for phototaxing under the assumptions of the geometric amoebot model uses the stochastic compression algorithm of Cannon et al.~\cite{Cannon2016} as a subroutine, so we present a high-level summary here. We assume particles start in a simply connected configuration, and we design algorithms that ensure they stay simply connected. Variants of the compression algorithm in~\cite{Cannon2016} produce a variety of other useful behaviors, including expansion over as wide an area as possible, coating an arbitrarily shaped surface, spanning fixed sites, and forming shortcut bridges~\cite{AndresArroyo2017} (a behavior also observed in army ants~\cite{Reid2015}); here, we show another variant produces phototaxing. For all of these problems, tools from Markov chain analysis and distributed algorithms allow us to relate local and global optimal behavior. The stochastic algorithm in \cite{Cannon2016} achieves compression by favoring moves that increase the number of edges in the particle system configuration, where an {\it edge} of a configuration is an edge of $\Gamma$ where both endpoints are occupied by particles. Since the total number of particles stays fixed, maximizing the number of edges within a configuration is equivalent to minimizing the number of edges on the perimeter. The compression algorithm takes as input a parameter $\lambda$ that controls how desirable it is for a particle to have neighbors, where $\lambda > 1$ favors configurations with more neighboring pairs of particles and thus more edges. The distributed compression algorithm ensures the system converges to a distribution that favors having more edges using a {\it Metropolis filter}~\cite{Metropolis1953,Hastings1970}, a tool from Markov chain analysis that allows local probabilities of moves to be set so that global convergence to a certain distribution occurs. 
Specifically, our algorithms incorporate carefully chosen probabilities for particle moves so that the system converges to a stationary distribution $\pi$ over particle system configurations $\sigma$ where $\pi(\sigma) \sim \lambda^{e(\sigma)}$, where $e(\sigma)$ is the number of edges of configuration $\sigma$. When $\lambda > 1$, this leads directly to distribution $\pi$ placing the most weight on configurations with the most edges, which provably are the most compressed configurations. In particular, for any $\lambda > 2+\sqrt{2} \approx 3.42$, under $\pi$ all but an exponentially small fraction of particle configurations will be compressed. This means the Markov chain, and thus the associated distributed local algorithm, converges to a distribution over particle configurations where with very high probability compression has been achieved. Algorithm~\ref{alg:particles-compression} is a simplified, high-level description of the local distributed algorithm executed by each particle in order to achieve system-level compression~\cite{Cannon2016}; parameter $\lambda$, the input to the compression algorithm, is known by each particle. A simulated asynchronous execution of this compression algorithm is shown in Fig.~\ref{fig:markovcomp}. 
\begin{figure}[t] \centering \begin{subfigure}{\columnwidth} \centering \input{line100_bias4_1mil_rotated.txt} \vspace{-4mm} \caption{} \end{subfigure}\\[1ex] \begin{subfigure}[b]{.55\columnwidth} \centering \input{line100_bias4_2mil_rotated.txt} \vspace{-4mm} \caption{} \end{subfigure}% \begin{subfigure}[b]{.4\columnwidth} \centering \input{line100_bias4_3mil_rotated.txt} \caption{} \end{subfigure}\\[1ex] \begin{subfigure}{.45\columnwidth} \centering \input{line100_bias4_4mil_rotated.txt} \caption{} \end{subfigure}% \begin{subfigure}{.45\columnwidth} \centering \input{line100_bias4_5mil_rotated.txt} \caption{} \end{subfigure} \vspace{-2mm} \caption{The compression algorithm for 100 particles initially in a line after (a) 1 million, (b) 2 million, (c) 3 million, (d) 4 million and (e) 5 million iterations of Markov chain $\mathcal{M}$ with bias $\lambda = 4$.} \label{fig:markovcomp} \end{figure} \begin{algorithm} \begin{algorithmic}[1] \State Let $\ell$ denote $P$'s current location; choose neighboring location $\ell'$ uniformly at random from the six possible choices in $\Gamma$. \If {$\ell'$ is unoccupied and certain local connectivity conditions hold in the neighborhood of $\ell \cup \ell'$} \State Generate a random number $q \in (0,1)$. \State Let $e$ be the number of other particles adjacent to location $\ell$ and $e'$ be the number adjacent to $\ell'$. \IfThen{$q < \lambda^{e' - e}$}{Move to $\ell'$.} \label{algstate:particles-compression-lambda} \Else{} Remain at $\ell$. \EndIf \end{algorithmic} \caption{(Compression for particle $P$)} \label{alg:particles-compression} \end{algorithm} To analyze the limiting behavior of the algorithm, we assume each particle activates and executes Algorithm~\ref{alg:particles-compression} at a time drawn randomly from a Poisson distribution. This has the benefit of indirectly ensuring that our particle activations are fair, in the sense that for any particle $P$ and any time $t$, $P$ will always be activated at least once after $t$. Further details regarding resolution of the conflicts (of movement or shared memory writes) that arise when nearby particles are activated at nearly the same time are available in~\cite{Cannon2016}; most importantly, these efforts ensure that for the formal analysis, we may assume that at most one particle is active (performing a bounded amount of computation and at most one movement) at a time. This follows the standard asynchronous model of computation~\cite{lynch96}, which greatly simplifies analysis. 
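The acceptance rule above, move when $q < \lambda^{e'-e}$, is a standard Metropolis filter, and a single activation can be sketched directly in code. The following Python toy is our own simplification: it omits the local connectivity conditions of the algorithm's second step, so it does not preserve simple connectivity; it only illustrates how the move probabilities bias the system toward configurations with more edges.

```python
import random

# Six neighbors of a site on the triangular lattice, in axial coordinates.
NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def degree(site, occupied):
    """Number of occupied sites adjacent to `site`."""
    return sum((site[0] + dx, site[1] + dy) in occupied for dx, dy in NEIGHBORS)

def compression_step(occupied, lam, rng):
    """One activation: pick a random particle P, propose a uniformly random
    neighboring site, and move with probability min(1, lam**(e' - e)), where
    e and e' count the *other* particles adjacent to the old and new
    locations. (The paper's connectivity checks are omitted in this toy.)"""
    p = rng.choice(sorted(occupied))
    dx, dy = rng.choice(NEIGHBORS)
    q = (p[0] + dx, p[1] + dy)
    if q in occupied:
        return occupied
    e_old = degree(p, occupied)
    e_new = degree(q, occupied - {p})  # exclude P itself at the old site
    if rng.random() < lam ** (e_new - e_old):  # Metropolis filter
        occupied = (occupied - {p}) | {q}
    return occupied

def total_edges(occupied):
    """Edges of the configuration: lattice edges with both ends occupied."""
    return sum(degree(s, occupied) for s in occupied) // 2

rng = random.Random(0)
occupied = {(i, 0) for i in range(10)}  # a small line (100 in the paper)
for _ in range(20000):
    occupied = compression_step(occupied, lam=4, rng=rng)
print(len(occupied), total_edges(occupied))  # particle count is conserved
```

With $\lambda > 1$ the exponent rewards gained edges, so over many activations the edge count drifts upward, mirroring the bias toward compressed configurations; with $\lambda = 1$ every proposed move is accepted and the system diffuses freely.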
In particular, one can define the (centralized) Markov chain $\mathcal{M}$ associated with Algorithm~\ref{alg:particles-compression} as follows: $\mathcal{M}$ picks a particle uniformly at random and then executes the steps of Algorithm~\ref{alg:particles-compression} for that particle. This enables the use of techniques from Markov chain analysis to prove guarantees about the behavior of the system when each particle is independently executing Algorithm~\ref{alg:particles-compression}; we now summarize those guarantees. {\bf Theorem 1:} Consider a self-organizing particle system under the geometric amoebot model where each particle individually executes Algorithm~\ref{alg:particles-compression} with some fixed $\lambda > 2+\sqrt{2}$. The particle system will always remain simply connected and will converge to a distribution over configurations $\pi(\sigma) \sim \lambda^{e(\sigma)}$ where with all but exponentially small probability the system is compressed. \section{Physical Phototactic Supersmarticles} \label{sec:physical} In this section, we describe the supersmarticle displacement experiments and their results. For each experiment, we place the supersmarticle, i.e., the smarticles and ring, on a level plane and each smarticle performs a gait, where a gait is a closed periodic trajectory in the joint space of a smarticle. The smarticles used in the experiments were programmed to exhibit two behavioral states: one where the smarticle servos traced a square drawn in the 2-dimensional joint space as seen in Fig.~\ref{fig:squareGait}, called the active state, and another where the servos were held at a fixed position such that all links of the smarticle were parallel, called the inactive state. A smarticle will persist in the active state until either of its photoresistors, one found on each side of the smarticle, detects light above a certain threshold. 
Once above the threshold, the smarticle persists in the inactive state until the light level sensed by its photoresistors drops below the threshold, at which point it becomes active again. \begin{figure} \centering \includegraphics[width=125pt]{squareGaitv2.pdf} \vspace{-2mm}\caption{(a) Configuration space of a single smarticle defined by the angles $\alpha_1$ and $\alpha_2$ between the outer and inner links. (b) The square gait with certain configurations from the trajectory illustrated.} \label{fig:squareGait} \end{figure} The experimental setup is planar, and hence the smarticle nearest to the light source occludes light from reaching other smarticles behind it inside the supersmarticle. Given the light sensor locations on the smarticle body and the geometry of the straight configuration, typically only one smarticle at a time is inactive, occluding the light and keeping the other smarticles below the photoresistor threshold. The occlusion of the light source effectively produces a light gradient across the supersmarticle which provides a decentralized, stigmergic communication method. Each smarticle's behavior is a response to its local environment, which in turn tends to affect the local environment of its neighbors. \subsection{Experimental Methods} \label{subsec:methods} Two types of experiments were performed: one type where all smarticles remained active and another type with both active and inactive smarticles. In the second experiment, one side of the system is illuminated with a light source, thereby forcing certain smarticles into the inactive state. All experiments were performed in a dark room, so that smarticles only entered the inactive state when subjected to the controlling light source input. Experimental trials were initiated with the supersmarticle at the center of a $0.4m^2$ test plate and ended when the supersmarticle had translated to an edge of the test plate. 
When internal supersmarticle configurations exhibited slow displacement rates, trials were cropped at 10 minutes. Multiple trials were taken with the light source at one of four locations to minimize systematic error. In each light experiment, one light source was placed at the center of an edge of the test plate. The light source was directed towards the nearest exposed photoresistor, thereby rendering a single smarticle within the system inactive. Trajectories of the supersmarticle center of geometry were recorded using OptiTrack infrared video recording technology, and the data were exported and analyzed in MATLAB using an MSD analysis package~\cite{Tarantino2014}. \subsection{Experimental Results and Discussion} \label{subsec:results} The supersmarticle's motion was dependent on the activity within the ring. Diffusive behavior was observed in both the control (Fig.~\ref{fig:expData}.(a,c)) and directed experiments (Fig.~\ref{fig:expData}.(b,d)), but the presence of inactive smarticles near the light source introduced a biased drift towards the light. The light-controlled supersmarticle system consistently diffused in the direction of the light source, with an average success rate of $82.3\pm 6.0\%$ across all trials. Mean squared displacement (MSD) curves are useful for describing the types of diffusive behavior present in a given dynamic system \cite{Berg1983}. The MSD is defined as: \begin{center} $\sigma^2 = \langle\vec{x}\cdot\vec{x}\rangle - \langle\vec{x}\rangle \cdot \langle\vec{x}\rangle = 4Dt^\gamma$ \end{center} where $D$ is the diffusion coefficient and $\gamma$ is the scaling exponent of the system. Free diffusive movement is seen for values of $\gamma = 1$, with $\gamma < 1$ characterizing subdiffusive behavior and $\gamma > 1$ characterizing superdiffusive behavior, or active transport. By fitting a line to the log-log plot of the MSD curve, the slope of the resulting fit gives the diffusion exponent $\gamma$.
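The fitting procedure just described can be sketched numerically. The snippet below is an illustrative Python sketch (the function names and the synthetic trajectory are ours, not part of the paper's MATLAB analysis); a ballistic trajectory is used as the example because its exponent is exactly $\gamma = 2$:

```python
import numpy as np

def msd(traj, max_lag):
    """Mean squared displacement of a (T, 2) trajectory over lag times 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    vals = np.empty(len(lags))
    for i, lag in enumerate(lags):
        disp = traj[lag:] - traj[:-lag]               # displacements at this lag
        vals[i] = np.mean(np.sum(disp ** 2, axis=1))  # average squared displacement
    return lags, vals

def diffusion_exponent(lags, vals):
    """Slope of the log-log MSD curve, i.e. gamma in MSD ~ 4 D t^gamma."""
    slope, _intercept = np.polyfit(np.log(lags), np.log(vals), 1)
    return slope

# Ballistic (straight-line) motion: MSD(tau) = (|v| tau)^2, so gamma = 2 exactly.
t = np.arange(200, dtype=float)
ballistic = np.stack([0.01 * t, 0.02 * t], axis=1)
lags, vals = msd(ballistic, 50)
print(round(diffusion_exponent(lags, vals), 3))  # → 2.0
```

For experimental trajectories, a fitted slope near $\gamma = 1$ would indicate free diffusion, matching the classification given above.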
The average MSD curve for each set of experimental data was computed, and linear approximations were fit to the log-log plots. After performing our analyses across all data sets, the mean slope for the fully active system was computed to be $\gamma = 0.99$, while that of the light-directed systems was $\gamma = 1.04 \pm 0.02$. The application of the light-control algorithm results in a shift in diffusive behavior, from a purely diffusive system to a superdiffusive system in which the active transport phenomenon causes the system to propagate towards the light source. \begin{figure} \centering \includegraphics[width=225pt]{smarticleExpLightData} \caption{(a) and (b) are trajectories of the supersmarticle's center of geometry for non-biased motion and light-biased motion respectively. Each colored trajectory represents a separate trial. (c) and (d) show lines connecting the initial and final position of the supersmarticle for each experiment. (d) contains data for all light-biased directions. Trials where the light was not originating from the $+x$ direction were rotated to allow comparison between all trials. Illumination direction is shown via the location of the flashlight with respect to the supersmarticle image. All tracks begin from $(0,0)$ and end at the red circles.} \label{fig:expData} \end{figure} \section{A Phototactic Algorithm} \label{sec:alg} To complement the physical experiments, we developed a local distributed algorithm for phototaxing in self-organizing particle systems under the abstract geometric amoebot model. We prove that the algorithm causes the system to diffuse in a certain direction in reaction to the light source when there are two or three particles, and we present simulations that demonstrate this same effect for larger particle systems. We assume the particle system starts in some connected\footnote{The assumption of connectedness can be relaxed, but it simplifies the proofs while maintaining the phototaxing behavior we desire.
We can think of connectivity and compression as playing a role analogous to that of the ring in the physical model.} initial configuration $\sigma_0$. For phototaxing to occur, we assume a collection of point light sources (sufficiently far from the particle configuration to not interfere with its motion) broadcast light along lattice lines in the same direction. Specifically, we assume the light sources form an infinite jagged line below all the particles and broadcast light upwards, as in Fig.~\ref{fig:model}b. We define the {\it height} of a particle system to be the $y$-coordinate of its center of mass, where all light sources are assumed to have $y$-coordinate $0$ or $-1/2$; we assume all edges of the triangular lattice are of length 1. We say that phototaxing occurs if there is some fixed number of iterations after which the height of the particle system has strictly increased or strictly decreased in expectation. Our local distributed algorithm for phototaxing (specifically, for locomotion away from a light source) is remarkably simple; each particle executes Algorithm~\ref{alg:particles-phototaxing} when activated. The value $1/4$ in the algorithm was chosen because it works well in practice. Smaller values for that parameter can affect the compression algorithm's execution and cause different structural configurations to emerge, while larger values (that are still less than 1) correspond to even slower locomotion. Conflicts (of movement or shared memory writes) are resolved just as they are for compression; recall from Section~\ref{subsec:compression} that this allows us to assume at most one particle is active at a time. \begin{algorithm} \begin{algorithmic}[1] \If {$P$ senses light} \State $P$ executes Algorithm~\ref{alg:particles-compression}. \Else \State $P$ executes Algorithm~\ref{alg:particles-compression} with probability $\frac{1}{4}$.
\EndIf \end{algorithmic} \caption{Phototaxing for a particle $P$} \label{alg:particles-phototaxing} \end{algorithm} So far, we have assumed all particles activate at the same rate; under this assumption, we will see that the particle system achieves the desired phototaxing when all the particles independently execute Algorithm~\ref{alg:particles-phototaxing}. If instead we assume that particles' activation rates can change in response to light, as is the case for the physical smarticles of Section~\ref{sec:physical}, then phototaxing can occur when each particle simply executes Algorithm~\ref{alg:particles-compression}. For instance, if particles are four times more likely to activate when exposed to light and each executes Algorithm~\ref{alg:particles-compression} upon activation, this system is equivalent to a system of particles with uniform activation rates executing Algorithm~\ref{alg:particles-phototaxing}. \begin{figure}[t] \centering \begin{subfigure}{.45\columnwidth} \centering \includegraphics[scale=1.2]{light_ex_2particles_state1.pdf} \caption{} \end{subfigure}% \begin{subfigure}{.45\columnwidth} \centering \includegraphics[scale=1.2]{light_ex_2particles_state2.pdf} \caption{} \end{subfigure} \vspace{-3mm}\caption{A system of two particles, and the probabilities of each particle's movement if it is activated next. (a) If both particles are exposed to the light source, the expected change in height of the system after one iteration is 0. (b) If one particle occludes the other from the light source, the expected change in the height of the system after one iteration is $+3/32$.} \label{fig:2particles} \end{figure} \subsection{Provable Phototaxing for Very Small Systems} \label{subsec:phototaxing-small} Here, we formally verify the observed phototaxing for very small
systems (with 2 or 3 particles) by proving that when the particles independently execute Algorithm~\ref{alg:particles-phototaxing}, the system exhibits a drift away from the light source. We first consider a system of two particles, each activating at the same rate and then executing Algorithm~\ref{alg:particles-compression}. In this case, Algorithm~\ref{alg:particles-phototaxing} simplifies to Algorithm~\ref{alg:particles-phototaxing2}: \begin{algorithm} \begin{algorithmic}[1] \State Choose one of the two locations adjacent to both particles uniformly at random; call it $\ell$. \IfThenElse{$P$ senses light}{Move to $\ell$}{Move to $\ell$ with probability $\frac{1}{4}$.} \end{algorithmic} \caption{Phototaxing for a particle $P$: 2 particles} \label{alg:particles-phototaxing2} \end{algorithm} {\bf Theorem 2:} For a system of two particles, each executing Algorithm~\ref{alg:particles-phototaxing2}, phototaxing occurs. \begin{proof} We show that after two particle activations, the expected height of the system has increased by at least $3/64$, which implies the particle system is moving away from the light source. Up to translation and reflection, there are two possible states a system of two particles can be in: either both particles are exposed to the light ({\it State 1}), or one particle occludes the other from the light ({\it State 2}); see Fig.~\ref{fig:2particles}. Regardless of state, both particles are equally likely to activate next. In State 1, case analysis shows the expected change in height after one particle activation is 0. Furthermore, with probability $1/2$ the system remains in State 1 and with probability $1/2$ it enters State 2. For a particle system in State 2, with probability $1/2$ the occluded particle activates next, and with probability $1/4$ it moves a distance of $-1/2$ in the $y$-direction, causing the height of the system to decrease by $1/4$.
With the remaining probability $1/2$, the particle exposed to light is activated and moves a distance of $+1/2$ in the $y$-direction, causing the height of the system to increase by $1/4$. Overall, in this case the expected change in the height of the system is: \begin{center} $\frac{1}{2} \cdot \frac{1}{4} \cdot \left(-\frac{1}{4}\right) + \frac{1}{2} \cdot \left(+\frac{1}{4}\right) = \frac{3}{32}$. \end{center} Beginning in State 1, we condition on the state of the system after one activation and see that after two activations the expected height of the system has increased by at least $3/64$. Beginning in State 2, after two particle activations the expected height of the system has increased by at least $3/32 > 3/64$. This proves the theorem. \end{proof} Thus, for systems of two particles each executing Algorithm~\ref{alg:particles-phototaxing2} upon activation, the expected distance from the light sources strictly increases over time, meaning phototaxing provably occurs. \begin{figure*} \centering \begin{subfigure}{.33\columnwidth} \centering \includegraphics[height = 3cm,keepaspectratio,page=1]{light_ex_3particles.pdf} \caption{E$[\Delta h] = 0$} \end{subfigure}% \begin{subfigure}{.3\columnwidth} \centering \includegraphics[height = 3cm,keepaspectratio,page=6]{light_ex_3particles.pdf} \caption{E$[\Delta h] = 0$} \end{subfigure}% \begin{subfigure}{.3\columnwidth} \centering \includegraphics[height = 3cm,keepaspectratio,page=2]{light_ex_3particles.pdf} \caption{E$[\Delta h] = 0$} \end{subfigure}% \begin{subfigure}{.3\columnwidth} \centering \includegraphics[height = 3cm,keepaspectratio,page=7]{light_ex_3particles.pdf} \caption{E$[\Delta h] = 0$} \end{subfigure}% \begin{subfigure}{.28\columnwidth} \centering \includegraphics[height = 3cm,keepaspectratio,page=3]{light_ex_3particles.pdf} \caption{E$[\Delta h] = \frac{1}{48}$} \end{subfigure}% \begin{subfigure}{.28\columnwidth} \centering \includegraphics[height = 3cm,keepaspectratio,page=4]{light_ex_3particles.pdf} 
\caption{E$[\Delta h] = \frac{1}{24}$} \end{subfigure}% \begin{subfigure}{.28\columnwidth} \centering \includegraphics[height = 3cm,keepaspectratio,page=5]{light_ex_3particles.pdf} \caption{E$[\Delta h] = \frac{1}{24}$} \end{subfigure} \vspace{-1mm}\caption{The seven possible states for a system of three particles (up to reflection and translation) and the probabilities of each particle's movement if it is activated next; the expected change in the height of the system after one iteration beginning in each of the seven states is also shown.} \label{fig:3particles} \end{figure*} The same result holds for systems of three particles, albeit with a slightly slower drift. For systems with exactly three particles, Algorithm~\ref{alg:particles-phototaxing} simplifies to Algorithm~\ref{alg:particles-phototaxing3}, below. Note that the compression bias parameter $\lambda$ and the movement probability filter based on the number of edges in the system (Step~\ref{algstate:particles-compression-lambda} of Algorithm~\ref{alg:particles-compression}) begin to play a role. \begin{algorithm} \begin{algorithmic}[1] \State Determine possible valid locations to move to, of which there are at most 2. \State For each such location, set move probability to $1/2$. \If {move decreases number of edges in system} \State Divide move probability by $\lambda$. \EndIf \If {$P$ does not sense light} \State Divide each move probability by $4$. \EndIf \State Move to a possible valid location with the corresponding move probability; with all remaining probability, don't move. \end{algorithmic} \caption{Phototaxing for a particle $P$: 3 particles} \label{alg:particles-phototaxing3} \end{algorithm} {\bf Theorem 3:} For a system of three particles each executing Algorithm~\ref{alg:particles-phototaxing3} with bias parameter $\lambda > 2+\sqrt{2}$, phototaxing occurs. \begin{proof} We show that after three particle activations the expected height of the system has increased by at least $1/(64\lambda)$. 
Up to translation and reflection, there are seven possible states the particle system could be in; all are shown in Fig.~\ref{fig:3particles}. Doing a case analysis just as for two particles, we see that the expected change in the height of the system after one particle activation is nonnegative in all seven states. For states (e,f,g), the expected increase in height after one particle activation is more than $1/(64\lambda)$, and since expected height is nondecreasing, the same holds after three particle activations. For the states (a,b,c,d), the expected change in height after one particle activation is zero, so we consider multiple particle activations at a time. For state (a), after one particle activation there is a positive probability it is in state (e) or state (g), and we can use conditional expectation to calculate that after two particle activations, beginning in state (a), the expected increase in the height of the system is $1/(64\lambda)$. Similarly, beginning in state (b), after two particle activations the expected change in the height of the system is $1/96$; because $\lambda > 2+\sqrt{2}$, i.e., we are in the regime of compression, we have $1/96 > 1/(64\lambda)$. Beginning in states (c) and (d), it takes at least two particle moves to reach a state where there is a positive expected increase in height after the next particle activation; each reaches state (e) after two particle activations with probability $1/18 + 1/(9\lambda)$ and state (g) after two particle activations with probability $1/18 + 5/(72 \lambda)$. The total expected increase in height after three particle activations starting in either state (c) or state (d) is: \begin{center} $\left(\frac{1}{18} + \frac{1}{9\lambda} \right) \frac{1}{48} + \left(\frac{1}{18} + \frac{5}{72\lambda} \right) \frac{1}{24}= \frac{1}{288} + \frac{1}{192\lambda} > \frac{1}{64\lambda}$, \end{center} since $\lambda > 3$.
Thus, for all possible states the system is in, we have shown that after three particle activations, the height of the system has strictly increased in expectation by at least $1/(64\lambda)$. \end{proof} \subsection{Phototaxing Simulations for Larger Systems} \label{subsec:phototaxing-large} Algorithm~\ref{alg:particles-phototaxing} can be used to achieve phototaxing for arbitrarily large systems of particles, not just systems with two or three particles. Simulations for a system with 91 particles can be seen in Fig.~\ref{fig:hexagonlight}. Though the motion is largely random, it is clear there is a general trend away from the light sources. This drift was consistent across all simulations of Algorithm~\ref{alg:particles-phototaxing}.
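As a sanity check, the expected-height computations in Theorems 2 and 3 can be verified with exact rational arithmetic. The following Python sketch is our own; it uses the transition probabilities stated in the proofs above, with the illustrative choice $\lambda = 4$ (the bias parameter also used in the simulations):

```python
from fractions import Fraction as F

# Theorem 2, State 2: the occluded particle activates w.p. 1/2 and moves w.p. 1/4
# (height change -1/4); the lit particle activates w.p. 1/2 and always moves
# (height change +1/4).
drift_state2 = F(1, 2) * F(1, 4) * F(-1, 4) + F(1, 2) * F(1, 4)
print(drift_state2)  # → 3/32

# Theorem 3, states (c)/(d): reach state (e) w.p. 1/18 + 1/(9*lam) (height gain
# 1/48 on the next activation) and state (g) w.p. 1/18 + 5/(72*lam) (gain 1/24).
lam = F(4)  # any lambda > 2 + sqrt(2) lies in the compression regime
drift_cd = (F(1, 18) + F(1, 9) / lam) * F(1, 48) \
         + (F(1, 18) + F(5, 72) / lam) * F(1, 24)
print(drift_cd == F(1, 288) + F(1, 192) / lam)  # → True
print(drift_cd > F(1, 64) / lam)                # → True, since lambda > 3
```

Exact fractions avoid any floating-point ambiguity in checking the strict inequalities.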
In all simulations, including the one shown in Fig.~\ref{fig:hexagonlight}, the particle system also exhibited lateral drift of varying magnitude and direction; that drift is not shown in Fig.~\ref{fig:hexagonlight} due to space constraints. \begin{figure} \centering \input{hexagonlight_0_modified_cropped.txt} \input{hexagonlight_10000000_modified_cropped.txt} \input{hexagonlight_20000000_modified_cropped.txt} \input{hexagonlight_30000000_modified_cropped.txt} \\ (a) \hspace{14mm} (b) \hspace{14mm} (c) \hspace{14mm} (d) \vspace{-3mm}\caption{An execution of Algorithm~\ref{alg:particles-phototaxing} with $\lambda = 4$ for a system of 91 particles, with light sources that shine upwards shown in red, after (a) 0, (b) 10 million, (c) 20 million, and (d) 30 million iterations. Multiple executions all exhibit a drift upwards, as seen here.} \label{fig:hexagonlight} \end{figure} \section{Conclusion} \label{sec:conclude} This study presented the use of physical and simulated atomic agents, themselves incapable of directed motion, in confined active matter systems that exhibit locomotion on the collective scale. Moreover, the responses of the individuals in the system to external fields were used to introduce asymmetries in the system, producing biased locomotion.
Robophysical studies demonstrated that the supersmarticle system probabilistically favors motion in the direction of the inactive smarticle, though the physics which drives this behavior has yet to be fully explored. Future work will probe the underlying system dynamics to refine and develop a more comprehensive understanding of the interactions between active and inactive particles which generate biased locomotion. Physical variables such as the masses of the particles and the confining ring and the friction coefficients are expected to modulate the diffusive properties of the system. Additionally, the interaction behaviors of the particles as they move through their joint space trajectories may lead to various modes of system oscillation characterized by hysteretic displacement loops, which produce the biased locomotion observed in our experiments. These physical features will be explored by developing a reduced 1D model of the supersmarticle system in which the particles and the confining ring will be restricted to movement along a linear track. This new 1D system will be studied experimentally and through physics-based simulations, with the intent of sweeping the physical and interaction parameter space in order to identify the governing variables which characterize the system dynamics and produce biased locomotion. We plan to continue to complement the experimental robophysical extensions with rigorous algorithmic studies of the systems to provide a better understanding of how to program collections of smarticles to achieve the desired collective behavior. Here, we extended a known algorithm by changing particles' probabilities of movement; while the simplicity of this approach is one of its strengths, new algorithmic ideas and approaches could provide further insights into phototaxing behavior. {\bf Acknowledgements:} S. Cannon: This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant DGE-1148903.
J. J. Daymude and A. W. Richa: Supported in part by NSF CCF-1422603, CCF-1637393, and CCF-1733680. D. I. Goldman: Funding provided by NSF PoLS \#0957659 and \#PHY-1205878, and ARO \#W911NF-13-1-0347. D. Randall: Supported in part by NSF CCF-1526900, CCF-1637031, and CCF-1733812. \bibliographystyle{plain}
import compiler from "moon-compiler/src/index"; function assertGenerate(input, output) { expect(compiler.compile(input)).toEqual(output); } test("generate static element", () => { assertGenerate( "<div><h1>Test</h1><p>test</p></div>", "Moon.view.components.div({children:[Moon.view.components.h1({children:[Moon.view.components.text({data:\"Test\"})]}),Moon.view.components.p({children:[Moon.view.components.text({data:\"test\"})]})]})" ); }); test("generate static element with escaped text", () => { assertGenerate( "<div>foo \\{ bar \\< baz \\\" \" \\\n \n</div>", `Moon.view.components.div({children:[Moon.view.components.text({data:"foo \\{ bar \\< baz \\\" \\\" \\\n \\n\\\n"})]})` ); }); test("generate static element with escaped text at start", () => { assertGenerate( "<div>\nTest</div>", `Moon.view.components.div({children:[Moon.view.components.text({data:"\\n\\ Test"})]})` ); }); test("generate static element with whitespace only nodes", () => { assertGenerate( `<div> <h1>Test</h1> <p>test</p> </div>`, `Moon.view.components.div({children:[ Moon.view.components.h1({children:[Moon.view.components.text({data:\"Test\"})]}) ,Moon.view.components.p({children:[Moon.view.components.text({data:\"test\"})]}) ]})` ); }); test("generate dynamic element", () => { assertGenerate( "<div><h1>Test</h1><p>test {message}</p></div>", "Moon.view.components.div({children:[Moon.view.components.h1({children:[Moon.view.components.text({data:\"Test\"})]}),Moon.view.components.p({children:[Moon.view.components.text({data:\"test \"}),Moon.view.components.text({data:message})]})]})" ); }); test("generate static attributes", () => { assertGenerate( "<div><h1 id='bar' class='foo'>Test</h1><p>test {message}</p></div>", "Moon.view.components.div({children:[Moon.view.components.h1 ({\"id\":'bar' ,\"class\":'foo',children:[Moon.view.components.text({data:\"Test\"})]}),Moon.view.components.p({children:[Moon.view.components.text({data:\"test \"}),Moon.view.components.text({data:message})]})]})" ); 
}); test("generate dynamic attributes", () => { assertGenerate( "<div><h1 id='bar' class=(foo)>Test</h1><p>test {message}</p></div>", "Moon.view.components.div({children:[Moon.view.components.h1 ({\"id\":'bar' ,\"class\":(foo),children:[Moon.view.components.text({data:\"Test\"})]}),Moon.view.components.p({children:[Moon.view.components.text({data:\"test \"}),Moon.view.components.text({data:message})]})]})" ); }); test("generate dynamic data attribute", () => { assertGenerate( "<div foo=(bar) bar=(data)></div>", "Moon.view.components.div ({\"foo\":(bar) ,\"bar\":(data)})" ); }); test("generate static children attribute", () => { assertGenerate( "<div foo=(bar) children='fake'></div>", "Moon.view.components.div ({\"foo\":(bar) ,\"children\":'fake'})" ); }); test("generate dynamic children attribute", () => { assertGenerate( "<div children=(children)></div>", "Moon.view.components.div ({\"children\":(children)})" ); }); test("generate events", () => { assertGenerate( "<div><h1 id='bar' class=(foo) onClick=(doSomething)>Test</h1><p>test {message}</p></div>", "Moon.view.components.div({children:[Moon.view.components.h1 ({\"id\":'bar' ,\"class\":(foo) ,\"onClick\":(doSomething),children:[Moon.view.components.text({data:\"Test\"})]}),Moon.view.components.p({children:[Moon.view.components.text({data:\"test \"}),Moon.view.components.text({data:message})]})]})" ); }); test("generate static components", () => { assertGenerate( "<div><Component/></div>", "Moon.view.components.div({children:[Component({})]})" ); }); test("generate static components with dot and first character lowercase", () => { assertGenerate( "<div><test.Component/></div>", "Moon.view.components.div({children:[test.Component({})]})" ); }); test("generate static components with data", () => { assertGenerate( "<div><Component foo='bar' bar='baz'/></div>", "Moon.view.components.div({children:[Component ({\"foo\":'bar' ,\"bar\":'baz'})]})" ); }); test("generate static components with children", () => { 
assertGenerate( "<div><Component foo='bar' bar='baz'><p>static</p></Component></div>", "Moon.view.components.div({children:[Component ({\"foo\":'bar' ,\"bar\":'baz',children:[Moon.view.components.p({children:[Moon.view.components.text({data:\"static\"})]})]})]})" ); }); test("generate dynamic components with data", () => { assertGenerate( "<div><Component foo=(bar) bar='baz'/></div>", "Moon.view.components.div({children:[Component ({\"foo\":(bar) ,\"bar\":'baz'})]})" ); }); test("generate dynamic components with children", () => { assertGenerate( "<div><Component foo=(bar) bar='baz'><p>{message}</p></Component></div>", "Moon.view.components.div({children:[Component ({\"foo\":(bar) ,\"bar\":'baz',children:[Moon.view.components.p({children:[Moon.view.components.text({data:message})]})]})]})" ); }); test("generate text directly", () => { assertGenerate( "<text value=(foo)/>", `Moon.view.components.text ({"value":(foo)})` ); }); test("generate static element nodes", () => { assertGenerate( "<element name='h1' data='fake data' children='fake children'/>", "Moon.view.components.element ({\"name\":'h1' ,\"data\":'fake data' ,\"children\":'fake children'})" ); }); test("generate static data element nodes", () => { assertGenerate( "<element name='h1' data='static' children=(dynamic)/>", "Moon.view.components.element ({\"name\":'h1' ,\"data\":'static' ,\"children\":(dynamic)})" ); }); test("generate static children element nodes", () => { assertGenerate( "<element name='h1' data={dynamic: dynamic} children=[]/>", "Moon.view.components.element ({\"name\":'h1' ,\"data\":{dynamic: dynamic} ,\"children\":[]})" ); }); test("generate dynamic element nodes", () => { assertGenerate( "<element name='h1' data=(dynamic) children=(dynamicChildren)/>", "Moon.view.components.element ({\"name\":'h1' ,\"data\":(dynamic) ,\"children\":(dynamicChildren)})" ); }); test("generate if node", () => { assertGenerate( `<div><(condition ? 
\section{\titleq{Introduction}} Space-time data arise in many applications, see \cite{CrWi11} for an introduction and an overview. Increasingly larger space-time data sets are obtained, for instance, from remote sensing satellites or deterministic physical models such as numerical weather prediction (NWP) models. Statistical models are needed that can cope with such data. As \cite{WiHo10} point out, there are two basic paradigms for constructing spatio-temporal models. The first approach is descriptive and follows the traditional geostatistical paradigm, using joint space-time covariance functions \citep{CrHu99, Gn02, Ma03, Wi03, St05, PaSc06}. The second approach is dynamic and combines ideas from time-series and spatial statistics \citep{SoSw96, WiCr99, HuHs04, XuWiFo05, GeEtAl05, JoCrHu07, SiKuSt11}. Even for purely spatial data, developing methodology which can handle large data sets is an active area of research. \cite{BaBrGe04} refer to this as the ``big n problem''. Factorizing large covariance matrices is not possible without assuming a special structure or using approximate methods. Using low rank matrices is one approach \citep{NyWiRo02, BaEtAl08, CrJo08, St08, Wi2010}. Other proposals include using Gaussian Markov random-fields (GMRF) \citep{RuTj02, RuHe05, LiLiRu10} or applying tapering \citep{FuGeNy06} thereby obtaining sparse precision or covariance matrices, respectively, for which calculations can be done efficiently. Another proposed solution is to approximate the likelihood so that it can be evaluated faster \citep{Ve88, StChWe04, Fu07, EiEtAl11}. \cite{RoWi05} and \cite{Pa07} use Fourier functions to reduce computational costs. In a space-time setting, the situation is the same, if not worse: one runs into a computational bottleneck with high dimensional data since the computational cost to factorize dense $NT\times NT$ covariance matrices is $O((NT)^3)$, $N$ and $T$ being the number of points in space and time, respectively. 
Moreover, specifying flexible and realistic space-time covariance functions is a nontrivial task. In this paper, we follow the dynamic approach and study models which are defined through a stochastic advection-diffusion partial differential equation (SPDE). This has the advantage of providing physically motivated parametrizations of space-time covariances. We show that when solving the SPDE using Fourier functions, one can do computationally efficient statistical inference. In the spectral space, computational costs for the Kalman filter and backward sampling algorithms are of order $O(NT)$. As we show, roughly speaking, this computational efficiency is due to the temporal Markov property, the fact that Fourier functions are eigenfunctions of the spatial differential operators, and the use of some matrix identities. The overall computational costs are then determined by the ones of the fast Fourier transform (FFT) \citep{CoTu65} which are $O(TN\log N)$. In addition, computational time can be further reduced by running the $T$ different FFTs in parallel. Defining Gaussian processes through stochastic differential equations has a long history in statistics going back to early works such as \cite{Wh54}, \cite{He55}, and \cite{Wh62}. Later works include \cite{JoZh97} and \cite{BrKa00}. Recently, \cite{LiLiRu10} have shown how a certain class of SPDEs can be solved using finite elements to obtain parametrizations of spatial GMRF. Note that a potential caveat of these SPDE approaches is that it is nontrivial to generalize the linear equation to non-linear ones. Spectral methods for solving partial differential equations are well established in the numerical mathematics community (see, e.g., \cite{GoOr77}, \cite{Fo92}, or \cite{Ha04}). In contrast, statistical models have different requirements and goals, since the (hyper-)parameters of an (S)PDE are not known \textit{a priori} and need to be estimated. 
Spectral methods have also been used in spatio-temporal statistics, mostly for approximating or solving deterministic integro-difference equations (IDEs) or PDEs. \cite{WiCr99} introduce a dynamic spatio-temporal model obtained from an IDE that is approximated using a reduced-dimensional spectral basis. Extending this work, \cite{Wi02} and \cite{XuWiFo05} propose parametrizations of spatio-temporal processes based on IDEs. Modeling tropical ocean surface winds, \cite{WiEtAl01} present a physics based model based on the shallow-water equations. \citet[Chapter 7]{CrWi11} give an overview of basis function expansions in spatio-temporal statistics. The novel features of our work are the following. While spectral methods have been used for approximating deterministic IDEs and PDEs in the statistical literature, there is no article, to our knowledge, that explicitly shows how to obtain a space-time Gaussian process by solving an advection-diffusion SPDE using the real Fourier transform. Moreover, we present computationally efficient algorithms for doing statistical inference, which use the fast Fourier transform and the Kalman filter. The computational burden can be additionally alleviated by applying dimension reduction. We also give a bound on the accuracy of the approximate solution. In the application, our main objective is to postprocess precipitation forecasts, explicitly modeling spatial and temporal variation. The idea is that the spatio-temporal model not only accounts for dependence, but also captures and extrapolates dynamically an error term of the NWP model in space and time. The remainder of this paper is organized as follows. Section \ref{ContMod} introduces the continuous space-time Gaussian process defined through the advection-diffusion SPDE. In Section \ref{SpecSpace}, it is shown how the solution of the SPDE can be approximated using the two-dimensional real Fourier transform, and we give convergence rates for the approximation. 
Next, in Section \ref{inference}, we show how to do computationally efficient inference. In Section \ref{Postproc}, the spatio-temporal model is used as part of a hierarchical Bayesian model, which we then apply for postprocessing of precipitation forecasts. All the methodology presented in this article is implemented in the R package \func{spate} (see \cite{SiKuSt12b}). \section{\titleq{A Continuous Space-Time Model: The Advection-Diffusion SPDE}}\label{ContMod} In one dimension, a fundamental process is the Ornstein-Uhlenbeck process which is governed by a relatively simple stochastic differential equation (SDE). The process has an exponential covariance function and its discretized version is the famous AR(1) model. In the two dimensional spatial case, \cite{Wh54} argues convincingly that the process with a Whittle correlation function is an ``elementary'' process (see Section \ref{InnoProc} for further discussion). If the time dimension is added, we think that the process defined through the stochastic partial differential equation (SPDE) in \eqref{SPDE} has properties that make it a good candidate for an ``elementary'' spatio-temporal process. It is a linear equation that explicitly models phenomena such as transport and diffusion that occur in many natural processes ranging from environmental sciences to ecology. This means that, if desired, the parameters can be given a physical interpretation. Furthermore, if some parameters equal zero (no advection and no diffusion), the covariance structure reduces to a separable one with an AR(1) structure over time and a certain covariance structure over space. 
The advection-diffusion SPDE, also called transport-diffusion SPDE, is given by \begin{equation}\label{SPDE} \frac{\partial}{\partial t}\xi(t,\vect{s})=-\vect{\mu}^T\nabla \xi(t,\vect{s})+\nabla\cdot\mat{\Sigma}\nabla\xi(t,\vect{s})-\zeta \xi(t,\vect{s})+\epsilon(t,\vect{s}), \end{equation} with $\vect{s}=(x,y)^T\in \mathbb{R}^{2}$, where $\nabla =\left(\frac{\partial }{\partial x},\frac{\partial }{\partial y}\right)^T$ is the gradient operator, and, for a vector field $\vect{F}=(F^x,F^y)^T$, $\nabla\cdot \vect{F}=\frac{\partial F^x}{\partial x}+\frac{\partial F^y}{\partial y}$ is the divergence operator. $\epsilon(t,\vect{s})$ is a Gaussian process that is temporally white and spatially colored. See Section \ref{InnoProc} for a discussion on the choice of the spatial covariance function. \cite{He55} and \cite{Wh63} introduced and analyzed SPDEs of similar form as in \eqref{SPDE}. \cite{JoZh97} also investigated SPDE based models. Furthermore, \cite{BrKa00} obtained such an advection-diffusion SPDE as a limit of stochastic integro-difference equation models. Without giving any concrete details, \cite{LiLiRu10} suggested that this SPDE can be used in connection with their GMRF method. See also \cite{SiLiRu12} and \cite{YuEtAl12}. \cite{CaEtAl13} model particulate matter concentration in space and time with a separable covariance structure and an SPDE based spatial Gaussian Markov random field for the innovation term. \cite{AuSi12} and \cite{HuEtAl13} use systems of SPDEs to define multivariate spatial models. The SPDE has the following interpretation. Heuristically, an SPDE specifies what happens locally at each point in space during a small time step. The first term $\vect{\mu}^T\nabla \xi(t,\vect{s})$ models transport effects (called advection in weather applications), $\vect{\mu}=(\mu_x,\mu_y)^T\in \mathbb{R}^2$ being a drift or velocity vector. The second term, $\nabla\cdot\mat{\Sigma}\nabla\xi(t,\vect{s})$, is a diffusion term that can incorporate anisotropy. 
If $\mat{\Sigma}$ is the identity matrix, this term reduces to the divergence ($\nabla\cdot$) of the gradient ($\nabla$) which is the ordinary Laplace operator $\nabla\cdot\nabla=\Delta=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}$. The third term $-\zeta \xi(t,\vect{s})$, $\zeta>0$, diminishes $\xi(t,\vect{s})$ at a constant rate and thus accounts for damping. Finally, $\epsilon(t,\vect{s})$ is a source-sink or stochastic forcing term, also called innovation term, that can be interpreted as describing, amongst others, convective phenomena in precipitation modeling applications. Concerning the diffusion matrix $\mat{\Sigma}$, we suggest the following parametrization \begin{equation} \mat{\Sigma}^{-1}=\frac{1}{\rho_1^2}\left(\begin{matrix}\cos{\psi} & \sin{\psi}\\ -\gamma\cdot\sin{\psi} & \gamma\cdot\cos{\psi}\end{matrix}\right)^{T}\left(\begin{matrix}\cos{\psi} & \sin{\psi}\\ -\gamma\cdot\sin{\psi} & \gamma\cdot\cos{\psi}\end{matrix}\right), \end{equation} where $\rho_1>0$, $\gamma>0$, and $\psi\in[0,\pi/2]$. The parameters are interpreted as follows. $\rho_1$ acts as a range parameter and controls the amount of diffusion. The parameters $\gamma$ and $\psi$ control the amount and the direction of anisotropy. With $\gamma=1$, isotropic diffusion is obtained. \begin{figure} \centering \makebox{\includegraphics[width=\textwidth]{S_PDE_Illus}} \caption{Illustration of the SPDE in \eqref{SPDE} and the corresponding PDE. The top row illustrates a solution to the PDE which corresponds to the deterministic part of the SPDE without stochastic term $\epsilon(t,\vect{s})$. The bottom row shows one sample from the distribution specified by the SPDE with a fixed initial condition. The drift vector points from north-east to south-west and the diffusive part exhibits anisotropy in the same direction. 
The same parameters are used for both the PDE and the SPDE: $\zeta = -\log(0.99), \rho_1 = 0.06, \gamma = 3, \psi = \pi/4, \mu_x = -0.1, \mu_y = -0.1$, and for the stochastic innovations: $\rho_0 = 0.05, \sigma^2 = 0.7^2$. The color scales are different in different panels.} \label{fig:PDEIllus} \end{figure} Figure \ref{fig:PDEIllus} illustrates the SPDE in \eqref{SPDE} and the corresponding PDE without the stochastic innovation term. The top row shows a solution to the PDE which corresponds to the deterministic part of the SPDE that is obtained when there is no stochastic term $\epsilon(t,\vect{s})$. The figure shows how the initial state in the top-left plot gets propagated forward in time. The drift vector points from north-east to south-west and the diffusive part exhibits anisotropy in the same direction. A $100 \times 100$ grid is used and the PDE is solved in the spectral domain using the method described below in Section \ref{SpecSpace}. There is a fundamental difference between the deterministic PDE and the probabilistic SPDE. In the first case, a deterministic process is modeled directly. In the second case, the SPDE defines a stochastic process. Since the operator is linear and the input Gaussian, this process is a Gaussian process whose covariance function is implicitly defined by the SPDE. The bottom row of Figure \ref{fig:PDEIllus} shows one sample from this Gaussian process. The same initial state as in the deterministic example is used, i.e., we use a fixed initial state. Except for the stochastic part, the same parameters are used for both the PDE and the SPDE. For the innovations $\epsilon(t,\vect{s})$, we choose a Gaussian process that is temporally independent and spatially structured according to the Mat\'ern covariance function with smoothness parameter $1$. Again, the drift vector points from north-east to south-west and the diffusive part exhibits anisotropy in the same direction.
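The diffusion matrix parametrization above is easy to verify numerically. The following Python sketch (an illustrative translation; the methodology of this paper is implemented in the R package \func{spate}) builds $\mat{\Sigma}^{-1}$ from $(\rho_1,\gamma,\psi)$ and checks that $\gamma=1$ yields isotropic diffusion:

```python
import numpy as np

def sigma_inv(rho1, gamma, psi):
    # Sigma^{-1} = (1/rho1^2) * R^T R with
    # R = [[cos(psi), sin(psi)], [-gamma*sin(psi), gamma*cos(psi)]]
    R = np.array([[np.cos(psi), np.sin(psi)],
                  [-gamma * np.sin(psi), gamma * np.cos(psi)]])
    return (R.T @ R) / rho1**2

# gamma = 1 makes R a rotation matrix, so Sigma^{-1} is isotropic
S_iso = sigma_inv(rho1=0.06, gamma=1.0, psi=np.pi / 4)
assert np.allclose(S_iso, np.eye(2) / 0.06**2)

# gamma != 1 induces anisotropy in the direction determined by psi
S_aniso = sigma_inv(rho1=0.06, gamma=3.0, psi=np.pi / 4)
assert not np.allclose(S_aniso, np.eye(2) * S_aniso[0, 0])
```

With $\gamma=1$, $R^TR$ is the identity and the diffusion term reduces to $\rho_1^{2}\Delta$, i.e., isotropic diffusion.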
Note that the use of this spatio-temporal Gaussian process is not restricted to situations where it is a priori known that phenomena such as transport and diffusion occur. In the one dimensional case, it is common to use the AR(1) process in situations where it is not a priori clear whether the modeled process follows the dynamic of the Ornstein-Uhlenbeck SDE. In two dimensions, the same holds true for the process with the Whittle covariance function, and even more so for the process having an exponential covariance structure. Having this in mind, even though the SPDE in \eqref{SPDE} is physically motivated, it can be used as a general spatio-temporal model. As the case may be, the interpretation of the parameters can be more or less straightforward. \subsection{Spectral Density and Covariance Function} As can be shown using the Fourier transform (see, e.g., \cite{Wh63}), if the innovation process $\epsilon(t,\vect{s})$ is stationary with spectral density $\widetilde{f}(\vect{k})$, the spectrum of the stationary solution $\xi(t,\vect{s})$ of the SPDE \eqref{SPDE} is \begin{equation}\label{SPDESpec} f(\omega,\vect{k})=\widetilde{f}(\vect{k})\frac{1}{(2\pi)}\left(\left(\vect{k}^T\mat{\Sigma}\vect{k}+\zeta\right)^2+\left(\omega+\vect{\mu}^T\vect{k}\right)^2\right)^{-1}, \end{equation} where $\vect{k}$ and $\omega$ are spatial wavenumbers and temporal frequencies. 
The covariance function $C(t,\vect{s})$ of $\xi(t,\vect{s})$ is then given by \begin{equation}\label{SPDECov} \begin{split} C(t,\vect{s})=&\int f(\omega,\vect{k})\expo{\imag t\omega}\expo{\imag \vect{s}'\vect{k}}d\vect{k} d\omega\\ =&\int\widetilde{f}(\vect{k})\frac{\expo{-\imag\vect{\mu}^T\vect{k}t-(\vect{k}^T\mat{\Sigma}\vect{k}+\zeta)|t|}}{2(\vect{k}^T\mat{\Sigma}\vect{k}+\zeta)}\expo{\imag \vect{s}'\vect{k}}d\vect{k}, \end{split} \end{equation} where $\imagND$ denotes the imaginary unit, $\imagND^2=-1$, and the integration over the temporal frequencies $\omega$ follows from the calculation of the characteristic function of the Cauchy distribution \citep{AbSt64}. The spatial integral above has no closed form solution but can be computed approximately by numerical integration. Since, in general, the spectrum does not factorize into a temporal and a spatial component, we see that $\xi(t,\vect{s})$ has a non-separable covariance function (see \cite{GnGeGu07} for a definition of separability). The model reduces to a separable one, though, when there is no advection and diffusion, i.e., when both $\vect{\mu}$ and $\mat{\Sigma}$ are zero. In this case, the covariance function is given by $C(t,\vect{s})=\frac{1}{2\zeta}\expo{-\zeta|t|}C(\vect s)$, where $C(\vect s)$ denotes the spatial covariance function of the innovation process. \subsection{Specification of the Innovation Process}\label{InnoProc} It is assumed that the innovation process is white in time and spatially colored. In principle, one can choose any spatial covariance function such that the covariance function in \eqref{SPDECov} is finite at zero. Note that if $\widetilde{f}(\vect{k})$ is integrable, then $f(\omega,\vect{k})$ is also integrable. Similarly to \cite{LiLiRu10}, we opt for the most commonly used covariance function in spatial statistics: the Mat\'ern covariance function (see \cite{HaSt93}, \cite{st99}).
Since in many applications the smoothness parameter is not estimable, we further restrict ourselves to the Whittle covariance function. This covariance function is of the form $\sigma^2 d/\rho_0 K_1\left(d/\rho_0\right)$ with $d$ being the Euclidean distance between two points and $K_1\left(d/\rho_0\right)$ being the modified Bessel function of the second kind of order $1$. It is named after \cite{Wh54}, who introduced it and argued convincingly that it ``may be regarded as the 'elementary' correlation in two dimensions, similar to the exponential in one dimension''. It can be shown that the stationary solution of the SPDE \begin{equation}\label{SPDEInnov} \left(\nabla\cdot\nabla-\frac{1}{\rho_0^2}\right) \epsilon(t,\vect{s})=\mathcal{W}(t,\vect{s}), \end{equation} where $\mathcal{W}(t,\vect{s})$ is a zero mean Gaussian white noise field with variance $\sigma^2$, has the Whittle covariance function in space. From this, it follows that the spectrum of the process $\epsilon(t,\vect{s})$ is given by \begin{equation}\label{WhittleSpec} \widetilde{f}(\vect{k})=\frac{\sigma^2}{(2\pi)^2}\left(\vect{k}^T\vect{k}+\frac{1}{\rho_0^2}\right)^{-2}, ~~\rho_0>0,~\sigma>0. \end{equation} The parameter $\sigma^2$ determines the marginal variance of $\epsilon(t,\vect{s})$, and $\rho_0$ is a spatial range parameter.
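For concreteness, the Whittle covariance can be evaluated directly with a modified Bessel function routine. The following Python sketch (illustrative; it uses SciPy's \texttt{scipy.special.kv} rather than the R implementation of this paper) computes $\sigma^2 (d/\rho_0) K_1(d/\rho_0)$, using the limit $x K_1(x)\rightarrow 1$ as $x\rightarrow 0$ so that the value at $d=0$ is the marginal variance $\sigma^2$:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def whittle_cov(d, rho0, sigma2):
    # Whittle covariance sigma^2 * (d/rho0) * K_1(d/rho0);
    # at d = 0 the limit is sigma^2 since x*K_1(x) -> 1 as x -> 0
    x = np.asarray(d, dtype=float) / rho0
    safe = np.where(x > 0, x, 1.0)  # avoid evaluating K_1 at zero
    return np.where(x > 0, sigma2 * x * kv(1, safe), sigma2)

# the covariance equals sigma^2 at the origin and decays with distance
c = whittle_cov([0.0, 0.05, 0.2], rho0=0.05, sigma2=0.7**2)
assert np.isclose(c[0], 0.7**2)
assert c[0] > c[1] > c[2] > 0
```

This is only a helper for the spatial marginal; in the spectral algorithm of Section \ref{SpecSpace}, the innovation process enters through its spectral density \eqref{WhittleSpec} instead.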
\subsection{Relation to an Integro-Difference Equation} Assuming discrete time steps with lag $\Delta$, \cite{BrKa00} consider the following integro-difference equation (IDE) \begin{equation}\label{IDE} \xi(t,\vect{s})=\expo{-\Delta\zeta}\int_{\mathbb{R}^{2}}{h(\vect{s}-\vect{s}')\xi(t-\Delta,\vect{s}')d\vect{s}'}+\epsilon(t,\vect{s}),~~\vect{s} \in \mathbb{R}^{2}, \end{equation} with a Gaussian redistribution kernel \begin{equation*} h(\vect{s}-\vect{s}')= (2\pi)^{-1}|2\Delta\mat{\Sigma}|^{-1/2}\exp\left(-(\vect{s}-\vect{s}'-\Delta\vect{\mu})^T(2\Delta\mat{\Sigma})^{-1}(\vect{s}-\vect{s}'-\Delta\vect{\mu})/2\right), \end{equation*} $\epsilon(t,\vect{s})$ being temporally independent and spatially dependent. They show that in the limit $\Delta \to 0$, the solution of the IDE and the one of the SPDE in \eqref{SPDE} coincide. The IDE is interpreted as follows: the convolution kernel $h(\vect{s}-\vect{s}')$ determines the weight or the amount of influence that a location $\vect{s}'$ at previous time $t-\Delta$ has on the point $\vect{s}$ at current time $t$. This IDE representation provides an alternative way of interpreting the SPDE model and its parameters. \cite{StFrHi02} show under which conditions a dynamic model determined by an IDE as in \eqref{IDE} can be represented using a parametric joint space-time covariance function, and vice versa. Based on the IDE in \eqref{IDE}, \cite{SiKuSt11} construct a spatio-temporal model for irregularly spaced data and apply it to obtain short term predictions of precipitation. \cite{Wi02} and \cite{XuWiFo05} also model spatio-temporal rainfall based on IDEs. \section{\titleq{Solution in the Spectral Space}}\label{SpecSpace} Solutions $\xi(t,\vect{s})$ of the SPDE \eqref{SPDE} are defined in continuous space and time. In practice, one needs to discretize both space and time. The resulting vector of $NT$ space-time points is in general of large dimension. 
This makes statistical inference, be it frequentist or Bayesian, computationally difficult to impossible. However, as we show in the following, solving the SPDE in the spectral space alleviates the computational burden considerably and allows for dimension reduction, if desired. Heuristically speaking, spectral methods \citep[Chapter 7]{GoOr77,CrWi11} approximate the solution $\xi(t,\vect{s})$ by a linear combination of deterministic spatial functions $\phi_j(\vect{s})$ with random coefficients $\alpha_j(t)$ that evolve dynamically over time: \begin{equation}\label{FreqApprox} \begin{split} \xi^K(t,\vect{s})&=\sum_{j=1}^K{\alpha_j(t)\phi_j(\vect{s})}=\vect{\phi}(\vect{s})^T\vect{\alpha}(t), \end{split} \end{equation} where $\vect{\phi}(\vect{s})=(\phi_1(\vect{s}),\dots,\phi_K(\vect{s}))^T$ and $\vect{\alpha}(t)=(\alpha_1(t),\dots,\alpha_K(t))^T$. To be more specific, we use Fourier functions \begin{equation}\label{FourFunc} \phi_j(\vect{s})=\exp{(\imag \vect{k}_j^T \vect{s})}, \end{equation} where $\vect{k}_j =(k_j^x,k_j^y)^T$ is a spatial wavenumber. The advantages of using Fourier functions for solving linear, deterministic PDEs are well known, see, e.g., \citet{Pe87}. First, differentiation in the physical space corresponds to multiplication in the spectral space. In other words, Fourier functions are eigenfunctions of the spatial differential operator. Instead of approximating the differential operator in the physical space and then worrying about approximation errors, one just has to multiply in the spectral space, and there is no approximation error of the operator when all the basis functions are retained. In addition, one can use the FFT for efficiently transforming from the physical to the spectral space, and vice versa. 
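The statement that differentiation in the physical space corresponds to multiplication in the spectral space is easy to verify numerically. A minimal one-dimensional Python sketch (illustrative only) differentiates a periodic function by multiplying its FFT coefficients by $\imag k$:

```python
import numpy as np

n = 64
x = np.arange(n) / n                       # grid on [0, 1) with periodic boundary
k = 2 * np.pi * np.fft.fftfreq(n, d=1 / n) # angular wavenumbers in FFT ordering

f = np.sin(2 * np.pi * 3 * x)              # smooth, band-limited test function
# differentiate in the spectral space: multiply by i*k, then transform back
df = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
df_exact = 2 * np.pi * 3 * np.cos(2 * np.pi * 3 * x)
assert np.allclose(df, df_exact)
```

Because $\sin(6\pi x)$ is band-limited, the spectral derivative agrees with the exact one up to floating-point error; the analogous two-dimensional identity underlies \eqref{AdvecDiff} below.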
Proposition \ref{PropMod} shows that Fourier functions are also useful for the stochastic PDE \eqref{SPDE}: if the initial condition and the innovation process are in the space spanned by a finite number of Fourier functions, then the solution of the SPDE \eqref{SPDE} remains in this space for all times and can be given in explicit form. \begin{proposition}\label{PropMod} Assume that the initial state and the innovation terms are of the form \begin{equation}\label{SPDEAss} \xi^K(0,\vect{s}) =\vect{\phi}(\vect{s})^T\vect{\alpha}(0), \quad \epsilon^K(t,\vect{s}) =\vect{\phi}(\vect{s})^T\widetilde{\vect{\epsilon}}(t) \end{equation} where $\vect{\phi}(\vect{s})=(\phi_1(\vect{s}),\dots,\phi_K(\vect{s}))^T$, $\phi_j(\vect{s})$ is given in \eqref{FourFunc}, $\vect{\alpha}(0)\sim N\left(\vect 0,\textnormal{diag}\left(\widetilde{f}_0(\vect{k}_j)\right)\right)$, $\widetilde{f}_0(\cdot)$ being a spectral density, and $\widetilde{\vect{\epsilon}}(t)$ is a $K$-dimensional Gaussian white noise independent of $\vect{\alpha}(0)$ with \begin{equation}\label{SPDEAssStat}\mathrm{Cov}(\widetilde{\vect{\epsilon}}(t),\widetilde{\vect{\epsilon}}(t')) = \delta_{t,t'}\textnormal{diag}\left( \widetilde{f}(\vect{k}_j)\right),\end{equation} where $\widetilde{f}(\cdot)$ is a spectral density and $\delta_{t,t'}$ the Kronecker delta function equaling $1$ if $t=t'$ and zero otherwise. Then the process $\xi^K(t,\vect{s}) =\vect{\phi}(\vect{s})^T\vect{\alpha}(t)$, where the components $\alpha_j(t)$ are given by \begin{equation}\label{alpha} \alpha_j(t)= \expo{h_j t} \alpha_j(0) + \int_0^t \expo{h_j (t-\tint)} \widetilde{\epsilon}_j(\tint) d\tint \end{equation} with $h_j = -\imag \vect{\mu}^T\vect{k}_j-\vect{k}_j^T\mat{\Sigma}\vect{k}_j - \zeta$, is a solution of the SPDE in \eqref{SPDE}. 
For $t \rightarrow \infty$, the influence of the initial condition $\exp(h_j t) \alpha_j(0)$ converges to zero and the process $\xi^K(t,\vect{s})$ converges to a time stationary Gaussian process with mean zero and $$\mathrm{Cov}(\xi^K(t+\Delta t,\vect{s}),\xi^K(t,\vect{s}')) = \vect{\phi}(\vect{s})^T \textnormal{diag}\left(\frac{-\expo{h_j\Delta t}\widetilde{f}(\vect{k}_j)}{h_j + h_j^\ast}\right) \vect{\phi}(\vect{s}')^\ast,$$ where $.^\ast$ stands for complex conjugation. \end{proposition} This result shows that the solution of the SPDE is exact over time, given the frequencies included. In contrast to finite differences, one does not accumulate errors over time. This is related to the fact that there is no need for numerical stability conditions. For statistical applications, where the parameters are not known a priori, this is particularly useful. The approximation error of $\xi^K(t,\vect{s})$ to the space time stationary solution of the SPDE in \eqref{SPDE} only depends on the number of spectral terms and not on the temporal discretization, see also Proposition \ref{PropConv} below. Since Fourier terms are global functions, stationarity in space, but not in time, is a necessary assumption. \begin{proof} By \eqref{alpha}, we have \begin{equation*} \begin{split} \frac{\partial}{\partial t}\xi^K(t,\vect{s})&=\sum_{j=1}^K{\dot{\alpha}_j(t)\phi_j(\vect{s})}=\sum_{j=1}^K{(h_j \alpha_j(t) + \widetilde{\epsilon}_j(t))\phi_j(\vect{s})}. \end{split} \end{equation*} On the other hand, since the functions $\phi_j(\vect{s})=\exp{(\imag \vect{k}_j^T\vect{s})}$ are Fourier terms, differentiation in the physical space corresponds to multiplication in the spectral space: \begin{equation}\label{AdvecDiff} \vect{\mu}^T\nabla\phi_j(\vect{s})=i\vect{\mu}^T\vect{k}_j\phi_j(\vect{s}) \end{equation} and \begin{equation} \nabla\cdot\mat{\Sigma}\nabla\phi_j(\vect{s})=-\vect{k}_j^T\mat{\Sigma}\vect{k}_j\phi_j(\vect{s}). 
\end{equation} Therefore, by the definition of $h_j$, \begin{equation*} \left(-\vect{\mu}^T\nabla + \nabla\cdot\mat{\Sigma}\nabla -\zeta\right) \sum_{j=1}^K \alpha_j(t)\phi_j(\vect{s}) =\sum_{j=1}^K h_j \alpha_j(t) \phi_j(\vect{s}). \end{equation*} Together, we have $$\frac{\partial}{\partial t}\xi^K(t,\vect{s})= \left(-\vect{\mu}^T\nabla + \nabla\cdot\mat{\Sigma}\nabla -\zeta\right)\xi^K(t,\vect{s})+\epsilon^K(t,\vect{s})$$ which proves the first part of the proposition. Since the real part of $h_j$ is negative, $\exp(h_jt) \rightarrow 0$ for $t \rightarrow \infty$. Moreover, \begin{equation} \begin{split} \lim\limits_{t \rightarrow \infty} \mathrm{Cov}(\alpha_j(t+\Delta t),\alpha_{j'}(t))&= \lim\limits_{t \rightarrow \infty} \expo{h_j\Delta t} \delta_{j,j'}\widetilde{f}(\vect{k}_j) \int_0^t \expo{-(h_j + h_{j'}^\ast)(t-\tint)}d\tint\\ &= -\frac{\expo{h_j\Delta t}}{h_j + h_j^\ast}\delta_{j,j'}\widetilde{f}(\vect{k}_j), \end{split} \end{equation} and thus the last statement follows. \end{proof} We assume that the forcing term $\epsilon(t,.)$, the initial state $\xi(0,.)$, and consequently also the solution $\xi(t,.)$, are stationary in space. Recall the Cram\'er representation for a stationary field $\epsilon(t,.)$ $$\epsilon(t,\vect{s}) = \int \exp{(\imag \vect{k}^T \vect{s})} d \widetilde \epsilon_t(\vect{k})$$ where $\widetilde\epsilon_t$ has orthogonal increments $\mathrm{Cov}(d\widetilde\epsilon_t(\vect{k}),d\widetilde\epsilon_{t'}(\vect{l})) = \delta_{t,t'}\delta_{\vect{k},\vect{l}} \widetilde{f}(\vect{k})$ and $\widetilde{f}(\cdot)$ is the spectral density of $\epsilon(t,.)$ (see, e.g., \cite{CrLe67}). This implies that we can approximate any stationary field, in particular also the one with a Whittle covariance function, by a finite linear combination of complex exponentials, and the covariance of $\widetilde{\vect{\epsilon}}(t)$ is a diagonal matrix as required in the proposition. Its entries are specified in \eqref{WhittleSpec}. 
Concerning the initial state, one can use the stationary distribution of $\xi(t,.)$. An alternative choice is to use the same spatial distribution as for the innovations: $\widetilde{f}_0(\cdot)=\widetilde{f}(\cdot)$. \subsection{Approximation Bound} By passing to the limit $K \rightarrow \infty$ such that both the wavenumbers $\vect{k}_j$ cover the entire domain $\mathbb{R}^2$ and the distance between neighboring wavenumbers goes to zero, we obtain from \eqref{FreqApprox} the stationary (in space and time) solution with spectral density as in \eqref{SPDESpec}. In practice, if one uses the discrete Fourier transform (DFT), or its fast variant, the FFT, the wavenumbers are regularly spaced and the distance between them is fixed for all $K$ (see below). This implies that the covariance function of an approximate solution is periodic, which is equivalent to assuming a rectangular domain being wrapped around a torus. Since in most applications, the domain is fixed anyway, this is a reasonable assumption. Based on the above considerations, we assume, in the following, that $\vect s \in [0,1]^2$ with periodic boundary condition, i.e., that $[0,1]^2$ is wrapped on a torus. In practice, to avoid spurious periodicity, we can apply what is called ``padding''. This means that we take $\vect s \in [0,0.5]^2$ and then embed it in $[0,1]^2$. As in the discrete Fourier transform, if we choose $\vect s \in [0,1]^2$, it follows that the spatial wavenumbers $\vect k_j$ lie on the $n\times n$ grid given by $D_n=\{ 2\pi\cdot(i,j): -(n/2-1)\leq i,j \leq n/2\}=\{-2\pi(n/2-1),\dots,2\pi n/2\}^2$ with $n^2 = N = K$, $n$ being an even natural number. We then have the following convergence result.
\begin{proposition}\label{PropConv} When $N \rightarrow \infty$, the approximation $\xi^N(t,\vect{s})$ converges in law to the solution $\xi(t,\vect{s})$ of the SPDE \eqref{SPDE} with $\vect s \in [0,1]^2$ wrapped on a torus, and we have the bound \begin{equation} |C(t,\vect{s})-C^N(t,\vect{s})|\leq \sigma^2_{\xi}-\sigma^2_{\xi^N} , \end{equation} where $C(t,\vect{s})$ and $C^N(t,\vect{s})$ denote the covariance functions of $\xi(t,\vect{s})$ and $\xi^N(t,\vect{s})$, respectively, and where $\sigma^2_{\xi}=C(0,\vect{0})$ and $\sigma^2_{\xi^N}=C^N(0,\vect{0})$ denote the marginal variances of these two processes. \end{proposition} \begin{proof} Similarly as in \eqref{SPDECov} and due to $\vect{k} \in 2\pi\cdot \mathbb{Z}^2$, it follows that the covariance function of $\xi(t,\vect{s})$ is given by \begin{equation} \begin{split} C(t,\vect{s})=&\sum_{\vect{k} \in 2\pi\cdot \mathbb{Z}^2}\int f(\omega,\vect{k})\expo{\imag t\omega}d\omega\exp(\imag\vect{s}'\vect{k}) \\ =&\sum_{\vect{k} \in 2\pi\cdot\mathbb{Z}^2}\widetilde{f}(\vect{k})\frac{-\expo{h_{\vect k}t}}{h_{\vect k}+h_{\vect k}^*}\exp(\imag\vect{s}'\vect{k}), \end{split} \end{equation} where $h_{\vect k}= -\imag \vect{\mu}^T\vect{k}-\vect{k}^T\mat{\Sigma}\vect{k} - \zeta$. From Proposition \ref{PropMod} we know that the approximate solution $\xi^N(t,\vect{s})$ has the covariance function \begin{equation} C^N(t,\vect{s})=\sum_{\vect{k} \in D_n}\widetilde{f}(\vect{k})\frac{-\expo{h_{\vect k}t}}{h_{\vect k}+h_{\vect k}^*}\exp(\imag\vect{s}'\vect{k}). 
\end{equation} It follows that \begin{equation} \begin{split} |C(t,\vect{s})-C^N(t,\vect{s})|=&\left|\sum_{\vect{k} \in 2\pi\cdot \mathbb{Z}^2}\widetilde{f}(\vect{k})\frac{-\expo{h_{\vect k}t}}{h_{\vect k}+h_{\vect k}^*}(1-\mathbbmss{1}_{\{\vect k \in D_n\}} )\exp(\imag\vect{s}'\vect{k})\right|\\ \leq&\sum_{\vect{k} \in 2\pi\cdot \mathbb{Z}^2}\widetilde{f}(\vect{k})\frac{-1}{h_{\vect k}+h_{\vect k}^*}(1-\mathbbmss{1}_{\{\vect k \in D_n\}} )\\ =&\sigma^2_{\xi}-\sigma^2_{\xi^N}. \end{split} \end{equation} \end{proof} Not surprisingly, this result tells us that the rate of convergence essentially depends on the smoothness properties of the process $\xi(t,\vect{s})$, i.e., on how fast the spectrum decays. The smoother $\xi(t,\vect{s})$, that is, the more variation is explained by low frequencies, the faster is the convergence of the approximation. Note that there is a conceptual difference between the stationary solution of the SPDE \eqref{SPDE} with $\vect{s} \in \mathbb{R}^{2}$ and the periodic one with $\vect s \in [0,1]^2$ wrapped on a torus. For the sake of notational simplicity, we have denoted both of them by $\xi(t,\vect{s})$. The finite dimensional solution $\xi^N(t,\vect{s})$ is an approximation to both of the above infinite dimensional solutions. The above convergence result, though, only holds true for the solution on the torus. \subsection{Real Fourier Functions and Discretization in Time and Space} To apply the model to real data, we have to discretize it. In the following, we consider the process $\xi(t,\vect{s})$ on a regular grid of $n \times n =N$ spatial locations $\vect{s}_1,\dots,\vect{s}_N$ in $[0,1]^2$ and at equidistant time points $t_1,\dots,t_T$ with $t_i-t_{i-1}=\Delta$. Note that these two assumptions can be easily relaxed, i.e., one can have irregular spatial observation locations and non-equidistant time points. 
The former can be achieved by adopting a data augmentation approach (see, for instance, \cite{SiKuSt11}) or by using an incidence matrix (see Section \ref{dimred}). The latter can be done by taking a time-varying $\Delta$. For the sake of illustration, we have stated the results in the previous section using complex Fourier functions. However, when discretizing the model, one obtains a linear Gaussian state space model with a propagator matrix $\mat{G}$ that contains complex numbers, due to \eqref{AdvecDiff}. To avoid this, we replace the complex terms $\exp{(\imag \vect{k}_j^T\vect{s})}$ with real $\cos(\vect{k}_j^T\vect{s})$ and $\sin(\vect{k}_j^T\vect{s})$ functions. In other words, we use the real instead of the complex Fourier transform. The above results then still hold true, since for real-valued data, the real Fourier transform is equivalent to the complex one. For notational simplicity, we will drop the superscript ``$^K$'' from $\xi^K(t,\vect{s})$. The distinction between the approximation and the true solution is clear from the context.
\begin{proposition}\label{PropRealDiscr} On the above specified discretized spatial and temporal domain and using the real Fourier transform, with initial state $\vect \alpha(t_0)\sim N(0,\widetilde{\mat{Q}}_0)$, $\widetilde{\mat{Q}}_0$ diagonal, a stationary solution of the SPDE \eqref{SPDE} is of the form \begin{align}\label{SolFTVec} \vect{\xi}(t_{i+1})&=\mat{\Phi}\vect{\alpha}(t_{i+1}),\\ \vect{\alpha}(t_{i+1})&=\mat{G}\vect{\alpha}(t_i)+\widetilde{\vect{\epsilon}}(t_{i+1}),~~\widetilde{\vect{\epsilon}}(t_{i+1})\sim N(0,\widetilde{\mat{Q}}),\label{SolCoef} \end{align} with stacked vectors $\vect{\xi}(t_i)=(\xi(t_i,\vect s_1),\dots,\xi(t_i,\vect s_N))^T$ and cosine and sine coefficients $\vect{\alpha}(t_i)=\left(\alpha_1^{(c)}(t_i),\dots,\alpha_4^{(c)}(t_i),\alpha_5^{(c)}(t_i),\alpha_5^{(s)}(t_i),\dots,\alpha_{K/2+2}^{(c)}(t_i),\alpha_{K/2+2}^{(s)}(t_i)\right)^T,$ where $\mat{\Phi}$ applies the discrete, real Fourier transformation, $\mat{G}$ is a block diagonal matrix with $2 \times 2$ blocks, and $\widetilde{\mat{Q}}$ is a diagonal matrix. The above matrices are defined as follows. 
\begin{itemize} \item $\mat{\Phi}=\left[\vect{\phi}(\vect{s}_1),\dots,\vect{\phi}(\vect{s}_N)\right]^T,$\\ $\vect{\phi}(\vect{s}_l)=\left(\phi_1^{(c)}(\vect{s}_l),\dots,\phi_4^{(c)}(\vect{s}_l),\phi_5^{(c)}(\vect{s}_l),\phi_5^{(s)}(\vect{s}_l),\dots,\phi_{K/2+2}^{(c)}(\vect{s}_l),\phi_{K/2+2}^{(s)}(\vect{s}_l)\right)^T,$\\ $\phi^{(c)}_j(\vect{s}_l)=\cos(\vect{k}_j^T\vect{s}_l),~~\phi^{(s)}_j(\vect{s}_l)=\sin(\vect{k}_j^T\vect{s}_l)$, $l=1,\dots,n^2$ \item $[\mat G]_{1:4,1:4}=\textnormal{diag}\left(\expo{ -\Delta (\vect{k}_j^T\mat{\Sigma}\vect{k}_j+ \zeta)}\right),$ $[\mat G]_{5:K,5:K}=\textnormal{diag}\left(\expo{-\Delta(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}\left(\cos(\Delta\vect{\mu}^T\vect{k}_j)\mat 1_2-\sin(\Delta\vect{\mu}^T\vect{k}_j)\mat{J}_2\right)\right),$ where \begin{equation} \mat 1_2=\left(\begin{matrix} 1&0 \\0 &1 \end{matrix}\right), ~~\mat{J}_2=\left(\begin{matrix} 0&1 \\-1 &0 \end{matrix}\right), \end{equation} \item $\widetilde{\mat{Q}}=\textnormal{diag}\left(\widetilde{f}(\vect{k}_j)\frac{1-\expo{-2\Delta(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}}{2(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}\right),$ \item $\widetilde{\mat{Q}}_0=(\mat 1_N-\mat G \mat G^T)^{-1}\widetilde{\mat{Q}}$. \end{itemize} \end{proposition} In summary, at each time point $t$ and spatial point $\vect s_l$, $l=1,\dots,n^2$, the solution $\xi(t,\vect{s}_l)$ is the discrete real Fourier transform of the random coefficients $\vect{\alpha}(t)$ \begin{equation}\label{SolFT} \begin{split} \xi(t,\vect{s}_l) &= \sum_{j=1}^{4}\alpha^{(c)}_j(t)\phi^{(c)}_j(\vect{s}_l)+\sum_{j=5}^{K/2+2}{\left(\alpha^{(c)}_j(t)\phi^{(c)}_j(\vect{s}_l)+\alpha^{(s)}_j(t)\phi^{(s)}_j(\vect{s}_l)\right)}\\ &=\vect{\phi}(\vect{s}_l)^T\vect{\alpha}(t), \end{split} \end{equation} and the Fourier coefficients $\vect{\alpha}(t)$ evolve dynamically over time according to the vector autoregression in \eqref{SolCoef}.
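The closed-form blocks of $\mat G$ and the stationary initial variance $\widetilde{\mat{Q}}_0$ can be verified numerically for a single cosine-sine pair. The sketch below (all parameter values are hypothetical) compares the $2\times 2$ block with the matrix exponential of $\mat H = -(\vect{k}^T\mat{\Sigma}\vect{k}+\zeta)\mat 1_2-(\vect{\mu}^T\vect{k})\mat J_2$ and checks that $\widetilde{\mat{Q}}_0$ is indeed stationary:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical parameters for one wavenumber with a cosine-sine pair.
k = np.array([2 * np.pi, 2 * np.pi])
mu = np.array([0.15, -0.05])
Sigma = np.diag([0.02, 0.01])
zeta, f_k, Delta = 0.4, 1.0, 0.1

damp = k @ Sigma @ k + zeta
I2 = np.eye(2)
J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])

# 2x2 block of the propagator G as given in the proposition ...
G = np.exp(-Delta * damp) * (np.cos(Delta * (mu @ k)) * I2
                             - np.sin(Delta * (mu @ k)) * J2)
# ... which equals exp(H Delta) with H = -(k' Sigma k + zeta) I - (mu' k) J.
H = -damp * I2 - (mu @ k) * J2
err_expm = np.max(np.abs(G - expm(Delta * H)))

# Diagonal innovation variance and stationary Q0 = (1 - G G^T)^{-1} Q.
q = f_k * (1.0 - np.exp(-2.0 * Delta * damp)) / (2.0 * damp)
Q = q * I2
Q0 = np.linalg.solve(I2 - G @ G.T, Q)

# Stationarity: propagating Q0 one step and adding Q recovers Q0.
err_stat = np.max(np.abs(G @ Q0 @ G.T + Q - Q0))
print(err_expm, err_stat)
```

Since the block of $\mat G$ is a scaled rotation, $\mat G\mat G^T$ is a multiple of the identity, which is why $\widetilde{\mat{Q}}_0$ stays diagonal.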
The first four terms are cosine terms and, afterward, there are cosine-sine pairs. This is a peculiarity of the real Fourier transform. It is due to the fact that for four wavenumbers $\vect k_j$, the sine terms equal zero on the grid, i.e., $\sin(\vect k_j^T\vect s_l)=0$, for all $l=1,\dots,n^2$ and $\vect k_j\in \{(0,0)^T,(0,n \pi)^T,(n\pi,0)^T,(n\pi,n\pi)^T\}$ (see Figure \ref{fig:WaveIllus}). The above equations \eqref{SolFTVec} and \eqref{SolCoef} form a linear Gaussian state space model with parametric propagator matrix $\mat{G}$ and innovation covariance matrix $\widetilde{\mat{Q}}$, the parametrization being determined by the corresponding SPDE. The model in \eqref{SolFTVec} and \eqref{SolCoef} is similar to the one discussed in \citet[Chapter 7]{CrWi11}, but our derivation differs in that the model is obtained as an exact solution to the stochastic PDE \eqref{SPDE} rather than to a deterministic PDE. \begin{proof} As in Proposition \ref{PropMod}, we first derive the continuous-time solution. Using \begin{equation*} \vect{\mu}^T\nabla\phi^{(c)}_j(\vect{s}_l)=-\vect{\mu}^T\vect{k}_j\phi^{(s)}_j(\vect{s}_l),~~ \vect{\mu}^T\nabla\phi^{(s)}_j(\vect{s}_l)=\vect{\mu}^T\vect{k}_j\phi^{(c)}_j(\vect{s}_l), \end{equation*} \begin{equation*} \nabla\cdot\mat{\Sigma}\nabla\phi^{(c)}_j(\vect{s}_l)=-\vect{k}_j^T\mat{\Sigma}\vect{k}_j\phi^{(c)}_j(\vect{s}_l),~~ \nabla\cdot\mat{\Sigma}\nabla\phi^{(s)}_j(\vect{s}_l)=-\vect{k}_j^T\mat{\Sigma}\vect{k}_j\phi^{(s)}_j(\vect{s}_l), \end{equation*} and the same arguments as in the proof of Proposition \ref{PropMod}, it follows that the continuous-time solution is of the form \eqref{SolFT}.
For each pair of cosine-sine coefficients $\vect \alpha_j(t)=(\alpha^{(c)}_j(t),\alpha^{(s)}_j(t))^T$ we have \begin{equation}\label{RealSol} \vect \alpha_j(t)=e^{\mat{H}_j t}\vect \alpha_j(0)+\int_0^te^{\mat{H}_j (t-\tint)} \widetilde{\vect{\epsilon}}_j(\tint)d\tint, \end{equation} where \begin{equation*} \mat{H}_j=\left(\begin{matrix} -\vect{k}_j^T\mat{\Sigma}\vect{k}_j - \zeta&-\vect{\mu}^T\vect{k}_j \\\vect{\mu}^T\vect{k}_j &-\vect{k}_j^T\mat{\Sigma}\vect{k}_j - \zeta \end{matrix}\right). \end{equation*} Now $\mat{H}_j$ can be written as \begin{equation*} \mat{H}_j=(-\vect{k}_j^T\mat{\Sigma}\vect{k}_j - \zeta) \mat 1_2 - \vect{\mu}^T\vect{k}_j \mat{J}_2, \end{equation*} where \begin{equation*} \mat 1_2=\left(\begin{matrix} 1&0 \\0 &1 \end{matrix}\right), ~~\mat{J}_2=\left(\begin{matrix} 0&1 \\-1 &0 \end{matrix}\right). \end{equation*} Since $\mat 1_2$ and $\mat{J}_2$ commute, we have \begin{equation}\label{propPairs} \begin{split} e^{\mat{H}_j t}=&\expo{-t(\vect{k}_j^T\mat{\Sigma}\vect{k}_j+ \zeta)\mat 1_2} \expo{-t\vect{\mu}^T\vect{k}_j\mat{J}_2}\\ =&\expo{-t(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}\left(\cos(t\vect{\mu}^T\vect{k}_j)\mat 1_2-\sin(t\vect{\mu}^T\vect{k}_j)\mat{J}_2\right). \end{split} \end{equation} For the calculation of the exponential function of the matrix $\mat{J}_2$, see, e.g., \citet[Chapter 4]{Br07}. Analogously, one derives for the first four cosine terms \begin{equation}\label{propCosine} \alpha_j^{(c)}(t)=e^{-\left(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta\right) t} \alpha_j^{(c)}(0)+\int_0^te^{-\left(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta\right)(t-\tint)} \widetilde{\epsilon}_j(\tint)d\tint, ~~j=1,\dots,4. \end{equation} The above expressions \eqref{propPairs} and \eqref{propCosine} give the propagator matrix $\mat G$.
For the discrete-time solution, in addition to the propagation $$\vect \alpha_j(t+\Delta)=e^{\mat{H}_j \Delta}\vect \alpha_j(t),$$ we need to calculate the covariance of the integrated stochastic innovation term $$\int_t^{t+\Delta}e^{\mat{H}_j (t+\Delta-\tint)} \widetilde{\vect{\epsilon}}_j(\tint)d\tint .$$ This is calculated as \begin{equation*} \begin{split} \int_t^{t+\Delta}e^{\mat{H}_j (t+\Delta-\tint)} \widetilde{f}(\vect{k}_j)e^{\mat{H}'_j (t+\Delta-\tint)}d\tint&=\int_0^{\Delta}e^{\mat{H}_j (\Delta-\tint)} \widetilde{f}(\vect{k}_j)e^{\mat{H}'_j (\Delta-\tint)}d\tint\\ &=\int_0^{\Delta}\widetilde{f}(\vect{k}_j)\expo{-2(\vect{k}_j^T\mat{\Sigma}\vect{k}_j+ \zeta)(\Delta-\tint)}\mat 1_2d\tint\\ &=\widetilde{f}(\vect{k}_j)\frac{1-\expo{-2(\vect{k}_j^T\mat{\Sigma}\vect{k}_j+ \zeta)\Delta}}{2(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}\mat 1_2. \end{split} \end{equation*} For the first four cosine terms, calculations are done analogously. The covariance matrix $\widetilde{\mat{Q}}_0$ of the initial state $\vect \alpha(t_0)$ is assumed to be the covariance matrix of the stationary distribution of $\vect \alpha(t_i)$. Note that $\widetilde{\mat{Q}}_0$ is diagonal since $\mat G \mat G^T$ is diagonal; see the derivation of Algorithm \ref{skf} in Section \ref{kfbsss}. This then gives the result in \eqref{SolFTVec} and \eqref{SolCoef}. \end{proof} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{WaveNumbers} \end{center} \caption{Illustration of spatial wavenumbers for the two-dimensional discrete real Fourier transform with $n^2=400$ grid points.} \label{fig:WaveNumbersIllus} \end{figure} The discrete complex Fourier transform uses $n^2$ different wavenumbers $\vect k_j$, each having a corresponding Fourier term $\exp(\imag \vect k_j^T \vect s)$. The real Fourier transform, on the other hand, uses $n^2/2 +2$ different wavenumbers, where four of them have only a cosine term and the others each have sine and cosine terms.
This follows from the fact that, for real data, certain coefficients of the complex transform are the complex conjugates of other coefficients. For technical details on the real Fourier transform, we refer to \cite{DuMe84}, \cite{BoTaHa84}, \cite{RoWi05}, and \cite{Pa07}. Figure \ref{fig:WaveNumbersIllus} illustrates an example of the spatial wavenumbers, with $n^2=20\times 20=400$ grid points. The dots with a circle represent the wavenumbers actually used in the real Fourier transform, and the red crosses mark the wavenumbers having only a cosine term. Note that in \eqref{SolFT} we choose to order the spatial wavenumbers such that the first four spatial wavenumbers correspond to the cosine-only terms. \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{FourierBasis} \end{center} \caption{Illustration of two-dimensional Fourier basis functions used in the discrete real Fourier transform with $n^2=400$. On the x- and y-axis are the coordinates of $\vect{s}$.} \label{fig:WaveIllus} \end{figure} To get an idea of what the basis functions $\cos{(\vect{k}_j^T\vect{s})}$ and $\sin{(\vect{k}_j^T\vect{s})}$ look like, we plot in Figure \ref{fig:WaveIllus} twelve low-frequency basis functions corresponding to the six spatial frequencies closest to the origin $\vect 0$. \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{Propagator} \caption{Illustration of the propagator matrix $\mat G$. 16 real Fourier functions are used ($n=4$).} \label{fig:Propagator} \end{figure} Further, Figure \ref{fig:Propagator} shows an example of a propagator matrix $\mat G$ when $n=4$, i.e., when sixteen ($4^2$) spatial basis functions are used. The upper left $4 \times 4$ diagonal matrix corresponds to the cosine-only frequencies. The following $2 \times 2$ blocks correspond to wavenumbers with cosine-sine pairs.
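The count of $K/2+2=n^2/2+2$ distinct wavenumbers, four of which carry a cosine term only, can be reproduced by pairing each frequency of the $n \times n$ grid with its conjugate partner; the sketch below uses the same $n=20$ as in Figure \ref{fig:WaveNumbersIllus}:

```python
import numpy as np

# Frequencies of an n x n DFT grid, indexed modulo n.
n = 20
freqs = [(i, j) for i in range(n) for j in range(n)]

# For real data, coefficient (i, j) and its conjugate partner (-i, -j)
# (mod n) carry the same information, so we group them.
groups = {tuple(sorted({(i, j), ((-i) % n, (-j) % n)})) for i, j in freqs}

# Self-conjugate frequencies form a group of size one: cosine term only.
cosine_only = [g for g in groups if len(g) == 1]
print(len(groups), len(cosine_only))
```

The four self-conjugate frequencies are $(0,0)$, $(0,n/2)$, $(n/2,0)$, and $(n/2,n/2)$ in grid units, i.e., the wavenumbers $(0,0)^T$, $(0,n\pi)^T$, $(n\pi,0)^T$, and $(n\pi,n\pi)^T$.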
Concerning notation in this paper, $K$ refers to the number of Fourier terms, i.e., this is the dimension of the spectral process $\vect \alpha(t)$ at each time $t$. Furthermore, $N$ denotes the number of points at which the process $\vect \xi(t)$ is modeled, and $n$ is the number of points on each axis of the quadratic grid used. Often, we have $n^2=N=K$. However, if one uses a reduced dimensional Fourier basis, $K$ is smaller than $N$, see Section \ref{dimred}. \subsection{Remarks on Finite Differences}\label{SpecvsFD} Another approach to solve PDEs or SPDEs such as the one in \eqref{SPDE} consists of using a discretization such as finite differences. \cite{StEtAl10} use finite differences to solve an advection-diffusion PDE. Other examples are \cite{Wi03}, \cite{XuWi07}, \cite{DuGeSi}, \cite{MaEtAl08}, and \cite{ZhAu10}. The finite difference approximation, however, has several disadvantages. First, each spatial discretization effectively implies an interaction structure between temporal and spatial correlation. In other words, as \cite{XuWiFo05} state, the discretization effectively suggests a knowledge of the scale of interaction, lagged in time. Usually, this space-time covariance interaction structure is not known, though. Furthermore, there are numerical stability conditions that need to be fulfilled so that the approximate solution is meaningful. Since these conditions depend on the values of the unknown parameters, one can run into problems. In addition, computational tractability is an issue. In fact, we have tried to solve the SPDE in \eqref{SPDE} using finite differences as described in the following. A finite difference approximation in \eqref{SPDE} leads to a vector autoregressive model with a sparse propagator matrix being determined by the discretization. The innovation term $\epsilon$ can be approximated using a Gaussian Markov random field with sparse precision matrix (see \cite{LiLiRu10}). 
Even though the propagator and the precision matrices of the innovations are sparse, we have run into a computational bottleneck when using the Forward Filtering Backward Sampling (FFBS) algorithm \citep{CaKo94, Fr94} for fitting the model. The basic problem is that the Kalman gain is eventually a dense matrix. Alternative sampling schemes like the information filter (see, e.g., \cite{AnMo79} and \cite{ViFe09}) did not solve the problem either. However, future research on this topic might come up with solutions. \section{Computationally Efficient Statistical Inference}\label{inference} The computational cost for one evaluation of the likelihood or one sample from the full conditional in a spatio-temporal model with $T$ time points and $N$ spatial points equals $O((NT)^3)$ when taking a naive approach. Using the Kalman filter or the Forward Filtering Backward Sampling (FFBS) algorithm \citep{CaKo94, Fr94}, depending on what is needed, this cost is reduced to $O(T N^3)$, which, generally, is still too high for large data sets. In the following, we show how evaluation of the likelihood and sampling from the full conditional of the latent process can be done efficiently in $O(TN \log N)$ operations. In the spectral space, the costs of the algorithms grow linearly in the dimension $TN$, which means that the total computational costs are dominated by the costs of the fast Fourier transform (FFT) \citep{CoTu65}, which are $O(TN \log N)$. Furthermore, computational time can be reduced by running the $T$ different FFTs in parallel. As is often done in a statistical model, we add a non-structured Gaussian term $\nu(t_{i+1},\vect s) \sim N(0,\tau^2)$, iid, to \eqref{SolFTVec} to account for small-scale variation and/or measurement errors. In geostatistics, this term is called the nugget effect.
Denoting the observations at time $t_i$ by $\vect{w}(t_i)$, we then have the following linear Gaussian state space model: \begin{align}\label{GausObs} \vect{w}(t_{i+1})&=\mat{\Phi}\vect{\alpha}(t_{i+1})+\vect \nu(t_{i+1}), &\vect \nu(t_{i+1})\sim N(0,\tau^2 \mat 1_N),\\ \vect{\alpha}(t_{i+1})&=\mat{G}\vect{\alpha}(t_i)+\widetilde{\vect{\epsilon}}(t_{i+1}),&\widetilde{\vect{\epsilon}}(t_{i+1})\sim N(0,\widetilde{\mat{Q}}).\nonumber \end{align} Note that $\vect{\xi}(t_{i+1})=\mat{\Phi}\vect{\alpha}(t_{i+1})$. As mentioned before, irregular spatial data can be modeled by adopting a data augmentation approach (see \cite{SiKuSt11}) or by using an incidence matrix (see Section \ref{dimred}). For the sake of simplicity, we assume a zero mean. Extending the model by including covariates in a regression term is straightforward. Furthermore, we assume normality. The model can be easily generalized to allow for data not following a Gaussian distribution. For instance, this can be done by including it in a Bayesian hierarchical model (BHM) \citep{WiBeCr98} and specifying a non-Gaussian distribution for $\vect{w}|\vect{\xi}$. The posterior can then no longer be evaluated exactly, but approximate posterior probabilities can still be computed using, for instance, simulation-based methods such as Markov chain Monte Carlo (MCMC) (see, e.g., \cite{GilWRS96} or \cite{RobCC04}). An additional advantage of BHMs is that these models can be extended, for instance, to account for temporal non-stationarity by letting one or several parameters vary over time. \subsection{Kalman Filtering and Backward Sampling in the Spectral Space}\label{kfbsss} Whether one follows a frequentist or a Bayesian paradigm, it is crucial that one is able to evaluate the likelihood of the hyper-parameters given $\vect w$ with a reasonable computational effort.
In addition, when doing Bayesian inference, one needs to be able to simulate efficiently from the full conditional of the latent process $[\vect \xi|\cdot]$, or, equivalently, the Fourier coefficients $[\vect \alpha|\cdot]$. Below, we show how both these tasks can be done in the spectral space in linear time, i.e., using $O(TN)$ operations. For transforming between the physical and spectral space, one can use the FFT which requires $O(TN \log N)$ operations. We start with the spectral version of the Kalman filter. Its output is used for both evaluating the log-likelihood and for simulating from the full conditional of the coefficients $\vect \alpha$. \begin{algorithm} \caption{Spectral Kalman filter}\label{skf} \textbf{Input:} $T$,$\widetilde{\vect w}$, $\mat G$, $\tau^2$, $\widetilde{\mat{Q}}$, $\mat F$ \\ \textbf{Output:} forecast and filter means $\vect m_{t_i|t_{i-1}}$, $\vect m_{t_i|t_{i}}$ and covariance matrices $\mat R_{t_i|t_{i}}$, $\mat R_{t_i|t_{i-1}}$, $i=1,\dots,T$ \begin{algorithmic} \STATE $ \vect m_{t_0|t_0}= \vect 0$ \STATE $\mat R_{t_0|t_0}=\widetilde{\mat{Q}}$ \FOR{$i=1$ to $T$} \STATE $\vect m_{t_i|t_{i-1}}=\mat G \vect m_{t_{i-1}|t_{i-1}}$ \STATE $\mat R_{t_i|t_{i-1}}=\widetilde{\mat{Q}} + \mat R_{t_{i-1}|t_{i-1}} \mat F$ \STATE $\mat R_{t_i|t_{i}}=\left(\tau^{-2}\mat 1_N+\mat R_{t_i|t_{i-1}}^{-1}\right)^{-1}$ \STATE $\vect m_{t_i|t_i}=\vect m_{t_i|t_{i-1}}+\tau^{-2}\mat R_{t_i|t_i}\left( \widetilde{\vect w}(t_i)- \vect m_{t_i|t_{i-1}}\right)$ \ENDFOR \end{algorithmic} \end{algorithm} Algorithm \ref{skf} shows the Kalman filter in the spectral space. For the sake of simplicity, we assume that the initial distribution equals the innovation distribution. 
The spectral Kalman filter has as input the Fourier transform $\widetilde{\vect w}=(\widetilde{\vect w}(t_1)^T,\dots,\widetilde{\vect w}(t_T)^T)^T$ of $\vect w$, the diagonal matrix $\mat F$ given by \begin{equation}\label{Hdef} \begin{split} [\mat F]_{1:4,1:4}&=\textnormal{diag}\left(\expo{ -2\Delta (\vect{k}_j^T\mat{\Sigma}\vect{k}_j+ \zeta)}\right),\\ [\mat F]_{5:N,5:N}&=\textnormal{diag}\left(\expo{-2\Delta(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}\mat 1_2\right), \end{split} \end{equation} and other parameters that characterize the SPDE model. It returns forecast and filter means $\vect m_{t_i|t_{i-1}}$ and $\vect m_{t_i|t_{i}}$ and covariance matrices $\mat R_{t_i|t_{i}}$ and $\mat R_{t_i|t_{i-1}}$, $i=1,\dots,T$, respectively. That is, $\vect m_{t_i|t_{i}}$ and $\mat R_{t_i|t_{i}}$ are the mean and the covariance matrix of $\vect \alpha (t_i)$ given the data $\{\vect w(t_j):j=1,\dots,i\}$ up to time $t_i$. Analogously, $\vect m_{t_i|t_{i-1}}$ and $\mat R_{t_i|t_{i-1}}$ are the forecast mean and covariance matrix given data up to time $t_{i-1}$. We follow the notation of \cite{Ku01}. Since the matrices $\widetilde{\mat{Q}}$ and $\mat F$ are diagonal, the covariance matrices $\mat R_{t_i|t_{i}}$ and $\mat R_{t_i|t_{i-1}}$ are also diagonal. Note that the matrix notation in Algorithm \ref{skf} is used solely for illustrative purposes. In practice, matrix-vector products ($\mat G \vect m_{t_{i-1}|t_{i-1}}$), matrix multiplications ($\mat R_{t_{i-1}|t_{i-1}}\mat F$), and matrix inversions $\left(\tau^{-2}\mat 1_N+\mat R_{t_i|t_{i-1}}^{-1}\right)^{-1}$ are not calculated with general-purpose algorithms but elementwise since all matrices are diagonal or $2 \times 2$ block diagonal. It follows that the computational cost for this algorithm is $O(TN)$.
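The elementwise computations can be sketched for the purely diagonal part of the filter (the cosine-only frequencies); the $2\times 2$ cosine-sine blocks are handled analogously. All inputs below are hypothetical, with covariance matrices stored as vectors of their diagonals:

```python
import numpy as np

def spectral_kalman_filter(w_tilde, g, q, f, tau2):
    """One O(TN) filtering sweep with all matrices diagonal: g = diag(G),
    q = diag(Q-tilde), f = diag(F) = diag(G G^T); w_tilde is (T, N)."""
    T, N = w_tilde.shape
    m_filt, R_filt = np.zeros(N), q.copy()      # initial state ~ innovation law
    m_fore, R_fore = np.zeros((T, N)), np.zeros((T, N))
    for i in range(T):
        m_fore[i] = g * m_filt                  # G m
        R_fore[i] = q + R_filt * f              # Q + R F
        R_filt = 1.0 / (1.0 / tau2 + 1.0 / R_fore[i])
        m_filt = m_fore[i] + (R_filt / tau2) * (w_tilde[i] - m_fore[i])
    return m_filt, R_filt, m_fore, R_fore

# Toy run with hypothetical diagonal model quantities.
rng = np.random.default_rng(2)
T, N, tau2 = 50, 16, 0.5
g = np.exp(-rng.uniform(0.05, 0.5, size=N))     # stable: |g| < 1
q = rng.uniform(0.5, 1.5, size=N)
m, R, m_fore, R_fore = spectral_kalman_filter(rng.normal(size=(T, N)), g, q, g**2, tau2)
print(R[:3])
```

Each filter variance is a harmonic combination of $\tau^2$ and the corresponding forecast variance and is therefore smaller than both.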
The derivation of this algorithm follows from the classical Kalman filter (see, e.g., \cite{Ku01}) using $\mat \Phi' \mat \Phi =\mat 1_N$, $\mat G\mat R_{t_{i-1}|t_{i-1}} \mat G^T=\mat R_{t_{i-1}|t_{i-1}} \mat G \mat G^T$, and the fact that $\mat G \mat G^T=\mat F$. The first equation holds true due to the orthonormality of the discrete Fourier transform. The second equation follows from the fact that $\mat G$ is $2 \times 2$ block diagonal and that $\mat R_{t_{i-1}|t_{i-1}}$ is diagonal with the diagonal entries being equal for each cosine-sine pair. The last equation holds true as shown in the following. Since this is obvious for the first four frequencies, we consider the $2 \times 2$ diagonal blocks of cosine-sine pairs: \begin{equation*} \begin{split} &[\mat G]_{(2l-5):(2l-4),(2l-5):(2l-4)}[\mat G]_{(2l-5):(2l-4),(2l-5):(2l-4)}^T\\ &=\expo{-2\Delta(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}\left(\cos(\Delta\vect{\mu}^T\vect{k}_j)\mat 1_2-\sin(\Delta\vect{\mu}^T\vect{k}_j)\mat{J}_2\right)\left(\cos(\Delta\vect{\mu}^T\vect{k}_j)\mat 1_2-\sin(\Delta\vect{\mu}^T\vect{k}_j)\mat{J}_2\right)^T\\ &=\expo{-2\Delta(\vect{k}_j^T\mat{\Sigma}\vect{k}_j + \zeta)}\left(\cos(\Delta\vect{\mu}^T\vect{k}_j)^2+\sin(\Delta\vect{\mu}^T\vect{k}_j)^2 \right)\mat{1}_2, \end{split} \end{equation*} $l=5,\dots, N/2+2$, which equals \eqref{Hdef}. In the last equation we have used $$\mat J_2^T=-\mat J_2 ~~\textnormal{and}~~ \mat J_2^2=-\mat 1_2.$$ Based on the Kalman filter, the log-likelihood is calculated as (see, e.g., \cite{ShSt00}) \begin{equation}\label{ll} \begin{split} \ell =& -\frac{1}{2}\sum_{i=1}^T{\left[\log\left|\mat R_{t_i|t_{i-1}}+\tau^2\mat 1_N\right|+\left(\widetilde{\vect w}(t_i)-\vect m_{t_i|t_{i-1}}\right)^T\left(\mat R_{t_i|t_{i-1}}+\tau^2\mat 1_N\right)^{-1}\left(\widetilde{\vect w}(t_i)-\vect m_{t_i|t_{i-1}}\right)\right]}\\ &-\frac{TN}{2}\log(2\pi).
\end{split} \end{equation} Since the forecast covariance matrices $\mat R_{t_i|t_{i-1}}$ are diagonal, calculation of their determinants and their inverses is trivial, and the computational cost is again $O(TN)$. \begin{algorithm} \caption{Spectral backward sampling}\label{sbs} \textbf{Input:} $T$, $\mat G$, $\widetilde{\mat{Q}}$, $\mat F$, $\vect m_{t_i|t_{i-1}}$, $\vect m_{t_i|t_{i}}$, $\mat R_{t_i|t_{i}}$, $\mat R_{t_i|t_{i-1}}$, $i=1,\dots,T$\\ \textbf{Output:} a sample $\vect{\alpha}^*(t_1),\dots,\vect{\alpha}^*(t_T)$ from $[\vect \alpha|\cdot]$ \begin{algorithmic} \STATE $\vect{\alpha}^*(t_T)=\vect m_{t_T|t_{T}}+\left(\mat R_{t_T|t_{T}}\right)^{1/2} \vect n_T,~~ \vect n_T \sim N(\vect 0,\mat 1_N)$ \FOR{$i=T-1$ to $1$} \STATE $\overline{\vect m}_{t_i}=\vect m_{t_i|t_{i}}+\mat R_{t_i|t_{i}}\mat R_{t_{i+1}|t_{i}}^{-1}\mat G^T\left(\vect{\alpha}^*(t_{i+1})-\vect m_{t_{i+1}|t_{i}}\right)$ \STATE $\overline{\mat R}_{t_i}=\left(\widetilde{\mat{Q}}^{-1} \mat F + \mat R_{t_{i}|t_{i}}^{-1}\right)^{-1}$ \STATE $\vect{\alpha}^*(t_i)=\overline{\vect m}_{t_i} +\left(\overline{\mat R}_{t_i}\right)^{1/2} \vect n_i,~~ \vect n_i \sim N(\vect 0,\mat 1_N)$ \ENDFOR \end{algorithmic} \end{algorithm} In a Bayesian context, the main difficulty consists in simulating from the full conditional of the latent coefficients $[\vect \alpha|\cdot]$. After running the Kalman filter, this can be done with a backward sampling step. Together, these two algorithms are known as Forward Filtering Backward Sampling (FFBS) \citep{CaKo94, Fr94}. Again, backward sampling is computationally very efficient in the spectral space with cost $O(TN)$. Algorithm \ref{sbs} shows the backward sampling algorithm in the spectral space. The matrices $\overline{\mat R}_{t_i}$ are diagonal, which makes their Cholesky decomposition trivial.
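With diagonal forecast covariances, the Gaussian log-likelihood reduces to elementwise operations. The following sketch (with hypothetical filter output) cross-checks the $O(TN)$ diagonal computation against dense multivariate normal densities:

```python
import numpy as np
from scipy.stats import multivariate_normal

def spectral_loglik(w_tilde, m_fore, R_fore, tau2):
    """Gaussian log-likelihood with diagonal forecast covariances: the
    log-determinant and the quadratic form are sums over the diagonal."""
    v = R_fore + tau2                      # diagonal of R + tau^2 I
    resid = w_tilde - m_fore
    T, N = w_tilde.shape
    return -0.5 * (np.sum(np.log(v)) + np.sum(resid ** 2 / v)
                   + T * N * np.log(2.0 * np.pi))

# Hypothetical forecast means/variances as a Kalman filter would return.
rng = np.random.default_rng(3)
T, N, tau2 = 20, 9, 0.3
w = rng.normal(size=(T, N))
m_fore = rng.normal(scale=0.1, size=(T, N))
R_fore = rng.uniform(0.5, 2.0, size=(T, N))

ll = spectral_loglik(w, m_fore, R_fore, tau2)
ll_dense = sum(multivariate_normal(mean=m_fore[i], cov=np.diag(R_fore[i] + tau2)).logpdf(w[i])
               for i in range(T))
print(ll, ll_dense)
```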
\subsection{Dimension Reduction and Missing or Non-Gridded Data}\label{dimred} If desired, the total computational cost can be further reduced by using a reduced dimensional Fourier basis with $K \ll N$, $N$ being the number of grid points. This means that one includes only certain frequencies, typically low ones. Once the Fourier transform has been computed, the spectral filtering and sampling algorithms require $O(KT)$ operations. When using the FFT, the excluded frequencies are simply set to zero. Performing the FFT still requires $O(TN\log N)$ operations, though. When the observed data do not lie on a grid or contain missing values, there are two alternative approaches. First, one can use a data augmentation approach \citep{SmRo93} for the missing data. See Section \ref{Fitting} and, for more details, \cite{SiKuSt11}. For irregularly spaced data, one can assign the data to a regular grid and treat the cells with no observations as missing data. The FFT can then be applied to the augmented data, and the algorithms presented above can be used. Alternatively, as is the case in our application, one can include an incidence matrix $\mat H$ that relates the process on the grid to the observation locations. Instead of \eqref{GausObs}, the model is then \begin{equation}\label{Incidence} \vect{w}(t_{i+1})=\mat H \mat{\Phi}\vect{\alpha}(t_{i+1})+\vect \nu(t_{i+1}), ~~\vect \nu(t_{i+1})\sim N(0,\tau^2 \mat{1}_N). \end{equation} However, in the Kalman filter, the term $(\mat H \mat{\Phi})^T\mat H \mat{\Phi}$, used for calculating the filter covariance matrix $\mat R_{t_i|t_{i}}$, is no longer a diagonal matrix. It follows that the Kalman filter does not diagonalize in the spectral space if one uses an incidence matrix $\mat H$. Consequently, one has to use the traditional FFBS, for which the computational cost is $O(K^3T)$. This means that dimension reduction is required to make this approach computationally feasible.
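Dimension reduction with the FFT and the incidence-matrix construction can be sketched as follows (grid size, frequency cutoff, and observation locations are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 32
field = rng.normal(size=(n, n))
coef = np.fft.fft2(field)

# Dimension reduction: keep only low frequencies (|i|, |j| <= cutoff in
# FFT ordering) and set the excluded coefficients to zero before the
# inverse transform.
cutoff = 4
absfreq = np.minimum(np.arange(n), n - np.arange(n))
mask = (absfreq[:, None] <= cutoff) & (absfreq[None, :] <= cutoff)
smooth = np.real(np.fft.ifft2(np.where(mask, coef, 0.0)))

# Incidence matrix H: each irregular observation location is linked to
# the grid cell that contains it, giving one 1 per row.
locs = rng.uniform(size=(5, 2))            # 5 locations in [0, 1]^2
cols = (np.floor(locs[:, 0] * n) * n + np.floor(locs[:, 1] * n)).astype(int)
H = np.zeros((5, n * n))
H[np.arange(5), cols] = 1.0
print(mask.sum(), H.sum(axis=1))
```

Here $(2\cdot\text{cutoff}+1)^2$ coefficients are retained, so the filtering recursions would run on a state of that reduced dimension.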
\subsection{An MCMC Algorithm for Bayesian Inference}\label{MCMC} Based on the algorithms presented above, there are several possible ways of doing statistical inference. For instance, if one adopts a frequentist paradigm, one can numerically maximize the log-likelihood in \eqref{ll}. In the following, we briefly present how Bayesian inference can be done using a Markov chain Monte Carlo (MCMC) algorithm \citep[see][]{GilWRS96, RobCC04, brooks2011handbook}. This algorithm is implemented in the R package \func{spate} \citep{SiKuSt12b} and used in the application in Section \ref{Postproc}. To complete the specification of a Bayesian model, prior distributions for the parameters $\vect \theta=(\rho_0,\sigma^2,\zeta,\rho_1,\gamma,\alpha,\mu_x,\mu_y,\tau^2)^T$ have to be chosen. In general, this choice can depend on the specific application. We present choices for priors that are weakly informative. Based on \cite{Ge06}, we suggest using improper priors for $\sigma^2$ (marginal variance of the innovation) and $\tau^2$ (nugget effect variance) that are uniform on the standard deviation scale $\sigma$ and $\tau$, respectively. Further, the drift parameters $\mu_x$ and $\mu_y$ have uniform priors on $[-0.5,0.5]$, $\alpha$ (direction of anisotropy) has a uniform prior on $[0,\pi/2]$, and $\gamma$ (degree of anisotropy) has a prior on the interval $[0.1,10]$ that is uniform on the log scale. $\gamma$ is restricted to $[0.1,10]$ since stronger anisotropy does not seem reasonable. The range parameters $\rho_0$ and $\rho_1$ of the innovations and the diffusion matrix, respectively, as well as the damping parameter $\zeta$ are assigned improper, locally uniform priors on $\mathbb{R}_+$. Our goal is then to simulate from the joint posterior of the unobservables $[\vect{\theta},\vect{\alpha}|\vect{w}]$, where $\vect{w}$ denotes the set of all observations.
Missing data can be accommodated by using a data augmentation approach, which results in an additional Gibbs step; see Section \ref{Fitting}. Since the latent process $\vect \xi$ is the Fourier transform of the coefficients $\vect \alpha$, $\vect \xi(t_i)=\mat \Phi \vect \alpha(t_i)$, sampling from the posterior of $\vect \alpha$ is, from a methodological point of view, equivalent to sampling from that of $\vect \xi$. In the following, we use the notation $[w|\cdot]$ and $P[w|\cdot]$ to denote conditional distributions and densities, respectively. A straightforward approach would be to sample iteratively from the full conditionals of $\vect{\theta}$ and $\vect{\alpha}$. One could also further divide the latent process $\vect{\alpha}$ into blocks by iteratively sampling $\vect{\alpha}(t_i)$ at each time point. However, $\vect{\theta}$ and $\vect{\alpha}$ can be strongly dependent, which results in slow mixing. This problem is similar to the one observed when doing inference for diffusion models; see, e.g., \cite{RoSt01} and \cite{GoWi08}. It is therefore advisable to sample jointly from $[ \vect{\theta},\vect{\alpha}|\vect w]$ in a Metropolis-Hastings step. Joint sampling of $\vect{\theta}$ and $\vect{\alpha}$ is done as follows. First, a proposal $(\vect{\theta}^*,\vect{\alpha}^*)$ is obtained by sampling $\vect{\theta}^*$ from a Gaussian distribution centered at the current value and with an adaptively estimated proposal covariance matrix. To be more specific, $\rho_0,\sigma^2,\zeta,\rho_1,\gamma$, and $\tau^2$ are sampled on a log scale to ensure that they remain positive. Then, a sample $\vect{\alpha}^*$ from $[\vect{\alpha}|\vect{\theta}^*,\vect w]$ is obtained using the forward filtering backward sampling (FFBS) algorithm \citep{CaKo94, Fr94}.
It can be shown that the acceptance ratio for the joint proposal is \begin{equation}\label{AccRat} \min\left(1,\frac{P[\vect{\theta}^*|\vect{w}]P[\vect{\theta}^*] \rho_0^* \sigma^{2*} \zeta^* \rho_1^* \gamma^* \tau^{2*} }{P[\vect{\theta}^{(i)}|\vect{w}]P[\vect{\theta}^{(i)}]\rho_0^{(i)} \sigma^{2{(i)}} \zeta^{(i)} \rho_1^{(i)} \gamma^{(i)} \tau^{2{(i)}}}\right), \end{equation} where $P[\vect{\theta}|\vect{w}]$ denotes the likelihood of $\vect{\theta}$ given $\vect{w}$, $P[\vect{\theta}]$ the prior, and where $\vect{\theta}^*$ and $\vect{\theta}^{(i)}$ denote the proposed and current values, respectively. The factor $\rho_0\sigma^{2} \zeta \rho_1 \gamma \tau^{2}$ is included since these parameters are sampled on a log scale. We see that the above acceptance ratio does not depend on the latent process $\vect \xi = \mat \Phi \vect{\alpha}$. Thus, the parameters $\vect{\theta}$ can move faster in their parameter space. The value of the likelihood $P[\vect{\theta}|\vect{w}]$ is obtained as a by-product of the Kalman filter in the FFBS. For this random walk Metropolis step, we suggest using an adaptive algorithm \citep{RoRo09}, meaning that the proposal covariance matrices for $\vect{\theta}$ are successively estimated such that an optimal scaling is obtained with an acceptance rate between $0.2$ and $0.3$. See \cite{RoRo01} for more information on optimal scaling for Metropolis-Hastings algorithms. In addition, if the model includes a regression term (see the application in Section \ref{Postproc}), the fixed effects can also be strongly correlated with the random effects $\vect \xi$. It is therefore advisable that the coefficients $\vect b \in \mathbb{R}^p$ of the potential covariates $\vect x(t, \vect s) \in \mathbb{R}^p$ are also sampled together with $\vect{\theta}$ and $\vect{\alpha}$. This can be done by slightly modifying the above algorithm. 
First, the regression coefficients $\vect{b}^*$ are proposed jointly with $\vect{\theta}^*$ in a random walk Metropolis step. Then $\vect{\alpha}^*$ is sampled from $[\vect{\alpha}|\vect{\theta}^*,\vect{b}^*,\vect w]$ analogously using the FFBS. Finally, in the acceptance ratio in \eqref{AccRat}, $P[\vect{\theta}|\vect{w}]$ simply has to be replaced by $P[\vect{\theta},\vect b|\vect{w}]$, which is also a by-product of the Kalman filter. \section{\titleq{Postprocessing Precipitation Forecasts}}\label{Postproc} Numerical weather prediction (NWP) models are capable of producing predictive fields at high spatial and temporal resolutions. Statistical postprocessing, which is the main objective of this application, serves two purposes. First, probabilistic predictions are obtained in cases where only deterministic ones are available. Further, even if ``probabilistic'' forecasts in the form of ensembles \citep{Pa02, GnRa05} are available, they are typically not calibrated, i.e., they are often underdispersed \citep{HaCo97}. The goal of postprocessing is then to obtain calibrated and sharp predictive distributions (see \cite{GnBaRa07} for a definition of calibration and sharpness). In the case of precipitation, the need for postprocessing is particularly strong since, despite their importance, precipitation forecasts are still not as accurate as forecasts for other meteorological quantities \citep{ApEtAl02, StYu07}. Several approaches for postprocessing precipitation forecasts have been proposed, including linear regression \citep{An00}, logistic regression \citep{HaWhWe04}, quantile regression \citep{Br04, FrHe07}, hierarchical models based on a prior climatic distribution \citep{KrMa06}, neural networks \citep{RaVeFe05}, and binning techniques \citep{YuSt06}. \cite{SlRaGn07} propose a two-stage model to postprocess precipitation forecasts. \cite{BeRaGn08} extended the model of \cite{SlRaGn07} by accounting for spatial correlation. 
\cite{KlRaGn11} present a similar model that includes ensemble predictions and accounts for spatial correlation. Except for the last two references, spatial correlation is typically not modeled in postprocessing precipitation forecasts, and none of the aforementioned models explicitly accounts for spatio-temporal dependencies. However, for temporally and spatially highly resolved data, it is necessary to account for correlation in space and time. First, spatio-temporal correlation is important, for instance, for predicting precipitation accumulation over space and time with accurate estimates of precision. Further, it is likely that errors of NWP models exhibit structured behaviour over space and time, including interactions between space and time. The SPDE approach allows for such interactions, as do other approaches which use scientifically-based physical models \citep{WiHo10}. \subsection{Data} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.65\textwidth]{Stations} \end{center} \caption{Locations of grid points at which predictions are obtained ($50 \times 100$ grid of small dots) and observation stations (bold dots). Both axes are in km using the Swiss coordinate system (CH1903).} \label{fig:Stations} \end{figure} The goal is to postprocess precipitation forecasts from an NWP model called COSMO-2, a high-resolution model with a grid spacing of 2.2 km that is run by MeteoSwiss as part of the COnsortium for Small-scale MOdelling (COSMO) \citep[see, e.g.,][]{StEtAl03}. The NWP model produces deterministic forecasts once a day starting at 0:00UTC. Predictions are made for eight consecutive time periods corresponding to 24 h ahead. In the following, let $y_{F}(t,\vect{s})$ denote the forecast of the rainfall sum from time $t-1$ to $t$ at site $\vect s$ made at 0:00UTC of the same day. We consider a rectangular region in northern Switzerland shown in Figure \ref{fig:Stations}. The grid at which predictions are made is of size $50 \times 100$. 
Precipitation is observed at 32 stations over northern Switzerland. Figure \ref{fig:Stations} also shows the locations of the observation stations. In the postprocessing model, the NWP forecasts are used as covariates in a regression term, see \eqref{ppmodel}. We use data for three-hourly rainfall amounts from the beginning of December 2008 till the end of March 2009. To illustrate the observed data, in Figure \ref{fig:RainVsTime}, observed precipitation at one station and the equally weighted areal average precipitation are plotted versus time. We will use the first three months containing 720 time points for fitting, and the last month is left aside for evaluation. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.65\textwidth]{RainVsTime} \end{center} \caption{Precipitation (mm) versus time, for one station and averaged over all stations.} \label{fig:RainVsTime} \end{figure} The NWP model forecasts are deterministic and ensembles are not available in our case. However, the extension to use an ensemble instead of just one member can be easily done. One can include all the ensemble members in the regression part of the model. Or, in the case of exchangeable members, one can use the location and the spread of the ensemble. \subsection{\titleq{Precipitation Model for Postprocessing}}\label{Application} The model presented in the following is a Bayesian hierarchical model (BHM). It uses the SPDE based spatio-temporal Gaussian process $\xi(t, \vect s)$ presented in Section \ref{SpecSpace} at the process level. At the data stage, a mixture model adapted to the nature of precipitation is used. A characteristic feature of precipitation is that its distribution consists of a discrete component, indicating occurrence of precipitation, and a continuous one, determining the amount (see Figure \ref{fig:RainVsTime}). As a consequence, there are two basic statistical modeling approaches. 
The continuous and the discrete part are either modelled separately \citep{CoSt82, Wi99} or together \citep{Be87, Wi90, BaPl92, Hu95, SaGu04}. See, e.g., \cite{SiKuSt11} for a more extensive overview of precipitation models and for further details on the data model used below. Originally, the approach presented in the following goes back to \cite{To58} who analyzed household expenditure on durable goods. For modeling precipitation, \cite{St73} took up this idea and modified it by including a power transformation for the non-zero part so that the model can account for skewness. \cite{SaGu99} develop Bayesian methods for the spatio-temporal analysis of rainfall using this skewed Tobit model, but in contrast to our application they do not explicitly account for temporal correlation and they use a much smaller spatial grid. We denote the cumulative rainfall from time $t-1$ to $t$ at site $\vect s \in \mathbb{R}^{2}$ by $y(t,\vect{s})$ and assume that it depends on a latent Gaussian variable $w(t,\vect{s})$ through \begin{equation}\label{rainrel} \begin{split} y(t,\vect{s})&=0 ,\hspace{1.5cm} \text{if}~w(t,\vect{s}) \leq 0,\\ &=w(t,\vect{s})^{\lambda},\hspace{0.4cm} \text{if}~w(t,\vect{s})>0,\\ \end{split} \end{equation} where $\lambda>0$. A power transformation is needed since precipitation amounts are skewed and do not follow a truncated normal distribution. The latent Gaussian process $w(t,\vect{s})$ is interpreted as a precipitation potential. The mean of the Gaussian process $w(t,\vect{s})$ is assumed to depend linearly on spatio-temporal covariates $\vect{x}(t,\vect{s})\in \mathbb{R}^{k}$. As shown below, this mean term basically consists of the NWP forecasts. Variation that is not explained by the linear term is modeled using the Gaussian process $\xi(t,\vect{s})$ and the unstructured term $\nu(t,\vect{s})$ for microscale variability and measurement errors. The spatio-temporal process $\xi(t,\vect{s})$ has two functions. 
First, it captures systematic errors of the NWP in space and time and can extrapolate them over time. Second, it accounts for structured variability so that the postprocessed forecast is probabilistic and its distribution sharp and calibrated. To be more specific concerning the covariates, similarly to what appears in \cite{BeRaGn08}, we include a transformed variable $y_{F}(t,\vect{s})^{1/\tilde{\lambda}}$ and an indicator variable $\mathbbm{1}_{\{y_{F}(t,\vect{s})=0\}}$ which equals $1$ if $y_{F}(t,\vect{s})=0$ and $0$ otherwise. $\tilde{\lambda}$ is determined by fitting the transformed Tobit model as in \eqref{rainrel} to the marginal distribution of the rain data, ignoring any spatio-temporal correlation. In doing so, we obtain $\tilde{\lambda}\approx 1.4$. $y_{F}(t,\vect{s})^{1/\tilde{\lambda}}$ is centered around zero by subtracting its overall mean $\overline{y}^{1/\tilde{\lambda}}_{F}$ in order to reduce posterior correlations. Thus, $w(t,\vect{s})$ equals \begin{equation}\label{ppmodel} w(t,\vect{s})=b_1\left(y_{F}(t,\vect{s})^{1/\tilde{\lambda}}-\overline{y}_F^{1/\tilde{\lambda}}\right)+b_2\mathbbm{1}_{\{y_{F}(t,\vect{s})=0\}}+\xi(t,\vect{s})+\nu(t,\vect{s}). \end{equation} An intercept is not included since the first Fourier term is constant in space. In our case, including an intercept term results in weak identifiability, which slows down the convergence of the MCMC algorithm used for fitting. Note that in situations where the mean is large, it is advisable to include an intercept, since the coefficient of the first Fourier term is constrained by the joint prior on $\vect \alpha$. Further, unidentifiability is unlikely to be a problem in these cases. Concerning the spatio-temporal process $\xi(t,\vect{s})$, we apply padding. This means that we embed the $50\times 100$ grid in a rectangular $200 \times 200$ grid. 
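The embedding just described amounts to zero-padding the field before applying the FFT; a minimal sketch (the grid sizes match the application, everything else is illustrative):

```python
import numpy as np

# Zero-pad a 50 x 100 field into a 200 x 200 grid before applying the FFT.
# The Fourier basis is periodic, so without padding, points on opposite
# edges of the original domain would be treated as close neighbours.
rng = np.random.default_rng(3)
field = rng.normal(size=(50, 100))   # process values on the original grid

padded = np.zeros((200, 200))
padded[:50, :100] = field            # embed in the larger grid

spec = np.fft.fft2(padded)           # spectral coefficients on the padded grid
```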
A brief prior investigation showed that the range parameters are relatively large in comparison to the spatial domain, and padding is therefore used in order to avoid spurious correlations due to periodicity. The NWP forecasts are not available on the extended $200 \times 200$ domain, which means that, in principle, the process $w(t,\vect{s})$ can only be modeled on the $50\times 100$ grid where the covariates are available. To cope with this, we use an incidence matrix $\mat H$ as in \eqref{Incidence} to relate the process on the $200 \times 200$ grid to the observation stations. As argued in Section \ref{dimred}, this then requires that we use a reduced-dimensional Fourier expansion. I.e., instead of using $N=200^2$ basis functions, we only use $K \ll N$ low-frequency Fourier terms. Since the observation stations are relatively sparse, one might argue that there is no information on high spatial frequencies of the NWP error, and that the high frequencies can be left out. In fact, this hypothesis is confirmed by our analysis, see Figure \ref{fig:CRPS}. Concerning prior distributions, for $\vect \theta=(\rho_0,\sigma^2,\zeta,\rho_1,\gamma,\psi,\mu_x,\mu_y,\tau^2)^T$, we use the priors presented in Section \ref{MCMC}. The parameters $\vect{b}$ and $\lambda$, which are not included in $\vect \theta$, have improper, locally uniform priors on $\mathbb{R}$ and $\mathbb{R}_+$, respectively. In summary, $$P[\vect b, \lambda, \vect \theta] \propto \frac{1}{\sqrt{\sigma^2}\sqrt{\tau^2}\gamma} \mathbbm{1}_{\{-0.5\leq \mu_x,\mu_y\leq 0.5\}} \mathbbm{1}_{\{0\leq \psi \leq \pi/2\}}\mathbbm{1}_{\{\lambda,\rho_0,\rho_1,\zeta,\sigma^2,\tau^2\geq0\}}\mathbbm{1}_{\{0.1\leq \gamma \leq 10\}}.$$ In addition, concerning $\vect \alpha(0)$, we choose to use the innovation distribution specified in \eqref{WhittleSpec} as the initial distribution. 
\subsection{\titleq{Fitting}}\label{Fitting} Markov chain Monte Carlo (MCMC) is used to sample from the posterior distribution $[\vect{b},\lambda,\vect{\theta},\vect{\alpha},\vect{w}|\vect{y}]$, where $\vect{y}$ denotes the set of all observations. We use what \cite{NeRo06} call a Metropolis within-Gibbs algorithm, which alternates between blocked Gibbs \citep{GeSm90} and Metropolis \citep{MeetAl53, Ha70} sampling steps. We use the Metropolis-Hastings algorithm presented in Section \ref{MCMC} with the coefficients $\vect b$ being sampled jointly with $\vect \theta$ and $\vect \alpha$. Due to the non-Gaussian data model, additional Metropolis and Gibbs steps are required for $\lambda$ and for those points of $\vect{w}$ where the observed rainfall amount is zero and where observations are missing. We refer to \cite{SiKuSt11} for more details on the type of data augmentation approach that is used for doing this. We denote by $\vect{w}^{[0]}$ the values of $\vect{w}$ at those points where the observed rainfall is zero, $y(t,\vect s)=0$. Analogously, we define $\vect{w}^{[m]}$ and $\vect{w}^{[+]}$ for the missing values and the values where a positive rainfall amount is observed, $y(t,\vect s)>0$, respectively. The full conditionals of the censored points $\vect{w}^{[0]}$ and the missing points $\vect{w}^{[m]}$ are truncated and regular one-dimensional Gaussian distributions, respectively. Sampling from them is done in Gibbs steps. The transformation parameter $\lambda$ is sampled using a random walk Metropolis step. If a new value is accepted, $\vect{w}^{[+]}$ needs to be updated using the deterministic relation $w(t,\vect s)=y(t,\vect s)^{1/\lambda}$ due to \eqref{rainrel}. From these Gibbs and Metropolis steps, we obtain $\vect{w}$, consisting of simulated and transformed observed data. 
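The Gibbs update for the censored values $\vect{w}^{[0]}$ can be sketched as below: each point is drawn from its one-dimensional full conditional truncated to $(-\infty,0]$. The conditional means and standard deviations are placeholders here; the missing values $\vect{w}^{[m]}$ would be drawn analogously from the corresponding untruncated Gaussians.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_censored_w(mu, sd, rng):
    """Draw w^[0] elementwise from N(mu, sd^2) truncated to (-inf, 0].

    mu and sd stand in for the full-conditional means and standard
    deviations at the grid points with observed zero rainfall.
    """
    b = (0.0 - np.asarray(mu)) / np.asarray(sd)   # standardized upper bound: w <= 0
    return truncnorm.rvs(-np.inf, b, loc=mu, scale=sd, random_state=rng)

rng = np.random.default_rng(2)
w0 = sample_censored_w(mu=np.array([0.5, -1.0]), sd=np.array([1.0, 0.3]), rng=rng)
# All draws respect the censoring constraint w <= 0.
```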
In the second part of the algorithm, we sample $\vect b, \vect{\theta}$, and $\vect{\alpha}$ jointly from $[\vect b, \vect{\theta},\vect{\alpha}|\vect{w}]$ using the algorithm presented in Section \ref{MCMC}, where $\vect{w}$ acts as if it were the observed data. After a burn-in of $5,000$ iterations, we use $100,000$ samples from the Markov chain to characterize the posterior distribution. Convergence is monitored by inspecting trace plots. \subsection{Model Selection and Results} We use a reduced-dimensional approach. The number of Fourier functions is determined based on predictive performance for the $240$ time points that were set aside. We start with models including only low spatial frequencies and successively add higher frequencies. In doing so, we only consider models that have the same resolution in each direction, i.e., we do not consider models that have higher-frequency spatial basis functions in the east-west direction than in the north-south one. In order to assess the performance of the predictions and to choose the number of basis functions to include, we use the continuous ranked probability score (CRPS) \citep{MaWi76}. The CRPS is a strictly proper scoring rule \citep{GnRa07} that assigns a numerical value to probabilistic forecasts and assesses calibration and sharpness simultaneously \citep{GnBaRa07}. It is defined as \begin{equation} CRPS(F,y)=\int_{-\infty}^{\infty}(F(x)-\mathbbm{1}_{\{y \leq x \}})^2dx, \end{equation} where $F$ is the predictive cumulative distribution function, $y$ is the observed realization, and $\mathbbm{1}$ denotes an indicator function. If a sample $y^{(1)},\dots, y^{(m)}$ from $F$ is available, the CRPS can be approximated by \begin{equation} \frac{1}{m}\sum_{i=1}^m|y^{(i)}-y|-\frac{1}{2m^2}\sum_{i,j=1}^m|y^{(i)}-y^{(j)}|. \end{equation} Ideally, one would run the full MCMC algorithm at each time point $t\geq 720$, including all data up to that point, and obtain predictive distributions from this. 
Since this is rather time-consuming, we make the following approximation. We assume that the posterior distribution of the ``primary'' parameters $\vect{\theta}$, $\vect{b}$, and $\lambda$ given $\vect{y}_{1:t}=\{\vect{y}_1,\dots,\vect{y}_{t}\}$ is the same for all $t\geq 720$. That is, we neglect the additional information that the observations in March provide about the primary parameters. Thus, the posterior distributions of the primary parameters are calculated only once, namely on the data set from December 2008 to February 2009. The assumption that the posterior of the primary parameters does not change with additional data may be questionable over longer time periods and when one moves away from the time period from which data is used to obtain the posterior distribution. But since all our data lies in the winter season, we think that this assumption is reasonable. If longer time periods are considered, one could use sliding training windows or model the primary parameters as non-stationary using a temporal evolution. For each time point $t \geq 720$, we make up to $8$-step-ahead forecasts corresponding to 24 hours. I.e., we sample from the predictive distribution of $\vect{y}^*_{t+k}$, $k=1,\dots,8$, given $\vect{y}_{1:t}=\{\vect{y}_1,\dots,\vect{y}_{t}\}$. \begin{figure}[!ht] \begin{center} \includegraphics[width=\textwidth]{CRPS} \end{center} \caption{Comparison of different statistical models using the continuous ranked probability score (CRPS). On the left are CRPSs of station-specific forecasts and on the right are CRPSs of areal forecasts. $K$ denotes the number of basis functions used in the model. ``Sep'' denotes the separable model with $K=29$ Fourier terms. The unit of the CRPS is mm.} \label{fig:CRPS} \end{figure} In Figure \ref{fig:CRPS}, the average CRPS of the pointwise predictions and the areal predictions is shown for the different statistical models. 
In the left plot, the mean is taken over all stations and lead times, whereas the areal version is an average over all lead times. This is done for models with different numbers of basis functions. Models including only a few low-frequency Fourier terms perform poorly; as higher frequencies are added, the CRPS decreases successively. The model including $K=29$ Fourier functions performs best. Beyond this, adding higher frequencies results in lower predictive performance. We interpret these results as indicating that the observation data does not allow for resolving high frequencies in the error term between the forecasted and observed precipitation. Note that high frequencies of the precipitation process itself are accounted for by the forecast $y_F$. For comparison, we also fit a separable model which is obtained by setting $\vect{\mu}=\vect 0$ and $\mat{\Sigma}^{-1}=\mat 0_{2,2}$. Concerning the number of Fourier functions, we again use $K=29$ different Fourier terms. The separable model clearly performs worse than the model with a non-separable covariance structure. Based on these findings, we decided to use the model with $29$ cosine and sine functions. 
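The sample approximation of the CRPS given above can be computed directly from posterior predictive draws; a minimal sketch:

```python
import numpy as np

def crps_sample(samples, y):
    """Sample-based CRPS estimate:
    (1/m) sum_i |y_i - y| - (1 / (2 m^2)) sum_{i,j} |y_i - y_j|.
    Note the pairwise term is O(m^2) in time and memory."""
    samples = np.asarray(samples, dtype=float)
    m = samples.size
    term1 = np.mean(np.abs(samples - y))
    term2 = np.abs(samples[:, None] - samples[None, :]).sum() / (2.0 * m ** 2)
    return term1 - term2

crps_sample([1.0, 1.0, 1.0], 1.0)   # a sample concentrated on the observation scores 0.0
crps_sample([0.0, 2.0], 1.0)        # 0.5
```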
\begin{table} \caption{Posterior medians and 95 $\%$ credible intervals for the SPDE based spatio-temporal model presented in Section \ref{SpecSpace} with $K=29$ Fourier terms.\label{tab:PostQuant}} \centering \begin{tabular}{rrrr} \hline \hline & Median & 2.5 \% & 97.5 \% \\ \hline $\rho_0$ & 25.4 & 18.8 & 32.4 \\ $\sigma^2$ & 0.838 & 0.727 & 0.994 \\ $\zeta$ & 0.00655 & 0.000395 & 0.0156 \\ $\rho_1$ & 48.8 & 42.1 & 57.1 \\ $\gamma$ & 4.33 & 3.34 & 6.01 \\ $\psi$ & 0.557 & 0.49 & 0.617 \\ $\mu_x$ & 6.73 & 0.688 & 12.9 \\ $\mu_y$ & -4.19 & -8.55 & -0.435 \\ $\tau^2$ & 0.307 & 0.288 & 0.327 \\ $b_1$ & 0.448 & 0.414 & 0.481 \\ $b_2$ & -0.422 & -0.5 & -0.344 \\ $\lambda$ & 1.67 & 1.64 & 1.7 \\ \hline \end{tabular} \end{table} Table \ref{tab:PostQuant} shows posterior medians as well as $95 \%$ credible intervals for the different parameters. Note that the range parameters $\rho_0$ and $\rho_1$ as well as the drift parameters $\mu_x$ and $\mu_y$ have been transformed back from the unit $[0,1]$ scale to the original km scale. The posterior median of the variance $\sigma^2$ of the innovations of the spatio-temporal process is around $0.8$. Compared to this, the nugget variance being about $0.3$ is smaller. For the innovation range parameter $\rho_0$, we obtain a value of about $25$ km. And the range parameter $\rho_1$ that controls the amount of diffusion or, in other words, the amount of spatio-temporal interaction, is approximately $49$ km. With $\gamma$ and $\psi$ being around $4$ and $0.6$, respectively, we observe anisotropy in the south-west to north-east direction. This is in line with the orography of the region, as the majority of the grid points lies between two mountain ranges: the Jura to the north-west and the Alps to the south-east. The drift points to the south-east, both parameters being rather small though. Further, the damping parameter $\zeta$ has a posterior median of about $0.01$. 
\begin{table} \caption{Comparison of NWP model and statistically postprocessed forecasts ('Stat PP') using the mean absolute error (MAE). 'Static' denotes the constant forecast obtained by using the most recently observed data. The unit of the MAE is mm.\label{CosmoMAE}} \centering \begin{tabular}{rrrr} \hline \hline & Stat PP & NWP & Static \\ \hline Stationwise & 0.359 & 0.485 & 0.594 \\ Areal & 0.303 & 0.387 & 0.489 \\ \hline \end{tabular} \end{table} Next, we compare the performance of the postprocessed forecasts with the ones from the NWP model. In addition to the temporal cross-validation, we do the following cross-validation in space and time. We first remove six randomly selected stations from the data, fit the latent process to the remaining stations, and evaluate the forecasts at the stations left out. Concerning the primary parameters, i.e., all parameters except the latent process, we use the posterior obtained from the full data including all stations. This is done for computational simplicity and since this posterior is not very sensitive when excluding a few stations (results not reported). Since the NWP produces 8 step ahead predictions once a day, we only consider statistical forecasts starting at 0:00UTC. This is in contrast to the above comparison of the different statistical models for which 8 step ahead predictions were made at all time points and not just once for each day. We use the mean absolute error (MAE) for evaluating the NWP forecasts. In order to be consistent, we also generate point forecasts from the statistical predictive distributions by using medians, and then calculate the MAE for these point forecasts. In Table \ref{CosmoMAE}, the results are reported. For comparison, we also give the score for the static forecast that is obtained by using the most recently observed data. The postprocessed forecasts clearly perform better than the raw NWP forecasts. 
In addition, the postprocessed forecasts have the advantage that they provide probabilistic forecasts quantifying prediction uncertainty. \begin{figure}[!ht] \begin{center} \subfigure[NWP]{ \includegraphics[width=0.45\textwidth]{Pred_Cosmo2} } \subfigure[Median]{ \includegraphics[width=0.45\textwidth]{Pred_Med2} } \subfigure[One sample]{ \includegraphics[width=0.45\textwidth]{Pred_Sample2} } \subfigure[Quartile difference]{ \includegraphics[width=0.45\textwidth]{Pred_Quant_Diff2} } \end{center} \caption{Illustration of postprocessed spatio-temporal precipitation fields for the period $t=761,\dots,768$. The figure shows the NWP forecasts (a), pointwise medians of the predictive distribution (b), one sample from the predictive distribution (c), and the differences between the third quartile and the median of the predictive distribution (d). All quantities are in mm. Note that the scales are different in different figures.} \label{fig:Pred_Illus} \end{figure} The statistical model produces a joint spatio-temporal predictive distribution that is spatially highly resolved. To illustrate the use of the model, we show several quantities in Figure \ref{fig:Pred_Illus}. We consider the time point $t=760$ and calculate predictive distributions over the next 24 hours. Predicted fields for the period $t=761,\dots,768$ from the NWP are shown in the top left corner. On the right of it are pointwise medians obtained from the statistical forecasts. This is a period during which the NWP predicts too much rainfall compared to the observed data (results not shown). The figure shows how the statistical model corrects for this. For illustration, we also show one sample from the predictive distribution. To quantify prediction uncertainty, the difference between the third quartile and the median of the predictive distribution is plotted. These plots again show the growing uncertainty with increasing lead time. 
Other quantities of interest (not shown here) that can easily be obtained include probabilities of precipitation occurrence or various quantiles of the distribution. \section{Conclusion} We present a spatio-temporal model and corresponding efficient algorithms for doing statistical inference for large data sets. Instead of using the covariance function, we propose to use a Gaussian process defined through an SPDE. The SPDE is solved using Fourier functions, and we have given a bound on the precision of the approximate solution. In the spectral space, one can use computationally efficient statistical algorithms whose computational costs grow linearly with the dimension, the total computational costs being dominated by the fast Fourier transform. The space-time Gaussian process defined through the advection-diffusion SPDE has a nonseparable covariance structure and can be physically motivated. The model is applied to the postprocessing of precipitation forecasts for northern Switzerland. The postprocessed forecasts clearly outperform the raw NWP predictions. In addition, they have the advantage that they quantify prediction uncertainty. In our analysis, we considered cumulative rainfall over 3 hours, both in the NWP forecasts and in the station data. It would be interesting to formulate a model which can describe different accumulation periods in a coherent way and is still computationally feasible. Another interesting direction for further research would be to extend the SPDE based model to allow for spatial non-stationarity. For instance, the deformation method of \citet{SaGu92}, where the process is assumed to be stationary in a transformed space and non-stationary in the original domain, might be a promising approach. Since the operators of the SPDE are local, one can define the SPDE on general manifolds and, in particular, on the sphere (see, e.g., \cite{LiLiRu10}). Future research will show to what extent spectral methods can still be used in practice. 
\section*{Acknowledgments} We are grateful to Vanessa Stauch from MeteoSchweiz for providing the data and for inspiring discussions. In addition, we would like to thank Peter Guttorp for interesting comments and discussions and two referees for helpful comments and suggestions. \bibliographystyle{chicago}
Q: Python: Is there a module that would help me transpose a group of points from shape A to shape B?

During a process, my panel samples deform into certain linear and non-linear shapes due to heat. What I want to do is, based on these deformation shapes, estimate how each point in the original panel has moved after the thermal deformation, as shown in the image below. I am collecting reference coordinates of a few points, as shown in the image below. The number of red coordinates between the blue ones is much larger than 1 (~5000). So here is what I need to do, and I have no idea which module I should start with.

1. Create a mesh of coordinates I can measure, and create an approximate shape.
2. Map these coordinates onto the shape created in 1, assuming their deformation is equal within these coordinates.

Are there any modules that support these functions?
// Copyright (c) Microsoft Corporation. All Rights Reserved. Licensed under the MIT License. See License.txt in the project root for license information.
define([
    '../Core/_Global',
    '../Core/_WinRT',
    '../Core/_Base',
    '../Core/_BaseUtils',
    '../Core/_Events',
    '../Core/_WriteProfilerMark',
    '../Promise',
    '../Scheduler',
    '../Utilities/_ElementUtilities',
    '../Utilities/_Hoverable',
    './_LegacyAppBar',
    './NavBar/_Command',
    './NavBar/_Container',
    'require-style!less/styles-navbar',
    'require-style!less/colors-navbar'
], function NavBarInit(_Global, _WinRT, _Base, _BaseUtils, _Events, _WriteProfilerMark, Promise, Scheduler, _ElementUtilities, _Hoverable, _LegacyAppBar, _Command, _Container) {
    "use strict";

    var customLayout = "custom";

    _Base.Namespace.define("WinJS.UI", {
        /// <field>
        /// <summary locid="WinJS.UI.NavBar">
        /// Displays navigation commands in a toolbar that the user can open or close.
        /// </summary>
        /// <compatibleWith platform="Windows" minVersion="8.1"/>
        /// </field>
        /// <icon src="ui_winjs.ui.navbar.12x12.png" width="12" height="12" />
        /// <icon src="ui_winjs.ui.navbar.16x16.png" width="16" height="16" />
        /// <htmlSnippet supportsContent="true"><![CDATA[<div data-win-control="WinJS.UI.NavBar">
        /// <div data-win-control="WinJS.UI.NavBarContainer">
        /// <div data-win-control="WinJS.UI.NavBarCommand" data-win-options="{location:'/pages/home/home.html',label:'Home',icon:WinJS.UI.AppBarIcon.home}"></div>
        /// </div>
        /// </div>]]></htmlSnippet>
        /// <event name="beforeopen" locid="WinJS.UI.NavBar_e:beforeopen">Raised just before opening the NavBar.</event>
        /// <event name="afteropen" locid="WinJS.UI.NavBar_e:afteropen">Raised immediately after an NavBar is fully opened.</event>
        /// <event name="beforeclose" locid="WinJS.UI.NavBar_e:beforeclose">Raised just before closing the NavBar.</event>
        /// <event name="afterclose" locid="WinJS.UI.NavBar_e:afterclose">Raised immediately after the NavBar is fully closed.</event>
        /// <event name="childrenprocessed" locid="WinJS.UI.NavBar_e:childrenprocessed">Fired when children of NavBar control have been processed from a WinJS.UI.processAll call.</event>
        /// <part name="navbar" class="win-navbar" locid="WinJS.UI.NavBar_part:navbar">Styles the entire NavBar.</part>
        /// <resource type="javascript" src="//$(TARGET_DESTINATION)/js/WinJS.js" shared="true" />
        /// <resource type="css" src="//$(TARGET_DESTINATION)/css/ui-dark.css" shared="true" />
        NavBar: _Base.Namespace._lazy(function () {
            var childrenProcessedEventName = "childrenprocessed";
            var createEvent = _Events._createEventProperty;

            var NavBar = _Base.Class.derive(_LegacyAppBar._LegacyAppBar, function NavBar_ctor(element, options) {
                /// <signature helpKeyword="WinJS.UI.NavBar.NavBar">
                /// <summary locid="WinJS.UI.NavBar.constructor">
                /// Creates a new NavBar.
                /// </summary>
                /// <param name="element" type="HTMLElement" domElement="true" locid="WinJS.UI.NavBar.constructor_p:element">
                /// The DOM element that will host the new NavBar control.
                /// </param>
                /// <param name="options" type="Object" locid="WinJS.UI.NavBar.constructor_p:options">
                /// An object that contains one or more property/value pairs to apply to the new control. Each property of the options object corresponds to one of the control's
                /// properties or events.
                /// </param>
                /// <returns type="WinJS.UI.NavBar" locid="WinJS.UI.NavBar.constructor_returnValue">
                /// The new NavBar control.
                /// </returns>
                /// <compatibleWith platform="Windows" minVersion="8.1"/>
                /// </signature>

                options = options || {};
                // Shallow copy object so we can modify it.
                options = _BaseUtils._shallowCopy(options);

                // Default to Placement = Top and Layout = Custom
                options.placement = options.placement || "top";
                options.layout = customLayout;
                options.closedDisplayMode = options.closedDisplayMode || "minimal";

                _LegacyAppBar._LegacyAppBar.call(this, element, options);
                this._element.addEventListener("beforeopen", this._handleBeforeShow.bind(this));

                _ElementUtilities.addClass(this.element, NavBar._ClassName.navbar);

                if (_WinRT.Windows.ApplicationModel.DesignMode.designModeEnabled) {
                    this._processChildren();
                } else {
                    Scheduler.schedule(this._processChildren.bind(this), Scheduler.Priority.idle, null, "WinJS.UI.NavBar.processChildren");
                }
            }, {
                // Restrict values of closedDisplayMode to 'none' or 'minimal'
                /// <field type="String" defaultValue="minimal" locid="WinJS.UI.NavBar.closedDisplayMode" helpKeyword="WinJS.UI.NavBar.closedDisplayMode" isAdvanced="true">
                /// Gets/Sets how NavBar will display itself while hidden. Values are "none" and "minimal".
                /// </field>
                closedDisplayMode: {
                    get: function () {
                        return this._closedDisplayMode;
                    },
                    set: function (value) {
                        var newValue = (value === "none" ? "none" : "minimal");
                        Object.getOwnPropertyDescriptor(_LegacyAppBar._LegacyAppBar.prototype, "closedDisplayMode").set.call(this, newValue);
                        this._closedDisplayMode = newValue;
                    },
                },

                /// <field type="Function" locid="WinJS.UI.NavBar.onchildrenprocessed" helpKeyword="WinJS.UI.NavBar.onchildrenprocessed">
                /// Raised when children of NavBar control have been processed by a WinJS.UI.processAll call.
                /// <compatibleWith platform="Windows" minVersion="8.1"/>
                /// </field>
                onchildrenprocessed: createEvent(childrenProcessedEventName),

                _processChildren: function NavBar_processChildren() {
                    // The NavBar control schedules processAll on its children at idle priority to avoid hurting startup
                    // performance. If the NavBar is shown before the scheduler gets to the idle job, the NavBar will
                    // immediately call processAll on its children. If your app needs the children to be processed before
                    // the scheduled job executes, you may call processChildren to force the processAll call.
                    if (!this._processed) {
                        this._processed = true;
                        this._writeProfilerMark("processChildren,StartTM");
                        var that = this;
                        var processed = Promise.as();
                        if (this._processors) {
                            this._processors.forEach(function (processAll) {
                                for (var i = 0, len = that.element.children.length; i < len; i++) {
                                    (function (child) {
                                        processed = processed.then(function () {
                                            processAll(child);
                                        });
                                    }(that.element.children[i]));
                                }
                            });
                        }
                        return processed.then(
                            function () {
                                that._writeProfilerMark("processChildren,StopTM");
                                that._fireEvent(NavBar._EventName.childrenProcessed);
                            },
                            function () {
                                that._writeProfilerMark("processChildren,StopTM");
                                that._fireEvent(NavBar._EventName.childrenProcessed);
                            }
                        );
                    }
                    return Promise.wrap();
                },

                _show: function NavBar_show() {
                    // Override _show to call processChildren first.
                    if (this.disabled) {
                        return;
                    }
                    var that = this;
                    this._processChildren().then(function () {
                        _LegacyAppBar._LegacyAppBar.prototype._show.call(that);
                    });
                },

                _handleBeforeShow: function NavBar_handleBeforeShow() {
                    // Navbar needs to ensure its elements to have their correct height and width after _LegacyAppBar changes display="none"
                    // to display="" and _LegacyAppBar needs the elements to have their final height before it measures its own element height
                    // to do the slide in animation over the correct amount of pixels.
if (this._disposed) { return; } var navbarcontainerEls = this.element.querySelectorAll('.win-navbarcontainer'); for (var i = 0; i < navbarcontainerEls.length; i++) { navbarcontainerEls[i].winControl.forceLayout(); } }, _fireEvent: function NavBar_fireEvent(type, detail) { var event = _Global.document.createEvent("CustomEvent"); event.initCustomEvent(type, true, false, detail || {}); this.element.dispatchEvent(event); }, _writeProfilerMark: function NavBar_writeProfilerMark(text) { _WriteProfilerMark("WinJS.UI.NavBar:" + this._id + ":" + text); } }, { _ClassName: { navbar: "win-navbar" }, _EventName: { childrenProcessed: childrenProcessedEventName }, isDeclarativeControlContainer: _BaseUtils.markSupportedForProcessing(function (navbar, callback) { if (navbar._processed) { for (var i = 0, len = navbar.element.children.length; i < len; i++) { callback(navbar.element.children[i]); } } else { navbar._processors = navbar._processors || []; navbar._processors.push(callback); } }) }); return NavBar; }) }); });
Q: AWS SAM Local and docker-lambda: keep getting Unable to import module 'lambda_function': No module named 'lambda_function'

Edit 2: The root cause was I had several DOCKER environment variables set which were causing my function invocations to re-route to a remote Docker host and not hit SAM Local. Once I unset those, the functions started running.

Edit: I cloned docker-lambda and tried running one of their examples and get the same error.

    docker run --rm -v "$PWD":/var/task lambci/lambda:python3.6
    START RequestId: 73a433fc-1d8a-4cdb-a66d-61bd667e13ba Version: $LATEST
    Unable to import module 'lambda_function': No module named 'lambda_function'
    END RequestId: 73a433fc-1d8a-4cdb-a66d-61bd667e13ba
    REPORT RequestId: 73a433fc-1d8a-4cdb-a66d-61bd667e13ba Duration: 1 ms Billed Duration: 100 ms Memory Size: 1536 MB Max Memory Used: 15 MB
    {"errorMessage": "Unable to import module 'lambda_function'"}

I'm trying to set up SAM Local with a Python lambda function and keep getting frustrated by the module import error in the title. My template.yaml looks like this:

    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31
    Resources:
      ProposalsTable:
        Type: "AWS::Serverless::SimpleTable"
      AddProposal:
        Type: "AWS::Serverless::Function"
        Properties:
          Handler: lambda_function.lambda_handler
          Runtime: python3.6
          Policies: AmazonDynamoDBFullAccess
          Environment:
            Variables:
              TABLE_NAME: !Ref ProposalsTable
          Events:
            Vote:
              Type: Api
              Properties:
                Path: /proposals
                Method: get

I have a lambda_function.py in the same folder as the template.yaml. I run:

    sam local start-api

and it starts up fine:

    Mounting lambda_function.lambda_handler (python3.6) at http://127.0.0.1:3000/proposals [GET]

Then I do:

    curl http://127.0.0.1:3000/proposals

Then on the "server" it shows:

    Unable to import module 'lambda_function': No module named 'lambda_function'
    Function returned an invalid response (must include one of: body, headers or statusCode in the response object): %!s(<nil>)

I've tried all different ways of naming the file (e.g. putting it inside a folder with an __init__.py). I googled and read a dozen or more threads, but most people are talking about deploying to the real AWS Lambda; there's not a lot on SAM Local. I wonder if it's something in my environment. The node.js sample function here fails with a timeout: https://github.com/awslabs/aws-sam-local/tree/develop/samples/hello-world/node

    2018/01/04 15:20:41 Invoking index.handler (nodejs6.10)
    2018/01/04 15:20:41 Mounting /Users/me/code/sam-local-prototype as /var/task:ro inside runtime container
    2018/01/04 15:20:46 Function index.handler timed out after 3 seconds

Ideas?

A: Aaron,

Check out this video: https://www.youtube.com/watch?v=xaCbIFH_d9k

You have listed:

* Handler: lambda_function.lambda_handler
* Is your file called lambda_function?
* Where is this file located?

This matters because the element "CodeUri:" is where you specify the path to the file, and I don't see that element in your template. Or, if you zip up the project, you can specify the file name here. For example, I zip my project up, then within the template.yml I specify my CodeUri to point to the zip as such: CodeUri: lambda.zip. I hope this helps.

A: Regarding the Unable to import module 'lambda_function': No module named 'lambda_function' error:

In the repo link you provided, within the template.yaml the Handler key reads: Handler: index.handler. This corresponds to the index.js file that contains the function named handler, written as exports.handler = () => {}.

If you are re-writing this in Python, your template.yaml Handler key will need to read Handler: {file_name}.{function_name}. If the file carrying your lambda function is called lambda.py and the function within it is def lambda_handler, you will need to write your handler key in the .yaml as Handler: lambda.lambda_handler. I remember having to modify the key beneath Resources, but make sure your file name and function name are accurate and try again.

I just copied the code out of the repo and ran it successfully. Do you have docker installed correctly? Or did you have sam local api started when you tried to run the test code from the repo? If it attempted to connect on the same port, you may get the timeout as well.

    2018/01/16 13:39:14 Successfully parsed template.yaml
    2018/01/16 13:39:14 Connected to Docker 1.35
    2018/01/16 13:39:14 Runtime image missing, will pull....
    2018/01/16 13:39:14 Fetching lambci/lambda:nodejs6.10 image for nodejs6.10 runtime...
    nodejs6.10: Pulling from lambci/lambda
    f338a32fa56c: Already exists
    4926b20b634f: Already exists
    8040e979acbc: Pull complete
    160b6838355f: Pull complete
    Digest: sha256:e34f92bc0df0cf4a8ba560c6c7bf201183d5f6773ddf44440a97295486906744
    Status: Downloaded newer image for lambci/lambda:nodejs6.10
    2018/01/16 13:39:28 Invoking index.handler (nodejs6.10)
    2018/01/16 13:39:28 Mounting /Users/me/sam_local_test as /var/task:ro inside runtime container
    START RequestId: 8970d865-2d59-1d54-0825-d56f1fd035f7 Version: $LATEST
    2018-01-16T20:39:32.315Z 8970d865-2d59-1d54-0825-d56f1fd035f7 LOG: Name is Bob
    END RequestId: 8970d865-2d59-1d54-0825-d56f1fd035f7
    REPORT RequestId: 8970d865-2d59-1d54-0825-d56f1fd035f7 Duration: 22.02 ms Billed Duration: 0 ms Memory Size: 0 MB Max Memory Used: 28 MB
    "Hello Bob"
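To tie the two points above together, here is a minimal sketch of what a lambda_function.py matching Handler: lambda_function.lambda_handler could contain. It also returns the API-Gateway-style response shape (statusCode/headers/body) that SAM Local checks for, which addresses the "Function returned an invalid response" message in the question; the empty "proposals" payload is a placeholder, not anything from the original project.

```python
import json


def lambda_handler(event, context):
    # SAM Local / API Gateway proxy integration expects the response to be a
    # dict containing at least one of statusCode, headers, or body. Returning
    # a bare string triggers the "invalid response" error seen above.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"proposals": []}),
    }
```

With this file next to template.yaml (or in the directory pointed to by CodeUri), `curl http://127.0.0.1:3000/proposals` should return the JSON body instead of the import error.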
The 59th British Academy Film Awards, given by the British Academy of Film and Television Arts, took place on 19 February 2006 and honoured the best films of 2005.

Brokeback Mountain won Best Film, Best Director for Ang Lee, Best Supporting Actor for Jake Gyllenhaal, and Best Adapted Screenplay. Philip Seymour Hoffman won Best Actor for Capote and Reese Witherspoon won Best Actress for Walk the Line. The Constant Gardener received the most nominations with 10 but won only one award: Best Editing for Claire Simpson. Wallace & Gromit: The Curse of the Were-Rabbit, directed by Nick Park and Steve Box, was voted Outstanding British Film of 2005.

Winners and nominees

Statistics

See also

78th Academy Awards
31st César Awards
11th Critics' Choice Awards
58th Directors Guild of America Awards
19th European Film Awards
63rd Golden Globe Awards
26th Golden Raspberry Awards
20th Goya Awards
21st Independent Spirit Awards
11th Lumières Awards
17th Producers Guild of America Awards
10th Satellite Awards
32nd Saturn Awards
12th Screen Actors Guild Awards
58th Writers Guild of America Awards

Notes

References

External links
\section{Introduction} Monolayer transition metal dichalcogenides (TMDs) uniquely combine optical properties and phenomena such as strong light-matter interaction \cite{wurstbauer_lightmatter_2017}, large spin-orbit coupling \cite{zhu_giant_2011}, valley-contrasting spin \cite{xiao_coupled_2012}, quantum light emission \cite{tonndorf_single-photon_2015, koperski_single_2015, chakraborty_voltage-controlled_2015, srivastava_optically_2015, he_single_2015, palacios-berraquero_large-scale_2017, branny_deterministic_2017, klein_engineering_2021}, and large exciton binding energies (small Bohr radii) that allow operation at room temperature \cite{chernikov_exciton_2014, wang_colloquium_2018}. These materials are a promising platform for valleytronics \cite{jones_optical_2013, xu_spin_2014}, quantum photonics \cite{aharonovich_solid-state_2016, atature_material_2018}, and the investigation of few- and many-body physics \cite{mak_tightly_2013, barbone_charge-tuneable_2018, li_revealing_2018, ye_efficient_2018, chen_coulomb-bound_2018, hao_neutral_2017, kremser_discrete_2020, klein_controlling_2021} and strongly correlated phases \cite{costanzo_gate-induced_2016, ye_superconducting_2012}. Furthermore, the ability to fabricate heterostructures due to their van der Waals character allows the investigation of dipolar interlayer excitons \cite{geim_van_2013, rivera_observation_2015, seyler_signatures_2019}. Large excitonic oscillator strengths and their two-dimensional geometry enable efficient and simple integration of TMDs into photonic structures \cite{tonndorf_-chip_2017, youngblood_integration_2016}, forming hybrid systems that support novel regimes of light-matter interactions, such as non-linear and non-local coupling. 
These interactions can be engineered well below the diffraction limit through an interplay between localized surface plasmon polaritons (LSPPs) and light emitters \cite{sriram_hybridizing_2020, blauth_enhanced_2017, blauth_coupling_2018}, giving access to the extreme lengthscales of the order of the exciton Bohr radius \cite{stier_exciton_2016, stier_magnetooptics_2018, goryca_revealing_2019}. In the proximity of metallic nanoparticles that host LSPPs, the photonic density of modes available to a light emitter can be strongly enhanced in two different light-matter coupling regimes: weak coupling, for which the light-matter coupling is perturbative, and strong coupling, for which it becomes coherent \cite{baranov_novel_2018, pelton_strong_2019}. Weakly-coupled systems exhibit modified spontaneous emission rates of the emitter, physics that is captured by the Purcell effect \cite{akselrod_probing_2014}. In strongly-coupled systems, on the other hand, polaritons having mixed light and matter character emerge \cite{chikkaraddy_single-molecule_2016}. In between, the intermediate-coupling regime displays Fano-shaped scattering spectra, indicative of the interplay between pronounced light-matter coupling and significant damping \cite{miroshnichenko_fano_2010, lee_fano_2015, abid_temperature-dependent_2017, wang_tunable_2018, sun_light-emitting_2018}. Recently, plasmonic nanocavities realized using high-quality chemically synthesized metallic nanoparticles have been shown to result in strong and weak light-matter coupling with free excitons in TMDs \cite{kleemann_strong-coupling_2017, wen_room-temperature_2017, zheng_manipulating_2017, geisler_single-crystalline_2019, qin_revealing_2020}. However, the relative positioning of different nanophotonic elements is difficult to achieve using such approaches, thereby hindering routes towards integrated technologies.
Strong coupling has also been demonstrated by coupling emitters to the delocalized collective modes of metallic nanoparticle arrays \cite{lee_fano_2015, wang_coherent_2016, liu_strong_2016}. Lithographically defined nanoresonators that host localized plasmonic modes, such as dielectric \cite{sortino_enhanced_2019} and plasmonic nanoscale antennas \cite{yan_strong_2020}, provide maximum flexibility in design, relative position and levels of integration. Furthermore, they can be individually optically probed to peer through ensemble broadening and access the light-matter coupling at the level of a few excitons. Besides coupling to free excitons in such structures, the emission rate of localized emitters can also be strongly enhanced \cite{luo_deterministic_2018, cai_radiative_2018}, making them highly relevant for photonic quantum technologies. We demonstrate tunable light-matter interaction in an $\mathrm{MoSe_2}$ monolayer proximal to lithographically defined plasmonic antennas having arm sizes centered around $90$ nm and $\leq10$ nm feed gaps. We tune the strength of the light-matter coupling via: (i) control of nanoantenna design to spectrally detune the dipolar mode from the excitonic transition and (ii) polarization control to excite a superposition of single-particle plasmon modes and coupled dipolar modes. Firstly, we designed an array of $\mathrm{MoSe_2}$-coupled dipole nanoantennas using finite-difference time-domain (FDTD) calculations. All antennas were widely separated to allow them to be individually optically addressed. A large number ($\geq100$) of nanoantennas were probed using differential reflectivity spectroscopy, each having different exciton-plasmon detunings. An apparent avoided crossing is observed between the exciton and the dipolar mode of the antenna, indicative of strong coupling. However, careful analysis using a coupled mode model shows that our system falls into the intermediate-coupling regime.
We demonstrate active control of the coupling by switching the hot-spot within the nanoantenna feed gap on and off via the incident excitation polarization. Finally, in low-temperature photoluminescence spectroscopy, we observe a redshift of free excitons and the emergence of localized excitons at the position of nanoresonators. Our findings set a solid foundation for realizing 2D materials-based ultrafast non-linear photonic devices. \section{Results and Discussion} To achieve the coupling between the $A$-exciton in monolayer $\mathrm{MoSe_2}$ and the nanophotonic resonator, it is necessary to ensure the spectral overlap of the exciton and plasmon resonances, as well as the spatial overlap of the optically active material and the active part of the resonator. Spectral overlap is achieved through engineering the size of the antenna during fabrication. As schematically presented in Figure \ref{fig:Figure1}a, we use dipole bowtie nanoantennas \cite{novotny_antennas_2011, kinkhabwala_large_2009} (see \hyperlink{page.10}{SI} Section \hyperlink{page.10}{S1}), consisting of two lithographically-defined triangular gold nanoparticles brought to proximity with a few-nm feed gap between them. This geometry results in large localized electric field intensity enhancements, due to the near-field dipolar coupling and lightning rod effect \cite{su_interparticle_2003, liao_lightning_1982}, and a single-mode selection through different excitation polarizations \cite{schraml_optical_2014}. To reduce the large parameter space and simplify the spectral control, we nominally fixed all the parameters to maximize the ratio between quality factor $Q$ and mode volume $V_\mathrm{m}$ as a figure of merit for the strength of the light-matter coupling (see \hyperlink{page.10}{SI} Section \hyperlink{page.10}{S2}), except for the size of a single nanoantenna arm $d_{\mathrm{size}}$, which was used as a free parameter for tuning the plasmonic resonance \cite{schraml_optical_2014}. 
As the resonator material, we used gold with thickness $d_{\mathrm{Au}} = \SI{35}{\nano\meter}$ on top of the $d_{\mathrm{Ti}} = \SI{5}{\nano\meter}$ titanium layer on the SiO$_2$ substrate. The radius of curvature at the triangle corners is resolution limited to as small as $d_{\mathrm{tip}} = \SI{10}{\nano\meter}$. We fixed the feed-gap size to be $d_{\mathrm{gap}} = \SI{10\pm2}{\nano\meter}$, to achieve the lowest $V_\mathrm{m}$ within fabrication limits. We performed finite-difference time-domain (FDTD) calculations (see \hyperref[sec:Methods]{Methods}) to obtain the dipole resonance frequency as a function of nanoantenna size, with the excitation being polarized along the long nanoantenna axis. Typical results are presented in Figure \ref{fig:Figure1}b (upper panel) that shows the simulated scattering cross-section as a function of energy and $d_{\mathrm{size}}$. This data shows a very large redshift, $\geq 0.5$ eV upon increasing $d_{\mathrm{size}}$ from $70-140$ nm spanning the range of optical activity of MoSe$_2$. To tailor the plasmonic resonance to match the neutral exciton energy in monolayer $\mathrm{MoSe_2}$ at $\SI{1.57}{\electronvolt}$, simulations of bare plasmonic nanoantennas indicate that the optimum size is $d_\mathrm{size} = \SI{110}{\nano\meter}$. However, introducing the MoSe$_2$ monolayer on top of the bowtie antennas was found to result in a large redshift of the resonance by $\SI{\sim200}{\milli\electronvolt}$ (see Figure \hyperlink{page.10}{S5}b) and, therefore, we compensated by choosing $d_\mathrm{size} = \SI{90}{\nano\meter}$ to achieve good spectral coupling of the nanoantenna mode. A typical calculated scattering cross section is presented in Figure \ref{fig:Figure1}b (lower panel) for the optimized antenna size. 
The MoSe$_2$ monolayer was positioned on top of the nanoantennas and FDTD calculations were performed for a $d_{\mathrm{size}} = \SI{90}{\nano\meter}$ titanium-gold nanoantenna covered by $\SI{0.7}{\nano\meter}$ thick monolayer. The resulting side- and top-views of the electromagnetic density are presented in Figure \ref{fig:Figure1}c, revealing that the strongest field enhancement occurs in the feed gap at the position of the TMD (dotted line). Maximum intensity enhancements up to $\sim$1$0^3$ are expected for this optimized geometry. Comparison to the similar intensity distribution for the bare nanoantenna in Figure \hyperlink{page.10}{S1} shows that the flake on top "pulls" the electromagnetic hot-spot from the bottom of the feed gap to the top to maximize the spatial coupling with the MoSe$_2$ monolayer. \begin{figure*}[!ht] \includegraphics[width=1\textwidth]{Figure1.pdf} \caption{\textbf{Geometry and optical response of the $\mathrm{MoSe_2}$-coupled dipole nanoantenna.} \textbf{(a)} Schematic representation of a dipole nanoantenna with highlighted geometrical parameters. \textbf{(b)} Calculated normalized scattering cross section data for dipole nanoantennas with different sizes (top panel) and a highlighted scattering cross section spectrum for $d_{\mathrm{size}} = \SI{90}{\nano\meter}$ (bottom panel) obtained from FDTD calculations. \textbf{(c)} Side-view (top) and top-view (bottom) of the spatial distribution of light intensity enhancement of a $d_{\mathrm{size}} = \SI{90}{\nano\meter}$ sized $\mathrm{MoSe_2}$-coupled dipole nanoantenna calculated via FDTD methods. The dotted line in the top panel represents the $\mathrm{MoSe_2}$ monolayer. Scale bar is $\SI{20}{\nano\meter}$. The side-view contains the long axis of the nanoantenna, while the top-view plane is at the monolayer height. \textbf{(d)} Contrast micrograph of the nanoantenna array covered by the monolayer $\mathrm{MoSe_2}$. The distance between neighboring nanoantennas is $\SI{2}{\micro\meter}$. 
\textbf{(e)} AFM images of a bare nanoantenna (blue-coded) and an $\mathrm{MoSe_2}$-coupled dipole nanoantenna (red-coded). Dotted lines are at the positions of height cross-sections, depicted at the bottom with respective colors. \textbf{(f)} Differential reflectance spectra recorded from a bare nanoantenna (blue), the bare $\mathrm{MoSe_2}$ monolayer (gray) and an $\mathrm{MoSe_2}$-coupled dipole nanoantenna (red) with corresponding fits.} \label{fig:Figure1} \end{figure*} To realize $\mathrm{MoSe_2}$-coupled dipole nanoantennas, we fabricated arrays of dipole nanoantennas using electron beam lithography (see \hyperref[sec:Methods]{Methods}) with a separation of $\SI{2}{\micro\meter}$ to facilitate optical addressing using confocal microscopy. Simulated scattering cross-sections and measured differential reflectance spectra of individual plasmonic nanoantennas (see \hyperlink{page.10}{SI} Section \hyperlink{page.10}{S3}) are in excellent agreement, which allows us to use simulations as a predictive tool to design the desired spectral response of the nanoantenna. The measured $Q$ of the realized structures is $Q = 4.5\pm0.5$. Subsequently, we used dry viscoelastic stamping methods \cite{castellanos-gomez_deterministic_2014} to cover the nanoantenna array with an $\mathrm{MoSe_2}$ monolayer, as depicted in the contrast micrograph in Figure \ref{fig:Figure1}d, to finally obtain $\mathrm{MoSe_2}$-coupled dipole nanoantennas. As shown in Figure \ref{fig:Figure1}e, atomic force microscopy (AFM) was used to explore how the monolayer flake covers the nanostructure, where blue-coded and red-coded three-dimensional representations in the figure show a bare nanoantenna and $\mathrm{MoSe_2}$-coupled dipole nanoantenna topographies, respectively. Corresponding cross-sections at the dotted lines shown in the figure represent height profiles of the nanoantenna without (blue) and with (red) the TMD monolayer on top, confirming the spatial overlap of the flake and the hot-spot. 
To optically characterize our nanostructures (see \hyperref[sec:Methods]{Methods}), and thus probe the underlying light-matter interaction mechanisms, we performed differential reflectance spectroscopy. A typical spectrum recorded from a bare nanoantenna, shown in Figure \ref{fig:Figure1}f (blue), exhibits a broad plasmonic resonance centered at $\SI{\sim1.68}{\electronvolt}$. Due to the asymmetric shape, the spectrum is fitted with a double Lorentzian curve, which may be the result of a small size difference in the two nanoantenna arms. Away from the antennas, the bare flake shows the typical signature of $A$- ($\SI{\sim1.57}{\electronvolt}$) and $B$-excitons ($\SI{\sim1.78}{\electronvolt}$) of the $\mathrm{MoSe_2}$ monolayer at room temperature \cite{li_measurement_2014}, as shown in Figure \ref{fig:Figure1}f (gray). The spectral mismatch between the $A$-exciton and the plasmon resonance of the bare antenna is intentional since, as discussed above, the introduction of a monolayer on top of the antenna redshifts the resonance (see Figure \hyperlink{page.10}{S5}b). The optical response of the $\mathrm{MoSe_2}$-coupled dipole nanoantenna, depicted in red in Figure \ref{fig:Figure1}f, consists of two peaks: the low-energy (LE) peak and high-energy (HE) peak separated by a dip at the $A$-exciton position. \begin{figure*}[!ht] \includegraphics[width=1\textwidth]{Figure2.pdf} \caption{\textbf{Probing the nature of the light-matter interaction in $\mathrm{MoSe_2}$-coupled dipole nanoantennas.} \textbf{(a)} Differential reflectance (left) and corresponding calculated scattering cross-section spectra (right) of differently detuned nanoantennas in the range from ${\sim}\SI{-130}{\milli\electronvolt}$ up to $\SI{\sim0}{\milli\electronvolt}$. Differential reflectance spectra are accompanied by the COM fits using Eq. (\ref{equ:ABSscat}). The blue band denotes the $A$-exciton spectral position in an $\mathrm{MoSe_2}$ monolayer at room temperature. 
\textbf{(b)} Normalized differential reflectance (left) and calculated normalized scattering cross-section (right) of ${\sim}100$ $\mathrm{MoSe_2}$-coupled dipole nanoantennas ordered by increased detuning $\delta$. Blue and red circles correspond to the HE- and LE-peaks, respectively. \textbf{(c)} The LE-peak ($E_-$ in red) and HE-peak ($E_+$ in blue) energy position as a function of the detuning with corresponding fits according to the Eq. (\ref{equ:COM2}) (left panel). Exciton and plasmon energies extracted as fit parameters from Eq. (\ref{equ:ABSscat}) (right panel).} \label{fig:Figure2} \end{figure*} To understand the nature of the optical response of the hybrid nanoantenna-MoSe$_2$ structure, we analyze the data using a coupled oscillator model (COM) \cite{wu_quantum-dot-induced_2010, pelton_strong_2019}. The COM considers plasmonic and excitonic modes as coupled damped harmonic oscillators driven by an external electric field. Starting from the equations of motion of such a system, we derive the resulting scattering cross-section $\sigma_{\mathrm{scat}}$ of the coupled exciton-plasmon system as a function of photon energy $E$. It is given by \cite{wu_quantum-dot-induced_2010}: \begin{equation} \resizebox{.9\hsize}{!}{$ \sigma_{\mathrm{scat}}(E)=AE^4\Bigg|\frac{E^2-E_{\mathrm{A}}^2+iE\Gamma_{\mathrm{A}}}{(E^2-E_{\mathrm{A}}^2+iE\Gamma_{\mathrm{A}})(E^2-E_{\mathrm{plasmon}}^2+iE\Gamma_{\mathrm{plasmon}}) - 4E^2g^2}\Bigg|^2$} \label{equ:ABSscat} \end{equation} where $E_A$ and $E_\mathrm{plasmon}$ are the uncoupled exciton and plasmon resonance energies, respectively, $\Gamma_{\mathrm{A}}$ and $\Gamma_{\mathrm{plasmon}}$ are the corresponding dampings, $g$ is the coupling constant of the two oscillators, and $A$ is an arbitrary scaling factor. For simplicity, we neglected the effect of the $B$-exciton, since it is far away from the plasmonic resonance. 
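As a numerical illustration, Eq. (\ref{equ:ABSscat}) can be evaluated directly. The short sketch below computes the scattering cross-section for a zero-detuned antenna; the damping values $\Gamma_{\mathrm{A}} = 50$ meV and $\Gamma_{\mathrm{plasmon}} = 130$ meV are illustrative assumptions (chosen only to be consistent with the critical coupling $g_c = 90$ meV quoted in the analysis), not fitted parameters from the paper, and it reproduces the characteristic dip at the exciton energy:

```python
import numpy as np

def sigma_scat(E, E_A, gamma_A, E_pl, gamma_pl, g, A=1.0):
    # Coupled-oscillator scattering cross-section of Eq. (1); all energies in eV.
    chi_A = E**2 - E_A**2 + 1j * E * gamma_A          # exciton response
    chi_pl = E**2 - E_pl**2 + 1j * E * gamma_pl       # plasmon response
    return A * E**4 * np.abs(chi_A / (chi_A * chi_pl - 4 * E**2 * g**2))**2

# Zero-detuned antenna: E_plasmon = E_A = 1.57 eV, coupling g = 46 meV.
# Damping values below are assumptions for illustration only.
E = np.linspace(1.3, 1.9, 2001)
spec = sigma_scat(E, E_A=1.57, gamma_A=0.050, E_pl=1.57, gamma_pl=0.130, g=0.046)
```

The numerator vanishing near $E = E_{\mathrm{A}}$ is what produces the dip between the LE- and HE-peaks, mirroring the measured spectra in Figure 2a.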
In Figure \ref{fig:Figure2}a (left panel), we present the typical spectra recorded from five different $\mathrm{MoSe_2}$-coupled dipole nanoantennas and their respective COM fits that reveal different optical responses depending on the detuning $\delta = E_{\mathrm{plasmon}} - E_{\mathrm{A}}$. Each curve shows a dip in a narrow energy region highlighted in blue and exactly at the $A$-exciton position of an $\mathrm{MoSe_2}$ monolayer. The detuning originates from the slight differences in plasmonic resonances of each dipole nanoantenna due to their spread in size $d_\mathrm{size} = \SI{90\pm2}{\nano\meter}$ and feed gap size $d_\mathrm{gap} = \SI{10\pm2}{\nano\meter}$, caused by fabrication fluctuations (see \hyperlink{page.10}{SI} Section \hyperlink{page.10}{S4}). The HE-peak asymptotically approaches the exciton energy for increasing negative detuning, while for the LE-peak the same behavior occurs for increasing positive detuning. Complementary to the measurements, the five corresponding calculated spectra are presented in the right panel in Figure \ref{fig:Figure2}a, showing similar behavior. In Figure \ref{fig:Figure2}b (left), we plot the normalized differential reflectance spectra fits of ${\sim}100$ $\mathrm{MoSe_2}$-coupled dipole nanoantennas ordered by increasing detuning. The blue and red circles denote HE-peak and LE-peak spectral position, respectively. They clearly show an apparent anticrossing behavior that is very similar to recent reports \cite{kleemann_strong-coupling_2017, wen_room-temperature_2017, zheng_manipulating_2017, geisler_single-crystalline_2019, qin_revealing_2020}. However, as we show below, the system operates in the intermediate-coupling regime. The rightmost panel in Figure \ref{fig:Figure2}b shows calculated scattering cross-sections of nanoantennas within that range, illustrating the excellent agreement between experiment and our modeling.
Distributions of the fitting parameters over ${\sim}100$ nanoantennas are presented in \hyperlink{page.10}{SI} Section \hyperlink{page.10}{S5}. To quantify the coupling strength, we again used the COM, where the eigenenergies $E_{\pm}$ of the coupled system Hamiltonian are given by:
\begin{equation}
E_{\pm} = \frac{1}{2}(E_{\mathrm{A}} + E_{\mathrm{plasmon}}) \pm \sqrt{g^2 + \frac{1}{4}\delta^2}.
\label{equ:COM2}
\end{equation}
The exciton and plasmon damping terms are neglected under the assumption $\Gamma_{\mathrm{A}}\ll E_{\mathrm{A}}$ and $\Gamma_{\mathrm{plasmon}}\ll E_{\mathrm{plasmon}}$ \cite{zheng_manipulating_2017, chikkaraddy_single-molecule_2016, wen_room-temperature_2017}. The upper and lower anticrossing branches are shown in blue and red in the leftmost panel in Figure \ref{fig:Figure2}c and correspond to $E_+$ (energy of HE-peak) and $E_-$ (energy of LE-peak), respectively. We obtain the coupling constant $g = \frac{1}{2}\Omega_{|\delta = 0} = \SI{46}{\milli\electronvolt}$, where $\Omega_{|\delta = 0}$ is the splitting at zero detuning, as shown in Figure \ref{fig:Figure2}c. This value represents the rate at which energy is exchanged between the two interacting systems having predominant light and matter character, respectively. For our system, we obtain $g < g_c$, placing it clearly into the intermediate-coupling regime close to strong coupling \cite{pelton_strong_2019, sun_light-emitting_2018}. Here, $g_c = \frac{1}{2}(\Gamma_{\mathrm{plasmon}} + \Gamma_{\mathrm{A}}) = \SI{90}{\milli\electronvolt}$ is the critical coupling value, defined as the boundary between the two regimes. Consequently, differential reflectance spectra can be interpreted as the result of the interference between the fields associated with exciton and plasmon mode, leading to the characteristic Fano lineshape \cite{miroshnichenko_fano_2010, lee_fano_2015, limonov_fano_2017, abid_temperature-dependent_2017, sun_light-emitting_2018}.
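The anticrossing analysis above can be reproduced in a few lines. This sketch evaluates Eq. (2) and checks the classification quoted in the text — a splitting of $2g = 92$ meV at zero detuning, with $g = 46$ meV below the critical value $g_c = 90$ meV (intermediate coupling); the far-detuned check at $\delta = 1$ eV is our own sanity test, not a value from the paper:

```python
import math

def branch_energies(E_A, E_plasmon, g):
    # Upper/lower anticrossing branch energies E± of Eq. (2); energies in eV.
    delta = E_plasmon - E_A
    mean = 0.5 * (E_A + E_plasmon)
    split = math.sqrt(g**2 + 0.25 * delta**2)
    return mean + split, mean - split

g = 0.046    # coupling constant from the anticrossing fit (eV)
g_c = 0.090  # critical coupling, (Gamma_plasmon + Gamma_A) / 2 (eV)
E_plus, E_minus = branch_energies(1.57, 1.57, g)  # zero-detuning case
```

At large detuning the square root is dominated by $\delta/2$, so the branches asymptotically recover the bare exciton and plasmon energies, exactly the behavior of the fitted branches in Figure 2c.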
The right panel in Figure \ref{fig:Figure2}c shows the exciton and plasmon energies, where the gray circles are extracted as fit parameters from Eq. (\ref{equ:ABSscat}). The exciton branch is centered near $\SI{1.57}{\electronvolt}$ and the plasmon resonance energy shows the expected linear behavior. We also examined another array of $\mathrm{MoSe_2}$-coupled dipole nanoantennas that contained nanoantennas of smaller sizes, and thus higher plasmonic resonance energies. This enabled us to reach both sides of the detuning curve with respect to the $A$-exciton, and we found a similar coupling strength of $g = \SI{55}{\milli\electronvolt}$ (see \hyperlink{page.10}{SI} Section \hyperlink{page.10}{S6}). \begin{figure}[!ht] \includegraphics{Figure3} \caption{\textbf{Active control of the coupling via change of excitation polarization.} \textbf{(a)} Polarization-dependent differential reflectance spectra of a bare nanoantenna fitted by the superposition of the two double Lorentzian functions at 0\degree{} and 90\degree{} (left) and a $\mathrm{MoSe_2}$-coupled dipole nanoantenna fitted by the superposition of the COM for 0\degree{} using Eq. (\ref{equ:ABSscat}) and the double Lorentzian for 90\degree{} (right). \textbf{(b)} The calculated spatial distribution of light intensity enhancement of a bare dipole nanoantenna for excitation light polarized along the long axis, i.e. 0\degree{} (left), and perpendicular to the long axis, i.e. 90\degree{} (right).} \label{fig:Figure3} \end{figure} We next explore active control of the coupling via the polarization of the excitation source. Due to the specific design of the dipole nanoantennas, linearly polarized incident light excites a superposition of coupled and decoupled modes, depending on its polarization angle relative to the axis passing between the two triangles through the feed gap \cite{schraml_optical_2014}.
To demonstrate this, we performed polarization-dependent differential reflectance measurements on a bare nanoantenna by changing the polarization angle of the broadband laser from 0\degree{} to 90\degree{} (corresponding to the polarization direction along and perpendicular to the long axis of the nanoantenna, respectively). The polarization-dependent spectra in Figure \ref{fig:Figure3}a (left panel) exhibit a clear blueshift of the plasmonic mode by ${\sim}\SI{50}{meV}$ upon turning the polarization from 0\degree{}, parallel to the feed gap, to 90\degree{}, perpendicular to the feed gap. By fitting the 0\degree{} and 90\degree{} differential reflectance spectra, which correspond to two different modes, with a double Lorentzian function to take into account the asymmetry in triangle sizes, we obtain the spectra for maximum and minimum coupling, respectively. Fits for intermediate angles between 0\degree{} and 90\degree{} are represented as a linear combination of the two distinct modes. Excellent agreement is obtained between experiment and theory. The reason for the observed blueshift can be readily understood from the calculated spatial distribution of normalized light intensity of such a nanoantenna. This is presented in Figure \ref{fig:Figure3}b for 0\degree{} and 90\degree{} excitation, respectively. Upon exciting at 0\degree{}, along the long axis of the antenna, the dipolar modes of the two particles oscillate in phase and couple to each other, thereby lowering the resonance energy. In contrast, upon exciting at 90\degree{}, the two modes are fully decoupled. This results in a \textit{higher} plasmonic resonance energy and the observed blueshift. Importantly, we note that the coupled-plasmon mode exhibits an electromagnetic hot-spot in the feed gap of the nanoantenna that can be progressively turned off by rotating the polarization to the 90\degree{} configuration to address the decoupled plasmon mode.
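The intermediate-angle fits described above, i.e. a linear combination of the 0\degree{} (coupled) and 90\degree{} (decoupled) mode spectra, can be sketched as follows. The $\cos^2/\sin^2$ weighting is an assumed dipole-projection form, and the Gaussian stand-in spectra are placeholders rather than fits to the measured data.

```python
import numpy as np

def spectrum_at_angle(theta_deg, S_coupled, S_decoupled):
    """Linear combination of the two distinct mode spectra at polarization
    angle theta (assumed cos^2/sin^2 projection weights)."""
    t = np.deg2rad(theta_deg)
    return np.cos(t)**2 * S_coupled + np.sin(t)**2 * S_decoupled

E = np.linspace(1.4, 2.0, 601)                 # energy axis (eV)
S0 = np.exp(-0.5 * ((E - 1.62) / 0.05)**2)     # stand-in 0 deg (coupled) spectrum
S90 = np.exp(-0.5 * ((E - 1.67) / 0.05)**2)    # stand-in 90 deg spectrum, ~50 meV blueshifted

S45 = spectrum_at_angle(45.0, S0, S90)         # equal-weight mixture at 45 deg
```

At 0\degree{} and 90\degree{} the combination reduces to the pure coupled and decoupled spectra, respectively, matching the limiting fits in Figure \ref{fig:Figure3}a.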
To demonstrate the active coupling control granted by this property, we conducted polarization-dependent differential reflectance measurements on a $\mathrm{MoSe_2}$-covered dipole nanoantenna. Typical data obtained from these measurements are presented in the right panel in Figure \ref{fig:Figure3}a. At 0\degree{}, the hot-spot in the feed gap was turned on and we observed the previously explained Fano-shaped curve. The COM provides a very good fit to the data. At 90\degree{}, the hot-spot is turned off since only single (uncoupled) particle plasmon modes are excited, and as a result, excitons are no longer coupled to the plasmonic resonator. Therefore, instead of the COM, we fit the data using a double Lorentzian function. For intermediate angles, the linear superposition of the two models provides an excellent match to the measured differential reflectance spectra. Furthermore, the blue-shifted plasmonic mode for 90\degree{} excitation occurs at a position resonant with the $B$-exciton. However, no interference between the nanophotonic resonator mode and the $B$-exciton can be seen: there is no dip at $\SI{1.78}{\electronvolt}$, owing to the absence of the hot-spot in the feed gap. This important observation confirms that the coupling occurs within the hot-spot of the antenna, i.e. in the feed gap, and not, for example, at the edges. Thus, we switch the hot-spot, and thereby the coupling between the $A$-exciton and the plasmonic resonator, on and off. \begin{figure}[!ht] \includegraphics{Figure4.pdf} \caption{\textbf{Modification of $\mathrm{MoSe}_2$ monolayer photoluminescence (PL) at $\SI{10}{\kelvin}$}. \textbf{(a)} Exemplary PL spectra of the $\mathrm{MoSe}_2$ monolayer on a nanoantenna (red) and on the flat SiO$_2$ substrate (blue). $X^0$ and $X^-$ denote the neutral exciton and the negatively charged trion. The blue and red bands correspond to the spectral regions $(1.66-1.68)$ $\SI{}{\electronvolt}$ and $(1.50-1.62)$ $\SI{}{\electronvolt}$, respectively.
\textbf{(b)} Integrated PL intensity maps with the integration regions corresponding to the specified spectral bands. \textbf{(c)} Spectral distribution of the localized optical transitions relative to the $\mathrm{X}^0$. \textbf{(d)} PL intensity of an exemplary localized optical transition (at $\sim$\SI{1.57}{\electronvolt}) as a function of the excitation polarization angle.} \label{fig:Figure4} \end{figure} Finally, we present investigations of the impact of the nanoantenna on the low-temperature photoluminescence recorded from the MoSe$_2$ monolayer. Figure \ref{fig:Figure4}a shows a typical photoluminescence spectrum recorded at $\SI{10}{\kelvin}$ with $\SI{633}{\nano\meter}$ excitation. The data reveal that radiative electron-hole recombination in a semiconducting $\mathrm{MoSe_2}$ monolayer is strongly modified at the position of a $\mathrm{MoSe_2}$-coupled dipole nanoantenna (red spectrum) compared to the flat monolayer region (blue spectrum). On the nanostructure, the exciton and trion peaks exhibit a redshift and a clear intensity reduction. This shift is most likely due to the gold beneath the MoSe$_2$, which results in locally enhanced screening \cite{rosner_two-dimensional_2016, stier_probing_2016}, bandgap renormalization \cite{ugeda_giant_2014}, and potentially also strain \cite{schmidt_reversible_2016}. The right panel in Figure \ref{fig:Figure4}b shows the integrated photoluminescence intensity for the spectral region $(1.66-1.68)$ $\SI{}{\electronvolt}$ scanned over the whole sample, corresponding to the blue band in Figure \ref{fig:Figure4}a. The integrated intensity consistently shows a dip at the positions of the nanostructures. The spectrum recorded on the nanoantenna in Figure \ref{fig:Figure4}a reveals several PL features which are redshifted from the $X^0$ transition.
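The band-integrated maps of Figure \ref{fig:Figure4}b can be sketched as follows: for each spectrum of a spatial scan, the counts are summed over a fixed spectral window. The windows follow the text, while the toy spectrum and function names are illustrative.

```python
import numpy as np

E = np.linspace(1.45, 1.75, 301)   # energy axis (eV), step 1 meV

def integrate_band(spectrum, E, window):
    """Integrate a PL spectrum over a spectral window (E_low, E_high) in eV."""
    lo, hi = window
    mask = (E >= lo) & (E <= hi)
    return np.sum(spectrum[mask]) * (E[1] - E[0])

exciton_window = (1.66, 1.68)     # blue band: free-exciton region
localized_window = (1.50, 1.62)   # red band: localized transitions

# Toy X0-like peak centered at 1.67 eV as a stand-in for one pixel's spectrum
spectrum = np.exp(-0.5 * ((E - 1.67) / 0.01)**2)
I_x0 = integrate_band(spectrum, E, exciton_window)
I_loc = integrate_band(spectrum, E, localized_window)
# For this free-exciton-like spectrum, I_x0 dominates; on a nanoantenna the
# weight would shift into the localized window instead.
```
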
We plot the integrated emission in the spectral range $(1.50-1.62)$ $\SI{}{\electronvolt}$ in the left panel of Figure \ref{fig:Figure4}b and find that the optical transitions in this spectral region are strictly localized at the positions of the nanostructures \cite{branny_discrete_2016, yu_site-controlled_2021}. These transitions possibly stem from locally trapped excitons, whose origin is often attributed to defects \cite{klein_site-selectively_2019}, strain fields \cite{schmidt_reversible_2016}, and the gradient in the dielectric environment \cite{rosner_two-dimensional_2016}. Their occurrences are depicted by the histogram in Figure \ref{fig:Figure4}c, which shows that most transitions occur spectrally close to the low-energy trion tail. We further observed transitions at the positions of dipole nanoantennas, spectrally close to the plasmonic resonance, that show a strong polarization response along the long-axis direction of the nanostructure, as depicted in Figure \ref{fig:Figure4}d for the transition occurring at $\sim$\SI{1.57}{\electronvolt}; this indicates that the emission properties adapt to the nanoantenna. \section{Conclusion} In summary, we reported optical studies of light-matter coupling between a $\mathrm{MoSe_2}$ monolayer and proximal plasmonic dipole nanoantennas. Despite observing a clear anticrossing in reflectivity spectra, we showed that the system operates in the intermediate-coupling regime and that it can be site-selectively accessed and actively controlled at room temperature. Furthermore, these structures exhibit a modified $\mathrm{MoSe_2}$ monolayer photoluminescence that shows a redshift of free excitons and the emergence of trapped excitons. Our methods enable further engineering of the hybrid TMD-metal platform to deterministically position arrays of weakly and strongly coupled systems of a nanoantenna and a single-photon emitter in TMDs.
Achieving systems that operate deeper in the strong-coupling regime would require engineering of higher-$Q$ \cite{chen_high-_2018} plasmonic structures, for example using novel plasmonic materials or dielectrics. By site-selectively inducing one or more color centers using, e.g., local He-ion irradiation \cite{klein_site-selectively_2019}, it may be possible to study novel regimes of few-emitter cavity quantum electrodynamics, such as Dicke superradiance or collective effects, while controlling hybrid exciton-polariton states via hot-spot switching. Additional control could be gained by introducing a static electric field within the feed gap \cite{kern_electrically_2015} to provide tunability. \section{Methods} \label{sec:Methods} \subsection{FDTD Calculations} Calculations were performed using the 3D Maxwell's equations solver Lumerical FDTD Solutions. Bowtie dipole nanoantennas were modeled as triangles with rounded edges. Except for the nanoantenna size, all geometrical parameters were fixed as noted in the main text and \hyperlink{page.10}{SI} Section \hyperlink{page.10}{S2}. For the optical constants of gold, we used the values reported by Johnson and Christy \cite{johnson_optical_1972}. For monolayer MoSe$_2$ at room temperature, the data were extracted from Y. Li \textit{et al.} \cite{li_measurement_2014}. A fine mesh with a Yee cell of $1\times1\times\SI{1}{\nano\meter^3}$ was used in the proximity of the dipole nanoantenna. Perfectly matched layer (PML) boundary conditions were applied. To speed up the simulations, periodic boundary conditions were used to exploit the symmetrical character of our structures. A total-field scattered-field (TFSF) source was used for excitation, which facilitated the calculation of scattered and absorbed light for different spectral ranges and polarization angles. \subsection{Sample Fabrication} The plasmonic dipole nanoantennas were written via electron-beam lithography with an eLine system by Raith.
Polymethylmethacrylate (PMMA) 950K (AR-P 679.02, ALLRESIST) was spin-coated onto the SiO$_2$ substrate, ramping from 0 to 4000 rpm in one second, followed by spinning at 4000 rpm for 40 seconds. The PMMA was baked at \SI{170}{\celsius} for 5 minutes. AR-PC 5090 Electra 92 was used to avoid charging effects (rotation speed 2000 rpm, initial ramp 2000 rpm, spinning time 60 s, baking temperature \SI{90}{\celsius}). An acceleration voltage of 30 kV, a \SI{10}{\micro\meter} aperture, and doses of 400-\SI{800}{\micro\coulomb/\centi\meter^2} were used. Nanoantennas were positioned \SI{2}{\micro\meter} from each other to avoid coupling between neighboring nanoantennas. After writing, the conductive coating was removed using distilled water, and the structures were developed in a 1:3 methyl isobutyl ketone (MIBK):isopropanol solution for 45 s. Evaporation of a 5 nm titanium and a 35 nm gold layer, followed by lift-off in acetone, was performed to obtain the final nanoantenna structures. In the second step, an all-dry viscoelastic (polydimethylsiloxane) stamping method \cite{castellanos-gomez_deterministic_2014} was used to position and transfer a mechanically exfoliated MoSe$_2$ monolayer on top of the plasmonic nanostructures, finally obtaining MoSe$_2$-coupled dipole nanoantennas. The monolayer was identified using atomic force microscopy, contrast in an optical microscope, and differential reflectance measurements. \subsection{Optical Measurements} Differential reflectance and photoluminescence measurements were performed using a self-built confocal microscope. For differential reflectance measurements, a broad-band supercontinuum laser beam (Fianium WhiteLase) was focused on the sample by an objective with N.A. = 0.9 to a diffraction-limited spot.
Differential reflectance spectra $\Delta R(E)/R(E)$ were obtained as $(I_{ON}(E) - I_{OFF}(E))/I_{OFF}(E)$, where $I_{ON}(E)$ and $I_{OFF}(E)$ represent the number of counts of reflected light when the excitation laser spot is on a nanoantenna and on the substrate/monolayer, respectively. The light was dispersed by a 150 lines/mm grating onto a charge-coupled device (Horiba). Photoluminescence measurements were performed at \SI{10}{\kelvin} using a He-flow cryostat (Cryovac). The He-Ne excitation laser (\SI{633}{\nano\meter}) was focused on the sample by an objective with N.A. = 0.75, and a 600 lines/mm grating was used. \section{Author Contributions} M.M.P. did the FDTD calculations and fabricated the samples. M.M.P. and A.N. constructed the differential reflectance measurement setup and performed differential reflectance measurements and data analysis. M.M.P. and M.Kr. performed PL measurements and data analysis. M.Ka., K.M., and J.J.F. conceived and managed the project. All authors participated in the discussion of the results and the writing of the manuscript. \section{Acknowledgements} M.M.P. acknowledges TUM International Graduate School of Science and Engineering (IGSSE). M.Kr. acknowledges support from the International Max Planck Research School for Quantum Science and Technology (IMPRS-QST). J.J.F. gratefully acknowledges the German Science Foundation (DFG) for financial support via the Clusters of Excellence e.Conversion (EXC 2089), MCQST (EXC 2111), as well as the individual projects DI 2013/5-1 and FI 947/8-1. M.B. and A.L. acknowledge support from the Alexander von Humboldt Foundation. K.M. acknowledges support from the Bavarian Academy of Sciences and Humanities.
\section{Introduction} Vibrational energy relaxation (VER) and dephasing are fundamental properties of molecular dynamics, energy transfer, and reactivity. Many experimental and theoretical studies have explored these fundamental processes in the gas phase, the liquid state, and in glasses and biomolecular systems \cite{FS05}. Though our methodology can be applied to any molecular system, we are primarily interested in addressing VER and dephasing in peptides or proteins. While recent advanced experimental techniques using absorption spectra or time-resolved spectra can deduce the structure and dynamics of such a peptide or protein system, theoretical approaches are needed to clarify the mechanisms of VER and dephasing underlying the experimental data. The most standard approach to this problem is through the perturbation theory of quantum mechanics as initiated by Oxtoby \cite{Oxtoby}. Recently Hynes's group \cite{Hynes} and Skinner's group \cite{Skinner} thoroughly studied the VER and dephasing properties of water (their target mode was the OH bond of HOD in heavy water) using this strategy. This approach is applicable to peptides or proteins, as was first illustrated by Straub and coworkers \cite{Straub}. Derived from this strategy is the use of the Maradudin-Fein (MF) formula (or its equivalent), which was pursued by Leitner \cite{Leitner05} and Straub and coworkers \cite{FBS05a}. This formula requires the normal modes of the system and the cubic anharmonic coefficients between the normal modes. This methodology can provide a reasonable account of the VER properties of peptides or proteins, but there are several deficiencies: the most serious one is that it assumes the Markov properties of the system, so it cannot describe the short-time dynamics \cite{FBS05b}. Another problem is the determination of the ``lifetime'' width parameter \cite{FS05,FBS05a,FBS05b}.
We also want to describe the dephasing properties of the system, which are crucial to the interpretation of the experimental results; these cannot be directly described by the MF formula (but see \cite{Leitner02}). To meet these goals, we derive the formulas for VER and dephasing without assuming the Markov properties, i.e., without taking an infinite-time limit. As a result, we can avoid the annoying ``width parameter'' problem inherent to the MF approach. In this sense, Mikami and Okazaki \cite{MO04} took a similar path using the path-integral influence functional theory. We use a simple time-dependent perturbation theory of quantum mechanics, and derive the VER and dephasing formulas more easily. We find there is a difference between our formulas and theirs in terms of the renormalization of the system Hamiltonian. Another difference is that our system oscillator is taken to be a cubic anharmonic oscillator, whereas their mode is a harmonic oscillator. This can affect the result when the formulas are applied to real systems with strong anharmonicity. This paper is organized as follows: In Sec.~\ref{sec:derivation}, we derive the VER and dephasing formulas for an anharmonic oscillator (mode) without assuming the Markov properties. In Sec.~\ref{sec:NMA}, we apply our formulas to the amide I mode of $N$-methylacetamide in heavy water, and discuss the numerical results and the limitations of our strategy. In Sec.~\ref{sec:summary}, we summarize the paper. Several system parameters and coefficients in our formulas are defined in the Appendix.
\section{Derivation of the formulas for VER and dephasing} \label{sec:derivation} \subsection{System, Bath, and Coupling} We take our Hamiltonian of a solvated peptide or protein to be \begin{eqnarray} {\cal H} &=& {\cal H}_0 + {\cal V} = {\cal H}_S +{\cal H}_B+ {\cal V} = {\cal H}_S^{(0)} +{\cal H}_f +{\cal H}_B+ {\cal V}, \\ {\cal V} &=& -q_S ({\cal F} -\langle {\cal F} \rangle)+ q_S^2 ({\cal G}-\langle {\cal G} \rangle) = -q_S \delta{\cal F}+ q_S^2 \delta{\cal G}, \\ {\cal H}_S^{(0)} &=& \frac{p_S^2}{2} + \frac{\omega_S^2}{2} q_S^2 -q_S \langle {\cal F} \rangle + q_S^2 \langle {\cal G} \rangle = \frac{p_S^2}{2} + \frac{\bar{\omega}_S^2}{2} \bar{q}_S^2 -\frac{\langle {\cal F} \rangle^2}{2 \bar{\omega}_S^2}, \\ {\cal H}_f &=& \frac{f}{6} \bar{q}_S^3, \\ {\cal H}_B &=& \sum_{\alpha} \left( \frac{p_{\alpha}^2}{2}+\frac{\omega_{\alpha}^2}{2} q_{\alpha}^2 \right), \end{eqnarray} where \begin{eqnarray} \bar{\omega}_S &=& \omega_S \sqrt{1+\frac{2 \langle {\cal G} \rangle}{\omega_S^2}}, \label{eq:omegas} \\ \bar{q}_S &=& q_S-\frac{\langle {\cal F} \rangle}{\bar{\omega}_S^2}=q_S-b. \label{eq:qs} \end{eqnarray} ${\cal H}_S={\cal H}_S^{(0)}+{\cal H}_f$ is the renormalized system Hamiltonian representing a vibrational mode $q_S$ with cubic anharmonicity $f$, ${\cal H}_B$ the bath Hamiltonian representing solvent or environmental degrees of freedom with harmonic frequencies $\omega_{\alpha}$, and ${\cal V}$ the interaction Hamiltonian describing the coupling between the system and the bath. We have assumed that the interaction can be Taylor expanded, and we have included terms only up to second order in $q_S$. Note that we need to renormalize the system to assure that $\langle {\cal V} \rangle =0$, where the bracket denotes the bath average throughout this paper. (For the definition of $\delta {\cal F}$ and $\delta {\cal G}$, see Appendix \ref{app:system}.)
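As a quick numerical check of the renormalization in Eqs. (\ref{eq:omegas})-(\ref{eq:qs}), the sketch below evaluates $\bar{\omega}_S$ and the coordinate shift $b = \langle {\cal F} \rangle/\bar{\omega}_S^2$ for illustrative, dimensionless values of the bath averages (the numbers are placeholders, not parameters of the amide I mode):

```python
import math

def renormalize(omega_S, F_avg, G_avg):
    """Renormalized frequency and coordinate shift:
       omega_bar = omega_S * sqrt(1 + 2<G>/omega_S^2),  b = <F>/omega_bar^2."""
    omega_bar = omega_S * math.sqrt(1.0 + 2.0 * G_avg / omega_S**2)
    b = F_avg / omega_bar**2
    return omega_bar, b

# Illustrative dimensionless values; <G> > 0 blue-shifts the mode.
omega_bar, b = renormalize(omega_S=1.0, F_avg=0.2, G_avg=0.05)
```

The shifted coordinate $\bar{q}_S = q_S - b$ then guarantees $\langle {\cal V} \rangle = 0$ for the interaction written in terms of $\delta {\cal F}$ and $\delta {\cal G}$.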
This is automatically satisfied in the case of bilinear coupling like the Caldeira-Leggett-Zwanzig model \cite{Weiss}, but this is not usually the case. The system variable becomes ${\bar q}_S$ instead of $q_S$, and the system frequency becomes ${\bar \omega}_S$ instead of $\omega_S$. This is similar to previous treatments of the system-bath interaction in the literature \cite{Skinner,XYK02}. \subsection{Perturbation theory for VER and dephasing} Starting from the interaction picture of the von Neumann equation, we can expand the density operator for the full system as \begin{eqnarray} \tilde{\rho}(t) &=& \rho(0)+ \frac{1}{i \hbar} \int_0^t dt' [\tilde{\cal V}(t'), \tilde{\rho}(t')] \nonumber \\ &=& \rho(0)+ \frac{1}{i \hbar} \int_0^t dt' [\tilde{\cal V}(t'), \rho(0)] \nonumber \\ && +\frac{1}{(i \hbar)^2} \int_0^t dt' \int_0^{t'} dt'' [\tilde{\cal V}(t'), [\tilde{\cal V}(t''),\rho(0)]] + \cdots \end{eqnarray} where \begin{equation} \tilde{\rho}(t) \equiv e^{i {\cal H}_0 t/\hbar} \rho(t) e^{-i {\cal H}_0 t/\hbar}, \hspace{1cm} \tilde{\cal V}(t) \equiv e^{i {\cal H}_0 t/\hbar} {\cal V} e^{-i {\cal H}_0 t/\hbar}. \end{equation} The {\it reduced} density matrix for the system oscillator is introduced as \begin{eqnarray} (\rho_S)_{mn}(t) &\equiv& {\rm Tr} \{ P_{mn} \rho(t) \} = {\rm Tr} \{ P_{mn} e^{-i {\cal H}_0 t/\hbar} \tilde{\rho}(t) e^{i {\cal H}_0 t/\hbar} \}, \\ P_{mn} &\equiv& |n \rangle \langle m| \otimes 1_B, \\ \rho(0) &=& \rho_S \otimes \rho_B = \sum_{k,l} (\rho_S)_{kl} |k \rangle \langle l| \otimes e^{-\beta {\cal H}_B}/Z_B, \\ Z_B &=& {\rm Tr}_B \{ e^{-\beta {\cal H}_B} \}, \end{eqnarray} where the initial state is assumed to be a direct product state of $\rho_S$ and $\rho_B = e^{-\beta {\cal H}_B}/Z_B$, i.e., we have assumed that the bath is in thermal equilibrium. Here $|k \rangle$ is the vibrational eigenstate for the system Hamiltonian ${\cal H}_S$, i.e., ${\cal H}_S | k \rangle =E_k | k \rangle$.
If we assume that ${\cal H}_f$ is small, we can calculate $|k \rangle$ and $E_k$ using the time-independent perturbation theory as shown in Appendix \ref{app:system}. We note that \begin{equation} (\rho_S)_{mn}(t)={\rm Tr} \{ P_{mn} e^{-i {\cal H}_0 t/\hbar} \tilde{\rho}(t) e^{i {\cal H}_0 t/\hbar} \} = {\rm Tr}_B \{ \tilde{\rho}_{mn}(t) \} e^{-i \omega_{mn}t}. \end{equation} The lowest (second) order result for the density matrix is \begin{eqnarray} (\rho_S)_{mn}(t) &\simeq& (\rho_S)^{(0)}_{mn}(t) +(\rho_S)^{(1)}_{mn}(t) +(\rho_S)^{(2)}_{mn}(t) +\cdots, \label{eq:density} \\ (\rho_S)^{(0)}_{mn}(t) &=& {\rm Tr}_B \{ \tilde{\rho}_{mn}(0) \} e^{-i \omega_{mn}t} =(\rho_S)_{mn}e^{-i \omega_{mn}t}, \\ (\rho_S)^{(1)}_{mn}(t) &=& \frac{1}{i \hbar} \int^{t}_0 dt' {\rm Tr}_B \{ \langle m| [\tilde{\cal V}(t'),\rho(0)] | n \rangle \} e^{-i \omega_{mn}t} \nonumber \\ &=& \frac{1}{i \hbar} \int^{t}_0 dt' \sum_k \left \{ \langle \tilde{\cal V}_{mk}(t') \rangle e^{i \omega_{mk}t'} (\rho_S)_{kn} - \langle \tilde{\cal V}_{kn}(t') \rangle e^{i \omega_{kn}t'} (\rho_S)_{mk} \right \} e^{-i \omega_{mn}t}, \\ (\rho_S)^{(2)}_{mn}(t) &=& \frac{1}{(i \hbar)^2} \int^{t}_0 dt' \int^{t'}_0 dt'' {\rm Tr}_B \{ \langle m| [\tilde{\cal V}(t'),[\tilde{\cal V}(t''),\rho(0)]] | n \rangle \} e^{-i \omega_{mn}t} \nonumber \\ &=& \frac{1}{(i \hbar)^2} \int_0^t dt' \int_0^{t'} dt'' \sum_{k,l} \left \{ \langle \tilde{\cal V}_{mk}(t') \tilde{\cal V}_{kl}(t'') \rangle (\rho_S)_{ln} e^{i (\omega_{mk}t'+\omega_{kl}t'')} \right \} e^{-i \omega_{mn}t} \nonumber \\ &+& \frac{1}{(i \hbar)^2} \int_0^t dt' \int_0^{t'} dt'' \sum_{k,l} \left \{ \langle \tilde{\cal V}_{kl}(t'') \tilde{\cal V}_{ln}(t') \rangle (\rho_S)_{mk} e^{i (\omega_{kl}t''+\omega_{ln}t')} \right \} e^{-i \omega_{mn}t} \nonumber \\ &-& \frac{1}{(i \hbar)^2} \int_0^t dt' \int_0^{t'} dt'' \sum_{k,l} \left \{ \langle \tilde{\cal V}_{ln}(t'') \tilde{\cal V}_{mk}(t') \rangle (\rho_S)_{kl} e^{i (\omega_{mk}t'+\omega_{ln}t'')} \right \} e^{-i 
\omega_{mn}t} \nonumber \\ &-& \frac{1}{(i \hbar)^2} \int_0^t dt' \int_0^{t'} dt'' \sum_{k,l} \left \{ \langle \tilde{\cal V}_{ln}(t') \tilde{\cal V}_{mk}(t'') \rangle (\rho_S)_{kl} e^{i (\omega_{mk}t''+\omega_{ln}t')} \right \} e^{-i \omega_{mn}t} \end{eqnarray} where \begin{eqnarray} \langle \tilde{\cal V}_{kl} (t) \tilde{\cal V}_{mn}(t') \rangle &\equiv& {\rm Tr}_B \{ \rho_B \tilde{\cal V}_{kl}(t) \tilde{\cal V}_{mn}(t') \}, \\ \tilde{\cal V}_{kl}(t) &=& \langle k | \tilde{\cal V}(t)| l \rangle = \langle k | e^{i {\cal H}_B t/\hbar} {\cal V} e^{-i {\cal H}_B t/\hbar} | l \rangle, \\ \omega_{kl} &=& (E_{k}-E_{l})/\hbar. \end{eqnarray} Note that, in the above formulas, the time dependence is only induced by the bath Hamiltonian ${\cal H}_B$. For the matrix elements of the interaction Hamiltonian ${\cal V}$, we have \begin{eqnarray} \langle \tilde{\cal V}_{kl}(t) \rangle &=& -(q_S)_{kl} \langle \delta{\cal F}(t) \rangle +(q_S^2)_{kl} \langle \delta{\cal G}(t) \rangle, \\ \langle \tilde{\cal V}_{kl}(t) \tilde{\cal V}_{mn}(t') \rangle &=& (q_S)_{kl} (q_S)_{mn} \langle \delta{\cal F}(t) \delta{\cal F}(t') \rangle +(q_S^2)_{kl} (q_S^2)_{mn} \langle \delta{\cal G}(t) \delta{\cal G}(t') \rangle \nonumber \\ && -[(q_S)_{kl} (q_S^2)_{mn}+(q_S^2)_{kl} (q_S)_{mn}] \langle \delta{\cal F}(t) \delta{\cal G}(t') \rangle \label{eq:matrix} \end{eqnarray} where the value of $(q_S)_{kl}$ and $(q_S^2)_{kl}$ are given in Eqs.~(\ref{eq:qs1})-(\ref{eq:qs2}) for the case of a cubic oscillator. Since $\langle \delta {\cal F} \rangle=0$ and $\langle \delta {\cal G} \rangle=0$, we have $\langle \tilde{\cal V}_{kl}(t) \rangle=0$ and $(\rho_S)^{(1)}_{mn}(t)=0$. \subsection{VER formula} We first calculate the diagonal elements of the density matrix $(\rho_S)_{ii}(t)$ ($i=0,1$) by assuming that the initial state is the first {\it vibrationally} excited state: $\rho_S= |1 \rangle \langle 1|$. 
This is a typical situation for VER, though VER from highly excited states can also be considered \cite{VFVAMJ04}. The density matrix $(\rho_S)_{00}(t)$ is written as \begin{eqnarray} (\rho_S)_{00}(t) &\simeq& \frac{2}{\hbar^2} \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \left \{ \langle \tilde{\cal V}_{10}(t') \tilde{\cal V}_{01}(t'') \rangle e^{i \tilde{\omega}_S (t'-t'')} \right \} \end{eqnarray} where $\tilde{\omega}_S$ is the anharmonicity-corrected system frequency given by Eq.~(\ref{eq:freq}). From Eq.~(\ref{eq:matrix}), we have \begin{eqnarray} (\rho_S)_{00}(t) &\simeq& \frac{2}{\hbar^2} (q_S)^2_{10} \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \left \{ \langle \delta {\cal F}(t'-t'') \delta {\cal F}(0) \rangle e^{i \tilde{\omega}_S (t'-t'')} \right \} \nonumber \\ && + \frac{2}{\hbar^2} (q^2_S)^2_{10} \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \left \{ \langle \delta {\cal G}(t'-t'') \delta {\cal G}(0) \rangle e^{i \tilde{\omega}_S (t'-t'')} \right \} \nonumber \\ && -\frac{4}{\hbar^2} (q_S)_{10}(q^2_S)_{10} \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \left \{ \langle \delta {\cal F}(t'-t'') \delta {\cal G}(0) \rangle e^{i \tilde{\omega}_S (t'-t'')} \right \}.
\label{eq:VER0} \end{eqnarray} Using the explicit expressions for the correlation functions \cite{FBS05a}, the final VER formula is obtained as \begin{eqnarray} (\rho_S)_{00}(t) &\simeq& \frac{2}{\hbar^2} \sum_{\alpha,\beta} \left[ C_{--}^{\alpha \beta} u_t(\tilde{\omega}_S-\omega_{\alpha}-\omega_{\beta}) \nonumber +C_{++}^{\alpha \beta} u_t(\tilde{\omega}_S+\omega_{\alpha}+\omega_{\beta}) +C_{+-}^{\alpha \beta} u_t(\tilde{\omega}_S-\omega_{\alpha}+\omega_{\beta}) \right] \nonumber \\ && +\frac{2}{\hbar^2} \sum_{\alpha} \left[ C^{\alpha}_{-} u_t(\tilde{\omega}_S-\omega_{\alpha}) + C^{\alpha}_{+} u_t(\tilde{\omega}_S+\omega_{\alpha}) \right] \label{eq:VER} \end{eqnarray} where $u_t(\Omega)$ is defined as \begin{eqnarray} u_t(\Omega) &=& \int_0^t dt' \int_0^{t'} dt'' \cos \Omega (t'-t'') = \frac{1-\cos \Omega t}{\Omega^2}, \\ v_t(\Omega) &=& \int_0^t dt' \int_0^{t'} dt'' \sin \Omega (t'-t'') = \frac{\Omega t-\sin \Omega t}{\Omega^2}, \end{eqnarray} and $v_t(\Omega)$ is defined for later use. The coefficients are defined in Appendix \ref{app:coef}. Equation (\ref{eq:VER}) is our final formula for VER. If we take the long time limit of this formula (which is equivalent to the Markov approximation), we obtain a formula for the VER rate \begin{eqnarray} k_{0 \leftarrow 1} &\equiv& \left. \frac{d}{dt}(\rho_S)_{00}(t) \right|_{t \rightarrow \infty} \nonumber \\ &=& \frac{2 \pi}{\hbar^2} \sum_{\alpha,\beta} \left[ C_{--}^{\alpha \beta} \delta(\tilde{\omega}_S-\omega_{\alpha}-\omega_{\beta}) + C_{++}^{\alpha \beta} \delta(\tilde{\omega}_S+\omega_{\alpha}+\omega_{\beta}) + C_{+-}^{\alpha \beta} \delta(\tilde{\omega}_S-\omega_{\alpha}+\omega_{\beta}) \right] \nonumber \\ && +\frac{2 \pi}{\hbar^2} \sum_{\alpha} \left[ C^{\alpha}_{-} \delta(\tilde{\omega}_S-\omega_{\alpha}) + C^{\alpha}_{+} \delta(\tilde{\omega}_S+\omega_{\alpha}) \right] \label{eq:MF} \end{eqnarray} where we have used \begin{equation} \left. \frac{d}{dt} u_t(\Omega) \right|_{t \rightarrow \infty} =\left. 
\frac{\sin \Omega t}{\Omega} \right|_{t \rightarrow \infty} = \pi \delta(\Omega). \end{equation} If $\bar{q}_S=q_S$ and $\tilde{\omega}_S=\omega_S$, i.e., $\langle {\cal F} \rangle = \langle {\cal G} \rangle = 0$ {\it and} $f=0$, we recover the Maradudin-Fein formula \cite{FBS05a} from Eq.~(\ref{eq:MF}). It follows that Eq.~(\ref{eq:VER}) is a generalization of the Maradudin-Fein formula, one that can describe the time development of the density matrix. \subsection{Dephasing formula} We calculate the off-diagonal elements of the density matrix $(\rho_S)_{10}(t)$ by assuming that the initial state is an equal superposition of $|0 \rangle$ and $|1 \rangle$: $\rho_S=(1/2)( |0 \rangle \langle 0|+ |0 \rangle \langle 1|+ |1 \rangle \langle 0|+ |1 \rangle \langle 1|) $ \cite{MO04}. That is, $(\rho_S)_{kl}=1/2$ for all $k$ and $l$. This is a simplified setting in which to study dephasing in a two-level system. We have \begin{eqnarray} (\rho_S)_{10}(t) &=& (\rho_S)^{(0)}_{10}(t)+(\rho_S)^{(1)}_{10}(t)+(\rho_S)^{(2)}_{10}(t) + \cdots \nonumber \\ &=& \frac{1}{2}e^{-i \tilde{\omega}_S t}(1+ r^{(1)}(t)+r^{(2)}(t)+ \cdots). \label{eq:dephasing} \end{eqnarray} By the definition of the interaction Hamiltonian (Appendix \ref{app:system}), we have $r^{(1)}(t)=0$.
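At zeroth order, Eq.~(\ref{eq:dephasing}) is simply free precession of the coherence at $\tilde{\omega}_S$ with constant magnitude $1/2$, which is easily checked numerically. A minimal sketch (the frequency value is illustrative only, and we set $\hbar=1$ with the ground-state energy at zero):

```python
import numpy as np

# Equal superposition (|0> + |1>)/sqrt(2): every element of rho_S equals 1/2.
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho0 = np.outer(psi, psi.conj())

def rho_free(t, omega_s):
    """Zeroth-order (bath-free) evolution of the 2x2 reduced density matrix."""
    # hbar = 1; only the level splitting omega_s matters for the coherence.
    u = np.diag([1.0, np.exp(-1j * omega_s * t)])
    return u @ rho0 @ u.conj().T

omega_s = 318.0           # illustrative: ~1690 cm^-1 expressed in rad/ps
rho_t = rho_free(0.1, omega_s)
# (rho_S)_10(t) = (1/2) exp(-i omega_s t): constant magnitude, rotating phase,
# until the second-order terms r^(2)(t) start to erode the coherence.
```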
The remaining term $r^{(2)}(t)$ is decomposed as \begin{eqnarray} r^{(2)}(t) &=& -r^{(2)}_{FF}(t)-r^{(2)}_{GG}(t)-r^{(2)}_{FG}(t), \\ r^{(2)}_{FF}(t) &=& \frac{2}{\hbar^2} \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \{ \langle {\cal V}_{10}(t'){\cal V}_{01}(t'') \rangle \} [e^{i \tilde{\omega}_S(t'-t'')}-e^{i \tilde{\omega}_S(t'+t'')}] \nonumber \\ &=& \frac{2}{\hbar^2} (q_S)_{10}^2 \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \{ \langle \delta {\cal F}(t') \delta {\cal F}(t'') \rangle \} [e^{i \tilde{\omega}_S(t'-t'')}-e^{i \tilde{\omega}_S(t'+t'')}] \nonumber \\ &+& \frac{2}{\hbar^2} (q^2_S)_{10}^2 \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \{ \langle \delta {\cal G}(t') \delta {\cal G}(t'') \rangle \} [e^{i \tilde{\omega}_S(t'-t'')}-e^{i \tilde{\omega}_S(t'+t'')}] \nonumber \\ &-& \frac{4}{\hbar^2} (q_S)_{10} (q^2_S)_{10} \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \{ \langle \delta {\cal F}(t') \delta {\cal G}(t'') \rangle \} [e^{i \tilde{\omega}_S(t'-t'')}-e^{i \tilde{\omega}_S(t'+t'')}], \\ r^{(2)}_{GG}(t) &=& \frac{1}{\hbar^2} \int_0^t dt' \int_0^{t'} dt'' \{ \langle [{\cal V}_{11}(t')- {\cal V}_{00}(t')] {\cal V}_{11}(t'') \rangle + \langle {\cal V}_{00}(t'') [{\cal V}_{00}(t')- {\cal V}_{11}(t')] \rangle \} \nonumber \\ &=& \frac{1}{\hbar^2} [(q_S)_{11}-(q_S)_{00}]^2 \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \langle \delta {\cal F}(t') \delta {\cal F}(t'') \rangle \nonumber \\ &+& \frac{1}{\hbar^2} [(q^2_S)_{11}-(q^2_S)_{00}]^2 \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \langle \delta {\cal G}(t') \delta {\cal G}(t'') \rangle \nonumber \\ &-& \frac{2}{\hbar^2} [(q_S)_{11}-(q_S)_{00}] [(q^2_S)_{11}-(q^2_S)_{00}] \int_0^t dt' \int_0^{t'} dt'' {\rm Re} \langle \delta {\cal F}(t') \delta {\cal G}(t'') \rangle \nonumber \\ &+& \frac{i}{\hbar^2} [(q_S)^2_{11}-(q_S)^2_{00}] \int_0^t dt' \int_0^{t'} dt'' {\rm Im} \langle \delta {\cal F}(t') \delta {\cal F}(t'') \rangle \nonumber \\ &+& \frac{i}{\hbar^2} [(q^2_S)^2_{11}-(q^2_S)^2_{00}] \int_0^t dt' \int_0^{t'} dt'' {\rm Im} \langle \delta 
{\cal G}(t') \delta {\cal G}(t'') \rangle \nonumber \\ &-& \frac{2 i}{\hbar^2} [(q_S)_{11} (q^2_S)_{11} - (q_S)_{00}(q^2_S)_{00}] \int_0^t dt' \int_0^{t'} dt'' {\rm Im} \langle \delta {\cal F}(t') \delta {\cal G}(t'') \rangle, \label{eq:gg} \\ r^{(2)}_{FG}(t) &=& \frac{2i}{(i \hbar)^2} \int_0^t dt' \int_0^{t'} dt'' {\rm Im} \{ \langle [{\cal V}_{11}(t')-{\cal V}_{00}(t')] {\cal V}_{10}(t'') \rangle \} [e^{i \tilde{\omega}_S t'}-e^{i \tilde{\omega}_S t''}] \nonumber \\ &=& \frac{2i}{(i \hbar)^2} [(q_S)_{11}-(q_S)_{00}](q_S)_{10} \int_0^t dt' \int_0^{t'} dt'' {\rm Im} \{ \langle \delta {\cal F}(t') \delta {\cal F}(t'') \rangle \} [e^{i \tilde{\omega}_S t'}-e^{i \tilde{\omega}_S t''}] \nonumber \\ &+& \frac{2i}{(i \hbar)^2} [(q_S^2)_{11}-(q_S^2)_{00}] (q_S^2)_{10} \int_0^t dt' \int_0^{t'} dt'' {\rm Im} \{ \langle \delta {\cal G}(t') \delta {\cal G}(t'') \rangle \} [e^{i \tilde{\omega}_S t'}-e^{i \tilde{\omega}_S t''}] \nonumber \\ &-& \frac{2i}{(i \hbar)^2} \{ [(q_S)_{11}-(q_S)_{00}] (q_S^2)_{10} + [(q_S^2)_{11}-(q_S^2)_{00}] (q_S)_{10} \} \nonumber \\ && \times \int_0^t dt' \int_0^{t'} dt'' {\rm Im} \{ \langle \delta {\cal F}(t') \delta {\cal G}(t'') \rangle \} [e^{i \tilde{\omega}_S t'}-e^{i \tilde{\omega}_S t''}] \end{eqnarray} where the subscripts indicate the dominant contribution: e.g., when the effects of the bath and the system anharmonicity are both weak, $r^{(2)}_{FF}(t)$ is dominated by $\langle \delta {\cal F}(t) \delta {\cal F}(0) \rangle$.
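Each double time integral above collapses to an elementary window function, e.g., $u_t(\Omega)=(1-\cos \Omega t)/\Omega^2$ and $v_t(\Omega)=(\Omega t-\sin \Omega t)/\Omega^2$ introduced earlier. These closed forms can be verified by brute-force quadrature (the values of $\Omega$, $t$, and the grid size below are arbitrary):

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (kept explicit to avoid NumPy version issues)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def double_integral(f, t, n=801):
    """Brute-force evaluation of  int_0^t dt' int_0^{t'} dt'' f(t' - t'')."""
    tp = np.linspace(0.0, t, n)
    inner = np.array([trapz(f(t1 - np.linspace(0.0, t1, n)),
                            np.linspace(0.0, t1, n)) for t1 in tp])
    return trapz(inner, tp)

Omega, t = 1.7, 3.0
u_exact = (1.0 - np.cos(Omega * t)) / Omega**2          # u_t(Omega)
v_exact = (Omega * t - np.sin(Omega * t)) / Omega**2    # v_t(Omega)
u_num = double_integral(lambda s: np.cos(Omega * s), t)
v_num = double_integral(lambda s: np.sin(Omega * s), t)
```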
After similar calculations as done for the VER formula above, we obtain \begin{eqnarray} r^{(2)}_{FF}(t) &=& \frac{2}{\hbar^2} \sum_{\alpha,\beta} \left[ (C_{--}^{\alpha \beta} +C_{++}^{\alpha \beta}) f_t (\omega_{\alpha}+\omega_{\beta}) +C_{+-}^{\alpha \beta} f_t(\omega_{\alpha}-\omega_{\beta}) + (C^{\alpha}_{-} + C^{\alpha}_{+}) f_t(\omega_{\alpha}) \right], \label{eq:FF} \\ r^{(2)}_{GG}(t) &=& \frac{1}{\hbar^2} \sum_{\alpha,\beta} \left[ (D_{--}^{R\alpha \beta}+D_{++}^{R\alpha \beta}) u_t(\omega_{\alpha}+\omega_{\beta}) + D_{+-}^{R\alpha \beta} u_t(\omega_{\alpha}-\omega_{\beta}) + (D_{-}^{R \alpha}+D_{+}^{R\alpha}) u_t(\omega_{\alpha}) \right] \nonumber \\ && - \frac{i}{\hbar^2} \sum_{\alpha,\beta} \left[ (D_{--}^{I\alpha \beta}-D_{++}^{I\alpha \beta}) v_t(\omega_{\alpha}+\omega_{\beta}) + D_{+-}^{I\alpha \beta} v_t(\omega_{\alpha}-\omega_{\beta}) +(D^{I \alpha}_{-}-D^{I \alpha}_{+}) v_t(\omega_{\alpha}) \right], \label{eq:GG} \\ r^{(2)}_{FG}(t) &=& \frac{2}{\hbar^2} \sum_{\alpha,\beta} \left[ (E_{--}^{\alpha \beta}-E_{++}^{\alpha \beta}) g_t(\omega_{\alpha}+\omega_{\beta}) +E_{+-}^{\alpha \beta} g_t(\omega_{\alpha}-\omega_{\beta}) +(E^{\alpha}_{-}-E^{\alpha}_{+}) g_t(\omega_{\alpha}) \right] \label{eq:FG} \end{eqnarray} where \begin{eqnarray} f_t(\Omega) &=& \int_0^t dt' \int_0^{t'} dt'' \cos \Omega (t'-t'') [e^{i \tilde{\omega}_S(t'-t'')} -e^{i \tilde{\omega}_S(t'+t'')}] \nonumber \\ &=& \frac{1}{2} \left[ \frac{1}{\tilde{\omega}_S-\Omega}+\frac{1}{\tilde{\omega}_S+\Omega} \right] \left\{ \frac{1-e^{i(\tilde{\omega}_S+\Omega)t}}{\tilde{\omega}_S+\Omega} +\frac{1-e^{i(\tilde{\omega}_S-\Omega)t}}{\tilde{\omega}_S-\Omega} +\frac{2i \tilde{\omega}_S t -1+e^{2 i\tilde{\omega}_S t}}{2\tilde{\omega}_S} \right\}, \\ g_t(\Omega) &=& i \int_0^t dt' \int_0^{t'} dt'' \sin \Omega (t'-t'') [e^{i \tilde{\omega}_S t'} -e^{i \tilde{\omega}_S t''}] \nonumber \\ &=& \frac{ \tilde{\omega}_S (1+e^{i \tilde{\omega}_S t}) (1-\cos \Omega t ) -i \Omega (1-e^{i \tilde{\omega}_S t}) 
\sin \Omega t } {\Omega (\tilde{\omega}_S^2-\Omega^2)}, \end{eqnarray} and the coefficients are defined in Appendix \ref{app:coef}. Equation (\ref{eq:dephasing}) and Eqs.~(\ref{eq:FF})-(\ref{eq:FG}) constitute the dephasing formula. Dephasing properties are characterized by the decaying behavior of this off-diagonal density-matrix element. Incidentally, as an alternative approach, one might use the von Neumann entropy or linear entropy of the reduced system as an indicator of dephasing \cite{FMT03}. \subsection{Frequency autocorrelation function} Using time-independent perturbation theory for the interaction, we obtain the fluctuation of the system frequency as \begin{equation} \delta \omega(t)= \frac{{\cal V}_{11}(t)-{\cal V}_{00}(t)}{\hbar} =-\frac{(q_S)_{11}-(q_S)_{00}}{\hbar} \delta {\cal F}(t) +\frac{(q_S^2)_{11}-(q_S^2)_{00}}{\hbar} \delta {\cal G}(t). \end{equation} Hence we have \begin{eqnarray} {\rm Re}\langle \delta \omega(t')\delta \omega(t'') \rangle &=& \frac{1}{\hbar^2} \left( [(q_S)_{11}-(q_S)_{00}]^2 {\rm Re} \langle \delta {\cal F}(t')\delta {\cal F}(t'') \rangle +[(q_S^2)_{11}-(q_S^2)_{00}]^2 {\rm Re} \langle \delta {\cal G}(t')\delta {\cal G}(t'') \rangle \right. \nonumber \\ && \left. -2 [(q_S)_{11}-(q_S)_{00}] [(q_S^2)_{11}-(q_S^2)_{00}] {\rm Re} \langle \delta {\cal F}(t')\delta {\cal G}(t'') \rangle \right). \end{eqnarray} This turns out to be the second derivative of ${\rm Re} \{ r^{(2)}_{GG}(t) \}$, i.e., \begin{equation} C(t) \equiv {\rm Re}\langle \delta \omega(t)\delta \omega(0) \rangle = \frac{d^2}{dt^2} {\rm Re} \{ r^{(2)}_{GG}(t) \}. \end{equation} From this correlation function, we can define a {\it pure dephasing time} $T_2^*$ as \begin{eqnarray} \frac{1}{T_2^*} = \int_0^{\infty} C(t) dt. \end{eqnarray} Note that this is different from a correlation time defined by \begin{eqnarray} \tau_c = \frac{1}{C(0)} \int_0^{\infty} C(t) dt \end{eqnarray} which leads to the relation $1/T_2^*=C(0) \tau_c$.
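For a concrete, purely illustrative example, take an exponential model $C(t)=\Delta^2 e^{-t/\tau}$; then $1/T_2^*=\Delta^2 \tau$ and $\tau_c=\tau$, consistent with $1/T_2^*=C(0)\tau_c$. A numerical sketch (the values of $\Delta$ and $\tau$ are hypothetical, not taken from the simulations):

```python
import numpy as np

Delta, tau = 2.0, 0.5     # illustrative amplitude (rad/ps) and decay time (ps)
t = np.linspace(0.0, 40.0 * tau, 40001)
C = Delta**2 * np.exp(-t / tau)               # model frequency autocorrelation

# 1/T2* = int_0^inf C(t) dt  (trapezoidal rule; the tail beyond 40*tau is tiny)
integral = float(np.sum((C[1:] + C[:-1]) * np.diff(t)) / 2.0)
inv_T2_star = integral
tau_c = integral / C[0]                       # tau_c = (1/C(0)) int C(t) dt
# For this model: inv_T2_star ~ Delta^2 * tau and tau_c ~ tau.
```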
Note that, for fixed $C(0)$, $T_2^*$ and $\tau_c$ are inversely related. \section{Application: NMA in heavy water} \label{sec:NMA} \subsection{NMA-D in heavy water} We now apply our formulas to the VER and dephasing problems of $N$-methylacetamide (NMA) in heavy water. In many theoretical and experimental studies, this molecule, CH$_3$-NH-CO-CH$_3$, is taken to be a model ``minimal'' peptide system because it contains a peptide bond (-NH-CO-). For example, Gerber and coworkers calculated the vibrational frequencies for this molecule using the vibrational self-consistent field (VSCF) method \cite{GCG02}. Nguyen and Stock characterized VER in this molecule using a quasi-classical method \cite{NS03}. Skinner and coworkers investigated the dephasing properties of the amide I mode using their correlation method combined with ab initio (DFT) calculations \cite{SCS04}. Employing 2D-IR spectroscopy, Zanni {\it et al}.~\cite{ZAH01} measured $T_1$ and $T_2^*$ for the amide I mode in this molecule, which were reported to be $T_1 \simeq 0.45$ ps and $T_2^* \simeq 1.12$ ps, whereas Woutersen {\it et al}. \cite{WPHMKS02} obtained $T_2^* \simeq 0.8$ ps. In our study, we deuterate the system to NMA-D/D$_2$O so that the amide I mode, localized around the CO bond, can be clearly recognized as a single peak in the spectrum. In the following numerical calculations based on the CHARMM force field \cite{CHARMM}, its frequency is $\sim$ 1690 cm$^{-1}$, which fluctuates depending on the structure (see Fig.~\ref{fig:shift}), whereas the experimental and DFT values are 1717 cm$^{-1}$ and 1738 cm$^{-1}$, respectively \cite{SCS04}. \subsection{Procedure} We have applied the following general procedure. (1) Run an equilibrium simulation. (2) Sample several trajectories during the run. (3) Delete atoms of each configuration except the ``active'' region around the system oscillator. We introduce a cutoff radius $R_c$ around a certain atom within the active site.
(4) Calculate instantaneous normal modes (INMs) \cite{Keyes} for each ``reduced'' configuration, ignoring all the imaginary frequencies \cite{MO04}. (5) Calculate anharmonic coupling elements with the finite difference method \cite{FBS05a} using the obtained INMs. (6) Insert the results in the VER formula [Eq.~(\ref{eq:VER})] and dephasing formula [Eq.~(\ref{eq:dephasing}) with Eqs.~(\ref{eq:FF})-(\ref{eq:FG})]. (7) Ensemble average the resultant density matrix, and estimate the VER and dephasing rates (times), if possible. This procedure seems to be straightforward. However, when applied to real systems like peptides or proteins, we need to treat the effects of the bath carefully. In Fig.~\ref{fig:shift}, we plot the system frequency ${\omega}_S$ for 100 sample trajectories from an equilibrium run at 300K. We see that the amide I mode frequency changes depending on the structure; the deviation can amount to 1\%. \begin{figure}[htbp] \hfill \begin{center} \includegraphics[scale=1.2]{shift4.eps} \end{center} \caption{ Instantaneous normal mode frequency of the system ${\omega}_S$ for 100 different sample trajectories at 300K, where the cutoff radius is $R_c= 10$ \AA. } \label{fig:shift} \end{figure} {\it Furthermore}, the frequency is renormalized according to Eqs.~(\ref{eq:omegas}) and (\ref{eq:freq}), and this effect can be anomalously large if we include all the contributions from low-frequency components. Hence we need to introduce a cutoff frequency $\omega_c$, below which the contributions are neglected. This is physically sound: we are dealing with time-dependent phenomena, and such low-frequency components correspond to longer-time behavior, whereas we are interested here in rather short-time dynamics, so such contributions should not play a role. In fact, the final result for VER does not depend much on the choice of $\omega_c$, whereas that for dephasing does. We admit that, for now, this is only a remedy.
We discuss how to improve this situation later. \subsection{VER properties of NMA-D} First we consider the VER properties of the amide I mode as shown in Fig.~\ref{fig:VER1}. We use the following relation \begin{equation} \rho_{11}(t)=1 - \rho_{00}(t)=\exp[-s(t)] \end{equation} and hypothesize that $s(t) \simeq \rho_{00}(t)$, which is certainly valid when $\rho_{00}(t) \ll 1$, and might be justified using the cumulant expansion technique \cite{Breuer,IMM03}. We calculated the density matrix for the following three cases: (a) NMA-D in heavy water with the CHARMM force field at 300K, (b) NMA-D in vacuum with the CHARMM force field at 0K, and (c) NMA-D in vacuum with the DFT force field at 0K. Here we have used $R_c=10$ \AA \, and $\omega_c=10$ cm$^{-1}$ for case (a). The result for VER does not depend sensitively on these parameters. For cases (b) and (c), we must take special care: low-frequency components are known to cause serious problems for vibrational frequency calculations \cite{Yagi}, so they must be eliminated. In this work, we exclude normal modes whose frequencies are less than 300 cm$^{-1}$. See Table \ref{table:NMAfreq}. \begin{table}[htbp] \caption{ Normal mode frequencies (in cm$^{-1}$) for ab initio (left) and CHARMM (right) NMA. The level of the ab initio calculation is B3LYP/6-31+G(d).
} \hfill \begin{center} \begin{tabular}{c|c|c} \hline Mode index $\alpha$ & $\omega_{\alpha}$ (ab initio) & $\omega_{\alpha}$ (CHARMM) \\ \hline \hline 1 & 31.5 & 64.1 \\ \hline 2 & 71.6 & 88.9 \\ \hline 3 & 170.2 & 192.3 \\ \hline 4 & 259.6 & 269.5 \\ \hline 5 & 282.3 & 426.7 \\ \hline 6 & 421.9 & 536.3 \\ \hline 7 & 619.1 & 575.9 \\ \hline 8 & 619.9 & 741.9 \\ \hline 9 & 868.7 & 757.0 \\ \hline 10 & 946.1 & 854.4 \\ \hline 11 & 1012.4 & 964.3 \\ \hline 12 & 1066.1 & 1055.9 \\ \hline 13 & 1144.5 & 1075.7 \\ \hline 14 & 1158.9 & 1088.5 \\ \hline 15 & 1207.6 & 1123.5 \\ \hline \end{tabular} \hspace{1cm} \begin{tabular}{c|c|c} \hline Mode index $\alpha$ & $\omega_{\alpha}$ (ab initio) & $\omega_{\alpha}$ (CHARMM) \\ \hline \hline 16 & 1417.9 & 1380.2 \\ \hline 17 & 1436.1 & 1408.7 \\ \hline 18 & 1483.6 & 1415.4 \\ \hline 19 & 1495.1 & 1418.5 \\ \hline 20 & 1499.4 & 1425.7 \\ \hline 21 & 1516.7 & 1444.7 \\ \hline 22 & 1535.9 & 1563.1 \\ \hline 23 & 1745.9 & 1678.1 \\ \hline 24 & 2671.1 & 2445.0 \\ \hline 25 & 3058.5 & 2852.8 \\ \hline 26 & 3058.9 & 2914.3 \\ \hline 27 & 3116.5 & 2914.8 \\ \hline 28 & 3130.9 & 2917.2 \\ \hline 29 & 3135.9 & 2975.3 \\ \hline 30 & 3148.9 & 2975.5 \\ \hline \end{tabular} \end{center} \label{table:NMAfreq} \end{table} By using a fitting form $s(t)=t/T_1$, where $T_1$ is the VER time, we estimate that $T_1 \simeq 0.5$ ps at 300K and 0.6 ps at 0K from the initial decay. The former estimate is rather similar to the experimental value $T_1 \simeq 0.45$ ps \cite{ZAH01}, whereas Nguyen-Stock's quasi-classical estimate is $T_1 \simeq 1.5$ ps \cite{NS03}. Considering that the estimate at 300K is rather close to that at 0K, we can conclude that quantum effects are important to describe VER for the amide I mode of NMA-D in heavy water. However, the decay at the later stage becomes very slow at 0K as expected because there is no environment. 
(In the vacuum cases, we only use one minimized structure, thus there is no ensemble average, and the oscillatory behavior remains.) The results are similar for NMA-D in vacuum with different force fields. It is known that NMA with CHARMM force field is not well characterized around the methyl groups \cite{GK92}, but this fact does not affect the VER properties of the amide I mode. We have analyzed the mechanism of VER in terms of the VER pathway. In Table \ref{table:pathway}, we show several mode combinations that contribute most to $s(t)$ for NMA-D in heavy water at 300K. These eigenvectors (normal modes) are well localized around NMA (Fig.~\ref{fig:norm}), especially on the CO bond (Table \ref{table:norm}). There is very little contribution from the surrounding water. (This is expected from the previous result of Kidera and coworkers \cite{Kidera}.) Similar ``resonant'' mode combinations can be found in the isolated NMA-D cases. See Tables \ref{table:pathway2} and \ref{table:pathway3}. This means that the initial stage of VER of NMA-D in heavy water is dominated by intramolecular vibrational redistribution (IVR) localized near the peptide bond. This result might explain why the amide I mode, in many peptide systems with differing environments, appears to have similar VER times \cite{MKFKAZ04}. Note that this is the case for a {\it localized mode} such as the deuterated amide I mode. A {\it collective mode} can decay with a different VER pathway, as shown by Austin's group \cite{XMHA00}. \begin{figure}[htbp] \hfill \begin{center} \includegraphics[scale=1.2]{VER_all.eps} \end{center} \caption{ Time evolution of the excited density matrix for NMA-D in vacuum (DFT and CHARMM) at 0K, and for NMA-D in heavy water (CHARMM) at 300K. The level of DFT is B3LYP/6-31+G(d). } \label{fig:VER1} \end{figure} \begin{table}[htbp] \caption{ The most dominant VER pathways for the amide I mode of NMA-D in heavy water. 
$\Delta \omega$ is defined by $|\omega_S-\omega_{\alpha}-\omega_{\beta}|$. } \hfill \begin{center} \begin{tabular}{c|c|c|c} \hline \hline Mode combination ($\alpha, \beta$) & frequency (cm$^{-1}$) & Contribution to $s(t)$ & $\Delta \omega$ (cm$^{-1}$) \\ \hline \hline 1143 + 1143 & 778.6 + 778.6 & 0.04 & 125.7 \\ \hline 1147 + 1134 & 1085.3 + 570.0 & 0.04 & 27.6 \\ \hline 1147 + 1135 & 1085.3 + 570.9 & 0.01 & 26.6 \\ \hline 1147 + 1136 & 1085.3 + 578.8 & 0.02 & 18.8 \\ \hline 1147 + 1137 & 1085.3 + 581.3 & 0.03 & 16.2 \\ \hline 1147 + 1140 & 1085.3 + 612.1 & 0.46 & 14.5 \\ \hline 1148 + 1132 & 1127.8 + 558.5 & 0.01 & 3.4 \\ \hline 1148 + 1134 & 1127.8 + 570.0 & 0.11 & 14.9 \\ \hline 1148 + 1135 & 1127.8 + 570.9 & 0.03 & 15.9 \\ \hline \end{tabular} \end{center} \label{table:pathway} \end{table} \begin{figure}[htbp] \hfill \begin{center} \includegraphics[scale=1.2]{norm3.eps} \end{center} \caption{ Norm of the eigenvectors (normal modes) with the exception of the contribution from NMA-D, which is defined by $\sum_{i \in {\rm Water}} (x_i^2 +y_i^2+z_i^2)$, where $i$ comes from water degrees of freedom alone. } \label{fig:norm} \end{figure} \begin{table}[htbp] \caption{ The most localized modes around the CO bond in NMA-D. The norm is defined by $\sum_{i \in {\rm CO bond}} (x_i^2 +y_i^2+z_i^2)$. The 1345th mode is the amide I mode. } \hfill \begin{center} \begin{tabular}{c|c|c} \hline \hline Mode index $\alpha$ & frequency (cm$^{-1}$) & Contribution to norm \\ \hline \hline 1134 & 570.0 & 0.14 \\ \hline 1140 & 612.1 & 0.26 \\ \hline 1142 & 771.2 & 0.35 \\ \hline 1143 & 778.6 & 0.41 \\ \hline 1146 & 1013.6 & 0.13 \\ \hline 1147 & 1085.3 & 0.26 \\ \hline 1148 & 1127.8 & 0.17 \\ \hline 1340 & 1452.1 & 0.36 \\ \hline 1345 & 1689.6 & 0.92 \\ \hline \end{tabular} \end{center} \label{table:norm} \end{table} \begin{table}[htbp] \caption{ The most dominant VER pathways for NMA-D in vacuum with the ab initio potential (B3LYP/6-31+G(d)). 
} \hfill \begin{center} \begin{tabular}{c|c|c|c} \hline Mode combination ($\alpha, \beta$) & frequency (cm$^{-1}$) & Contribution to $s(t)$ & $\Delta \omega$ (cm$^{-1}$) \\ \hline \hline 9 + 9 & 868.7 + 868.7 & 0.13 & 2.7 \\ \hline 13 + 8 & 1144.5 + 620.0 & 0.02 & 29.8 \\ \hline \end{tabular} \end{center} \label{table:pathway2} \end{table} \begin{table}[htbp] \caption{ The most dominant VER pathways for NMA-D in vacuum with the CHARMM force field. } \hfill \begin{center} \begin{tabular}{c|c|c|c} \hline Mode combination ($\alpha, \beta$) & frequency (cm$^{-1}$) & Contribution to $s(t)$ & $\Delta \omega$ (cm$^{-1}$) \\ \hline \hline 8 + 8 & 741.9 + 741.9 & 0.02 & 194.1 \\ \hline 14 + 6 & 1088.5 + 536.3 & 0.08 & 53.1 \\ \hline 14 + 8 & 1088.5 + 741.9 & 0.02 & 152.5 \\ \hline 15 + 7 & 1123.5 + 575.9 & 0.08 & 21.6 \\ \hline \end{tabular} \end{center} \label{table:pathway3} \end{table} \subsection{Dephasing properties of NMA-D} We now consider the dephasing properties of the amide I mode. The off-diagonal density matrix is written as \begin{equation} \rho_{10}(t) =\frac{1}{2} e^{- i \tilde{\omega}_S t} [1 - r^{(2)}_{FF}(t)-r^{(2)}_{GG}(t)-r^{(2)}_{FG}(t)] \simeq \frac{1}{2} e^{- i \tilde{\omega}_S t - r^{(2)}_{FF}(t)-r^{(2)}_{GG}(t)-r^{(2)}_{FG}(t)} \end{equation} and we analyze each contribution to the density matrix separately. In Fig.~\ref{fig:deph}, we show the result with $R_c=10$ \AA \, and $\omega_c=10$ cm$^{-1}$. We can see that the following relation holds \begin{equation} {\rm Re} \{ r^{(2)}_{FF}(t)\} \simeq s(t)/2 \simeq t/(2 T_1). \end{equation} If we {\it further} assume that ${\rm Re} \{ r^{(2)}_{GG}(t) \}\simeq t/T_2^*$ and ${\rm Re} \{ r^{(2)}_{FG}(t) \} \simeq 0$, we have \begin{equation} \frac{1}{T_2}=\frac{1}{2T_1}+\frac{1}{T_2^*}. \label{eq:T2} \end{equation} This is a standard expression connecting $T_1$ and $T_2$ \cite{Mchale}, and holds under the Markov approximation.
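As a quick consistency check, inserting the experimental amide I values quoted above ($T_1 \simeq 0.45$ ps and $T_2^* \simeq 1.12$ ps \cite{ZAH01}) into Eq.~(\ref{eq:T2}) gives $T_2 \simeq 0.5$ ps:

```python
T1 = 0.45       # ps, experimental VER time (Zanni et al.)
T2_star = 1.12  # ps, experimental pure dephasing time (Zanni et al.)

# 1/T2 = 1/(2 T1) + 1/T2*  (valid under the Markov approximation)
T2 = 1.0 / (1.0 / (2.0 * T1) + 1.0 / T2_star)
# T2 comes out to about 0.50 ps for the amide I mode.
```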
We can see that ${\rm Re} \{ r^{(2)}_{FG}(t) \}\simeq 0$ holds, but it is difficult to judge whether ${\rm Re} \{ r^{(2)}_{GG}(t) \} \simeq t/T_2^*$ holds or not. \begin{figure}[htbp] \hfill \begin{center} \includegraphics[scale=1.2]{DEPH_fq10.eps} \end{center} \caption{ Dephasing properties of the amide I mode of NMA-D in heavy water. } \label{fig:deph} \end{figure} \begin{figure}[htbp] \hfill \begin{center} \begin{minipage}{.42\linewidth} \includegraphics[scale=1.0]{VER_comp.eps} \end{minipage} \hspace{1cm} \begin{minipage}{.42\linewidth} \includegraphics[scale=1.0]{corr_comp.eps} \end{minipage} \end{center} \caption{ VER (left) and frequency autocorrelation (right) calculations at 300K with different cutoff frequencies $\omega_c$. } \label{fig:comp} \end{figure} There are more serious problems: as mentioned by Mikami and Okazaki \cite{MO04}, the diagonal terms contribute most to dephasing, i.e., the second term in Eq.~(\ref{eq:GG}) is the dominant contribution. Furthermore, the coefficients themselves are the dominant factors, which means that the low-frequency (and thus delocalized) modes contribute most. In this paper, we have employed two cutoff parameters: $R_c$ and $\omega_c$. The results are insensitive to $R_c$ once it is large enough, but the choice of $\omega_c$ remains rather arbitrary. Figure \ref{fig:comp} shows the dependence of the results on $\omega_c$. The VER results do not depend on the choice of $\omega_c$ because a resonance condition must be met, but the dephasing results do. We need to be cautious in the interpretation of our results for dephasing. One way to get rid of this problem is to go back to the original expression Eq.~(\ref{eq:gg}) using the force and force-constant autocorrelation functions $\langle \delta {\cal F}(t) \delta {\cal F}(0) \rangle$ and $\langle \delta {\cal G}(t) \delta {\cal G}(0) \rangle$.
Here $r^{(2)}_{GG}(t)$ is calculated as a time integral of these correlation functions, which can be evaluated using classical mechanics. This is in the same spirit as the quantum correction factor method \cite{SP01}, which is an approximation to quantum effects. In this case, we only need to consider the zero-frequency component, so classical mechanics should work well and quantum effects should be less important. \subsection{Discussion} We found that there are several resonant modes in NMA-D, which form the main VER pathways {\it within} the molecule. Gerber and coworkers reported that the amide I mode in NMA is very weakly coupled to other modes \cite{GCG02}. We expect that this discrepancy results from (a) the use of only pair interactions between normal modes to reduce the computational cost, (b) the level of the ab initio method: they used MP2/DZP whereas we used B3LYP/6-31+G(d), and (c) the criterion of the mode-mode coupling: their criterion is not directly related to VER. It is important and interesting to clarify the nature of VER in the amide I mode in more detail. Note the importance of the system anharmonicity. The effect of the system anharmonicity defined in Eq.~(\ref{eq:anhar}) is very weak for the CHARMM case ($\varepsilon=10^{-5}$), but not for the ab initio case ($\varepsilon=10^{-2}$). According to Eq.~(\ref{eq:freq}), this anharmonicity shifts the system frequency by 0.6\%, which amounts to 10 cm$^{-1}$, so the resonance condition changes compared to the case without anharmonicity. Of course, dephasing is also affected by this amount of anharmonicity. To address these issues, we must develop QM/MM type methods, which will be described elsewhere. Another interesting setting in which to investigate anharmonicity is a highly excited bond, such as the CO bond in myoglobin \cite{VFVAMJ04}. It is important to assess our strategy: the perturbative expansion and the cutoff procedure are both approximations.
It would be profitable and interesting to compare this strategy with others, including the time-dependent vibrational self-consistent field methods \cite{JG99,FYZSH}, the semiclassical method \cite{Geva}, and the path integral method \cite{Krilov}. The application of our strategy to protein systems, including cytochrome c, will be described elsewhere \cite{Matt}. \section{Summary} \label{sec:summary} In this paper, we have derived formulas for VER and dephasing for an anharmonic (cubic) oscillator coupled to a harmonic bath through third- and fourth-order coupling elements. We employed time-dependent perturbation theory and did not take the infinite-time limit as is done in the derivation of the Maradudin-Fein formula. Hence our formulas do not assume Markovian dynamics of the system, and can describe the short-time behavior that can be important for the VER and dephasing properties of localized modes in peptides or proteins. Our final results are the VER formula [Eq.~(\ref{eq:VER})] and the dephasing formula [Eq.~(\ref{eq:dephasing}) with Eqs.~(\ref{eq:FF})-(\ref{eq:FG})]. As a test case, we have studied the amide I mode of $N$-methylacetamide in heavy water. We found that the VER time is 0.5 ps at 300K, in good accord with the experimental value, and clarified that the VER mechanism is mainly localized around the peptide bond in NMA-D; VER is dominated by IVR within the molecule. We also investigated the dephasing properties of the amide I mode, and met with some problems. We proposed a new method to overcome these problems using classical correlation function calculations. \acknowledgments We thank S.~Okazaki, T.~Mikami, K.~Yagi, T.~Miyadera, A.~Szabo, E.~Geva, G.~Krilov, H.-P.~Breuer, S.~Maniscalco, F.~Romesberg and M.~Cremeens for useful discussions. We also thank the National Science Foundation (CHE-036551) and Boston University's Center for Computational Science for generous support of our research.
\begin{appendix} \section{System parameters for a cubic oscillator} \label{app:system} We assume that the system-bath interaction can be Taylor expanded using the bath coordinates $q_{\alpha}$, and that the fluctuating force and the fluctuating force constant can be expressed as \begin{eqnarray} \delta{\cal F} &=& \sum_{\alpha,\beta}C_{S \alpha \beta} (q_{\alpha} q_{\beta} -\langle q_{\alpha} q_{\beta} \rangle), \\ \delta{\cal G} &=& \sum_{\alpha,\beta}C_{SS \alpha \beta} (q_{\alpha} q_{\beta} -\langle q_{\alpha} q_{\beta} \rangle) +\sum_{\alpha}C_{SS \alpha} q_{\alpha}. \end{eqnarray} In real molecular systems such as peptides or proteins, the coefficients in ${\cal V}$ and the anharmonicity parameter in ${\cal H}_f$ are calculated as \begin{eqnarray} C_{S \alpha \beta} &=& -\frac{1}{2} \frac{\partial^3 V}{\partial q_S \partial q_{\alpha} \partial q_{\beta}}, \\ C_{SS \alpha} &=& \frac{1}{2} \frac{\partial^3 V}{\partial q_S^2 \partial q_{\alpha}}, \\ C_{SS \alpha \beta} &=& \frac{1}{4} \frac{\partial^4 V}{\partial q_S^2 \partial q_{\alpha} \partial q_{\beta}}, \\ f &=& \frac{\partial^3 V}{\partial q_S^3} \end{eqnarray} where $V$ represents a potential function for the system considered. This potential function can be an empirical force field (CHARMM, Amber) or an ab initio potential calculated at any level of theory. Assuming that the cubic anharmonicity $f$ in the system is small, we use time-independent perturbation theory to calculate the eigenenergies and eigenvectors.
We quote from J.J.~Sakurai \cite{JJ}: \begin{eqnarray} E_n &=& E^{(0)}_n+ V_{nn}+ \sum_{k \neq n} \frac{|V_{nk}|^2}{E^{(0)}_n-E^{(0)}_k}, \\ |n \rangle &=& |n^{(0)} \rangle + \sum_{k \neq n} |k^{(0)} \rangle \frac{V_{kn}}{E^{(0)}_n-E^{(0)}_k} \nonumber \\ && + \left( \sum_{k \neq n} \sum_{l \neq n} |k^{(0)} \rangle \frac{V_{kl}V_{ln}} {(E^{(0)}_n-E^{(0)}_k)(E^{(0)}_n-E^{(0)}_l)} -\sum_{k \neq n} |k^{(0)} \rangle \frac{V_{nn}V_{kn}} {(E^{(0)}_n-E^{(0)}_k)^2} \right) \end{eqnarray} where $E^{(0)}_n=\hbar \bar{\omega}_S (n+1/2)$, $|k^{(0)} \rangle$ is the $k$-th eigenfunction of the harmonic oscillator, and \begin{eqnarray} V_{kn} &=& \frac{f}{6} \langle k^{(0)}| \bar{q}_S^3 |n^{(0)} \rangle \nonumber \\ &=& \hbar \bar{\omega}_S \varepsilon \left[ \sqrt{n(n-1)(n-2)} \delta_{k,n-3} +3n \sqrt{n} \delta_{k,n-1} \right. \nonumber \\ && \left. +3(n+1) \sqrt{n+1} \delta_{k,n+1} +\sqrt{(n+1)(n+2)(n+3)} \delta_{k,n+3} \right] \end{eqnarray} where \begin{equation} \varepsilon=\frac{1}{\hbar \bar{\omega}_S} \frac{f}{6} \left( \frac{\hbar}{2 \bar{\omega}_S} \right)^{3/2} \label{eq:anhar} \end{equation} is a dimensionless parameter representing the strength of the anharmonicity of the system. Note that $V_{kn}$ becomes nonzero only when $|k-n|=1$ or $|k-n|=3$. We explicitly have \begin{eqnarray} E_0 &=& \frac{\hbar \bar{\omega}_S}{2}- \frac{|V_{01}|^2}{\hbar \bar{\omega}_S} -\frac{|V_{03}|^2}{3 \hbar \bar{\omega}_S} = \frac{\hbar \bar{\omega}_S}{2}(1-22 \varepsilon^2), \\ E_1 &=& \frac{3 \hbar \bar{\omega}_S}{2}+ \frac{|V_{10}|^2}{\hbar \bar{\omega}_S} -\frac{|V_{12}|^2}{\hbar \bar{\omega}_S} -\frac{|V_{14}|^2}{3 \hbar \bar{\omega}_S} = \frac{\hbar \bar{\omega}_S}{2}(3-142 \varepsilon^2). \end{eqnarray} The anharmonicity-corrected frequency is \begin{equation} \tilde{\omega}_S=\frac{E_1-E_0}{\hbar} = \bar{\omega}_S (1-60 \varepsilon^2). \label{eq:freq} \end{equation} Next we calculate the matrix elements for ${q}_S$ and ${q}_S^2$. 
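Before moving on, a quick consistency check on the energies just quoted (this check is our addition, not part of the original derivation). From the $V_{kn}$ formula, the only nonzero couplings out of the ground state are $V_{01}=3\hbar \bar{\omega}_S \varepsilon$ and $V_{03}=\sqrt{6}\,\hbar \bar{\omega}_S \varepsilon$, so

```latex
\begin{eqnarray}
E_0 &=& \frac{\hbar \bar{\omega}_S}{2}
 - \frac{(3 \hbar \bar{\omega}_S \varepsilon)^2}{\hbar \bar{\omega}_S}
 - \frac{(\sqrt{6}\, \hbar \bar{\omega}_S \varepsilon)^2}{3 \hbar \bar{\omega}_S} \nonumber \\
 &=& \frac{\hbar \bar{\omega}_S}{2} - (9+2)\, \hbar \bar{\omega}_S \varepsilon^2
 = \frac{\hbar \bar{\omega}_S}{2} (1-22 \varepsilon^2),
\end{eqnarray}
```

and the same bookkeeping with $V_{10}=3\hbar \bar{\omega}_S \varepsilon$, $V_{12}=6\sqrt{2}\,\hbar \bar{\omega}_S \varepsilon$ and $V_{14}=2\sqrt{6}\,\hbar \bar{\omega}_S \varepsilon$ gives $E_1=\frac{\hbar \bar{\omega}_S}{2}(3-142\varepsilon^2)$, confirming $\tilde{\omega}_S=\bar{\omega}_S(1-60\varepsilon^2)$.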
We write the eigenfunctions: \begin{eqnarray} |0 \rangle &=& |0^{(0)} \rangle + \sum_{k=1,3} |k^{(0)} \rangle \frac{V_{k0}}{E^{(0)}_0-E^{(0)}_k} + \sum_{k,l \in S_0} |k^{(0)} \rangle \frac{V_{kl}V_{l0}} {(E^{(0)}_0-E^{(0)}_k)(E^{(0)}_0-E^{(0)}_l)}, \\ |1 \rangle &=& |1^{(0)} \rangle + \sum_{k=0,2,4} |k^{(0)} \rangle \frac{V_{k1}}{E^{(0)}_1-E^{(0)}_k} + \sum_{k,l \in S_1} |k^{(0)} \rangle \frac{V_{kl}V_{l1}} {(E^{(0)}_1-E^{(0)}_k)(E^{(0)}_1-E^{(0)}_l)} \end{eqnarray} where $S_0$ represents $(l=1, k=2)$ or $(l=1, k=4)$ or $(l=3, k=2)$ or $(l=3, k=4)$ or $(l=3, k=6)$, and $S_1$ represents $(l=0, k=3)$ or $(l=2, k=3)$ or $(l=2, k=5)$ or $(l=4, k=3)$ or $(l=4, k=5)$ or $(l=4, k=7)$. Note that these eigenvectors are not normalized, so we need to renormalize them before or after calculations. After some lengthy but straightforward calculations, we have \begin{eqnarray} (q_S)_{10} &=& \langle 1| \bar{q}_S+b |0 \rangle =(q_S)_{01} =a(1+22 \varepsilon^2), \label{eq:qs1} \\ (q_S)_{00} &=& \langle 0| \bar{q}_S+b |0 \rangle =b-6 a \varepsilon, \\ (q_S)_{11} &=& \langle 1| \bar{q}_S+b |1 \rangle =b- 18 a \varepsilon, \\ (q^2_S)_{10} &=& \langle 1| (\bar{q}_S+b)^2 |0 \rangle =(q^2_S)_{01} =2 ab-20 a^2 \varepsilon +44 a b \varepsilon^2, \\ (q^2_S)_{00} &=& \langle 0| (\bar{q}_S+b)^2 |0 \rangle =a^2+b^2 -12 a b \varepsilon + 88 a^2 \varepsilon^2, \\ (q^2_S)_{11} &=& \langle 1| (\bar{q}_S+b)^2 |1 \rangle =3 a^2+b^2 -36 ab \varepsilon +568 a^2 \varepsilon^2 \label{eq:qs2} \end{eqnarray} where \begin{eqnarray} a = \sqrt{\frac{\hbar}{2 \bar{\omega}_S}} \end{eqnarray} is the fundamental length characterizing the system oscillator. 
\section{The coefficients used in the formulas} \label{app:coef} Using the expression derived previously for the force-force correlation function \cite{FBS05a}, the coefficients in our VER and dephasing formulas are expressed as \begin{eqnarray} \mathbf{C}^{\alpha \beta} &=& \left( \begin{array}{cc} C^{\alpha \beta}_{--} & C^{\alpha \beta}_{+-} \\ C^{\alpha \beta}_{+-} & C^{\alpha \beta}_{++} \\ \end{array} \right) = \left \{ (q_S)_{10} C_{S \alpha \beta} -(q^2_S)_{10} C_{SS \alpha \beta} \right \}^2 \mathbf{S}^{\alpha \beta}, \\ \mathbf{D}^{R \alpha \beta} &=& \left( \begin{array}{cc} D^{R \alpha \beta}_{--} & D^{R\alpha \beta}_{+-} \\ D^{R \alpha \beta}_{+-} & D^{R \alpha \beta}_{++} \\ \end{array} \right) \nonumber \\ &=& \left \{ [(q_S)_{11}-(q_S)_{00}] C_{S \alpha \beta} -[(q^2_S)_{11}-(q^2_S)_{00}] C_{SS \alpha \beta} \right \}^2 \mathbf{S}^{\alpha \beta}, \\ \mathbf{D}^{I \alpha \beta} &=& \left( \begin{array}{cc} D^{I \alpha \beta}_{--} & D^{I \alpha \beta}_{+-} \\ D^{I \alpha \beta}_{+-} & D^{I \alpha \beta}_{++} \\ \end{array} \right) \nonumber \\ &=& \left \{ [(q_S)_{11}-(q_S)_{00}] C_{S \alpha \beta} -[(q^2_S)_{11}-(q^2_S)_{00}] C_{SS \alpha \beta} \right \} \nonumber \\ && \times \left \{ [(q_S)_{11}+(q_S)_{00}] C_{S \alpha \beta} -[(q^2_S)_{11}+(q^2_S)_{00}] C_{SS \alpha \beta} \right \} \mathbf{S}^{\alpha \beta}, \\ \mathbf{E}^{\alpha \beta} &=& \left( \begin{array}{cc} E^{\alpha \beta}_{--} & E^{\alpha \beta}_{+-} \\ E^{\alpha \beta}_{+-} & E^{\alpha \beta}_{++} \\ \end{array} \right) \nonumber \\ &=& \left \{ [(q_S)_{11}-(q_S)_{00}] C_{S \alpha \beta} -[(q^2_S)_{11}-(q^2_S)_{00}] C_{SS \alpha \beta} \right \} \nonumber \\ && \times \left \{ (q_S)_{10} C_{S \alpha \beta} -(q^2_S)_{10} C_{SS \alpha \beta} \right \} \mathbf{S}^{\alpha \beta}, \\ \mathbf{S}^{\alpha \beta} &=& \frac{\hbar^2}{2 \omega_{\alpha} \omega_{\beta}} \left( \begin{array}{cc} (1+n_{\alpha})(1+n_{\beta}) & 2 (1+n_{\alpha})n_{\beta} \\ 2 (1+n_{\alpha}) n_{\beta} & n_{\alpha} 
n_{\beta} \\ \end{array} \right), \\ \mathbf{C}^{\alpha} &=& \left( \begin{array}{c} C^{\alpha}_{-} \\ C^{\alpha}_{+} \\ \end{array} \right) = (q_S^2)_{10}^2 C_{SS \alpha}^2 \mathbf{R}^{\alpha}, \\ \mathbf{D}^{R\alpha} &=& \left( \begin{array}{c} D^{R\alpha}_{-} \\ D^{R\alpha}_{+} \\ \end{array} \right) = [(q_S^2)_{11} - (q_S^2)_{00}]^2 C_{SS \alpha}^2 \mathbf{R}^{\alpha}, \\ \mathbf{D}^{I\alpha} &=& \left( \begin{array}{c} D^{I\alpha}_{-} \\ D^{I\alpha}_{+} \\ \end{array} \right) = [(q_S^2)_{11} - (q_S^2)_{00}][(q_S^2)_{11} + (q_S^2)_{00}] C_{SS \alpha}^2 \mathbf{R}^{\alpha}, \\ \mathbf{E}^{\alpha} &=& \left( \begin{array}{c} E^{\alpha}_{-} \\ E^{\alpha}_{+} \\ \end{array} \right) = [(q_S^2)_{11} - (q_S^2)_{00}](q_S^2)_{10} C_{SS \alpha}^2 \mathbf{R}^{\alpha}, \\ \mathbf{R}^{\alpha} &=& \frac{\hbar}{2 \omega_{\alpha}} \left( \begin{array}{c} 1 + n_{\alpha} \\ n_{\alpha} \\ \end{array} \right) \end{eqnarray} where $n_{\alpha}=1/(e^{\beta \hbar \omega_{\alpha}}-1)$ is the thermal phonon number. To calculate $\bar{\omega}_S$ and $b$ in Eqs.~(\ref{eq:omegas}) and (\ref{eq:qs}), we use the following \begin{eqnarray} \langle {\cal F}(t) \rangle &=& \langle {\cal F}(0) \rangle = \frac{\hbar}{2} \sum_{\alpha} \frac{C_{S \alpha \alpha}}{\omega_{\alpha}} (1+ 2 n_{\alpha}), \\ \langle {\cal G}(t) \rangle &=& \langle {\cal G}(0) \rangle = \frac{\hbar}{2} \sum_{\alpha} \frac{C_{S S \alpha \alpha}}{\omega_{\alpha}} (1+ 2 n_{\alpha}). \end{eqnarray} \end{appendix}
Sir Michael Andrew Morpurgo (né Bridge; 5 October 1943) is an English book author, poet, playwright, and librettist who is known best for children's novels such as War Horse (1982). His work is noted for its "magical storytelling", for recurring themes such as the triumph of an outsider or survival, for characters' relationships with nature, and for vivid settings such as the Cornish coast or World War I. Morpurgo became the third Children's Laureate, from 2003 to 2005, and he is also the current President of BookTrust, the UK's largest children's reading charity. Early life Morpurgo was born in 1943 in St Albans, Hertfordshire, as Michael Andrew Bridge, the second child of actor Tony Van Bridge and actress Kippe Cammaerts (born Catherine Noel Kippe Cammaerts, daughter of writer and poet Émile Cammaerts). Both RADA graduates, his parents had met when they were acting in the same repertory company in 1938. His father came from a working-class family, while Kippe came from a family of actors, an opera singer, writers and poets. They were married in 1941 while Van Bridge, having been called up in 1939 and by then stationed in Scotland, was on leave from the army. Morpurgo's brother Pieter was born in 1942. When Morpurgo was born the following year, his father was stationed in Baghdad. While Van Bridge was away at war, Kippe Cammaerts met Jack Morpurgo (subsequently professor of American Literature at the University of Leeds from 1969 to 1982). When Van Bridge returned to England in 1946, he and Cammaerts obtained a divorce and Cammaerts married Jack Morpurgo the same year. Although they were not formally adopted, Morpurgo and his brother took on their step-father's name. Morpurgo's older brother, Pieter Morpurgo, later became a BBC television producer and director. He has two younger siblings, Mark and Kay. 
Morpurgo's mother was frail, having suffered a breakdown when she was 19, and grieving the loss of her brother Pieter, who was killed in the war in 1941, for the rest of her life. Towards the end of her life she was an alcoholic and a drug addict. Morpurgo and his brother were evacuated to Northumberland when they were very young. After returning to London, the family lived in Philbeach Gardens, Earl's Court, where the children played on nearby bombsites. Morpurgo went to primary school at St Matthias, Earl's Court. The family later moved to Bradwell-on-Sea in Essex, where Morpurgo would live during the school holidays, having been sent to boarding school in Sussex when he was seven years old. The school was very strict and the boys were beaten frequently. During this period Morpurgo developed a stutter. His unhappy experiences at boarding school would later inform his novel The Butterfly Lion. After six years at The Abbey School in Ashurst Wood, Morpurgo then went to the King's School, an independent school in Canterbury, Kent, where he felt less homesick than at his previous school. Morpurgo did not learn who his biological father was until he was 19 years old. After the divorce from Michael's mother, Van Bridge had emigrated to Canada and was never talked about. Morpurgo never saw an image of his father until, while watching the 1962 CBC version of Great Expectations on TV with his mother, she recognised Van Bridge in the role of Magwitch and said to Michael "That's your father!". They met in person nine years later. Morpurgo's stepfather was not encouraging to his sons and was disappointed that they were not meeting his expectations for them of going into academia like him, calling Michael "a bear with very little brain." His stepfather decided he should join the army and Morpurgo attended the Royal Military Academy Sandhurst. He quickly realised that a soldier's life was not for him and left after nine months. 
Morpurgo later went to study at King's College London, reading English, French, and Philosophy, and graduated with a third class degree. He then joined the teaching profession with a job at Wickhambreaux Primary School in Canterbury, Kent. He also, in 1968, briefly taught at St. Faith's School in Cambridge. Personal life Aged 19, Morpurgo married Clare Lane, eldest daughter of Sir Allen Lane, the founder of Penguin Books, in 1963. They had met the previous year on holiday in Corfu through Morpurgo's stepfather, who was an editor at Penguin at the time. Lane was pregnant with their first child and Morpurgo has referred to it as a shotgun wedding. Their three children are all named after Shakespearian characters. Morpurgo was diagnosed with laryngeal cancer in 2017 and received radiotherapy. He has since recovered. Farms for City Children In 1976, Morpurgo and his wife Clare established the charity Farms for City Children, with the primary aim of providing children from inner city areas with experience of the countryside. The programme involves the children spending a week at a countryside farm, during which they take part in purposeful farmyard work. The charity's first president was the couple's close friend and neighbour, Ted Hughes. About 85,000 children have taken part in the scheme since it was set up, and the charity now has three farms in Wales, Devon, and Gloucestershire. Morpurgo has referred to the charity as his greatest achievement in life. Career From teaching to writing novels It was not until he was teaching in Kent that Morpurgo discovered his vocation in life, of which he later said "I could see there was magic in it for them, and realized there was magic in it for me." Morpurgo's writing career was inspired by Ted Hughes' Poetry in the Making, Paul Gallico's The Snow Goose and Ernest Hemingway's The Old Man and the Sea. Hughes and another poet, Seán Rafferty, were influential in his career, with Hughes becoming a friend, mentor and neighbour. 
Morpurgo credits Hughes and Rafferty with giving him the confidence to write War Horse, his most successful work to date. Works Morpurgo is the author of dozens of books, including the notable titles: All Around the Year (with Ted Hughes) (1979) The Nine Lives of Montezuma (1980) War Horse (1982) Little Foxes (1984) Mossop's Last Chance (with Shoo Rayner) (198 Waiting for Anya (1990) The Wreck of the Zanzibar (1995) The Butterfly Lion (1996) Farm Boy (1999) Private Peaceful (2003) Sir Gawain and the Green Knight (2004) The Orchard Book of Aesop's Fables (2004), illustrated by Emma Chichester Clark War: Stories of Conflict (compiler) (2005) Alone on a Wide, Wide Sea (2006) Beowulf (2006), illustrated by Michael Foreman Running Wild (2009) The Kites Are Flying! (2009) Not Bad for a Bad Lad (2010) Shadow (2010) Little Manfred (2011) The Pied Piper of Hamelin (2011) Sparrow: The True Story of Joan of Arc (2012) Outlaw: The Story of Robin Hood (2012) Homecoming (2012) Where My Wellies Take Me (with Clare Morpurgo) (2012) A Medal For Leroy (2012) Beauty and the Beast (2013) The Castle in the Field – Little Gems (2013) Pinocchio By Pinocchio (2013) The Goose is Getting Fat (2013) All I Said Was (2014) Half a Man (2014) Listen to the Moon (2014) Mini Kid (2014) Such Stuff: A Story-Maker's Inspiration (2016) The Fox and the Ghost King (The Timeless Tale of an Impossible Dream) (2016) An Eagle in the Snow (2016) Greatest Magical Stories (2017) Lucky Button (2017) Toto: The Dog-gone Amazing Story of the Wizard of Oz (2017) Flamingo Boy (2018) In The Mouth of the Wolf (2018) The Day the World Stopped Turning (2019) Grandpa Christmas (2020) A Song of Gladness (2021) The Puffin Keeper (2021) When Fishes Flew: The Story of Elena's War (2021) Carnival of the Animals (2021) Flying Scotsman and the Best Birthday Ever (2022) Adaptations Gentle Giant was presented as an opera by composer Stephen McNeff and librettist Mike Kenny at the Royal Opera House in 2006. 
Film versions have been made of Friend or Foe (1981), Private Peaceful (2012) and When the Whales Came (1989), the latter also being adapted into a stage play. My Friend Walter (1988), 'Purple Penguins' (2000) and Out of the Ashes (2001) have been adapted for television. Composer Stephen Barlow created a musical adaptation of Rainbow Bear, narrated by his wife Joanna Lumley. This was subsequently presented as a ballet by the National Youth Ballet of Great Britain in August 2010. War Horse has been adapted as a radio broadcast and as a stage play by Nick Stafford, premiering at the National Theatre, London, on 17 October 2007. The horses were played by life-sized horse puppets designed and built by the Handspring Puppet Company of South Africa. It won two Olivier Awards in 2007. Initially intended to run for 16 weeks, the show transferred to the New London Theatre in the West End on 28 March 2009 due to popular demand. It closed in the West End after eight years, having been seen by 2.7 million people in London and seven million worldwide at the time. It was the most successful production in the National Theatre's history. On 15 March 2011, the show premiered on Broadway at the Vivian Beaumont Theater. The play's Broadway production won five Tony Awards, including Best Play. It went on several UK tours and was also staged in Australia, Canada, China, Germany, and The Netherlands. It was seen by seven million people outside the UK. In 2011, War Horse was adapted by Lee Hall and Richard Curtis as a British film directed by Steven Spielberg. The film was nominated for numerous awards, including six Academy Awards and five BAFTA Awards. Waiting for Anya was adapted as a film of the same title released in 2020. Reception and influence Morpurgo has 30 books on the HarperCollins list and has sold more than 35 million books worldwide. The Reading Matters website calls Morpurgo's 1999 Kensuke's Kingdom "A quietly told story, but plenty of drama and emotion." 
The Guardian describes Private Peaceful, his 2003 novel for older children, as a "humanising and humane work". Children's Laureate Morpurgo and Hughes, then Poet Laureate, originated the idea of the Children's Laureate role. Morpurgo became the third person to fill the two-year position, from 2003 to 2005. Literary awards and prizes Shortlisted 1991 Carnegie Medal: Waiting for Anya 1995 Carnegie Medal: Arthur, High King of Britain 1996 Carnegie Medal: The Wreck of the Zanzibar 2002 W. H. Smith Award for Children's Literature: Out of the Ashes 2003 Blue Peter Book Award: The Book I Couldn't Put Down: Cool! 2003 Carnegie Medal: Private Peaceful 2004 Whitbread Children's Book Award: Private Peaceful 2010 Deutscher Jugendliteraturpreis (German youth literature prize): Warten auf Anya (Waiting for Anya) 2012 Bippo award for books 2014 Costa Children's Book Award: Listen to the Moon Awarded 1993 Prix Sorcières (France): King of the Cloud Forests 1995 Whitbread Children's Book Award: The Wreck of the Zanzibar 1996 Nestlé Smarties Book Prize (Gold Award): The Butterfly Lion 1999 Prix Sorcières (France): Wombat Goes Walkabout 2000 Red House Children's Book Award: Kensuke's Kingdom 2001 Prix Sorcières (France): Kensuke's Kingdom 2002 Nestlé Smarties Book Prize (Bronze Award): The Last Wolf 2004 Red House Children's Book Award: Private Peaceful 2005 Blue Peter Book of the Year Award: Private Peaceful 2005 Hampshire Book Award: Private Peaceful 2008 California Young Reader Medal: Private Peaceful 2011 Red House Children's Book Award: Shadow 2017 Red House Children's Book Award: An Eagle in the Snow 2021 Chen Bochui Children's Literature Award (China) – best author Political views In a January 2014 article, Morpurgo stated "as we begin to mark the centenary of the first world war, we should honour those who died, most certainly, and gratefully too, but we should never glorify... 
Come each November over the next four years, let the red poppy and the white poppy be worn together to honour those who died, to keep our faith with them, to make of this world a place where freedom and peace can reign together." In August 2014, Morpurgo was one of 200 public figures who were signatories to a letter to The Guardian opposing Scottish independence in the run-up to September's referendum on that issue. Prior to the 2015 general election, he was one of several celebrities who endorsed the parliamentary candidacy of the Green Party's Caroline Lucas. In 2016, he condemned government plans to extend grammar schools as divisive and "quite deeply stupid". In the run-up to the 2016 United Kingdom European Union membership referendum, Morpurgo expressed his support for the European Union in an interview with the BBC, and reinforced this with a ten-minute BBC Radio 4 'Point of View' on 5 August 2018. Honours and appointments Morpurgo and his wife Clare were both appointed Members of the Order of the British Empire (MBE) in the 1999 Birthday Honours for services to young people. He was advanced to Officer of the Order of the British Empire (OBE) in the 2006 Birthday Honours for services to literature and was made a Knight Bachelor in the 2018 New Year Honours for services to literature and charity. Morpurgo was awarded an honorary doctorate at Bishop Grosseteste University on 17 July 2013. He was awarded the honorary degree of Doctor of Letters (D.Litt.) by Newcastle University on 12 July 2017. Morpurgo was appointed a Deputy Lieutenant for Devon on 10 April 2015. Morpurgo is also President of BookTrust, the UK's largest children's reading charity. Radio and television broadcasts The Invention of Childhood (2006) (with Hugh Cunningham), BBC Radio 4 Set Our Children Free: the 2011 Richard Dimbleby Lecture. BBC One, 15 February 2011. "Alone on a Wide Wide Sea": BBC Radio 2, 7–10 August 2017 Biographies Carey, Joanna (1999). Interview with Michael Morpurgo. 
Fergusson, Maggie (2012). Michael Morpurgo: War Child to War Horse. Fox, Geoff (2004). Dear Mr Morpingo: Inside the World of Michael Morpurgo. McCarthy, Shaun (2005). Michael Morpurgo. References Further reading Morpurgo, Michael et al. La Revue Des Livres Pour Enfants Number 250, December 2009: "Michael Morpurgo" pp 79–124. External links Michael Morpurgo at publisher Egmont Books The Observer: "Once upon a life: Michael Morpurgo"
package simpleeventratelimiter;

import simpleeventratelimiter.exception.EventLimitException;
import simpleeventratelimiter.exception.EventRegisteredException;
import simpleeventratelimiter.exception.NoEventRegisteredException;

import java.util.concurrent.TimeUnit;

/**
 * Created by Klemen Polanec on 13.12.2016.
 */
public interface Limiter {
    /**
     * Logs an event, or throws {@link EventLimitException} if the rate limit has been reached.
     * Throws {@link NoEventRegisteredException} if no event is registered under the given key.
     *
     * @param eventKey event key
     * @throws EventLimitException if the rate limit has been reached
     * @throws NoEventRegisteredException if no event is registered under the given key
     */
    void logEvent(String eventKey) throws EventLimitException, NoEventRegisteredException;

    /**
     * Logs an event, or throws {@link EventLimitException} if the rate limit has been reached.
     * The event does not need to be registered beforehand.
     *
     * @param eventKey event key
     * @param limit maximum number of events allowed per interval
     * @param interval length of the rate-limiting window
     * @param unit time unit of the interval
     * @throws EventLimitException if the rate limit has been reached
     */
    void logEvent(String eventKey, int limit, int interval, TimeUnit unit) throws EventLimitException;

    /**
     * Registers an event, or throws {@link EventRegisteredException} if the event is already registered.
     *
     * @param eventKey event key
     * @param limit maximum number of events allowed per interval
     * @param interval length of the rate-limiting window
     * @param unit time unit of the interval
     * @throws EventRegisteredException if the event is already registered
     */
    void registerEvent(String eventKey, int limit, long interval, TimeUnit unit) throws EventRegisteredException;

    /**
     * Clears expired logs from event logbooks.
     */
    @Deprecated
    void purgeEventLogbooks();
}
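For illustration, here is a minimal sliding-window sketch of the Limiter contract above. This is a hypothetical standalone class, not the project's own implementation: it returns a boolean instead of throwing the checked exceptions, and it takes the current time as a parameter so its behavior is deterministic and testable.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

/** Illustrative sliding-window rate limiter (assumed design, not the library's). */
public class SlidingWindowLimiter {

    // Per-event logbook of timestamps (nanoseconds) of accepted events.
    private final Map<String, Deque<Long>> logbooks = new HashMap<>();

    /**
     * Returns true and records the event if fewer than {@code limit} events were
     * accepted within the last {@code interval}; returns false otherwise (a caller
     * following the interface above would translate false into EventLimitException).
     */
    public synchronized boolean tryLogEvent(String eventKey, int limit,
                                            long interval, TimeUnit unit,
                                            long nowNanos) {
        long windowNanos = unit.toNanos(interval);
        Deque<Long> stamps = logbooks.computeIfAbsent(eventKey, k -> new ArrayDeque<>());
        // Drop timestamps that have fallen out of the window -- the per-call
        // equivalent of what the deprecated purgeEventLogbooks() would do in bulk.
        while (!stamps.isEmpty() && nowNanos - stamps.peekFirst() >= windowNanos) {
            stamps.pollFirst();
        }
        if (stamps.size() >= limit) {
            return false; // rate limit reached
        }
        stamps.addLast(nowNanos);
        return true;
    }
}
```

Using a `Deque` keeps the purge amortized O(1) per call, since each timestamp is added and removed at most once.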
Find out more about Luke Taylor, a well-known TikTok performer who auditioned for American Idol. Entertainment | Ruby Manandhar | March 08, 2022 | 4 mins read. Luke Taylor, the TikTok-famous vocalist known for his extraordinarily deep voice, appeared on Sunday's edition of "American Idol" and performed Johnny Cash's "Ring of Fire" in his audition, which caught the judges' attention. We've compiled all of the information on Luke Taylor for his followers here. Luke Taylor: Age & Early Life Luke Taylor was born in West Chester, Pennsylvania, on September 17, 2001, to his father, Robert Taylor, and his mother, Dr. Carol Taylor. Luke has a sister, Abigail Taylor, who is a successful baker. Virgo is his zodiac sign. Taylor graduated from Garnet Valley High School in 2020, where he was extensively involved in several performances and musicals. He went on to Liberty University to study Vocal Performance after graduation. Luke, who stands 5 ft 9 in tall, worked as a carpenter for two years before entering the show. From A Regular Youngster To A Celebrity Luke Taylor went from a regular youngster to an online celebrity after he joined TikTok during the pandemic in December 2020, with his first-ever video performing Toby Keith's Beer for My Horses. He admits to making the account "as a joke," but his distinct voice quickly became a success on the platform. His duet video with @nathanevans on "Shanty-tok" became popular on TikTok; since it was posted, the video has received over 11 million views. Taylor says that around the age of 17 his voice shifted and grew heavier on the bass. He also emphasized that basso profondo singers, like himself, employ air to create a deep, echoing voice. Luke Taylor's American Idol Audition The current season of American Idol marks the show's 20th anniversary. Taylor's tremendously deep voice stunned judges Katy Perry, Lionel Richie, and Luke Bryan on Sunday's "American Idol." 
He feigned to narrate a movie trailer about Bryan as part of his audition, in addition to covering "Ring of Fire." He also sang "Frosty The Snowman" at Bryan's request. Perry voted against sending Taylor to Hollywood, citing concerns that his voice would be "funny but not genuinely able to finish." Richie and Bryan both agreed to send him on his way, allowing him to compete at Hollywood Week. Luke Taylor's fans are ecstatic to watch their favorite TikToker's journey on the show. Luke also surprised his supporters by uploading a photo of himself holding one of the three platinum tickets available on the program, which allows him to skip the first week of Hollywood eliminations as the show's broadcast unfolds. Luke Taylor, an ambitious singer on the rise, is now believed to have a net worth of $100,000. Luke Taylor's Girlfriend? While Luke Taylor has shown the world his vocal range, he has remained tight-lipped about his dating status. His social media accounts do not indicate that he has a girlfriend. As of March 2022, Luke appears to be single and has not confirmed being in a relationship. Luke Taylor, the internet superstar known for his extraordinarily strong voice, has 31.7k Instagram followers under the name @luke.the.voice_. Luke's voice account @_luke.the.voice_ has 2.2 million followers and 30.6 million likes on TikTok, a music-sharing app.
# Definition of Geometric Probability

Geometric probability involves the distributions of length, area, and volume for geometric objects under stated conditions. The same basic concept behind probability applies, but instead of calculating total outcomes and particular outcomes, you calculate the total area and the particular area of a geometric figure, using the formula P = particular area / total area. An example of a geometric probability involving area: a point is selected at random in a square; calculate the probability that it lies in the triangle MCN.

In statistics, geometric probability refers to geometric distributions. For example, when tossing a coin, what is the probability that the first head occurs on the third flip? That probability is referred to as a geometric probability and is denoted by g(x; P).
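As a worked instance of the coin-flip question above (our addition; the original page leaves it unanswered): the first head lands on the third flip only after two tails, so with success probability P = 1/2,

```latex
g(3;\,\tfrac{1}{2}) \;=\; \left(1-\tfrac{1}{2}\right)^{3-1}\cdot \tfrac{1}{2}
\;=\; \tfrac{1}{4}\cdot\tfrac{1}{2} \;=\; \tfrac{1}{8}.
```

In general, g(x; P) = (1 − P)^(x−1) P gives the probability that the first success occurs on trial x.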
The 2006 Tour de la Région Wallonne was the 33rd edition of the Tour de Wallonie cycle race and was held from 24 July to 28 July 2006. The race started in Flobecq and finished in Wanze. The race was won by Fabrizio Guidi.
Help Wanted: Doggy day care workers By Kayla Martin BURLINGTON, Vt. (WCAX) - Pandemic puppies have been a common phenomenon throughout the pandemic. But since many people have gone back to work, local dog daycares are having a hard time keeping up with demand. As part of our ongoing series on high-demand jobs, our Kayla Martin spoke with some local businesses trying to get more employees in the door. Anissa Joy, a team leader at Play Dog Play in Burlington, has what some dog lovers would consider a dream job. "I actually worked at a dog boarding facility in Milton for about five years. Then, I left there and decided to go sell cars for a year. That made me realize that I really enjoy working with dogs," Joy said. She says they could really use some help right now and that no special training is required. "We are always willing to train the right candidate for the position." "It's been a very challenging process and experience for us, nothing that I have dealt with before," said Lyra Anderson, the business's co-owner. She says the labor issue mostly stems from the pandemic. "Just because of the anxiety and stress and nervousness about the global pandemic, and rightly so. I've been dealing with a lot of applicants who are interested and would possibly like to work, but then they have something that comes up with their family or friends." Anderson says there is a high demand for dog day care services throughout the area. "It is unprecedented how quickly we are booking up," she said. Every dog has to go through an interview process before it can join. "We opened up the interview process after closing it for a couple of months and in a matter of three days we had already booked out two months' worth of interviews." She says the increase in demand is likely because so many people got pets when stuck at home, but now they are returning to work. 
"Social creatures by nature, so they are not getting what they naturally need if they are sitting home by themselves the entire time," Anderson said. Play Dog Play has a strict 20 to 25 dog to handler ratio. "Specifically did not want to change our business philosophy to meet the need," Anderson said. "I am not the type of employer or worker that's looking for a warm body on the floor." Anderson says that means that if they don't have the staff, they don't take the dogs. She says that often means having to close for the day or decrease the number of dogs they can take in. Their normal limit is 100 dogs, but sometimes they can only take up to 60 or 80 dogs at a time. "That directly affects our bottom line," Anderson said. They're looking to hire at least two more full-time front desk team members and two more full-time handlers, some of the hardest positions for them to fill. So what kind of characteristics do you need to become a handler? "Dogs are just like children in many ways. So on top of loving them, you have to have compassion and patience and empathy to really successfully work with dogs," Anderson said. But being able to play with the dogs is only a small part of the job. "It's a lot of hands-on work, it's a lot of dedication. There are definitely some days that are really, really frustrating and really energetic," Joy said. Overall Joy says it's a positive experience. "This is what handling is all about. Right here we've got these two dogs trying to interact, trying to feel each other out. I want to try to let them play if they can do it appropriately together," she said. She says understanding dog behavior is a big part of the job. "See what support they need. See how they do weeks, months, sometimes really years later. And really starting to see them flourish is so wonderful. It's probably my favorite part about this job." For example, take Ziggy. "This is a dog a year ago wouldn't see bouncing around this excited with another dog," Joy said. 
Other perks include getting to bring your dog to work, discounted dog training, and pet supplies. Anderson says they also attempt to provide a living wage. "We actually bumped up our starting pay to $15 an hour. From there, we then are really beyond happy to communicate and coordinate beyond that of what truly is going to be a livable wage," she said. There are a variety of shifts and hours available from full-time to part-time.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,856
The Beast of War (original English title: The Beast) is an American film directed by Kevin Reynolds, released in 1988, based on the play Nanawatai (meaning right of asylum, or sanctuary, in Pashto) by William Mastrosimone, set during the Soviet–Afghan War of 1979 to 1989. It has been dubbed into Catalan. Plot In 1981, during the Soviet–Afghan War between the Red Army of the USSR and the Mujahideen of the Afghan resistance, amid operations to raze villages and massacre civilians, the Soviet crew of a T-62 tank led by a tyrannical commander becomes lost in the Afghan desert and falls prey to Afghans armed with RPG-7 rocket launchers. Cast George Dzundza: Daskal Jason Patric: Koverchenko Steven Bauer: Taj Stephen Baldwin: Golikov Don Harvey: Kaminski Kabir Bedi: Akbar Erick Avari: Samad Haim Gerafi: Moustafa Shoshi Marciano: Sherina Itzhak Bbi Neeman: Iskandar David Sherrill: Kovolov Moshe Vapnik: Hasan Claude Aviram: Sadioue Victor Ken: Ali Avi Keedar: Noor Reception In The Beast, by Kevin Reynolds, George Dzundza plays a cruel Russian tank commander who gets lost in the desert during the war in Afghanistan. As the enemy closes in, Dzundza must prepare to fight for his survival. Meanwhile, one of Dzundza's men (Jason Patric), abandoned and left for dead by his demonic commander, joins the Afghan cause. Filmed in Israel, The Beast is an adaptation of a play by William Mastrosimone. Awards 1988: Best film at the Cleveland International Film Festival. Trivia The tank is an Israeli Tiran 6 dressed up as a Soviet T-62. The helicopter used is a French SA.321 Super Frelon disguised as a Mil Mi-8. References 1988 United States films American drama films Films about the Soviet–Afghan War
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,465
Alonso López Maldonado (died 26 September 1626 in Madrid) was a sculptor and altarpiece maker from Madrid. Biography He carried out various works across the province of Madrid, among them the decoration of the old Puerta de Alcalá. References 17th-century Spanish sculptors Sculptors from the Community of Madrid
{ "redpajama_set_name": "RedPajamaWikipedia" }
713
Chris Martin has transformed English diffidence into a masochistic religion. Is Coldplay warm milk or just quietly dependable? Don't ask Martin, who has transformed the English art of diffidence into a masochistic religion: "We owe them a career, really," he has said of Radiohead. He has also said, "Like millions of people in the world, I can't listen to Coldplay." He's half right about Radiohead—Coldplay exhibits a taste for melancholy and smeared, stretched-out sounds that leads straight back to Thom Yorke and his friends. The main antecedent is U2, who invented the form that Coldplay works within: rock that respects the sea change of punk but still wants to be as chest-thumping and anthemic as the music of the seventies stadium gods. Translated, this means short pop songs that somehow summon utterly titanic emotions and require you to skip around in triumphant circles and pump your fist, even if it is not entirely clear what you are singing about. The link to U2 has been made explicit on "Viva la Vida," which was co-produced by Brian Eno, the man who moved U2 from a feisty, soccer-chant style into the expansive and hypnotic sound that has defined the rest of their career. The problem is that Coldplay doesn't seem to have unplumbed depths, or a voice as distinctive as either Bono's or the Edge's, whose guitar is U2's second vocalist. The guys in Coldplay are a sweet bunch, and their best songs are modest affairs. "Yellow" was the track that made them famous eight years ago. There's some guitar work that echoes the Edge's—chiming, small chords played high on the neck and repeated, over and over, pushing the song away from the divisions of song form and closer to the ecstasy of the drone (when it works)—but the core of the song is Martin serenading someone with the oldest trick in the book: "Look at the stars, look how they shine for you, and all the things that you do." It's a big fat "Aw!," and it gets me every time. 
"Yellow" is one of Martin's few straightforward lyrics. For the band's second album, Martin started singing in free-floating slogans. "Am I part of the cure? Or am I part of the disease?" is a line from "Clocks," perhaps the group's loveliest song. The music evokes the song's name, revolving around three circling and falling piano arpeggios. The payoff comes when Martin stretches out the words "you are" in a falsetto sung over the piano figure. You are what? Go figure, and I haven't the slightest idea what is going on with the "tides" and the "clocks" in the lyrics. Doesn't matter. "Clocks" is a big-budget "Ooh!" with lots of pretty lights—it works. At the end of the song, Martin repeatedly sings, "Home, home, where I wanted to go." There's the only part you need take note of—an essentially conservative sentiment, and probably a comfort zone for a guy who grew up thinking he wasn't particularly cool and lost his virginity at the age of twenty-two. I've always wanted to like Coldplay for just that attribute. They're a band of nice young lads being rewarded for niceness. But on the band's third album, "X&Y," a need to Signify Something began to overwhelm the charm. The little bouquet of roses on the doorstep became an oversized vessel filled with cloying, synthetic gas. The title track of "Viva la Vida"—also known as the "iPod song," because it is used in an Apple ad—is easily the best thing about the album. Don't go to the lyrics for any cues; it is entirely obscure why such a jaunty, upbeat song would be referencing "Roman cavalry choirs" or revolutionaries or St. Peter. Martin is the king? Was the king? Whatevs. Coldplay knows how to build a song that draws you in with easy, karaoke-ready moves. I spent a weekend hearing an eight-year-old and an eleven-year-old sing the song (fighting about the lyrics, and sometimes rewriting them), and I never tired of the melody. After that, though, you are on your own. 
There are Eno touches that catch the ear: the chattering strings and bell-like keyboards that close out "Death and All His Friends," or the timbre of the instrumental "Life in Technicolor," which sounds like it's emanating from the end of a long metal tube. "Technicolor" is one of the album's few concise, concentrated pieces of writing; the rest sounds both incomplete and puffed up, like scraps of previous records scrambled and rearranged. This upending of their style isn't even radical enough to be bad. "Viva la Vida" is an album that keeps going out of focus, a series of disconnected pieces that is impossible to hold on to. And why are they wearing all those vaguely military jackets? What's with Liberty leading the people on the cover? They must know that beyond the cozy confines of London there are a couple of major conflicts going on. It does not feel like the moment, especially for such a vague band, to be playing with any symbols of war. All of this is a paid vacation in Ibiza compared to the Madison Square Garden show, in June. The concert, as you may have heard, was a freebie. As the Rolling Stones have taught us, nobody loves money more than a rich man, and I was impressed that this almost unnaturally successful band gave away a night in such a large venue. Good for Chris Martin, I thought. But guess who reminded us, three separate times, that the concert was free? There Martin was, onstage in his Little Bummer Boy outfit, skipping around and waving his fists. Except the crowd wasn't going wild, and the music wasn't calling for a celebration. Though the audience was obviously delighted to see Coldplay appear, the energy in the room remained fairly controlled throughout the set, even dipping to indifference at points. Which made Martin's moves seem that much more canned. It felt as if he'd done the entire show in a mirror, down to the self-deprecating wisecracks. 
In one of his increasingly suspect apologies, Martin told his American fans, "We come over here, we steal your women." That's right. If anyone in the audience had forgotten, Chris Martin is married to an actress named Gwyneth Paltrow. She's American. Maybe you've heard of her. No? Well, did you know that "Viva la Vida" went to No. 1? No? It's O.K.—Martin told us, by way of thanking us.
{ "redpajama_set_name": "RedPajamaC4" }
4,043
Q: Parsing PL/SQL code to check for syntax and semantic errors

Please consider the following scenario:

* I have a text area and a button in the UI.
* The user enters a PL/SQL block in the text area.
* When the user presses the button, I want to check that block for syntax and semantic errors without executing it.

I would really like a solution where I don't have to install anything extra.

Environment information:

* Java 1.6.31
* Oracle Database 11g

A:

SQL> explain plan for select from dual;
explain plan for select from dual
*
ERROR at line 1:
ORA-00936: missing expression

SQL> explain plan for select * from dual;
Explained.

or

declare
  c integer := dbms_sql.open_cursor();
begin
  dbms_sql.parse(c, 'select * emp', dbms_sql.native);
  dbms_sql.close_cursor(c);
end;
/

Error at line 1
ORA-00923: FROM keyword not found where expected
ORA-06512: at "SYS.DBMS_SQL", line 1053
ORA-06512: at line 4

or try http://www.softpedia.com/get/Internet/Servers/Database-Utils/EasySQL-Checker-for-Oracle.shtml

A: It is difficult, and you probably have to consider all kinds of possibilities and make sure the users cannot wreck your database, for example by granting them very few rights. You get the point. This is not a full solution, but it points you in the right direction. You could embed the block in a CREATE OR REPLACE PROCEDURE and fetch the errors. Something like this:

declare
  text_area varchar2(4000) := 'declare x number; begin xy := x + 1; end;';
begin
  execute immediate 'create or replace procedure DUMMY#__ IS BEGIN null; begin '|| text_area ||' end; END;';
exception
  -- see the comment below about error handling
  when others then
    -- signal yourself that it went wrong
    RAISE;
end;

The trouble with an anonymous block is that it is executed right away. This way you only execute the creation of the procedure, which performs the compilation. If you have several users, you probably want to use different procedure names, or even different schemas, to prevent conflicts.
As I said, this is not a full solution, just a pointer in the right direction. "ORA-24344: success with compilation error" can be used to fetch the compile errors.

A: I think you need a PL/SQL interpreter. It can check the syntax of the given code, but only approximately. If you want a full check, it is not easy: you have to check the database objects, properties, permissions, etc. You can build a PL/SQL interpreter yourself to cover your requirements, or you can try this parser: https://github.com/porcelli/plsql-parser By the way, "execute immediate" calls will be a problem.

A: If you do have access to a running Oracle system that contains the tables in question, you can use dbms_sql.parse() to check whether a given piece of SQL is valid. Regular DML statements are not executed by parse(), but DDL will be executed immediately. So you might want to check that the SQL is not a DDL statement (or better, only allow certain statements to begin with). Note that if the database you are connecting to does not contain the tables used in the SQL, parse() will throw an error even if the statement is syntactically correct.
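The create-or-replace trick above can be driven from Java with plain JDBC. The sketch below is illustrative, not a drop-in solution: the class name PlsqlSyntaxCheck, the method names, and the reuse of the answer's DUMMY#__ procedure name are all assumptions, and compileHarness needs an Oracle JDBC driver plus a live connection to actually run.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class PlsqlSyntaxCheck {

    // Wrap the user's anonymous block in a CREATE OR REPLACE PROCEDURE,
    // as suggested in the answer above, so that compiling the procedure
    // checks syntax and semantics without running the block.
    public static String buildHarness(String userBlock) {
        return "create or replace procedure DUMMY#__ is\n"
             + "begin\n"
             + "  " + userBlock + "\n"
             + "end;";
    }

    // Hypothetical usage with a live connection: execute the DDL, then
    // query USER_ERRORS for DUMMY#__ to fetch any compilation messages
    // (Oracle signals "success with compilation error" as ORA-24344).
    public static void compileHarness(Connection conn, String userBlock)
            throws SQLException {
        Statement st = conn.createStatement();
        try {
            st.execute(buildHarness(userBlock));
            // SELECT line, position, text FROM user_errors
            //  WHERE name = 'DUMMY#__' ORDER BY sequence;  (omitted here)
        } finally {
            st.close();
        }
    }

    public static void main(String[] args) {
        // Without a database we can only show the generated harness.
        System.out.println(buildHarness("declare x number; begin x := 1; end;"));
    }
}
```

Remember to drop (or at least isolate per user/schema) the dummy procedure afterwards, as the answer notes.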
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,485
import {IModelConstructor} from 'mobx-collection-store';

import ParamArrayType from './enums/ParamArrayType';
import IDictionary from './interfaces/IDictionary';
import IFilters from './interfaces/IFilters';
import IHeaders from './interfaces/IHeaders';
import IRawResponse from './interfaces/IRawResponse';
import IRequestOptions from './interfaces/IRequestOptions';
import IResponseHeaders from './interfaces/IResponseHeaders';
import * as JsonApi from './interfaces/JsonApi';
import {Record} from './Record';
import {Response as LibResponse} from './Response';
import {Store} from './Store';
import {assign, getValue, isBrowser, objectForEach} from './utils';

export type FetchType = (
  method: string,
  url: string,
  body?: object,
  requestHeaders?: IHeaders,
) => Promise<IRawResponse>;

export interface IStoreFetchOpts {
  url: string;
  options?: IRequestOptions;
  data?: object;
  method: string;
  store: Store;
}

export type StoreFetchType = (options: IStoreFetchOpts) => Promise<LibResponse>;

export interface IConfigType {
  baseFetch: FetchType;
  baseUrl: string;
  defaultHeaders: IHeaders;
  defaultFetchOptions: IDictionary<any>;
  fetchReference: Function;
  paramArrayType: ParamArrayType;
  storeFetch: StoreFetchType;
  transformRequest: (options: IStoreFetchOpts) => IStoreFetchOpts;
  transformResponse: (response: IRawResponse) => IRawResponse;
}

export const config: IConfigType = {

  /** Base URL for all API calls */
  baseUrl: '/',

  /** Default headers that will be sent to the server */
  defaultHeaders: {
    'content-type': 'application/vnd.api+json',
  },

  /* Default options that will be passed to fetchReference */
  defaultFetchOptions: {},

  /** Reference of the fetch method that should be used */
  /* istanbul ignore next */
  fetchReference: isBrowser && window.fetch && window.fetch.bind(window),

  /** Determines how will the request param arrays be stringified */
  paramArrayType: ParamArrayType.COMMA_SEPARATED, // As recommended by the spec

  /**
   * Base implementation of the fetch function (can be overridden)
   *
   * @param {string} method API call method
   * @param {string} url API call URL
   * @param {object} [body] API call body
   * @param {IHeaders} [requestHeaders] Headers that will be sent
   * @returns {Promise<IRawResponse>} Resolves with a raw response object
   */
  baseFetch(
    method: string,
    url: string,
    body?: object,
    requestHeaders?: IHeaders,
  ): Promise<IRawResponse> {
    let data: JsonApi.IResponse;
    let status: number;
    let headers: IResponseHeaders;

    const request: Promise<void> = Promise.resolve();

    const uppercaseMethod = method.toUpperCase();
    const isBodySupported = uppercaseMethod !== 'GET' && uppercaseMethod !== 'HEAD';

    return request
      .then(() => {
        const reqHeaders: IHeaders = assign({}, config.defaultHeaders, requestHeaders) as IHeaders;
        const options = assign({}, config.defaultFetchOptions, {
          body: isBodySupported && JSON.stringify(body) || undefined,
          headers: reqHeaders,
          method,
        });
        return this.fetchReference(url, options);
      })
      .then((response: Response) => {
        status = response.status;
        headers = response.headers;
        return response.json();
      })
      .catch((e: Error) => {
        if (status === 204) {
          return null;
        }
        throw e;
      })
      .then((responseData: JsonApi.IResponse) => {
        data = responseData;
        if (status >= 400) {
          throw {
            message: `Invalid HTTP status: ${status}`,
            status,
          };
        }
        return {data, headers, requestHeaders, status};
      })
      .catch((error) => {
        return {data, error, headers, requestHeaders, status};
      });
  },

  /**
   * Base implementation of the stateful fetch function (can be overridden)
   *
   * @param {IStoreFetchOpts} reqOptions API request options
   * @returns {Promise<Response>} Resolves with a response object
   */
  storeFetch(reqOptions: IStoreFetchOpts): Promise<LibResponse> {
    const {
      url,
      options,
      data,
      method = 'GET',
      store,
    } = config.transformRequest(reqOptions);
    return config.baseFetch(method, url, data, options && options.headers)
      .then((response: IRawResponse) => {
        const storeResponse = assign(response, {store});
        return new LibResponse(config.transformResponse(storeResponse), store, options);
      });
  },

  transformRequest(options: IStoreFetchOpts): IStoreFetchOpts {
    return options;
  },

  transformResponse(response: IRawResponse): IRawResponse {
    return response;
  },
};

export function fetch(options: IStoreFetchOpts) {
  return config.storeFetch(options);
}

/**
 * API call used to get data from the server
 *
 * @export
 * @param {Store} store Related Store
 * @param {string} url API call URL
 * @param {IHeaders} [headers] Headers to be sent
 * @param {IRequestOptions} [options] Server options
 * @returns {Promise<Response>} Resolves with a Response object
 */
export function read(
  store: Store,
  url: string,
  headers?: IHeaders,
  options?: IRequestOptions,
): Promise<LibResponse> {
  return config.storeFetch({
    data: null,
    method: 'GET',
    options: {...options, headers},
    store,
    url,
  });
}

/**
 * API call used to create data on the server
 *
 * @export
 * @param {Store} store Related Store
 * @param {string} url API call URL
 * @param {object} [data] Request body
 * @param {IHeaders} [headers] Headers to be sent
 * @param {IRequestOptions} [options] Server options
 * @returns {Promise<Response>} Resolves with a Response object
 */
export function create(
  store: Store,
  url: string,
  data?: object,
  headers?: IHeaders,
  options?: IRequestOptions,
): Promise<LibResponse> {
  return config.storeFetch({
    data,
    method: 'POST',
    options: {...options, headers},
    store,
    url,
  });
}

/**
 * API call used to update data on the server
 *
 * @export
 * @param {Store} store Related Store
 * @param {string} url API call URL
 * @param {object} [data] Request body
 * @param {IHeaders} [headers] Headers to be sent
 * @param {IRequestOptions} [options] Server options
 * @returns {Promise<Response>} Resolves with a Response object
 */
export function update(
  store: Store,
  url: string,
  data?: object,
  headers?: IHeaders,
  options?: IRequestOptions,
): Promise<LibResponse> {
  return config.storeFetch({
    data,
    method: 'PATCH',
    options: {...options, headers},
    store,
    url,
  });
}

/**
 * API call used to remove data from the server
 *
 * @export
 * @param {Store} store Related Store
 * @param {string} url API call URL
 * @param {IHeaders} [headers] Headers to be sent
 * @param {IRequestOptions} [options] Server options
 * @returns {Promise<Response>} Resolves with a Response object
 */
export function remove(
  store: Store,
  url: string,
  headers?: IHeaders,
  options?: IRequestOptions,
): Promise<LibResponse> {
  return config.storeFetch({
    data: null,
    method: 'DELETE',
    options: {...options, headers},
    store,
    url,
  });
}

/**
 * Fetch a link from the server
 *
 * @export
 * @param {JsonApi.ILink} link Link URL or a link object
 * @param {Store} store Store that will be used to save the response
 * @param {IDictionary<string>} [requestHeaders] Request headers
 * @param {IRequestOptions} [options] Server options
 * @returns {Promise<LibResponse>} Response promise
 */
export function fetchLink(
  link: JsonApi.ILink,
  store: Store,
  requestHeaders?: IDictionary<string>,
  options?: IRequestOptions,
): Promise<LibResponse> {
  if (link) {
    const href: string = typeof link === 'object' ? link.href : link;

    /* istanbul ignore else */
    if (href) {
      return read(store, href, requestHeaders, options);
    }
  }
  return Promise.resolve(new LibResponse({data: null}, store));
}

export function handleResponse(record: Record, prop?: string): (response: LibResponse) => Record {
  return (response: LibResponse): Record => {
    /* istanbul ignore if */
    if (response.error) {
      throw response.error;
    }

    if (response.status === 204) {
      record['__persisted'] = true;
      return record as Record;
    } else if (response.status === 202) {
      (response.data as Record).update({
        __prop__: prop,
        __queue__: true,
        __related__: record,
      } as Object);
      return response.data as Record;
    } else {
      record['__persisted'] = true;
      return response.replaceData(record).data as Record;
    }
  };
}

function __prepareFilters(filters: IFilters): Array<string> {
  return __parametrize(filters).map((item) => `filter[${item.key}]=${item.value}`);
}

function __prepareSort(sort?: string|Array<string>): Array<string> {
  return sort ? [`sort=${sort}`] : [];
}

function __prepareIncludes(include?: string|Array<string>): Array<string> {
  return include ? [`include=${include}`] : [];
}

function __prepareFields(fields: IDictionary<string|Array<string>>): Array<string> {
  const list = [];
  objectForEach(fields, (key: string) => {
    list.push(`fields[${key}]=${fields[key]}`);
  });
  return list;
}

function __prepareRawParams(params: Array<{key: string, value: string}|string>): Array<string> {
  return params.map((param) => {
    if (typeof param === 'string') {
      return param;
    }
    return `${param.key}=${param.value}`;
  });
}

export function prefixUrl(url) {
  return `${config.baseUrl}${url}`;
}

function __appendParams(url: string, params: Array<string>): string {
  if (params.length) {
    url += '?' + params.join('&');
  }
  return url;
}

function __parametrize(params: object, scope: string = ''): Array<{key: string, value: string}> {
  const list = [];
  objectForEach(params, (key: string) => {
    if (params[key] instanceof Array) {
      if (config.paramArrayType === ParamArrayType.OBJECT_PATH) {
        list.push(...__parametrize(params[key], `${key}.`));
      } else if (config.paramArrayType === ParamArrayType.COMMA_SEPARATED) {
        list.push({key: `${scope}${key}`, value: params[key].join(',')});
      } else if (config.paramArrayType === ParamArrayType.MULTIPLE_PARAMS) {
        list.push(...params[key].map((param) => ({key: `${scope}${key}`, value: param})));
      } else if (config.paramArrayType === ParamArrayType.PARAM_ARRAY) {
        list.push(...params[key].map((param) => ({key: `${scope}${key}][`, value: param})));
      }
    } else if (typeof params[key] === 'object') {
      list.push(...__parametrize(params[key], `${key}.`));
    } else {
      list.push({key: `${scope}${key}`, value: params[key]});
    }
  });
  return list;
}

export function buildUrl(
  type: number|string,
  id?: number|string,
  model?: IModelConstructor,
  options?: IRequestOptions,
) {
  const path: string = model
    ? (getValue<string>(model['endpoint']) || model['baseUrl'] || model.type)
    : type;

  const url: string = id ? `${path}/${id}` : `${path}`;

  const params: Array<string> = [
    ...__prepareFilters((options && options.filter) || {}),
    ...__prepareSort(options && options.sort),
    ...__prepareIncludes(options && options.include),
    ...__prepareFields((options && options.fields) || {}),
    ...__prepareRawParams((options && options.params) || []),
  ];

  return __appendParams(prefixUrl(url), params);
}
{ "redpajama_set_name": "RedPajamaGithub" }
4,983
17/F/Illinois loving someone dearly takes all of the breath out of you. M/Nigeria A Naturopathic Physician,Poet, Researcher,Lecturer,Human rights activist,Crisis Councillor, Humanitarian, Author of A Guide to Health(Natural Medicine), Whisperings from within:Poems by Emeka Mokeme,Personal Glimpses,and a Poetry Therapist. 22/F/Philadelphia "in a world where everyone wears a mask, it's a privilege to see a soul." 20/M/India Neither a writer nor a lover, just a boy who like to write short poems. Would you like more s.a.l.t in your alphabet soup?
{ "redpajama_set_name": "RedPajamaC4" }
9,308
Hurricane John was the eleventh named storm, seventh hurricane, and fifth major hurricane of the 2006 Pacific hurricane season. John developed on August 28 from a tropical wave to the southwest of Mexico. Favorable conditions allowed the storm to intensify quickly, and it attained peak winds of 130 mph (215 km/h) on August 30. Eyewall replacement cycles and interaction with western Mexico weakened the hurricane, and John made landfall in southeastern Baja California Sur with winds of 110 mph (175 km/h) on September 1. It gradually weakened as it moved northwestward through the Baja California peninsula, and it dissipated on September 4. Moisture from the storm's remnants entered the southwestern United States. The hurricane threatened much of the western coast of Mexico, prompting the evacuation of tens of thousands of people. In coastal portions of western Mexico, strong winds downed trees, while heavy rainfall caused mudslides. Hurricane John caused moderate damage on the Baja California peninsula, including the destruction of more than 200 houses and thousands of flimsy shacks. The hurricane killed five people in Mexico, and damage totaled $663 million (2006 MXN; $60.9 million 2006 USD). In the southwestern United States, moisture from John's remnants produced heavy rainfall. The rain helped ease drought conditions in parts of northern Texas, though it was damaging in places that had received above-normal rainfall throughout the year.

Meteorological history The tropical wave that would become the storm moved off the coast of Africa on August 17. It entered the eastern Pacific Ocean on August 24 and quickly showed signs of organization. That night, Dvorak classifications began on the system while it was just west of Costa Rica, moving west-northwestward at 10-15 mph (15-25 km/h). Conditions appeared favorable for further development, and convection increased late on August 26 within the area of low pressure. Early on August 27, the system became much better organized about 250 miles (400 km) south-southwest of Guatemala, though convection remained minimal. Early on August 28, the banding became better organized in its convection, and the system became Tropical Depression Eleven-E. Due to low vertical wind shear, very warm waters, and abundant moisture, steady intensification was forecast, and the depression strengthened into Tropical Storm John later on August 28. Deep convection continued to develop over the storm, while an eye feature developed within the expanding central dense overcast. The storm continued to intensify, and John attained hurricane status on August 29 while 190 miles (305 km) south-southeast of Acapulco. Banding features continued to increase as the hurricane moved west-northwestward along the southwestern periphery of a mid- to upper-level ridge over northern Mexico. The hurricane rapidly intensified, and John attained major hurricane status 12 hours after becoming a hurricane. Shortly thereafter, the eye became obscured, and the intensity held at 115 mph (185 km/h) due to an eyewall replacement cycle. A new eyewall formed, and based on data from the Hurricane Hunters, the hurricane reached Category 4 status on the Saffir-Simpson hurricane scale on August 30 about 160 miles (260 km) west of Acapulco, or 95 miles (155 km) south of Lázaro Cárdenas, Michoacán. Hours later, the hurricane underwent another eyewall replacement cycle, and it subsequently weakened to Category 3 status as it paralleled the Mexican coastline a short distance offshore. Due to land interaction and the eyewall replacement cycle, John weakened to a 105 mph (170 km/h) hurricane late on August 31, but it restrengthened into a major hurricane shortly thereafter as its eye became better defined. After completing another eyewall replacement cycle, the hurricane again weakened to Category 2 status, and on September 1 it made landfall at Cabo del Este on the southern tip of Baja California Sur with winds of 110 mph (180 km/h). John passed near La Paz as a weakening Category 1 hurricane on September 2, and it weakened to a tropical storm shortly after moving ashore. John continued to weaken, and late on September 3 the system deteriorated into a tropical depression while still over land. On September 4, most of the convection became decoupled from the circulation toward mainland Mexico, and a clear circulation could not be discerned for 24 hours. Based on the system's disorganization, the National Hurricane Center issued its last advisory on the system.

Preparations The Mexican army and emergency services were stationed near the coast, while classes in public schools in and around Acapulco were canceled. Officials in Acapulco advised residents in low-lying areas to be on alert and also urged fishermen to return to port. Authorities in the twin cities of Ixtapa and Zihuatanejo closed the port to small ocean vessels. Government officials in the state of Jalisco declared a mandatory evacuation of 8,000 citizens in low-lying areas to 900 temporary shelters. Temporary shelters were also set up near Acapulco. The state of Michoacán was placed under a yellow alert, the middle level of a five-level alert system. Carnival Cruise Lines diverted the path of a cruise ship traveling through Pacific waters off Mexico. On August 31, the state government of Baja California Sur ordered the evacuation of more than 10,000 residents. Those who refused to follow the evacuation order would have been forced to evacuate by the army. Shelters were set up to allow local residents and tourists to ride out the storm. Just weeks after major flooding in the area, authorities evacuated hundreds of citizens in Las Presas, in northern Mexico, near a dam. All public schools in the area were closed as well. The United States National Weather Service issued flood warnings and watches for portions of Texas and the southern two-thirds of New Mexico.

Impact Mexico Hurricane John's powerful winds produced strong waves and downed trees near Acapulco. The hurricane produced a 10-foot (3 m) storm surge in Acapulco that flooded coastal roads. In addition, John dropped heavy rainfall along the western coast of Mexico, peaking at 12.5 inches (317.5 mm) in Los Planes, Jalisco. The rain triggered landslides in the Costa Chica region of Guerrero, leaving about 70 communities isolated. In La Paz, the capital of Baja California Sur, the hurricane downed 40 power poles. Authorities cut power to the city to prevent electrocutions from downed wires. Strong winds knocked down trees and destroyed many billboards. Heavy rainfall totaling more than 20 inches (500 mm) in isolated areas resulted in deep flooding, closing many roads as well as the La Paz airport. In La Paz, 300 families sustained damage to their homes, and another 200 families were left homeless after their houses were destroyed. The combination of wind and rain destroyed thousands of fragile homes throughout the region. The rain also destroyed large areas of crops and killed many animals. The rainfall caused the Iguagil dam in Comondú to overflow, isolating 15 towns with 4-foot (1.5 m) flooding. In the coastal town of Mulegé, flash flooding caused widespread damage throughout the town and the death of a United States citizen. More than 250 houses were damaged or destroyed in the town, leaving many people homeless. The severe flooding blocked portions of Federal Highway 1 and damaged an aqueduct in the region. In total, Hurricane John destroyed hundreds of houses and unroofed 160 homes on the Baja California peninsula. Five people were killed, and damage in Mexico totaled $663 million (2006 MXN; $60.8 million 2006 USD). In Ciudad Juárez, Chihuahua, across the border from El Paso, Texas, rainfall from the storm's remnants flooded 20 neighborhoods, downed power lines, and caused several traffic accidents. John's rainfall, combined with continued precipitation during the two weeks before the storm, left thousands of people homeless.

United States Moisture from the remnants of John combined with an approaching cold front to produce moderate rainfall across the southwestern United States, including a total of 8 inches (200 mm) in Whitharral and more than 3 inches (75 mm) in El Paso, Texas. The rain flooded many roads in southwestern Texas, including a half-mile (800 m) portion of Interstate 10 in El Paso. A slick runway at El Paso International Airport delayed a Continental Airlines plane when its tires became stuck in the mud. John's rainfall in El Paso, combined with an unusually wet year, resulted in twice the normal annual precipitation and made 2006 the ninth-wettest year on record through September. Damage from the precipitation totaled about $100,000 (2006 USD) in the El Paso area. In northern Texas, the rain eased a severe drought, caused the Double Mountain Fork Brazos River to swell, and made Lake Alan Henry overflow. The Texas Department of Transportation closed numerous roads because of flooding from the rainfall, including a portion of U.S. Route 385 near Levelland. Several other roads were washed out. Moisture derived from John also produced rainfall in southern New Mexico, peaking at 5.25 inches (133 mm) in Ruidoso. The rain caused rivers to overflow, forcing people to evacuate along the Rio Ruidoso. The rainfall also caused isolated street flooding. The rain in New Mexico canceled an annual wine festival in Las Cruces and caused muddy conditions at the All American Futurity at Ruidoso Downs, the biggest day of horse racing in New Mexico. Flooding was severe in Mesquite, Hatch, and Rincón, where many homes experienced flooding and 4-foot (1.5 m) mudflows. Some homeowners lost everything they owned. The storm's tropical moisture also produced rainfall in Arizona and Southern California. In California, the rain produced eight mudslides, which trapped 19 vehicles but caused no injuries.

Aftermath The branches of the Mexican Red Cross in Guerrero, Oaxaca, and Michoacán were put on alert.
El equipo nacional de respuesta a emergencias de la organización estuvo a disposición para ayudar a las áreas más afectadas. Los helicópteros de la Armada entregaron alimentos y agua a áreas remotas de la península de Baja California. La Cruz Roja Mexicana envió 2,000 paquetes de alimentos al extremo sur de Baja California Sur. En la ciudad de Mulegé, el suministro de gas, que era necesario para el funcionamiento de los generadores, era bajo, el agua potable había desaparecido y la pista de aterrizaje estaba cubierta de lodo. Muchos residentes sin hogar al principio se quedaron con amigos o en albergues administrados por el gobierno. En toda la península de Baja California, miles permanecieron sin agua o electricidad dos días después de la tormenta, aunque un piloto de Phoenix se preparó para volar al área del desastre con 380 litros de agua. Se esperaba que otros pilotos ejecutaran vuelos similares, también. La oficina del Turismo de Baja California Sur indicó que se produjo un daño mínimo en la infraestructura del turismo, con solo retrasos mínimos en los aeropuertos, las carreteras y las instalaciones marítimas. La Beneficencia y Desarrollo Episcopal entregan alimentos, ropa, medicina y transporte a unas 100 familias, y dieron colchones a alrededor de 80 familias. Muchos residentes en Tucson, incluidos más de 50 estudiantes, entregaron suministros a las víctimas de las inundaciones en Nuevo México, incluyendo ropa y otras donaciones. Véase también Huracán Henriette (2007) Huracán Jimena (2009) Huracán Odile (2014) Huracán Newton (2016) Referencias Enlaces externos El archivo del Centro Nacional de Huracanes acerca del huracán John John John John John Huracanes en México México en 2006 John
The Rochester and Eastern Rapid Railway (R&ER) was an electric interurban railway in New York State, USA, connecting Rochester, Canandaigua, and Geneva. History The company was chartered in 1901, the investors being mostly from Rochester. Service between that city and Canandaigua began in 1903, the power house and car barn being erected in the latter place. Completion to Geneva was in 1904. A spur line to Fairport was begun, but abandoned unfinished. In 1905 the line came under control of the New York Central Railroad (NYC) through its Mohawk Valley Company subsidiary. In this period, Canandaigua was provided with a limited local streetcar service using one car. The R&E was consolidated with the Rochester Railway Company and the Rochester and Sodus Bay Railway in 1909 to form New York State Railways which was wholly owned by New York Central. One change was that the power house was shut down, and electricity purchased from the local public utility company owing to economies of scale. In its early years, the line attracted a heavy leisure traffic of people wishing to visit the Finger Lakes, but this was especially vulnerable to automobile competition. Ridership declined sharply through the 1920s, but nevertheless the interurban was re-routed to link up to the Rochester Subway at Rowlands in 1928, providing a faster and more direct route to serve downtown Rochester. The effort was wasted, as New York State Railways petitioned to abandon the R&E in 1929. Months later, legal permission was granted to end all service on July 31, 1930. The line was dismantled soon after, and there was no successor. Route Before 1928, the R&ER used the Rochester Railway Company streetcar tracks, and left the city on Monroe Avenue. In that year, it briefly connected with the Rochester Subway at Rowlands, at the south-east end of Monroe, before shutting down for good. The terminus in this short period was the City Hall subway stop. 
The first major stop after Rochester city limits was Pittsford, whence the R&ER closely paralleled the NYC line to Canandaigua on the latter's north side. Stops were at Bushnell's Basin, Fishers, Victor, Mertensia (actually at Hathaway Corners) and Paddleford. In Canandaigua the line ran down Main Street, and the car barn was at the end of a stub line left when the main line turned east to Geneva. The location of the car barn at the end of this stub line at the south end of Canandaigua's Main Street meant that the city's local trolley service could shuttle from there to the city limits at the north end of Main Street and back again, without getting in the way of the interurban cars. The main line then ran due east, with main stops at Hopewell, Seneca Castle, Gates and Pre-emption. It terminated in Geneva next to the City Hall on the north side of Castle Street, halfway between Geneva and Exchange Streets. At Geneva, the line connected with the Geneva, Seneca Falls and Auburn Railroad, which took passengers to Seneca Falls and to a terminus on the Cayuga Lake shore where there was an amusement park (it never got to Auburn). This little company also ran the Geneva streetcars. In 1905, the R&ER was proposing a loop line from Mertensia through Shortsville, Clifton Springs and Phelps to Geneva, but this was abortive, although it appeared on its publicity maps. The line had interchanges with the NYC, and also with the Pennsylvania Railroad at Canandaigua and the Lehigh Valley Railroad at Geneva. However, it was never able to develop any substantial carload freight traffic, although, like most interurbans, it handled LCL (less-than-carload) freight in the baggage compartments of its cars.
Q: Add value to dictionary key only if it is not already present

I find that I am often wanting to append values to dictionary lists, but only if the value is not in the list already. Therefore, I'm trying to separate this out into a procedure, and I can't figure out why the following code doesn't achieve this:

    proc DictAdd {_dictionary key value} {
        upvar 1 $_dictionary dictionary
        if { $value ni [dict get $dictionary $key] } {
            dict lappend dictionary $key $value
        }
    }

Calling this procedure returns the following error:

    can't read "dictionary": no such variable
        while executing
    "dict get $dictionary $key"
        (procedure "DictAdd" line 5)
        invoked from within
    "DictAdd $files baseline $fileName"
        (procedure "getFilesToLoad" line 53)
        invoked from within
    ...

Could someone please tell me what I'm doing wrong here? Thanks.

A: The problem is most likely that the dictionary variable that you are referring to is actually unset, and so impossible to read. Try this:

    proc DictAdd {_dictionary key value} {
        upvar 1 $_dictionary dictionary
        if {![info exists dictionary]} {
            set dictionary [dict create $key [list $value]]
        } elseif {$value ni [dict get $dictionary $key]} {
            dict lappend dictionary $key $value
        }
    }

A: In the invocation

    DictAdd $files baseline $fileName

$files is the value of your dictionary, but DictAdd expects the name of the dictionary variable. If you instead invoke it like this:

    DictAdd files baseline $fileName

the command works as designed.

BTW: if you define DictAdd like this:

    proc DictAdd {_dictionary key value} {
        upvar 1 $_dictionary dictionary
        if {[dict exists $dictionary $key]} {
            dict with dictionary {
                upvar 0 $key values
                if {$value ni $values} {
                    lappend values $value
                }
            }
        } else {
            dict lappend dictionary $key $value
        }
    }

you don't get an error message if the key is missing (it adds the value under that key), the dictionary still needs to exist outside DictAdd, and the checking/adding code gets a little less messy.

Why the name? It's because of how upvar works.
The command takes a stack level (in this case 1 = the caller's level) and a variable name (contained in _dictionary; "files" in this case); using those it locates a variable outside the executing command, and creates a local alias inside the executing command (in this case named dictionary: files outside is now basically the same variable as dictionary inside). If you pass something else, e.g. the value of files, say {baseline {a b c}}, upvar will look for a variable called {baseline {a b c}} and most likely not find it. It will create the alias anyway, and if you initialize it, a variable called {baseline {a b c}} at the caller's level will actually be created. But, again, you will want to use the name of the variable (of course, the name of the variable could be the value of another variable when you invoke the command…). Documentation: dict, if, lappend, ni operator, proc, upvar
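To make the aliasing concrete, here is a hypothetical interactive tclsh session (it assumes a corrected DictAdd, as in the answers above, has already been defined; the file names are made up for illustration):

    % set files [dict create baseline {a.tcl}]
    baseline a.tcl
    % DictAdd files baseline b.tcl      ;# pass the NAME "files", not $files
    % dict get $files baseline
    a.tcl b.tcl
    % DictAdd files baseline b.tcl      ;# duplicate value: list unchanged
    % dict get $files baseline
    a.tcl b.tcl

Inside DictAdd, upvar 1 makes the local variable dictionary an alias for the caller's files, so dict lappend on the alias updates files directly.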
\section{Introduction} This paper introduces the \emph{subgraph nomination} inference task, in which example subgraphs of interest are used to query a network for similarly interesting subgraphs. Stated succinctly, the subgraph nomination problem is as follows: given a subgraph or subgraphs of interest in a network $G_1$, we seek to find heretofore unknown subgraphs of interest in $G_1$ or in a second network $G_2$. The subgraph nomination problem can be viewed as an amalgam of the problems of noisy subgraph detection \cite{nsg1,nsg2,nsg3,bertozzi_2018,sussman2018matched} and vertex nomination \cite{marchette2011vertex,coppersmith2014vertex,FisLyzPaoChePri2015,lyzinski2017consistent}, in which our task is to produce a rank-list of candidate subgraphs, with (ideally) unknown subgraphs of interest concentrating at the top of the rank list. While seemingly simple on its surface, subgraph nomination necessarily entails the often non-trivial combination of subgraph detection (i.e., we have to find candidate subgraphs to rank) and multiple graph comparison methodologies (i.e., we have to rank the candidate subgraphs). \emph{Subgraph detection} ---here, producing candidate subgraphs of interest from $G_1$ or $G_2$---is an active area of research in machine learning and pattern recognition, and encompasses both the famous subgraph isomorphism problem (see, for example, \cite{sg1,sg2,sg3,sg4}) as well as numerous noisy subgraph detection routines. In subgraph nomination, subgraph detection is further complicated by the generality of the (topological) features that define the training subgraphs as ``interesting." In particular, it may be the case that the query set encompasses multiple different interesting templates \cite{bertozzi_2018, nsg1} or motifs-of-interest \cite{bio3, lyzinski2015community}, each delineating an activity or structure of interest in the network. Moreover, the templates of interesting subgraphs need not be manifestly similar to each other. 
The notion of \emph{similarity} across subgraphs will be formalized via a subgraph dissimilarity mapping $\boldsymbol{\Delta}$, and a subgraph nomination scheme can (loosely) be considered as an estimate of the subgraph structure of $g_2$ combined with a dissimilarity $\boldsymbol{\Delta}$ used to rank the subgraphs (see Def. \ref{def:SGN}). If $\mathfrak{S}_{G}$ represents the collection of all subgraphs of a network $G$, then in the single-graph setting $$\boldsymbol{\Delta}:\mathfrak{S}_{G_1}\times \mathfrak{S}_{G_1}\rightarrow[0,1], $$ and in the pair of graphs setting $$\boldsymbol{\Delta}:\mathfrak{S}_{G_1}\times \mathfrak{S}_{G_2}\rightarrow[0,1], $$ with smaller values indicating more similar subgraphs. We comment here that although the two graph setting will be our focus moving forward, this naturally lifts to the single graph setting by considering the two graphs to nominate across as parts of a partition of a larger network. Large values of $\boldsymbol{\Delta}$ indicate that the subgraphs are highly dissimilar according to $\boldsymbol{\Delta}$, and candidate subgraphs are defined as interesting if they are sufficiently similar (i.e., insufficiently dissimilar) to \emph{any} subgraph in the training sample. Choosing an appropriate $\boldsymbol{\Delta}$ from which to construct the subgraph rankings is of primary import, and adaptive methods (similar to those in the learning-to-rank problem in the information retrieval literature \cite{liu2011learning,helm2020learning}) can be considered to learn $\boldsymbol{\Delta}$ from the training subgraphs of interest. While estimating an optimal $\boldsymbol{\Delta}$ is central in this problem framework, we do not derive formal procedures for estimating $\boldsymbol{\Delta}$ in this paper. 
Instead, we choose to focus on the effect of user-in-the-loop supervision across a variety of possible $\boldsymbol{\Delta}$ choices, where the user is modeled as additional light supervision that can be used to refine the output of a subgraph nomination routine. As a motivating example for our focus on studying the effects of a user-in-the-loop, we consider the problem of detecting and nominating particular brain regions of interest across the hemispheres of a human connectome from the BNU1 dataset \cite{zuo2014open} downloaded from \url{https://neurodata.io/mri/}. In this setting, subgraph nomination can be used as a tool to better understand (and detect) the structural similarity between brain regions across hemispheres. To this end, we consider a region (or regions) of interest in the left hemisphere, and there might be multiple subgraphs in the right hemisphere that match (according to $\boldsymbol{\Delta}$) well to the training data. Moreover, the most similar region of interest in the right hemisphere (i.e., the ${\boldsymbol{\Delta}}$-optimal subgraph of the right hemisphere) may vary dramatically depending on the $\boldsymbol{\Delta}$ used (see, for example, Figure 7 in \cite{sussman2018matched}), and the optimal region may not significantly overlap with a true latent region of interest in the left hemisphere. In this case, even a sensible ranking scheme would be potentially unable to correctly retrieve the desired region of interest in the right hemisphere (even if the proper subgraph was identified). In this case, we can naturally employ the help of a user-in-the-loop \cite{amershi2014power}, who, given a vertex, can decide whether it is interesting/part of an interesting subgraph to help refine our ranking. The subgraph nomination framework we present in this paper is both general enough to include the operationally significant effect of a user-in-the-loop and principled enough to theoretically analyze nomination schemes. 
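As a concrete illustration of how a fixed dissimilarity $\boldsymbol{\Delta}$ induces a nomination list, consider the following sketch. It is not an algorithm from this paper: the Jaccard-style edge-set dissimilarity and the function names (`jaccard_delta`, `nominate`) are our own illustrative choices, standing in for whatever $\boldsymbol{\Delta}$ and candidate-generation procedure one actually uses.

```python
def jaccard_delta(s, t):
    """Toy dissimilarity on subgraphs represented as edge sets:
    1 - |intersection| / |union|, taking values in [0, 1]."""
    union = s | t
    return (1.0 - len(s & t) / len(union)) if union else 0.0


def nominate(candidates, training, delta):
    """Rank candidate subgraphs by their dissimilarity to the CLOSEST
    training subgraph of interest: a candidate is deemed interesting if
    it is sufficiently similar to ANY training subgraph, so we take the
    minimum of delta over the training set (smaller = more interesting)."""
    return sorted(candidates, key=lambda s: min(delta(s, t) for t in training))
```

For example, with a single training subgraph on edges {(1,2),(2,3)}, an identical candidate ranks first and a disjoint one last.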
After discussing preliminaries and presenting the problem framework, we demonstrate multiple pathologies to highlight the power of the subsequent theory as a predictive tool of the efficacy of the user in general situations. In particular, we show that a user can improve the performance over a scheme that achieves the Bayes optimal rate sans user-in-the-loop, even in the presence of potential user-error. This result is similar to results in the classification literature when given noisy training labels \cite{frenay2013classification,natarajan2013learning}. We further show that, given noisily recovered subgraphs, including information from an oracle user can deteriorate performance due to the noise within the subgraph detection routines. \vspace{3mm} \noindent{\bf Notation:} For a positive integer $n$, we will denote $[n]:=\{1,2,3,\cdots,n\}$; $J_n$ to the $n\times n$ matrix of all $1$'s; $\mathcal{G}_n$ to be the set of labeled $n$-vertex graphs; for $G\in\mathcal{G}_n$, and $W\subset V(G)$, we let $G[W]$ denote the induced subgraph of $G$ on $W$. \section{Introduction and Background} \label{Motivation} In subgraph nomination, our task is as follows: given a subgraph or subgraphs of interest in a network $G_1$, we seek to find heretofore unknown subgraphs of interest in a second network $G_2$. Our approach to subgraph nomination proceeds by \begin{itemize} \item[1.] Partitioning the vertices in $G_2$ into candidate subgraphs (note that the choice of non-overlapping candidate subgraphs is merely a theoretical/notational convenience, and in practice, the subgraphs to be ranked can have overlap); \item[2.] Ordering the candidate subgraphs in $G_2$ into a rank list with the unknown subgraphs of interest ideally concentrating at the top of the rank list. 
\end{itemize} Subgraph nomination is a natural extension of vertex nomination (VN) \cite{marchette2011vertex,coppersmith2014vertex,suwan2015bayesian,Fishkind_2015,lyzinski2017consistent,agterberg2019vertex}, and we will begin by providing a brief background on recent developments in VN, as this will provide useful context for our later novel formulation of subgraph nomination. \subsection{Vertex Nomination} \label{sec:vn} Semi-supervised querying of large graphs and databases is a common graph inference task. For example, in a graph database with multiple features, one may be given a collection of book names all with the common \emph{latent} feature ``Horror Fiction Best Sellers,'' and the goal would be to query the database for more titles that fit this description; see, for example, the work in \cite{angles2016foundations,rastogi2017vertex}. Learning this unifying feature from the query set is a challenging problem unto itself \cite{rastogi2019neural}, especially in settings where the features that define interestingness are nuanced or multi-faceted. While we can interpret such a problem as a vertex classification problem where vertices are either labeled interesting or not, in the presence of very large networks with large class imbalance between interesting and non-interesting vertices, there is often limited training data and limited user resources for verifying returned results. In this setting, an information retrieval (IR) framework may be more appropriate, wherein unlabeled vertices would be ordered in a rank list based on how interesting they are deemed to be. This inference task is synonymous with \emph{vertex nomination} (or personal recommender systems on graphs). In \cite{patsolic2017vertex,lyzinski2017consistent,agterberg2019vertex,levin2020role}, the vertex nomination problem is defined as follows. 
Given vertices of interest $V^*\subset V(G_1)$ in a network $G_1$, and corresponding unknown vertices of interest $U^*\subset V(G_2)$ in a second network $G_2$ (whose identities are hidden to the user), use $G_1,G_2,V^*$ to rank the vertices in $G_2$ into a nomination list, with vertices in $U^*$ concentrating at the top of the nomination list. In its initial formulation \cite{coppersmith2014vertex,marchette2011vertex,Fishkind_2015,yoder2018vertex}, the feature that defined vertices as interesting was membership in a community of interest, and the community memberships of vertices in $V^*$ were used to nominate the vertices in $G_1$ (no second network was introduced, or $G_2=G_1\setminus\{V^*\}$) with unknown community memberships. Subsequent work \cite{rastogi2017vertex,rastogi2019neural,patsolic2017vertex,lyzinski2017consistent} sought to generalize the features that defined vertices as interesting beyond simple community membership, and lifted the vertex nomination problem to the two graph setting. In order to allow for a broadly general class of networks to be considered, the general vertex nomination problem framework of \cite{agterberg2019vertex,lyzinski2017consistent} referenced above is situated in the context of nominatable distributions, a broad class of random graph distributions defined in \cite{agterberg2019vertex}. Within this broad class of models, the concepts of Bayes optimality and consistency were developed in \cite{lyzinski2017consistent, agterberg2019vertex} and the important result that universally consistent VN schemes do not exist is proven in \cite{lyzinski2017consistent}. These results were leveraged in \cite{agterberg2019vertex} to develop a probabilistic adversarial contamination model in the context of VN as well as regularization schemes to counter the adversary.
The results of \cite{lyzinski2017consistent, agterberg2019vertex} were further extended to the richly featured network setting in \cite{levin2020role}, in which the (potentially) complementary roles of features and network structure are explored in the VN task. Beyond the development of novel VN algorithms \cite{yoder2018vertex}, one of the key aspects of the recent theoretical developments in VN is the notion that vertex labels in $g_2$ are uninformative in the ranking scheme, which is sensible if we are mirroring a setting in which the vertex labels do not aid in the delineation between interesting and uninteresting vertices. In subgraph nomination, the uninformative nature of the labels can be accounted for in the dissimilarity $\boldsymbol{\Delta}$ (see Eq. \ref{eq:D}). Note that while similar consistency results to those in VN can be derived in the setting of subgraph nomination, we do not focus on that here. Rather, we will focus on the role of users-in-the-loop in the subgraph nomination framework. \subsection{Hierarchical Subgraph Models} \label{sec:HG} \begin{figure}[t!] \centering \includegraphics[width=1\textwidth]{Images/Hier-crop.pdf} \caption{An example of two 3-Level network hierarchical functions of $[5]$, $H^{(1)}$ and $H^{(2)}$. Note the nestedness, and the fact that we do not require the top level to be a single cluster nor the bottom layer to be $n=5$ clusters.} \label{fig:hier_ex2} \end{figure} We will use $\mathcal{G}_n$ to denote the set of $n$-vertex labeled graphs on a common vertex set $V=V_n$. Given the (informal) explanation of the motivating task in subgraph nomination---given light supervision, partition a graph into subgraphs and rank the subgraphs based on how interesting they are judged to be---hierarchical network models are a natural setting for initially formalizing the subgraph nomination inference task.
Hierarchical models have seen a surge in popularity in the network literature (see, for example, \cite{sales-pardo07:_extrac,clauset08:_hierar,park10:_dynam_bayes,Peixoto_HSBM,lyzinski2015community,bickel_hierarchical}) and naturally allow for multiple layers of structure to be simultaneously modeled in a network, allowing us to imbue a graph $G$ with subgraph structures of a desired form or motif-type. We begin with our definition of a hierarchical network as a network together with a \emph{Network Hierarchical Function}. \theoremstyle{definition} \begin{definition} Let $g=(V,E)\in\mathcal{G}_n$, and for each $i\in\mathbb{Z}_{>0}$ let $[i]=\{1,2,\cdots,i\}$. Let $k\leq n$. A function $$H=H_k:V\times [k]\longrightarrow [n]$$ is a \emph{k-level Network Hierarchical Function of [n]} if \begin{itemize} \item[i.] For each $i\in[k]$, $H(\cdot,i)$ represents a partition of the vertices of $V$ into $n_i:=n_i^{(H)}$ nonempty parts (so that $\max_v H(v,i)=n_i$). These $n_i$ satisfy $1\leq n_1\leq n_2\leq\cdots\leq n_k\leq n.$ The \emph{signature} of $H$ is defined to be the vector $\vec n=(n_1,n_2,\cdots,n_k)$. Note that by appending $0$'s onto the end of $\vec n$ as needed to make it length $n$, we can consider the signature of an $n$ vertex hierarchical graph to be a length $n$ vector. \item[ii.] $H(v_1,j)=H(v_2,j)$ only if $H(v_1,i)=H(v_2,i)$ for all $i\leq j$; i.e., the partition is nested. \end{itemize} For an example of network hierarchical functions, see Figure \ref{fig:hier_ex2}. A graph $g\in\mathcal{G}_n$ together with a Network Hierarchical Function $H_k$ of $[n]$ is a \emph{k-level Hierarchical Graph}. For $k\leq n$, we denote $$\mathcal{H}_{k,n}=\{H_k\,|\,H_k\text{ is a k-level Network Hierarchical Function of }[n]\},$$ and we define $$\mathcal{HG}_n:=\{(g,H)\,|\,g\in\mathcal{G}_n,\text{ and }H\in \cup_{k=1}^n\mathcal{H}_{k,n}\}$$ to be the set of all Hierarchical graphs of order $n$.
\end{definition} \noindent Letting $k\leq n$, let $H\in\mathcal{H}_{k,n}$. For each $i\in [k]$ and $j\in[n_i]$, we define the sets $$B^i_j=\{v\in V(g)|H(v,i)=j\}$$ to be the set of vertices in the $j$-th part of the $i$-th level of the hierarchy; $$B_{(j)}^{i-1}=\{v\in V(G)\,|\text{ for all }v'\in B_j^i, \,H(v,i-1)=H(v',i-1)\}$$ to be the one-step upward merge of $B^i_j$ in the hierarchy; $$B^i=\{B^i_j\,|\, 1\leq j\leq n_i\}$$ to be the set of all parts at level $i$ in the hierarchy; and $$B=\{B^i:i\in[k]\}$$ to be the set of blocks. \begin{figure}[t!] \centering \includegraphics[width=1\textwidth]{Images/hier_ex} \caption{An example of a 3-Level hierarchical network (where the three levels are shown in the right panel, the top two levels in the middle panel, and the highest level in the left panel). The partition provided by $H(\cdot,1)$ is in darkest grey; the partition provided by $H(\cdot,2)$ is in medium grey; and the partition provided by $H(\cdot,3)$ is in lightest grey.} \label{fig:hier_ex} \end{figure} As an example, consider the hierarchical network in Figure \ref{fig:hier_ex}, which shows a 3-level hierarchical network, where we have (for example): \begin{align*} n_1&=1; n_2=3; n_3=9;\\ B_1^1&=V=B^2_1\cup B^2_2\cup B^2_3;\\ B^2_{(3)}&=B^3_3\cup B^3_4\cup B^3_5\cup B^3_6 ;\\ B^3&=\{B^3_1,B^3_2,B^3_3,B^3_4,B^3_5,B^3_6,B^3_7,B^3_8,B^3_9 \};\\ B&=\{B^1_1,B^2_1,B^2_2,B^2_3,B^3_1,B^3_2,B^3_3,B^3_4,B^3_5,B^3_6,B^3_7,B^3_8,B^3_9\}. \end{align*} \subsubsection{Hierarchical Stochastic Blockmodels} \label{sec:HSBM} One important example of a network distribution with hierarchical structure is the hierarchical stochastic blockmodel (HSBM) \cite{Peixoto_HSBM,lyzinski2015community}. Before defining the HSBM, we first define the standard stochastic blockmodel \cite{sbm}.
\begin{definition} We say that an $n$-vertex random graph $G$ is an instantiation of a stochastic blockmodel with parameters $(n,K,\Lambda,\pi)$ (abbreviated $A\sim\text{SBM}(n,K,\Lambda,\pi)$) if \begin{itemize} \item[i.] The block membership vector $\pi\in\mathbb{R}^K$ satisfies $\pi(i)\geq 0$ for all $i\in[K]$, and $\sum_i\pi(i)=~1$; \item[ii.] The vertex set $V=V(G)$ is the disjoint union of $K$ blocks $V=\mathcal{B}_1 \sqcup \mathcal{B}_2 \sqcup \cdots \sqcup \mathcal{B}_K$, where each vertex $v\in V$ is independently assigned to a block according to a Multinomial($1,\pi$) distribution. For each vertex $v\in V(G)$, let $b_v$ be the block that $v$ is assigned to. \item[iii.] The block probability matrix $\Lambda\in[0,1]^{K\times K}$ is a symmetric matrix. Conditional on the block assignment vector $\vec b=(b_v)$, for each pair of vertices $\{u,v\}\in\binom{V}{2}$, $$\left\{\mathds{1}\{u\sim_{G} v\}\right\}\stackrel{\text{ind.}}{\sim}\text{Bernoulli}(\Lambda[b_u,b_v]).$$ \end{itemize} \end{definition} As in \cite{Peixoto_HSBM}, we will define the HSBM recursively from the top down. In essence, the 2-level HSBM is an SBM where each block itself has an SBM structure. If further, every one of the blocks of the second level is an SBM then we have a three level HSBM, and so on. Formally, we have the following recursive definition. \begin{definition} We say that an $n$-vertex random graph $G$ is an instantiation of a 2-level hierarchical stochastic blockmodel with parameters $$(n,K_1,\Lambda_1,\pi_1,\{K_2^{(j)},\Lambda_2^{(j)},\pi_2^{(j)} \}_{j=1}^{K_1})$$ if \begin{itemize} \item[i.] The block membership vector $\pi_1\in\mathbb{R}^{K_1}$ satisfies $\pi_1(i)\geq 0$ for all $i\in[K_1]$, and $\sum_i\pi_1(i)=~1$; \item[ii.] 
The vertex set $V=V(G)$ is the disjoint union of $K_1$ blocks $V=\mathcal{B}^1_1 \sqcup \mathcal{B}^1_2 \sqcup \cdots \sqcup \mathcal{B}^1_{K_1}$, where each vertex $v\in V$ is independently assigned to a block according to a Multinomial($1,\pi_1$) distribution. For each vertex $v\in V(G)$, let $b^{(1)}_v$ be the block that $v$ is assigned to. \item[iii.] The block probability matrix $\Lambda_1\in[0,1]^{K_1\times K_1}$ is a hollow symmetric matrix. Conditional on the block assignment vector $\vec b^{(1)}=(b^{(1)}_v)$, for each pair of vertices $\{u,v\}\in\binom{V}{2}$ with $b^{(1)}_u\neq b^{(1)}_v$, $$\{\mathds{1}_{u\sim_{G} v}\}\stackrel{\text{ind.}}{\sim}\text{Bernoulli}(\Lambda_1[b^{(1)}_u,b^{(1)}_v]).$$ \item[iv.] For each $j\in[K_1]$, conditional on the block assignment vector $\vec b^{(1)}=(b^{(1)}_v)$ and $|\mathcal{B}^1_j|>0$, we have that $$G[\mathcal{B}^1_j]\sim SBM(|\mathcal{B}^1_j|,K_2^{(j)},\Lambda_2^{(j)},\pi_2^{(j)} ).$$ Moreover, conditional on the block assignment vector $\vec b^{(1)}=(b^{(1)}_v)$ the collection $\{G[\mathcal{B}^1_j]\}_{j=1}^{K_1}$ are mutually independent. \end{itemize} \end{definition} \noindent Formally defining the HSBM beyond the second level of the hierarchy is notationally complex, but the main idea is that a $k$-level HSBM has the same set-up as the $2$-level HSBM except that at part \emph{iv.} of the definition, we have \begin{itemize} \item[\emph{iv'.}] For each $j\in[K_1]$, conditional on the block assignment vector $\vec b^{(1)}=(b^{(1)}_v)$ and $|\mathcal{B}^1_j|>0$, we have that $$G[\mathcal{B}^1_j]\sim \text{k-1 level HSBM}.$$ Moreover, conditional on the block assignment vector $\vec b^{(1)}=(b^{(1)}_v)$, we have that the collection $\{G[\mathcal{B}^1_j]\}_{j=1}^{K_1}$ are mutually independent. \end{itemize} Recursively applying \emph{iv'.} together with the definition of a 2-level HSBM allows us to define HSBMs of arbitrary depth ($\leq n$ of course). 
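The generative process in the 2-level HSBM definition above can be sketched in a few lines of code. This is an illustrative sampler written for this exposition (the function names and parameter packaging are ours, not from the paper): sample top-level block memberships from $\pi_1$, draw between-block edges from the hollow matrix $\Lambda_1$, then sample an independent SBM inside each top-level block.

```python
import random


def sample_sbm(nodes, Lambda, pi, rng):
    """Sample an SBM on the given node list; returns (edges, block_of)."""
    K = len(pi)
    block_of = {v: rng.choices(range(K), weights=pi)[0] for v in nodes}
    edges = set()
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if rng.random() < Lambda[block_of[u]][block_of[v]]:
                edges.add((u, v))
    return edges, block_of


def sample_2level_hsbm(n, K1, Lambda1, pi1, params2, rng=None):
    """2-level HSBM: top-level SBM with hollow Lambda1; top block j is
    itself an SBM with parameters params2[j] = (K2, Lambda2, pi2)."""
    rng = rng or random.Random()
    nodes = list(range(n))
    b1 = {v: rng.choices(range(K1), weights=pi1)[0] for v in nodes}
    edges = set()
    # Between-block edges at the top level (Lambda1 hollow: diagonal unused).
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if b1[u] != b1[v] and rng.random() < Lambda1[b1[u]][b1[v]]:
                edges.add((u, v))
    # Within each top-level block, an independent SBM (part iv. above).
    b2 = {}
    for j in range(K1):
        block_nodes = [v for v in nodes if b1[v] == j]
        K2, Lambda2, pi2 = params2[j]
        sub_edges, sub_blocks = sample_sbm(block_nodes, Lambda2, pi2, rng)
        edges |= sub_edges
        b2.update(sub_blocks)
    return edges, b1, b2
```

Replacing the inner call to `sample_sbm` with a recursive call would give the $k$-level construction obtained by iterating \emph{iv'.}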
\begin{remark} \emph{ For a $k$-level HSBM, we shall adopt the notation from the definition of hierarchical graphs already established. Namely, we will denote the $j$-th block at level $i$ via $\mathcal{B}_j^i$, where the blocks are labeled in order beginning with those of $B^{i-1}_{(1)}$ and ending with those of $B^{i-1}_{(n_{i-1})}$. We will denote the block-membership function at level $i$ via $\vec b^{(i)}$. } \end{remark} While the model complexity of the HSBM can grow quite rapidly as we allow for more structure in each block of the higher level SBMs, the top-down structure does offer complexity savings versus an SBM with the same number of blocks at the bottom level of the hierarchy. For example, the 1-level HSBM (i.e., the SBM) with $K$ blocks requires $$ \underbrace{\binom{K}{2}+K}_{\text{for }\Lambda}+\underbrace{K-1}_{\text{for }\pi}=O(K^2) $$ parameters to define, while the $2$-level HSBM requires $$ \underbrace{\binom{K_1}{2}}_{\text{for }\Lambda_1}+\underbrace{K_1-1}_{\text{for }\pi_1}+\sum_{j=1}^{K_1}\Bigg[\underbrace{\binom{K_2^{(j)}}{2}+K_2^{(j)}}_{\text{for }\Lambda_2^{(j)}}+\underbrace{K_2^{(j)}-1}_{\text{for }\pi_2^{(j)}}\Bigg]=O(K_1^2+\sum_j (K_2^{(j)})^2) $$ parameters. As a simple example, if $K_1=3$ and each $K_2^{(j)}=3$, then the HSBM has 9 blocks at its bottom level and requires 29 parameters, while a 9-block SBM requires 53 parameters. The complexity of an HSBM can be further reduced by enforcing repeated motif structure \cite{lyzinski2015community}, though we do not explore this further herein. \begin{remark} \emph{Consider a $k$-level HSBM $G$.
Letting $(n_i)_{i=1}^k$ denote the number of blocks at each level $i$ in the hierarchy of $G$, so that (for example) \begin{align*} n_1&=K_1\\ n_2&=\sum_{i=1}^{K_1} K_2^{(i)}\\ n_3&=\sum_{i=1}^{K_1}\sum_{j=1}^{K_2^{(i)}}K_3^{(j,i)}\\ n_4&=\sum_{i=1}^{K_1}\sum_{j=1}^{K_2^{(i)}}\sum_{\ell=1}^{K_3^{(j,i)}}K_4^{(\ell,j,i)}\\ &\vdots \end{align*} (where $K_3^{(j,i)}$ is the number of blocks in the $j$-th block of the $i$-th block of $G$; $K_4^{(\ell,j,i)}$ is the number of blocks in the $\ell$-th block of the $j$-th block of the $i$-th block of $G$; etc...) and conditioning on all blocks being non-empty at each level of the hierarchy, $G$ defines a distribution over the subset of $k$-level hierarchical graphs $$\mathcal{HG}_n^{\vec n}:=\{(g,H)\in\mathcal{HG}_n\,|\text{ the signature of \emph{H} is }\vec n\}.$$ We then have (for example) that $H(v,i)=\vec b^{(i)}_v$ for all $v\in [n]$ and $i\in[k]$.} \end{remark} \subsubsection{Hierarchical Subgraph Nomination} A key component of the hierarchical subgraph nomination (HSN) inference task is the notion of subgraphs of interest within and across the network pair. This is easily accomplished in non-random settings, where a fixed set of indices can be used to define the subgraphs of interest within and across networks. To facilitate this in random networks, for a fixed pair of signatures $\vec n$ and $\vec m$, we will consider distributions restricted to $\hg_n^{\vec n}\times\hg_m^{\vec m}$; this will allow us to define a consistent set of indices for subgraphs of interest across the random network pair. In hierarchical subgraph nomination, we consider a pair of hierarchical graphs $(g_1,H_1)\in\hg_n^{\vec n}$ and $(g_2,H_2)\in\hg_m^{\vec m}$ where $H_1$ and $H_2$ are unobserved and only the graphs $g_1$ and $g_2$ are observed.
We are additionally given a set of ``training'' subgraphs of interest in $g_1$, denoted $$ T_1=\{B^{s_{1}}_{s_{2},1} \}_{s=(s_1,s_2)\in S_1}$$ where $S_1$ is a set of ordered pair indices $S_1=\{s=(s_{1},s_{2})\}$ of the given subgraphs of interest, with $s_{1}$ denoting the level and $s_{2}$ the index of the subgraph in $H_1$. Note that the second index in the subscript of $B^{\bullet}_{\bullet,1}$ is used to denote the subgraph in $g_1$, whereas $B^{\bullet}_{\bullet,2}$ will be used to indicate subgraphs in the hierarchy of $g_2$. Note that the possible indices of the seed graphs depend on the structure of $H_1$, and so we will explicitly tether these two in the sequel, writing $((g_1,H_1),S_1)$, and writing $\mathcal{HGS}_n^{\vec n}$ for the collection of all such feasible triples. The aim then is to \begin{itemize} \item[1.] First estimate the latent hierarchical structure $H_2$ of $g_2$; we will denote this estimate via $\widehat H_2$, and we will let $$\widehat B_{j,2}^i=\{v\in V(g_2)|\widehat H_2(v,i)=j\}.$$ Let the collection of all subgraphs in the partition defined by $\widehat H_2$ be denoted $\widehat B$. \item[2.] Compute dissimilarity measurements $$\boldsymbol{\Delta}:T_1\times \widehat B\mapsto[0,1]$$ between each subgraph of interest in $(g_1,H_1)$ and each subgraph defined by $\widehat H_2$. Ideally, $\boldsymbol{\Delta}$ is a function that can compute the dissimilarity between any graph of order up to $n$ and any graph of order up to $m$; the larger the value of $\boldsymbol{\Delta}$ between two graphs, the more dissimilar the networks. As $\boldsymbol{\Delta}$ needs to compute dissimilarities across graphs of different sizes and orders, it may encompass a collection of dissimilarity measures. \end{itemize} After a few preliminary definitions, we will be ready to define an HSN scheme. \begin{definition} For a $k$-level hierarchical graph $(g,H)$ and $i\in[k]$, let $\mathcal{T}_{g,i}^H$ denote the set of all total orderings of $B^i$.
Let $\mathcal{T}_{g}^H=\bigotimes_{i=1}^{k} \mathcal{T}_{g,i}^H$. \end{definition} \noindent For example, in the hierarchical subgraph depicted in Figure \ref{fig:hier_ex}, we have $$ \mathcal{T}_{g,2}^H=\left\{ \begin{bmatrix} B_{1}^2\\ B_{2}^2\\ B_{3}^2 \end{bmatrix}, \begin{bmatrix} B_{1}^2\\ B_{3}^2\\ B_{2}^2 \end{bmatrix}, \begin{bmatrix} B_{2}^2\\ B_{1}^2\\ B_{3}^2 \end{bmatrix}, \begin{bmatrix} B_{2}^2\\ B_{3}^2\\ B_{1}^2 \end{bmatrix}, \begin{bmatrix} B_{3}^2\\ B_{1}^2\\ B_{2}^2 \end{bmatrix}, \begin{bmatrix} B_{3}^2\\ B_{2}^2\\ B_{1}^2 \end{bmatrix} \right\} $$ where $$ \begin{bmatrix} B_{i}^2\\ B_{j}^2\\ B_{k}^2 \end{bmatrix}$$ is shorthand for the ordering $$ B_{i}^2> B_{j}^2> B_{k}^2. $$ An example element of $\mathcal{T}_{g}^H$ is given by $$\left([B^1_1],\, [B_{3}^2,B_{2}^2,B_{1}^2],\, [B^3_2,B^3_9,B^3_8,B^3_1,B^3_3,B^3_6,B^3_5,B^3_7,B^3_4]\right).$$ As in the case of vertex nomination, if vertex labels in $(g_2,H_2)$ are uninformative, then an HSN must account for the indistinguishability of subgraphs with isomorphic structure. To wit, we have the following definition: For a $k$-level hierarchical network $(g,H)$ and $i\in[k]$, $j\in[n_i]$, we define \begin{align*} \mathfrak{I}(B_j^i;g)= \{&\ell\in[n_i]\big|\, B_\ell^i\subset B_{(j)}^{i-1}\text{ and there exists }\\ &\text{ an automorphism }\sigma\text{ of }g\text{ with }\sigma(B^i_\ell)=B_j^i \}. \end{align*} \noindent These are indices of the elements of the partition at level $i$ that are indistinguishable from $B_j^i$ without further information/supervision, and it is sensible to require that an HSN rank all these subgraphs as equally interesting.
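The set $\mathfrak{I}$ can be made concrete with a brute-force sketch (pure Python, tiny graphs only). For simplicity the check below ignores the parent-block constraint $B_\ell^i\subset B_{(j)}^{i-1}$, and the function names are illustrative:

```python
from itertools import permutations

def is_automorphism(perm, n, edge_set):
    """Check that perm maps edges to edges and non-edges to non-edges."""
    return all(
        (frozenset((perm[u], perm[v])) in edge_set) == (frozenset((u, v)) in edge_set)
        for u in range(n) for v in range(u + 1, n))

def indistinguishable(n, edges, block_a, block_b):
    """True if some automorphism of ([n], edges) maps block_b onto block_a
    setwise, i.e., a brute-force test of membership in I(block_a; g)."""
    edge_set = {frozenset(e) for e in edges}
    target = set(block_a)
    return any({perm[v] for v in block_b} == target
               for perm in permutations(range(n))
               if is_automorphism(perm, n, edge_set))

# two single-edge blocks are interchangeable; a block of isolated vertices is not
edges = [(0, 1), (2, 3)]
assert indistinguishable(6, edges, [0, 1], [2, 3])
assert not indistinguishable(6, edges, [0, 1], [4, 5])
```

Enumerating all $n!$ permutations is of course only feasible for toy graphs; the point is to make the automorphism-orbit semantics of $\mathfrak{I}$ explicit.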
In vertex nomination, accounting for label uncertainty was achieved by means of an obfuscation function; in HSN, the uncertainty is not at the level of labels so much as at the level of subgraphs, and it can be handled by further requiring that the dissimilarity $\boldsymbol{\Delta}$ satisfy \begin{equation} \boldsymbol{\Delta}(B^{s_1}_{s_2,1},\cdot)\text{ is constant over graphs indexed by }\mathfrak{I}(\widehat B_{j,2}^i;g_2)\text{ for each }s=(s_1,s_2)\in S_1. \label{eq:D} \end{equation} We are now ready to define hierarchical subgraph nomination schemes formally. \begin{definition}[Hierarchical Subgraph Nomination Scheme (HSN)]\label{def:SGN} Let $((g_1,H_1),S_1)\in\mathcal{HGS}_n^{\vec n}$ and $((g_2,H_2),S_2)\in\mathcal{HGS}_m^{\vec m}$, and let $T_1$ be the parts of $H_1$ indexed by $S_1$. A \emph{hierarchical subgraph nomination scheme} is composed of two parts: \begin{itemize} \item[i.] An estimator $\widehat H_2=\widehat H_2(g_1,g_2,T_1)$ of the hierarchical clustering of $g_2$ provided by $H_2$; let the signature of $\widehat H_2$ be denoted $\hat m_2=(\hat m_{k,2})$; \item[ii.] A dissimilarity $ \boldsymbol{\Delta}$ satisfying Eq.\@ (\ref{eq:D}) which produces a ranking scheme $\Phi_{n,m}$, where $$\Phi_{n,m}\left(g_1,g_2,T_1,\widehat{H}_2 \right)\in\mathcal{T}_{g_2}^{\widehat H_2}. $$ For each level $i$ of $\widehat H_2$, the subgraphs are ordered in $\Phi_{n,m}$ via increasing value of $$\min_{s=(s_1,s_2)\in S_1} \boldsymbol{\Delta}(B^{s_1}_{s_2,1},\widehat B_{j,2}^i),$$ with ties broken in a fixed but arbitrary manner. \end{itemize} \end{definition} \subsection{Loss in HSN Schemes} \label{HSGNBEBO} The goal of HSN is to effectively query $g_2$ given limited training resources from $g_1$. Given the resources required for a user to verify the interestingness of the returned subgraphs, we seek to maximize the probability that a priori unknown subgraphs of interest in $g_2$ are close to the top of the returned rank list.
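The ranking step in part \emph{ii.} of Definition \ref{def:SGN} reduces, for a fixed level, to sorting the candidate blocks by their smallest dissimilarity to any training subgraph. A minimal Python sketch (the dissimilarity values below are hypothetical placeholders, not the output of a real $\boldsymbol{\Delta}$):

```python
def hsn_rank(train_ids, candidate_ids, delta):
    """Order candidate blocks by increasing min_{s in train} delta[(s, j)],
    with ties broken in a fixed (index-based) manner."""
    score = {j: min(delta[(s, j)] for s in train_ids) for j in candidate_ids}
    return sorted(candidate_ids, key=lambda j: (score[j], j))

# hypothetical dissimilarities between 2 training subgraphs and 3 candidates
delta = {("s1", 1): 0.9, ("s1", 2): 0.2, ("s1", 3): 0.5,
         ("s2", 1): 0.4, ("s2", 2): 0.8, ("s2", 3): 0.6}
print(hsn_rank(["s1", "s2"], [1, 2, 3], delta))  # [2, 1, 3]
```

Candidate 2 is ranked first because its best (smallest) dissimilarity to a training subgraph, $0.2$, is the smallest among the candidates.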
For evaluation purposes, it is necessary (at least in theory) to be able to compare the true-but-unknown subgraphs of interest in $((g_2,H_2),S_2)\in\mathcal{HGS}_m$ with elements of the HSN ranked list. In order to account for the fact that the dissimilarity used to construct the HSN scheme may be mis-specified (i.e., it may not capture the true notion of interest exactly), the dissimilarity for verification will be denoted $\boldsymbol{\Delta}_E$ (the dissimilarity used for evaluation that defines the true but unknown subgraphs of interest), though we do not discount the possibility that $\boldsymbol{\Delta}_E=\boldsymbol{\Delta}$ for any given scheme. A logical loss function for HSN is motivated by the concept of precision in information retrieval. As the goal of HSN is to efficiently query large networks for structures of interest, at a given level $k$, considering precision at $i$ for $1<i\ll \hat m_{k,2}$ enables us to model the practical loss associated with using an HSN scheme to search for $\{B^{s_1}_{s_2,2}\}_{(s_1,s_2)\in S_2}$ in $(g_2,H_2)$ given limited resources. There are two sources of error for a given scheme $\Phi_{n,m} =(\widehat H_2,\boldsymbol{\Delta})$: \begin{itemize} \item[i.] The error in approximating $H_2$ via $\widehat H_2$; \item[ii.] Given $\widehat H_2$, the potential mismatch between $\boldsymbol{\Delta}$ and $\boldsymbol{\Delta}_E$. \end{itemize} Our loss function will account for these error sources as follows. Let $F$ be a distribution supported on $\mathcal{HGS}_n\times \mathcal{HGS}_m$, and let $$ ((G_1,H_1),(G_2,H_2))\sim F. $$ Consider fixed subgraph-of-interest index sets $S_1$ for $G_1$ and $S_2$ for $G_2$, and let $T_1$ (resp., $T_2$) be those subgraphs in $(G_1,H_1)$ (resp., $(G_2,H_2)$) indexed by $S_1$ (resp., $S_2$).
Let $\Phi_{n,m}=(\widehat H_2,\boldsymbol{\Delta})$ be an HSN scheme and let the ranking provided at level $k$ of $\Phi_{n,m}$ be denoted via $\Phi_{n,m}^k$ (with the implicit assumption that this is an empty list if $\hat m_{k,2}=0$). \begin{definition}[HSN loss function; level-$(i,k)$ error] \label{def:lossfcn} With setup as above, for $i,k<n$ we define the \emph{level-$(i,k)$ nomination loss with threshold $t>0$ under dissimilarity $\boldsymbol{\Delta}_E$} via (where to ease notation, we will use $\Phi_{n,m}^k$ for $\Phi_{n,m}^k(g_1,g_2,T_1,\widehat H_2)$ and $\Phi_{n,m}^k[j]$ for the $j$-th ranked subgraph in this list) $$\ell_{i,k,t}:=\ell_{i,k,t}(\Phi_{n,m},g_1,H_1,S_1,g_2,H_2,S_2,\boldsymbol{\Delta}_E) $$ where \begin{align} \label{eq:lossfcn} \ell_{i,k,t}:=\begin{cases}\frac{1}{\min(i,\hat{m}_{k,2})} \sum\limits_{j=1}^{\min(i,\hat{m}_{k,2})}\mathds{1}\left\{ \bigcap_{(s_1,s_2)\in S_2} \{\boldsymbol{\Delta}_{E}(\Phi_{n,m}^k[j],B^{s_1}_{s_2,2})>t\}\right\}&\text{ if }\hat m_{k,2}>0\\ 1&\text{ if }\hat m_{k,2}=0 \end{cases} \end{align} For distribution $F$ as above, the \emph{level-$(i,k)$ error with threshold $t>0$ under dissimilarity $\boldsymbol{\Delta}_E$} of $\Phi_{n,m}$ for recovering $S_2$ is defined to be $L_{i,k,t}=\mathbb{E}_{F}(\ell_{i,k,t})$.
Letting $\mathfrak{H}_{n,m}$ be the collection of all HSN schemes (i.e., the set of all ordered pairs of estimators and dissimilarities $(\widehat H_2,\boldsymbol{\Delta})$), the Bayes optimal level-$(i,k)$ scheme with threshold $t>0$ under dissimilarity $\boldsymbol{\Delta}_E$ is defined to be any scheme that achieves the Bayes error, which here is defined to be $$\min_{\Phi_{n,m}\in \mathfrak{H}_{n,m}} \mathbb{E}_{F}(\ell_{i,k,t}(\Phi_{n,m})).$$ \end{definition} \noindent While the number of possible dissimilarities is indeed uncountably infinite, there are nonetheless only finitely many rankings that can be achieved via a combination of $\widehat H_2$ and $\boldsymbol{\Delta}$ (hence $\min$ rather than $\inf$ in the Bayes error definition), and so the Bayes error is indeed achieved by at least one such pair. \subsection{User-in-the-loop Supervision} \label{sec:user} Interactive machine learning, via incorporating user-in-the-loop supervision, can lead to an enhanced user experience and better downstream inference performance \cite{amershi2014power}. In subgraph nomination, the need for user-in-the-loop supervision can be understood as follows. Consider nominating in $((g_2,H_2),S_2)$ at level $k$, where $S_2=\{(k,\ell)\}$, and $|\mathfrak{I}(B^{k}_{\ell,2};g_2)|>1$. Even if $\widehat H_2$ agrees with $H_2$ at level $k$, there are multiple subgraphs in $(g_2,H_2)$ that are isomorphic to the unknown subgraph of interest, and the HSN scheme has no information available to distinguish these. In this setting, it is reasonable to model the relative order of the elements in $\mathfrak{I}(B^{k}_{\ell,2};g_2)$ in an HSN scheme as (effectively) uniformly random.
For example, if $|\mathfrak{I}(B^{k}_{\ell,2};g_2)|=c$, then considering the top $h$ of our nomination list ($h<c$), a scheme that identifies the correct structure for $B^{k}_{\ell,2}$ would still have probability $$ \frac{\binom{c-1}{h}}{\binom{c}{h} }=\frac{c-h}{c}=1-\frac{h}{c} $$ of not finding $B^{k}_{\ell,2}$. This can be mitigated by the following user-in-the-loop system: \begin{itemize} \item[i.] The user can take a limited number of single vertex inputs and can output whether that vertex is interesting or not (i.e., part of an interesting subgraph). This output can be modeled as error-free (oracle-user) or errorful. \item[ii.] The supervision can then be used to re-rank the subgraphs in the nomination scheme. \end{itemize} While practically, we do not suspect that there will be many perfect matches to the subgraphs of interest in the hierarchy provided by $H_2$, it may be the case that there are many subgraphs ``close'' to the subgraph of interest in the (possibly lossy) estimate $\widehat H_2$, in which case this supervision is equally necessary. What follows is a formalization of the above heuristic. We first define the concept of a user-in-the-loop, noting that our definition is not the most general, as our analysis focuses on binary users; i.e., users that can only output $1$ (yes) or $0$ (no) for the interestingness of a vertex. In general, one might allow the user a rating system with more levels, or even a continuous space of responses. \begin{definition}[HSN User] Let $(g_1,H_1)\in\mathcal{HGS}_n$ with indices of interest $S_1$ and $(g_2,H_2)\in\mathcal{HGS}_m$ with indices of interest $S_2$. Let $(\theta,\gamma)\in [0,1]\times[0,1]$ and $t\in\mathbb{Z}_{>0}$ with $t<m$. We define the capacity-$t$ \emph{HSN user-in-the-loop} for $((g_2,H_2),S_2)$ with parameters $(\theta,\gamma)$ (denoted $U_t$) as follows: \begin{itemize} \item[i.]
Letting $\Omega$ be the underlying sample space, the user is a function $$U_t:\mathcal{T}_{V_2,t}\times\Omega\mapsto\{0,1\}^t$$ where $\mathcal{T}_{V_2,t}$ is the set of ordered $t$-tuples of distinct elements from $V_2=V(g_2)$. Note that dependence on $\Omega$ will be suppressed when appropriate. \item[ii.] For each $\eta\in\mathcal{T}_{V_2,t}$ and $x\in\{0,1\}^t$, we define \begin{align} \mathbb{P}(U_t(\eta)=x)=\prod_{i=1}^t \bigg[\bigg(&\mathds{1}\{\eta_i\in \bigcup_{(s_1,s_2)\in S_2}B_{s_2,2}^{s_1}\} \theta^{x_i}(1-\theta)^{1-x_i} \bigg)\notag\\ \label{eq:user} + &\bigg(\mathds{1}\{\eta_i\!\notin\! \bigcup_{(s_1,s_2)\in S_2}B_{s_2,2}^{s_1}\} \gamma^{x_i}(1-\gamma)^{1-x_i}\bigg)\bigg] \end{align} In essence, the user has independent binary components where the Bernoulli success probabilities depend only on the membership (or lack thereof) in a subgraph of interest. \end{itemize} For a given $((g_2,H_2),S_2)\text{ and }t$, we define the set of all such users by $\mathfrak{U}_t=\mathfrak{U}_{((g_2,H_2),S_2),t}$. \end{definition} \paragraph{} Effectively, the HSN user-in-the-loop outputs a sequence of $\{0,1\}$ values, one for each vertex in a training set $\eta$. An output of 1 is considered the positive answer, i.e. an interesting vertex; for vertices in $\cup_{(s_1,s_2)\in S_2}B_{s_2,2}^{s_1}$, this is a correct answer, and an error otherwise. An output of 0 is the negative answer meaning that this vertex is not of interest; for vertices not in $\cup_{(s_1,s_2)\in S_2}B_{s_2,2}^{s_1}$, this is a correct answer and is an error otherwise. This allows us to simultaneously model oracle users ($\theta=1$, and $\gamma=0$) and errorful users ($\theta<1$, and $\gamma>0$). The capacity of the user (i.e., $t$) allows us to model the practical setting in which user-in-the-loop resources are costly and only limited supervision is available. Users can be incorporated into HSN schemes as follows. 
\begin{definition}[User-Aided HSN Scheme (UHSN)] Consider the setting in Definition \ref{def:SGN}. Let $t< m$ and $U_t\in\mathfrak{U}_t$. Consider an HSN scheme $\Phi_{n,m}=(\widehat H_2,\boldsymbol{\Delta})$, and let $\eta$ be an ordered $t$-tuple of distinct elements of $V_2=V(g_2)$. For each $k$, $U_t$ acts on $\Phi_{n,m}^k$ as follows: \begin{itemize} \item[i.] Given $\eta$, $U_t$ is distributed according to Eq.\@ (\ref{eq:user}). Let the output of the user process be denoted $x\in\{0,1\}^t$. \item[ii.] Let \begin{align*} I_t:=&\{\ell\in [\hat m_{k,2}]\text{ s.t. more than half of the vertices in }\eta\cap\widehat B_{\ell,2}^{k}\\ &\text{ were labeled interesting (i.e., as 1) by the user}\}\\ N_t:=&\{\ell\in [\hat m_{k,2}]\text{ s.t. at least half of the vertices in }\eta\cap\widehat B_{\ell,2}^{k}\\ &\text{ were labeled as not interesting (i.e., as 0) by the user}\} \end{align*} If $\eta\cap\widehat B_{\ell,2}^k=\emptyset$, then $\ell\notin I_t\cup N_t$. \item[iii.] Let the elements of $I_t$ be indexed via $(\mathfrak{i}_1,\cdots,\mathfrak{i}_{|I_t|})$ where (according to $\Phi_{n,m}^k$) $$ \widehat B_{\mathfrak{i}_1,2}^{k}> \widehat B_{\mathfrak{i}_2,2}^{k}>\cdots> \widehat B_{\mathfrak{i}_{|I_t|},2}^{k}. $$ Similarly index $N_t$ via $(\mathfrak{n}_1,\cdots,\mathfrak{n}_{|N_t|})$, and $M_t:=[\hat m_{k,2}]\setminus (I_t\cup N_t)$ (the unsupervised subgraph indices) via $(\mathfrak{m}_1,\cdots,\mathfrak{m}_{|M_t|})$. The user-improved ranking is then given by \begin{align*} \Phi_{n,m}^{k,U}= \bigg( \widehat B_{\mathfrak{i}_1,2}^{k}&> \widehat B_{\mathfrak{i}_2,2}^{k}>\cdots> \widehat B_{\mathfrak{i}_{|I_t|},2}^{k}> \widehat B_{\mathfrak{m}_1,2}^{k}> \widehat B_{\mathfrak{m}_2,2}^{k}>\cdots\\ &> \widehat B_{\mathfrak{m}_{|M_t|},2}^{k}> \widehat B_{\mathfrak{n}_1,2}^{k}> \widehat B_{\mathfrak{n}_2,2}^{k}>\cdots> \widehat B_{\mathfrak{n}_{|N_t|},2}^{k} \bigg).
\end{align*} \end{itemize} \end{definition} \begin{remark} \emph{ We note here the possibility that even an oracle user may introduce large errors into a ranking scheme based on an approximate $\widehat H_2$. Indeed, even if $\widehat B_{\ell,2}^k$ is very similar to an interesting subgraph in $g_1$ according to $\boldsymbol{\Delta}$, it is still possible that uninteresting vertices are chosen to present to the user, in which case $\ell\in N_t$. This can be mitigated by choosing multiple vertices from several of the top-ranked subgraphs for the user to evaluate.} \end{remark} \begin{remark} \emph{ For iterative applications of the user-in-the-loop, we can sequentially run the user-in-the-loop, first on the output of $\Phi_{n,m}$, then on $\Phi_{n,m}^{U}$ (with $t$ new user training points), and so on.} \end{remark} \subsubsection{The (Theoretical) Benefit of the User-in-the-loop} \label{sec:use-theory} \paragraph{} We now turn our attention to the theoretical benefit of the user-in-the-loop in the setting of HSN. As in the motivating example for the user, the setting for this section will be as follows. Letting $(g_1,H_1)\in\mathcal{HGS}_n$ and $(g_2,H_2)\in\mathcal{HGS}_m$ with respective interesting index sets $S_1$ and $S_2$, consider an HSN scheme $\Phi_{n,m}$ with $\widehat H_2=H_2$. Consider the nomination provided by $\Phi^k_{n,m}$ and the simple setting in which $S_2=\{(k,\ell)\}$ with $|\mathfrak{I}(B_{\ell,2}^k;g_2)|=c>1$. In this setting, it is reasonable to model the relative order of the elements of $\mathfrak{I}(B_{\ell,2}^k;g_2)$ in $\Phi^k_{n,m}$ as (effectively) uniformly random (though in practice, they appear in a fixed, arbitrary order). \begin{figure}[t!]
\centering \includegraphics[width=1\textwidth]{Images/HSNhtc1.pdf} \caption{For $c=50$ we plot (for $h,t\in\{1,2,\ldots,49\}$) the relative loss $R(c,t,h,p,q)$ in the setting of Theorem \ref{thm:losspq} part (iii) in the left panel and $\min(R(c,t,h,p,q),1)$ in the right panel.} \label{fig:numericL} \end{figure} \begin{theorem} \label{thm:losspq} Given the setup above where the scheme $\Phi_{n,m}$ effectively ranks the elements of $\mathfrak{I}(B_{\ell,2}^k;g_2)$ uniformly at random at the top of the rank list, let $t\leq c<m_k$ be the capacity of the user-in-the-loop. Consider any training set for the user where $\eta$ contains exactly one element of each $\Phi_{n,m}^k[j]$ with $1\leq j\leq t$. Let $E_h$ denote the event that $\Phi_{n,m}^U$ ranks $B_{\ell,2}^k$ not in the top $h$. Considering $c>t,h$, we have the following. \begin{itemize} \item[i.] Let $U_t$ be an oracle user; then $\mathbb{P}(E_h)=\max(1-\frac{h+t}{c},0).$ \item[ii.] Let $(\theta,\gamma)=(1-p,0)$ for $p<1$; then \begin{itemize} \item If $t+h\leq c$, then $\mathbb{P}(E_h)=1-\frac{h}{c}-(1-p)\frac{t}{c}$; note that this is strictly less than $1-h/c$ for all $p\in(0,1)$. \item If $t+h> c$, then $\mathbb{P}(E_h)=\frac{tp}{c}$; note that if $p<(c-h)/t$, this is strictly less than $1-h/c$. \end{itemize} \item[iii.] Let $(\theta,\gamma)=(1-p,q)$ for $p<1$ and $q>0$. Then, letting $F(i; n,p)$ be the CDF of a Binomial$(n,p)$ random variable evaluated at $i$, we have \begin{itemize} \item $\text{ If }h+t\leq c,\text{ and }\,t\leq h$, then $\mathbb{P}(E_h)=1-\frac{h}{c}-\frac{(1-p-q)t}{c}$; note that if $p+q<1$, this is less than $1-\frac{h}{c}$. \item $\text{ If }h+t> c,\text{ and }\,t\leq h$, then $\mathbb{P}(E_h)=\frac{pt}{c}+ \frac{1}{c}\sum_{i=1}^{c-h}F(i-1;t,1-q)$; this is upper bounded by $(p+q)t/c$ and if $(p+q)<(c-h)/t$, this is strictly less than $1-h/c$.
\item $\text{ If }h+t\leq c,\text{ and }\,t> h$, then \begin{align*} \mathbb{P}(E_h)=1-&\frac{h}{c}-\frac{(1-p)t}{c}+\frac{1-p}{c}\sum_{i=1}^{t-h}F(i-1;h+i-1,1-q)\\ &+\frac{1}{c}\sum_{i=t-h+1}^tF(i-1;t,1-q) \end{align*} Note that a rough upper bound of this is given by $1-h/c+qt/c-(1-p)h/c$, and if $qt<(1-p)h$, this is strictly less than $1-h/c$. \item $\text{ If }h+t> c,\text{ and }\,t> h$, then \begin{align*} \mathbb{P}(E_h)=\frac{pt}{c}&+\frac{1-p}{c}\sum_{i=1}^{t-h}F(i-1;h+i-1,1-q)+\frac{1}{c}\sum_{i=t-h+1}^{c-h} F(i-1;t,1-q) \end{align*} Note that a rough upper bound of this is given by $\frac{(1+q)t-(1-p)h}{c}$, and if $(1+q)t+ph<c$, this is strictly less than $1-h/c$. \end{itemize} \end{itemize} \end{theorem} \noindent We, perhaps surprisingly, see that there are $p$ and $q$ values that guarantee improvement (over the user-in-the-loop-free setting) in all cases. The conditions in part (iii) bear further consideration. The interaction of $p,q,h,t$ here is nuanced, and is most easily analyzed numerically, as is shown in Figures \ref{fig:numericL} and \ref{fig:numericL2}. Letting $L(c,t,h,p,q)$ denote the loss from Theorem \ref{thm:losspq} part (iii) (where we can interpret this in the context of Definition \ref{def:lossfcn} by considering $\boldsymbol{\Delta}$ as a $0/1$ oracle dissimilarity function), for $c=50$ we plot the relative loss improvement, \begin{equation} \label{eq:R} R(c,t,h,p,q)=\frac{L(c,t,h,p,q)-(1-h/c)}{1-h/c}, \end{equation} for $h$ between 1 and 49 (on the $y$-axis), and $t$ between 1 and 49 (on the $x$-axis). Different panels correspond to different values of $p$ (varying by row) and $q$ (varying by column). Note that in Figure \ref{fig:numericL}, we plot the relative loss $R(c,t,h,p,q)$ in the left panel and $\min(R(c,t,h,p,q),1)$ in the right panel; this truncation aids in distinguishing the areas of improvement from those where $R(c,t,h,p,q)>0$.
In each plot, darker red areas correspond to parameter settings where the loss is improved with user-supervision, and darker blue areas to parameter settings where the loss is increased with user-supervision (due to the user error). A few trends are clear from the figures. Unsurprisingly, supervision becomes detrimental (i.e., more supervision leads to more loss) as $p$ and $q$ increase (especially once $q\geq 0.5$), although the loss appears to be more tolerant of higher values of $p$ than it is of $q$. In all cases, large values of $t$ and $h$ simultaneously lead to training becoming less effective, though $h<t$ seems to yield resilience to larger user-supervised loss for all $p,q$ pairs. \begin{figure}[t!] \centering \includegraphics[width=1\textwidth]{Images/HSNhtc2.pdf} \caption{For $c=50$ we plot (for $h,t\in\{1,2,\ldots,49\}$) $\min(R(c,t,h,p,q),1)$ for the loss function in the setting of Theorem \ref{thm:losspq} part (iii) and $R$ defined in Equation (\ref{eq:R}). Different panels correspond to different values of $p$ and $q$.} \label{fig:numericL2} \end{figure} In the event that the hierarchical clustering is noisily recovered in $g_2$ (i.e., $\widehat H_2\neq H_2$), even the supervision provided by an oracle user-in-the-loop may be worse than no extra supervision. Indeed, consider the setting where $S_2=\{(k,\ell)\}$ and $$\alpha=\frac{|\widehat B^k_{\ell,2}\cap B^k_{\ell,2}| }{| \widehat B^k_{\ell,2}| };\quad \beta=\min_{i\neq \ell}\frac{|\widehat B^k_{i,2}\cap B^k_{\ell,2}| }{| \widehat B^k_{i,2}| },$$ then, adopting the notation and setting above, the probability that $\widehat B^k_{\ell,2}$ is not ranked in the top $h$ after oracle $U_t$ supervision is bounded below by the probabilities in part (iii) of Theorem \ref{thm:losspq} with $p=\alpha$ and $q=\beta$. In particular, there are values of $\alpha$ and $\beta$ under which this error is worse than $1-h/c$ (the error sans additional supervision).
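The quantities in Theorem \ref{thm:losspq} are straightforward to compute directly. The Python sketch below (function names are ours) implements the part (iii) loss and the relative loss $R$ of Equation (\ref{eq:R}), and cross-checks the oracle case of part (i) by enumerating the uniform initial position of the true block under the UHSN re-ranking:

```python
from math import comb

def binom_cdf(i, n, p):
    """F(i; n, p): CDF of a Binomial(n, p) random variable at i."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(i + 1))

def loss_iii(c, t, h, p, q):
    """P(E_h) from Theorem part (iii) for a (theta, gamma) = (1-p, q) user."""
    if t <= h and h + t <= c:
        return 1 - h / c - (1 - p - q) * t / c
    if t <= h:  # h + t > c
        return p * t / c + sum(binom_cdf(i - 1, t, 1 - q)
                               for i in range(1, c - h + 1)) / c
    head = (1 - p) / c * sum(binom_cdf(i - 1, h + i - 1, 1 - q)
                             for i in range(1, t - h + 1))
    if h + t <= c:  # t > h
        return (1 - h / c - (1 - p) * t / c + head
                + sum(binom_cdf(i - 1, t, 1 - q)
                      for i in range(t - h + 1, t + 1)) / c)
    return p * t / c + head + sum(binom_cdf(i - 1, t, 1 - q)
                                  for i in range(t - h + 1, c - h + 1)) / c

def rel_loss(c, t, h, p, q):
    """R(c, t, h, p, q) of Equation (R)."""
    base = 1 - h / c
    return (loss_iii(c, t, h, p, q) - base) / base

def oracle_error(c, t, h):
    """Part (i) by enumeration: the true block starts at a uniform position j;
    if queried (j <= t) it moves to rank 1, otherwise the t queried
    (uninteresting) blocks drop below it and its rank becomes j - t."""
    bad = sum((1 if j <= t else j - t) > h for j in range(1, c + 1))
    return bad / c

# in the limit p = q = 0, part (iii) reduces to the oracle formula of part (i)
assert abs(loss_iii(50, 5, 10, 0, 0) - oracle_error(50, 5, 10)) < 1e-9
assert abs(rel_loss(50, 5, 10, 0.1, 0.1) - (-0.1)) < 1e-9
```

For instance, with $c=50$, $t=5$, $h=10$, $p=q=0.1$, the first case of part (iii) gives $L=1-0.2-0.8\cdot 0.1=0.72$, a relative improvement of $R=-0.1$ over the unsupervised error $1-h/c=0.8$.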
\section{Simulations and Real Data Experiments} We now provide empirical evidence for the theory we have outlined. Namely, we will show through simulations and real data examples the impact of a user-in-the-loop. We first consider simulations where graphs are drawn from an HSBM distribution. In this setting we evaluate our methodology on the task of nominating subgraphs with a motif similar to that of our subgraph of interest. We then consider 57 pairs of human neural connectomes for which brain regions (i.e., communities) are well defined and the community of each voxel is known. We evaluate our methodology on the task of nominating the same brain region within a given pair of connectomes with varying levels of user-in-the-loop supervision. \subsection{Simulations} \label{sims} In this section we first provide a complete description of the model and procedures employed to test our theory in a simulated data setting. We then discuss the results and how they fit within our theory. \subsubsection{Model description} \label{sec:model} \begin{figure}[t!] \centering \includegraphics[trim={0 3mm 3mm 3mm},clip, width=0.7\textwidth]{Images/Sim_s0.png} \caption{We consider 2000 Monte Carlo replicates of distributions as in Section \ref{sec:model}, and from each distribution, we generate 100 networks. We use Method 1, and performance with an oracle user (varying the number of replies from the user, i.e., the amount of supervision provided) is compared with the chance algorithm. In all cases, we plot the proportion of trials ($y$-axis) that ranked the correct block in the $n$th place ($x$-axis), which represents $1-L_n(\phi,B^*)$.} \label{fig:sim1-1} \end{figure} For our first example, we consider nominating within the Hierarchical Stochastic Blockmodel setting of \cite{lyzinski2015community}. We consider sampling from a 2-level HSBM that has 16 blocks in the first level and 3 sub-blocks in each of the first level blocks (so that this model can also be realized as a 48-block standard SBM).
Further, the blocks of the first level belong to one of three motifs $B_1,\ B_2$, or $B_3$, with constant cross-community connection probability $p=0.01$ for the first level. Formally, we are sampling from the following 2-level HSBM: \begin{itemize} \item[i.] There are $K_1=16$ blocks in the first level of the hierarchy, and the block sizes are i.i.d. $10\cdot\floor*{20+50\cdot Unif(0,1)}$; note that this differs slightly from the random block assignment HSBM defined previously. \item[ii.] The motifs $\mathbf{B}_i\in\mathbb{R}^{3\times 3}$ for $i=1,2,3$ are i.i.d. samples satisfying \[\mathbf{B}_i=X_i^TX_i\text{ where } X_i\in\mathbb{R}^{3\times 3}\text{ satisfies }X_i[\cdot,j]\stackrel{i.i.d.}{\sim} Dirichlet(1,1,1);\] \item[iii.] For each $j\in[16]\setminus\{9\}$, we have $\Lambda_2^{(j)}=\mathbf{B}_i$ independently with probability $1/3$ for $i=1,2,3$. $\Lambda_2^{(9)}$ is set to equal $\Lambda_2^{(1)}$; \item[iv.] In the second level of the hierarchy ($K_2^{(j)}=3$ for all $j$), the block sizes are defined via (where $\round{\cdot}_1$ rounds the number to the nearest tenth) \[|b^{(2)}_{(j,i)}|=|\{v\in V| b^{(1)}(v)=j,b^{(2)}(v)=i\}|\distas{i.i.d.}|b^{(1)}_{j}|\cdot\round*{Dirichlet\left(\omega \right)}_{1}(i), i< 3\] where the entries of $\omega=(\omega_1,\omega_2,\omega_3)$ are \[\omega_i\distas{i.i.d.} Unif(2,10).\] Lastly, we set \[|b^{(2)}_{(j,3)}|=|\{v\in V| b^{(1)}(v)=j,b^{(2)}(v)=3\}|=|b^{(1)}_{j}|-|b^{(2)}_{(j,1)}|-|b^{(2)}_{(j,2)}|\] \end{itemize} This particular parameterization is chosen to stress our HSGN framework: it draws from a distribution in which it is not uncommon for the motifs to be very similar to one another, for the block structure within a motif to be extremely subtle (i.e., close to flat), or for both to occur simultaneously. \begin{figure}[t!]
\centering \includegraphics[width=8cm]{Images/Sims_S0_4.png} \includegraphics[width=8cm]{Images/Sims_S0_3.png} \includegraphics[width=8cm]{Images/Sims_S0_2.png} \includegraphics[width=8cm]{Images/Sims_S0_1.png} \caption{We consider 2000 Monte Carlo replicates of distributions as in Section \ref{sec:model}, and from each distribution, we generate 100 networks. We use Method 1, and performance with an oracle user (varying the number of replies from the user, i.e., the amount of supervision provided) is compared with the chance algorithm. We plot the density of all trials ($y$-axis) that ranked the correct block in the $n$th place ($x$-axis).} \label{fig:sim1-2} \end{figure} The block structure was inferred by clustering via Gaussian mixture modeling after embedding, utilizing the {\bf R} package {\bf Mclust} \cite{fraley1999mclust} to cluster the graph we are nominating from into 8 sub-graphs. After that, we use two different methods to infer similarity. The first approximates $\boldsymbol{\Delta}$ via the value of the non-parametric test statistic of \cite{tang14:_nonpar} computed between re-embeddings of each inferred community (as in \cite{lyzinski2015community}); simulation results are shown in Figures \ref{fig:sim1-1} and \ref{fig:sim1-2}. We will call this \emph{Method 1} in the sequel. The second approximates $\boldsymbol{\Delta}$ between communities $i$ and $j$ via the scaled graph matching pseudo-distance \cite{fishkind2019alignment}, $$ 1-\frac{\min_{P\in\Pi(n)}\|A_i-PA_jP^T\|_F^2}{\frac{1}{n!}\sum_{P\in\Pi(n)} \|A_i-PA_jP^T\|_F^2 } $$ where either $A_i$ (the induced subgraph of community $i$) or $A_j$ has been appropriately padded as in \cite{FAP}; results are shown in Figures \ref{fig:sim2-1} and \ref{fig:sim2-2}. \begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth]{Images/Sim_s1.png} \caption{We consider 2000 Monte Carlo replicates of distributions as in Section \ref{sec:model}, and from each distribution, we generate 100 networks.
We use Method 2, and performance with an oracle user (varying the replies for the user; i.e., the amount of supervision provided) and the chance algorithm are compared. We plot the proportion of trials (y-axis) that ranked the correct block in the $n$-th place (x-axis), which represents $1-L_n(\phi,B^*)$.} \label{fig:sim2-1} \end{figure} We will call this \emph{Method 2} in the sequel. The user replies are those of an oracle. However, as discussed previously, since we do not perfectly recover the true network partition (we use an estimate of it rather than the partition itself), our user could still end up being errorful. In particular, the user could be asked to rank vertices that are not in the subgraph of interest. This exemplifies the behavior characterized by part III of Theorem \ref{thm:losspq}, albeit with a different $q$ for each of the blocks. Another significant difference in the simulation scheme is that the nomination process takes the first affirmative reply as the correct community and returns the others in their original order. \subsubsection{Results \& Discussion} In this section we go over the results of the experiments described above. The aim is to bolster the theoretical claims with experimental evidence; as such, we pay special attention to the features highlighted by the theory. We start with Method 1, which uses the test statistic from \cite{tang14:_nonpar} to estimate the dissimilarity function. \begin{figure}[t!] \centering \includegraphics[width=8cm,height=4.5cm]{Images/Dist_Sim_SGM_1.png} \includegraphics[width=8cm,height=4.5cm]{Images/Dist_Sim_SGM_2.png} \includegraphics[width=8cm,height=4.5cm]{Images/Dist_Sim_SGM_3.png} \includegraphics[width=8cm,height=4.5cm]{Images/Dist_Sim_SGM_4.png} \caption{We consider 2000 Monte Carlo replicates of distributions as in Section \ref{sec:model}, and from each distribution, we generate 100 networks.
We use Method 2 and performance with an oracle user (varying the replies for the user; i.e., the amount of supervision provided) and the chance algorithm are compared. We plot the density of all trials (y-axis) that ranked the correct block in the $n$-th place (x-axis).} \label{fig:sim2-2} \end{figure} In Figure \ref{fig:sim1-1} we can see that the user training provides an improvement over the original algorithm (``No user''), as suggested by Theorem \ref{thm:losspq}, with more training often yielding superior performance. Moreover, especially for small $x$-values, the algorithm outperforms a chance ranking of the obtained blocks. Interestingly, we see that more user-supervision is not always better. Indeed, as mentioned previously, in our regime of lossy block recovery additional supervision can be deleterious. Figure \ref{fig:sim1-2} elucidates this finding further from the perspective of the distribution of the position of the correct block. We notice that as user-supervision increases, the distributions are shifted towards putting the correct block in the first position. We also notice the trend predicted by Figure \ref{fig:numericL2}: as we increase $h$ (the nomination error level), the ideal value of $t$ (the number of user-provided replies) decreases, and less supervision is preferred. We see this as the curves with less supervision overtake those with more supervision in the plots of $1-L_n(\phi,B^*)$. However, we note that here the value of $q$ is not constant, and hence the minimizers of the loss function do not adhere to the figure exactly. In Figures \ref{fig:sim2-1} and \ref{fig:sim2-2} we see similar trends using Method 2. Here the main contribution to the error of the methods is the misclustering of the vertices in the initial GMM step, and both dissimilarity estimates provide good estimates of the latent $\boldsymbol{\Delta}$. In the real data example considered below, there is a more pronounced differentiation across methods.
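To make the graph matching pseudo-distance of Method 2 concrete, the following minimal Python sketch computes it by brute force over all permutations for toy, equal-sized, unweighted graphs. This is an illustration only: it is not the code used in our experiments, which rely on approximate matching (with padding as in \cite{FAP}) for graphs of realistic size.

```python
# Brute-force illustration of the scaled graph matching pseudo-distance
# (toy version: equal-sized unweighted graphs, exhaustive search over all
# permutations; real pipelines use approximate matching instead).
from itertools import permutations

def frob_sq(A, B):
    """Squared Frobenius norm of A - B for equal-sized square matrices."""
    n = len(A)
    return sum((A[i][j] - B[i][j]) ** 2 for i in range(n) for j in range(n))

def conjugate(A, perm):
    """Return P A P^T, i.e., A with rows and columns reordered by perm."""
    n = len(A)
    return [[A[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

def gm_similarity(A_i, A_j):
    """1 - min_P ||A_i - P A_j P^T||_F^2 / mean_P ||A_i - P A_j P^T||_F^2."""
    costs = [frob_sq(A_i, conjugate(A_j, p))
             for p in permutations(range(len(A_i)))]
    return 1 - min(costs) / (sum(costs) / len(costs))

path_a = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # 3-vertex path, center vertex 1
path_b = [[0, 0, 1], [0, 0, 1], [1, 1, 0]]  # the same path, relabeled
print(gm_similarity(path_a, path_b))  # -> 1.0 (isomorphic graphs)
```

Isomorphic graphs attain the maximal value $1$ (the minimum matching cost is $0$), while a graph whose matching cost is the same under every permutation, such as a path matched against a triangle, attains the value $0$.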
\subsection{Subgraph nomination in BNU1 connectomes} \label{expr} Constraints on data acquisition resources can be eased through automation. For example, in the study of human connectomes, one of the more labour-intensive steps is classifying different parts of the brain. One such classification task is finding corresponding regions of interest across hemispheres in a human connectome. This task is time and human resource intensive when done manually, and automation is essential for achieving a high throughput \cite{gray2012magnetic}. To illustrate our HSN methodology further, we will then consider the task of nominating brain regions across hemispheres in the DT-MRI derived brain networks in the BNU1 database \cite{zuo2014open}. The spatial image of the brain includes information about its structure; in other words, the neural fibers that run through different areas of the brain. A graph of the subject's brain is then created with the following definitions: nodes or vertices correspond to different spatial units (usually the brain is divided into cubic sections called volumetric pixels, or voxels), and edges are weighted based on the number of neural fibers that run through two regions. One may hypothesize that there is a sense of structural symmetry between paired regions across hemispheres, and discovery of this pairing is the inference task for this HSN application. Here, we apply the user-aided subgraph nomination scheme to a connectomic dataset. The data we apply this to is a set of 114 neuronal networks---two repeated scans for each of 57 human subjects---derived from the BNU1 connectome dataset \cite{zuo2014open}. Each brain scan sections the brain into one thousand voxels, with weighted edges between them indicating the number of neuronal fiber bundles that are shared, indicating connectedness in a structural but not necessarily functional sense.
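For intuition, the graph construction just described can be sketched in a few lines. This is a hypothetical illustration with made-up voxel labels; it is not the actual BNU1 processing pipeline, which involves tractography and registration steps omitted here.

```python
# Hypothetical sketch of assembling a weighted connectome graph: each
# fiber contributes one unit of weight to the edge between the voxels
# (or regions) containing its two endpoints. Labels are made up; this is
# not the BNU1 pipeline itself.
from collections import defaultdict

def build_connectome(fiber_endpoints):
    """fiber_endpoints: iterable of (voxel_u, voxel_v) pairs, one per fiber.
    Returns a symmetric weighted adjacency structure (dict of dicts)."""
    adj = defaultdict(lambda: defaultdict(int))
    for u, v in fiber_endpoints:
        if u == v:
            continue  # ignore fibers starting and ending in the same voxel
        adj[u][v] += 1
        adj[v][u] += 1
    return adj

fibers = [("v1", "v2"), ("v1", "v2"), ("v2", "v3"), ("v3", "v3")]
adj = build_connectome(fibers)
print(adj["v1"]["v2"])  # -> 2
print(adj["v2"]["v3"])  # -> 1
```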
Additionally, each voxel is annotated with its membership in the left or right hemisphere, its membership in one of the 71 neuronal regions \cite{desikan2006automated}, whether it is grey or white matter, and its XYZ coordinates in the partially registered scan. The networks thus have $\approx 1000$ vertices, with pre-defined neuronal regions varying in size between 2 and 100 vertices. As in the simulation, although the distributional properties of paired regions across hemispheres are often similar (i.e., similar motifs), there are no guarantees that the numbers of vertices that make up each region are equal. Errors from this (and from misclustering) are magnified in smaller subgraphs, especially since there is less information about the motif in the small subgraph setting. The aforementioned sources of error are mitigated by only considering paired regions that have at least 10 vertices in each hemisphere and by using similarity metrics (Methods 1 and 2) that do not depend on the size of a region. \begin{figure}[t!] \centering \includegraphics[width=8cm]{Images/block.png} \includegraphics[width=8cm]{Images/top25.png} \caption{With perfect knowledge of the block structure of the brain (the division into 71 regions provided by the data) and using Method 1, (left) we plot on the $y$-axis the proportion of subjects (averaged across all blocks considered as the block-of-interest in the left hemisphere) that ranked the correct block in the right hemisphere in the $n$-th place (x-axis); (right) we plot on the $y$-axis the proportion of subjects (averaged across all blocks considered as the block-of-interest in the left hemisphere) for which we find the total proportion of the block of interest in the top 25 vertices nominated (x-axis).
} \label{fig:blockleveltrue} \end{figure} Our procedure can then be described as follows, considering each pair of sufficiently large matched regions ($\geq 10$ vertices in each hemisphere) separately. First, we use the information given by spectrally embedding the adjacency matrix of the connectome (and the collected spatial coordinate data in Figure \ref{fig:blocklevelXYZ}) to infer the subgraph structure in the right hemisphere via clustering by GMM \cite{mclust}; using the region-of-interest in the left hemisphere as our training, we next use Methods 1 and 2 to rank the inferred communities, providing different estimates of $\boldsymbol{\Delta}$. We then supply a user-in-the-loop with a vertex (or a few vertices) from each of the top $k$ communities to re-rank the nomination list. We plot the performance of our algorithms with perfect knowledge of the block structure of the brain (the division into 71 regions provided by the data) and using Method 1 in Figure \ref{fig:blockleveltrue}. We plot: \begin{itemize} \item (Left) on the $y$-axis the proportion of subjects (averaged across all blocks considered as the block-of-interest in the left hemisphere) that ranked the correct block in the right hemisphere in at worst the $x$-th place (x-axis). \item (Right) on the $y$-axis the proportion of subjects for which we find the total proportion of the block of interest in the top 25 vertices nominated (x-axis). \end{itemize} Note that an artifact of the metric in the right panel is that we may never find a complete block whose size is larger than 25; this is why we see a drop near 90$\%$. As ideal performance in the left and right panels corresponds to the function $f(x)\equiv 1$, this figure reinforces the intuition that, given high fidelity clusters, our ranking procedure is significantly better than chance and benefits strongly from user-in-the-loop supervision.
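The top-25 metric of the right panel can be sketched as follows (an illustrative Python fragment with hypothetical function and variable names). Note that when the block of interest contains more than $k$ vertices, the achievable value is capped at $k$ divided by the block size, which produces the drop near 90\% mentioned above.

```python
# Illustrative sketch of the "top-25" metric (hypothetical names): the
# fraction of the block of interest recovered among the first k entries
# of a nomination list.
def block_recall_at_k(nomination_list, block_of_interest, k=25):
    top_k = set(nomination_list[:k])
    return sum(1 for v in block_of_interest if v in top_k) / len(block_of_interest)

nominations = [3, 7, 1, 9, 4, 2]  # made-up nomination order
block = {7, 9, 5}                 # made-up block of interest
print(block_recall_at_k(nominations, block, k=4))  # -> 0.666... (2 of 3 found)

# If the block has more than k vertices, the metric is capped at k/|block|:
print(block_recall_at_k(list(range(30)), set(range(30)), k=25))  # -> 0.8333... (25/30)
```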
In Figure \ref{fig:blocklevelgoodvsbad}, in the setting of potentially errorful clusters (obtained via \texttt{Mclust} applied to the adjacency spectral embeddings of the brain networks), we plot, similarly to Figure \ref{fig:blockleveltrue}, in the top panels the proportion of subjects whose match is found in the top $x$ (analogous to the left panel of Figure \ref{fig:blockleveltrue}) and in the bottom panels the proportion of subjects who had the given proportion of the top 25 vertices come from the block of interest (analogous to the right panel of Figure \ref{fig:blockleveltrue}). The left two panels use Method 1 to estimate $\boldsymbol{\Delta}$, while the right two panels use Method 2. In light of perfect performance corresponding to the constant 1 function in all cases, we see here that Method 2 achieves better performance (especially for large values on the $x$-axis in each figure) than Method 1, both when making use of the user-supervision and in the no-user setting. Here, we see the general trend that the methods perform poorly without a user-in-the-loop (due to the error in the clustering; see Figure \ref{fig:blocklevelXYZ}), but that the method can efficiently make use of the user to achieve better performance. As was the case previously, the clustering is errorful, and on average this can have a deleterious effect on the user-supervision (top row). However, in the bottom row we see that \emph{sample-wise} the supervision is monotonically beneficial. \begin{figure}[t!] \centering \begin{tabular}{cc} \includegraphics[width=8cm]{Images/block_bad.png}& \includegraphics[width=8cm]{Images/block_good.png}\\ \includegraphics[width=8cm]{Images/top25_bad.png}& \includegraphics[width=8cm]{Images/top25_good.png} \end{tabular} \caption{Clustering the graph using Gaussian Mixture Modeling on the embedded network, in the left two panels, we use Method 1 to estimate $\boldsymbol{\Delta}$, while in the right two panels we use Method 2.
In the top row, on the $y$-axis we plot the proportion of subjects (averaged across all blocks considered as the block-of-interest in the left hemisphere) that ranked the correct block in the right hemisphere in at worst the $x$-th place (x-axis). In the bottom row, on the $y$-axis we plot the proportion of subjects for which we find the total proportion of the block of interest in the top 25 vertices nominated (x-axis).} \label{fig:blocklevelgoodvsbad} \end{figure} We repeat the above experiment using Method 2 and incorporating the XYZ coordinates of each voxel into the clustering (by appending the features onto the spectral graph embeddings), and plot the results in Figure \ref{fig:blocklevelXYZ}. The increased clustering fidelity achieved by incorporating the vertex features manifests itself here in the performance gain achieved in the no-user setting when compared with Figure \ref{fig:blocklevelgoodvsbad}. As a result of this, better performance is achieved across all settings here (again when compared with Figure \ref{fig:blocklevelgoodvsbad}). While the clustering here is still errorful, and on average this can have a deleterious effect on the user-supervision, the right panel again shows that sample-wise the supervision is monotonically beneficial. This means that a user can successfully aid in finding larger portions of the block of interest. Therefore, we can use these vertices as input to further investigation, searching among their $k$-nearest neighbours to find the rest of the block. \begin{figure}[t!] \centering \includegraphics[width=8cm]{Images/block_XYZ.png} \includegraphics[width=8cm]{Images/top25_XYZ.png} \caption{Clustering the embedded graphs using Gaussian Mixture Modeling incorporating the XYZ coordinate data and using Method 2 to estimate $\boldsymbol{\Delta}$.
In the left panel, on the $y$-axis we plot the proportion of subjects (averaged across all blocks considered as the block-of-interest in the left hemisphere) that ranked the correct block in the right hemisphere in at worst the $x$-th place (x-axis). In the right panel, on the $y$-axis we plot the proportion of subjects (averaged across all blocks considered as the block-of-interest in the left hemisphere) for which we find the total proportion of the block of interest in the top 25 vertices nominated (x-axis).} \label{fig:blocklevelXYZ} \end{figure} \section{Conclusions and Future Directions} In this paper we have introduced a formal and versatile framework for the novel subgraph nomination inference task, with special emphasis paid to the utility of users-in-the-loop in the context of subgraph nomination. Subgraph nomination is an important tool for structural queries within and across networks, and we demonstrated its utility in both real and synthetic data examples. In addition, the relatively simple users-in-the-loop formulation outlined herein can easily be lifted to more complex cases, including supervision done over several iterations, and can be simply extended to the case of multiple users. The theory and experiments included herein highlight the important role that user supervision can play in effective information retrieval. Further, this paper shows the importance of analysing both the base algorithm and the validity of user input before incorporating a user. For example, if the inferred clustering is incorrect in a hierarchical subgraph nomination framework, in the sense that the correct clusters' vertices are evenly spread among the inferred blocks, then our resources should be focused on collecting relevant data that would improve the inferred blocking structure, as even oracle user supervision in this case could be detrimental to the performance.
In our approach to subgraph nomination, inferring high-fidelity clusters is essential, and we have demonstrated that clustering can be improved by incorporating informative vertex features. Exploring this further, and understanding the information theoretic gain of features on our nomination performance as in \cite{levin2020role}, is an open area of research that we are actively pursuing. The validity of the user input itself is also important to consider when thinking about adding a user-in-the-loop. While we have theoretically demonstrated the possibility of increased performance even with an errorful user, in practice the situation is significantly more nuanced. The user errors, rather than being uniformly random, may be systematic or deterministic. For example, an adversarial attack could manipulate the results of the search using a user bot. More nuanced users-in-the-loop demand more nuanced theory to understand their broad effects. As an example, in our nomination regime an adversary can target the obtained cluster labels for contamination. This would immediately mitigate the benefit of a user-in-the-loop, as the benefit of the user decays as the cluster fidelity worsens. More robust users would, most likely, be more costly, and a cost-benefit optimization analysis would be needed in these more nuanced settings to tease out the positive (or negative) impact of incorporating the user. Another avenue that has not been investigated explicitly, but for which the framework is still sufficient, is the situation with multiple blocks of interest. For that, we need to adjust the definition of the VN-user to accommodate different values of $p$ for each block of interest. Then, after some necessary adjustments to the loss function, one may obtain results generalized to the case of multiple blocks of interest in the simple user-in-the-loop setting. On the other hand, one may want to extend the results of Theorem \ref{thm:losspq} to more nuanced users.
\vspace{2mm} \noindent{\bf Acknowledgement} This material is based on research sponsored by the Air Force Research Laboratory and DARPA under agreement number FA8750-20-2-1001. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory and DARPA or the U.S. Government. The authors also gratefully acknowledge the support of NIH grant BRAIN U01-NS108637. \section*{APPENDIX I: Proof of Theorem \ref{thm:losspq}} \label{proof17} For $1\leq i\leq j\leq m_k$, let $A_{[i,j]}$ be the event that the unknown subgraph of interest in $g_2$ has rank in $[i,j]$. Let $E_h$ be the event that after user-in-the-loop supervision, the subgraph of interest has rank strictly greater than $h$. \vspace{2mm} \noindent\emph{Part I:} With an oracle user-in-the-loop, we have that \begin{align*} \mathbb{P}(E_h)&=\underbrace{\mathbb{P}(E_h| A_{[1,h]})}_{=0} \mathbb{P}(A_{[1,h]}) + \underbrace{\mathbb{P}(E_h| A_{[h+1,h+t]})}_{=0} \mathbb{P}(A_{[h+1,h+t]})+ \underbrace{\mathbb{P}(E_h| A_{[h+t+1,m_k]})}_{=1} \mathbb{P}(A_{[h+t+1,m_k]})\\ &=\mathbb{P}(A_{[h+t+1,m_k]}) \end{align*} If $h+t\leq c$, then $\mathbb{P}(A_{[h+t+1,m_k]})=1-\frac{h+t}{c}$, else it is $0$. Hence, $\mathbb{P}(E_h)=\max(0,1-\frac{h+t}{c})$ as desired.
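The closed form in Part I can be sanity-checked with a quick Monte Carlo simulation (an illustrative Python sketch; following the proof, the initial rank of the subgraph of interest is taken to be uniform on $\{1,\dots,c\}$, and the oracle's replies promote it above position $h$ whenever its rank lies in $\{h+1,\dots,h+t\}$):

```python
# Monte Carlo sanity check of Part I (illustrative; assumes, as in the
# proof, that the initial rank of the subgraph of interest is uniform on
# {1, ..., c} and that the oracle promotes it whenever it is queried).
import random

def estimate_P_Eh(c, h, t, trials=200_000, seed=0):
    """Estimate P(E_h): the true block's final rank strictly exceeds h."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        rank = rng.randint(1, c)  # initial rank of the subgraph of interest
        if rank > h + t:          # ranks <= h are fine; ranks in (h, h+t] get promoted
            bad += 1              # E_h occurs: final rank still exceeds h
    return bad / trials

c, h, t = 20, 5, 8
print(estimate_P_Eh(c, h, t))     # close to the closed form below
print(max(0.0, 1 - (h + t) / c))  # Part I: max(0, 1 - (h + t)/c) = 0.35 here
```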
\vspace{2mm} \noindent\emph{Part II:} When the user has probability $p$ of misidentifying a vertex from the subgraph of interest as non-interesting, then, when $h+t\leq c$, \begin{align*} \mathbb{P}(E_h)&=\underbrace{\mathbb{P}(E_h| A_{[1,t]})}_{=p} \mathbb{P}(A_{[1,t]}) + \underbrace{\mathbb{P}(E_h| A_{[t+1,h+t]})}_{=0} \mathbb{P}(A_{[t+1,h+t]})+ \underbrace{\mathbb{P}(E_h| A_{[h+t+1,m_k]})}_{=1} \mathbb{P}(A_{[h+t+1,m_k]})\\ &=p\mathbb{P}(A_{[1,t]})+ \mathbb{P}(A_{[h+t+1,m_k]})\\ &=\frac{pt}{c}+ 1-\frac{h+t}{c}=1-(1-p)\frac{t}{c}-\frac{h}{c}. \end{align*} When $m_k> h+t> c$, the above probability is simply $\frac{pt}{c}$. \vspace{2mm} \noindent\emph{Part III:} When the user has probability $p$ of misidentifying a vertex from the subgraph of interest as non-interesting and probability $q$ of misidentifying a non-interesting vertex as interesting, then, when $h+t\leq c$ and $t\leq h$, \begin{align*} \mathbb{P}(E_h)&=\underbrace{\mathbb{P}(E_h| A_{[1,t]})}_{=p} \mathbb{P}(A_{[1,t]}) + \underbrace{\mathbb{P}(E_h| A_{[t+1,h]})}_{=0} \mathbb{P}(A_{[t+1,h]})+\sum_{i=1}^{t} \mathbb{P}(E_h| A_{[h+i,h+i]}) \underbrace{\mathbb{P}(A_{[h+i,h+i]})}_{=\frac{1}{c}}\\ &\hspace{5mm}+ \underbrace{\mathbb{P}(E_h| A_{[h+t+1,m_k]})}_{=1} \mathbb{P}(A_{[h+t+1,m_k]})\\ &=\frac{pt}{c}+ \sum_{i=1}^{t}\frac{1}{c}\sum_{j=0}^{i-1}\binom{t}{j}(1-q)^jq^{t-j}+1-\frac{h+t}{c}\\ &=\frac{pt}{c}+ \sum_{j=0}^{t-1}\frac{t-j}{c}\binom{t}{j}(1-q)^jq^{t-j}+1-\frac{h+t}{c}\\ &=\frac{pt}{c}+ \sum_{j=0}^{t}\frac{t-j}{c}\binom{t}{j}(1-q)^jq^{t-j}+1-\frac{h+t}{c}\\ &=\frac{pt}{c}+\frac{qt}{c}+1-\frac{h+t}{c}.
\end{align*} When $h+t> c$ and $t\leq h$, \begin{align*} \mathbb{P}(E_h)&=\underbrace{\mathbb{P}(E_h| A_{[1,t]})}_{=p} \mathbb{P}(A_{[1,t]}) + \underbrace{\mathbb{P}(E_h| A_{[t+1,h]})}_{=0} \mathbb{P}(A_{[t+1,h]})+\sum_{i=1}^{c-h} \mathbb{P}(E_h| A_{[h+i,h+i]}) \underbrace{\mathbb{P}(A_{[h+i,h+i]})}_{=\frac{1}{c}}\\ &\hspace{5mm}+ \mathbb{P}(E_h| A_{[c+1,m_k]}) \underbrace{\mathbb{P}(A_{[c+1,m_k]})}_{=0}\\ &=\frac{pt}{c}+ \frac{1}{c}\sum_{i=1}^{c-h}F(i-1;t,1-q) \end{align*} When $t>h$ and $h+t\leq c$, then \begin{align*} \mathbb{P}(E_h)&=\underbrace{\mathbb{P}(E_h| A_{[1,h]})}_{=p} \mathbb{P}(A_{[1,h]}) + \sum_{i=1}^{t-h} \mathbb{P}(E_h| A_{[h+i,h+i]}) \underbrace{\mathbb{P}(A_{[h+i,h+i]})}_{=\frac{1}{c}}\\ &\hspace{4mm}+\sum_{i=t-h+1}^{t} \mathbb{P}(E_h| A_{[h+i,h+i]}) \underbrace{\mathbb{P}(A_{[h+i,h+i]})}_{=\frac{1}{c}}+ \underbrace{\mathbb{P}(E_h| A_{[h+t+1,m_k]})}_{=1} \mathbb{P}(A_{[h+t+1,m_k]})\\ &=\frac{ph}{c}+ \sum_{i=1}^{t-h}\frac{1}{c}\left(p+(1-p)\sum_{j=0}^{i-1}\binom{h+i-1}{j}(1-q)^jq^{h+i-1-j}\right)\\ &\hspace{4mm}+\sum_{i=t-h+1}^{t}\frac{1}{c}\sum_{j=0}^{i-1}\binom{t}{j}(1-q)^jq^{t-j} +1-\frac{h+t}{c}\\ &= 1-\frac{h}{c}-\frac{(1-p)t}{c}+\frac{1-p}{c}\sum_{i=1}^{t-h}F(i-1;h+i-1,1-q)\\ &\hspace{4mm}+\frac{1}{c}\sum_{i=t-h+1}^tF(i-1;t,1-q) \end{align*} When $t>h$ and $h+t> c$, then (assuming $c\geq t,h$) \begin{align*} \mathbb{P}(E_h)&=\underbrace{\mathbb{P}(E_h| A_{[1,h]})}_{=p} \mathbb{P}(A_{[1,h]}) + \sum_{i=1}^{t-h} \mathbb{P}(E_h| A_{[h+i,h+i]}) \underbrace{\mathbb{P}(A_{[h+i,h+i]})}_{=\frac{1}{c}}\\ &\hspace{4mm}+\sum_{i=t-h+1}^{c-h} \mathbb{P}(E_h| A_{[h+i,h+i]}) \underbrace{\mathbb{P}(A_{[h+i,h+i]})}_{=\frac{1}{c}}+ \mathbb{P}(E_h| A_{[c+1,m_k]}) \underbrace{\mathbb{P}(A_{[c+1,m_k]})}_{=0}\\ &= \frac{pt}{c}+\frac{1-p}{c}\sum_{i=1}^{t-h}F(i-1;h+i-1,1-q)\\ &\hspace{4mm}+\frac{1}{c}\sum_{i=t-h+1}^{c-h} F(i-1;t,1-q) \end{align*} \bibliographystyle{plain}
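The collapse of the binomial sum in Part III rests on the identity $\sum_{j=0}^{t}(t-j)\binom{t}{j}(1-q)^jq^{t-j}=qt$, i.e., the sum equals $t$ minus the mean of a $\mathrm{Binomial}(t,1-q)$ random variable. An illustrative numeric check:

```python
# Numeric check of the binomial identity used in the last step of Part III:
# sum_{j=0}^{t} (t - j) C(t, j) (1-q)^j q^(t-j) = q t,
# i.e., t minus the mean of a Binomial(t, 1-q) random variable.
from math import comb

def lhs(t, q):
    return sum((t - j) * comb(t, j) * (1 - q) ** j * q ** (t - j)
               for j in range(t + 1))

for t in (1, 4, 9):
    for q in (0.1, 0.37, 0.8):
        assert abs(lhs(t, q) - q * t) < 1e-9
print("identity verified for all tested (t, q)")
```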
\section{Introduction} In the course of our own research on theoretical concepts for ecology, inspired by computing and information science, we have discovered the recent\footnote{At the time of writing, 2012.} article ``Quantifying sustainability: Resilience, efficiency and the return of information theory'' \parencite{Ulanowicz2009}. It has been a sincere pleasure to discover that renowned ecologists keep advocating the use of information theory for the description and assessment of ecosystems. It has been a severe disappointment, however, that information theory is far from being portrayed in the best possible light in the article in question---the formulation and use of basic IT concepts are unnecessarily obscure, deliberately incompatible with standard terminology, and in places just mathematically wrong. We shall endeavor to lift some of the confusion by correcting errors, dissecting ambiguities and using standard terminology wherever possible. We believe that clear and rigorous presentations are necessary in order to enable information theoreticians and ecologists to engage in a fruitful dialogue. Only very basic concepts of IT are required to follow our arguments. The interested reader is referred to the textbook of \cite{MacKay2003}, an excellent and encyclopaedic treatise that is freely accessible online. This critique should not be understood as extending to the \emph{content} of the criticized article; it addresses its formal \emph{presentation} only. To the contrary, our disappointment arises mainly from fear that the article might influence readers rather against than in favour of application of IT in ecology, because of the vague air of inconsistency owing to the many terminological and mathematical issues we are about to discuss.
\section{Terms} \subsection{Surprisal and Shannon Information Content} What \citeauthor{Ulanowicz2009} call ``Boltzmann's famous definition of surprisal'', $s = - k \log p$, is nowadays known in IT as the \emph{Shannon information content} of an event. While Boltzmann preceded Shannon by some 80 years, it was the latter who generalized the idea, brilliant as it may have been, into a full-fledged theory. The scalar constant $k$ is not explicit in the modern definition; rather, it is implicit in the dimensionless unit implied by the choice of base of the logarithm; cf.\ \cref{log-units} below. The particular constant $k_{\mathrm{B}}$ that nowadays bears Boltzmann's name is for the particular case of thermodynamic entropy, and carries the dimension of a heat capacity. Shannon argues that, in IT, no single appropriate value exists, and the constant ``merely amounts to a choice of a unit of measure'' \parencite{Shannon1948}, in the sense that it is the only degree of freedom in an axiomatic specification of the entropy function, see below. It is ironic that \citeauthor{Ulanowicz2009} elaborate on the sign of the expression, \begin{quote} ``Because the probability, $p$, is normalized to a fraction between zero and one, most offhandedly conclude that the negative sign is a mathematical convenience to make $s$ work out positive (and that may have been Boltzmann's motivation). But from the perspective of logic {\bfseries one can only read this equation} as defining $s$ to gauge what $p$ is not.'' {\upshape[Emphasis added]} \end{quote} seeing that the form inscribed on Boltzmann's tombstone, $S = k \log W$, has a positive sign instead. Confer also the notation of \cite{MacKay2003}, where the likewise positive form $\log_2 (1/P(x))$ is used. Reciprocal pairs of quantities abound in science, for instance consider ``period'' and ``frequency'', or ``resistance'' and ``conductance''. 
Which of each pair is about what is and what is not, respectively, depends very much on the frame of reference. \subsection{Aggregate Indeterminacy and Entropy} What \citeauthor{Ulanowicz2009} call the ``aggregate systems indeterminacy'', defined in their equation (3) up to a missing equality sign (a simple typographic error that may subtly confuse the reader nevertheless), is nowadays usually called the \emph{Shannon entropy} of a random variable whose possible outcomes are specified by the events $i$ and respective probabilities $p_i$. \begin{equation*} H = - \sum_i p_i \log p_i \end{equation*} Because all of the following definitions involve more than one random variable, it greatly adds to clarity to mention them explicitly: \begin{equation*} H(Y) = - \sum_i P(Y=i) \log P(Y=i) \end{equation*} We shall use only the symbols $Y$ and $Z$ for random variables, because $X$ is reserved in the notation of \cite{Ulanowicz2009}. \subsection{Average Mutual Constraint and Mutual Information} \label{mutual} What \citeauthor{Ulanowicz2009} call the ``average mutual constraint'' (without saying precisely between what, see \cref{event-random,symm}), defined in their equation (5), is usually called the \emph{mutual information} or \emph{transinformation} between two random variables. \begin{equation*} I(Y, Z) = \sum_{i,j} P(Y = i, Z = j) \log \frac{P(Y = i, Z = j)}{P(Y = i)P(Z = j)} \end{equation*} Note that the explanations leading to equation (5) are problematic in various ways, see \cref{event-random,joint,symm}. \subsection{Conditional Entropy} What \citeauthor{Ulanowicz2009} call the ``conditional entropy'' $\Psi$ is \emph{not} what is usually called a conditional entropy. 
A typical definition of conditional entropy, or \emph{equivocation}, of $Y$ given $Z$ would look like: \begin{equation*} H(Y \mid Z) = \sum_{i,j} P(Y = i, Z = j) \log \frac{P(Z = j)}{P(Y = i, Z = j)} \end{equation*} It plays an important role in the decomposition of uncertainties in IT, such as in the so-called \emph{chain rule}: \begin{equation*} H(Y, Z) = H(Y \mid Z) + H(Z) \end{equation*} The definition of \citeauthor{Ulanowicz2009}'s equation (7) can be retrieved by \emph{adding} the conditional entropies of $Y$ given $Z$ and vice versa: \begin{align*} H(Y \mid Z) + H(Z \mid Y) &= \sum_{i,j} P(Y = i, Z = j) \log \frac{P(Z = j)}{P(Y = i, Z = j)} + \\ &\mathrel{\hphantom{=}} \sum_{i,j} P(Y = i, Z = j) \log \frac{P(Y = i)}{P(Y = i, Z = j)} \\ &= \sum_{i,j} P(Y = i, Z = j) \log \frac{P(Y = i)P(Z = j)}{P(Y = i, Z = j)^2} \end{align*} This sum is sometimes called the \emph{variation of information} between $Y$ and $Z$. It is easy to see that it satisfies the axioms of a \emph{metric} and may hence serve as an information-theoretic measure of distance between random variables. \section{Ambiguities and Errors} \subsection{Events and Random Variables} \label{event-random} In equation (3), \citeauthor{Ulanowicz2009} give the (slightly broken) definition of the Shannon entropy of a single random variable. Immediately afterwards, they move on to the more complex topic of a pair of random variables, but without explicitly saying so.
The transition is implied in the statement: \begin{quote} ``Accordingly, we will define $p_{ij}$ as the joint probability that events $i$ and $j$ co-occur.'' \end{quote} The failure to explicitly identify the two random variables, only distinguished by the fact that their outcomes are indexed with variables $i$ and $j$, respectively, causes confusion even for the authors themselves: They apparently fail to realize that what they call \emph{the} conditional entropy is in fact the sum of two conditional entropies, namely of each random variable conditional on the other. Consequently, the interesting question whether the two should be lumped together or examined separately is not addressed: For instance, the conditional entropies of $Y$ (origin of flow) given $Z$ (destination of flow) and vice versa for the networks depicted in \citeauthor{Ulanowicz2009}'s Figs.~1 and 3 are strongly asymmetric, with ratios of $1:6$ and $1:8.6$, respectively. We do not have an ecological rationale why this should be meaningful, but we feel that a distinction that arises naturally from standard terminology should not be obscured deliberately, unless it has been proven irrelevant. The same source of imprecision might also have contributed to the problem discussed in the next subsection. \subsection{Joint Indeterminacy and Independency} \label{joint} In their explanation of ``average mutual constraint'' (mutual information), between their two equations both numbered (4), \citeauthor{Ulanowicz2009} state: \begin{quote} ``Here the assumption is made that the indeterminacy $s_{ij}$ {\upshape [$= -k\log(p_{ij})$]} is maximal when $i$ and $j$ are totally independent. We call that maximum $s_{ij}^*$.'' \end{quote} The assumption does not hold, and there is no such maximum: For independent events, we have $p_{ij} = p_{i.} p_{.j}$, but joint probabilities can of course be smaller than that value. In particular, $s_{ij} = +\infty$ for mutually exclusive events.
Stating that $s_{ij}$ is bounded above by $s_{ij}^*$ is equivalent to stating that $p_{ij}$ is bounded below by $p_{i.}p_{.j}$. We can only conjecture that this (false) assumption is made in order to ensure that mutual information is nonnegative. Fortunately, the assumption is not needed: Individual terms $\log (p_{ij}/p_{i.}p_{.j})$ in equation (5) may well be negative, but their weighted sum, the mutual information, is nonnegative by Gibbs' inequality; see also \cref{log-role}. \subsection{Symmetry of Mutual Information} \label{symm} \citeauthor{Ulanowicz2009} claim that their second equation (4) is symmetric, written as $x_{i|j} = \dots = x_{j|i}$. Note that this is \emph{not} symmetry in the usual algebraic sense, namely that the indices $i$ and $j$ may simply be exchanged. If one does so na{\"\i}vely, the denominator of the argument to the logarithm becomes $p_{j.}p_{.i}$ which is very different from $p_{i.}p_{.j}$. On the other hand, if one remembers that $i$ and $j$ are events concerning two different random variables $Y$ and $Z$, respectively, then the symmetry becomes obvious, as both $P(Y = i, Z = j) = P(Z = j, Y = i)$ and $P(Y = i)P(Z = j) = P(Z = j)P(Y = i)$ hold trivially. Hence it follows that $I(Y, Z) = I(Z, Y)$; see \cref{mutual} above. \subsection{The role of the Logarithm} \label{log-role} Much has been said about the role of the logarithm in the formulae of IT. \citeauthor{Ulanowicz2009} promise in their footnote~1: \begin{quote} ``Here the reader might ask why the lack of $i$ is not represented more directly by $(1 - p_i)$? 
The advantage and necessity of using the logarithm will become apparent presently.'' \end{quote} But only a very brief rationale can be found in the following discussion, hardly qualifying as advantageous, and certainly not as necessary: In the statement leading to their equation (6) it is claimed (correctly) that the convexity of the logarithm ensures that joint entropy decomposes into mutual information and conditional entropies, all non-negative. But that would be true for any other convex function. \cite{Shannon1948} has the definitive answer to the riddle: Any function satisfying a small number of properties characteristic of a measure of uncertainty (namely continuity, monotonicity in the number of outcomes of uniform distributions, and additive compositionality of choice) is necessarily equivalent to Shannon entropy, up to a conversion of units. Instead of actually working with logarithms, \citeauthor{Ulanowicz2009} promptly revert to non-logarithmic scales in their section~5, in the introduction to the concept of ``window of vitality''. There they cite previous work \parencite{Zorach2003}, where the measures are developed in entirely non-logarithmic form. For additional serious problems with the paragraphs concerned, see \cref{expo} below. \subsection{Logarithmic Units} \label{log-units} In the argument leading towards their definitions of ``ascendency'' and ``reserve'', after equation (10), \citeauthor{Ulanowicz2009} state: \begin{quote} ``The dimensions in the definitions (10) remain problematic, however. All of the ratios that occur there are dimensionless (as required of probabilities), so that the only dimensions that the variables $H$, $X$ and $\Psi$ carry are those of the base of the logarithm used in their calculation. For example, if the base of the logarithm is 2, the variables are all measured in bits.'' \end{quote} In this statement, the concepts of \emph{unit} and \emph{dimension} are confused.
A unit conveys two independent aspects of meaning: dimension and \emph{magnitude}. For example, the SI unit $1\,\mathrm{m}$ has the dimension \emph{length} and a magnitude in relation to other units of length that makes it equivalent to, say, $39.37\,\mathrm{in}$. The units of information, just like other logarithmic quantities, are \emph{dimensionless} units, but they do have a magnitude. For example, $1\,\mathrm{bit}$ is equivalent to $1/8\,\mathrm{byte}$, about $0.69\,\mathrm{nat}$ or $3.01\,\mathrm{dB}$. If we take the last sentence of the above quotation as implying that the base of the logarithm is not fixed once and for all, then the magnitude of the unit of IT measures carries essential meaning. In the immediately following section~4, ``A two-tendency world'', \citeauthor{Ulanowicz2009} give concrete numbers for material flows in an ecosystem and purport to multiply IT measures by total flow: \begin{quote} ``$T_{..}$ for this system is $102.6\,\mathrm{mg}\,\mathrm{C}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$; the ascendency, $A$, works out to $53.9\,\mathrm{mg}\,\mathrm{C}\,\mathrm{bits}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$ and the reserve, $\Phi$, is $121.3$ $\mathrm{mg}\,\mathrm{C}\,\mathrm{bits}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$.'' \end{quote} Have you noticed the difference in units between the first and the following figures? That they are given in one sentence creates the dangerously false impression that the absolute values can be compared as ``apples with apples'' \parencite[section 7, ``One-Eyed Ecology'']{Ulanowicz2009}. The practice is even worse than ambiguous: the use of several decimal places and fully explicit dimensional units suggests an absoluteness that is not warranted.
The above figure for $A$ could be given as either \begin{itemize} \item $53.9~\mathrm{mg}\,\mathrm{C}\,\mathrm{bit}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$, in the logarithmic units of base $2$ chosen by the authors, or \item $37.4~\mathrm{mg}\,\mathrm{C}\,\mathrm{nat}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$, in the logarithmic units of base $e$ used by Boltzmann, or \item $162.3~\mathrm{mg}\,\mathrm{C}\,\mathrm{dB}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$, in the logarithmic units of base $10^{1/10}$ preferred by electrical engineers, or \item $6.74~\mathrm{mg}\,\mathrm{C}\,\mathrm{byte}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$, in the logarithmic units of base $2^8$ (such that $1~\mathrm{byte} = 8~\mathrm{bit}$) preferred by many computer scientists, or \item $9.88~\mathrm{pg}\,\mathrm{C}\,\mathrm{CDROM}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$, in the logarithmic units corresponding to the information content of a standard (Red Book) Mode-1 CD-ROM of $666\,000 \times 1024~\mathrm{byte}$, or \item $64.7~\mathrm{g}\,\mathrm{C}\,\mathrm{cent}\,\mathrm{m}^{-2}\,\mathrm{y}^{-1}$, in the logarithmic units of base $2^{1/1200}$ preferred by musicians, \end{itemize} or in whatever logarithmic units another author might fancy. Having ruled out the possibility of comparing flows with flow--information products directly, and having conceded that flow--information products only make sense when compared to each other in appropriately adjusted units, one wonders why the IT measures have been multiplied by flows in the first place; all of the arguments of their section~4 would have worked perfectly, and with less room for confusion, with $X$ and $\Psi$ in place of $A$ and $\Phi$. See also the next subsection.
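The conversions enumerated above can be checked mechanically; the following sketch (ours, not from the criticized paper) rescales the base-2 figure $A = 53.9$ into the other logarithmic units:

```python
import math

A_bit = 53.9  # ascendency in mg C bit m^-2 y^-1 (base-2 logarithms)

A_nat  = A_bit * math.log(2)         # 1 bit = ln(2) nat, about 0.693 nat
A_dB   = A_bit * 10 * math.log10(2)  # 1 bit = 10 log10(2) dB, about 3.01 dB
A_byte = A_bit / 8                   # 8 bit = 1 byte
A_cent = A_bit * 1200                # 1 bit = 1200 cents (still in mg C)

# The numerical value changes by orders of magnitude between units,
# while the physical system described stays exactly the same.
print(A_nat, A_dB, A_byte, A_cent)
```

Nothing about the ecosystem changes between these lines; only the arbitrary choice of logarithm base does.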
\subsection{Exponential Units in the Window of Vitality} \label{expo} Even the reader who has worked through the bookkeeping of dimensional and dimensionless units in the section we have just discussed must be baffled by the following statement \cite[section~5]{Ulanowicz2009}, ``The survival of the most robust'': \begin{quote} ``Zorach and Ulanowicz (2003) [\dots] plotted the networks, not on the axes $A$ vs.\ $\Phi$, but rather on the transformed axes $c = 2^{\Phi/2}$ and $n = 2^A$.'' \end{quote} Certainly this cannot be correct, because here dimensional quantities appear in the exponents. Cross-reading of the cited source \parencite{Zorach2003} reveals a definition of the symbol $\Phi$ equivalent to the definition of the symbol $\Psi$ by \cite{Ulanowicz2009}. Although more difficult to verify, we conjecture that $A$ should be read as $X$ accordingly. That is, after attaching physical dimensions to information measures by multiplying them with flows, the authors silently revert to the dimensionless quantities, in order to make exponential scaling meaningful. What makes the comparison of information measures for different systems possible is exactly what is criticized scornfully in their section~3: \begin{quote} ``Unfortunately, bits do not convey any sense of the physical magnitude of the systems to which they pertain. For example, a network of flows among the populations of microbes in a Petri Dish could conceivably yield an $H$ of the same order of magnitude as a network of trophic exchanges among the mammalian species on the Serengeti Plain.'' \end{quote} This feature of IT, namely that it quantifies information content of observed choice (traditionally of messages in communication) as \emph{intensive} properties, regardless of the extent of systems that produce them, is commonly regarded as one of its essential abstractions.
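This intensiveness is easy to demonstrate; in the following sketch (our own construction), two hypothetical flow networks with identical structure but vastly different magnitudes yield exactly the same entropy:

```python
import math

def flow_entropy_bits(flows):
    """Shannon entropy (in bits) of the distribution of flows."""
    total = sum(flows)
    return -sum((f / total) * math.log2(f / total) for f in flows if f > 0)

# Same flow pattern at two very different physical scales:
petri_dish = [2.0, 1.0, 1.0]          # e.g. micrograms C per day
serengeti  = [2.0e9, 1.0e9, 1.0e9]    # e.g. kilograms C per year

H_dish = flow_entropy_bits(petri_dish)
H_serengeti = flow_entropy_bits(serengeti)

# H depends only on the pattern of flows, not on their extent:
assert abs(H_dish - H_serengeti) < 1e-12
```

Scaling every flow by a common factor leaves the probabilities, and hence the entropy, unchanged, just as temperature is independent of the size of the body it describes.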
Nor is this an uncommon practice in science; certainly no one would object to the temperature of a Petri dish being compared to the temperature of the Serengeti Plain. Temperature as an example has not been chosen randomly; indeed Boltzmann's work on uncertainty has had its main application in thermodynamics. \section{Conclusion} Interdisciplinary research is partially motivated by the hope that innovative research can be sparked by exposing experts of one field to the concepts of another. The article criticized here, though clearly intended for that very purpose, is an illustrative example of how \emph{not} to do it. An overview article in a journal is not the place to provide a sound and complete introduction to a theory, but it can and should leave a non-expert reader both motivated and prepared to read up on the details in the literature of the field. We have identified two key issues that merit words of warning and recommendations to prospective interdisciplinary researchers: \begin{enumerate} \item \emph{Idiosyncratic terminology.} Renaming key concepts of theories in the process of translating them to a different field leaves the reader unfit to look up the technical details in the relevant literature. Theoreticians have a cause for celebration whenever they discover that concepts in disparate fields are actually the same; actively propagating obfuscated terminology hinders theoretical progress and may well cause the reinvention of several wheels. \item \emph{Internal inconsistency.} A reader who is not an expert on the subject may not fully grasp, or explicitly doubt, the results of a mathematical discussion at first sight; scepticism is generally a laudable trait for a scientist.
However, there are numerous heuristics to judge whether the arguments in question can possibly be valid: Mathematical digressions should be both true and relevant, dimensions and units should match for all figures to be compared, all mathematical objects involved in a definition should be named, etcetera. All maths, especially when given to demonstrate the usefulness of a theory, should be prepared carefully enough to pass the test of heuristic reading by a reasonably educated non-expert. For their part, editors should feel encouraged to seek reviews of interdisciplinary articles from experts in the respective other field. Thus, issues such as the ones we have pointed out above, and possibly less obvious ones as well, could be remedied before publication. \end{enumerate} A description of an unfamiliar theory in terms that cannot be traced to their usage in the field, and in formulae that obviously cannot be quite right, makes for tough reading. Who can blame the readers for concluding that the subject is not worth their attention? However, that conclusion could be severely premature. In the present case, we see no reason to doubt the validity or the utility of \citeauthor{Ulanowicz2009}'s philosophical observations and empirical findings. The proposal to investigate the \emph{potential} (as opposed to \emph{actual} as in philosophy, not as opposed to \emph{kinetic} as in physics) is certainly sound. We conjecture that similar investigations would be worthwhile for other subdisciplines of what is nowadays subsumed under computing and information science, for instance in modal, fuzzy and other nonclassical logics, or nondeterministic automata theory. Evidence of tentative import into ecology can be found in the logical study of uncertainty \parencite{Regan2002}, and in the use of Petri nets (not to be confused with Petri dishes) for ecological modelling by \cite{Sharov1991}, respectively.
The latter is a fine case in point of our critique, as an interdisciplinary application that is also quite up to the internal standards of the Petri net community. We are looking forward to an ecological presentation of information measures of comparable quality. \printbibliography \end{document}
\section{INTRODUCTION} Different methods have been developed by academic and professional researchers for designing controllers for nonlinear systems. Despite these efforts, designing controllers for systems subjected to constraints and unknown time-varying disturbances remains challenging. Various constraints arise in most practical systems, including performance constraints, saturation, physical stoppages, and safety requirements. Therefore, constraints cannot be avoided while designing controllers for practical systems. For controller design, constraints can typically be prescribed in two forms: prescribed performance constraints (PPC) on some variable (such as tracking error) and prescribed input constraints (PIC). A wide variety of methods have been developed to address PPC, including reference governors \cite{rg}, model predictive control \cite{mpc}, funnel control \cite{fc}, barrier Lyapunov functions (BLF) \cite{Mishra2021f}, prescribed performance control \cite{Bechlioulisl2013}, control barrier functions \cite{cbf}, and extremum seeking control \cite{es}. In the literature, the BLF has been used extensively in dealing with constraints. This is because its design methodology allows it to incorporate many Lyapunov-based nonlinear control techniques. However, for uncertain, unknown systems, the design lacks simplicity. The approach described in \cite{Bechlioulisl2013} is well-suited to a diverse set of situations \cite{Zhang2019a} since it is low-complexity, approximation-free, and robust. Much research and development have gone into the controller design for nonlinear systems that undergo input saturation. One can refer to the results in \cite{Wen2011,Chen2015,Xiao2012}. It follows that controller design for nonlinear systems subjected to either PPC or PIC is a well-established field of study. Improving performance with limited resources is always difficult.
The same holds for PPC and PIC \cite{Yong2020}: PPC aims for lower steady-state error, a safe transient response, and fast convergence of the tracking error, whereas PIC focuses on actuator safety or control effort minimization. Thus, very few results are available addressing PPC and input saturation, notably \cite{Hopfe2010,Hopfe2010a,berger2022input,Kanakis2020}. The results in \cite{Hopfe2010} are for linear systems, and those in \cite{Hopfe2010a,Kanakis2020} for nonlinear systems. Also, in \cite{Hopfe2010,Hopfe2010a,berger2022input} the authors relax the PPC whenever the input saturation is active, and in \cite{Kanakis2020} assumptions are made on the existence of a feasible set of control inputs for given initial conditions and actuator saturation limits. Moreover, given any desired trajectory for an uncertain nonlinear system with unknown bounded disturbances and arbitrary PIC, it is certainly impractical to guarantee that the desired trajectory is trackable. For example, a large external disturbance or a desired trajectory with a large upper bound will inevitably necessitate the same level of opposing control command, which may exceed the PIC \cite{Asl2019}. Thus, before prescribing input constraints, one must establish a feasibility condition for the PIC. Further, many practical systems always operate in some specified regions where they are controllable under PIC \cite{Yong2020}. In the presence of PIC, one cannot globally stabilize an unstable system. There is always a feasible set of initial conditions for a given PIC. Also, global results are not attained in many PPC studies \cite{Theodorakopoulos2015,Bechlioulis2017,Zhang2017} on tracking error. The prescribed performance function choice depends on the constrained variable's initial state. In \cite{Cao2022g}, global results were achieved by transiently relaxing the PPC. However, as discussed, it makes no sense to pursue global results when there is a PIC.
In addition, arbitrary PPC makes no sense because there may be an initial condition of the error variable within the initial bounds of the PPC that does not belong to the set of initial conditions feasible for the PIC. Therefore, we must seek a viable PPC for a given PIC. Motivated by the above discussions and aforementioned works, a controller has been developed in this paper with the following contributions: \begin{enumerate} \item An approximation-free, low-complexity controller is proposed for the nonlinear system with PPC and PIC. The controller structure is $$\frac{2\bar \upsilon}{\pi}\arctan\left(\frac{\pi}{2\bar \upsilon}\tan\left(\frac{\pi e}{2\psi}\right)\right),$$ where $\bar \upsilon$ and $\psi$ represent the PIC and PPC, respectively, and $e$ is the tracking error. \item The above novel control structure incorporates both the PPC and the PIC in its design. This simplifies the task of deriving a feasibility condition for the PPC and PIC, avoiding the need to relax the PPC when the input approaches its constraint. \end{enumerate} The remainder of the paper is structured as follows. In Section II, preliminaries and problem formulation are presented. It also contains key assumptions for the prescription of input constraints. Section III presents the design of the controller. Section IV presents a few lemmas used for the stability analysis in Section V. Section VI presents the simulation results and discussion. Finally, Section VII concludes the paper. \section{Preliminaries and Problem Formulation} \textbf{Notations:} We denote the set of real, positive real, nonnegative real, and positive integer numbers by $\mathbb{R}$, $\mathbb{R}^+$, $\mathbb{R}_0^+ $, and $\mathbb{N}$, respectively. $\mathbb{N}_n$: $\{1,\ldots , n\}$, where $n$ is a positive integer. $\mathcal{L}^\infty$ represents the set of all essentially bounded measurable functions.
For $x (t)\in \mathbb{R}$, $x\uparrow a$: $x$ approaches a real value $a$ from the left side, $x\downarrow a$: $x$ approaches a real value $a$ from the right side, and $x^{(n)}$ represents the $n$th time derivative of the signal $x$. $\binom{m}{k}=\frac{m(m-1)\cdots(m-k+1)}{k!}$ denotes the binomial coefficient. $L\{\cdot\}$ denotes the Laplace transform, and $s$ is the Laplace variable. Consider a class of strict-feedback nonlinear systems \begin{equation}\label{sys1} \begin{split} \dot \xi_i&=\xi_{i+1}, ~\forall i \in \mathbb{N}_{n-1},\\ \dot \xi_n&=f\left({\bm \xi}\right)+g\left({\bm \xi}\right)\upsilon+d,\\ y&=\xi_1, \end{split} \end{equation} where $ \bm{\xi}(t)=[\xi_1(t), \ldots, \xi_n(t)]^T \in \mathbb{R}^n$ is the state vector, $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is the unknown smooth nonlinear function, $g:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is the unknown control coefficient, $d(t) \in \mathbb{R}$ is the unknown piecewise continuous bounded disturbance, $\upsilon (t) \in \mathbb{U}\subseteq \mathbb{R}$ and $y(t) \in \mathbb{R}$ are the input and output of the system, respectively. The control problem is to design a control law $\upsilon$ such that $(i)$ the output $\xi_1(t)$ tracks the desired output $\xi_\mathsf{d}(t)\in \mathbb{R},~ \forall t \in \mathbb{R}^{+}_{0}$, $(ii)$ the output tracking error, defined as $\Tilde\xi\coloneqq\xi_1-\xi_\mathsf{d}$, follows its prescribed performance constraint $\psi(t)\in \mathbb{R}$, defined as $\psi(t):=\psi_0e^{-\mu t}+\psi_\infty,$ such that $|\Tilde\xi|<\psi(t),~ \forall t \in \mathbb{R}^{+}_{0},$ where $\psi_0$ is a positive constant, and $\psi_\infty$ and $\mu$ are positive and nonnegative constants representing the bound on the steady-state error and the decay rate of the tracking error, respectively, and $(iii)$ all the closed-loop signals are bounded. In addition, one of our problems will be to seek a feasibility condition for the PIC and PPC.
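As a quick illustration, the prescribed performance funnel $\psi(t)=\psi_0e^{-\mu t}+\psi_\infty$ can be sketched numerically; the parameter values below are purely illustrative choices of ours.

```python
import math

# Illustrative PPC parameters (our own choices): psi_0, psi_inf, mu.
psi_0, psi_inf, mu = 2.0, 0.1, 1.0

def psi(t):
    """Performance funnel psi(t) = psi_0 * exp(-mu * t) + psi_inf."""
    return psi_0 * math.exp(-mu * t) + psi_inf

# The funnel starts at psi_0 + psi_inf, decays monotonically at rate mu,
# and converges to the steady-state bound psi_inf from above.
samples = [psi(t) for t in (0.0, 1.0, 2.0, 5.0, 10.0)]
assert all(a > b for a, b in zip(samples, samples[1:]))  # monotone decay
assert samples[0] == psi_0 + psi_inf
```

Any admissible tracking error trajectory must remain inside this shrinking envelope, which is what forces both a safe transient and a small steady-state error.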
Obtaining such a feasibility condition will necessitate specific knowledge of the system's dynamics, disturbances, and tracking performance parameters in terms of their upper bounds on the signals. A few assumptions required for this are listed below. \smallskip \begin{assumption}\label{af} \cite{7150402,4717255, 6075525, 7289399} The unknown map $f$ satisfies the Lipschitz continuity condition, that is, for all $\bm x,\bm x'\in \mathbb{R}^n$, there exists a constant $k_l\in\mathbb{R}^+$ such that the following holds \begin{align*} |f(\bm x)-f(\bm x')|\le k_l||\bm x-\bm x'||_{p^*}, \end{align*} where $k_l$ is a known Lipschitz constant and $||\cdot||_{p^*}$ is the $p^*$ norm on $\mathbb{R}^n$. \end{assumption} Note that one can use the Lipschitz constant inference approaches proposed in \cite{wood1996estimation,Bubeck2011,malherbe2017global} to estimate the Lipschitz constant of unknown dynamics from a finite number of data points collected from the system. \smallskip \begin{assumption}\label{ag} There exist a known constant $\barbelow g>0$ and a constant $\bar g\ge\barbelow g$, such that $\barbelow g\le g(x)\le\bar g$ for all $x \in\mathbb{R}^n$. \end{assumption} \smallskip \begin{assumption}\label{ad} There exists a known constant $\bar d\ge 0$ such that the disturbance satisfies $|d(t)|\le \bar d$ for all $t\in \mathbb R_0^+$. \end{assumption} \smallskip \begin{assumption}\label{ade} For a given desired trajectory $\xi_\mathsf{d}$, there exists a constant $\bar\xi_\mathsf{d}>0$, such that $||\bm{\xi_\mathsf{d}}(t)||_{\infty}<\bar\xi_\mathsf{d},$ for all $t\in\mathbb{R}^+_0$ for $\bm{\xi_\mathsf{d}}=[\xi_\mathsf{d}, \xi_\mathsf{d}^{(1)}, \hdots,\xi_\mathsf{d}^{(n-1)}]^T.$ \end{assumption} \section{Controller Design} This section proposes a robust approximation-free controller for \eqref{sys1}.
To begin the controller design, we define a filtered tracking error, \begin{align} r\coloneqq\lambda_1\tilde\xi+\lambda_2\dot{\tilde\xi}+\cdots+\lambda_{n-1}{\tilde\xi}^{(n-2)}+{\tilde\xi}^{(n-1)}, \label{r} \end{align} where $\lambda_i$, $\forall i \in \mathbb{N}_{n-1}$, are strictly positive constants and, following the definition of the output tracking error mentioned in the problem statement, \begin{align}\label{tilde} {\tilde\xi}^{(i-1)}=\xi_i-\xi_\mathsf{d}^{(i-1)},~\forall i \in \mathbb{N}_{n}. \end{align} Taking the time derivative of \eqref{r} and using \eqref{tilde}, one has \begin{align} \dot r=\lambda_1{\tilde\xi}^{(1)}+\lambda_2{\tilde\xi}^{(2)}+\cdots+\lambda_{n-1}{\tilde\xi}^{(n-1)}+\dot\xi_n-\xi_\mathsf{d}^{(n)}. \label{r1} \end{align} Using \eqref{sys1} and \eqref{r1}, the closed-loop dynamics can be written as \begin{align} \dot r= \phi+f\left({\bm \xi}\right)+g\left({\bm \xi}\right)\upsilon+d-\xi_\mathsf{d}^{(n)}, \label{r2} \end{align} where $\phi=\sum_{i=1}^{n-1}\lambda_i{\tilde\xi}^{(i)}$. Consider a non-increasing smooth function $\psi_r:\mathbb R_0^+\rightarrow\mathbb R^+$ as a virtual performance constraint (VPC) over $r$, defined as \begin{align}\label{constr} \psi_r(t):=\psi_{r0}{e}^{-\mu_r t} +\psi_{r\infty}, \forall t\in\mathbb{R}_0^+, \end{align} where $\psi_{r\infty}$, $\psi_{r0}$ and $\mu_r$ have attributes similar to those of $\psi_{\infty}$, $\psi_{0}$ and $\mu$ for the PPC. Note that, in \eqref{constr}, $\psi_r$ and $\dot \psi_r$ are bounded for all $t\in\mathbb{R}_0^+$ and the bounds are given as \begin{align} \psi_{r\infty}&\le{\psi_r}\le \psi_{r0}+\psi_{r\infty}, \text{~and} \label{psib}\\ -\mu_r\psi_{r0}&\le\dot \psi_r \le 0. \label{psidb} \end{align} The control input is designed as \begin{align}\label{u} \upsilon=-\frac{2\bar \upsilon}{\pi}\arctan\left(\cfrac{\pi}{2\bar\upsilon}\tan\left(\frac{\pi r}{2\psi_r}\right)\right), \end{align} where $\bar\upsilon$ is the PIC, and $r$ and $\psi_r$ are as defined in \eqref{r} and \eqref{constr}, respectively.
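A small numerical sketch (ours, with illustrative values for $\bar\upsilon$ and $\psi_r$) demonstrates the key property of the control law \eqref{u}: the command stays strictly inside the input constraint for any filtered error inside the funnel, and saturates smoothly towards $\pm\bar\upsilon$ only as $|r|$ approaches $\psi_r$.

```python
import math

v_bar = 5.0    # prescribed input constraint (PIC), illustrative value
psi_r = 1.0    # current value of the VPC, illustrative value

def control(r):
    """v = -(2 v_bar / pi) * arctan((pi / (2 v_bar)) * tan(pi r / (2 psi_r)))."""
    return -(2.0 * v_bar / math.pi) * math.atan(
        (math.pi / (2.0 * v_bar)) * math.tan(math.pi * r / (2.0 * psi_r)))

# Inside the funnel the PIC is never violated, even as r -> psi_r:
for r in (0.0, 0.5, 0.9, 0.999, 0.999999):
    assert abs(control(r)) < v_bar

# As |r| -> psi_r the tangent blows up, the arctangent approaches pi/2,
# and the command approaches (but never reaches) -v_bar.
print(control(0.999999))
```

Because the arctangent is bounded by $\pi/2$, the PIC is built into the control structure itself rather than enforced by clipping, which is precisely what lets the PPC and PIC be treated jointly.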
In \eqref{u}, $r$ is designed using \eqref{r} and \eqref{tilde}, with \begin{align} \lambda_i&=\binom{n-1}{n-i}a^{n-i}, ~a>\mu,~\forall i \in \mathbb{N}_{n-1},\label{lambda} \end{align} and $\psi_r$, i.e., the VPC, is chosen based on the PPC defined in the problem statement, as follows: \begin{align} \mu_r&=\mu,\label{mu}\\ \psi_{r0}&=(a-\mu_r)^{n-1}\psi_0,\label{psir0}\\ \psi_{r\infty}&=a^{n-1}\psi_\infty.\label{psirinf} \end{align} \section{Preliminaries for the Stability Analysis} In this section, first, a few results will be established, which will motivate the idea behind the selection of the parameters of the VPC $(\psi_r)$ in \eqref{mu}--\eqref{psirinf} based on the PPC $(\psi)$. Further, a few lemmas will be presented, which will later be used in the stability analysis. The lemmas are as follows. \smallskip \begin{lemma}\label{lem1} Consider the signals $X(t) \in \mathbb{R}$ and $Z(t) \in \mathbb{R}$, such that $|X(t)|<X_0e^{-\mu_xt}+X_\infty,$ where $X_0$, $X_\infty$ and $\mu_x$ have attributes similar to those of $\psi_{0}$, $\psi_{\infty}$ and $\mu$ for the PPC.
If $z=\frac{x}{(s+a)^p},$ where $z=L\{Z(t)\}$, $x=L\{X(t)\}$, and $a>\mu_x$ and $p\in \mathbb{Z}^+$, then $|Z(t)|<Z_0e^{-\mu_xt}+Z_\infty$, with $Z_0=\frac{X_0}{(a-\mu_x)^p}$ and $Z_\infty=\frac{X_\infty}{a_{}^p}.$ \end{lemma} \begin{proof} Here, $z=\frac{x}{(s+a)^p},$ can be represented as a signal passing through a series of low pass filters as shown in the figure below: \smallskip \begin{center} \begin{tikzpicture} \node[draw, minimum width=1cm, minimum height=1cm](f1){$\frac{1}{s+a}$}; \node[draw, minimum width=1cm, minimum height=1cm, right=0.5cm of f1](f2){$\frac{1}{s+a}$}; \node[draw, minimum width=1cm, minimum height=1cm, right=2cm of f2](f3){$\frac{1}{s+a}$}; \draw [stealth-](f1.west)--++(-0.5,0) node[midway,above]{$x$}; \draw [-stealth](f1.east) -- (f2.west) node[midway,above]{$z_1$}; \draw [-stealth](f2.east) -- ++(1,0) node[midway,above](a3){}; \draw [stealth-](f3.west) -- ++(-0.5,0) node[midway,above](a4){}; \draw [-stealth](f3.east) -- ++(0.5,0) node[midway,above]{$z$}; \path (f2) -- node[auto=false]{\ldots} ++(3,0); \draw [decorate, decoration = {calligraphic brace,mirror}] (-0.5,-0.8) -- (5,-0.8) node[midway,below]{$p$ blocks}; \end{tikzpicture} \end{center} Let $z_1$ be the output of first filter, then $Z_1(t)= L^{-1}(z_1)$, can be written as $Z_1(t)=\int_0^t e^{-a(t-\tau)}X(\tau)d\tau.$ Since $|X(t)|<X_0e^{-\mu_xt}+X_\infty,$ thus we have $|Z_1(t)|<\int_0^t e^{-a(t-\tau)}(X_0e^{-\mu_x\tau}+X_\infty)d\tau.$ Simplifying it, we have $|Z_1(t)|<\frac{X_0}{(a-\mu_x)}(e^{-\mu_xt}-e^{-at})+ \frac{X_\infty}{a}(1-e^{-at}).$ Further it can be written as $ |Z_1(t)|<\frac{X_0}{(a-\mu_x)}e^{-\mu_xt}+\frac{X_\infty}{a}.$ Recursively following the above steps $(p-1)$ times, it can be easily found that $|Z(t)|<\frac{X_0}{(a-\mu_x)^p}e^{-\mu_xt}+\frac{X_\infty}{a_{}^p}.$ \end{proof} \smallskip \begin{corollary}\label{cor1} If in Lemma \ref{lem1}, $z=\frac{s^q}{(s+a)^q}x, q\in \mathbb{Z}^+$; then $|Z(t)|<Z_0e^{-\mu_xt}+Z_\infty$, with 
$Z_0={X_0}\left(\frac{2a-\mu_x}{a-\mu_x}\right)^q$ and $Z_\infty=2^qX_\infty.$ \end{corollary} \begin{proof} Similar to Lemma \ref{lem1}, $z=\frac{s^q}{(s+a)^q}x$ can be represented as a signal passing through a series of filters as shown in the figure below. \smallskip \begin{center} \begin{tikzpicture} \node[draw, minimum width=1cm, minimum height=1cm](f1){$\frac{s}{s+a}$}; \node[draw, minimum width=1cm, minimum height=1cm, right=0.5cm of f1](f2){$\frac{s}{s+a}$}; \node[draw, minimum width=1cm, minimum height=1cm, right=2cm of f2](f3){$\frac{s}{s+a}$}; \draw [stealth-](f1.west)--++(-0.5,0) node[midway,above]{$x$}; \draw [-stealth](f1.east) -- (f2.west) node[midway,above]{$z_1$}; \draw [-stealth](f2.east) -- ++(1,0) node[midway,above](a3){}; \draw [stealth-](f3.west) -- ++(-0.5,0) node[midway,above](a4){}; \draw [-stealth](f3.east) -- ++(0.5,0) node[midway,above]{$z$}; \path (f2) -- node[auto=false]{\ldots} ++(3,0); \draw [decorate, decoration = {calligraphic brace,mirror}] (-0.5,-0.8) -- (5,-0.8) node[midway,below]{$q$ blocks}; \end{tikzpicture} \end{center} Let $z_1$ be the output of the first filter, then we can write $z_1=x(1-\frac{a}{s+a})$.
Further, we have \begin{align} |Z_1(t)|= |L^{-1}(z_1)|<|X(t)|\hspace{-0.1em}+\hspace{-0.1em}a\hspace{-0.2em}\int_0^t\hspace{-0.2em} e^{-a(t-\tau)}X(\tau)d\tau.\label{sz2} \end{align} For the second term of \eqref{sz2}, performing a similar analysis as done in Lemma \ref{lem1}, we have $ |Z_1(t)|<{X_0}\left(\frac{2a-\mu_x}{a-\mu_x}\right)e^{-\mu_xt}+2X_\infty.$ Recursively following the above steps $(q-1)$ times, we have $|Z(t)|<{X_0}\left(\frac{2a-\mu_x}{a-\mu_x}\right)^qe^{-\mu_xt}+2^qX_\infty.$ \end{proof} \smallskip \begin{corollary}\label{coro1_2} If $z=\frac{s^q}{(s+a)^p}x,$ and, $p\ge q$ and $p,q\in \mathbb{Z}^+,$ then $|Z(t)|<Z_0e^{-\mu_xt}+Z_\infty$, with $Z_0={X_0}\frac{(2a-\mu_x)^q}{(a-\mu_x)^p}$ and $Z_\infty=\frac{2^q}{a^{p-q}}X_\infty.$ \end{corollary} \begin{proof} Following figure below and using Lemma \ref{lem1}, we can easily obtain the bounds of $Z_1(t)$. \smallskip \begin{center} \begin{tikzpicture} \node[draw, minimum width=1cm, minimum height=1cm](f2){$\frac{1}{s+a}$}; \node[draw, minimum width=1cm, minimum height=1cm, right=1cm of f2](f3){$\frac{1}{s+a}$}; \node[draw, minimum width=1cm, minimum height=1cm, right=0.5cm of f3](f4){$\frac{s}{s+a}$}; \node[draw, minimum width=1cm, minimum height=1cm, right=1cm of f4](f5){$\frac{s}{s+a}$}; \draw [stealth-](f2.west) -- ++(-0.5,0) node[midway,above]{$x$}; \draw [-stealth](f2.east) -- ++(0.25,0) node[midway,above](a3){}; \draw [stealth-](f3.west) -- ++(-0.25,0) node[midway,above](a4){}; \draw [-stealth](f3.east) -- ++(0.5,0) node[midway,above]{$z_1$}; \path (f2) -- node[auto=false]{\ldots} ++(1.5,0); \draw [decorate, decoration = {calligraphic brace,mirror}] (-0.5,-0.8) -- (2.5,-0.8) node[midway,below]{$(p-q)$ blocks}; \draw [-stealth](f4.east) -- ++(0.25,0) node[midway,above](a3){}; \draw [stealth-](f5.west) -- ++(-0.25,0) node[midway,above](a4){}; \draw [-stealth](f5.east) -- ++(0.5,0) node[midway,above]{$z$}; \path (f4) -- node[auto=false]{\ldots} ++(1.5,0); \draw [decorate, decoration = 
{calligraphic brace,mirror}] (3,-0.8) -- (6,-0.8) node[midway,below]{$q$ blocks}; \end{tikzpicture} \end{center} Further, using Corollary \ref{cor1}, it is straightforward to prove the given result. \end{proof} \smallskip \begin{lemma}\label{blemma} If $|r|<\psi_{r}$ and $\lambda_i=\binom{n-1}{n-i}a^{n-i}$, where $a>\mu_r$ is a positive design constant, then for all $ t\in\mathbb{R}_0^+$ and $\forall i\in\{0,1,\ldots, n-1\}$, \begin{align}\label{rilde} |\tilde\xi^{(i)}(t)|<\frac{(2a-\mu_r)^{i}\psi_{r0}}{(a-\mu_r)^{n-1}}e^{-\mu_rt}+\frac{2^i\psi_{r\infty}}{a^{n-i-1}}. \end{align} \end{lemma} \begin{proof} Using the Laplace transformation, \eqref{r} can be written as $ L\{\tilde \xi(t)\}=\frac{L\{r\}}{(s+a)^{n-1}}+\sum_{k=1}^{n-1}(\frac{1}{s^k}-\frac{\sum_{i=1}^{k}\binom{n-1}{n-i}a^{n-i}s^{(i-1)}}{s^k(s+a)^{n-1}})\tilde\xi^{(k-1)}(0).$ Further, we can have $L\{\tilde \xi^{(i)}(t)\}=s^iL\{\tilde \xi(t)\}-\sum_{k=0}^{i-1}s^{i-k-1}\tilde\xi^{(k)}(0),~\forall i \in \mathbb{N}_{n-1}.$ For the sake of simplicity, the proof will be restricted to the situation of $\tilde \xi^{(i)}(0)=0, ~\forall i\in\{0,1,\ldots, n-1\}$. Now, the general expression for $L\{\tilde \xi^{(i)}(t)\}, i\in\{0,1,\ldots, n-1\},$ can be written as \begin{align} L\{\tilde \xi^{(i)}(t)\}=\frac{s^iL\{r\}}{(s+a)^{n-1}}. \label{winitial} \end{align} Following the hypothesis and Corollary \ref{coro1_2}, one can deduce from \eqref{winitial} that $\forall t\in\mathbb{R}_0^+,$ and $i\in\{0,1,\ldots, n-1\}$, $|\tilde\xi^{(i)}(t)|<\frac{(2a-\mu_r)^{i}\psi_{r0}}{(a-\mu_r)^{n-1}}e^{-\mu_rt}+\frac{2^i\psi_{r\infty}}{a^{n-i-1}}.$ \end{proof} \smallskip \begin{remark} In \eqref{rilde}, substituting the parameters given in \eqref{mu}--\eqref{psirinf} for $i=0$ yields $|\tilde\xi|<\psi_0 e^{-\mu t}+\psi_\infty$, or $|\tilde\xi|<\psi$, one of our control goals.
Hence, if we can make the filtered tracking error $r$ follow its VPC $\psi_r$, i.e., satisfy the hypothesis of the above lemma, $|r|<\psi_r$, then the goal will be achieved. To achieve this, the control input is designed in \eqref{u}, based on the filtered tracking error, the VPC and the PIC. Next, a few results based on the above lemma are presented, and further, a few lemmas will be established to aid the stability analysis in a subsequent section. \end{remark} \smallskip \begin{corollary}\label{colo2_1} If $|r|<\psi_{r}$ and $\lambda_i=\binom{n-1}{n-i}a^{n-i}$, where $a>\mu_r$ is a positive design constant, then in \eqref{r2}, \begin{align} |\phi|&<\frac{\psi_{r0}\bar c_1}{(a-\mu_r)^{n-1}}+\frac{\psi_{r\infty}\bar c_2}{(a)^{n-1}},\label{phibound} \end{align} where $\bar c_1=(2a-\mu_r)\left((3a-\mu_r)^{n-1}-(2a-\mu_r)^{n-1}\right)$ and $\bar c_2=2a\left((3a)^{n-1}-(2a)^{n-1}\right)$. \end{corollary} \begin{proof} Following $\phi$ in \eqref{r2} and using Lemma \ref{blemma}, we have $|\phi|<\frac{\psi_{r0}}{(a-\mu_r)^{n-1}}\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}(2a-\mu_r)^i + \frac{\psi_{r\infty}}{a^{n-1}}\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}(2a)^i$. Further, following the identity given in Appendix \ref{appendix1}, i.e., $\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}h^{i}=h((a+h)^{n-1}-h^{n-1})$, with $h=2a-\mu_r$ and $h=2a$, for the first and second terms, respectively, one can readily obtain \eqref{phibound}.
\end{proof} \smallskip \begin{corollary}\label{coro2_2} If $|r|<\psi_{r}$ and $\lambda_i=\binom{n-1}{n-i}a^{n-i}$, $a>\mu_r$ is a positive design constant, then in \eqref{r2}, \begin{align} |f\left({\bm \xi}(t)\right)|\hspace{-0.1em}<\hspace{-0.1em} k_ln^{1/p^*}\hspace{-0.2em}\left(\hspace{-0.2em}\frac{(2a-\mu_r)^{n-1}\psi_{r0}}{(a-\mu_r)^{n-1}}\hspace{-0.1em}+\hspace{-0.1em}\frac{(2a)^{n-1}\psi_{r\infty}}{a^{n-1}}\hspace{-0.1em}+\hspace{-0.1em}\bar \xi_\mathsf{d}\hspace{-0.2em}\right).\label{fbound} \end{align} \end{corollary} \begin{proof} Since $\bm {\xi}(t)\in \mathbb{R}^n$, a finite-dimensional vector space, all norms are equivalent, i.e., one can find a constant $c$ such that $||\bm {\xi}(t)||_{p^*}\le c||\bm {\xi}(t)||_\infty$ for all $t\in\mathbb{R}_0^+$. Further, using the H\"older inequality, one can take $c=n^{1/p^*}$, for which the equivalence relation holds. Now, using Assumption \ref{af}, we have \begin{align}\label{fineq} |f\left({\bm \xi}(t)\right)|\le k_ln^{1/p^*}||\bm {\xi}(t)||_\infty. \end{align} Let $\bm{\tilde \xi}= [\tilde \xi, \dot{\tilde\xi}, \ldots, {\tilde\xi}^{(n-1)}]^T$ and $\bm{\xi_\mathsf{d}}=[\xi_\mathsf{d}, \xi_\mathsf{d}^{(1)}, \hdots,\xi_\mathsf{d}^{(n-1)}]^T$; then, following \eqref{tilde}, we have $\bm{\tilde \xi} = \bm{ \xi}-\bm{\xi_\mathsf{d}}$. Substituting $\bm{ \xi}=\bm{\tilde \xi}+\bm{\xi_\mathsf{d}}$ in \eqref{fineq} and applying the triangle inequality, we have $|f\left({\bm \xi}(t)\right)|\le k_ln^{1/p^*}(||\bm {\tilde \xi}(t)||_\infty+||\bm {\xi_\mathsf{d}}(t)||_\infty)$ for all $t\in\mathbb{R}_0^+$. Further, using Lemma \ref{blemma} and Assumption \ref{ade}, one can readily obtain \eqref{fbound}.
\end{proof} \smallskip \begin{corollary} \label{cor2.3} If $|r|<\psi_{r}$ and $\lambda_i=\binom{n-1}{n-i}a^{n-i}$, $a>\mu_r$ is a positive design constant, then \begin{equation}\label{rdot} \begin{split} \dot r&<\psi_{r0}c_1+\psi_{r\infty}c_2+\bar\xi_\mathsf{d}c_3+\bar d+ g\upsilon, \text{and}\\ \dot r&>-\psi_{r0}c_1-\psi_{r\infty}c_2-\bar\xi_\mathsf{d}c_3-\bar d+g\upsilon, \end{split} \end{equation} where $c_1=\frac{\bar c_1+k_ln^{1/p^*}(2a-\mu_r)^{n-1}}{(a-\mu_r)^{n-1}}$, $c_2=\frac{\bar c_2+k_ln^{1/p^*}(2a)^{n-1}}{a^{n-1}}$, and $c_3=k_ln^{1/p^*}+1$ are positive constants. \end{corollary} \begin{proof} It is straightforward to write \begin{align} |\phi+ f\left({\bm \xi}\right)+ d-\xi_\mathsf{d}^{(n)}|\le |\phi|+ |f\left({\bm \xi}\right)|+ |d|+|\xi_\mathsf{d}^{(n)}|. \end{align} Using Corollaries \ref{colo2_1} and \ref{coro2_2}, and following Assumptions \ref{ad} and \ref{ade}, one has the following inequality: \begin{align} |\phi+ f\left({\bm \xi}\right)+ d-\xi_\mathsf{d}^{(n)}|< \psi_{r0}c_1+\psi_{r\infty}c_2+\bar\xi_\mathsf{d}c_3+\bar d. \label{phifd} \end{align} Further, using \eqref{phifd} in \eqref{r2}, one gets \eqref{rdot}. \end{proof} \smallskip \begin{lemma}\label{lemmainf} If the filtered tracking error $r$ given in \eqref{r} is transgressing its upper bound mentioned in \eqref{constr}, then $(r-\psi_{r})$ will approach $0$ from the left side and \begin{align}\label{elemmainf} \lim_{(r-\psi_r)\uparrow 0}{\dot r}\ge-\mu_r\psi_{r0}. \end{align} \end{lemma} \begin{proof} It is straightforward to assume that before transgressing any bound, the tracking error must be within its prescribed performance bounds (i.e., $-\psi_r<r<\psi_r$). This implies that $-2\psi_r<r-\psi_r<0$. Thus, if $r$ transgresses its upper bound $\psi_r$, then $(r-\psi_r)$ approaches $0$ from the left side.
Consequently, when $(r-\psi_r)$ approaches $0$ from the left side, the time derivative of $(r-\psi_r)$ must be greater than or equal to $0$. As a result, we have \begin{align}\label{zphi} \lim_{(r-\psi_r)\uparrow 0}{\dot r}\ge\dot\psi_r. \end{align} Noting \eqref{psidb}, we can infer from \eqref{zphi} that $\lim_{(r-\psi_r)\uparrow 0}{\dot r}\ge-\mu_r\psi_{r0}$. \end{proof} \begin{lemma}\label{lemmasup} If the tracking error is transgressing its lower bound, then $(r+\psi_r)$ will approach $0$ from the right side, and \begin{align}\label{elemmasup} \lim_{(r+\psi_r)\downarrow 0}{\dot r}\le\mu_r\psi_{r0}. \end{align} \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{lemmainf}. \end{proof} \section{Stability analysis} In this section, the stability analysis is presented based on the results of the lemmas established in the previous section. \smallskip \begin{theorem}\label{theorem1} Consider the system \eqref{sys1}, with desired state trajectory $\xi_\mathsf{d}$, PPC on the output tracking error, $\psi=\psi_0e^{-\mu t}+\psi_\infty$, and a PIC $\bar \upsilon$. If the system \eqref{sys1} satisfies Assumptions \ref{af}--\ref{ade} and the control input is designed as in \eqref{u}, then the system output will follow its desired trajectory, the tracking error and the input will never transgress their PPC and PIC, respectively, and all the closed-loop signals will remain bounded, provided the following feasibility conditions for the PIC and PPC hold.
\begin{align} &\text{PIC:}~ \bar\upsilon>\frac{1}{\barbelow g}(\psi_{r\infty}c_2+\bar\xi_\mathsf{d}c_3+\bar d),\label{cond}\\ &\text{PPC:}~ |r(0)|(a\hspace{-0.1em}-\hspace{-0.1em}\mu)^{1-n}\hspace{-0.1em}<\hspace{-0.1em}\psi_0\hspace{-0.1em}<\hspace{-0.1em}\frac{\barbelow g \bar\upsilon \hspace{-0.1em}-\hspace{-0.1em}\psi_{r\infty}c_2\hspace{-0.1em}-\hspace{-0.1em}\bar\xi_\mathsf{d}c_3\hspace{-0.1em}-\hspace{-0.1em}\bar d}{(c_1\hspace{-0.1em}+\hspace{-0.1em}\mu)(a\hspace{-0.1em}-\hspace{-0.1em}\mu)^{n-1}}.\label{cond1} \end{align} \end{theorem} \begin{proof} The stability analysis is done by proof-by-contradiction. To begin the proof, we first state proposition \textit{P}$1$ as follows. \textit{P}$1$: If \eqref{cond} and \eqref{cond1} are true, the input is designed as in \eqref{u}, and initially $r$ is within its designed constraint $\psi_r$, then there exists at least one time instant at which $r$ violates its constraint, i.e., $$ \exists~ t_j \text{~such that~} |r(t_j)|>\psi_r(t_j), \forall t_j \in (t_1,\ldots, t_i, \ldots, t_{\bar n} ),$$ where $t_i<t_{i+1}$, $t_i$ represents the $i$th instant of violation of the performance constraint, $i \in \mathbb{N}$, and $\bar n \in \mathbb{N}$. We are now prepared for the proof. Suppose that \textit{P}$1$ is true; then we have the following. \begin{align}\label{zt1} |r(t)|<\psi_r(t),~ \forall t\in [0,t_1). \end{align} Suppose that at the time instant $t_1$, the tracking error is transgressing its performance constraints (i.e., upper or lower bounds). With the following analysis, we will see that the error never transgresses its performance constraints. Noting \eqref{zt1} and using \eqref{rdot} of Corollary \ref{cor2.3}, for all $t\in [0,t_1)$, we have \begin{align} \dot r&<\psi_{r0}c_1+\psi_{r\infty}c_2+\bar\xi_\mathsf{d}c_3+g\upsilon+\bar d, \text{and} \label{proof1}\\ \dot r&>-\psi_{r0}c_1-\psi_{r\infty}c_2-\bar\xi_\mathsf{d}c_3+g\upsilon-\bar d.
\label{proof2} \end{align} Following \eqref{u}, we infer that \begin{align} \liminf_{(r-\psi_r)\uparrow0}{\upsilon}&=-\bar \upsilon,\label{liminff}\\ \limsup_{(r+\psi_r)\downarrow0}{\upsilon}&=\bar \upsilon\label{limsupp}. \end{align} Consequently, following Assumption \ref{ag}, we have \begin{align} -\bar g\bar \upsilon&\le\liminf_{(r-\psi_r)\uparrow0}{g\upsilon}\le-\barbelow g\bar \upsilon,\label{liminf}\\ \barbelow g\bar \upsilon&\le\limsup_{(r+\psi_r)\downarrow0}{g\upsilon}\le\bar g\bar \upsilon.\label{limsup} \end{align} Using \eqref{liminf} and \eqref{proof1}, we can infer that for all $ t\in [0,t_1),$ \begin{align}\label{infproof} \liminf_{(r-\psi_r)\uparrow 0}{\dot r}<\psi_{r0}c_1+\psi_{r\infty}c_2+\bar\xi_\mathsf{d}c_3-\barbelow g\bar\upsilon+\bar d. \end{align} Similarly, using \eqref{limsup} and \eqref{proof2}, $\forall t\in [0,t_1),$ we obtain \begin{align}\label{supproof} \limsup_{(r+\psi_r)\downarrow 0}{\dot r}>-\psi_{r0}c_1-\psi_{r\infty}c_2-\bar\xi_\mathsf{d}c_3+\barbelow g \bar\upsilon-\bar d. \end{align} Now, recalling \eqref{mu}, \eqref{psir0}, and \eqref{cond1}, it follows that \begin{align} \psi_{r0}(c_1+\mu_r)<\barbelow g \bar\upsilon-\psi_{r\infty}c_2 -\bar\xi_\mathsf{d}c_3-\bar d. \label{step} \end{align} Further, \eqref{step} can be written as\vspace{-0.2cm} \begin{align} \psi_{r0}c_1+\psi_{r\infty}c_2+\bar\xi_\mathsf{d}c_3-\barbelow g\bar\upsilon+\bar d&<-\mu_r\psi_{r0}, \text{or} \label{step1}\\ -\psi_{r0}c_1-\psi_{r\infty}c_2-\bar\xi_\mathsf{d}c_3+\barbelow g \bar\upsilon-\bar d&>\mu_r\psi_{r0}.\label{step2} \end{align} Now, incorporating \eqref{step1} in \eqref{infproof} and \eqref{step2} in \eqref{supproof}, it can be inferred that over $[0, t_1)$,\vspace{-0.3cm} \begin{align} \liminf_{(r-\psi_r)\uparrow 0}{\dot r}&<-\mu_r\psi_{r0},\label{infproof1}\\ \limsup_{(r+\psi_r)\downarrow 0}{\dot r}&>\mu_r\psi_{r0}.
\label{supproof1} \end{align} Recalling Lemmas \ref{lemmainf} and \ref{lemmasup}, it can be inferred that \eqref{infproof1} contradicts \eqref{elemmainf}, and \eqref{supproof1} contradicts \eqref{elemmasup}. Hence, over $[0, t_1)$, the tracking error will never approach its performance constraints. Consequently, it can be concluded that there is no $t_1$ at which $r$ violates its designed constraint $\psi_r$. Since there does not exist a first instant of violation of the designed constraint, there does not exist any time at which $r$ violates its constraint $\psi_r$. Therefore, it can be concluded that \textit{P}$1$ is false. Now, following \eqref{mu}, \eqref{psir0}, and \eqref{cond1}, it follows that $\psi_{r0}>|r(0)|$, and following \eqref{constr}, $\psi_{r}(0)>|r(0)|$. Thus, initially $r$ is within its designed VPC ($\psi_r$), and further noting that Proposition \textit{P}$1$ is false, we have the following. \begin{align}\label{ztf} |r(t)|<\psi_r(t),~ \forall t\ge0. \end{align} Now, following Lemma \ref{blemma} and using \eqref{mu}-\eqref{lambda}, we have \begin{align}\label{finalbound} |\tilde\xi^{(i)}(t)|\hspace{-0.2em}<\hspace{-0.2em}(2a-\mu_r)^{i}\psi_{0}e^{-\mu_rt}\hspace{-0.2em}+\hspace{-0.2em}{(2a)}^i\psi_{\infty},~ i\hspace{-0.1em}\in\hspace{-0.1em}\{0,1,\ldots, n\hspace{-0.2em}-\hspace{-0.2em}1\}. \end{align} Using \eqref{finalbound}, it can be concluded that $\tilde\xi ^{(i)}$ will converge asymptotically to the set $\Gamma_{i}:=\{\tilde\xi ^{(i)}\in \mathbb{R}:|\tilde\xi ^{(i)}|<{(2a)}^i\psi_{\infty}\}$ and the output tracking error $\tilde\xi$ will follow its PPC, i.e., $\psi(t)$, $\forall t\in \mathbb{R}^{+}_0$. Now, we will establish the boundedness of all the closed-loop signals. Following \eqref{ztf} and \eqref{finalbound}, we have $r \in \mathcal{L}^\infty $ and $\tilde \xi ^{(i)}\in \mathcal{L}^\infty$.
Consequently, following Assumption \ref{ade} and recalling from \eqref{tilde} that $\xi_i=\tilde\xi^{(i-1)}+\xi_\mathsf{d}^{(i-1)}$, we have $\xi_i \in \mathcal{L}^\infty, ~\forall i \in \mathbb{N}_{n}$. Since $f(\bm\xi)$ in \eqref{sys1} is a smooth nonlinear function, we have $f(\bm\xi) \in \mathcal{L}^\infty$. Also, it is straightforward to follow from \eqref{u} that if $|r|<\psi_r$, then $|\upsilon|<\bar \upsilon$; thus, we have $\upsilon \in\mathcal{L}^\infty $. Following \eqref{sys1} and \eqref{r2}, the established boundedness of the above signals, and Assumptions \ref{ag} and \ref{ad} (i.e., $g(\bm\xi)$ and the disturbance are bounded), we have $\dot\xi_i \in\mathcal{L}^\infty, ~\forall i \in \mathbb{N}_{n} $ and $\dot r \in\mathcal{L}^\infty$, respectively. Thus, all closed-loop signals are bounded. This completes the proof. \end{proof} \section{Simulation Results and Discussion} In this section, a simulation study is presented to show the effectiveness of the proposed approach. Consider the control-affine nonlinear system \begin{equation}\label{sys2} \begin{split} \dot \xi_1 &=\xi_2,\\ \dot \xi_2&=-0.5(\sin{\xi_1}+\xi_2)+(3+\cos{\xi_2})\upsilon+d,\\ y&=\xi_1, \end{split} \end{equation} where $\bm\xi(t)=[\xi_1,\xi_2]^T\in \mathbb{R}^2$, $\upsilon(t)\in \mathsf{U}\subset \mathbb{R}$, and $y$ are the state, the input, and the output of the system \eqref{sys2}, respectively, and $d(t)=0.5\sin{2t}$ is a disturbance. The desired output is $\xi_\mathsf{d}(t)=0.5\sin{t}$. Correspondingly, following \eqref{sys1}, we note that $f(\bm\xi)=-0.5(\sin{\xi_1}+\xi_2)$ and $g(\bm\xi)=3+\cos{\xi_2}$, which are assumed to be unknown. For \eqref{sys2}, one can readily obtain $k_l=0.5$, $\barbelow g=2$, $\bar d =0.5$, and for the given desired output, we have $\bar \xi_\mathsf{d}=0.5$. The design parameter $a$ is chosen as $a=2$; accordingly, following Corollary \ref{cor2.3}, $c_1=9$, $c_2=6$, and $c_3=2$.
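These constants can be reproduced numerically. The following is an illustrative check (not part of the paper) that assumes $n=2$, $p^*=1$, and $\mu_r=\mu=1$, which are the values consistent with the reported numbers:

```python
# Simulation-study parameters from the text: n = 2 states, a = 2, k_l = 0.5.
# Assumed (not stated explicitly here): p* = 1 and mu_r = mu = 1.
n, a, k_l, mu_r, p_star = 2, 2.0, 0.5, 1.0, 1.0

n_pow = n ** (1 / p_star)  # n^{1/p*} = 2

# bar_c1 and bar_c2 as defined in Corollary colo2_1
c1_bar = (2 * a - mu_r) * ((3 * a - mu_r) ** (n - 1) - (2 * a - mu_r) ** (n - 1))
c2_bar = 2 * a * ((3 * a) ** (n - 1) - (2 * a) ** (n - 1))

# c1, c2, c3 as defined in Corollary cor2.3
c1 = (c1_bar + k_l * n_pow * (2 * a - mu_r) ** (n - 1)) / (a - mu_r) ** (n - 1)
c2 = (c2_bar + k_l * n_pow * (2 * a) ** (n - 1)) / a ** (n - 1)
c3 = k_l * n_pow + 1

print(c1, c2, c3)  # 9.0 6.0 2.0
```

This matches the values $c_1=9$, $c_2=6$, and $c_3=2$ stated in the text.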
Now, following the feasibility condition \eqref{cond}, we have PIC: $\bar\upsilon> 0.78$. The goal is to design the control law $\upsilon$ such that the output tracks the desired trajectory without transgressing the PIC: $\bar \upsilon=6$, and the PPC: $\psi(t)=\psi_0e^{-\mu t}+\psi_{\infty}$ (with $\psi_0=1$, $\psi_{\infty}=0.01$, and $\mu=1$) on the tracking error. It can be easily verified using \eqref{cond1} that the PPC satisfies its feasibility condition for the given PIC, i.e., $\psi_0<1.1$. The controller is designed using \eqref{u}, $\upsilon=-\frac{2\bar \upsilon}{\pi}\arctan\left(\frac{\pi}{2\bar\upsilon}\tan\left(\frac{\pi r}{2\psi_r}\right)\right)$, where, as mentioned in \eqref{r}, $r=\lambda_1\tilde\xi_1+\dot{\tilde{\xi}}_1$, with $\tilde{\xi}_1=\xi_1-\xi_\mathsf{d}$ and $\dot{\tilde\xi}_1=\xi_2-\dot\xi_\mathsf{d}$, and $\psi_r=\psi_{r0}e^{-\mu_rt}+\psi_{r\infty}$. The parameters $\mu_r$, $\psi_{r0}$, $\psi_{r\infty}$, and $\lambda_1$ are computed from \eqref{mu}--\eqref{lambda} using the aforementioned parameters, and the simulation study is done for two sets of initial conditions, i.e., $\bm\xi(0)=[0.4 ~0.29]^T$ and $\bm\xi(0)=[0.6 ~0.29]^T.$ For both sets of initial conditions, it can be observed from Fig. \ref{fig1} that the output tracks the desired trajectory and the tracking error follows the PPC. Also, from Fig. \ref{fig2}, it can be seen that the input follows its PIC. Further, it can be observed from Fig. \ref{fig2} that the filtered tracking error follows its VPC. It is to be noted that, since $\xi_\mathsf{d}(0)=0$ and $\dot\xi_\mathsf{d}(0)=0.5$, with the change in the initial condition $\bm\xi(0)$ from $[0.4 ~0.29]^T$ to $[0.6 ~0.29]^T$, $\bm{\tilde\xi}(0)=[\tilde\xi_1(0)~ \dot{\tilde{\xi}}_1(0)]^T$ changes from $[0.4 ~-0.21]^T$ to $[0.6 ~-0.21]^T$, respectively. Consequently, $r(0)$ changes from $0.59$ to $0.99$, and $|r(0)|(a-\mu)^{1-n}$ also changes from $0.59$ to $0.99$.
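The reported values of $r(0)$ and the feasibility check can be reproduced directly (an illustrative script, not part of the paper; it assumes $\lambda_1=\binom{1}{1}a^{n-1}=2$, $\xi_\mathsf{d}(0)=0$, and $\dot\xi_\mathsf{d}(0)=0.5$, as stated above):

```python
a, mu, n = 2.0, 1.0, 2
lam1 = a ** (n - 1)   # lambda_1 = binom(n-1, n-1) * a^{n-1} = 2
psi0 = 1.0            # prescribed psi_0

r0_vals = []
for xi1_0 in (0.4, 0.6):
    tilde_xi1 = xi1_0 - 0.0       # xi_d(0) = 0.5*sin(0) = 0
    tilde_xi1_dot = 0.29 - 0.5    # dot xi_d(0) = 0.5*cos(0) = 0.5
    r0 = lam1 * tilde_xi1 + tilde_xi1_dot   # filtered tracking error at t = 0
    r0_vals.append(r0)
    # feasibility condition: |r(0)| (a - mu)^{1-n} < psi_0
    print(round(r0, 2), abs(r0) * (a - mu) ** (1 - n) < psi0)
```

This prints `0.59 True` and `0.99 True`, confirming that both initial conditions satisfy the PPC feasibility condition, with the second one close to its limit.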
It can be calculated that a further increase in $\xi_1(0)$ beyond $0.6$ will violate the feasibility condition $|r(0)|(a-\mu)^{1-n}<\psi_0$; it can also be observed from Fig. \ref{fig2} that initially the control input is near its PIC, thus motivating the feasibility condition. The observations made from Figs. \ref{fig1} and \ref{fig2} are as expected and consistent with Theorem \ref{theorem1}. \begin{figure} \centering \includegraphics[width=8.5cm,height=5cm]{1.eps} \vspace{-0.7cm} \caption{Top: output tracking performance (\lep $\xi_1$ for $\bm{\xi}(0)=[0.4 ~0.29]^T$, \leb $\xi_1$ for $\bm{\xi}(0)=[0.6 ~0.29]^T$, \leblack $\xi_\mathsf{d}$ (desired output)); Bottom: prescribed performance of the tracking error (\ledas PPC $(\psi)$, \lep $\tilde\xi_1$ for $\bm{\xi}(0)=[0.4 ~0.29]^T$, \leb $\tilde\xi_1$ for $\bm{\xi}(0)=[0.6 ~0.29]^T$). } \label{fig1} \end{figure} \begin{figure} \centering \includegraphics[width=8.5cm,height=5cm]{2.eps} \vspace{-0.7cm} \caption{Top: control input (\lep $\bar \upsilon$ for $\bm{\xi}(0)=[0.4 ~0.29]^T$, \leb $\bar \upsilon$ for $\bm{\xi}(0)=[0.6 ~0.29]^T$); Bottom: performance of the filtered tracking error (\lep $r$ for $\bm{\xi}(0)=[0.4 ~0.29]^T$, {\leb} $r$ for $\bm{\xi}(0)=[0.6 ~0.29]^T$) for the designed VPC ({\ledas $\psi_r$}). } \label{fig2} \end{figure} \section{Conclusion} A controller has been proposed for the tracking problem of a control-affine nonlinear system subject to PPC and PIC. The structure of the controller is simple, as it does not require any adaptive laws, calculation of derivatives, system knowledge, or approximation. Hence, the controller is easy to implement and approximation-free. Also, the derived feasibility conditions restrict arbitrary prescription of the constraints. The simulation results confirm these facts. In the future, this work will be extended to multiagent systems.
\begin{appendices} \section{Proof for $\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}h^{i}=h((a+h)^{n-1}-h^{n-1}).$} \label{appendix1} Using the binomial identity $\binom{m}{k}=\binom{m}{m-k}$, we have $\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}h^{i}=\sum_{i=1}^{n-1}\binom{n-1}{i-1}a^{n-i}h^{i}.$ Further, using the binomial identity $\binom{m}{k}=\binom{m-1}{k}+\binom{m-1}{k-1},$ it can be written as $\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}h^{i}=\sum_{i=1}^{n-1}\left(\binom{n}{i}-\binom{n-1}{i}\right)a^{n-i}h^{i}.$ Substituting $\sum_{i=1}^{n-1}\binom{n}{i}a^{n-i}h^{i}=(a+h)^{n}-h^n-a^n$ and $\sum_{i=1}^{n-1}\binom{n-1}{i}a^{n-i}h^{i}=a(a+h)^{n-1}-a^n$, it can be written as $\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}h^{i}=(a+h)^{n}-h^n-a(a+h)^{n-1}.$ Further simplifying, we have $\sum_{i=1}^{n-1}\binom{n-1}{n-i}a^{n-i}h^{i}=h((a+h)^{n-1}-h^{n-1}).$ \end{appendices} \bibliographystyle{ieeetr}
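The identity proved in the appendix can also be confirmed numerically; exact integer arithmetic makes the equality strict (an illustrative check, not part of the paper):

```python
from math import comb

def lhs(n, a, h):
    # sum_{i=1}^{n-1} C(n-1, n-i) a^{n-i} h^i
    return sum(comb(n - 1, n - i) * a ** (n - i) * h ** i for i in range(1, n))

def rhs(n, a, h):
    # h((a+h)^{n-1} - h^{n-1})
    return h * ((a + h) ** (n - 1) - h ** (n - 1))

# integer inputs, so both sides are computed exactly
for n in range(2, 8):
    for a in (1, 2, 3):
        for h in (1, 2, 5):
            assert lhs(n, a, h) == rhs(n, a, h)
print("identity holds")
```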
Healthcare Industry News: neurodegenerative Biopharmaceuticals Neurology Venture Capital News Release - March 26, 2009 Neuraltus Pharmaceuticals Raises $17 Million Series A Funding MENLO PARK, CA--(Healthcare Sales & Marketing Network)--Mar 26, 2009 -- Neuraltus Pharmaceuticals, Inc., a privately held pharmaceutical company developing proprietary small molecule drugs for neurodegenerative diseases, today announced the completion of $17 million in Series A financing. Co-investing in the Series A funding are Latterell Venture Partners of San Francisco, CA, VantagePoint Venture Partners of San Bruno, CA and Adams Street Partners of Chicago, IL. Dr. James Woody of Latterell, Annette Bianchi of VantagePoint and Terry Gould of Adams Street will join the Neuraltus Board of Directors. Neuraltus was founded in 2005 by Michael McGrath, MD, PhD, Professor of Laboratory Medicine at the University of California, San Francisco, Edgar Engleman MD, Professor of Medicine and Pathology at Stanford University School of Medicine and Ari Azhir, PhD. "Neuraltus offers a strong pipeline of compounds for the treatment of serious neurological diseases for which there are few if any clinical options," said Dr. James Woody of Latterell Venture Partners. "It is a great vote of confidence when investors with so much experience in biotechnology have chosen to devote their resources to Neuraltus," said Ari Azhir, CEO and co-founder. Neuraltus has a number of compounds in the pipeline, including a drug to treat ALS (Amyotropic Lateral Sclerosis, also known as Lou Gehrig's Disease), a drug that will reduce dyskinesia (jerky involuntary movement) in patients suffering from Parkinson's Disease, and a drug for the treatment of Gaucher's Disease (a Lysosomal Storage Disorder). The Series A funding will enable Neuraltus to conduct and complete phase I and phase II clinical trials for each of these disorders. 
"We believe Neuraltus has the potential to develop innovative drugs for these intractable diseases," said Annette Bianchi of VantagePoint. ALS is a progressive neurodegenerative disease that affects nerve cells in the brain and the spinal cord. When the motor neurons die, the ability of the brain to initiate and control muscle movement is lost. As voluntary muscle action is progressively affected, patients in the later stages of the disease may become totally paralyzed. The progressive degeneration of the motor neurons in ALS eventually leads to death. Most people who develop ALS are between the ages of 40 and 70, but victims of ALS can be as young as 20 or 30. "We think Neuraltus has discovered a way to slow or even to stop the progression of ALS," said Dr. Michael McGrath, co-founder of Neuraltus Pharmaceuticals. "Our drug functions in a new way by attacking a novel disease target. We think it is a promising platform for the treatment of ALS as well as other neurodegenerative diseases." Parkinson's Disease occurs when the neurons in the brain that produce dopamine die or become impaired. Dopamine allows smooth, coordinated function of the body's muscles and movement. Drugs that replace or simulate dopamine activity are the mainstay of Parkinson's Disease treatment regimens. However, after a short period of time dyskinesia, a major side effect of this treatment program, occurs. Dyskinesia manifests as tremulous jerky movements that are often as severe as the symptoms of Parkinson's Disease. Currently there is no effective therapy for this dyskinesia. The first effective treatment of dyskinesia will enable Parkinson's patients to benefit from the effectiveness of Parkinson's therapies. Gaucher's Disease is the most prevalent Lysosomal Storage Disorder and results from a specific enzyme deficiency in the body, caused by a genetic mutation received from both parents. The disease is progressive, incurable and causes severe disability and death. 
About Neuraltus Pharmaceuticals, Inc. Neuraltus Pharmaceuticals, Inc. is a privately held biopharmaceutical company dedicated to developing and commercializing innovative small molecule therapeutics for the treatment of neurodegenerative disorders About Latterell Venture Partners Latterell Venture Partners ("LVP") invests in early stage biotechnology, specialty pharmaceutical, and medical device companies with innovative technologies, large market opportunities, and passionate entrepreneurs. The LVP General Partners offer a unique blend of venture capital, entrepreneurial, technical, clinical and collaborative skills which enable entrepreneurs to create highly successful new startups. The LVP team has built more than thirty successful biomedical companies over the past twenty years. Among the recent companies LVP has played a major role in are device companies such as Pathway, Cardiomind, and Xtent, and biotech companies such as OncoMed and Proteolix, where LVP was the founding investor. More information about LVP is available on the LVP website at www.LVPcapital.com. About VantagePoint Venture Partners VantagePoint Venture Partners is a leader in investing in 21st Century technologies and partners with entrepreneurs in the CleanTech, Healthcare and Information Technology sectors. With a large investment team of experts, a broad network of strategic partners and advisors, and more than $4.5 billion in committed capital, the Firm has the depth of resources to help build transformative companies that are clear leaders in their categories. The Silicon Valley Firm has investments in more than 70 companies including TargeGen, Anthera, Conceptus, WageWorks, LifeMasters, Better Place, BrightSource Energy, Mascoma, Miasole, ReachLocal and others. For more information visit www.vpvp.com. About Adams Street Partners Adams Street Partners is one of the largest managers of private equity investments in the world and has one of the longest histories. 
Together with its predecessor organizations, Adams Street Partners has been investing in private equity partnerships since 1979, managing direct investments in private equity since 1972 and is credited with establishing the first private equity fund of funds for institutional investors. The Firm currently has 100+ employees and $19.5 billion of assets under management. Adams Street Partners has offices in Chicago, London, Menlo Park and Singapore. Source: Neuraltus Pharmaceuticals
# Problem: What is the solubility in mole/liter for copper(II) sulfide at 25 °C given a Ksp value of 1.3 x 10^-36? Write using scientific notation and use 1 or 2 decimal places (even though this is strictly incorrect!)

###### FREE Expert Solution

For this problem, we're being asked to calculate the solubility in mole/liter for copper(II) sulfide at 25 °C given a Ksp value of 1.3 x 10^-36.

Since copper(II) sulfide (CuS) is an ionic compound, it forms ions when dissociating in water. The sulfide ion, S^2-, has a charge of –2, so copper has a charge of +2. The dissociation of CuS in water is as follows:

CuS(s) ⇌ Cu^2+(aq) + S^2-(aq)

We can construct an ICE table for the dissociation of CuS. Remember that solids are ignored in the ICE table and the Ksp expression.
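Because CuS dissociates 1:1, Ksp = [Cu^2+][S^2-] = s^2, so the molar solubility is s = √Ksp. A quick numerical check (not from the original page):

```python
from math import sqrt

ksp = 1.3e-36
s = sqrt(ksp)        # molar solubility for a 1:1 salt: Ksp = s^2
print(f"{s:.2e} M")  # 1.14e-18 M
```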
\section{Introduction} \label{sec:introduction} Glaucoma is one of the primary leading causes of blindness\cite{mary2016retinal}. The loss of sight due to glaucoma is irreversible, while that due to some other eye diseases such as myopia and presbyopia is not. Thus, early diagnosis of glaucoma for effective treatment and vision conservation matters a lot for patients. However, the symptoms of glaucoma in the early stage are difficult to perceive. One of the standard methods widely used by eye specialists nowadays is the optic nerve head (ONH) assessment\cite{mary2016retinal} in fundus retina images. However, mastering the tricks of performing ONH assessment remains challenging. Therefore, some automatically calculated parameters were presented and popularized as quantitative clinical measurements, such as the cup-to-disc ratio (CDR), which means the ratio of the vertical cup diameter to the vertical disc diameter in the fundus retina image. Generally, a larger CDR represents a higher possibility of glaucoma and vice versa. However, manually labeling the mask of the cup or disc region is labor-consuming, which makes image-level category labels necessary and reasonable for automatically screening glaucoma. In the past several years, Deep Learning (DL) based methods have received unprecedented attention and achieved state-of-the-art performance in many fields, including medical image analysis\cite{ravi2017deep}. Glaucoma can be screened from fundus retina images by DL models which are well trained on sufficient data and precise image-level labels\cite{fu2018disc-aware}. However, DL models trained on one single site cannot be directly generalized and applied to other sites. The distributions of training and testing data are partially different, so the pre-trained model may fail to fulfill the diagnosis task. Commonly, the difference between datasets can be seen as a domain gap.
For example, the discrepancy between images from different datasets can be reflected in many image statistical traits, such as color style, contrast, resolution, and so on. Also, the joint distributions of images and labels may be quite different between the source and the target domain, i.e., $P(x^s,y^s ) \neq P(x^t,y^t )$. This is mainly because the marginal distributions are different, i.e., $P(x^s)\neq P(x^t)$, even if the conditional distributions, i.e., $P(y^s\vert x^s)$ and $P(y^t\vert x^t)$, are similar. Many methods have been proposed to solve this problem. Fine-tuning\cite{tajbakhsh2016convolutional} is the most widely used in real practical applications. However, fine-tuning cannot be applied when the dataset from a new target domain is completely unlabeled. To solve the domain adaptation problem, a novel \emph{self-adaptive transfer learning} (SATL) framework is proposed in this paper for glaucoma diagnosis. Specifically, we train a convolutional neural network in the source domain with sufficient labeled data. Then, the feature extraction layers of this trained model are shared as the encoder of a reconstruction network. The reconstruction network is trained in the target domain using only unlabeled data. The encoder is adapted to fit the distribution of the target data while maintaining the ability for glaucoma diagnosis. The contributions of this paper can be concluded as follows: (1) To the best of our knowledge, our work is the first to investigate transfer adaptation learning for the classification of glaucoma with multicenter fundus retina images. (2) Our framework only uses unlabeled data in the target domain and is independent of source domain data, so it has great potential for real-scene applications and can meet privacy protection policies for medical data.
(3) Experimental results show that our framework can preserve most of the classification ability of the off-the-shelf model and meanwhile improve its classification performance on target domain data. Even though it is totally independent of source domain data, it outperforms other state-of-the-art domain adaptation methods such as CycleGAN, which heavily relies on source domain data in the adaptation stage. \section{Related Works} Transfer adaptation learning (TAL)\cite{zhang2019transfer,wang2018deep} is the area most relevant to the proposed method. It is a combination of transfer learning (TL) and domain adaptation (DA) and can be categorized into three classes, which will be introduced respectively. \textbf {Instance Re-weighting Adaptation Learning (IRAL)} Methods in this area assign weights to the source domain instances based on their similarity to the target domain instances \cite{zhu2020boundary-weighted,qi2019label-efficient}. Via re-sampling or importance weighting, the performance of the trained source classifier in the target domain can be enhanced. However, the estimation of the assigned weights is under a prior-decided parametric distribution assumption\cite{zhang2019transfer}, which may differ from the true parametric distribution. \textbf {Feature Adaptation Learning (FAL)} For adapting datasets from multiple domains, methods in this category are widely proposed to find a feature representation space where the projected features from the target and source domains follow similar distributions \cite{wang2019patch-based,shen2020domain-invariant}. In the past few years, the most famous FAL methods have been GAN-based domain adaptation models. However, finding a general feature space for most domains remains challenging. Also, training a GAN-based domain adaptation model needs both source and target domain data, which is more and more impractical in the real scene due to the privacy protection policy for medical data.
\textbf {Self-Supervised Transfer Learning (SSTL)} Algorithms in this category focus on training a supervised classifier on the source domain and then transferring its knowledge to the target domain via self-supervised learning \cite{cheng2015multimodal,cheplygina2018transfer,ghifary2016deep,sun2020gan}. For example, Cheplygina \emph{et al}\cite{cheplygina2018transfer} investigated a Gaussian texture feature-based classification model of chronic obstructive pulmonary disease (COPD) in multicenter datasets. These methods integrate the data information from different domains by extracting some manually designed features from images, which limits the generalization ability of the model. Ghifary \emph{et al} \cite{ghifary2016deep} is the work most closely related to our framework. Our method differs from \cite{ghifary2016deep} mainly in the network structure. Moreover, we explore the application to glaucoma diagnosis on several datasets. \begin{figure*}[!ht] \begin{center} \centerline{\includegraphics[width=\linewidth]{fig1.png}} \caption{Illustration of the self-adaptive transfer learning (SATL) strategy, which is independent of the source domain data and more suitable for real-scene applications.} \label{fig1} \end{center} \end{figure*} \section{Method} The framework of the proposed method is illustrated in Fig. \ref{fig1}. The proposed SATL framework can transfer a pre-trained source classification model to a target domain without using either source images or labels. Let $f^{s}: \mathcal{X}^{s} \to \mathcal{Y}^{s}$ be the source pre-trained classification model and $f^{t}: \mathcal{X}^{t} \to \mathcal{X}^{t}_{rec}$ be the target reconstruction model. The feature encoder is denoted as $f_{enc}:\mathcal{X} \to \mathcal{F}$ and the lightweight classification function as $f_{cls}: \mathcal{F} \to \mathcal{Y}$. We denote one more function: a decoder $f_{dec}:\mathcal{F} \to \mathcal{X}$ in $f^{t}$.
Then, given an input sample $x$, $f^{s}$ and $f^{t}$ can be formulated as: \begin{equation} f^{s}(x) = f^{s}_{cls}(f^{s}_{enc}(x)); \quad f^{t}(x) = f^{t}_{dec}(f^{t}_{enc}(x)). \end{equation} Once $f^{t}$ is trained, we can build the self-adapted classification model $f^{t}_{SA}$ for target domain image classification as $f^{t}_{SA}(x) = f^{s}_{cls}(f^{t}_{enc}(x))$. As shown in Fig. \ref{fig1}, the reconstruction model $f^{t}$ is implemented as a variational auto-encoder (VAE), which compresses the image information and samples a latent vector $z$. Its encoder $f^{t}_{enc}$ is initialized with the pre-trained source encoder $f^{s}_{enc}$. The loss function used to optimize the proposed self-adaptive reconstruction model can be represented as: \begin{equation} L(f^{t}_{enc},f^{t}_{dec},x^{t}) = \alpha \cdot L_{KL} + \beta \cdot L_{rec}, \end{equation} \begin{equation} L_{KL} = KL\left(f_{enc}^{t}(z|x^{t}) \,\|\, p(z)\right), \end{equation} where the first term $L_{KL}$ is the KL divergence between the latent vector distribution produced by the encoder and the prior distribution $p(z)$. The second term $L_{rec}$ is the reconstruction loss between the output image and the input image. Instead of using a single MSE loss, we use a newly designed combination of two loss terms following \cite{li2017demystifying}. We argue that the self-adaptive reconstruction model should be guided to reconstruct the high-level style information in the target domain images rather than just the pixel-wise texture. Thus, the reconstruction loss function designed in this paper is: \begin{equation} L_{rec} = \beta_1 \cdot \sum_{i,j,k}(B^{output}_{ijk} - B^{input}_{ijk})^2 + \beta_2 \cdot \sum_{m,n}(G^{output}_{mn} - G^{input}_{mn})^2, \end{equation} where $B^{output}$ and $B^{input}$ denote the output and input of the reconstruction model, respectively, $i,j,k$ and $m,n$ are position indexes, and $G^{output}$ and $G^{input}$ are the Gram matrices of $B^{output}$ and $B^{input}$.
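As a rough illustration, the reconstruction loss above can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's implementation: the feature-map shape $(n_i, n_j, n_k)$ and the channel-wise flattening used for the Gram matrix are our assumptions.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a map with shape (n_i, n_j, n_k).

    The map is flattened channel-wise to v of shape (n_i, n_j * n_k),
    then G = v v^T / (n_i * n_j * n_k).
    """
    n_i, n_j, n_k = feat.shape
    v = feat.reshape(n_i, n_j * n_k)
    return v @ v.T / (n_i * n_j * n_k)

def reconstruction_loss(b_output, b_input, beta1=0.2, beta2=0.5):
    """L_rec = beta1 * pixel-wise squared error + beta2 * Gram (style) error."""
    pixel_term = np.sum((b_output - b_input) ** 2)
    gram_term = np.sum((gram_matrix(b_output) - gram_matrix(b_input)) ** 2)
    return beta1 * pixel_term + beta2 * gram_term
```

The default weights $\beta_1 = 0.2$ and $\beta_2 = 0.5$ follow the values reported later in the implementation details.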
The Gram matrix can be calculated as: \begin{equation} G = \frac{1}{n_i \times n_j \times n_k} \mathbf{v} \mathbf{v}^{T}, \end{equation} where $\mathbf{v}$ is the flattened column vector of $B^{output}$ or $B^{input}$. \section{Experiments and Results} \subsection{Datasets} \begin{table} \centering \caption{The statistical differences between the three datasets} \begin{tabular}{c|c|c|c|c} \toprule Dataset& Domain& Samples & Pos vs. Neg & Average image size\\ \hline LAG (public)& Source / Target& 4854& $3143:1689$& $300 \times 300$\\ \hline pri-RFG (private)& Source / Target& 1881& $1013:868$& $989 \times 989$\\ \hline REFUGE (public)& Target only& 400& $40:360$& $1062 \times 1062$\\ \bottomrule \end{tabular} \label{tab1} \end{table} We used two public datasets and one private dataset to validate the proposed SATL framework on the glaucoma diagnosis task. The first public dataset is the large-scale attention-based glaucoma (LAG) dataset\cite{li2019attention} established by Li \emph{et al}. The second is from the REFUGE challenge\cite{orlando2020refuge}. Moreover, we collected 1881 retina fundus images from a collaborating hospital and built a private dataset (pri-RFG), with all images labeled by experienced ophthalmologists. The details of the three datasets (LAG, REFUGE, pri-RFG) are summarized in Table \ref{tab1}. The scales, average image sizes and sample ratios of the datasets differ considerably, which makes transfer learning between them challenging. Due to its small number of samples, REFUGE was used only as a target domain dataset, while LAG and pri-RFG were used for cross-domain evaluation. In other words, we implemented a total of four groups of experiments. Based on the direction from source domain to target domain, they can be represented as LAG $\to$ pri-RFG, pri-RFG $\to$ LAG, LAG $\to$ REFUGE and pri-RFG $\to$ REFUGE.
When used as a source domain dataset, each dataset was separated into training and validation sets. When used as a target domain dataset, all images were fed into the reconstruction model to train and adapt the encoder layers. \subsection{Implementation Details and Evaluation Metrics}\label{3b} Both the source classification model and the target reconstruction model were implemented in PyTorch (version 1.3.0) and trained on an NVIDIA RTX 2080Ti GPU. We implemented the source classification model as a VGG\cite{simonyan2014very} and optimized it with the cross entropy (CE) loss\cite{ng2001lag}. During the training stage of the source classification model, we set the learning rate to $10^{-6}$ and the weight decay to $5 \times 10^{-4}$. All samples in the source domain were split into training and validation sets at an empirical ratio of $7:3$, using stratified sampling to ensure that the Pos vs. Neg ratios in the two sets are similar. At each iteration, a mini-batch of 16 samples was fed into the model. The number of training epochs was set to 50. To avoid over-fitting, the model that achieved the maximum accuracy on the validation set was saved. During the training stage of the self-adaptive reconstruction model on the target dataset, the learning rate of the encoder was set to $10^{-7}$ and that of the remaining layers to $10^{-3}$. To avoid over-fitting on the reconstruction task and losing the ability to extract features useful for the classification task, the target reconstruction model was trained for only 20 epochs. We empirically set the weights $\alpha$, $\beta_1$ and $\beta_2$ in the loss function to 0.3, 0.2 and 0.5, and the channel number of the latent vector to 32. Once the target reconstruction model was trained, its self-adapted encoder was used as the feature extractor of a target classification model, and the last lightweight FC layer of the source classification model served as the classifier.
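The adaptation recipe above (encoder initialized from the source model, two learning rates, and the final re-composition $f^{t}_{SA} = f^{s}_{cls} \circ f^{t}_{enc}$) can be sketched in PyTorch. The tiny linear modules are illustrative placeholders, not the real VGG encoder, decoder and FC classifier:

```python
import torch
from torch import nn

# Toy stand-ins for the real networks (shapes are illustrative only).
src_encoder = nn.Linear(8, 4)      # f_enc^s from the pre-trained source model
src_classifier = nn.Linear(4, 2)   # f_cls^s, the lightweight FC classifier
tgt_encoder = nn.Linear(8, 4)      # f_enc^t
tgt_decoder = nn.Linear(4, 8)      # f_dec^t

# Initialize the target encoder from the source encoder.
tgt_encoder.load_state_dict(src_encoder.state_dict())

# Two learning rates, as described above: 1e-7 keeps the pre-trained
# encoder close to its source initialization, 1e-3 lets the remaining
# (decoder) layers learn freely.
optimizer = torch.optim.Adam([
    {"params": tgt_encoder.parameters(), "lr": 1e-7},
    {"params": tgt_decoder.parameters(), "lr": 1e-3},
])

# ... train the reconstruction model for 20 epochs on unlabeled target
# images, minimizing alpha * L_KL + L_rec (pixel + Gram terms) ...

# The self-adapted target classifier composes the adapted encoder with
# the frozen source classifier: f_SA(x) = f_cls^s(f_enc^t(x)).
f_sa = nn.Sequential(tgt_encoder, src_classifier)
```

Keeping the source classifier frozen and swapping only the encoder is what allows the adaptation to run without any source images or labels.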
This new combined target classification model was evaluated on the target domain dataset in terms of Accuracy, Recall, Precision, F1 score and Area Under the ROC Curve (AUC). \subsection{Results and discussion} As described in Section \ref{3b}, based on the three available datasets, there are four executable domain adaptation directions, denoted LAG $\to$ pri-RFG, pri-RFG $\to$ LAG, LAG $\to$ REFUGE, and pri-RFG $\to$ REFUGE. To validate the effectiveness of the proposed SATL strategy, in each direction we compared the performance of the proposed method (\textbf{w/ SATL}) with the source classification model (\textbf{w/o SATL}) and a state-of-the-art CycleGAN-based domain adaptation method\cite{zhu2017unpaired} (\textbf{w/ CGAN}). The CycleGAN-based method trains a generator to transfer the target images to the source domain by adversarial learning. The most noteworthy difference between CycleGAN and the proposed SATL strategy is that our method is completely independent of the source domain data while CycleGAN is not. More specifically, training CycleGAN to perform domain adaptation requires both source and target domain images, whereas the proposed SATL strategy relies only on unlabeled target domain images.
\begin{table} \centering \caption{The classification performance of the four groups of experiments} \begin{tabular}{c|c|c|c|c|c|c} \toprule Direction& \multicolumn{3}{c|}{LAG $\to$ pri-RFG} & \multicolumn{3}{c}{pri-RFG $\to$ LAG}\\ \hline Strategy& w/o SATL & w/ CGAN &w/ SATL & w/o SATL & w/ CGAN & w/ SATL\\ \hline Accuracy &0.799 &0.672 &\textbf{0.856} &0.352 &\textbf{0.628} &0.579 \\ Recall &0.659 &0.422 &\textbf{0.726} &\textbf{1.000} &0.707 &0.779\\ Precision &0.807 &\textbf{0.923} &0.855 &0.352 &\textbf{0.481} &0.445\\ F1 Score &0.726 &0.580 &\textbf{0.785} &0.521 &\textbf{0.573} &0.566\\ \hline Direction& \multicolumn{3}{c|}{LAG $\to$ REFUGE} & \multicolumn{3}{c}{pri-RFG $\to$ REFUGE}\\ \hline Strategy& w/o SATL& w/ CGAN & w/ SATL & w/o SATL & w/ CGAN & w/ SATL\\ \hline Accuracy &0.933 &0.913 &\textbf{0.945} &0.240 &0.540 &\textbf{0.580}\\ Recall &0.425 &\textbf{0.600} &0.500 &\textbf{0.975} &0.825 &0.850\\ Precision &0.810 &0.558 &\textbf{0.909} &0.114 &0.157 &\textbf{0.173}\\ F1 Score &0.557 &0.579 &\textbf{0.645} &0.204 &0.264 &\textbf{0.288}\\ \bottomrule \end{tabular} \label{tab2} \end{table} The experimental results of the three strategies are tabulated in Table \ref{tab2}, and the corresponding ROC curves are plotted in Fig. \ref{fig2}. From these results, two main conclusions can be drawn: (1) Compared to the source model without SATL, which can be seen as a baseline, the model with SATL performs better in all four domain adaptation directions in terms of Accuracy and F1 Score. Despite the substantial differences among the three datasets, SATL proves effective for self-supervised domain adaptation regardless of the source and target domain data distributions. This suggests that the proposed SATL is valuable and reliable for producing pseudo labels on data from a brand-new hospital.
(2) When testing the source model on the target domain images transferred by CycleGAN, the performance is comparable with the proposed SATL strategy in the pri-RFG $\to$ LAG and LAG $\to$ REFUGE directions, while in the LAG $\to$ pri-RFG and pri-RFG $\to$ REFUGE directions the proposed SATL strategy surpasses CycleGAN by a large margin. This demonstrates that SATL is more robust and generalizes more stably across different domain adaptation scenarios. Note that CycleGAN uses the source domain images in the domain adaptation stage while the proposed SATL does not. Thus, our method, which is completely independent of the source domain, is more feasible for real-world applications: it ensures the isolation of multi-center datasets and complies with privacy protection policies. \textbf{Discussion} Although the proposed method improves the performance of the classification model in the target domain via self-supervised training, several directions remain worth exploring to further enhance performance. For example, in this paper we directly trained and validated the source classification model on the source domain; it may be a better option to initialize the source classification model with a model pre-trained on a large-scale natural image dataset such as ImageNet. Besides, the backbone used in this paper is VGG, chosen for the convenience of building the reconstruction VAE model. In the future, it could be replaced by other state-of-the-art backbones such as Inception\cite{szegedy2016inception-v4} or SENet\cite{hu2019squeeze-and-excitation}. Last but not least, the features adapted by the SATL framework in the target domain need to be explored and compared with those before SATL. Further improvement in glaucoma diagnosis may be achieved by learning features that better represent ONH traits.
\begin{figure}[!ht] \begin{center} \centerline{\includegraphics[width=\linewidth]{fig33.png}} \caption{ROC curves of the models evaluated in all four domain adaptation directions.} \label{fig2} \end{center} \end{figure} \section{Conclusion} In this paper, we presented a self-adaptive transfer learning (SATL) strategy to fill the domain gap between multicenter datasets and evaluated it on glaucoma classification with three fundus retina image datasets. Specifically, a reconstruction model is trained using only unlabeled target domain images. The encoder of this reconstruction model is initialized from a pre-trained source classification model and self-adapted in the target domain. Experimental results demonstrate that the proposed SATL strategy enhances the classification performance in the target domain and outperforms a state-of-the-art domain adaptation method that even utilizes source domain images for training. In the near future, more effort will be devoted to exploring how to further improve the performance of the self-supervised domain adaptation method by designing new reconstruction losses. Moreover, we will extend this strategy to other medical image analysis problems. \textbf{Acknowledgement.} This work was supported in part by the Department of Science and Technology of Zhejiang Province - Key Research and Development Program under Grant 2017C03029 and the Biomedical Engineering Interdisciplinary Research Fund of Shanghai Jiao Tong University under Grant YG2020YQ17. {\small \nocite{*} \bibliographystyle{splncs04}
John Francis Clauser (; born December 1, 1942) is an American theoretical and experimental physicist known for contributions to the foundations of quantum mechanics, in particular the Clauser–Horne–Shimony–Holt inequality. Clauser was awarded the 2022 Nobel Prize in Physics, jointly with Alain Aspect and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science". Biography Clauser was born in Pasadena, California. His father, Francis H. Clauser, was a professor of aeronautical engineering who founded and chaired the aeronautics department at Johns Hopkins University. He later served as the Clark Blanchard Millikan Professor of Engineering at the California Institute of Technology (Caltech). His mother, Catharine McMillan, was the humanities librarian at Caltech and sister of 1951 Nobel Prize in Chemistry laureate Edwin McMillan. Clauser initially found quantum mechanics to be daunting—the field which would become his life's work—and had to repeat a course in Advanced Quantum Mechanics three times before he passed. He received a bachelor of science in physics from Caltech in 1964, where he was a member of Dabney House. He received a master of arts in physics in 1966 and a doctor of philosophy in physics in 1969 from Columbia University under the direction of Patrick Thaddeus. From 1969 to 1975, he worked as a postdoctoral researcher at the University of California, Berkeley and Lawrence Berkeley National Laboratory. In 1972, working with Berkeley graduate student Stuart Freedman, he carried out the first experimental test of the CHSH-Bell's theorem predictions. This was the first experimental observation of a violation of a Bell inequality. In 1974, working with Michael Horne, he first showed that a generalization of Bell's Theorem provides severe constraints for all local realistic theories of nature (a.k.a. objective local theories). 
That work introduced the Clauser–Horne (CH) inequality as the first fully general experimental requirement set by local realism. It also introduced the "CH no-enhancement assumption", whereupon the CH inequality reduces to the CHSH inequality, and whereupon associated experimental tests also constrain local realism. Also in 1974 he made the first observation of sub-Poissonian statistics for light (via a violation of the Cauchy–Schwarz inequality for classical electromagnetic fields), and thereby, for the first time, demonstrated an unambiguous particle-like character for photons. Clauser worked as a research physicist mainly at Lawrence Livermore and Berkeley from 1975 to 1997. In 1976 he carried out the world's second experimental test of the CHSH-Bell's Theorem predictions. Clauser was awarded the Wolf Prize in Physics in 2010 together with Alain Aspect and Anton Zeilinger. The three were also jointly awarded the 2022 Nobel Prize in Physics. See also Epistemological Letters References External links Oral history interview transcript with John Clauser on 20, 21, and 23 May 2020, American Institute of Physics, Niels Bohr Library & Archives John Clauser's homepage 1942 births People from Pasadena, California 20th-century American physicists 21st-century American physicists Columbia Graduate School of Arts and Sciences alumni University of California, Berkeley staff Wolf Prize in Physics laureates Living people Nobel laureates in Physics American Nobel laureates
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,727
Q: I wrote this and my teacher says that there's a problem with "exit" but I can't see why This is the code and as I said he says that there's a problem with exit. DECLARE a number:=1; b number:=6; exit BOOLEAN; BEGIN exit:='FALSE'; WHILE NOT exit LOOP IF a>=b THEN exit:='TRUE'; ELSE a:=(a+1); END IF; END LOOP; END; A: exit is a statement, and so you should not use it as a variable -- it is a reserved word. Just the statement on its own will exit the loop: DECLARE a number:=1; b number:=6; BEGIN LOOP IF a>=b THEN EXIT; ELSE a:=a+1; END IF; END LOOP; END; You can also combine EXIT with a WHEN clause: DECLARE a number:=1; b number:=6; BEGIN LOOP EXIT WHEN a>=b; a:=a+1; END LOOP; END; Or you can add the inverse condition as a WHILE clause to the LOOP construct: DECLARE a number:=1; b number:=6; BEGIN WHILE a < b LOOP a:=a+1; END LOOP; END;
{ "redpajama_set_name": "RedPajamaStackExchange" }
3
Q: reactjs setState doesn't change properly I have a small react app. When selected dep/arr station changes it refetches the schedule data from the BART API. For some reason with the old code (see below) it didn't work properly. I did setState first then tried to use the new value of the depStation, but it showed the previous value. So for instance let's say I reload the page the initial depState value is SFIA. If I change the value to FRMT then the console.log(selected) shows FRMT but the console.log(this.state.depStation) still shows SFIA. If I change it again to HAYW the console.log(selected) shows HAYW but the console.log(this.state.depStation) shows FRMT. To make the app work I simply just used the selected instead of the this.state.depStation, but I guess this is not the best approach. So I don't really understand why this.state.depStation shows the prev data after calling this.setState({depStation: selected}). Could sby explain me why this is happening? Old version which was not working: reRunFetching(selected, type) { if (type === "depart") { this.setState({depStation: selected}); console.log(selected); //Showing properly what I select console.log(this.state.depStation); //For some reason the previously selected } else if (type === "arrive") { this.setState({arrStation: selected}); } this.fetchingAPI(this.state.depStation, this.state.arrStation) } New version. 
This is working fine, but I guess it's not the best solution: reRunFetching(selected, type) { if (type === "depart") { this.fetchingAPI(selected, this.state.arrStation) this.setState({depStation: selected}); console.log(selected, this.state.arrStation); } else if (type === "arrive") { this.fetchingAPI(this.state.depStation, selected) this.setState({arrStation: selected}); console.log(this.state.depStation, selected); } } rest of index.js class App extends Component { constructor(props) { super(props); this.state = { schedules: [], depStation: DEP_STATION, arrStation: ARR_STATION }; this.fetchingAPI(this.state.depStation, this.state.arrStation) } fetchingAPI(departureStation, arrivalStation) { fetch("http://api.bart.gov/api/sched.aspx?cmd=depart&orig=" + departureStation + "&dest=" + arrivalStation + "&date=now&key=MW9S-E7SL-26DU-VV8V&b=0&a=4&l=0") .then(function(response) { return response.text();}) .then((responseXML) => { let tripsArray = this.parsingXML(responseXML); this.setState({schedules: tripsArray}); }) .catch((error) => { console.log(error); }); } render () { return ( <div> <div className="row"> <div className="col-md-5 col-md-offset-1 search-bar"> <SelectDepart onSelectedChange={selected => this.reRunFetching(selected, "depart")}/> </div> <div className="col-md-5 search-bar"> <SelectArrive onSelectedChange={selected => this.reRunFetching(selected, "arrive")}/> </div> </div> <TimeTable schedules={this.state.schedules} /> </div> ) } A: setState() is an asynchronous non-blocking method which doesn't immediately set the new state, as you expect it to. As the official docs says: setState() does not immediately mutate this.state but creates a pending state transition. Accessing this.state after calling this method can potentially return the existing value. There is no guarantee of synchronous operation of calls to setState and calls may be batched for performance gains. 
If you need you can pass a callback a a second argument to setState() and it will be fired on state change: this.setState({depStation: selected}, function() { // some code }); A: setState() is an asynchronous method which queue your state. so in order to access state value immidiately after setState you need to call a callback function as second argument to setState method, which will first set your state and after that will re render your view with updated state. below example will help you. this.setState({ depStation: selected }, () => { // here you will get your depStation state value console.log(this.state.depStation,"depStation value") });
{ "redpajama_set_name": "RedPajamaStackExchange" }
33
const mix = require('laravel-mix'); /* |-------------------------------------------------------------------------- | Mix Asset Management |-------------------------------------------------------------------------- | | Mix provides a clean, fluent API for defining some Webpack build steps | for your Laravel application. By default, we are compiling the Sass | file for the application as well as bundling up all the JS files. | */ const node_path = 'node_modules'; const assets_path = 'resources/assets'; const dist_path = 'public'; const paths = { 'ace' : `${node_path}/ace-min-noconflict`, 'backbone' : `${node_path}/backbone`, 'bootstrap_sass' : `${node_path}/bootstrap-sass`, 'clipboard' : `${node_path}/clipboard`, 'cropper' : `${node_path}/cropper`, 'ionicons' : `${node_path}/ionicons`, 'jquery' : `${node_path}/jquery`, 'jquery_sortable' : `${node_path}/jquery-sortable`, 'livestamp' : `${node_path}/livestamp`, 'localization' : 'vendor/andywer/js-localization', 'moment' : `${node_path}/moment`, 'morris' : `${node_path}/morris.js`, 'raphael' : `${node_path}/raphael`, 'select2' : `${node_path}/select2`, 'socketio_client' : `${node_path}/socket.io-client`, 'toastr' : `${node_path}/toastr`, 'underscore' : `${node_path}/underscore` }; const skeletons = [ `${assets_path}/js/components/dashboard/commands.js`, `${assets_path}/js/components/dashboard/configFiles.js`, `${assets_path}/js/components/dashboard/environments.js`, `${assets_path}/js/components/dashboard/sharedFiles.js`, `${assets_path}/js/components/dashboard/variables.js`, `${assets_path}/js/components/dashboard/servers.js`, ]; mix .options({ processCssUrls: false }) .copyDirectory(`${paths.bootstrap_sass}/assets/fonts/bootstrap`, `${dist_path}/fonts`) .copyDirectory(`${paths.ionicons}/fonts`, `${dist_path}/fonts`) .scripts([ `${paths.jquery}/dist/jquery.min.js`, `${paths.jquery_sortable}/source/js/jquery-sortable-min.js`, `${paths.underscore}/underscore-min.js`, `${paths.moment}/min/moment-with-locales.min.js`, 
`${paths.bootstrap_sass}/assets/javascripts/bootstrap.min.js`, `${paths.select2}/dist/js/select2.min.js`, `${paths.raphael}/raphael.min.js`, `${paths.morris}/morris.min.js`, `${paths.backbone}/backbone-min.js`, `${paths.socketio_client}/dist/socket.io.js`, `${paths.localization}/resources/js/config.js`, `${paths.localization}/resources/js/localization.js`, `${paths.toastr}/build/toastr.min.js`, `${paths.clipboard}/dist/clipboard.min.js`, `${paths.cropper}/dist/cropper.min.js`, `${paths.livestamp}/livestamp.js` ], `${dist_path}/js/vendor.js`) .scripts([ `${paths.ace}/ace.js`, `${paths.ace}/mode-sh.js`, `${paths.ace}/mode-php.js`, `${paths.ace}/mode-yaml.js`, `${paths.ace}/mode-ini.js` ], `${dist_path}/js/ace.js`) .scripts([ `${assets_path}/js/components/admin/groups.js`, `${assets_path}/js/components/admin/providers.js`, `${assets_path}/js/components/admin/projects.js`, `${assets_path}/js/components/admin/keys.js`, `${assets_path}/js/components/admin/cabinets.js`, `${assets_path}/js/components/admin/users.js` ].concat(skeletons), `${dist_path}/js/admin.js`) .scripts([ `${assets_path}/js/components/dashboard/commands.js`, `${assets_path}/js/components/dashboard/tasks.js`, `${assets_path}/js/components/dashboard/hooks.js`, `${assets_path}/js/components/dashboard/members.js`, `${assets_path}/js/components/dashboard/projects.js`, `${assets_path}/js/components/dashboard/patterns.js`, `${assets_path}/js/components/dashboard/profile.js`, `${assets_path}/js/components/dashboard/environmentLinks.js`, `${assets_path}/js/components/dashboard/releases.js`, `${assets_path}/js/components/dashboard/cabinets.js` ].concat(skeletons), `${dist_path}/js/dashboard.js`) .scripts([ `${assets_path}/js/bootstrap.js`, `${assets_path}/js/piplin.js`, `${assets_path}/js/utils/uploader.js`, ], `${dist_path}/js/app.js`) .styles([ `${paths.select2}/dist/css/select2.min.css`, `${paths.morris}/morris.css`, `${paths.ionicons}/css/ionicons.min.css`, `${paths.toastr}/build/toastr.min.css`, 
`${paths.cropper}/dist/cropper.min.css` ], `${dist_path}/css/vendor.css`) .sass(`${assets_path}/sass/app.scss`, `${dist_path}/css/app.css`); if (mix.inProduction()) { mix.version(); } if (!mix.inProduction()) { mix.sourceMaps() mix.browserSync({proxy: 'piplin.app'}) }
{ "redpajama_set_name": "RedPajamaGithub" }
3,286
{"url":"https:\/\/orbi.uliege.be\/ph-search?uid=U","text":"Publications of\u00a0??? \u00a0\u00a0\u00a0 Results 1-20 of 713. 1 2 3 4 5 6 7 8 9 10 \u00a0 The path towards high-contrast imaging with the VLTI: the Hi-5 projectDefrere, Denis ; Absil, Olivier ; Berger, J.-P. et alin Experimental Astronomy: Astrophysical Instrumentation and Methods (in press), 1801The development of high-contrast capabilities has long been recognized as one of the top priorities for the VLTI. As of today, the VLTI routinely achieves contrasts of a few 10$^{-3}$ in the near-infrared ... [more\u00a0\u25bc]The development of high-contrast capabilities has long been recognized as one of the top priorities for the VLTI. As of today, the VLTI routinely achieves contrasts of a few 10$^{-3}$ in the near-infrared with PIONIER (H band) and GRAVITY (K band). Nulling interferometers in the northern hemisphere and non-redundant aperture masking experiments have, however, demonstrated that contrasts of at least a few 10$^{-4}$ are within reach using specific beam combination and data acquisition techniques. In this paper, we explore the possibility to reach similar or higher contrasts on the VLTI. After reviewing the state-of-the-art in high-contrast infrared interferometry, we discuss key features that made the success of other high-contrast interferometric instruments (e.g., integrated optics, nulling, closure phase, and statistical data reduction) and address possible avenues to improve the contrast of the VLTI by at least one order of magnitude. In particular, we discuss the possibility to use integrated optics, proven in the near-infrared, in the thermal near-infrared (L and M bands, 3-5 $\\mu$m), a sweet spot to image and characterize young extra-solar planetary systems. 
Finally, we address the science cases of a high-contrast VLTI imaging instrument and focus particularly on exoplanet science (young exoplanets, planet formation, and exozodiacal disks), stellar physics (fundamental parameters and multiplicity), and extragalactic astrophysics (active galactic nuclei and fundamental constants). Synergies and scientific preparation for other potential future instruments such as the Planet Formation Imager are also briefly discussed. [less\u00a0\u25b2]Detailed reference viewed: 8 (0 ULi\u00e8ge) Human brain patterns underlying vigilant attention: impact of sleep debt, circadian phase and attentional engagementMaire, Micheline; Reichert, Carolin Franziska; Gabel, Virginie et alin Scientific Reports (2018), 8(1), 970Detailed reference viewed: 19 (1 ULi\u00e8ge) The Ionospheric Connection Explorer Mission: Mission Goals and DesignImmel, T. J.; England, S. L.; Mende, S. B. et alin Space Science Reviews (2018), 214(13), The Ionospheric Connection Explorer, or ICON, is a new NASA Explorer mission that will explore the boundary between Earth and space to understand the physical connection between our world and our space ... [more\u00a0\u25bc]The Ionospheric Connection Explorer, or ICON, is a new NASA Explorer mission that will explore the boundary between Earth and space to understand the physical connection between our world and our space environment. This connection is made in the ionosphere, which has long been known to exhibit variability associated with the sun and solar wind. However, it has been recognized in the 21st century that equally significant changes in ionospheric conditions are apparently associated with energy and momentum propagating upward from our own atmosphere. ICON's goal is to weigh the competing impacts of these two drivers as they influence our space environment. Here we describe the specific science objectives that address this goal, as well as the means by which they will be achieved. 
The instruments selected, the overall performance requirements of the science payload and the operational requirements are also described. ICON's development began in 2013 and the mission is on track for launch in 2018. ICON is developed and managed by the Space Sciences Laboratory at the University of California, Berkeley, with key contributions from several partner institutions. [less\u00a0\u25b2]Detailed reference viewed: 17 (2 ULi\u00e8ge) Effect of procalcitonin-guided antibiotic treatment on mortality in acute respiratory infections: a patient level meta-analysisSchuetz, Philipp; Wirz, Yannick; Sager, Ramon et alin Lancet Infectious Diseases (2018), 18(1), 95-107Background In February, 2017, the US Food and Drug Administration approved the blood infection marker procalcitonin for guiding antibiotic therapy in patients with acute respiratory infections. This meta ... [more\u00a0\u25bc]Background In February, 2017, the US Food and Drug Administration approved the blood infection marker procalcitonin for guiding antibiotic therapy in patients with acute respiratory infections. This meta-analysis of patient data from 26 randomised controlled trials was designed to assess safety of procalcitonin-guided treatment in patients with acute respiratory infections from different clinical settings. Methods Based on a prespecified Cochrane protocol, we did a systematic literature search on the Cochrane Central Register of Controlled Trials, MEDLINE, and Embase, and pooled individual patient data from trials in which patients with respiratory infections were randomly assigned to receive antibiotics based on procalcitonin concentrations (procalcitonin-guided group) or control. The coprimary endpoints were 30-day mortality and setting-specific treatment failure. Secondary endpoints were antibiotic use, length of stay, and antibiotic side-effects. 
Findings We identified 990 records from the literature search, of which 71 articles were assessed for eligibility after exclusion of 919 records. We collected data on 6708 patients from 26 eligible trials in 12 countries. Mortality at 30 days was significantly lower in procalcitonin-guided patients than in control patients (286 [9%] deaths in 3336 procalcitonin-guided patients vs 336 [10%] in 3372 controls; adjusted odds ratio [OR] 0\u00b783 [95% CI 0\u00b770 to 0\u00b799], p=0\u00b7037). This mortality benefit was similar across subgroups by setting and type of infection (pinteractions>0\u00b705), although mortality was very low in primary care and in patients with acute bronchitis. Procalcitonin guidance was also associated with a 2\u00b74-day reduction in antibiotic exposure (5\u00b77 vs 8\u00b71 days [95% CI \u22122\u00b771 to \u22122\u00b715], p<0\u00b70001) and a reduction in antibiotic-related side-effects (16% vs 22%, adjusted OR 0\u00b768 [95% CI 0\u00b757 to 0\u00b782], p<0\u00b70001). Interpretation Use of procalcitonin to guide antibiotic treatment in patients with acute respiratory infections reduces antibiotic exposure and side-effects, and improves survival. Widespread implementation of procalcitonin protocols in patients with acute respiratory infections thus has the potential to improve antibiotic management with positive effects on clinical outcomes and on the current threat of increasing antibiotic multiresistance. Funding National Institute for Health Research. \u00a9 2018 Elsevier Ltd [less\u00a0\u25b2]Detailed reference viewed: 28 (0 ULi\u00e8ge) A randomized phase II study evaluating different maintenance schedules of nab-Paclitaxel in the first-line treatment of metastatic breast cancer: final results of the IBCSG 42-12\/BIG 2-12 SNAP trial.Gennari, A.; Sun, Z.; Hasler-Strub, U. 
et alin Annals of oncology : official journal of the European Society for Medical Oncology (2017)Background: The phase II SNAP trial was designed to evaluate the efficacy of alternative chemotherapy schedules for prolonged administration in HER2-negative metastatic breast cancer (MBC), after a short ... [more\u00a0\u25bc]Background: The phase II SNAP trial was designed to evaluate the efficacy of alternative chemotherapy schedules for prolonged administration in HER2-negative metastatic breast cancer (MBC), after a short induction at conventional doses. Methods: Between April 2013 and August 2015, 258 women untreated with chemotherapy for MBC were randomly assigned to receive three different maintenance chemotherapy schedules after three cycles of identical induction chemotherapy: Arm A, nab-Paclitaxel 150 mg\/m2 days 1,15 Q28; Arm B, nab-Paclitaxel 100 mg\/m2 days 1,8,15 Q28; Arm C, nab-Paclitaxel 75 mg\/m2 days 1,8,15,22 Q28. Induction was three cycles nab-Paclitaxel 150\/125 mg\/m2, days 1,8,15 Q28. The primary objective was to evaluate the efficacy of each maintenance schedule, in terms of progression-free survival (PFS), as compared to the historical reference of 7-month median PFS reported by previous studies with first-line docetaxel. One-sample, one-sided log-rank tests were utilized. Quality-of-life evaluation was performed, global indicator for physical well-being was defined as the primary endpoint; completion rates of quality-of-life forms were >90%. Results: 255 patients were evaluable for the primary endpoint. After 18.2 months median follow-up, 182 PFS events were observed. Median PFS was 7.9 months (90%CI 6.8-8.4) in Arm A, 9.0 months (90%CI 8.1-10.9) in Arm B and 8.5 months (90%CI 6.7-9.5) in Arm C. PFS in Arm B was significantly longer than the historical reference of first-line docetaxel (P=0.03). 
Grade>\/=2 sensory neuropathy was reported in 37.9%, 36.1% and 31.2% of patients in Arm A, Arm B and Arm C, respectively (Grade>\/=3 in 9.1%, 5.6% and 6.6% of patients, respectively). Noteworthy, the quality-of-life scores for sensory neuropathy did not worsen with prolonged nab-Paclitaxel administration in any of the maintenance arms. Conclusion: The SNAP trial demonstrated that alternative nab-Paclitaxel maintenance schedules with reduced dosages after a short induction at conventional doses are feasible and active in the first-line treatment of MBC. Registration: ClinicalTrials.gov NCT01746225. [less\u00a0\u25b2]Detailed reference viewed: 23 (0 ULi\u00e8ge) NEAR: Low-mass Planets in \u03b1 Cen with VISIRKasper, M.; Arsenault, R.; K\u00e4ufl, H.-U. et alin The Messenger (2017), 169ESO, in collaboration with the Breakthrough Initiatives, is working to modify the Very Large Telescope mid-IR imager (VISIR) to greatly enhance its ability to search for potentially habitable planets ... [more\u00a0\u25bc]ESO, in collaboration with the Breakthrough Initiatives, is working to modify the Very Large Telescope mid-IR imager (VISIR) to greatly enhance its ability to search for potentially habitable planets around both components of the binary Alpha Centauri, part of the closest stellar system to the Earth. Much of the funding for the NEAR (New Earths in the Alpha Cen Region) project is provided by the Breakthrough Initiatives, and ESO mostly provides staff and observing time. The concept combines adaptive optics using the deformable secondary mirror at Unit Telescope 4, a new annular groove phase mask (AGPM) coronagraph optimised for the most sensitive spectral bandpass in the N-band, and a novel internal chopper system for noise filtering based on a concept for longer wavelengths invented by the microwave pioneer Robert Dicke. 
The NEAR experiment is relevant to the mid-infrared METIS instrument on the Extremely Large Telescope, as the knowledge gained and proof of concept will be transferable. [less\u00a0\u25b2]Detailed reference viewed: 8 (0 ULi\u00e8ge) Cognitive brain responses during circadian wake-promotion: evidence for sleep- pressure-dependent hypothalamic activationsReichert, Carolin Franziska; Maire, Micheline; Gabel, Virginie et alin Scientific Reports (2017), 7(1), Detailed reference viewed: 25 (0 ULi\u00e8ge) Gaia Data Release 1. Open cluster astrometry: performance, limitations, and future prospectsGaia Collaboration; van Leeuwen, F.; Vallenari, A. et alin Astronomy and Astrophysics (2017), 601Context. The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and ... [more\u00a0\u25bc]Context. The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). 
This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and parallax are calculated using Hipparcos and Tycho-2 positions in 1991.25 as prior information.","date":"2018-02-18 01:28:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3585664629936218, \"perplexity\": 9284.92166366463}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-09\/segments\/1518891811243.29\/warc\/CC-MAIN-20180218003946-20180218023946-00035.warc.gz\"}"}
null
null
\section{Introduction} Distinguishing between moving and static parts in the surrounding environment is an important task for autonomous vehicles. Compared with static objects that occupy most of the space, moving objects usually account for only a small part. This imbalanced distribution increases the difficulty of the moving object segmentation (MOS) task. In addition, considering the influence of MOS accuracy on navigation safety, pose estimation, mapping, and path planning tasks, reliable and real-time moving object segmentation is a necessary and challenging task. \begin{figure}[!t] \begin{center} \makebox[1pt]{\includegraphics[width=\linewidth]{flowchart.eps}} \end{center} \caption{Range-based LiDAR segmentation methods. (a) Multiple-scan-based LiDAR semantic segmentation method. It uses consecutive range images as inputs and outputs corresponding semantic segmentation results, including moving information. (b) LiDAR-based moving object segmentation method. It takes range images and residual images as inputs and outputs the MOS results. (c) Our semantics-guided moving object segmentation method. It includes a single-scan-based semantic segmentation module and a multiple-scan-based MOS module. The latter takes range images and semantic features of the former as inputs. Note that to save space, only an intermediate size of 64 $\times$ 512 is displayed instead of the full size. The same applies to the remaining figures of the paper.} \label{fig:flowchart} \end{figure} The existing MOS methods are mainly based on camera images; LiDAR sensors are rarely used due to the lack of labeled MOS data. In recent years, with the release of the SemanticKITTI MOS dataset, LiDAR-based MOS methods have gradually attracted increasing attention. In this paper, we focus on developing a MOS network with LiDAR data only. The MOS task can be regarded as a simplified version of the multiple-scan-based semantic segmentation task.
The latter not only predicts the semantic class of each LiDAR point but also determines whether it is moving or not. However, in practical applications, the specific semantic classes of moving objects, such as vehicles and pedestrians, make little difference to the decision-making process of autonomous vehicles. Therefore, this paper focuses only on the separation of moving and static objects. The LiDAR-based MOS methods usually use range images and residual images as inputs and conduct the MOS task directly. By analyzing the multiple-scan-based semantic segmentation methods, we find that semantic cues are more useful for the MOS task than the commonly used raw LiDAR data. Therefore, instead of dealing with the MOS task alone, we associate it with the single-scan-based semantic segmentation task, similar to the multiple-scan-based semantic segmentation methods. The unified method can predict not only the moving labels but also the semantic ones. Compared with the multiple-scan-based semantic segmentation methods, the combination of MOS and single-scan-based semantic segmentation has the following advantages. First, the complex multiple-scan-based semantic segmentation task is divided into a commonly researched single-scan-based semantic segmentation module and a relatively simple moving object segmentation module. This makes full use of the existing single-scan-based semantic segmentation network architectures. Second, the modular design allows us to directly use pre-trained single-scan-based models instead of training from scratch. This simplifies the training process: only the MOS module needs to be trained. Improvements to the single-scan-based semantic segmentation methods can also quickly upgrade the MOS capability without much modification.
In addition, in order to effectively connect the single-scan-based semantic segmentation module and the MOS module, we propose a novel adjacent scan association (ASA) module to explore the semantic feature association of adjacent LiDAR scans. The ASA module establishes correspondences between the semantic features according to the relative poses between scans. This helps to accurately transfer information between scans. The semantic features are transformed into the same coordinate system, and the semantic differences can then be used in the MOS task. In summary, we propose a semantics-guided CNN model for the MOS task. The network consists of three modules: a single-scan-based semantic segmentation module, an adjacent scan association (ASA) module, and a moving object segmentation (MOS) module. As a key component, the ASA module associates the semantic features from the semantic segmentation module, then transforms and transmits them to the MOS module. The main contributions of this paper are: (i) The semantics-aware MOS design simplifies the training process by making use of the existing single-scan-based semantic segmentation network architectures and pre-trained models. (ii) The ASA module realizes accurate association between semantic features of different LiDAR scans. (iii) The modular design enables the upgrading of individual modules. Experimental results on the SemanticKITTI MOS dataset show that the proposed network\footnote{https://competitions.codalab.org/competitions/28894\#results} can obtain accurate MOS performance in a simple and fast way. \section{Related Works} \begin{figure*}[!t] \begin{center} \makebox[1pt]{\includegraphics[width=\linewidth]{flowchart2.eps}} \end{center} \caption{The flowchart of the proposed semantics-guided moving object segmentation method from LiDAR point clouds. The whole network consists of three modules and conducts single-scan-based semantic segmentation and multiple-scan-based moving object segmentation in a cascaded way.
The semantic segmentation module learns the semantic features of each LiDAR point. The adjacent scan association module converts the features of different scans to the same coordinate system. The moving object segmentation module combines the transformed semantic features and LiDAR range images to differentiate the moving objects. } \label{fig:flowchart2} \end{figure*} \subsection{LiDAR Data Representation} \label{sec:representation} LiDAR is an important sensor for autonomous vehicles. The typical LiDAR data representations can be roughly divided into three categories, including point-based, voxel-based, and projection-based techniques. The point-based methods process the unstructured LiDAR point clouds directly. PointNet\cite{QiSMG17} is the first point-based method that aggregates information through a permutation invariant operation on the raw LiDAR point clouds. Recent works focus on developing special convolution operations and kernels for point clouds, such as PointConv\cite{WuQL19}, TangentConv\cite{TatarchenkoPKZ18} and KPConv\cite{ThomasQDMGG19}. However, due to memory requirements and time complexity, most point-based methods still struggle on large-scale point clouds. The voxel-based methods transform the LiDAR point clouds into 3D volumetric grids. MinkowskiNet\cite{ChoyGS19} utilizes the sparse 3D convolutions to efficiently process the voxelized LiDAR data. SegCloud\cite{TchapmiCAGS17} applies a 3D Fully Convolutional Neural Network (FCNN) on the voxelized point clouds. Cylinder3D\cite{Zhu0WHM00L21} splits the raw point clouds into cylindrical grids. However, the 3D convolutions are computationally intensive and the voxels are usually sparse, which limits the resolution of voxels and the overall performance. The projection-based methods project the LiDAR point clouds onto regular 2D images and use the well-researched 2D CNNs. 
In SqueezeSeg\cite{WuWYK18}, SqueezeSegV2\cite{WuZZYK19} and RangeNet++\cite{MiliotoVBS19}, the spherical projection is performed on the LiDAR data. The point clouds can also be projected into Cartesian bird's eye view (BEV) (VolMap\cite{abs-1906-11873}) or polar BEV (PolarNet\cite{0035ZDYXGF20}). The projection-based methods can achieve real-time performance, but the accuracy is often limited due to the information loss in the projection process. Considering the computation and time cost, in this work, we use spherical projection to convert the LiDAR data into 2D range images. \subsection{LiDAR Semantic Segmentation} Semantic segmentation and MOS are closely related tasks. According to the number of scans used, the LiDAR-based semantic segmentation methods can be divided into single-scan-based and multiple-scan-based ones. The single-scan-based semantic segmentation methods\cite{QiYSG17,LandrieuS18,Hu0XRGWTM20,ChengRTLL21} deal with static LiDAR data and can only find movable objects, such as vehicles and humans, rather than actually moving ones. The multiple-scan-based semantic segmentation methods\cite{ShiLWH020,DuerrPWB20,CaoLPLYL20,TangLZLLWH20} take continuous sequences of LiDAR scans as inputs. They fuse the spatial information of a single scan and the temporal information between adjacent scans to predict the semantic and moving characteristics of each LiDAR point. The multiple-scan-based semantic segmentation methods can be viewed as the combination of single-scan-based semantic segmentation and multiple-scan-based MOS. Inspired by the multiple-scan-based semantic segmentation methods, we use off-the-shelf single-scan-based LiDAR semantic segmentation models to assist the MOS process. \subsection{LiDAR Moving Object Segmentation} A variety of approaches have been proposed for MOS based on camera images only\cite{BarnesMPP18,PatilBDM20} or with both camera and LiDAR data\cite{YanCMSM14,PosticaRM16}.
Recently, the LiDAR-based MOS methods have attracted increasing attention. Some LiDAR-based MOS methods\cite{KimK20a,PagadANRKY20,SchauerN18} focus on classifying dynamic points by checking the inconsistency between the query scan and a pre-constructed map. However, the static map can only be built in an offline way, which hinders the actual deployment of these methods. In recent years, map-free MOS methods using only LiDAR data have achieved success. Ruchti et al.\cite{RuchtiB18} predict the point-wise probabilities of points belonging to moving objects by a learning-based method. Chen et al.\cite{ChenLMWGBS21} combine the LiDAR range images and the residual images to exploit the temporal information and differentiate the moving objects. He et al.\cite{HeERR22} propose a sequential scene flow estimation method to learn the motion information of the point clouds. In this work, we propose a semantics-aware MOS model. The MOS task is reformulated as a cascade of single-scan-based semantic segmentation and multiple-scan-based MOS. The semantic features greatly improve the MOS performance. \section{Method} The proposed network conducts a single-scan-based semantic segmentation and a multiple-scan-based MOS in turn as shown in Fig. \ref{fig:flowchart2}. \subsection{Semantic Segmentation} \label{sec:semantic} To achieve real-time performance, the single-scan-based semantic segmentation module uses LiDAR range images as inputs. The LiDAR points are projected onto a 2D image plane according to the yaw and pitch angles.
For a LiDAR point $P=(x,y,z)$, the transformation is defined as follows, \begin{equation} \begin{cases} \theta_{yaw} = \arctan(y,x)\\ \theta_{pitch} = \arcsin\left(z/\sqrt{x^2+y^2+z^2}\right) \\ u = \frac{1}{2}\left(1-\theta_{yaw}/\pi\right) \cdot W\\ v = \left[1-(\theta_{pitch}-f_{down})/f\right] \cdot H \\ \end{cases} \label{equ:spherical} \end{equation} where $p=(u,v)$ denotes the corresponding coordinates in the LiDAR range image, $\theta_{yaw}$ and $\theta_{pitch}$ denote the yaw and pitch angles, and $W$ and $H$ represent the width and height of the range image, respectively. $f=f_{up}-f_{down}$ is the vertical field-of-view of the LiDAR sensor. Based on the correspondences between $P$ and $p$ in Eq.~\ref{equ:spherical}, we can get LiDAR range images of size $H\times W\times C$, where $C$ denotes the number of feature channels. For the Velodyne HDL-64E LiDAR, the size is $64\times 2048\times 5$. The features include the $range$ ($\sqrt{x^2+y^2+z^2}$), $x$, $y$, $z$, and the $intensity$. Figure \ref{fig:range} shows examples of the LiDAR range images. After this pre-processing step, the LiDAR range images are fed to the single-scan-based semantic segmentation module. To make full use of the existing semantic segmentation models, we directly test our method with the pre-trained models of RangeNet++\cite{MiliotoVBS19} and SalsaNext\cite{CortinhalTA20} in this paper. \begin{figure}[!t] \begin{center} \makebox[1pt]{\includegraphics[width=\linewidth]{range.eps}} \end{center} \caption{The LiDAR range images include $range$, $x$, $y$, $z$ and $intensity$ components. } \label{fig:range} \end{figure} \begin{figure}[!t] \begin{center} \makebox[1pt]{\includegraphics[width=\linewidth]{asa.eps}} \end{center} \caption{The adjacent scan association module. The arrows denote the correspondences between adjacent LiDAR range images. From top to bottom are the current range image, previous range image, and the transformed range image.
} \label{fig:asa} \end{figure} \subsection{Adjacent Scan Association} After the single-scan-based semantic segmentation module, we can get high-level features of each LiDAR scan. In order to exploit the semantic features of each scan, an intuitive idea is to concatenate them directly. However, this simple solution does not work because the semantic features are represented in different coordinate systems. In this paper, we propose an adjacent scan association module to convert the semantic features of adjacent scans into the same coordinate system and then feed them to the MOS module. The ASA module consists of three steps: coordinate system conversion, range image generation, and feature transformation. \textit{Coordinate System Conversion:} Assume that we have a sequence of $N$ LiDAR scans with known odometry information. $S_0$ denotes the current LiDAR scan and $S_i$ represents a previous scan $(0<i\leq N)$. $T_i^0$ denotes the relative transformation between the previous scan $S_i$ and the current scan $S_0$. Given this information, we can convert a LiDAR point in a previous scan $S_i$ to the current scan $S_0$ as follows, \begin{equation} \left[ \begin{array}{c} x_0^{i} \\ y_0^{i} \\ z_0^{i} \\ 1 \\ \end{array} \right] =T_{i}^{0} \cdot \left[ \begin{array}{c} x_i \\ y_i \\ z_i \\ 1 \\ \end{array} \right] \label{equ:coordinate} \end{equation} where $(x_0^{i}, y_0^{i}, z_0^{i}, 1)$ and $(x_i, y_i, z_i, 1)$ denote the homogeneous coordinates of the same 3D point in the coordinate systems of scan $S_0$ and scan $S_i$, respectively. Since the semantic features $F_i$ are represented in the form of a range image, we only need to transform the LiDAR points lying in range image $R_i$ to the coordinate system of scan $S_0$. \textit{Range Image Generation:} After the coordinate system conversion step, we can get the transformed LiDAR point clouds. In this step, we use Eq.
\ref{equ:spherical} to project the transformed scan $S_0^{i}$ onto the range image $R_0$ and generate a new range image $R_0^{i}$. Through this $R_i\Rightarrow S_0^{i}\Rightarrow R_0^{i}$ process, we can get a corresponding point $(u_0,v_0)$ in the current range image $R_0$ if it exists. The association between the previous and current scans is beneficial to the subsequent MOS task. If the relative pose $T_{i}^{0}$ and the semantic segmentation results were accurate enough, we could even distinguish the moving points from the static ones by comparing the corresponding semantic segmentation results: a point is static if its semantic prediction in the current scan is the same as that in the previous scan; otherwise, it is moving. However, in practical applications, there are errors in both the relative poses and the semantic segmentation results. That is why we use the semantic features rather than the semantic results in this paper. \textit{Feature Transformation:} In this step, we use the calculated point correspondence information to assist the feature transformation process. Assume the association between the previous range image $R_i$ and the current range image $R_0$ is represented by $Tr$ with the size of $H\times W$, and $(u_0,v_0)$ is the corresponding point of $(u_i,v_i)$ in $R_0$. The transformation $Tr$ is defined as follows, \begin{equation} Tr(u_i,v_i)= \begin{cases} u_0+v_0\cdot W, & \text{if a correspondence exists}\\ 0, & \text{otherwise} \end{cases} \end{equation} where $Tr$ stores the index information from range image $R_i$ to range image $R_0$. Then, we use the $\mathit{reshape}$ and $\mathit{scatter}$ functions to transform the semantic features $F_i$ of the previous range image $R_i$ to the current range image $R_0$ according to the association information in $Tr$.
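To make the three ASA steps concrete, the following sketch chains the spherical projection of Eq.~(1), the pose transformation of Eq.~(2), and the reshape/scatter feature transfer. This is our illustration, not the authors' code: the sensor field-of-view values, the function names, and the tensor shapes are all assumptions.

```python
import numpy as np
import torch

# Illustrative parameters for an HDL-64E-style sensor; the exact
# field-of-view values are our assumption, not taken from the paper.
H, W = 64, 2048
FOV_UP, FOV_DOWN = np.radians(3.0), np.radians(-25.0)
FOV = FOV_UP - FOV_DOWN  # f = f_up - f_down

def spherical_project(points):
    """Eq. (1): map Nx3 points to integer pixel coordinates (u, v)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-9)
    yaw, pitch = np.arctan2(y, x), np.arcsin(z / r)
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(np.int64)
    v = ((1.0 - (pitch - FOV_DOWN) / FOV) * H).astype(np.int64)
    return np.clip(u, 0, W - 1), np.clip(v, 0, H - 1)

def associate_features(points_prev, T_prev_to_cur, feats_prev):
    """ASA sketch: warp previous-scan features F_i into the current frame.

    points_prev   -- (N, 3) points behind the valid pixels of R_i
    T_prev_to_cur -- (4, 4) relative pose T_i^0
    feats_prev    -- (C, N) semantic features aligned with points_prev
    Returns a (C, H, W) feature map in the current range-image frame.
    """
    # Step 1: coordinate system conversion (Eq. (2)).
    hom = np.hstack([points_prev, np.ones((len(points_prev), 1))])
    pts_cur = (T_prev_to_cur @ hom.T).T[:, :3]
    # Step 2: range image generation -- reproject into R_0, build Tr.
    u, v = spherical_project(pts_cur)
    tr = torch.from_numpy(u + v * W)  # flat indices u_0 + v_0 * W
    # Step 3: feature transformation via reshape/scatter; pixel collisions
    # are resolved arbitrarily, mirroring the GPU many-to-one behavior
    # discussed in the experiments section.
    feats = torch.from_numpy(feats_prev)
    out = torch.zeros(feats.shape[0], H * W, dtype=feats.dtype)
    out.scatter_(1, tr.unsqueeze(0).expand_as(feats), feats)
    return out.reshape(-1, H, W)
```

A single point on the sensor's x-axis with an identity relative pose lands at the horizontal center of the range image, and its feature value is carried over unchanged.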
\subsection{Moving Object Segmentation} Like the semantic segmentation module, we reuse the existing single-scan-based semantic segmentation models, RangeNet++\cite{MiliotoVBS19} and SalsaNext \cite{CortinhalTA20}, for the MOS module in this paper. Both RangeNet++ and SalsaNext follow encoder-decoder architectures and can obtain accurate semantic segmentation results in real-time. After the ASA module, we can get the semantic features of the current range image and the transformed features of the previous range image. As mentioned earlier, the differences between these two semantic features already contain enough information about the movement. However, if only the semantic features are used as inputs, the accuracy of the relative poses and the semantic segmentation network limits the MOS performance. Therefore, we also add the range images of the current scan to minimize the negative impact. The MOS module combines the semantic features and range images to differentiate the moving objects. Compared with the range-image-based MOS methods, the addition of transformed semantic features brings significant improvement. After the MOS process, we can get the segmentation results in the form of a range image as shown in Fig. \ref{fig:flowchart2}. In order to further improve the performance, the results are back-projected to the point cloud form. Then, a k-Nearest-Neighbor (kNN) search algorithm is applied to remove the artifacts caused by spherical projection. \section{Experiments} \subsection{SemanticKITTI MOS Dataset} The SemanticKITTI MOS dataset is a large-scale 3D LiDAR-based moving object segmentation benchmark for the MOS task in outdoor driving scenes. It is built upon the KITTI\cite{GeigerLU12} and SemanticKITTI\cite{BehleyGMQBSG19} datasets. The MOS dataset has 22 sequences of LiDAR scans, where sequences 00-07 and 09-10 are used for training, sequence 08 for validation, and sequences 11-21 for testing.
Only two classes, moving and static, are used in this dataset. The intersection-over-union (IoU) \cite{EveringhamGWWZ10} value on the moving objects is the primary metric used for comparison with other methods. \subsection{Experimental Setup} The proposed network is implemented on a server with 64GB RAM, an Intel(R) Xeon(R) E5-2650 CPU, and two NVIDIA GeForce RTX 2080Ti GPUs, running Ubuntu and PyTorch. The evaluation results on the SemanticKITTI MOS testing dataset are obtained based on all training data, with 150 epochs and a batch size of 8. The initial learning rate and the learning rate policy are set to be the same as in the original networks (RangeNet++ and SalsaNext). Unless otherwise specified, both the single-scan-based semantic segmentation module and the MOS module use the SalsaNext network. The odometry information of each sequence is estimated with the LiDAR-based SLAM method SuMa\cite{BehleyS18}. \subsection{Different Transformation Designs} There are two kinds of data representation conversions in the network. One is the transformation of LiDAR scans to range images in the semantic segmentation module, and the other is the transformation of previous semantic features into the current range image in the ASA module. In this section, we analyze the influences of data type, source data, and hardware on the process of data representation conversion. \subsubsection{Data Type} The coordinates of 3D LiDAR points and the relative pose matrices are saved as floating-point numbers, while the coordinates of range images are integers. This means that the range images in the current coordinate system generated based on previous scans are different from the range images generated based on the current scan, even if the surrounding scene is static, the relative poses are accurate, and there is no occlusion. \subsubsection{Source Data} The raw LiDAR scans and the corresponding range images have different numbers of points.
Therefore, the range images generated from previous scans differ from those generated from previous range images. \subsubsection{Hardware} Converting 3D LiDAR point clouds into 2D range images inevitably encounters the many-to-one problem, which means multiple LiDAR points may correspond to the same point in the range image. In the semantic segmentation module, we only save the points with minimum range values and perform the transformation on CPUs. However, in the ASA module, the feature transformation occurs on GPUs. Parallel computing cannot ensure that the points with minimum range values are saved; instead, an arbitrary point among the conflicting ones is kept. This characteristic leads to differences between the range images generated based on CPUs and GPUs. \subsubsection{Analysis} Figure \ref{fig:rangedifference} shows the range images generated with different source data and hardware. Among them, Fig. \ref{fig:rangedifference} (a) and (c) are the range images of the current and previous scans generated based on CPUs, respectively. The differences between the two range images are caused by the data type, the inaccurate relative pose, and the moving objects. The occlusion also results in more holes at the edges of the objects. Figure \ref{fig:rangedifference} (c) and (e) are the results using the previous scan and previous range image on CPUs, respectively. We can find that due to the difference in point numbers, the range image generated from the previous range image is sparser than the one generated from the previous scan. Figure \ref{fig:rangedifference} (a) and (b) are the range images of the current scan based on CPUs and GPUs, respectively. The difference between the two range images is visually negligible, which indicates that the influence of the hardware on the range image generation process can be ignored. Based on the above analysis and the qualitative comparison results in Fig.
\ref{fig:rangedifference}, we finally use CPUs to convert the LiDAR scans into range images (like Fig. \ref{fig:rangedifference} (a)), and use GPUs to transform the features of the previous range images to the current range image (like Fig. \ref{fig:rangedifference} (f)). \begin{figure}[!t] \begin{center} \makebox[1pt]{\includegraphics[width=\linewidth]{range2.eps}} \end{center} \caption{The range images in the current coordinate system. They are generated with different hardware (CPUs or GPUs) and LiDAR point clouds (current scan, previous scan, or previous range image). (a) and (b) are range images generated from the current scan on CPUs and GPUs, respectively. (c) and (d) are generated from the previous scan on CPUs and GPUs, respectively. (e) and (f) are generated from previous range images on CPUs and GPUs, respectively. } \label{fig:rangedifference} \end{figure} \subsection{Evaluations on SemanticKITTI MOS Dataset} In this section, we compare our method with the state-of-the-art MOS networks. Since there are few LiDAR-based MOS approaches, we also compare our method with some semantic segmentation and scene flow methods. Table \ref{tab:SOTA} shows the quantitative comparison results. \textit{SalsaNext(movable classes)} denotes the MOS result of directly marking all the movable classes as moving. \textit{SalsaNext(retrained)} retrains the SalsaNext network with binary MOS labels. \textit{SceneFlow}\cite{LiuQG19} uses the flow vector to determine the moving objects. \textit{SqSequence}\cite{ShiLWH020} and \textit{KPConv}\cite{ThomasQDMGG19} are multiple-scan-based LiDAR semantic segmentation networks. \textit{LMNet(N=1)}\cite{ChenLMWGBS21} uses range images and the precalculated residual image to differentiate the moving objects. \textit{LMNet(N=8 + Semantics)} represents the semantically enhanced version using 8 residual images.
The proposed semantics-guided MOS network takes the semantic features of the current range image, the transformed features of the previous range image, and the range images of the current scan as inputs. As shown in Tab. \ref{tab:SOTA}, the performance of our semantics-guided MOS method is on par with \textit{KPConv} and slightly worse than LMNet with 8 residual images and semantic information. The multiple-scan-based semantic segmentation method \textit{KPConv} is implemented based on point clouds, which requires high computational overhead and cannot achieve real-time performance. Our method only uses the compact LiDAR range images as inputs and can work in real-time. The \textit{LMNet(N=8 + Semantics)} is equivalent to using 9 consecutive scans. Considerable computational overhead and time are spent on the generation of 8 residual images on CPUs. Our method only uses two adjacent LiDAR scans, and the feature association process (the ASA module) is implemented on GPUs. Figure \ref{fig:mosresult} shows the ground truths and MOS results in the forms of range image and point cloud. \begin{figure*}[!t] \begin{center} \makebox[1pt]{\includegraphics[width=\linewidth]{result.eps}} \end{center} \caption{The MOS results in the forms of range image and point cloud. (b), (d), (f) and (h) are the MOS results predicted by our method. (a), (c), (e) and (g) are the corresponding ground truths.
Red denotes the moving objects.} \label{fig:mosresult} \end{figure*} \begin{table}[!t] \caption{MOS Performance Compared with the State-of-the-art (Test)} \begin{center} \begin{tabular}{c c} \hline \textbf{Algorithms} & IoU\\ \hline SalsaNext(movable classes) & 4.4\% \\ SceneFlow\cite{LiuQG19} & 4.8\% \\ SqSequence\cite{ShiLWH020} & 43.2\% \\ SalsaNext(retrained) & 46.6\% \\ KPConv\cite{ThomasQDMGG19} & 60.9\% \\ LMNet(N=1) & 52.0\% \\ LMNet(N=8 + Semantics) & 62.5\% \\ Ours & 60.6\% \\ \hline \end{tabular} \end{center} \label{tab:SOTA} \end{table} \subsection{Ablation Study} To reduce computational overhead and time cost, the ablation study uses only 4000 training scans sampled at equal intervals. The epoch size and batch size are set to 30 and 2, respectively. \subsubsection{Effectiveness of Semantic Guidance} Most existing LiDAR-based MOS methods use the raw LiDAR data and conduct the MOS task directly. The proposed network introduces a semantic segmentation module to assist the MOS process. In this section, we analyze the effectiveness of the semantic guidance provided by the semantic segmentation module. Table \ref{tab:guidance} shows the MOS results on the validation dataset. \textit{MOS(RXYZI + Range Residual)} only uses the MOS module and takes range images and a residual image as inputs. \textit{MOS(RXYZI + Range Residual) + Semantics} represents its semantically enhanced version and uses the semantic segmentation results at the end. \textit{MOS(RXYZI + Features)} denotes the proposed method with range images and semantic features as inputs of the MOS module. Table \ref{tab:guidance2} shows the MOS results on the validation dataset with all training data. \textit{LMNet(N=1)} and \textit{LMNet(N=1 + Semantics)} use the precalculated residual image. The latter also uses the semantic segmentation results. 
\textit{MOS(RXYZI + Features)} represents the proposed network using range images and semantic features in the MOS module. It is obvious that semantic information helps improve the MOS performance. The single-scan-based semantic segmentation task is closely related to the MOS task. Compared with using the residual images or applying the semantic segmentation results at the end, our method, which feeds the semantic features into the MOS module, is more effective. The pre-trained single-scan-based semantic segmentation model can not only simplify the training process but also improve the MOS performance. \begin{table}[!t] \caption{Effectiveness of Semantic Guidance (Validation)} \begin{center} \tabcolsep3pt \begin{tabular}{c c c} \hline \textbf{Algorithms} & Params & IoU\\ \hline MOS(RXYZI + Range Residual) & 6711043 & 38.6\% \\ MOS(RXYZI + Range Residual) + Semantics & 13422615 & 41.4\% \\ MOS(RXYZI + Features) & 13423863 & 60.5\% \\ \hline \end{tabular} \end{center} \label{tab:guidance} \end{table} \begin{table}[!t] \caption{Effectiveness of Semantic Guidance with All Training Data (Validation)} \begin{center} \tabcolsep3pt \begin{tabular}{c c c} \hline \textbf{Algorithms} & Params & IoU\\ \hline LMNet(N=1) & 6711043 & 59.9\% \\ LMNet(N=1 + Semantics) & 13422615 & 61.4\% \\ MOS(RXYZI + Features) & 13423863 & 68.4\% \\ \hline \end{tabular} \end{center} \label{tab:guidance2} \end{table} \subsubsection{Different MOS Inputs} In this section, we compare the MOS performance of different inputs, including raw LiDAR data (RXYZI), the range residual image, and semantic features. Table \ref{tab:inputs} shows the comparison results. \textit{RXYZI} yields the worst MOS performance due to the lack of temporal information. \textit{Features(current)} and \textit{RXYZI + Features(current)} also use only the current LiDAR scan; even so, semantic guidance without temporal information still improves the results. 
\textit{RXYZI + Range Residual} and \textit{RXYZI + Features}, which combine the range images with the residual image or semantic features, increase the MOS accuracy. Compared with the residual image, the semantic features are more effective for the MOS task. However, using both the residual image and the semantic features at the same time degrades the performance, which may be caused by conflicts between the two. A comparison between \textit{Features} and \textit{RXYZI + Features} shows that range images also contribute to the MOS task. Concatenating the current semantic features with the transformed previous features obtains better results than using the feature residuals or directly concatenating the semantic features without the ASA operation. In summary, the transformed semantic features are more effective than the residual image and the original semantic features from previous scans. \begin{table}[!t] \caption{Different MOS Inputs (Validation)} \begin{center} \tabcolsep3pt \begin{tabular}{c c c} \hline \textbf{Algorithms} & Params & IoU\\ \hline RXYZI & 6711011 & 26.9\% \\ Features(current) & 13423063 & 45.2\% \\ RXYZI + Features(current) & 13423223 & 51.8\% \\ \hline RXYZI + Range Residual & 6711043 & 38.6\% \\ RXYZI + Features & 13423863 & 60.5\% \\ RXYZI + Range Residual + Features & 13423895 & 58.9\% \\ \hline Features & 13423703 & 56.1\% \\ RXYZI + Feature Residuals & 13423223 & 39.1\% \\ RXYZI + Features(concat) & 13423863 & 48.7\% \\ \hline \end{tabular} \end{center} \label{tab:inputs} \end{table} \subsubsection{Influence of Scan Numbers} In this section, we analyze the influence of the number of LiDAR scans used on the MOS performance, similar to the residual image analysis in LMNet. Theoretically, more LiDAR scans bring more useful information and improve the MOS performance. Table \ref{tab:numbers} shows the MOS results with 2 to 8 consecutive LiDAR scans as inputs. 
It is obvious that the MOS accuracy does not increase with the number of LiDAR scans. This is different from the residual images in LMNet. There may be two reasons. First, the semantic segmentation module is not accurate enough; with more LiDAR scans (and thus more semantic features), conflicts accumulate, which limits the improvement of the MOS result. Second, the relative poses between scans are inaccurate, and this inaccuracy grows with the time interval. In addition, although the transformations from the previous range images to the current coordinate system are performed on GPUs, they still increase the training time. Based on the above considerations, we only use two adjacent LiDAR scans in our MOS method. \begin{table}[!t] \caption{Influence of Scan Numbers (Validation)} \begin{center} \tabcolsep10pt \begin{tabular}{c c c} \hline \textbf{Scans} & Params & IoU\\ \hline 2 & 13423863 & 60.5\% \\ 3 & 13424503 & 59.3\% \\ 4 & 13425143 & 57.0\% \\ 5 & 13425783 & 59.0\% \\ 6 & 13426423 & 54.4\% \\ 7 & 13427063 & 60.0\% \\ 8 & 13427703 & 58.3\% \\ \hline \end{tabular} \end{center} \label{tab:numbers} \end{table} \subsubsection{Influence of Modular Design} The proposed network can be divided into the single-scan-based semantic segmentation module, the ASA module, and the MOS module. To analyze the benefits of the modular design, we use different network architectures to replace the single-scan-based semantic segmentation module and the MOS module. Table \ref{tab:modular} shows the MOS results with different network architectures. \textit{SS(RangeNet++)} and \textit{MOS(RangeNet++)} represent the use of the RangeNet++ network in the single-scan-based semantic segmentation module and the MOS module, respectively. \textit{SS(SalsaNext)} and \textit{MOS(SalsaNext)} indicate the use of the SalsaNext network. The SalsaNext network has better semantic segmentation performance than the RangeNet++ network. The MOS results in Tab. 
\ref{tab:modular} show that, due to the modular design, the MOS performance can be improved by simply updating any trainable module in the network. \begin{table}[!t] \caption{Influence of Modular Design (Validation)} \begin{center} \begin{tabular}{c c c} \hline \textbf{Algorithms} & Params & IoU\\ \hline SS(RangeNet++) + MOS(RangeNet++) & 100761335 & 29.5\% \\ SS(RangeNet++) + MOS(SalsaNext) & 57089655 & 48.4\% \\ SS(SalsaNext) + MOS(SalsaNext) & 13423863 & 60.5\% \\ SS(SalsaNext) + MOS(RangeNet++) & 57095543 & 35.7\% \\ \hline \end{tabular} \end{center} \label{tab:modular} \end{table} \section{Conclusions} In this paper, we propose a moving object segmentation network guided by semantic information. The network includes three modules: a single-scan-based semantic segmentation module, an adjacent scan association (ASA) module, and a multiple-scan-based moving object segmentation module. The ASA module works as an intermediate to connect the semantic segmentation module and the MOS module. After the ASA process, we obtain the correspondences and the difference information between the semantic features, which are beneficial to the MOS task. Experiments on the SemanticKITTI MOS dataset show the effectiveness of the network. The modular design also facilitates upgrading individual modules and further improves the MOS accuracy. \bibliographystyle{IEEEtran}
\section*{Background \& Summary}\label{sec:background} \iffalse 1. Say something about the COVID19 - DONE 2. relevance of the API in controlling or easing COVID-19 - DONE 3. varieties of NPI - DONE 4. talk about how an NPI dataset might help - DONE 6. talk about the existing datasets and their problems / desirable characteristics of an NPI dataset - DONE 7. Justify our approach (Wikipedia-based) - DONE 8. introduce our dataset formally - DONE 9. Set expectation on what is lies ahead in the paper - DONE \fi The \gls{covid} pandemic has made an unprecedented impact on almost every facet of human civilization, from healthcare systems to economies and governments worldwide. As of August 2020, every country in the world has been affected, with more than 24M confirmed cases of infection and a death toll approaching one million worldwide~\cite{jhu, who, worldometer}. The pandemic has triggered a wide range of \gls{npi} responses across the world. With therapeutic and preventive interventions still in early stages of development, every country has resorted to \glspl{npi} as a primary strategy~\cite{c19hcc, ferguson2020report} for disease control. Examples of such interventions include community actions (e.g. school closures, restrictions on mass gatherings), individual actions (e.g. mask wearing, self-quarantine), and environmental actions (e.g. public facility cleaning). Such \glspl{npi} vary significantly in their implementation based on the maturity of the health infrastructure, robustness of the economy, and cultural values unique to the region. Public health policy makers worldwide are striving to introduce successful intervention plans that manage the spread of disease while balancing the socio-economic impacts~\cite{coibion2020cost, lancet2020india}. These initiatives will benefit from modeling the efficacy of different intervention strategies. 
The pandemic has sparked an ongoing surge of discovery and information sharing, resulting in an unprecedented amount of data being published online~\cite{Wang2020CORD19TC}. This includes information about \gls{npi} measures, which is available in a wide variety of unstructured data sources, including official government websites~\cite{us-chamber-of-commerce,us-csg}, press releases, social media, and news articles. However, such modeling requires the information about the \glspl{npi} to be available in a structured form. To address this urgent need, several data collection initiatives have emerged in recent months, resulting in several publicly available datasets with varying degrees of coverage, data freshness, and sparsity. For example, the CoronaNet dataset~\cite{CoronaNet} contains monadic and dyadic data on policy actions taken by governments across the world; it is manually curated by over 500 researchers, covers sixteen \gls{npi} types, and is kept fairly up-to-date. The Complexity Science Hub, Vienna enlisted researchers, students, and volunteers to curate the \emph{Complexity Science Hub COVID-19 Control Strategies List}~\cite{Desvars-Larrive2020} dataset, which covers eight different \gls{npi} types but only 57 countries. Similarly, the Oxford \gls{covid} Government Response Tracker~\cite{hale2020oxford} dataset takes a crowd-sourcing approach and covers 17 \gls{npi} types, 186 regions, and 52 US states and territories. Because all these datasets are assembled manually, each of them is constrained in one or more respects: geographical scope, taxonomic richness, frequency of updates or granularity of details, and evidential sources. An AI-assisted, semi-automated data collection approach, driven by a rich, extensible taxonomy, can help overcome these issues and may result in a larger, frequently updated dataset with less manual labor. \iftoggle{inplace}{ \begin{figure}[htp!] 
\centering \includegraphics[width=0.8\linewidth]{wntrac-others} \caption{Artificial intelligence assisted approach to build the \gls{wntrac} dataset.} \label{fig:approach} \end{figure} } Wikipedia is one of the main sources of accessible information on the Internet. Since the start of \gls{covid}, a dedicated global network of volunteers has been creating, updating, and translating Wikipedia articles with vital information about the pandemic~\cite{wiki-covid}. Over 5,000 new Wikipedia pages on \gls{covid} have been written by more than 71,000 volunteers since the onset of the pandemic, accumulating more than 440M page views by June~2020. Although crowd-sourced, Wikipedia articles can serve as a reliable source of \gls{npi} data through the process of collective validation~\cite{jessen2012aggregated} and through citations of credible sources such as government websites, scientific literature, and news articles. Further, these Wikipedia articles are constantly updated, having been edited more than 793,000 times as of August~2020, making them both a rich and up-to-date source. Based on this, we postulated that an approach based on automated information extraction from Wikipedia, followed by human validation to ensure accuracy and veracity, would result in a frequently updated dataset with wider coverage than any of the existing datasets. We present the result of our work, \gls{wntrac}, a comprehensive dataset consisting of over 6,000 \glspl{npi} implemented worldwide since the start of the pandemic. \gls{wntrac} covers \glspl{npi} implemented across 261 countries and territories, and classifies \gls{npi} measures into a taxonomy of sixteen \gls{npi} categories. \gls{npi} measures are automatically extracted daily from Wikipedia articles using \gls{nlp} techniques and manually validated to ensure accuracy and veracity. 
In what follows, we explain the methods used to create the dataset, outline the challenges and key design choices, describe the format, provide an assessment of its quality, and lay out our vision of how this dataset can be used by policy makers, public health leaders, and data scientists and researchers to support modeling and analysis efforts. \iffalse Add the following aspects in the comparison to other datasets: region up to date - semi-automatic coverage -taxonomy fine grained entity human in the loop \fi \section*{Methods}\label{sec:methods} \iffalse 1. Describe how we collected the raw data. Highlight any issues with the approach. Are we missing some data due to this approach? 2. Describe how we process the data. Are we introducing any errors (including that of omission) due to this processing? 3. Describe the details of the system used for processing. 3a. Define the NPI event. 3b. Define the language / technical problem clearly including any sub-problems. Describe common approaches to solving such problems 3c. Describe various stages of the processing pipeline. 3d. What are the ML models that are used and how they have been trained. Include any qualitative assessment of the accuracy of the models (because it would affect quality of the dataset). 4. Describe the validation application and any design choices made there (again, because it would affect quality of the dataset) 5. Describe how this system is scalable and how it allows us to keep up with the changes to the Wikipedia (this is this is where we support the claims made in the background section). 6. Link it to the next section (quality assessment / technical validation) \fi We built a semi-automated system to construct the dataset and keep it current. The \gls{npi} measures are modeled as \glspl{event} and \glspl{evidence} for information extraction purposes. This is illustrated by a motivating example shown in Figure~\ref{fig:npi-example-may-15}. 
Each \gls{event} corresponds to an imposition or lifting of a particular \gls{npi}. An \gls{event} is defined to be a 5-tuple (what, value, where, when, restriction), where \begin{enumerate} [noitemsep, topsep=0pt] \item What: the \emph{type} of \gls{npi} that was imposed or lifted. \glspl{npi} are grouped into sixteen major types. In the example, the type is \emph{school closure}. \item Value: a sub-category or attribute that further qualifies the \gls{npi} type. In the example, the associated value is \emph{all schools closed}. A detailed description of each type and the corresponding possible values is shown in Table~\ref{tab:taxonomy}. \item Where: the region (country, territory, province, or state) in which the \gls{npi} measure has been implemented or withdrawn. In this example, three distinct regions, namely \emph{Punjab}, \emph{Chhattisgarh}, and \emph{Manipur}, are identified, and three separate \glspl{event} will be extracted. \item When: the date from which the \gls{npi} was imposed or lifted. In the example, the date will be \emph{13 March}, corresponding to the implementation of the \gls{npi}, even though a likely date for the lifting of the \gls{npi}, \emph{31 March}, is also indicated. \item Restriction: a flag indicating whether the event corresponds to the introduction or the withdrawal of the \gls{npi}. Note that the lifting of an \gls{npi} is treated as a separate event. In the example, the restriction type is \emph{imposed}. \end{enumerate} \iftoggle{inplace}{ \begin{figure}[htp!] \centering \includegraphics[width=0.6\linewidth]{wntrac-eg-may-15} \caption{An example of the \gls{npi} measure mentioned in the Wikipedia article of 15\textsuperscript{th} May 2020.} \label{fig:npi-example-may-15} \end{figure} \iffalse \begin{figure}[htp!] \centering \includegraphics[width=0.6\linewidth]{wntrac-eg-june-5} \caption{An example of the \gls{npi} event reported in the Wikipedia article on 5\textsuperscript{th} June 2020. 
Compared to the May 15 version of Figure~\ref{fig:npi-example-may-15}, the regions and sources have been updated. } \label{fig:npi-example-june-5} \end{figure} \fi } In addition to the mandatory fields described above, an \gls{event} contains one or more \glspl{evidence}. An \gls{evidence} is a span of text extracted from Wikipedia that discusses a particular \gls{event}. In the example, \emph{On 13 March, the Punjab, Chhattisgarh, and Manipur governments declared holidays in all schools and colleges till 31 March.} is the \gls{evidence}. An \gls{evidence} may support more than one \gls{event}. Each \gls{evidence} is accompanied by a source type indicating the type of source of the Wikipedia citation. More details about such additional attributes can be found in the data records section. \iftoggle{inplace}{ \begin{figure}[htp!] \centering \includegraphics[width=0.6\linewidth]{system-architecture} \caption{The \gls{wntrac} automated \gls{npi} curation system. It consists of a processing pipeline, \gls{tool} validation tool, and \gls{npi} data browser.} \label{fig:system} \end{figure} } The system, shown in Figure~\ref{fig:system}, is designed to be scalable for continuous gathering, extraction, and validation of \gls{npi} \glspl{event}. It consists of three subsystems: a data processing pipeline for capturing and extracting potential \gls{npi} \glspl{event} from Wikipedia articles, a tool called \gls{tool} for human validation of the \gls{npi} \glspl{event} automatically extracted by the aforementioned pipeline, and a data browser for visualizing the data. In the next section, we describe the system and its components at a high level, focusing on key design choices that have a bearing on the quality of the dataset, starting with a brief description of the data collection. 
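The \gls{event} and \gls{evidence} model described above can be summarized as a small data structure. The Python sketch below is illustrative shorthand, not the official schema; it encodes the 5-tuple, attaches evidence, and shows how the motivating example yields three separate events that share one evidence span.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """A span of Wikipedia text supporting one or more events."""
    text: str
    source_type: str  # e.g. government website, news article

@dataclass
class Event:
    """An imposition or lifting of a single NPI (the 5-tuple)."""
    what: str          # NPI type, one of the sixteen taxonomy types
    value: str         # sub-category/attribute qualifying the type
    where: str         # country, territory, province, or state
    when: str          # date the NPI was imposed or lifted
    restriction: str   # "imposed" or "lifted"
    evidences: List[Evidence] = field(default_factory=list)

# The motivating example yields three events, one per region,
# all sharing the same evidence span.
evidence = Evidence(
    text=("On 13 March, the Punjab, Chhattisgarh, and Manipur governments "
          "declared holidays in all schools and colleges till 31 March."),
    source_type="news article",
)
events = [
    Event("school closure", "all schools closed", region,
          "2020-03-13", "imposed", [evidence])
    for region in ("Punjab", "Chhattisgarh", "Manipur")
]
```

The lifting of the same NPI on 31 March would be recorded as a separate `Event` with `restriction="lifted"` once supporting evidence appears.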
\iftoggle{inplace}{ \newcommand{\pf}[1]{\parbox{6cm}{#1}} \newcommand{\midsepdefault}{\aboverulesep = 0.605mm \belowrulesep = 0.984mm} \newcommand{\midsepremove}{\aboverulesep = 0mm \belowrulesep = 0mm} \begin{scriptsize}\centering \begin{longtable}[t]{@{}p{0.15\textwidth}@{}p{0.35\textwidth}p{0.06\textwidth}@{}m{0.30\textwidth}} \toprule \textbf{\gls{npi}} & \textbf{Example} & \textbf{Value} & \textbf{Value description}\\ \midrule {\parbox{2.5cm}{changes in \newline prison-related policies}} & \pf{On March 30, the GNA announced the release of 466 detainees in Tripoli, as part of an effort to stop the spread of the virus in prisons.} & Integer & Number of prisoners that were released \\ \midrule {confinement} &\pf{On 19 March, President Alberto Fernández announced a mandatory lockdown to curb the spread of coronavirus.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item Mandatory/advised for all the population \item Mandatory/advised for people at risk \end{enumerate} \\ \midrule contact tracing & \pf{On 2 March, a case in Nimes was traced to the mid-February Mulhouse Megachurch event.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item Tracing back 14 days of contacts of a confirmed patient through electronic information \item Tracing contacts of a person who needs to be isolated as was in contact with a confirmed patient through electronic information \end{enumerate} \\ \midrule {\parbox{2.5cm} {domestic flight restriction}} & \pf{On 1 April, the Government of Afghanistan suspended flights between Kabul and Herat.} & String & Name of the state where the passenger is arriving from \\ \midrule economic impact & \pf{Up until 14 March, the Afghan government had spent \$25 million to tackle the outbreak, which included \$7 million of aid packages.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item Stock market \item Unemployment rate \item Industrial
production \end{enumerate} \\ \midrule {\parbox{2.5cm} {entertainment / \newline cultural sector closure}} & \pf{On April 7, Rockland and Sullivan counties closed their parks.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item Bars, restaurants, night clubs \item Museums, theaters, cinema, libraries, festivities \item Parks and public gardens \item Gyms and pools \item Churches \end{enumerate} \\ \midrule {\parbox{2.5cm} {freedom of movement \newline(nationality dependent)}} & \pf{Iran was added to the list of countries whose nationals were suspended entry to Cambodia, making a total of six.} & String & Name of the country the citizen is from\\ \midrule {\parbox{2.5cm} {international \newline flight restrictions}} & \pf{With effect from midnight on 1 April, Cuba suspended the arrival of all international flights.} & String & Name of the country or state where the passenger is arriving from \\ \midrule {\parbox{2.5cm} {introduction of \newline travel quarantine policies}} & \pf{Israeli nationals returning from Egypt were required to enter an immediate 14-day quarantine.} & String & Name of the country or state where the passenger travelled from\\ \midrule mask wearing & \pf{On April 15, Cuomo signed an executive order requiring all New York State residents to wear face masks or coverings in public places.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*, topsep=0pt] \item Mandatory \item Mandatory in some public spaces \item Recommended \end{enumerate} \\ \midrule mass gatherings & \pf{On 13 March, it was announced at an official press conference that a four-week ban on public gatherings of more than 100 persons would be put into effect as of Monday 16 March.} & Integer & Maximum number of people in social gatherings allowed by the government\\ \midrule public services closure & \pf{On 19 March, Election Commissioner Mahinda Deshapriya revealed that the 2020 Sri Lankan parliamentary election will be postponed indefinitely until 
further notice due to the coronavirus pandemic.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item Government/parliament system closed \item Legal system closed \end{enumerate} \\ \midrule public transportation & \pf{On March 20, Regina Transit and Saskatoon Transit suspended fares for all bus service, but with reduced service.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item Partial cancellation of routes/stops during the week/weekend \item Total cancellation of transport (special case for some states in China) \end{enumerate} \\ \midrule school closure & \pf{On 13 March, the Punjab and Chhattisgarh governments declared holidays in all schools and colleges till 31 March.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item All schools (general) closed \item Only kindergartens/daycare closed \item Only schools (primary/secondary) closed \item Universities closed \end{enumerate} \\ \midrule state of emergency \newline (legal impact) & {\pf{Governor Charlie Baker declared a state of emergency for the state of Massachusetts on March 10.}} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item National guard joins the law enforcement \item Army joins the law enforcement \end{enumerate} \\ \midrule \renewcommand{\arraystretch}{1.5}work restrictions & \pf{On 10 April, Koike announced closure requests for six categories of businesses in Tokyo.} & Category & \begin{enumerate}[nosep, noitemsep, leftmargin=*] \item Suggestion to work from home for non-essential workers \item Mandatory work from home enforcement for non-essential workers \end{enumerate} \\ \bottomrule \caption{Taxonomy of the \acrlong{wntrac} dataset.}\label{tab:taxonomy} \end{longtable} \end{scriptsize} \iffalse \begin{table*}[htp!] 
\scriptsize \vspace{10pt} \centering \begin{tabular}[t]{p{0.20\textwidth}p{0.06\textwidth}p{0.25\textwidth}p{0.30\textwidth}}\toprule \textbf{\gls{npi}} & \textbf{Value type} & \textbf{Value description} & \textbf{Example} \\ \midrule changes in prison-related policies & Integer & Number of prisoners that were released & \\ \midrule \multirow{2}{*}{confinement} & \multirow{2}{*}{Category} & Mandatory/advised for all the population & \\ \cmidrule{3-4} & & Mandatory/advised for people at risk & \\ \midrule \multirow{2}{*}{contact tracing} & \multirow{2}{*}{Category} & Tracing back 14 days of contacts of a confirmed patient through electronic information & \\ \cmidrule{3-4} & & Tracing contacts of a person who needs to be isolated as was in contact with a confirmed patient through electronic information & \\ \midrule domestic flight restriction & String & Name of the state where the passenger is arriving from & \\ \midrule \multirow{3}{*}{economic impact} & \multirow{3}{*}{Category} & Stock market & \\ \cmidrule{3-4} & & Unemployment rate & \\ \cmidrule{3-4} & & Industrial production & \\ \midrule \multirow{5}{*}{entertainment / cultural sector closure} & \multirow{5}{*}{Category} & Bars, restaurants, night clubs & \\ \cmidrule{3-4} & & Museums, theaters, cinema, libraries, festivities & \\ \cmidrule{3-4} & & Parks and public gardens & \\ \cmidrule{3-4} & & Gyms and pools & \\ \cmidrule{3-4} & & Churches & \\ \midrule freedom of movement (nationality dependent) & String & Name of the country the citizen is from & \\ \midrule international flight restrictions & String & Name of the country or state where the passenger is arriving from & \\ \midrule introduction of travel quarantine policies & String & Name of the country or state where the passenger travelled from & \\ \midrule \multirow{2}{*}{state of emergency (legal impact)} & \multirow{2}{*}{Category} & National guard joins the law enforcement & \\ \cmidrule{3-4} & & Army joins the law enforcement & \\ \midrule 
mask wearing & Category & Recommended & \\ \midrule mass gatherings & Integer & Maximum number of people in social gatherings allowed by the government & \\ \midrule \multirow{2}{*}{public services closure} & \multirow{2}{*}{Category} & Government/parliament system closed & \\ \cmidrule{3-4} & & Legal system closed & \\ \midrule \multirow{2}{*}{public transportation} & \multirow{2}{*}{Category} & Partial cancellation of routes/stops during the week/weekend & \\ \cmidrule{3-4} & & Total cancellation of transport (special case for some states in China) & \\ \midrule \multirow{4}{*}{school closure} & \multirow{4}{*}{Category} & All schools (general) closed & \\ \cmidrule{3-4} & & Only kindergartens/daycare closed & \\ \cmidrule{3-4} & & Only schools (primary/secondary) closed & \\ \cmidrule{3-4} & & Universities closed & \\ \midrule \multirow{2}{*}{work restrictions} & \multirow{2}{*}{Category} & Suggestion to work from home for non-essential workers & \\ \cmidrule{3-4} & & Mandatory work from home enforcement for non-essential workers & \\ \bottomrule \end{tabular} \caption{Taxonomy of the \acrlong{wntrac} dataset.} \label{tab:taxonomy} \end{table*} \fi } \subsubsection*{Data Collection}\label{sec:data_collection} As stated earlier, Wikipedia includes a broad range of articles on \gls{covid} covering a variety of topics, including the cause, transmission, diagnosis, prevention, management, economic impact, and national responses. Categories are used in Wikipedia to link articles under a common topic and are found at the bottom of the article page. This dataset was collected by automatically crawling Wikipedia articles discussing \gls{covid} in different regions belonging to the category~\cite{wiki:catagory} \texttt{\gls{covid} pandemic by country}\footnote{For the \emph{mask wearing} \gls{npi} type, Wikipedia articles were observed to be incomplete for some regions, so we augmented the dataset with a hand-curated list of \gls{npi} measures from web sources.}.
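The category crawl can be sketched against the MediaWiki API's \texttt{categorymembers} listing. The endpoint and parameter names below are the API's own; the injectable \texttt{fetch} function and the offline stub are our illustration, so the pagination logic can be exercised without network access:

```python
# Real MediaWiki Action API endpoint for English Wikipedia.
API_URL = "https://en.wikipedia.org/w/api.php"

def crawl_category(category, fetch, limit=500):
    """Yield article titles under a Wikipedia category, following
    'cmcontinue' pagination tokens until the listing is exhausted."""
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Category:{category}",
        "cmlimit": limit,
        "format": "json",
    }
    while True:
        data = fetch(API_URL, params)   # e.g. requests.get(url, params=params).json()
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:      # no further pages
            break
        params["cmcontinue"] = data["continue"]["cmcontinue"]

# Offline stub standing in for the HTTP call, simulating two result pages:
pages = [
    {"query": {"categorymembers": [{"title": "COVID-19 pandemic in India"}]},
     "continue": {"cmcontinue": "page2"}},
    {"query": {"categorymembers": [{"title": "COVID-19 pandemic in France"}]}},
]
fetch_stub = lambda url, params: pages[1] if params.get("cmcontinue") else pages[0]
titles = list(crawl_category("COVID-19 pandemic by country", fetch_stub))
```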
There are 156 subcategories and 198 articles directly under \texttt{\gls{covid} pandemic by country}, and when retrieved recursively, there are 384 articles under this top-level category as of July 2020. Considering the limited availability of volunteers and the volume of \gls{npi} measures that had to be validated initially, we restricted the number of articles to a manageable size, covering 261 regions (i.e., countries and territories) as listed in the tables at the end of the paper. \subsubsection*{Processing Pipeline} The first step in the data processing is to retrieve the aforementioned list of Wikipedia articles on a periodic basis. The \gls{crawler} module implements this functionality. It uses the MediaWiki API~\cite{wiki-api} for downloading the articles. As part of this step, we extract the text content of each article while preserving all the associated citations. This process produces a \gls{document} for each article. Each sentence in a \gls{document} is a candidate for \gls{npi-ext}. As of August 2020, the aggregate crawled data contains over 55,000 sentences, with an average of 213 sentences per \gls{document}. The second step in the pipeline is the extraction of the \gls{npi} \glspl{event} from a \gls{document}. It is broken down into the sequence of steps described below. \begin{itemize} \item \gls{prep}: As the first step in processing a \gls{document}, we use sentence boundary detection algorithms from libraries such as spaCy~\cite{honnibal2015improved} to identify where sentences begin and end. Although the sentences are used as logical units to extract \gls{npi} \glspl{event}, we preserve the order in which they appear in the source document for reasons detailed below. Also at this step, we extract and retain the citation URL, if available, for each sentence. \item \gls{sent-class}: Next, we classify each sentence into one of the \gls{npi} types, such as \emph{school closure}, to identify potential \gls{npi} \glspl{event}.
If no \gls{npi} is discussed in the sentence, we classify it as \emph{discarded}. We use multiple learning algorithms, including logistic regression, support vector machines, and \gls{bert}~\cite{Devlin_Chang_Lee_Toutanova_2018}, and employ an ensemble method to obtain better overall predictive performance. A small subset of the data (1,490 sentences) was manually annotated to train the models. Independently, we also categorize the sentence as implying either the introduction or the withdrawal of an \gls{npi} (\gls{restriction}). \item \gls{ner-d}: After we identify the potential \glspl{event} in the previous step, we extract the specific constituent entities for each candidate \gls{event} from the sentence. We use state-of-the-art named-entity recognizers (such as spaCy~\cite{honnibal2015improved}) and normalizers to detect and normalize locations (\emph{Where}: [\emph{Punjab}, \emph{Chhattisgarh}, \emph{Manipur}]) and time expressions (\emph{When}: \emph{March 13}). In addition, we link the location entities of type `GPE' in the Wikipedia article title to the corresponding ISO codes~\cite{wiki:iso-3166-1, wiki:iso-3166-2}. Even though we use the sentence as the logical unit for the extraction of an \gls{npi} \gls{event}, the sentence itself may not include all the relevant information. For example, the date or location may be available in nearby sentences or in the header of the paragraph to which the sentence belongs. To address this key challenge, we developed a heuristic-based relation detection algorithm that associates one of the dates or locations extracted from the current document with each sentence. \item \gls{val-ex}: The last step in \gls{npi} \gls{event} extraction is determining the associated \gls{value}. We use multiple rule-based algorithms that either operate independently or depend on information extracted in the previous steps.
For example, given the sentence \texttt{"On 13 March, it was announced at an official press conference that a four-week ban on public gatherings of more than 100 persons would be put into effect."}, the event type is \emph{mass gatherings} and the associated value is the \emph{maximum number of people in social gatherings allowed by the government}. The value extraction is performed using parse-based rule engines~\cite{honnibal2015improved}. It is worth noting that the value extraction component must know the actual type, \emph{mass gatherings}, before it can extract the correct value, ``100''. Similarly, given the sentence \texttt{"On 1 April, the Government of USA suspended flights from New York to Texas"}, the event type is \emph{domestic flight restriction} and the associated value is the \emph{name of the state where the passenger is arriving from}. To correctly extract the \gls{value} in these examples, the value extraction needs to know the correct \gls{type} and the normalized locations (``New York''), respectively. \end{itemize} Thus, using the above procedure, we extract the unique 5-tuples that are the candidate \gls{npi} \glspl{event}. Once extracted, they are presented to the volunteers for validation to ensure data quality. This process is repeated every day. To minimize manual labor, considering the small number of volunteers, we attempt to detect changes since the last time we crawled Wikipedia. We use a combination of syntactic similarity metrics, such as normalized Levenshtein distance, and semantic similarity metrics, such as event attribute matching, to perform this daily \gls{delta} for each extracted \gls{document}. \iftoggle{inplace}{ \begin{figure}[htp!] \centering \includegraphics[width=0.99\linewidth]{wntrac-curator} \caption{The \gls{tool} tool used for ongoing validation of the dataset.} \label{fig:wntrac-curator} \end{figure} } \subsubsection*{\gls{tool}} The \glspl{event} automatically extracted by the pipeline are vetted by volunteers using the \gls{tool} validation tool.
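Before candidate \glspl{event} reach the validation queue, the daily \gls{delta} described above filters out sentences already seen in a previous crawl. A minimal sketch of the syntactic half of that check, assuming a normalized Levenshtein similarity with an illustrative 0.9 threshold (the pipeline additionally uses semantic event-attribute matching):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    """Normalized Levenshtein similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def new_sentences(old, new, threshold=0.9):
    """Sentences in the fresh crawl with no close match in the previous one."""
    return [s for s in new
            if all(similarity(s, o) < threshold for o in old)]

old = ["On 13 March, schools were closed in Punjab."]
new = ["On 13 March, schools were closed in Punjab.",   # unchanged, filtered out
       "On 1 April, domestic flights were suspended."]  # genuinely new
```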
The tool is a simple web application backed by a \gls{db}, as shown in Figure~\ref{fig:system}. The tool is shown in Figure~\ref{fig:wntrac-curator}. At the top, it displays the complete Wikipedia \gls{document} extracted by the processing pipeline. Below the \gls{document}, each candidate \gls{event} is shown to the volunteer in a separate \emph{card}. The volunteer can adjudge the candidate to be a brand-new \gls{npi} \gls{event} or an \gls{evidence} for an existing \gls{event}, or discard the candidate altogether. They can also correct any of the attributes associated with the \gls{event} extracted by the pipeline. \iftoggle{inplace}{ \begin{figure}[htp!] \centering \includegraphics[width=0.9\linewidth]{browser} \caption{Data browser for visualizing the \acrlong{wntrac} dataset.} \label{fig:data_browser} \end{figure} } \subsubsection*{Data Browser} Figure~\ref{fig:data_browser} presents an interactive data browser~\cite{data-browser} that uses a chart, a map, and a histogram to provide a descriptive analysis of \glspl{npi} and \gls{covid} outcomes such as confirmed cases and deaths. The browser has a control panel used to filter the data being visualized (e.g., cases vs. deaths), as well as how it is visualized (e.g., linear vs. log scale). A play slider can be used to view the temporal evolution of \glspl{npi} and \gls{covid} outcomes in a given region. The chart illustrates the time points at which a geographical region imposes or lifts an \gls{npi}, along with the temporal trends of \gls{covid} outcomes. The different types of \glspl{npi} are illustrated using specific icons that are described in a legend. Groups of interventions are noted with a star icon. The number of countries/territories and the number of \glspl{npi} shown in the chart can be adjusted in the settings. The user can select a specific line on the chart, referring to a territory, to focus on the \glspl{npi} imposed and lifted in that location.
The histogram below the chart shows the number of territories that have imposed the different types of \glspl{npi}, and can be selected to see, on the map, the territories that have imposed the selected subset of \glspl{npi}. The map illustrates the proportion of \gls{npi} categories (out of the 16 \gls{npi} categories in the dataset) implemented in each region using a gray-colored bar. Furthermore, when a region is selected, the gray-colored bar in every other region illustrates that region's \gls{npi} categories as a proportion of the \gls{npi} categories implemented in the selected region. The map is also used to visualize the geographic distribution of the selected \gls{covid} outcome using a choropleth, spikes, or bubbles. The user can interact with the territories on the map to focus on a location and view its data on the chart. Note that for some countries, such as the United States, the map can be zoomed to reveal finer-grained data for sub-regions such as states. \section*{Data Records}\label{sec:data-records} In addition to the key fields discussed earlier, the dataset contains a few additional attributes for each \gls{event}. A complete listing of all fields across \gls{event} and \gls{evidence} is shown in Table~\ref{tab:data-record}, along with an example for each field. Each version of the dataset consists of two CSV files, named \texttt{ibm-wntrac-yyyy-mm-dd-events.csv} and \texttt{ibm-wntrac-yyyy-mm-dd-evidences.csv}, corresponding to \glspl{event} and \glspl{evidence} respectively. A live version of the dataset is available for download in our GitHub repository at \url{https://github.com/IBM/wntrac/tree/master/data}. The dataset is regularly updated; at the time of submission, the dataset is updated as of October 13\textsuperscript{th}, 2020. Historical versions of the dataset are made available in the same GitHub repository.
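A consumer of the two released CSV files can join \glspl{evidence} to \glspl{event} with only the standard library. The sketch below uses inline sample rows, and the assumption that each evidence row carries the \texttt{even\_id} of the event it supports is ours for illustration, not a guarantee of the schema:

```python
import csv
import io
from collections import defaultdict

# Inline stand-ins for ibm-wntrac-yyyy-mm-dd-{events,evidences}.csv;
# column names follow the data-record table, rows are illustrative.
events_csv = io.StringIO(
    "even_id,type,country,date,restriction\n"
    "e1,school closure,IND,2020-03-13,1\n")
evidences_csv = io.StringIO(
    "sent_id,even_id,text,source_type\n"
    's1,e1,"On 13 March, the Punjab government declared holidays.",O\n')

# Index events by their identifier, then attach evidences to each event.
events = {row["even_id"]: row for row in csv.DictReader(events_csv)}
evidence_by_event = defaultdict(list)
for row in csv.DictReader(evidences_csv):
    evidence_by_event[row["even_id"]].append(row)
for eid, event in events.items():
    event["evidences"] = evidence_by_event.get(eid, [])
```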
Further, a static copy of the dataset containing \glspl{npi} recorded as of 8\textsuperscript{th} July 2020, which was used for the technical validation in this paper, has been archived in figshare~\cite{figshare}. In the next section, we include some high-level dataset statistics to provide a sense of the distribution of the data. \iftoggle{inplace}{ \begin{table*}[htp!] \scriptsize \centering \begin{tabular}[t]{p{0.10\textwidth}p{0.50\textwidth}p{0.30\textwidth}}\toprule \textbf{Field name} & \textbf{Description} & \textbf{Example} \\ \midrule even\_id & Globally unique identifier~\cite{wiki:uuid} for the particular \gls{npi} & 7db34fd1-d121-479f-9713-af7596a45aa1 \\ type & Type of the \gls{npi} & School closure \\ country & Country where the \gls{npi} was implemented. Name in ISO 3166-1 coding~\cite{wiki:iso-3166-1} & USA \\ state/province & State or province where the \gls{npi} was implemented. Name in ISO 3166-2 coding~\cite{wiki:iso-3166-2} & Vermont \\ date & Date when the \gls{npi} comes into effect. It is not the date of announcement & 2020-03-26 \\ epoch & Unix epoch time~\cite{wiki:epoch} corresponding to the date & 1589749200000.0 \\ value & Value associated with the \gls{npi}.
& Refer to Table~\ref{tab:taxonomy} for details \\ restriction & Ordinal value representing imposition ($1$) or lifting ($0$) of an \gls{npi} & 0 \\ sent\_id & Globally unique identifier~\cite{wiki:uuid} for the evidence sentence & d68ea644-24d5-4abf-93b0-dabc1cd3c2eb \\ doc\_url & Document URL & \url{https://en.wikipedia.org/wiki/COVID-19_pandemic_in_Vermont} \\ crawl\_id & Globally unique identifier~\cite{wiki:uuid} for the particular crawl in which this evidence sentence was fetched & 2020-05-06\_d0cba9ae-8fda-11ea-b351-069b8ffc8dc8 \\ crawl\_date & Date of the crawl that fetched this evidence sentence & 2020-05-06 \\ text & Evidence sentence in the document where the \gls{npi} is discussed & On March 26, Governor Scott ordered all schools in Vermont to remain closed for in-person classes for the rest of the academic year \\ citation\_url & URL cited for the evidence sentence in the source document & \iffalse \url{https://governor.vermont.gov/content/directive-5-continuity-learning-planning-pursuant-eo-01-20} \fi \\ anno\_provided\_url & Additional citation URL provided by the human volunteer who performed the validation & \iffalse \url{https://www.vpr.org/post/gov-closes-vermont-schools-rest-academic-year} \fi \\ fine\_grained\_location & Geographic locations mentioned in the evidence sentence, separated by the pipe character & Vermont \\ source\_type & Wikipedia citation source type indicating government ($G$) or other sources ($O$) & G \\ \bottomrule \end{tabular} \caption{Data record for the \acrlong{wntrac} dataset.} \label{tab:data-record} \end{table*} } \subsection*{Dataset Statistics} Figure~\ref{fig:stats-npi-distribution} shows the distribution of the \gls{npi} measures imposed worldwide.
\emph{Entertainment / cultural sector closure}, \emph{confinement}, and \emph{school closure} are the predominant \glspl{npi} taken by governments\footnote{The figures in the Dataset Statistics and Usage Notes sections were generated from the latest version of the dataset, dated 13\textsuperscript{th} October 2020, available at the time of manuscript submission. A copy of this version of the dataset is also available in figshare~\cite{figshare}.}. Figure~\ref{fig:stats-region-count-by-npi} summarizes the total number of regions that implemented \glspl{npi} of each type. As shown in the graph, \emph{confinement}, \emph{school closure}, and \emph{freedom of movement} are the most common \glspl{npi} imposed worldwide, as expected from Figure~\ref{fig:stats-npi-distribution}. Figure~\ref{fig:stats-npi-count-by-region} shows the breakdown of the \glspl{npi} within each region, for the top twenty regions that have implemented the highest number of \gls{npi} measures. \iftoggle{inplace}{ \begin{figure}[htp!] \centering \includegraphics[width=0.5\linewidth]{stats-npi-distribution} \caption{Distribution of \glspl{npi} in the \acrlong{wntrac} dataset.} \label{fig:stats-npi-distribution} \end{figure} \begin{figure}[htp!] \centering \subcaptionbox{}{\includegraphics[height=14\baselineskip]{stats-region-count-by-npi}} \quad \subcaptionbox{}{\includegraphics[height=14\baselineskip]{stats-region-count-by-npi-us}} \quad \caption{Number of regions implementing each \gls{npi} globally (left) and within the US (right).} \label{fig:stats-region-count-by-npi} \end{figure} \begin{figure}[htp!]
\centering \subcaptionbox{}{\includegraphics[height=18\baselineskip]{stats-npi-count-by-region}} \quad \subcaptionbox{}{\includegraphics[height=18\baselineskip]{stats-npi-count-by-region-us}} \quad \caption{Distribution of \gls{npi} measures implemented in different geographies globally (left) and within the US (right).} \label{fig:stats-npi-count-by-region} \end{figure} } \section*{Technical Validation}\label{sec:technical-validation} The validation team consisted of a mix of experts, who participated in the design of the taxonomy and/or the pipeline, and IBM volunteers, who completed a brief training session about the annotation schema and tool. Validation was done in two stages. In the first phase, because the \gls{wntrac} tool was still being developed, we used simple CSV files to distribute the data for validation. Each annotator was given a complete \gls{document}, corresponding to a Wikipedia article for a particular region, retrieved as of June 6, 2020, and pre-annotated with the output of the pipeline. Each sentence was displayed on a separate line, with sentences corresponding to candidate \glspl{event} highlighted with a different background color. The attributes extracted by the pipeline were listed next to each sentence. Annotators were asked to verify and correct each of these attributes. If a sentence did not discuss any of the valid event types, they were asked to mark the \gls{type} as \emph{discarded}. If a sentence was incorrectly discarded by the pipeline, they were asked to correct the \gls{type} and fill in the attributes when possible. This was, however, not uniformly enforced. In the second phase, we made the \gls{tool} tool available to the annotators. The tool randomly assigns a single \gls{document} to be validated to each annotator. Each \gls{document} consists of the incremental changes to the underlying Wikipedia article since the last validation of the \gls{document}.
The validation process for the second phase is similar to the first phase, except that only candidate \glspl{event}, as determined by the pipeline, were shown to the annotators. This time-saving move was based on the observation that, during the first phase, when all sentences were presented, human annotators generally agreed with the automated pipeline on discarded sentences. The \gls{nlp} model used a recall-oriented threshold and only discarded sentences with low scores on all valid \gls{npi} types. \iftoggle{inplace}{ \begin{table*}[!htb] \centering \begin{tabular}{lcccccc} \toprule \multirow{2}{*}{} & \multicolumn{3}{c}{\textbf{All \gls{npi} event types}} & \multicolumn{3}{c}{\textbf{Top 5 \gls{npi} event types}} \\ \cmidrule{2-7} & \textbf{A vs E\textsubscript{1}} & \textbf{A vs E\textsubscript{2}} & \textbf{E\textsubscript{1} vs E\textsubscript{2}} & \textbf{A vs E\textsubscript{1}} & \textbf{A vs E\textsubscript{2}} & \textbf{E\textsubscript{1} vs E\textsubscript{2}} \\ \midrule \textbf{Type} & 0.63 & 0.69 & 0.80 & 0.81 & 0.77 & 0.85 \\ \textbf{Type + Value} & 0.41 & 0.42 & 0.69 & 0.51 & 0.47 & 0.70 \\ \textbf{Date} & 0.50 & 0.61 & 0.73 & 0.60 & 0.69 & 0.76 \\ \textbf{Region} & 0.99 & 1.00 & 0.99 & 0.98 & 1.00 & 0.98 \\ \textbf{Restriction} & 0.36 & 0.43 & 0.74 & 0.74 & 0.58 & 0.69 \\ \textbf{Type + Date} & 0.44 & 0.53 & 0.70 & 0.51 & 0.59 & 0.72 \\ \textbf{Type + Value + Date} & 0.31 & 0.34 & 0.62 & 0.35 & 0.36 & 0.59 \\ \textbf{Type + Value + Date + Region} & 0.30 & 0.33 & 0.62 & 0.35 & 0.36 & 0.59 \\ \textbf{Type + Value + Date + Region + Restriction} & 0.26 & 0.29 & 0.61 & 0.34 & 0.35 & 0.59 \\ \bottomrule \end{tabular} \caption{Inter-annotator agreement between average volunteers (A) and two groups of experienced volunteers (E\textsubscript{1} and E\textsubscript{2}).
Region includes both country and state/territories, as applicable.} \label{tab:iaa} \end{table*} } To determine the quality of the dataset post validation, \gls{iaa} was calculated on a randomly sampled subset (2\%) of the full set that was validated by IBM volunteers. Each instance in the subset was further double-annotated by two experts (randomly selected from a pool of six experts) independently, resulting in three sets of annotations per instance. The \gls{iaa} was evaluated on all five fields of the 5-tuple that uniquely defines an \gls{event}. Furthermore, the evaluation was performed at the field level for all fields except the \gls{value}, which is technically a sub-field of \gls{type} and does not make sense to analyze on its own. The \gls{iaa} results are shown in Table~\ref{tab:iaa}. Note that the \gls{iaa} between experts was consistently high in all categories, indicating that the annotation schema is not ambiguous and most sentences can be consistently assigned to one of the \gls{npi} \glspl{type} defined in the taxonomy. The \gls{iaa} between the volunteers and the experts was also good (0.58) at the \gls{npi} \gls{type} level, and the agreement is high (0.81) for the five most frequent \gls{npi} types. We plan to expand the taxonomy over time to cover more \gls{npi} types. We also plan to improve the accuracy of the pipeline by using end-to-end entity linking techniques for entity normalization and state-of-the-art methods for better temporal alignment. We plan to expand to other data sources to improve coverage. \section*{Usage Notes}\label{sec:usage-notes} One of the primary objectives in creating the \gls{wntrac} dataset was to understand what types of \glspl{npi} are being implemented worldwide and to facilitate analysis of the efficacy of the different types of \glspl{npi}.
Specifically, the dataset supports a variety of studies, such as correlation analysis to understand the associations between \glspl{npi} and outcomes, causal inference between \glspl{npi} and specific outcome variables, as well as impact analysis to understand the impact on socio-economic factors. Furthermore, this dataset offers an opportunity to perform local, contextualized what-if scenarios and optimal intervention planning by incorporating \glspl{npi} into epidemiological models. Such capabilities are critical for targeted decision-making to control the spread of the disease and minimize its impact on society. There are a number of questions, ranging in complexity, that the dataset can be used to answer. For example, consider the question: \emph{How many \glspl{npi} were imposed and lifted globally as the pandemic continued?} Figure~\ref{fig:number-imposed-lifted} sums the number of \glspl{npi} imposed and lifted in all geographies per month. As expected, the vast majority of \glspl{npi} were imposed during the first outbreak of \gls{covid} in March, and lifted mainly in April and May. This figure also reveals the imbalance between imposed and lifted \glspl{npi} that exists in the data. For example, while more than three thousand \glspl{npi} were imposed in March, fewer than five hundred were lifted between April and September. The imbalance can be the outcome of many factors, such as how and when the lifting of \glspl{npi} is announced over time. Such factors should be taken into account when performing analyses using this dataset. \iftoggle{inplace}{ \begin{figure}[htp!]
\centering \subcaptionbox{Imposed\label{fig:imposed_count_by_month}}{\includegraphics[height=12\baselineskip,width=.45\linewidth]{imposed_count_by_month}} \quad \subcaptionbox{Lifted\label{fig:lifted_count_by_month}}{\includegraphics[height=12\baselineskip,width=.45\linewidth]{lifted_count_by_month}} \quad \caption{Number of imposed and lifted \gls{npi} measures per month.} \label{fig:number-imposed-lifted} \end{figure} } {} A second example use of the dataset is to explore which \glspl{npi} were imposed by different countries early in the pandemic to contain the spread of \gls{covid}. One approach is to break the set of \glspl{npi} into two sets: travel-related and community-related. Travel-related \glspl{npi} include \emph{domestic flight restrictions}, \emph{international flight restrictions}, \emph{freedom of movement (nationality dependent)}, and \emph{introduction of travel quarantine policies}. Figure~\ref{fig:travel-related} visualizes the elapsed time between the implementation of a travel-related \gls{npi} and the recording of at least 50 cases, and the time to the first reported death. The visualization shows 9 selected regions, each of which had at least one travel-related \gls{npi} among the first set of \glspl{npi} imposed in the country, and was generated by combining the \gls{wntrac} dataset with the \gls{covid} outcomes dataset from the World Health Organization (WHO)~\cite{who}. For each region, the blue bar illustrates the number of days before 50 cumulative cases, and the red point shows the number of days before the first death. From the graph, it can be observed that Singapore first imposed a travel-related \gls{npi} more than 50 days before its first death, showing an earlier response than Brazil and New York State, where the first travel-related \gls{npi} was imposed about 10 days after the first death.
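The elapsed-time computation behind this kind of figure reduces to date arithmetic between the earliest \gls{npi} in a region and the outcome milestone. The sketch below is ours, and the dates are illustrative placeholders consistent with the more-than-50-days observation, not values read from the dataset:

```python
from datetime import date

def days_before(npi_dates, outcome_date):
    """Days from the earliest NPI in a region to the outcome milestone;
    a negative result means the NPI came after the milestone."""
    return (outcome_date - min(npi_dates)).days

# Illustrative region: first travel-related NPI vs. first reported death.
npi_dates = [date(2020, 1, 29), date(2020, 2, 10)]
first_death = date(2020, 3, 21)
print(days_before(npi_dates, first_death))  # → 52
```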
Similarly, Figure~\ref{fig:community-related} visualizes the elapsed time between the implementation of community-related \glspl{npi} and the recording of at least 50 cases and at least one death for 9 selected regions. The community-related \glspl{npi} include \emph{entertainment/cultural sector closure}, \emph{confinement}, \emph{school closure}, \emph{mass gatherings}, \emph{mask wearing}, \emph{public services closure}, \emph{public transportation}, \emph{work restrictions}, and \emph{state of emergency}. It can be noted that at least one community-related \gls{npi} was imposed for each of the selected regions prior to their first recorded death due to COVID-19. \iftoggle{inplace}{ \begin{figure}[htp!] \centering \subcaptionbox{Travel-related \glspl{npi}\label{fig:travel-related}}{\includegraphics[height=12\baselineskip,width=.45\linewidth]{travel-related}} \quad \subcaptionbox{Community-related \glspl{npi}\label{fig:community-related}}{\includegraphics[height=12\baselineskip,width=.45\linewidth]{community-related}} \quad \caption{Elapsed time (in days) between the introduction of \glspl{npi} and recording of first death (\textcolor{red}{red}) or 50 cases (\textcolor{blue}{blue}) in countries that implemented travel-related vs community-related \glspl{npi} first.} \label{fig:travel-community-related} \end{figure} } \\ As a third example, we demonstrate how the \gls{wntrac} dataset can be used to generate an index, a summary statistic in $[0,1]$ that represents the \glspl{npi} imposed and, if available, the adherence to them. This index can be used to study the relationship between \glspl{npi} and \gls{covid} outcomes over time and to compare response strategies across jurisdictions. Figure~\ref{fig:npi-trends} illustrates this using data from representative states in the United States (Florida, Georgia, New York, and Texas). In the figure, the bar graph shows the trend for the exponentially weighted moving average of new cases per 100,000 population.
The red continuous line is the proportion of the \gls{npi} (out of thirteen \gls{npi} \glspl{type} in the \gls{wntrac} dataset) that a region has imposed at a given time. The blue continuous line is the \gls{wntrac} \gls{npi-index}, a composite index that captures both the stringency levels of the \glspl{npi} and community mobility data as a proxy measure of adherence to \glspl{npi} strategies. The \gls{wntrac} \gls{npi-index}, denoted $\eta(t)$, is presented in Eq.~\ref{eq:1}, and the code for the \gls{wntrac} \gls{npi-index} is available in the repository. \begin{equation} \label{eq:1} \eta(t) = \omega_{0}SI(t) + \omega_{1}\frac{e^{A(t)}}{1 + e^{A(t)}}, \end{equation} $\omega_{0}, \omega_{1} > 0$ are weights applied to each term and $\omega_{0} + \omega_{1} = 1$. Specifically, the first term, $SI$, is derived from mapping and scoring the \gls{wntrac} \gls{npi} similarly to the approach presented in the \gls{oxcgrt} stringency index~\cite{hale2020oxford}. The second term represents adherence at a specific point in time, $A(t)$, by using mobility data as a proxy. Specifically, we define $A(t)$ in Eq.~\ref{eq:2} as a function of the ``anticipated mobility'', $m_{ant}$, and the ``observed mobility,'' $m_{obs}$. The anticipated mobility at a specific point in time is the mobility score that would potentially be associated with the \glspl{npi} at that time. The observed mobility is the mobility value observed in that region at a specific time point and ideally should be close to the value of anticipated mobility. In our work, we assume a negative relation between stringency and mobility, and anticipated mobility is derived from this linear relationship with noise. \begin{equation} \label{eq:2} A(t) = \frac{m_{ant} - m_{obs}}{m_{ant}}. \end{equation} As illustrated, the \gls{wntrac} \gls{npi} metrics can be compared to existing metrics such as the \gls{oxcgrt} stringency index~\cite{hale2020oxford}.
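A minimal implementation of Eqs.~\ref{eq:1} and \ref{eq:2} is sketched below, with illustratively equal weights $\omega_0 = \omega_1 = 0.5$; the code in the repository remains the authoritative implementation:

```python
import numpy as np

def wntrac_npi_index(si, m_ant, m_obs, w0=0.5):
    """Sketch of the WNTRAC NPI index, Eqs. (1)-(2).

    si    : stringency term SI(t) in [0, 1]
    m_ant : anticipated mobility under the imposed NPIs
    m_obs : observed mobility at the same time point
    w0    : weight on the stringency term; w1 = 1 - w0, so that
            w0 + w1 = 1 as required by Eq. (1).  Equal weights here
            are an illustrative assumption, not the paper's choice.
    """
    w1 = 1.0 - w0
    a = (m_ant - m_obs) / m_ant                # Eq. (2): adherence term A(t)
    adherence = np.exp(a) / (1.0 + np.exp(a))  # logistic squashing in Eq. (1)
    return w0 * si + w1 * adherence

# Perfect adherence (m_obs == m_ant) gives A(t) = 0, so the adherence
# term contributes exactly 0.5: 0.5*0.8 + 0.5*0.5 = 0.65.
print(wntrac_npi_index(si=0.8, m_ant=40.0, m_obs=40.0))
```

Lower-than-anticipated observed mobility yields $A(t) > 0$ and pushes the index above this baseline, consistent with stronger adherence.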
Of note, the detailed interpretation of the relationships illustrated in this example is subject to limitations such as missing data, and will be pursued as part of our future work. \iftoggle{inplace}{ \begin{figure}[htp!] \captionsetup[subfigure]{labelformat=empty} \centering \subcaptionbox{}{\includegraphics[height=10\baselineskip,width=.4\linewidth]{npi_index_usa-fl}}\quad \subcaptionbox{}{\includegraphics[height=10\baselineskip,width=.4\linewidth]{npi_index_usa-ga}} \quad \subcaptionbox{}{\includegraphics[height=10\baselineskip,width=.4\linewidth]{npi_index_usa-ny}} \quad \subcaptionbox{}{\includegraphics[height=10\baselineskip,width=.4\linewidth]{npi_index_usa-tx}} \quad \caption{Trends in \gls{covid} cases per 100,000 population and the \gls{npi}-based indices in representative US states.} \label{fig:npi-trends} \end{figure} } Finally, another important application of the \gls{wntrac} dataset is to support What-if analysis and decision-making for optimal intervention planning. This is especially important to provide critical, time-sensitive decision support to various leaders and decision-making teams, such as \gls{covid} task force teams, as they determine which \glspl{npi} to impose or lift over time. Efficiency in this decision-making process is important, as the space of all potential combinations and variations of \glspl{npi} is large and complex. The options for a particular region have varying degrees of impact on outcomes for that region. Tools~\cite{notnets} that enable what-if analysis and intervention planning at both national and sub-national levels, and that incorporate the \gls{wntrac} dataset, can be leveraged to meet this need. For decision-makers, these tools enable easy navigation through the complex intervention space in a timely manner to generate optimal and context-relevant \gls{covid} intervention programs.
A key requirement for such tools is epidemiological models that are calibrated in such a way that the resulting forecasts can be trusted as accurate projections. To calibrate these models, it is critical to consider the \glspl{npi} that have been imposed so that the drivers of disease spread can be contextualized for a region. By incorporating \glspl{npi} into the models, improved projections of disease outcomes can be generated, yielding more accurate scenarios for decision-makers to explore. In addition to the above examples, the \gls{wntrac} dataset can be used to support other objectives, including estimating the relationships between \glspl{npi} and \begin{itemize} [noitemsep, topsep=0pt] \item consumer behavior by, for example, correlating retail data with \glspl{npi}. \item environmental changes such as pollution levels. \item actual compliance by the population. Naturally, not all the interventions recorded in the dataset are an accurate representation of reality, as some of the interventions capture a governmental request that might not be followed by the entire population. Thus, it might be useful to integrate the \gls{wntrac} dataset with other publicly available data sources that can provide information regarding the level of compliance with an intervention, such as mobility information~\cite{apple-mobility, google-mobility}, as exemplified by the \gls{npi-index} above, and social media. \end{itemize} Lastly, another interesting use case is to estimate the economic impact of \glspl{npi} by, for example, relating unemployment rates and jurisdictional debt with \glspl{npi}. Estimation of the effect of \glspl{npi} on non-\gls{covid} health problems, such as late cancer detection due to missed screening tests, will also be useful.
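As a concrete sketch of such an integration, \gls{npi} events can be joined with an external daily series on region and date; here a hypothetical mobility table is used, and all column names and values are illustrative:

```python
import pandas as pd

# Hypothetical NPI events and an external daily mobility series; the
# column names and values are illustrative only, not the real schemas.
npis = pd.DataFrame({
    "region": ["US-NY", "US-NY", "US-TX"],
    "date": pd.to_datetime(["2020-03-20", "2020-05-15", "2020-04-01"]),
    "npi_type": ["confinement", "mask wearing", "school closure"],
})
mobility = pd.DataFrame({
    "region": ["US-NY", "US-NY", "US-TX"],
    "date": pd.to_datetime(["2020-03-20", "2020-05-15", "2020-04-01"]),
    "mobility_change_pct": [-45.0, -20.0, -30.0],
})

# Align each NPI event with the mobility observed on the same day in
# the same region, as a starting point for compliance analysis.
joined = npis.merge(mobility, on=["region", "date"], how="left")
print(joined[["region", "npi_type", "mobility_change_pct"]])
```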
\iffalse Procedurally, this involves a series of preprocessing steps that include loading the \gls{wntrac} dataset, selecting the subset of 13 \glspl{npi} categories used for subsequent analyses, identifying country-level \gls{npi} events for each of the selected \gls{npi} categories, and identifying the date when each of the selected \gls{npi} categories was first imposed in each country. Subsequently, we combined the preprocessed \gls{npi} data with \gls{covid} outcomes data from the World Health Organization (WHO) \cite{who}. Note that the WHO dataset contains daily counts of new cases, cumulative cases, new deaths, and cumulative deaths reported per country/territory. In our example analysis, for each country/territory in the preprocessed NPI data, we used WHO data to identify the dates when the country/territory first reported 50 cumulative cases and first deaths, and excluded countries/territories with fewer than 50 cases, no deaths, or no data. Consequently, the final combined dataset used for our example analyses was in a longitudinal format where each row represents daily counts of outcomes per territory from the WHO, along with binary indicator variables for each \gls{npi} derived from \gls{wntrac} that denote whether or not the \gls{npi} was imposed. \fi \section*{Code Availability}\label{sec:code-availability} The source code for the \gls{wntrac} automated \gls{npi} curation system, including the data processing pipeline, \gls{tool} tool, and \gls{npi} data browser, is available in a public GitHub repository at~\url{https://github.com/IBM/wntrac/tree/master/code}, alongside the up-to-date version of the dataset at~\url{https://github.com/IBM/wntrac/tree/master/data}. Please refer to the README file in the repository for further instructions on using the code.
\section{Introduction} \label{sec:intro} Solar flares are abrupt electromagnetic explosions occurring in magnetically active regions on the solar surface. Intense solar flares are frequently followed by coronal mass ejections and solar energetic particles, which may disturb or disable satellites, terrestrial communication systems, and power grids. Predicting such strong flares from solar observations is therefore of particular significance and has been one of the primary tasks in space weather research. Flare prediction can be posed as a classification problem, asking for a binary decision on whether the sun will produce a flare above some level in a future time window. Since strong solar flares mostly occur in active regions, it is common to first produce predictions for each active region on the solar disk. In this paper, we consider a ``strong-vs-quiet'' flare prediction problem, distinguishing active regions that will produce an M- or X-class flare in the future 24 hours, from those that stay flare quiescent within 24 hours before---and after---the forecast issuance time. Over the past decade, a great number of flare prediction studies have been conducted on a data product named Space-Weather HMI Active Region Patches \citep[SHARPs,][]{bobra2014helioseismic}. The SHARP database is derived from full-disk observations of the Helioseismic and Magnetic Imager \citep[HMI,][]{schou12} aboard the {\it Solar Dynamics Observatory} (SDO), containing maps and summary parameters of automatically tracked active regions from May 2010 to the present day, covering much of Solar Cycle 24. Despite the fact that SHARP is one of the most recent and highest-quality datasets of its kind, it only contains a limited number of strong events, as Solar Cycle 24 is the weakest solar cycle in a century.
Recently, a new data product, Space-Weather MDI Active Region Patches \citep[SMARPs,][]{bobra2021smarps}, was developed as an effort to extend the SHARP database backward to include active region observations in Solar Cycle 23, a much stronger solar cycle with significantly more flaring events. In fact, Solar Cycle 23 is the longest solar cycle (147 months) in the past 150 years\footnote{Source: \url{https://ntrs.nasa.gov/api/citations/20130013068/downloads/20130013068.pdf}}. The SMARP database was derived from the Michelson Doppler Imager \citep[MDI,][]{scherrer95} aboard the {\it Solar and Heliospheric Observatory} (SoHO), which observed the sun from 1996 to 2010. Compared to its successor HMI, MDI's measurement of the solar surface magnetic field is restricted to the line-of-sight component, with lower spatial resolution, lower signal-to-noise ratio, and lower cadence. As such, SMARP does not contain as much information as SHARP, and its data quality is not as high. Nonetheless, SMARP's coverage of a stronger solar cycle and its partial compatibility with SHARP make it a valuable data product to use with SHARP, especially for statistical studies in which a large sample size or a long time span is desired. Many machine learning methods for flare prediction have been proposed in recent years. They roughly fall into three categories in terms of how flare-pertinent features are extracted from data. The first category uses \emph{explicit} parameterization of observational data that are considered relevant to flare production, e.g., SHARP parameters that characterize the photospheric magnetic field.
Much of the effort in data-driven flare forecasting has been made in this category, exploring a wide range of machine learning algorithms including discriminant analysis \citep{leka2003photospheric}, regularized linear regression \citep{jonas2018flare}, support vector machines \citep{yuan2010automated, bobra2015solar, nishizuka2017solar, florios2018forecasting}, k-nearest neighbors \citep{nishizuka2017solar}, extremely random trees \citep{nishizuka2017solar}, random forests \citep{liu2017predicting, florios2018forecasting}, multi-layer perceptrons (MLP) \citep{florios2018forecasting}, residual networks \citep{nishizuka2018deep, nishizuka2020reliable}, long short-term memory (LSTM) networks \citep{chen2019identifying, liu2019predicting}, etc. The second category learns features from images using fixed transformations, e.g., random filters \citep{jonas2018flare}, Gabor filters \citep{jonas2018flare}, wavelet transforms \citep{hada2016deep}. The third category, only popularized more recently, \emph{implicitly} learns flare indicative signatures directly from active region magnetic field maps. This category features mainly convolutional neural networks (CNNs) \citep{huang2018deep, li2020predicting}. Note that the three categories are not mutually exclusive. For example, methods in the second category typically also depend on explicitly constructed features \citep[e.g.][]{jonas2018flare} as the information within transformation coefficients is often limited. In this study, two representative deep learning methods, LSTM and CNN, are considered. LSTM uses time series of active region summary parameters derived from line-of-sight magnetograms, whereas CNN uses static point-in-time magnetograms. With so many machine learning algorithms developed for flare forecasting, one might expect improved performance by combining different methods.
This expectation seems even more reasonable if component methods in the combination use different data to provide complementary information. This is the idea behind ensemble learning, a learning paradigm that capitalizes on different models to achieve better performance than is achievable by any of the models alone. During the past few decades, the rapidly-evolving field of ensemble learning has achieved great success in many areas, which has attracted the attention of the space weather community \citep{murray2018importance}. In this paper, using CNN and LSTM models independently trained on active region magnetograms and parameter sequences, respectively, we consider a particular type of ensemble method called stacking \citep{wolpert1992stacked}. Ideas similar to our stacking ensemble method have previously appeared in solar flare forecasting, most notably \citet{guerra2015ensemble,guerra2020ensemble}. In their work, full-disk probabilistic forecasts are linearly combined with weights selected by maximizing a potentially non-convex performance metric (e.g., the True Skill Score, the Heidke Skill Score). In contrast, our stacking ensemble linearly combines two state-of-the-art machine learning classifiers (LSTM and CNN). To select the combination weights, we consider a convex cross-entropy loss function in addition to other performance metrics. Deep learning models are often considered to be ``black-box'' due to their lack of interpretability. Recently, the machine learning community has developed empirical tools that aim to better interpret the decisions of the deep neural network (DNN). Among these tools is the class of attribution methods \citep[e.g.][]{springenberg2015striving,selvaraju2017grad,shrikumar2017learning,sundararajan2017axiomatic} that attribute a score to gauge the contribution of each input feature for a given input sample.
Attribution methods such as the occlusion method and Grad-CAM have been previously used to interpret CNNs in flare prediction applications \citep{bhattacharjee2020supervised,yi2021visual}. In this work, we evaluate additional attribution methods (Deconvolution, Guided Backpropagation, DeepLIFT, and Integrated Gradients) on the interpretation of CNNs trained to predict flares. In particular, we show that Integrated Gradient attribution maps, which have the same resolution as the input image, lead to insights on the important magnetic features that inform the CNN's decisions on flare prediction. The contributions of this paper are as follows: \begin{enumerate} \item We demonstrate the value of combining SMARP and SHARP to improve flare prediction performance. \item We compare the flare prediction performance of LSTM and CNN on an equal footing, i.e., on the same temporally evolving active region dataset. \item We demonstrate that stacking the LSTM and CNN can significantly improve flare class prediction in certain settings. \item We provide visual explanations of the CNN predictor using visual attribution methods including Deconvolution, Guided Backpropagation, Integrated Gradients, DeepLIFT, and Grad-CAM. We demonstrate the potential of these methods in identifying flare indicative signatures, interpreting the CNN's decisions, revealing model limitations, and suggesting model modifications. \end{enumerate} The rest of the paper is organized as follows. Section \ref{sec:data} introduces the data sources and how they are processed into machine-learning-ready datasets. Section \ref{sec:methodology} describes the flare prediction methods, stacking ensemble, and visual attribution methods. Section \ref{sec:results} presents and compares the flare prediction performance on the datasets. Section \ref{sec:discussion} concludes the paper by presenting the lessons learned from the experiments.
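As a minimal preview of the stacking step introduced above, the single combination weight can be selected by grid search over the convex cross-entropy loss. The probabilities below are toy values standing in for validation-set outputs of the trained LSTM and CNN:

```python
import numpy as np

def cross_entropy(w, p_lstm, p_cnn, y, eps=1e-12):
    """Cross-entropy of the convex combination w*p_lstm + (1-w)*p_cnn."""
    p = w * p_lstm + (1.0 - w) * p_cnn
    p = np.clip(p, eps, 1.0 - eps)  # guard the logs against 0 and 1
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Toy validation-set probabilities from the two base models and labels.
p_lstm = np.array([0.9, 0.2, 0.7, 0.4])
p_cnn  = np.array([0.6, 0.1, 0.8, 0.3])
y      = np.array([1,   0,   1,   0])

# The loss is convex in the single weight w, so a dense grid search
# over [0, 1] suffices to locate the minimizer.
grid = np.linspace(0.0, 1.0, 101)
losses = [cross_entropy(w, p_lstm, p_cnn, y) for w in grid]
w_best = grid[int(np.argmin(losses))]
print(w_best)
```

By construction, the selected ensemble is never worse on the validation loss than either base model alone, since $w=0$ and $w=1$ are included in the grid.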
\section{Data} \label{sec:data} \subsection{Data sources} Observational data of active regions of Solar Cycles 23 and 24 are extracted from the SMARP and the SHARP data product, respectively. Both SMARP and SHARP contain automatically-tracked active region cutouts of full-disk line-of-sight magnetograms, referred to as Tracked Active Region Patches (TARPs) and HMI Active Region Patches (HARPs), respectively. They also contain summary parameters that characterize physical properties of active regions. We consider definitive SMARP and SHARP records in Cylindrical Equal-Area (CEA) coordinates hosted at the Joint Science Operations Center\footnote{See \url{http://jsoc.stanford.edu}.}. We query SMARP records from 1996 April 23 to 2010 October 28 and SHARP records from 2010 May 1 to 2020 December 1, both at a cadence of 96 minutes. Only good-quality SMARP and SHARP records within $\pm 70^\circ$ of the central meridian matching at least one NOAA active region are considered. For active region summary parameters, we use four metadata keywords that are common to SMARP and SHARP, i.e., \texttt{USFLUXL}, \texttt{MEANGBL}, \texttt{R\_VALUE}, and \texttt{AREA}. Definitions of these summary parameters are listed in Table \ref{tab:keywords}. For images, we use photospheric line-of-sight magnetic field maps, or magnetograms, from the two data products. \begin{table} \centering \caption{Active region summary parameters used in this study.
The line-of-sight magnetic field is denoted by $B_{L}$.} \begin{tabular}{p{0.07\textwidth}p{0.24\textwidth}p{0.28\textwidth}p{0.2\textwidth}p{0.1\textwidth}} \toprule Keyword & Description & Pixels & Formula & Unit \\ \midrule \texttt{USFLUXL} & Total line-of-sight unsigned flux & Pixels in the TARP/HARP region & $\sum|B_{L}|dA$ & Maxwell\\ \texttt{MEANGBL} & Mean gradient of the line-of-sight field & Pixels in the TARP/HARP region & $\frac{1}{N}\sum \sqrt{\left(\frac{\partial B_{L}}{\partial x}\right)^2 + \left(\frac{\partial B_{L}}{\partial y}\right)^2}$ & Gauss/pixel \\ \texttt{R\_VALUE} & $R$, or a measure of the unsigned flux near polarity inversion lines \citep{schrijver2007r} & Pixels near polarity inversion lines & $\log\left(\sum|B_{L}|dA\right)$ & Maxwell \\ \texttt{AREA} & De-projected area of patch on sphere in micro-hemisphere & Pixels in the TARP/HARP region & $\sum dA$ & mH \\ \bottomrule \end{tabular} \label{tab:keywords} \end{table} Observational data samples are labeled using the GOES catalog of X-ray solar flare events. Based on the peak magnitude of 1--8 \AA{} soft X-ray flux measured by \emph{Geostationary Operational Environmental Satellites} (GOES), solar flare events are classified into five increasingly intense classes: A, B, C, M, and X, sometimes appended with a number that indicates a finer scale classification. M- and X- class flares are referred to as strong flares throughout the paper. We only consider GOES solar flare events with at least one associated NOAA active region that can be used to cross-reference the \texttt{NOAA\_ARS} keyword in SHARP (or SMARP) databases to associate the flare with a HARP (or TARP). The GOES event records are queried using the Sunpy package \citep{sunpy_community2020} from the beginning of 1996 to the end of 2020, covering the period of the SMARP and SHARP observations used in this paper. 
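The GOES class strings can be mapped to peak 1--8 \AA{} soft X-ray flux and to the strong/weak distinction used throughout this paper; a small sketch (the helper names are ours):

```python
# Peak 1-8 Angstrom soft X-ray flux (W/m^2) at the base of each GOES class.
GOES_BASE_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def goes_peak_flux(goes_class):
    """Convert a GOES class string such as 'M1.0' or 'C9.9' to peak flux."""
    letter = goes_class[0]
    multiplier = float(goes_class[1:] or 1.0)  # bare letter means x1.0
    return GOES_BASE_FLUX[letter] * multiplier

def is_strong(goes_class):
    """Strong flares are M- and X-class, i.e. peak flux >= 1e-5 W/m^2."""
    return goes_peak_flux(goes_class) >= GOES_BASE_FLUX["M"]

print(goes_peak_flux("M1.0"))  # 1e-05
print(is_strong("C9.9"))       # False: just below the M1.0 threshold
```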
Of note, although the GOES catalog is widely considered the ``go-to'' record database in solar flare forecasting, it is not error-free. There are cases in which flares, even the strong ones, are not assigned to any active region \citep{leka2019comparison}. Furthermore, small-sized flares could be buried under the background radiation, a phenomenon frequently observed for A- and B-class flares, especially after a strong flare occurs. Moreover, there are 61 event records annotated with an unknown GOES event class, most of them in the year 1996. These 61 event records are excluded in this study. \subsection{Data fusion} The challenge of combining the disparate SHARP and SMARP data is mitigated by the fact that there is a short overlapping time period over which they were jointly collected (May 1 to October 28 of 2010). We used this common time period to evaluate the dissimilarities between the SHARP/SMARP data and to develop methods for data alignment. As explained below, our analysis of the data over the common time period led us to adopt a simple method for fusing the SHARP and SMARP data: (1) we downsampled the SHARP magnetograms to match the resolution of the SMARP magnetograms; (2) we separately transformed the SHARP and SMARP summary parameters by Z-score (translation-scale) standardization. We first discuss the fusion of the SHARP and SMARP magnetograms. SHARP magnetograms inherit the HMI resolution of about $0.5''$ per pixel, whereas SMARP magnetograms inherit the MDI resolution of about $2''$ per pixel. To compare HMI and MDI magnetograms, \citet{liu2012comparison} reduced HMI spatial resolution to match MDI's by convolving with a two-dimensional Gaussian function with an FWHM of 4.7 HMI pixels, truncated at 15 HMI pixels. Then, the HMI pixels enclosed in each MDI pixel are averaged to generate an MDI proxy pixel. Subsequently, a pixel value transformation $\textrm{MDI} = -0.18 + 1.40\times\textrm{HMI}$ is applied.
In this work, we adopted a simpler approach by subsampling SHARP magnetograms 4 times in both dimensions to match the resolution of SMARP magnetograms. Unlike \citet{liu2012comparison}, we approximated the pixel value transformation with an identity map, as pixel value distributions of SHARP and SMARP magnetograms in the overlap period are very similar (Figure \ref{fig:qq}). Our approximation agrees well with \citet{riley2014multi}, who found a multiplicative conversion factor of $0.99\pm 0.13$ between MDI and HMI using histogram equating. The discrepancy between our multiplicative conversion factor (1.099) and that of \citet{liu2012comparison} (1.40) may be because they considered full-disk magnetograms whereas we focus on active regions. In addition, they considered only 12 pairs in June--August 2010, whereas we considered every possible matching in May--October 2010. Moreover, they performed a pixel-to-pixel match of full-disk magnetograms, whereas we use histogram-based methods on active regions because more precise models for aligning coordinates between CEA-projected SHARP and SMARP are not yet available \citep{bobra2021smarps}. Furthermore, they considered pixels within 0.866 solar radius of the Sun's center, whereas we considered pixels within $\pm 70 ^\circ$ from the central meridian. \begin{figure} \centering \gridline{\fig{figures/qq.pdf}{0.36\textwidth}{}} \vspace{-2em} \caption{Q-Q (quantile-quantile) plot of 50 matched magnetogram pairs of HARP and TARP from May 1 to October 28 in 2010. Active regions with pixels outside of $\pm 70 ^\circ$ from the central meridian are not used. For each pair, the co-temporal magnetograms are sampled at a rate of every 8 hours. The pixels within the intersection of the bounding boxes of active region pairs are used. Lighter colors indicate higher latitudes. \label{fig:qq}} \end{figure} We next discuss the fusion of SHARP and SMARP summary parameters.
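Both fusion steps can be sketched in a few lines of numpy; the strided selection below is one plausible reading of ``subsampling 4 times in both dimensions'' (4$\times$4 block averaging would be another), and the parameter values are illustrative:

```python
import numpy as np

# Toy stand-in for a SHARP CEA line-of-sight magnetogram (Gauss).
sharp_map = np.arange(16 * 16, dtype=float).reshape(16, 16)

# (1) Reduce the ~0.5"/pixel HMI resolution toward the ~2"/pixel MDI
# resolution by keeping every 4th pixel in both dimensions; pixel
# values are left unchanged (identity map between the two scales).
smarp_proxy = sharp_map[::4, ::4]

# (2) Standardize a summary parameter separately within each product.
def zscore(x):
    """Z-score (translation-scale) standardization."""
    return (x - x.mean()) / x.std()

usfluxl_sharp = np.array([1.0e22, 2.0e22, 3.0e22, 4.0e22])  # toy USFLUXL values
z_sharp = zscore(usfluxl_sharp)

# Z-scores are invariant to univariate linear transformations, so no
# cross-product linear calibration is needed after standardization.
assert np.allclose(zscore(1.4 * usfluxl_sharp - 0.18), z_sharp)
print(smarp_proxy.shape)  # (4, 4)
```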
Although designed to represent the same physical quantity, summary parameters with identical keywords in SHARP and SMARP are calculated from two pipelines with different source data, and the differences between them cannot be neglected. \citet{bobra2021smarps} investigated these differences by comparing the marginal and the pairwise joint distribution of co-temporal SMARP and SHARP summary parameters for 51 NOAA active regions over the overlap period of MDI and HMI \citep[][Figure 3]{bobra2021smarps}. Motivated by these findings, we investigated the linear associations between SHARP and SMARP using a univariate linear regression analysis. Specifically, SMARP parameters were regressed on their SHARP counterparts. As shown in Figure \ref{fig:regplot}, \texttt{USFLUXL} is the most correlated parameter between SHARP and SMARP, with Pearson correlation coefficient $r = 0.970$, whereas \texttt{MEANGBL} is the least correlated parameter, with $r=0.796$. Note that applying linear transformations to the SHARP summary parameters would have no effect once Z-score standardization was performed. This is because Z-scores are invariant to univariate linear transformation. Therefore, in practice, the linear transformation on SHARP summary parameters is not performed. \begin{figure} \centering \gridline{ \fig{eda/target_vs_feature_2.pdf}{0.6\textwidth}{} } \vspace{-2em} \caption{2D histograms of summary parameters \texttt{USFLUXL}, \texttt{MEANGBL}, \texttt{R\_VALUE}, and \texttt{AREA} between SHARP and SMARP. SHARP summary parameters are suffixed with \texttt{\_HMI} and SMARP with \texttt{\_MDI}. The orange lines in the diagonal blocks are the least square fit of SMARP summary parameters on the corresponding SHARP summary parameters, with coefficient $k$, intercept $b$, and Pearson correlation coefficient $r$ displayed in the corner. 
} \label{fig:regplot} \end{figure} \subsection{Sample extraction and labeling} \label{sec:sample} We focus our joint SMARP and SHARP analysis on a particular task of interest, which we refer to as ``strong-vs-quiet'' flare prediction: based on a sequence of observations of an evolving active region, the objective is to discriminate active regions that will generate strong flares in the near future, from active regions having no flare activity whatsoever. To construct a dataset for this task, we extract data samples using a sliding window approach similar to \citet{angryk2020multivariate}. Specifically, samples are extracted from a 24-hour time window that slides through an active region sequence with a step size of 96 minutes (i.e., one 24-hour subsequence starts 96 minutes after the starting point of its previous subsequence). The 24-hour time window that a sample covers is called the \emph{observation period}, and the 24-hour time window following immediately after the observation period is called the \emph{prediction period}. Then we retain the samples that either: (1) exhibit an M- or X-class flare in the prediction period (assigned to the positive class); or (2) have no flare of any class in both the observation and the prediction periods (assigned to the negative class). Table \ref{tab:labeling} lists the sample sizes of all the possible flare activity evolution types and the associations between evolution types and class labels. Figure~\ref{fig:sample_def} shows examples of extracted and labeled samples. We note that two kinds of evolution types are excluded in this study. Evolution types denoted by blank spaces in Table~\ref{tab:labeling} indicate a decay in flare activity---a process different from the onset or the continuation of the flare activity. They are unrelated to our task and also less studied in the literature. Including them in the dataset brings an unnecessary source of heterogeneity and makes learning a reliable predictor substantially more difficult.
Evolution types denoted by question marks have only weak flares in the prediction period. They are excluded to enhance the contrast between the two classes, to avoid concerns about the detection of weak flares (many B- and C-class flares are obscured by background radiation), and to avoid the controversy over the granularity of labels (for instance, an M1.0 class flare and a C9.9 class flare release a similar amount of energy but are categorized differently). Possible limitations of our sample selection rules are discussed in Section~\ref{sec:discussion}. \begin{table} \centering \caption{ \emph{Left}: Sample sizes of all possible flare activity evolution types, with missing/inconsistent data removed. The flare activity in the observation period of a sample can be \texttt{QUIET} (the active region is flare-quiet), \texttt{WEAK} (only flares of size smaller than M1.0 occur), or \texttt{STRONG} (there is at least one large flare of size M1.0 or above). The prediction period can be classified the same way. The entries denote the sample counts in SMARP/SHARP data sets. \emph{Right}: The associations of flare activity evolution types and the class labels for the ``strong-vs-quiet'' flare prediction task. Positive samples are denoted as \texttt{+} and negative samples as \texttt{-}. Samples with the evolution type signifying a decaying flare activity (denoted as a blank space) or leading to only small flares (denoted as \texttt{?}) are not relevant to our task.
} \begin{tabular}{cccc} \toprule & \multicolumn{3}{c}{Prediction} \\ \cmidrule(lr){2-4} & \texttt{QUIET} & \texttt{WEAK} & \texttt{STRONG} \\ Observation & & & \\ \midrule \texttt{QUIET} & 130695 / 66349 & 12341 / 10715 & 932 / 296 \\ \texttt{WEAK} & 12688 / 11110 & 12033 / 14891 & 1915 / 1366 \\ \texttt{STRONG} & 1071 / 282 & 1723 / 1371 & 1754 / 1187 \\ \bottomrule \end{tabular} \begin{tabular}{cccc} \toprule & \multicolumn{3}{c}{Prediction} \\ \cmidrule(lr){2-4} & \texttt{QUIET} & \texttt{WEAK} & \texttt{STRONG} \\ Observation & & & \\ \midrule \texttt{QUIET} & \texttt{-} & \texttt{?} & \texttt{+}\\ \texttt{WEAK} & \texttt{ } & \texttt{?} & \texttt{+}\\ \texttt{STRONG} & \texttt{ } & \texttt{ } & \texttt{+}\\ \bottomrule \end{tabular} \label{tab:labeling} \end{table} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{sample_def.pdf} \caption{Demonstration of the sample extraction and labeling procedure of an active region. The dark orange dots represent flares that occurred in an active region, with the last flare exceeding the M1.0 threshold. The blue sample is labeled as the negative class because no flare of any class occurs in the observation and the prediction period. The gray sample is irrelevant to the task since all flares in the prediction period are weaker than M1.0. The orange sample is labeled as the positive class because the prediction period contains a flare of size exceeding M1.0. } \label{fig:sample_def} \end{figure} After extracting and labeling active region samples, we discarded samples with inconsistent or missing data. We consider a point-in-time record in a sample sequence as a ``bad record'' if (1) the magnetogram contains Not-a-Number (NaN) pixels, (2) the magnetogram has either height or width deviating more than 2 pixels from the median dimension in the sample sequence, or (3) any of the summary parameters is NaN. A sample is discarded if it contains more than 2 bad records or the last record is a bad record.
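The labeling rule of Table~\ref{tab:labeling} and the bad-record filter reduce to a few cases; a minimal sketch with hypothetical helper names:

```python
def label_sample(obs, pred):
    """Label a sample from its observation/prediction activity levels.

    Levels are "QUIET", "WEAK" (only flares below M1.0), or "STRONG"
    (at least one M1.0+ flare).  Returns +1 (positive), -1 (negative),
    or None for evolution types excluded from the task.
    """
    if pred == "STRONG":
        return +1                       # strong flare in the prediction period
    if obs == "QUIET" and pred == "QUIET":
        return -1                       # flare-quiet in both periods
    return None                         # decaying or weak-only evolution

def keep_sample(bad_flags):
    """Keep a sample unless it has more than 2 bad records or its
    last record is bad (the CNN uses only the last record)."""
    return sum(bad_flags) <= 2 and not bad_flags[-1]

print(label_sample("QUIET", "STRONG"))           # 1
print(label_sample("WEAK", "WEAK"))              # None
print(keep_sample([False, True, False, False]))  # True
print(keep_sample([False, False, False, True]))  # False
```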
The validity of the last record is enforced because the CNN uses only the last record in the sample sequence. The numbers of SMARP and SHARP samples output by the above pipeline are shown in Table \ref{tab:tally}. Negative samples dominate in both SMARP and SHARP. To address this significant class imbalance, we randomly undersample the negative samples to equalize the positive and negative classes, as described in more detail in Section \ref{sec:rus}. \begin{table} \centering \caption{Sample sequences extracted from SMARP and SHARP} \begin{tabular}{cccc} \toprule & Positive (M1.0+) & Negative (Quiet) & Event Rate\\ \midrule SMARP & 4601 & 130695 & 0.0340 \\ SHARP & 2849 & 66349 & 0.0412 \\ \bottomrule \end{tabular} \label{tab:tally} \end{table} \subsection{Train/validation/test split} A common practice in machine learning is to divide the data samples into three disjoint subsets, also known as splits: a training set on which the model is fitted, a validation set on which hyperparameters are selected, and a test set on which the model is evaluated for generalization performance. The ability of a trained machine learning algorithm to generalize to unseen samples hinges on the distributional similarity among the splits; it is therefore important that the splits be sufficiently similar in distribution. Due to the temporal coherence of an active region over its lifetime, a random split of data samples will place samples from the same active region into different splits. Such correlation constitutes an undesirable information leakage among splits. For instance, information leaking from the training set into the test set will likely result in an overly optimistic estimate of the generalization performance. Much of the flare prediction literature deals with this issue by taking a chronological split, e.g., a year-based split \citep[e.g.][]{bobra2015solar, chen2019identifying}. 
Unfortunately, it is observed that the splits may not share the same distribution due to solar cycle dependency \citep{wang2020predicting}. Some other works take an active-region-based split, where data samples from the same active region must belong to the same split \citep[e.g.][]{guerra2015ensemble, campi2019feature, zheng2019solar, li2020predicting}. Compared to splitting by years, this approach has the advantage that active regions in each split are randomly dispersed in different phases of a solar cycle, removing the bias introduced by artificially specifying splits. This distributional consistency between splits comes at the price of an additional source of information leakage due to sympathetic flaring in co-temporal active regions. \subsection{Random undersampling} \label{sec:rus} As shown in Table \ref{tab:tally}, both SMARP and SHARP exhibit prominent class imbalance, with positive (minority class) samples significantly outnumbered by negative (majority class) samples. In flare forecasting, class imbalance has been recognized as a major challenge, both in forecast verification \citep{woodcock1976evaluation,jolliffe2012forecast} and in data-driven methodology \citep{bobra2015solar,ahmadzadeh2021train}. A data-driven forecasting method needs to be calibrated to handle class imbalance properly, in order to effectively detect the events of interest, as opposed to being overwhelmed by the sheer volume of the negative samples in the training set. Methods to tackle class imbalance can be categorized into three types: data-level methods, algorithm-level methods, and a hybrid of the two \citep{krawczyk2016learning,johnson2019survey}. Data-level methods rebalance the class distribution by oversampling the minority class and/or undersampling the majority class---both have been used in flare forecasting \citep[e.g.][]{ribeiro2021machine,yu2010short}. 
Classifiers trained on rebalanced datasets, without being biased towards the majority class, are more likely to effectively detect the event of interest. Such classifiers are also generally more robust to variations in class imbalance than classifiers trained on the original imbalanced data \citep{xue2014does}. Algorithm-level methods modify the learners to alleviate their bias towards the majority groups. The most popular algorithm-level approach---also widely used in flare forecasting \citep[e.g.][]{bobra2015solar, nishizuka2018deep, liu2019predicting}---is cost-sensitive learning, which assigns a higher penalty to the misclassification of samples from the minority class to boost their importance \citep{krawczyk2016learning}. The penalty weights for different classes are usually set to be inversely proportional to their sample sizes \citep{nishizuka2018deep,liu2019predicting,ahmadzadeh2021train}. Other algorithm-level approaches include imbalanced learning algorithms, one-class learning, and ensemble methods \citep{ali2013classification}, but they are rarely used in flare forecasting. Recent work by \citet{ahmadzadeh2021train} provided a thorough investigation of class imbalance in flare forecasting by presenting an empirical evaluation of multiple approaches. We handle the class imbalance problem using random undersampling: we randomly remove samples from the majority class until the numbers of positive and negative samples are equal. By training on rebalanced training and validation sets, we obtain a predictor that is more robust to shifts in climatological flare rates and learns more resilient pre-flare features. Following \citet{zheng2019solar} and \citet{deng2021fine}, we also perform random undersampling on the test sets to preserve the class proportion consistency among splits. By testing on rebalanced test sets, we evaluate the generalization performance of the predictor under the \emph{same} climatological rate as it was trained with. 
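In code, the rebalancing step amounts to subsampling the majority class; the following is a minimal sketch (function and variable names are our own, not part of the SMARP/SHARP pipeline):

```python
import random

def random_undersample(samples, labels, seed=0):
    """Drop randomly chosen majority-class samples until both classes have equal size."""
    rng = random.Random(seed)
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]
    # Subsample whichever class is larger down to the size of the smaller one.
    if len(neg) > len(pos):
        neg = rng.sample(neg, len(pos))
    else:
        pos = rng.sample(pos, len(neg))
    balanced = [(s, 1) for s in pos] + [(s, 0) for s in neg]
    rng.shuffle(balanced)  # avoid an all-positive-then-all-negative ordering
    return balanced
```

Applied independently to each split, this yields the balanced (0.5 event rate) training, validation, and test sets described above.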
We note there are also studies that do not rebalance the test set in order to evaluate the performance under a realistic event rate \citep{cinto2020framework,ahmadzadeh2021train}. However, the bias caused by the class-balance change between the test and the training set is often neglected. We discuss such bias, in addition to possible corrective methods, in Section~\ref{sec:discussion}. Applying such corrections is left to future work that directly addresses operational applications. \subsection{Image resizing} The CNN requires all input images to be of the same size, but the active region cutouts are of different sizes and aspect ratios. Resizing (via interpolation), zero padding, and cropping are among the most commonly used methods to convert different-sized images into a uniform size. \citet{jonas2018flare} cropped and padded input images to a square aspect ratio and then downsampled them to $256\times 256$ pixels. This has the advantage of preserving the aspect ratio. However, since many active regions are east-west elongated, cropping may exclude parts of active regions and padding may introduce artificial signals. In this work, we resize all active region magnetograms to $128\times 128$ pixels using bilinear interpolation, similar to \citet{huang2018deep} and \citet{li2020predicting}. \subsection{Standardization} Magnetogram pixel values and summary parameters are different physical quantities expressed in different units and ranges. Unlike physical modeling, many machine learning algorithms are invariant to scaling of the input; they only care about the relative feature amplitudes. Moreover, drastically different ranges of features may hurt the convergence and stability of many algorithms. Therefore, data of different scales are typically transformed into the same range via a process called standardization. In particular, Z-score standardization transforms the input data by removing the mean and then dividing by the standard deviation. 
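In symbols, $z = (x - \mu)/\sigma$; a minimal pure-Python sketch of this transform (using the population standard deviation) is:

```python
import math

def zscore(values):
    """Z-score standardization: remove the mean, then divide by the standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]
```

The output has zero mean and unit variance regardless of the original units.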
In this work, we apply the Z-score standardization to the image data using the mean and standard deviation of the magnetogram pixels in SHARP. This is because the pixel values between SMARP and SHARP are similar. We apply the Z-score standardization to SMARP and SHARP summary parameters separately. That is, the mean and standard deviation are calculated for SHARP and SMARP separately, and data in one dataset is standardized using the mean and the standard deviation in that dataset. The transformation is ``global" \citep{ahmadzadeh2021train} in that it is calculated regardless of the splits. Empirical evaluation in \citet{ahmadzadeh2021train} showed a global standardization to be better than the local standardization, i.e., the mean and standard deviation are calculated only for the training split. We note that, with this standardization, the linear transformation converting SHARP summary parameters to SMARP proxy data is no longer needed; any coefficients and bias will have no effect after standardization. \section{Methodology} \label{sec:methodology} In this section, we introduce the two deep learning models, LSTM and CNN, that we use for flare prediction. Then we describe the stacking ensemble approach that combines the two models. Subsequently, we describe the forecast verification methods (skill scores and graphical tools) we used. Then we discuss how we use the paired $t$-test to compare empirical performance between algorithms and settings with statistical confidence. Lastly, we introduce the visual attribution methods used to interpret the decisions generated by the CNN. \subsection{Deep learning models} We use two deep neural network models, LSTM and CNN, to predict strong flares from active region observation. LSTMs use 24-hour-long time series of summary parameters before the prediction period begins, whereas CNNs use the static point-in-time magnetogram right before the prediction period begins. 
Both networks output the probability that an input sample belongs to the positive class, i.e., the probability that the active region will produce a strong flare in the next 24 hours, rather than continue to be flare-quiescent. The long short-term memory (LSTM) network was introduced by \citet{hochreiter1997long} as a type of recurrent neural network that learns from sequential data for tasks such as text and speech classification. A common LSTM unit is composed of a cell, an input gate, an output gate, and a forget gate. In solar flare prediction, LSTMs have been applied to forecasting from SHARP parameter time series \citep{chen2019identifying, liu2019predicting}. The architecture of the LSTM used in this paper is adapted from \citet{chen2019identifying}, shown in Figure \ref{fig:arch}(a). Two LSTM layers, each with 64 hidden states, are stacked. The last output of the second LSTM layer, a 64-dimensional vector, is sent to a linear layer with 2 outputs. The softmax is applied to this 2-dimensional output to get the predicted probabilities of the positive and the negative class. The convolutional neural network (CNN) is a neural network architecture that learns from images. CNNs have been applied to solar flare forecasting by \citet{huang2018deep} and \citet{li2020predicting}. We use the architecture proposed by \citet{li2020predicting}, illustrated in Figure \ref{fig:arch}(b), which was itself inspired by the VGG network \citep{simonyan2014very} and the AlexNet network \citep{krizhevsky2012imagenet}. The first two convolutional layers have kernels of size $11 \times 11$, designed to learn low-level, concrete features. The three following convolutional layers have kernels of size $3\times 3$, designed to learn more high-level, abstract concepts. Batch normalization is used after all convolutional and linear layers to speed up convergence. ReLU nonlinearity is applied only to the convolutional layers. 
The batch normalization outputs of the two linear layers are randomly dropped out with a probability of 0.5 during training to reduce overfitting. The 2-dimensional output is passed to softmax to generate a probability assignment between the positive and the negative class. More details of this architecture can be found in \citet{li2020predicting}. The procedures used to train the LSTM and the CNN are similar. For both models, the Adam optimizer \citep{kingma2014adam} is used to minimize the cross-entropy loss with learning rate $10^{-3}$ and batch size 64. Both models are evaluated on the validation set after each epoch of training. To prevent overfitting, training is stopped early if no improvement in the validation True Skill Score (or TSS, explained later in Section \ref{sec:evaluation}) is observed for a certain number of epochs, called the \emph{patience}. The LSTM is trained for at most 20 epochs with a patience of 5 epochs, whereas the CNN is trained for at most 20 epochs with a patience of 10 epochs. After training, the LSTM or CNN with the best validation TSS among the checkpoints of all epochs is selected and evaluated on the test set to estimate its generalization performance. \begin{figure} \gridline{\fig{lstm_arch.png}{0.6\textwidth}{(a) LSTM architecture} } \gridline{\fig{cnn_arch.pdf}{0.7\textwidth}{(b) CNN architecture} } \caption{Neural network architectures. (a) shows the LSTM architecture. (b) shows the CNN architecture.} \label{fig:arch} \end{figure} \subsection{Stacking ensemble} In ensemble learning, the most common approach to combining individual models---called \emph{base learners}---is averaging their outputs, possibly with non-equal weights, to produce a final output. Another approach is stacking. Unlike output averaging, stacking uses cross-validation training to learn the best combination---called the \emph{meta-learner}---of the outputs of the base learners. 
The meta-learner is often chosen to be global and smooth \citep{wolpert1992stacked}, such as linear models \citep{breiman1996stacked,leblanc1996combining,ting1999issues} and decision trees \citep{todorovski2003combining,dvzeroski2004combining}. Training a stacking ensemble consists of two stages. First, the base learners are fitted on the training set. Then, the predicted probabilities by all base learners on the validation set, as well as their labels, are collected into the so-called \emph{level-one} data, on which the meta-learner is fitted. Cross-validation is frequently used in place of a simple train-validation split to significantly increase the sample size of the level-one dataset. Either way, it is important that the level-one data are out-of-sample for base learners, otherwise the meta-learner will inevitably prefer models that overfit the training data over ones that make more realistic decisions \citep{witten2016data}. An early application of stacking for solar flare prediction problems was presented by \citet{dvzeroski2004combining}. These authors proposed a decision-tree-based stacking method, demonstrating it on the {\it UCI Repository of machine learning databases} \citep{Dua:2019}, including a dataset with 1389 flare instances, each characterized by 10 categorical attributes. In the space weather community, \citet{guerra2015ensemble} proposed stacking for flare prediction. They linearly stacked four full-disk probabilistic forecasting methods, with the weights maximizing HSS under the constraint that they are non-negative and sum to 1. They found that the stacking ensemble performed similarly to an equally weighted model. \citet{guerra2020ensemble} continued in this direction adopting stacking for more forecasting methods and a larger data sample. They also considered an unconstrained linear combination with a climatological frequency term. 
They found most ensembles perform better than a bagging model that essentially averages the members' predictions. However, these authors overlooked the nonconvexity of the objective in training the meta-learner. Furthermore, their conclusions about the superiority of the stacking ensemble over the equally weighted model were limited to the evaluation of in-sample error, which is unlikely to generalize. We will discuss these issues as we introduce our proposed stacking ensemble. We formulate the stacking ensemble as a linear combination of LSTM and CNN. Given a sample $x_i$, the stacking ensemble outputs the probability that the sample belongs to the positive class \begin{align} r_i = \alpha p_i + (1-\alpha) q_i,\quad 0\leq\alpha\leq 1, \label{eq:meta} \end{align} where $p_i$ and $q_i$ are the predicted probability by LSTM and CNN, respectively, and $\alpha$ is the meta-learner parameter. We note that, in order to be most effective, stacking methods require base learners that are diverse and complement each other. Examples include base learners trained on different types of data, each providing an alternative view of the same phenomenon. The magnetograms and summary parameters in SHARP/SMARP provide such diverse multiviews. These multiviews are processed by CNN and LSTM, respectively, to generate two predicted probabilities, which are then fused into a single prediction by the aforementioned stacking procedure. The meta-learning of the stacking ensemble essentially means finding the optimal combination weight $\alpha$. To that end, we minimize a loss function that penalizes the difference between the prediction $r_i$ and the binary label $y_i\in\{0, 1\}$ for samples in the validation set. A natural choice is to maximize the accuracy or skill scores, or equivalently, to minimize the loss which is the negation of these metrics. The downside of these loss functions is that they may not be convex or differentiable. 
This is not problematic when there are only a few base learners, as is the case with this work and \citet{guerra2015ensemble}; a grid search can be applied to find the weights that minimize the loss. However, as the number of base learners increases, the grid search quickly becomes infeasible, and iterative algorithms have to be used---many of them require convexity and differentiability of the loss function for guaranteed convergence. In machine learning, convexity and smoothness ensure the uniqueness of the minimizer and guarantee faster convergence rates for iterative algorithms \citep{nocedal2006numerical,bottou2018optimization}. \citet{guerra2020ensemble} found that for some optimization metrics, the resulting weights were sensitive to the initialization of the solver. This is likely the consequence of the nonconvexity of the loss function. Their proposed solution was to run the solver with multiple initializations and take the average. To circumvent convergence problems, machine learning researchers often use loss functions that are convex and differentiable. One example is the negative log-likelihood function for the logistic regression model, whose minimum corresponds to the maximum likelihood estimator (MLE). Within the meta-learning framework specified by Equation~(\ref{eq:meta}), the negative log-likelihood loss function is \begin{align} L(\alpha) &= -\log \prod_{i=1}^n r_i^{y_i} (1-r_i)^{1-y_i} \\ &= \sum_{i=1}^n \underbrace{\left( -y_i \log r_i - (1 - y_i) \log (1-r_i) \right)}_{L_i}\,. \end{align} The negative log-likelihood objective can also be interpreted as the binary cross-entropy loss, a divergence measure between the distributions of ground truth labels and predicted probabilities. 
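For concreteness, a sketch of the grid search that minimizes this cross-entropy loss over $\alpha\in[0,1]$ is given below; the base-learner probabilities $p_i$ and $q_i$ are placeholders standing in for the LSTM and CNN outputs:

```python
import math

def nll(alpha, p, q, y, eps=1e-12):
    """Negative log-likelihood (binary cross-entropy) of r = alpha*p + (1-alpha)*q."""
    loss = 0.0
    for pi, qi, yi in zip(p, q, y):
        r = alpha * pi + (1 - alpha) * qi
        r = min(max(r, eps), 1 - eps)  # guard against log(0)
        loss += -yi * math.log(r) - (1 - yi) * math.log(1 - r)
    return loss

def fit_meta_learner(p, q, y, grid_size=101):
    """Grid search for the combination weight alpha in [0, 1]."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    return min(grid, key=lambda a: nll(a, p, q, y))
```

When one base learner is informative and the other is not, the fitted weight concentrates on the informative one, as expected.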
This loss function can be decomposed into a sum of instance-wise losses $L_i$, with gradient and Hessian \begin{align} L_i'(\alpha) &= \left(-\frac{y_i}{r_i} + \frac{1-y_i}{1-r_i}\right)(p_i - q_i)\,, \\ L_i''(\alpha) &= \left(\frac{y_i}{r_i^2} + \frac{1-y_i}{(1-r_i)^2}\right) (p_i - q_i)^2 \geq 0. \end{align} Minimizing $L$ over $\alpha\in[0, 1]$ is a convex optimization problem, and we use grid search to find the minimizer. In the general case with more than two base learners, iterative algorithms like projected gradient descent or Newton's method will be more efficient. We point out that convex loss functions are widely adopted in the stacking literature, including the least squares estimate \citep{breiman1996stacked}, regularized linear regression \citep{leblanc1996combining}, multi-response linear regression \citep{ting1999issues}, and the hinge loss \citep{csen2013linear}. \subsection{Evaluation tools} \label{sec:evaluation} The prediction probabilities output by the CNN and LSTM can be turned into binary decisions by thresholding, and the algorithm performance can be represented as a contingency table (or confusion matrix), as shown in Table \ref{tab:contingency}. The contingency table contains the most complete information for categorical prediction. However, a single numerical metric is often needed to summarize the table for model selection. Accuracy and skill scores are examples of such contingency-table-based metrics adopted in space weather forecasting. 
\begin{table} \centering \renewcommand\arraystretch{1.2} \settowidth\rotheadsize{\theadfont True} \begin{tabular}{@{} cc|c|c|c} \multicolumn{2}{c}{} & \multicolumn{2}{c}{Predicted} & \\ & & Negative & Positive & Total\\ \cline{2-4} \multirow{2}{*}[0ex]{\rothead {True}} & Negative & TN & FP & $\textrm{N}$ \\ \cline{2-4} & Positive & FN & TP & $\textrm{P}$ \\ \cline{2-4} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{$\textrm{N}'$} & \multicolumn{1}{c}{$\textrm{P}'$} & \multicolumn{1}{c}{$\textrm{N} + \textrm{P}$} \end{tabular} \vspace{2ex} \caption{A contingency table consisting of TP (true positive), FP (false positive), FN (false negative), and TN (true negative).} \label{tab:contingency} \end{table} We start our discussion of metrics with accuracy (ACC), also known as rate correct, the simplest metric, which is widely used across domains. In terms of the contingency table, accuracy is defined as \begin{align} A = \frac{\textrm{TN} + \textrm{TP}}{\textrm{N} + \textrm{P}}. \end{align} For a highly imbalanced classification problem like solar flare prediction, accuracy is generally not considered a useful metric, since a no-skill classifier that assigns the majority label to all samples will be correct most of the time. Therefore, a plethora of skill scores have been devised to overcome this issue. A skill score provides a normalized measure of the improvement against a specific reference method. In its most general form, a skill score can be expressed as \begin{align} \textrm{Skill} = \frac{A_{\textrm{forecast}} - A_{\textrm{reference}}}{A_{\textrm{perfect}} - A_{\textrm{reference}}}, \end{align} where $A_{\textrm{forecast}}$, $A_{\textrm{reference}}$, and $A_{\textrm{perfect}}$ are the accuracy of the forecast to be evaluated, the reference forecast, and the perfect forecast, respectively. 
A higher skill score indicates better performance, with the maximum value 1 corresponding to perfect performance, 0 corresponding to no improvement over the reference, and negative values corresponding to performance worse than the reference. Below, we review some of the most commonly used skill scores in flare forecasting. For a more complete discussion, we refer readers to \citet{woodcock1976evaluation} and \citet{wilks2011statistical}. The Heidke Skill Score (HSS), also known as Cohen's kappa coefficient due to \citet{cohen1960coefficient}, uses a random forecast independent of the flare occurrences as a reference. The expected number of correct forecasts made by the random predictor, denoted by $\textrm{E}$, can be calculated using the law of total expectation as \begin{align} \textrm{E} = \frac{\textrm{P}}{\textrm{N} + \textrm{P}} \times \textrm{P}' + \frac{\textrm{N}}{\textrm{N} + \textrm{P}} \times \textrm{N}'. \end{align} The accuracy of the random predictor can then be expressed as \begin{align} A_{\textrm{reference}} = \frac{\textrm{E}}{\textrm{N} + \textrm{P}}. \end{align} Defined using this reference accuracy, $\textrm{HSS}$ has the form \begin{align} \textrm{HSS} = \frac{\textrm{TP} + \textrm{TN} - \textrm{E}}{\textrm{N} + \textrm{P} - \textrm{E}} = \frac{2[(\textrm{TP}\times\textrm{TN}) - (\textrm{FN}\times\textrm{FP})]}{\textrm{P}\,\textrm{N}' + \textrm{P}'\,\textrm{N}}. \end{align} HSS quantifies the forecast improvement over a random prediction. Since the random reference forecast depends on the event rate (climatology) $\textrm{P} / (\textrm{N} + \textrm{P})$, HSS has to be used with discretion when comparing methods across varying event rates. 
The True Skill Score (TSS), also known as Hanssen \& Kuiper's Skill Score (H\&KSS) or the Peirce Skill Score, is the difference between the probability of detection (POD) and the false alarm rate (FAR): \begin{align} \textrm{TSS} = \underbrace{\frac{\textrm{TP}}{\textrm{P}}}_{\text{POD}} - \underbrace{\frac{\textrm{FP}}{\textrm{N}}}_{\text{FAR}}. \end{align} TSS falls into the general skill score definition with a reference accuracy \citep{barnes2016comparison} \begin{align} A_{\textrm{reference}} = \frac{\textrm{FN}(\textrm{TN} - \textrm{FP})^2 + \textrm{FP}(\textrm{TP} + \textrm{FN})^2}{(\textrm{N} + \textrm{P})[\textrm{FN}(\textrm{TN} - \textrm{FP}) + \textrm{FP}(\textrm{TP} + \textrm{FN})]}\,, \end{align} constructed such that both the random and the unskilled predictors score 0. A nice property of TSS is its invariance to the class imbalance ratio, and hence it is suggested by \citet{bloomfield2012toward} as the standard measure for comparing flare forecasts. We note that, on a balanced dataset for which the event rate is 0.5, it can be shown that $\textrm{TSS} = \textrm{HSS} = 1 - 2(1 - \textrm{ACC})$. The trends and the paired $t$-test results for $\textrm{TSS}$ therefore apply to $\textrm{ACC}$ and $\textrm{HSS}$ due to perfect correlation. We thus mainly focus our discussion on $\textrm{TSS}$, list $\textrm{ACC}$ as a complementary metric, and omit $\textrm{HSS}$, as it equals $\textrm{TSS}$ in our setting. For probabilistic forecasts, the aforementioned metrics (ACC, HSS, and TSS) depend on the threshold applied to the predicted probability. A common practice is to apply a threshold of 0.5, which is considered to be ``random" by many researchers. In contrast, the following two metrics, BSS and AUC, do not depend on the threshold, and they require information (i.e., predicted probabilities) beyond the mere contingency table. 
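Using the contingency-table notation above, the two dichotomous scores reduce to a few lines of code (a sketch, with $\textrm{P}=\textrm{TP}+\textrm{FN}$ and $\textrm{N}=\textrm{TN}+\textrm{FP}$):

```python
def tss(tp, fp, fn, tn):
    """True Skill Score: probability of detection minus false alarm rate."""
    return tp / (tp + fn) - fp / (fp + tn)

def hss(tp, fp, fn, tn):
    """Heidke Skill Score: improvement over a random reference forecast."""
    numerator = 2 * (tp * tn - fn * fp)
    denominator = (tp + fn) * (fn + tn) + (tp + fp) * (fp + tn)  # P*N' + P'*N
    return numerator / denominator
```

A perfect forecast scores 1 under both metrics, and a forecast with equal detection and false alarm rates scores 0.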
The Brier Skill Score (BSS) is a skill score evaluating the quality of a probability forecast. It is of a nature different from those of HSS and TSS, in that it directly uses probabilistic predictions without thresholding them. The BSS also admits the general skill score formulation, with the accuracy replaced by the Brier Score (BS), defined as the mean squared error between the probability predictions $f_i$'s and binary outcomes $o_i$'s: \begin{align} \textrm{BS} = \frac{1}{n}\sum_{i=1}^n (f_i - o_i)^2\,. \end{align} With a reference forecast that consistently predicts the average event frequency $\bar{o}$ (also known as climatology), the $\textrm{BSS}$ is given by \begin{align} \textrm{BSS} = 1 - \frac{\textrm{BS}_\textrm{forecast}}{\textrm{BS}_\textrm{reference}}\,. \end{align} It is sometimes of interest to decompose $\textrm{BS}$ into three components of reliability, resolution, and uncertainty \citep{murphy1973new, mccloskey2018flare}. BSS is frequently accompanied by the reliability diagram discussed below, providing more complete information about the performance of probabilistic predictions. For completeness we briefly discuss the Area Under Curve (AUC), defined as the area under the receiver operating characteristic (ROC) curve. The ROC curve depicts how the probability of detection changes with the false alarm rate by varying the classification threshold. A higher AUC implies a higher average detection probability over all false alarm rates. Unlike dichotomous metrics like TSS and HSS, AUC summarizes detection performance over all possible false positive rates and, in particular, does not depend on the threshold selected to convert probabilistic forecasts into binary decisions. Consequently, the AUC is not as useful in flare prediction, especially when stringent false positive control is exercised \citep{steward2017automatic}. The above skill scores provide one way to directly compare flare prediction models. 
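The Brier score and the BSS against climatology can likewise be sketched directly from their definitions:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

def brier_skill_score(forecasts, outcomes):
    """BSS relative to a reference that always predicts the average event rate."""
    climatology = sum(outcomes) / len(outcomes)
    bs_reference = brier_score([climatology] * len(outcomes), outcomes)
    return 1.0 - brier_score(forecasts, outcomes) / bs_reference
```

A perfect probability forecast attains $\textrm{BSS}=1$, while predicting the climatological rate itself attains $\textrm{BSS}=0$.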
In addition to such metrics, flare forecasts are often evaluated using graphical tools for diagnostics and comparison. Apart from ROC curves discussed above, reliability diagrams (RD) and skill score profiles (SSP) are also commonly used in forecast verification. All three of them are applicable to forecasts that predict probabilities or continuous scores (e.g., logits) that can be converted to probabilities. We briefly discuss RD and SSP below. The reliability diagram, also known as the calibration curve, measures how well a probabilistic forecast agrees with the observation. The predicted probabilities are binned into groups and the observed event rate within each group is plotted. If the predicted probability agrees well with the observed rate, the points will be close to the diagonal of the plot (the line of perfect reliability). Such a forecast is called reliable. Any forecast that produces predictions independent of flare activity has all its points close to the horizontal line at the event rate. BSS provides a metric that accounts for both reliability and resolution. Figure \ref{fig:bss} shows an example of the plane on which the reliability diagram is drawn. The climatology rate is set to be 0.1. The overall BSS can be seen as a histogram weighted average of the contributions of the points on the reliability diagram. The contours are equal contribution lines. The points in the shaded area contribute positively to BSS. The dashed line with slope 1/2 is called the ``no skill" line, the points on which have zero contribution to the overall BSS. \begin{figure} \centering \includegraphics[width=0.43\textwidth]{bss.png} \caption{An illustration of the relation between the reliability diagram and BSS.} \label{fig:bss} \end{figure} A skill score profile plot shows how skill scores change as a function of the probability threshold. 
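Such a profile can be produced by sweeping the threshold and recomputing the score at each value; a minimal sketch for TSS:

```python
def tss_profile(probs, labels, thresholds):
    """TSS as a function of the probability threshold used to binarize forecasts."""
    profile = []
    for t in thresholds:
        tp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 1)
        fn = sum(1 for p, y in zip(probs, labels) if p < t and y == 1)
        fp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 0)
        tn = sum(1 for p, y in zip(probs, labels) if p < t and y == 0)
        profile.append(tp / (tp + fn) - fp / (fp + tn))
    return profile
```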
A method with a high and flat profile is usually desired, as such a method achieves high skill scores and its performance is robust to changes in the threshold. \subsection{Statistical performance comparisons} We use a one-sided paired $t$-test to assess the comparative performance of a pair of prediction algorithms, called algorithms 1 and 2, that are tested on the same test data. Specifically, two competing hypotheses are formulated: the null hypothesis ($H_0$) that algorithms 1 and 2 have identical performance and the alternative hypothesis ($H_1$) that algorithm 2 is better than algorithm 1. Suppose we have $n$ pairs of empirical performance $\{(x_i,y_i)\}_{i=1}^n$ achieved by algorithms 1 and 2 on $n$ test samples. The paired $t$-statistic is $t=\sqrt{n}(\overline{y}-\overline{x})/\sigma$, where $\overline{x}$ and $\overline{y}$ are the sample means of $\{x_i\}_{i=1}^n$ and $\{y_i\}_{i=1}^n$, respectively, and $\sigma^2$ is the sample variance of the differences $\{y_i-x_i\}_{i=1}^n$. Under $H_0$, $t$ follows a Student-$t$ distribution with $n-1$ degrees of freedom \citep{bickel2015mathematical}. The $p$-value associated with the test statistic $t$ is defined as the area under the Student-$t$ density to the right of the value $t$. Small $p$-values provide strong evidence in favor of $H_1$, i.e., that algorithm 2 is better than algorithm 1. In this paper, we use the paired $t$-test to examine the following hypotheses: \begin{itemize} \item Training a predictor (LSTM or CNN) on data from two solar cycles (SMARP and SHARP) improves upon training a predictor on data from a single solar cycle (only SMARP or only SHARP) (Section~\ref{sec:results_data}). \item LSTM achieves better performance than CNN (Section~\ref{sec:results_model}). \item The LSTM-CNN stacking ensemble achieves better performance than the better model between LSTM and CNN (Section~\ref{sec:results_stacking}). 
\end{itemize} \subsection{Interpretation of CNNs} Deep learning methods are widely applied to many domains such as computer vision, natural language processing, speech processing, robotics, and games \citep[see, e.g.][]{he2016deep, silver2016mastering, devlin2018bert}. As of today, deep learning algorithms largely remain black-box methods, raising concerns about the lack of interpretability, transparency, accountability, and reliability. Interpretability is of particular importance when deep learning is used in scientific discovery. Over the years, many tools for interpreting the functioning of deep neural networks have been proposed, revealing aspects of their underlying decision process. One way to interpret a black-box model, often referred to as ``attribution", is to see how different parts of the input contribute to the model's output. An attribution method generates a vector of the same size as the input, with each element indicating how much the corresponding element in the input contributes to the model decision for that input. In the context of CNNs, the attribution vector is a heatmap of the same size as the input image. A multitude of attribution methods have been proposed for CNNs in the task of image classification. One type of approach is perturbation-based methods, among which occlusion \citep{zeiler2014visualizing} is well known. Occlusion masks the input image with a gray patch at different locations and measures how much the prediction score of the ground truth class drops. The prediction score drop varies with location, forming a heatmap, with large values indicating the positions of the features important to the CNN's correct prediction. One drawback of the occlusion method is that it is computationally expensive. Another drawback is that the attribution depends on the size and shape of the patch, which need to be tuned to obtain sensible results. Therefore, this type of approach is not used in our work.
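Although occlusion is not used in this work, the idea is simple enough to sketch. Below is a minimal, framework-agnostic sketch in Python/NumPy; the model is abstracted as a \texttt{predict} callable mapping a 2-D image to a scalar class probability, and all names are illustrative rather than taken from any attribution library.

```python
import numpy as np

def occlusion_map(image, predict, patch=4, fill=0.0):
    """Slide an occluding patch over the image and record the prediction drop.

    `predict` maps a 2-D array to a scalar probability; `fill` plays the
    role of the gray occluder. Large drops mark regions the model relies on.
    """
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            # Drop in the class score when this patch is masked out.
            heat[i // patch, j // patch] = base - predict(occluded)
    return heat
```

The quadratic cost in the number of patch positions, with one forward pass each, is the computational expense noted above.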
Another type of approach uses gradient-based methods, the basic idea being that the gradient of the predicted score of a certain class with respect to the input reveals the contribution of each dimension of the input. The saliency map \citep{simonyan2013deep}, one of the earliest gradient-based methods, is simply the absolute value of the gradient. The intuition is that the magnitude of the derivative indicates which pixels need to be changed the least to affect the class score the most \citep{simonyan2013deep}. The Deconvolution Network \citep{zeiler2014visualizing} and Guided Backpropagation \citep{springenberg2015striving} attribution methods modify the backpropagation rule. Integrated Gradients \citep{sundararajan2017axiomatic} integrates the gradients along the path from a reference image to the target image. Formally, the integrated gradient along the $i$-th dimension for an input $x$ and a baseline $x'$ is \begin{align} L^c_i(x; x') = (x_i - x_i') \times \int_{\alpha=0}^1 \frac{\partial F_c(x' + \alpha \times (x - x'))}{\partial x_i}\,\mathrm{d}\alpha, \end{align} where $F_c(x)$ is the model output for class $c$ with input $x$. One desirable property of Integrated Gradients, known as completeness, is that the pixels in the attribution map add up to the difference of the prediction scores of the target and the reference image, i.e., $F_c(x) - F_c(x')$. DeepLIFT \citep{shrikumar2017learning} and its gradient-based interpretation \citep{ancona2018towards} can be seen as the gradient with modified partial derivatives of non-linear activations with respect to their inputs. Grad-CAM (Gradient-weighted Class Activation Mapping) \citep{selvaraju2017grad} identifies decision-relevant signatures by generating a saliency map that highlights pixels in the input image that increase the confidence of the network's decision for a particular class.
More formally, the Grad-CAM heatmap $L^c$ for class $c$ with respect to a particular convolutional layer is given by the positive part of the weighted sums of the layer's activation maps $A_k$, i.e., \begin{align} L^c &= \textrm{ReLU}\left(\sum_k \alpha_k^c A^k\right)\,, \end{align} with weights $\alpha^c_k$ given by the spatial average of partial derivatives of the class-specific score $y^c$ with respect to the class activation map as \begin{align} \alpha_k^c &= \frac{1}{Z} \sum_i \sum_j \frac{\partial y^c}{\partial A^k_{ij}}\,, \end{align} where $Z$ is a normalization constant. Intuitively, a class activation map is weighted more if the pixels therein make the CNN more confident in its decision that the input belongs to class $c$. In solar flare prediction, \citet{bhattacharjee2020supervised} applied the occlusion method and found that CNNs pay attention to polarity inversion regions. \citet{yi2021visual} applied Grad-CAM to CNNs and found that polarity inversion lines in full-disk MDI and HMI magnetograms are highlighted as an important feature for flare prediction. In this paper, using a variety of attribution methods, we observe similar trends for the CNN trained on SHARP and SMARP data. \section{Results} \label{sec:results} We aim to answer the following four scientific questions: (1) Can additional data from another solar cycle benefit the performance of deep learning methods for solar flare prediction? (2) Do features implicitly learned by CNN work better than handcrafted physical parameters used by LSTM? (3) Can we combine the two deep learning methods to obtain a better prediction? (4) What preflare signatures can the CNN detect from the magnetogram of an active region? To summarize, we report the following findings: (1) Additional training data from HMI collected in Solar Cycle 24 improve the predictive performance of both LSTM and CNN when tested on Solar Cycle 23. 
(2) LSTM (using summary parameters) generally outperforms CNN (directly using magnetograms) in flare prediction. (3) Stacking CNN and LSTM generally leads to better prediction performance. (4) Visual attribution methods help us interpret the decision of CNN by identifying preflare features. This section presents the empirical results that lead to these findings. \subsection{Data from another solar cycle improves prediction} \label{sec:results_data} A major goal of this paper is to examine the utility of using SMARP and SHARP together. We set an experimental group and a control group and contrast their 24-hour ``strong-vs-quiet" flare prediction performance. The control group consists of models that train, validate, and test exclusively on SHARP data. We refer to this type of dataset as \texttt{SHARP\_ONLY}{}. Compared to the control group, models in the experimental group have the training set enriched by SMARP data, while the validation and the test set are kept the same. We call this type of dataset \fusedsharp{}. The only difference between \texttt{SHARP\_ONLY}{} and \fusedsharp{} is that models using \fusedsharp{} have access to data from a previous solar cycle in the training phase. Symmetrically, we design \texttt{SMARP\_ONLY}{} and \texttt{FUSED\_SMARP}{} to examine the utility that SHARP brought to SMARP. Specifically, the four types of datasets are generated as follows: \begin{enumerate} \item Dataset \texttt{SHARP\_ONLY}{}: 20\% of all the HARPs are randomly selected to form a test set. 20\% of the remaining HARPs are randomly selected to form a validation set. The rest of the HARPs belong to the training set. In each split, negative samples are randomly selected to match the number of positive samples. \item Dataset \fusedsharp{}: The test set and the validation set stay the same, respectively, with those in \texttt{SHARP\_ONLY}{}. The remaining HARPs are combined with all TARPs to form the training set. 
In each split, negative samples are randomly selected to match the number of positive samples. \item Dataset \texttt{SMARP\_ONLY}{}: 20\% of all the TARPs are randomly selected to form a test set. 20\% of the remaining TARPs are randomly selected to form a validation set. The rest of the TARPs belong to the training set. In each split, negative samples are randomly selected to match the number of positive samples. \item Dataset \texttt{FUSED\_SMARP}{}: The test set and the validation set stay the same, respectively, with those in \texttt{SMARP\_ONLY}{}. The remaining TARPs are combined with all HARPs to form the training set. In each split, negative samples are randomly selected to match the number of positive samples. \end{enumerate} \begin{table} \centering \caption{Sample sizes of a random realization of the four datasets} \begin{tabular}{lcccccc} \toprule {} & \multicolumn{2}{c}{Train} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Test} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} {} & Positive & Negative & Positive & Negative & Positive & Negative \\ \midrule \texttt{SHARP\_ONLY} & 1774 & 1774 & 665 & 665 & 410 & 410 \\ \texttt{FUSED\_SHARP} & 6375 & 6375 & 665 & 665 & 410 & 410 \\ \texttt{SMARP\_ONLY} & 2849 & 2849 & 860 & 860 & 892 & 892 \\ \texttt{FUSED\_SMARP} & 5698 & 5698 & 860 & 860 & 892 & 892 \\ \bottomrule \end{tabular} \label{tab:data} \end{table} Since train/validation/test split and undersampling are both random, repeating these two steps with different seeds enables uncertainty quantification to the evaluation results. The tally of samples produced by one particular random seed is shown in Table \ref{tab:data}. On each of the four types of datasets, LSTMs and CNNs are fitted on the training set, validated on the validation set, and evaluated on the test set. \begin{table} \centering \caption{Test set performance of the LSTM and the CNN on 24-hour ``strong-vs-quiet" flare prediction. 
The two datasets within each comparison group share common test sets. The 1-$\sigma$ error is calculated from 10 random experiments. Bold fonts indicate the experiments in which the mean of the metric on the fused dataset is higher than that on the single dataset.} \input{table_member} \label{tab:member} \end{table} Table \ref{tab:member} shows the results of the ``strong-vs-quiet" active region prediction using the LSTM and the CNN. For LSTMs, a consistent improvement on the fused datasets (\texttt{FUSED\_SHARP} and \texttt{FUSED\_SMARP}) is observed in terms of the mean of all metrics. This aligns with the fact that more data are typically desired to improve the generalization performance of deep learning models, which are overparameterized and can easily overfit on small datasets. For CNNs, an improvement is observed on \texttt{FUSED\_SMARP} over \texttt{SMARP\_ONLY}, but not on \texttt{FUSED\_SHARP} over \texttt{SHARP\_ONLY}. This indicates that the lower image quality in SMARP has a negative effect on the CNN's performance. The statistical significance of the improvement on the fused datasets is tested using a one-sided paired $t$-test with significance level 0.05. Table \ref{tab:ttest} shows the $t$-statistics and the associated $p$-values of the paired $t$-tests. The bold font $p$-values are less than 0.05 and considered to be significant. For LSTMs, the fused datasets are better than the single datasets in a statistically significant way in almost all settings. The only exception is BSS on \fusedsharp{}, whose $p$-value is only slightly larger than 0.05. For CNNs, across all metrics, statistically significant improvement is observed for \texttt{FUSED\_SMARP}{} over \texttt{SMARP\_ONLY}{}, but not for \fusedsharp{} over \texttt{SHARP\_ONLY}{}. This indicates that adding SHARP magnetograms to SMARP during training helps the CNN to better predict flares, but not the other way around.
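The paired $t$-statistic used for these comparisons can be sketched in a few lines of Python/NumPy. The function below computes only the statistic; the one-sided $p$-value would then be the upper-tail area of the Student-$t$ distribution with $n-1$ degrees of freedom (e.g., from a statistics library). The function name is ours.

```python
import numpy as np

def paired_t_statistic(x, y):
    """t = sqrt(n) * mean(y - x) / std(y - x, ddof=1).

    x, y: paired metric values achieved by algorithms 1 and 2 on the same
    n test runs. Under H0 (identical performance), t follows a Student-t
    distribution with n - 1 degrees of freedom; large positive t favors
    H1 (algorithm 2 is better).
    """
    d = np.asarray(y, dtype=float) - np.asarray(x, dtype=float)
    n = d.size
    return np.sqrt(n) * d.mean() / d.std(ddof=1)
```

Pairing the runs removes the between-experiment variance shared by both algorithms, which is what makes the test sensitive to small but consistent improvements.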
One potential reason is that SMARP magnetograms have a lower signal-to-noise ratio than SHARP magnetograms, which may have negatively affected the CNN. The LSTM, on the other hand, uses the active region summary parameters, which may suppress the effect of noise when the magnetograms are summarized, providing information of sufficiently good quality that the improvement induced by the increased training sample size is not offset. \begin{table} \centering \caption{Paired $t$-tests for significant improvement of test set performance on the fused datasets as measured by different metrics. The alternative hypothesis $H_1$ claims that metric $S$ on the fused dataset (\texttt{FUSED\_SHARP} or \texttt{FUSED\_SMARP}) is greater than that on the respective single dataset (\texttt{SHARP\_ONLY} or \texttt{SMARP\_ONLY}), which is tested against the null hypothesis $H_0$ claiming otherwise. The bold font $p$-values are less than 0.05 and considered to be significant.} \input{table_ttest} \label{tab:ttest} \end{table} \begin{figure} \centering \gridline{ \fig{graphical/sharp_LSTM/reliability}{0.3\textwidth}{(a1)} \fig{graphical/sharp_LSTM/roc}{0.3\textwidth}{(a2)} \fig{graphical/sharp_LSTM/ssp}{0.3\textwidth}{(a3)} } \gridline{ \fig{graphical/sharp_CNN/reliability}{0.3\textwidth}{(b1)} \fig{graphical/sharp_CNN/roc}{0.3\textwidth}{(b2)} \fig{graphical/sharp_CNN/ssp}{0.3\textwidth}{(b3)} } \caption{Verification plots on SHARP test data to compare models trained on \fusedsharp{} and \texttt{SHARP\_ONLY}{}. Shown in (a1)--(a3) are the reliability diagram, ROC, and SSP for LSTM. Shown in (b1)--(b3) are the same plots but for CNN. In each panel, the blue/orange curve is the test performance for the model trained on \fusedsharp{}/\texttt{SHARP\_ONLY}{}. In each graph, solid curves and error bars (or shaded area) indicate respectively the means and the standard deviations calculated from 10 random experiments.
In each reliability plot, the short horizontal bars indicate the number of samples in each probability bin, and the two curves are separated horizontally to prevent error bars from overlapping.} \label{fig:graphical_sharp} \end{figure} \begin{figure} \centering \gridline{ \fig{graphical/smarp_LSTM/reliability}{0.3\textwidth}{(a1)} \fig{graphical/smarp_LSTM/roc}{0.3\textwidth}{(a2)} \fig{graphical/smarp_LSTM/ssp}{0.3\textwidth}{(a3)} } \gridline{ \fig{graphical/smarp_CNN/reliability}{0.3\textwidth}{(b1)} \fig{graphical/smarp_CNN/roc}{0.3\textwidth}{(b2)} \fig{graphical/smarp_CNN/ssp}{0.3\textwidth}{(b3)} } \caption{Same as Figure \ref{fig:graphical_sharp} but for SMARP test data to compare models trained on \texttt{FUSED\_SMARP}{} and \texttt{SMARP\_ONLY}{}.} \label{fig:graphical_smarp} \end{figure} Aside from the numerical metrics, we provide graphical evaluation results for Group 1 (\fusedsharp{} and \texttt{SHARP\_ONLY}{}) in Figure \ref{fig:graphical_sharp}, and Group 2 (\texttt{FUSED\_SMARP}{} and \texttt{SMARP\_ONLY}{}) in Figure \ref{fig:graphical_smarp}. A trend of over-forecasting for high probabilities and under-forecasting for low probabilities is observed in some cases, but the effect is minor considering the size of the error bars. In the reliability diagrams, all models have points close to the diagonal, indicating high reliability. In the ROC plots, it is observed that the LSTM achieves higher AUC on the fused datasets (\texttt{FUSED\_SHARP} and \texttt{FUSED\_SMARP}) than on the single datasets (\texttt{SHARP\_ONLY} and \texttt{SMARP\_ONLY}). For the CNN, a similar improvement is observed in the comparison of \texttt{FUSED\_SMARP}{} and \texttt{SMARP\_ONLY}{}, whereas the ROCs are almost indistinguishable for \fusedsharp{} and \texttt{SHARP\_ONLY}{}. In the skill score profiles, the TSS for the LSTM trained on the fused datasets is at the same level as that trained on the single datasets.
For the CNN, on the other hand, \fusedsharp{} displays a disadvantage against \texttt{SHARP\_ONLY}{}, whereas \texttt{FUSED\_SMARP}{} displays an advantage over \texttt{SMARP\_ONLY}{}. This verifies the observations made from the metrics. In all cases, the skill score profiles are high and relatively flat, indicating that the performance is robust to changes of the threshold within a wide range. \subsection{LSTM performs better than CNN} \label{sec:results_model} This section provides forecast verification for the LSTM and the CNN. We use the same evaluation results for the 10 experiments in each setting mentioned in Section \ref{sec:results_data}, but present them in a way that makes it easier to compare the LSTM and the CNN. We note the differences between our verification set-up and that in an operational setting: \begin{enumerate} \item In terms of data, our test set has many samples removed based on their active regions, observational data, and flare activities. About one fifth of the tracked active region time series in the evaluation period (May 2010--December 2020) are selected. Within each active region series, only samples with good-quality observations and certain flaring patterns are selected (detailed in Section \ref{sec:sample}). Negative samples (flare-quiet active regions) are significantly downsampled to match the number of positive samples (strong-flare-imminent active regions). In contrast, operational forecasts do not discard any sample unless absolutely necessary. \item In terms of outcomes, our forecasts are made independently for individual active regions, with the prediction result available every 96 minutes (i.e., the MDI cadence) for valid active regions. In contrast, the end goal of an operational forecast is a full-disk forecast. For operational forecasts built upon active region based forecasts, the predictions for all active regions on the solar disk are aggregated to compute the full-disk prediction.
In addition, operational forecasts are typically issued at a lower frequency (e.g., every 6 hours), but in a consistent manner. \end{enumerate} The verification results in this section should be interpreted with the above differences in mind. \begin{table} \centering \caption{Paired $t$-tests for significant improvement of the LSTM over the CNN in terms of different metrics $S$ on the test set of the four datasets. The alternative hypothesis $H_1$ claims $S_{\textrm{LSTM}} > S_{\textrm{CNN}}$. The bold font $p$-values are less than 0.01 and considered to be significant.} \input{table_ttest_est} \label{tab:ttest_est} \end{table} It can be seen from Table \ref{tab:member} that the LSTM generally scores higher than the CNN in terms of mean performance. We performed paired $t$-tests to validate this observation. The results in Table \ref{tab:ttest_est} confirm that the LSTM scores significantly higher ($p<0.01$) than the CNN across all metrics on all datasets except for \texttt{FUSED\_SMARP}. On \texttt{FUSED\_SMARP}, although we cannot claim statistical significance, the LSTM's performance is slightly better than or at the same level as the CNN's, as observed from Table \ref{tab:member}. We only present the graphical verification results for both models trained and tested on \fusedsharp{}, given that SHARP is widely used and validated by a wealth of studies. For the results on the other datasets, the visualization can be obtained by simply rearranging the results shown in Figures \ref{fig:graphical_sharp} and \ref{fig:graphical_smarp}. \begin{figure} \centering \gridline{ \fig{graphical/model_comp_fused_sharp/reliability}{0.3\textwidth}{} \fig{graphical/model_comp_fused_sharp/roc}{0.3\textwidth}{} \fig{graphical/model_comp_fused_sharp/ssp}{0.3\textwidth}{} } \caption{Verification plots of the LSTM and the CNN on \fusedsharp{}. Shown are the reliability diagram, ROC, and SSP, from left to right.
This figure essentially extracts the blue curves (representing \fusedsharp{}) from both rows of Figure \ref{fig:graphical_sharp} and overlays them.} \label{fig:graphical_model_comp} \end{figure} The reliability diagram in Figure \ref{fig:graphical_model_comp} shows that the probabilistic prediction given by the LSTM is closer to the diagonal than that of the CNN, and hence more reliable. The CNN exhibits a trend of under-forecasting, especially when the predicted probability is less than 0.5. The histogram of predicted probability shows that the probabilistic forecast by the LSTM is ``more confident", or has higher resolution, than that of the CNN, with most of the predicted probabilities close to 0 or 1. The ROC in Figure \ref{fig:graphical_model_comp} shows a clear advantage of the LSTM over the CNN, in the sense that it achieves a higher probability of detection at the same false alarm rate. This trend is also manifested in terms of AUC. The SSP in Figure \ref{fig:graphical_model_comp} shows that the LSTM achieves higher TSS on average for all thresholds within 0.2--0.9. It is also observed that the TSS for the LSTM is maximized by a threshold very close to the climatological rate on the test set (which is 0.5 in our case), a necessary condition for a reliable predictor \citep{kubo2019some}. \subsection{Stacking LSTM and CNN leads to better prediction} \label{sec:results_stacking} In this paper, we consider only stacking methods to combine the CNN and the LSTM, in the hope of better predictive performance. We evaluate the test set performance of stacking methods using four different criteria: \begin{itemize} \item \texttt{CROSS\_ENTROPY}: weights are optimized to minimize cross-entropy loss on the validation set. \item \texttt{BSS}: weights are optimized to maximize BSS on the validation set. \item \texttt{AUC}: weights are optimized to maximize AUC on the validation set. \item \texttt{TSS}: weights are optimized to maximize TSS on the validation set.
\end{itemize} Among these criteria, cross-entropy and negative $\textrm{BSS}$ are known to be convex; $\textrm{TSS}$ is neither convex nor concave; we observe $\textrm{AUC}$ to be concave but have no proof beyond empirical evidence. Criteria $\texttt{HSS}$ and $\texttt{ACC}$ are excluded from the evaluation since their stacking weights are the same as those of $\texttt{TSS}$ due to the perfect correlation mentioned in Section \ref{sec:evaluation}. To provide baseline performances, we include the evaluation results for the two base learners, \texttt{LSTM} and \texttt{CNN}. In addition to the above stacking methods, we consider two other meta-learning schemes: \begin{itemize} \item \texttt{AVG} outputs the average of the predicted probabilities of the two base learners. \item \texttt{BEST} \citep{dvzeroski2004combining} selects the base learner that performs the best on the validation set and applies it to the test set. \end{itemize} Splitting and undersampling are randomly performed 10 times on each of the four datasets \fusedsharp{}, \texttt{FUSED\_SMARP}{}, \texttt{SHARP\_ONLY}{}, and \texttt{SMARP\_ONLY}{}. The test set TSS of the 10 random experiments for each criterion on each dataset is summarized as box plots in Figure \ref{fig:stacking}. The optimal stacking weights for the four stacking ensembles are summarized in Figure \ref{fig:stacking_weights}. Figure \ref{fig:stacking} shows that stacking methods perform slightly better than the \texttt{BEST} meta-learner, especially on \texttt{FUSED\_SMARP}{} and \texttt{SMARP\_ONLY}{}. Of note, the wide error bars are partially due to the randomness originating from data sampling. To fairly compare the methods, we perform paired $t$-tests with significance level 0.05.
Stacking turned out to be significantly better than \texttt{BEST} in the following three settings: \texttt{BSS} on \texttt{FUSED\_SMARP}{} ($p = 0.048$), \texttt{AUC} on \texttt{SMARP\_ONLY}{} ($p = 0.025$), and \texttt{TSS} on \texttt{SMARP\_ONLY}{} ($p = 0.013$). We also note in Figure \ref{fig:stacking} that \texttt{BEST} unsurprisingly achieves better performance than \texttt{AVG} but is slightly worse than the better-performing base learner \texttt{LSTM}, most noticeably on \fusedsharp{}. In fact, \texttt{BEST} decided that \texttt{CNN} was the better model in 3 out of 10 experiments on \fusedsharp{}. This is not unexpected because the ``best" model on the validation set is not necessarily the best on the test set. From Figure \ref{fig:stacking_weights}, we can see that $\alpha$ is greater than 0.5 in most experiments, with the median falling between 0.55 and 0.9 in all settings. This suggests that stacking ensembles generally depend more on the LSTM than on the CNN. The variance of $\alpha$ is large in some settings, especially for \texttt{AUC} on \texttt{FUSED\_SMARP}{}. The variance for the convex criteria (\texttt{CROSS\_ENTROPY} and \texttt{BSS}) is not smaller than that for the nonconvex criterion (\texttt{TSS}), indicating that the local minima of non-convex loss functions are not the major source of variance. We suspect the major source of the variance is the data sampling bias among experiments, which is, in turn, a collective consequence of the insufficient sample size, the heterogeneity across active regions, and possibly a small amount of information leakage, because the validation set is used both in the validation of the base learners and in the training of the meta-learner.
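The one-dimensional optimization of the stacking weight $\alpha$ can be sketched as a simple grid search. In the sketch below, \texttt{criterion} stands for any of the validation scores above (accuracy is used as a concrete stand-in), and the function names and grid resolution are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def fit_stacking_weight(p_lstm, p_cnn, labels, criterion, grid=1001):
    """Grid-search the convex-combination weight alpha in [0, 1].

    The ensemble probability is r = alpha * p_lstm + (1 - alpha) * p_cnn;
    `criterion(labels, r)` is any validation score to maximize.
    Returns the best alpha and the score it achieves.
    """
    alphas = np.linspace(0.0, 1.0, grid)
    scores = [criterion(labels, a * p_lstm + (1 - a) * p_cnn) for a in alphas]
    best = int(np.argmax(scores))
    return alphas[best], scores[best]

def accuracy(labels, probs, threshold=0.5):
    # Fraction of samples whose thresholded prediction matches the label.
    return float(np.mean((probs > threshold) == labels))
```

Because the search is over a single bounded parameter, an exhaustive grid handles the non-concave criteria (e.g., TSS or accuracy) just as easily as the convex ones.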
\begin{figure} \centering \includegraphics[width=\textwidth]{stacking/stacking.png} \caption{Test set TSS for base learners and meta-learners using different criteria.} \label{fig:stacking} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{stacking/stacking_weights.png} \caption{Stacking weight $\alpha$ fitted using different criteria on different datasets. All 10 values of $\alpha$ in an experiment setting are shown as points next to the corresponding box.} \label{fig:stacking_weights} \end{figure} We inspect one stacking experiment with criterion \texttt{ACC}; the results are presented in Figure \ref{fig:data}. Figure \ref{fig:data} (a1)--(a3) show the predicted probabilities by the LSTM and the CNN for each instance in the training, the validation, and the test set. The points are colored by their labels, with red representing the positive class and blue representing the negative class. The green solid line in (a2) and (a3) shows the decision boundary of the meta-learner with $\alpha$ fitted on the validation set to maximize ACC. The points $(p, q)$ on the upper right side of the boundary are classified as positive because they satisfy $r = \alpha p + (1-\alpha) q > 0.5$. In this experiment, the fitted $\alpha=0.586$, suggesting the stacking ensemble relies almost equally on the CNN and the LSTM. The violet dashed line in (a3) is the decision boundary with $\alpha$ fitted on the test set, and hence can be seen as the oracle. It can be observed that the distributions of predicted probabilities on the validation set (a2) and the test set (a3) are similar. The distribution of predicted probabilities on the training data in (a1), on the other hand, looks completely different, with the CNN achieving almost perfect separation. In fact, the CNN overfitted on the in-sample data, as indicated by a significantly lower positive recall rate in (a2) and (a3).
This validates the decision that meta-learners should not be fitted on the predicted probabilities of the same data used to train the base learners. Figure \ref{fig:data}(b) exhibits the stacking optimization process for the same experiment, in which the ACC is calculated on the validation set (a2) and the test set (a3) by scanning over a fine grid of $\alpha\in[0,1]$ with resolution 0.001. Although the validation ACC (blue curve) is not concave with respect to $\alpha$, it does exhibit a maximum at $\alpha=0.586$, as indicated by the vertical green line. The stacking ensemble's test set performance over $\alpha$ is shown as the red curve. These curves indicate that stacking the LSTM and the CNN indeed results in a small improvement in performance relative to implementing either of them alone, corresponding to the values of ACC at $\alpha=1$ or $\alpha=0$. \begin{figure} \centering \gridline{ \fig{stacking/data_train.png}{0.25\textwidth}{(a1) Train set} \fig{stacking/data_val.png}{0.25\textwidth}{(a2) Validation set} \fig{stacking/data_test.png}{0.25\textwidth}{(a3) Test set} \fig{stacking/alpha.png}{0.23\textwidth}{(b) Optimization of ACC} } \caption{(a1)--(a3): CNN predicted probability (y-axis) vs. LSTM predicted probability (x-axis) for the train, the validation, and the test set. The green solid line in (a2) and (a3) is the decision boundary of the ensemble with the meta-learner fitted on the validation set. The violet dashed line in (a3) is analogous to the green line except that it is fitted on the test set, and hence can be seen as the oracle. (b): ACC as a function of $\alpha$ on the validation and the test set. The vertical green line shows the value of $\alpha$ that maximizes the validation ACC.
The leftmost values of the ACC curves ($\alpha=0$) correspond to the ACC of the CNN, and the rightmost values of these curves ($\alpha=1$) correspond to the ACC of the LSTM.} \label{fig:data} \end{figure} \subsection{CNN identifies the emergence of preflare features} We use visual attribution methods to extract flare-indicative characteristics of magnetograms from trained CNNs. First, we use synthetic images to examine patterns that contribute to a positive decision of CNNs. The results on synthetic images help us better understand the attribution maps of real magnetograms. Then, we apply visual attribution methods to image sequences of selected active regions that transition from a flare-quiescent state to a flare-imminent state. Setting the baseline to the first image in the sequence gives a time-varying attribution map that tracks the magnetic field variations that contribute to the change in the predicted probability. \subsubsection{Synthetic image} To assist our understanding of attribution maps obtained by different methods, we first turn to synthetic magnetograms. We take the bipolar magnetic region (BMR) model of \citet{yeates2020good}, represented as the line-of-sight magnetic field $B$ as a function of heliographic location $(s,\phi)$, where $s$ denotes sine-latitude and $\phi$ denotes Carrington longitude. $B$ is parameterized by the amplitude $B_0$, the polarity separation $\rho$ (in radians), the tilt angle $\gamma$ (in radians) with respect to the equator, and the size factor $a$, fixed to 0.56 to match the axial dipole moment of SHARP \citep{yeates2020good}. The untilted BMR centered at the origin has the form \begin{align} B(s, \phi) = - B_0 \frac{\phi}{\rho} \exp\left[-\frac{\phi^2 + 2 \arcsin^2(s)}{(a\rho)^2}\right]. \end{align} We sweep a grid of $B_0$, $\rho$, and tilt angle $\gamma$ to generate a BMR dataset. Of particular interest are synthetic BMRs considered to be flare-imminent by CNNs.
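The untilted BMR profile above is straightforward to sample on an $(s,\phi)$ grid. The sketch below evaluates the formula with NumPy; the grid ranges, resolution, and parameter values are our illustrative choices, not those used to generate the paper's dataset.

```python
import numpy as np

def bmr_field(s, phi, B0=1.0, rho=0.3, a=0.56):
    """Untilted bipolar magnetic region (Yeates 2020), centered at the origin:

        B(s, phi) = -B0 * (phi / rho) * exp(-(phi^2 + 2*arcsin(s)^2) / (a*rho)^2)

    s is sine-latitude, phi is Carrington longitude (radians).
    """
    return -B0 * (phi / rho) * np.exp(
        -(phi**2 + 2.0 * np.arcsin(s)**2) / (a * rho)**2
    )

# Sample a small synthetic magnetogram on an (s, phi) grid.
s = np.linspace(-0.3, 0.3, 64)
phi = np.linspace(-0.5, 0.5, 64)
S, PHI = np.meshgrid(s, phi, indexing="ij")
B = bmr_field(S, PHI)
```

The $-\phi/\rho$ prefactor produces the two opposite polarities on either side of $\phi=0$, and the Gaussian envelope localizes the region; applying a tilt $\gamma$ would amount to rotating the $(s,\phi)$ coordinates before evaluation.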
Figure \ref{fig:bipole} shows some examples of these, together with their attribution results, from which patterns of positive predictions can be summarized. Guided Backpropagation heatmaps have both poles highlighted, with the signs matching the polarities. Integrated Gradients produces heatmaps that are more concentrated toward the polarity centers and attribute more credit to the negative polarities. DeepLIFT produces heatmaps similar to those of Integrated Gradients. Grad-CAM's results are not as interpretable as those of the above methods. They seem to avoid the polarities and highlight the background and sometimes the polarity inversion lines. \begin{figure} \centering \gridline{\fig{syn/0.png}{\textwidth}{}} \gridline{\fig{syn/5.png}{\textwidth}{}} \caption{Examples of synthetic bipole images and attribution maps.} \label{fig:bipole} \end{figure} \subsubsection{The emergence of preflare signatures in the active region evolution} We focus on the attribution results on SHARP as opposed to SMARP because the former has magnetograms of higher resolution and lower noise level. We choose the CNNs trained on \texttt{SHARP\_ONLY}{} as opposed to \fusedsharp{} because the former are observed to generalize better according to Section \ref{sec:results_data}. To get results that reflect the generalization performance as opposed to training artifacts, we need to make sure that the active regions being investigated are out-of-sample. To evaluate any active region of interest in SHARP, we perform 5-fold cross-validation on \texttt{SHARP\_ONLY}{}, so that every active region is associated with a CNN that has never seen that active region during training. In addition, we do not enforce the flare-based sample selection rules and random undersampling, so that the evolution of the attribution maps can be evaluated more coherently. As case studies, we select four HARP sequences that transition from a flare-quiescent state to a flare-imminent state.
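Frame-by-frame attribution with the first frame of a sequence as the baseline can be approximated by a Riemann sum over the Integrated Gradients integral defined earlier. In the sketch below, the \texttt{grad\_fn} callable stands in for framework autograd (returning $\partial F_c/\partial x$ at a given input) and is a placeholder, as are the function and parameter names.

```python
import numpy as np

def integrated_gradients(x, x_ref, grad_fn, steps=50):
    """Riemann-sum (midpoint rule) approximation of Integrated Gradients.

    grad_fn(z) returns the gradient of the class score F_c at input z
    (a placeholder for autograd in a deep-learning framework). x_ref is
    the baseline, e.g. the first magnetogram in the sequence.
    """
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # Gradient evaluated at a point on the straight path x_ref -> x.
        total += grad_fn(x_ref + a * (x - x_ref))
    return (x - x_ref) * total / steps
```

For a linear score the approximation is exact and the completeness property holds: the attribution values sum to $F_c(x) - F_c(x')$, which is why the red pixels dominate whenever the predicted probability of the current frame exceeds that of the baseline frame.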
Figure \ref{fig:time_series} shows the labels and predicted probabilities of the four sample sequences. The attribution methods are performed on each HARP sequence in a frame-by-frame manner. \begin{figure} \centering \gridline{\fig{contour/384/time_series.pdf}{0.8\textwidth}{}}\vspace{-10px} \gridline{\fig{contour/1806/time_series.pdf}{0.8\textwidth}{}}\vspace{-10px} \gridline{\fig{contour/5107/time_series.pdf}{0.8\textwidth}{}}\vspace{-10px} \gridline{\fig{contour/5982/time_series.pdf}{0.8\textwidth}{}}\vspace{-10px} \caption{CNN predictions on parts of the time series in HARPs 384, 1806, 5107, and 5982. The labels are shown as blue open boxes and the predicted probabilities as green plus symbols. A point-in-time instance is labeled as positive if an M1.0+ flare occurred in that active region within the following 24 hours. GOES flare events during and within 24 hours of the sample sequence are shown as short vertical bars, with y-coordinates indicating flare intensities (peak flux in W/m$^2$) on a log scale. } \label{fig:time_series} \end{figure} Figure \ref{fig:attribution} shows the last image of the four HARP sample sequences. The attribution maps, of the same size as the input of the CNN (128$\times$128 pixels), are upsampled to the original resolution of the SHARP magnetogram using the \texttt{resize} method of the Python package \texttt{skimage.transform} with 2nd-order spline interpolation. The attribution maps of DeepLIFT and Integrated Gradients are similar, so only the results of the former are shown; the results for Integrated Gradients can be accessed online with the link shown in the caption. In Figure \ref{fig:attribution}, the attribution maps of Guided Backpropagation are observed to be more concentrated in strong fields compared to those of Deconvolution. The reference image of DeepLIFT and Integrated Gradients is chosen as the first sample in each sequence.
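For reference, the upsampling step just described can be sketched as follows; we use \texttt{scipy.ndimage.zoom} with a 2nd-order spline as a stand-in for the paper's \texttt{skimage.transform.resize} call, and the target array size below is illustrative, not the actual SHARP dimensions:

```python
import numpy as np
from scipy.ndimage import zoom

# A dummy 128x128 attribution map standing in for the CNN-input-sized output.
attr = np.random.default_rng(0).normal(size=(128, 128))

# Target: the native magnetogram resolution (illustrative size).
target = (384, 768)
factors = (target[0] / attr.shape[0], target[1] / attr.shape[1])

# order=2 selects a 2nd-order spline, matching the interpolation order in the text.
attr_up = zoom(attr, factors, order=2)
print(attr_up.shape)
```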
From these two methods, the change of the prediction score is attributed to the change of the magnetic configuration of the last frame relative to the first frame, with red pixels indicating positive contributions and blue pixels indicating negative contributions. Since the predicted event probability of the last frame is higher than that of the first frame for all HARPs (Figure \ref{fig:time_series}), the red pixels outweigh the blue pixels in the attribution maps of DeepLIFT and Integrated Gradients. The Grad-CAM results roughly reveal the positions of the strong fields and polarity inversion lines. \begin{figure} \centering % \gridline{\fig{contour/384/last.pdf}{\textwidth}{HARP 384}} \gridline{\fig{contour/1806/last.pdf}{\textwidth}{HARP 1806}} \gridline{\fig{contour/5107/last.pdf}{\textwidth}{HARP 5107}} \gridline{\fig{contour/5982/last.pdf}{\textwidth}{HARP 5982}} \caption{Attribution results of Deconvolution, Guided Backpropagation, DeepLIFT, and Grad-CAM on the last magnetogram in the sample sequences of HARP 384, 1806, 5107, and 5982. DeepLIFT uses the first sample in the sequence as the reference. ``LayerGradCam-4'' means Grad-CAM with respect to the output of the fourth, or second-to-last, convolutional layer. An interactive movie of the heatmaps on all 9 samples in HARP 5982 using more attribution methods can be accessed at \url{https://zeyusun.github.io/attribution/captum_movie_first.html}.} \label{fig:attribution} \end{figure} From the visual attribution map, the CNN's prediction of a flaring active region can be attributed to elements in the magnetogram. Figure \ref{fig:contour} shows the contour plots of attribution maps generated by Integrated Gradients overlaid on magnetograms of the four HARP series. The contours enclose areas with large absolute values of Integrated Gradients in the last frame of each series, with red/blue contours indicating the regions contributing positively/negatively to the increase in predicted probability.
A general pattern is that flux is emerging in the red contours and canceling in the blue contours. From the attribution maps, we can explain the increase in prediction scores as the consequence of the emerging flux outweighing the canceling flux. The visual attribution maps can not only be used to identify preflare signatures in an active region; comparing them with our knowledge of flaring active regions can provide insights to diagnose, and potentially improve, the machine learning method used to predict flares. Here we provide an example. A known artifact in magnetograms is the fake polarity inversion line (PIL) caused by the projection effect when the magnetic vector's inclination relative to the line of sight surpasses 90$^\circ$ \citep{leka2017evaluating}. In Figure \ref{fig:contour}(d), the emerging polarity inversion line in the penumbra of the leading polarity (on the right/west part of the active region) is picked up as a preflare signature by the largest red contour. However, HARP 5982 is on the limb of the solar disk at the time (Figure \ref{fig:harp5982}), and the emerging PIL is caused by the highly inclined magnetic field in the penumbra as the flux rope rises from the surface. This shows that the CNN trained to associate magnetograms with flaring activities is not able to discern the polarity artifact by itself. It also suggests that the model could potentially be improved if we fed the location information to the CNN to help it correct such artifacts. A similar PIL artifact is observed in the following polarity of HARP 5107 in Figure \ref{fig:contour}(c). Since this artifact does not change much during the observation interval, it does not contribute as much to the change of the prediction score. We remark that the attribution maps obtained by Integrated Gradients are better in terms of resolution and interpretability than those used in \citet{bhattacharjee2020supervised} and \citet{yi2021visual}.
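As a toy illustration of the Integrated Gradients computation with a nonzero baseline (the role played here by the first frame of a sequence), consider the following numpy sketch; the model is a stand-in differentiable classifier, not the paper's CNN:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    """Toy differentiable 'classifier' standing in for the CNN."""
    return sigmoid(w @ x)

def grad(x, w):
    p = model(x, w)
    return p * (1.0 - p) * w  # analytic gradient of sigmoid(w @ x) w.r.t. x

def integrated_gradients(x, baseline, w, steps=1000):
    """Midpoint-rule approximation of the path integral from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline[None, :] + alphas[:, None] * (x - baseline)[None, :]
    grads = np.array([grad(p, w) for p in path])
    return (x - baseline) * grads.mean(axis=0)

rng = np.random.default_rng(1)
w = rng.normal(size=16)
baseline = rng.normal(size=16)       # a nonzero reference, like the first frame
x = baseline + rng.normal(size=16)

attr = integrated_gradients(x, baseline, w)
# Completeness: the attributions sum to the change in the prediction score.
print(abs(attr.sum() - (model(x, w) - model(baseline, w))))
```

The completeness property checked in the last line is what lets the change of the prediction score be "decomposed" over input pixels, as discussed in the text.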
The occlusion method in \citet{bhattacharjee2020supervised} was shown to highlight the area between the opposite polarities, providing only crude attribution. This is because the size of the occlusion mask is usually chosen to be big enough to cover the informative regions. The result of Grad-CAM, being the attribution to a convolutional layer as opposed to the input, also suffers from the low-resolution issue. The Grad-CAM results in Figure \ref{fig:attribution} and in \citet{yi2021visual} are both able to highlight active regions, but the resolution is not high enough to reveal any structural information within the active region at the level of magnetic elements. Guided Backpropagation in \citet{yi2021visual} is able to identify polarity inversion lines. However, it has been observed (and theoretically assessed) that Guided Backpropagation and Deconvolution behave similarly to an edge detector, i.e., they are activated by strong gradients in the image and insensitive to network decisions \citep[e.g.][]{nie2018theoretical,adebayo2018sanity}. In contrast, the method of Integrated Gradients needs a baseline, which aligns with the natural way in which humans interpret an observation: by assigning credit or blame to a certain cause, we implicitly consider the absence of the cause \citep{sundararajan2017axiomatic}. In addition, Integrated Gradients essentially ``decomposes'' the change of the network's prediction score into pixels of the input image, leading to a high-resolution attribution map. \begin{figure} \centering \gridline{\fig{contour/384/15.png}{\textwidth}{\vspace{-15px}(a)}}\vspace{-15px} \gridline{\fig{contour/1806/15.png}{\textwidth}{\vspace{-15px}(b)}}\vspace{-15px} \gridline{\fig{contour/5107/15.png}{\textwidth}{\vspace{-15px}(c)}}\vspace{-15px} \gridline{\fig{contour/5982/8.png}{\textwidth}{\vspace{-20px}(d)}}\vspace{-15px} \caption{Highly attributed pixels in the last frame by Integrated Gradients on four select HARPs, shown in rows.
In (a), the left/right panel shows the first/last magnetogram in the sample sequence of HARP 384. The magnetograms are in the SHARP resolution, with ticks on the axes indicating pixels. Pixel values saturate at $\pm 500$~G. The red/blue contours on the right panel (last frame) highlight the areas with strong positive/negative Integrated Gradients relative to the first frame. The same contours are mapped to the left panel (first frame) for contrast. The contours are drawn on the attribution map smoothed with a Gaussian kernel with a standard deviation of 3 pixels. Panels (b), (c), and (d) are similar to (a) but for the other HARPs. An animated version of this figure that serially presents the complete sample sequences of the four HARPs is available online.} \label{fig:contour} \end{figure} \begin{figure} \centering \gridline{ \fig{contour/5982/hmi_ar_12423.png}{0.4\textwidth}{(a) HMI magnetogram} \fig{contour/5982/aia_171_ar_12423.png}{0.4\textwidth}{(b) AIA 171 \AA{} image} } \caption{Line-of-sight magnetic field (a) and solar EUV image (b) of HARP 5982 (NOAA AR 12423) at 23:10:17 on Sep 27, 2015. Images are taken from \url{https://solarmonitor.org/}. Note that the image title of (b) should be ``AIA 171~\AA{}'' instead of ``AIA 174~\AA{}''.} \label{fig:harp5982} \end{figure} \section{Conclusions and discussion} \label{sec:discussion} In this paper, we used two solar cycles of active region observational data from SMARP \citep{bobra2021smarps} and SHARP to examine the improvement in flare predictive performance of two deep learning models, namely the LSTM and the CNN, when trained on the fused datasets. When tested on SMARP, both models showed significant improvement. When tested on SHARP, the LSTM showed significant improvement. The results of the controlled comparative studies indicate that such improvement is due to the significantly increased sample size from the other solar cycle.
Then, in our setting of flare prediction, we verified the performance of the LSTM and the CNN using skill scores, reliability diagrams, ROC curves, and skill score profiles. The comparison showed that the LSTM is generally a better model than the CNN. After that, we explored the possibility of combining the LSTM and the CNN for better prediction performance in the framework of a meta-learning paradigm called stacking. The results showed that in some settings, the stacking model outperforms the best member in the ensemble. Lastly, we applied visual attribution methods to CNNs. The results demonstrate the utility of visual attribution methods in identifying flare-related signatures in active regions, including flux emergence and new polarity inversion lines. The attribution map of one particular region on the limb of the solar disk revealed one limitation of the CNN and suggested potential modifications for improvement. The questions raised in Section \ref{sec:results} are arguably broad and general. We have taken one particular path to partially address each question. To inspire future studies, we provide additional comments and discussions related to these questions. \paragraph{Task-based sample selection} In this work, we studied the task of distinguishing M- and X-class flare producing regions from flare-quiet regions. This ``strong-vs-quiet'' task focuses on the increase or the continuation of flare activity and does not require distinguishing between flares of closer energy levels, like M- and C-class flares. The samples that indicate a decay in flare activity and the samples that only lead to weak flares are therefore excluded. This ensures good baseline classification performance against which our proposed predictors could be reliably compared.
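For concreteness, the ``strong-vs-quiet'' selection can be sketched as a labeling rule; the function names and GOES-class encoding below are illustrative, not the released code:

```python
# A GOES class string like "M1.0" maps to a peak flux; M1.0+ means >= 1e-5 W/m^2.
GOES_BASE = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def peak_flux(goes_class):
    """Peak soft X-ray flux (W/m^2) for a GOES class string, e.g. 'M1.0'."""
    return GOES_BASE[goes_class[0]] * float(goes_class[1:])

def label_strong_vs_quiet(flares_next_24h):
    """Positive if an M1.0+ flare occurs within the next 24 h; None (excluded)
    if only weaker flares occur, mirroring the strong-vs-quiet selection."""
    if any(peak_flux(c) >= 1e-5 for c in flares_next_24h):
        return 1
    if flares_next_24h:          # only C-class or weaker: exclude the sample
        return None
    return 0                     # flare-quiet

print(label_strong_vs_quiet(["C2.3", "M1.4"]),
      label_strong_vs_quiet(["C5.0"]),
      label_strong_vs_quiet([]))
```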
Our findings may not extend to other task definitions, e.g., all-clear forecasts, where flare-quiet active regions that evolve from a flare-active state are of interest \citep{barnes2016comparison,ji2020all}, and operationally-evaluated forecasts, where flare activities in the prediction period are considered prescience and cannot be used to select samples. The principal challenge in extending our analysis to these tasks is that weak- and no-flare activity annotations have higher uncertainty due to background radiation and other factors \citep{mccloskey2018flare}. The higher level of ``label noise'' when including weak flares as negative samples would make learning a reliable predictor substantially more difficult. A possible solution, and a topic of future work, is to predict the continuous flare intensity level instead of the GOES flare activity class. \paragraph{Evaluation under a realistic event rate} As mentioned in Section~\ref{sec:rus}, we rebalance the training and the validation sets to prevent the predictor from biasing towards the majority non-event class simply due to its volume, and we rebalance the test set to evaluate the predictor's generalization ability under the same climatological rate as it is trained on. Evaluating the performance under a realistic event rate requires more than simply applying the predictor to a test set that is not rebalanced: predictors trained on the balanced dataset will bias towards the minority class on a test set with a realistic rare event rate, producing an undesirably high false alarm rate. One possible solution to correct such bias is to treat the class proportions as priors and apply the Bayes rule \citep{elkan2001foundations}. This method requires an accurate estimate of the true event rate of the testing period. \paragraph{The importance and challenges of data fusion} Fusing data from multiple sources to produce more consistent, accurate, and useful information is a universal problem in astronomy.
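The prior correction mentioned in the evaluation paragraph above can be sketched as an odds rescaling; this sketch assumes only the class proportions differ between training and deployment, and the rates used are hypothetical:

```python
def correct_prior(p, train_rate, test_rate):
    """Rescale a probability p calibrated at base rate train_rate to base rate
    test_rate, treating class proportions as priors (Bayes rule under label shift)."""
    # Likelihood ratio implied by p under the training prior.
    lr = (p / (1.0 - p)) * ((1.0 - train_rate) / train_rate)
    # Posterior odds under the deployment prior, converted back to a probability.
    odds = lr * test_rate / (1.0 - test_rate)
    return odds / (1.0 + odds)

# A predictor trained on a rebalanced (50/50) set, deployed where events are rare:
p_balanced = 0.8
p_real = correct_prior(p_balanced, train_rate=0.5, test_rate=0.05)
print(round(p_real, 3))
```

Note that the corrected probability is much lower than the raw score, which is exactly the false-alarm reduction the text aims at; the correction is only as good as the estimate of the true event rate.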
Although the astrophysics community is funding projects like DKIST \citep{rimmele2020daniel} and the Vera Rubin Observatory \citep{ivezic2019vera}, both of which will take 25--50 TB of data a day, astronomers cannot study long-term trends without including historical or old data sets (or waiting a decade for these instruments to take enough data). In this work, we took a straightforward approach: adding the new data to the training set with minimal calibration, and training the models as usual. Based on our experiments, we have shown that this simple approach can result in improvement. We anticipate that, with more accurate cross-calibration between SMARP and SHARP, the benefit of combining them may be greater than demonstrated here. There are several possible ways to improve upon the fusion method: \begin{itemize} \item To simulate the effect of unresolved structures in SMARP magnetograms, a Gaussian blur can be applied to the higher quality SHARP magnetograms. This approach is used in comparing full-disk line-of-sight magnetograms of HMI and MDI in \citet{liu2012comparison}, in which the parameters of the Gaussian filters are tuned to minimize the root mean squared difference between them. \item Point spread functions can be estimated for MDI and HMI magnetograms separately, and deconvolution can be performed to remove stray light that is instrument-specific \citep{mathew2007properties,yeo2014point}. \item Magnetogram fusion can be performed in the other direction: super-resolving magnetograms in SMARP to mimic those in SHARP. Such an approach has recently been explored using deep neural networks \citep{gitiaux2019probabilistic, jungbluth2019single}. The improved overall image quality of super-resolved SMARP magnetograms could capture higher resolution magnetic field distributions and hence improve the accuracy of the active region summary parameters in SMARP.
\item For the active region summary parameters, we took a ``post-facto'' correction approach by correcting the parameters of the same name in the two data products via linear regression. Alternatively, with fused magnetograms available, one can also re-compute the parameters on those transformed image data. This approach avoids the linearity assumption and leads to parameters more consistent with the manipulated magnetograms, with the caveat that the manipulated magnetograms also suffer from information loss. More concretely, the effects of spatial resolution on the inferred magnetic field and derived quantities have been examined by \citet{leka2012modeling}, who found that, to preserve the underlying character of the magnetic field, post-facto binning can be employed with some confidence, albeit less so for derived quantities like the vertical current density. In short, a universal and accurate fusing strategy that accounts for the instrumental spatial resolution is still hindered by our ignorance of the ground-truth magnetic field structure, and the benefits and drawbacks of different fusing methods have to be evaluated case by case. \end{itemize} \paragraph{Machine learning with multi-source data} Learning from multi-source data is also a prevalent topic in machine learning. In our work, the machine learning models are trained as usual with the new data added to the training set. An alternative approach would use transfer learning: train on the additional data first, then switch to the original data for fine-tuning. In heliophysics, this idea was recently explored by \citet{covas2020transfer} in the prediction of the solar surface longitudinally averaged radial magnetic field distribution, using historical data from 1874 to 1975 in addition to newer data obtained by SoHO and SDO. \paragraph{Performance comparison between the LSTM and the CNN} The active region summary parameters used by the LSTM are derived from magnetograms.
In that sense, the data used by the CNN contain complete information about the data used by the LSTM. However, our experiments show that the LSTM generally has better performance. There are many potential reasons that the CNN does not perform better than, or as well as, the LSTM: (1) the CNN takes in uniformly sized magnetograms whose size and aspect ratio are distorted; (2) the CNN only uses the image of the last frame in the sequence, whereas the LSTM uses all the data in the sequence; (3) the CNN learns the features by itself, whereas the LSTM uses hand-crafted parameterizations that are known to be relevant to flaring activity; (4) the CNN uses subsampled images with information loss, whereas the LSTM uses parameters derived from full-resolution images; (5) the CNN has more parameters and is more prone to overfitting (reflected in the lower training loss, but not validation loss, of the CNN in many experiments). \paragraph{On the comparison of flare forecast methods} Many flare forecast studies quote skill scores directly from other studies for comparison. Even though the forecast goal is somewhat standard and used in many studies (e.g., to predict whether there will be an M1.0+ class flare in the next 24 hours), to conclude the superiority of one method over another, both methods have to be evaluated on the same dataset. However, it is not trivial to come up with such a ``common ground'' for methods to compete on, because research codes are not usually publicly available and because different opinions exist on the ways the data should be processed. The difficulty of methodical comparison has spawned efforts to fairly compare existing forecasts (e.g., the ``All-Clear'' workshops \citep{barnes2016comparison,leka2019comparison}) and to develop common datasets (e.g., the SWAN-SF benchmark dataset \citep{angryk2020multivariate}) or platforms (e.g., the FLARECAST project \citep{georgoulis2021flare}).
To advocate credible comparisons, we do not quote skill scores from other studies. Instead, we follow the above studies and make our code publicly available to facilitate future comparisons. \paragraph{On stacking ensemble} In our experiments, stacking the CNN and the LSTM performs similarly to the ``select best'' strategy but not significantly better in most settings. However, another stacking study by \citet{guerra2020ensemble} used a larger number of base learners and showed that most ensembles achieved a better skill score (by 5\% to 15\%) than any of the members alone. This suggests that improved performance may be obtained by training a larger number of base learners on the SMARP/SHARP data studied here. \paragraph{Choice of the baseline in interpretability methods} Some visual attribution methods, such as Integrated Gradients and DeepLIFT, require a reference input. One naive choice is an image with all values equal to zero. Images of this sort imply a lack of patterns. These are the baselines mostly used for interpretation in computer vision tasks like object detection. In our case, the images are magnetic field component measurements, which can take on positive or negative values over a wide dynamic range, unlike ordinary photographs. We choose the first image in the sequence as the reference, so that the visual attribution methods attribute the change of the prediction scores to the change of the magnetic field configuration, which is of actual interest. There are other choices of baselines. One example is input images with Gaussian noise. Using this type of reference may reveal the sensitivity of the network's prediction to local changes. Furthermore, the integration may benefit from going beyond simply linearly interpolating between the reference and the input in the original image space, i.e., the 2D Cartesian plane. For example, one could consider applying attribution methods to the path of the time series of magnetograms.
The Integrated Gradients calculated with this approach would integrate the temporal dependency of each point-in-time in the sequence, exploiting more information about the evolution of active regions. The authors would like to thank K. D. Leka for valuable discussions on the polarity artifacts of the line-of-sight component of the photospheric magnetic field, and on the effect of spatial resolution on magnetograms and derived quantities. This work was supported by NASA DRIVE Science Center grant 80NSSC20K0600. \software{Our code for data processing, model training, and performance evaluation is openly available at Zenodo via DOI \href{https://doi.org/10.5281/zenodo.6415849}{10.5281/zenodo.6415849} \citep{sun2022zenodo} or on GitHub at \url{https://github.com/ZeyuSun/flare-prediction-smarp}.}
Chaden Halfhill Title: Founder & CEO Certifications: LEED AP, CGR Summary: Entrepreneur, artist, educator, community leader, environmental and sustainability advocate Chaden Halfhill is owner and chief visionary of Silent Rivers Design+Build. Chaden has grown Silent Rivers over the past 25 years to reflect a passion for conscientious design, building excellence and healthy relationships with the natural environment as well as our local community. Always motivated by the opportunity to make things better creatively, Chaden is first and foremost a fine artist and sculptor. Wesleyan University, art degree in sculpture Columbia University, architecture studies Center on Sustainable Communities (COSC); Co-Founder, past Board President, Instructor Home Builders Association of Greater Des Moines, Board of Directors Des Moines Remodeler's Council, Executive Committee, Former Chair Iowa Arts Council; Artist, Educator Metro Arts Alliance; Artist, Educator, Board Member, Current Chairperson pArtners Unlimited; Artist, Educator, Board President Clive's Public Art Commission, Founding Member Indigo Dawn LLC, Founder & President Green & Main, Founder Verified Green; Associate Experience: After completing his formal art education, Chaden built a nationally recognized portfolio of works in sculpture, design and photography that explored the conflict between structural design and the natural environment. Using that foundation, he engaged design+build as an informed way to develop vibrant and balanced living environments. In 1993, he founded Silent Rivers Design+Build. Chaden's vision inspires the Silent Rivers Design+Build team to incorporate these ideals into every project. Focused on the power of collaboration, Chaden encourages Silent Rivers to be an authentic cooperative experience for his team and his clients.
The Silent Rivers team upholds a quality and level of continuing education that surpasses industry standards and is known for quality craftsmanship and providing artistic solutions for every client. In 2017, Chaden was awarded the G. David Hurd Innovator of the Arts award by the Des Moines Arts Festival. He is shown here with Trudy Holman Hurd, wife of the late G. David Hurd. "If you have the desire to do something great, that's what will come out of it. The people who have joined Silent Rivers joined me because they want to put their heart and soul into creating unique and beautiful spaces and improving our clients' lives. That's why we are here." More About Chaden Halfhill: As a community leader, educator, and entrepreneur, Chaden is committed to sustainability in the built environment. He co-founded the Center on Sustainable Communities and helped guide its growth into a trusted educational resource that supports statewide promotion and networking regarding sustainable building practices. Chaden also established Indigo Dawn LLC, a green development firm dedicated to urban infill and rehabilitation of existing buildings. Chaden continues to enjoy creating art and working with design+build clients while engaging in broad policy discussions and community engagement. Projects by Chaden Halfhill: Old Farmhouse Addition Creates Master Suite & Courtyard
# Defining C-space state for an arm in OMPL

I'm very new to Robotics, so please bear with my ignorance about the subject. That said, I'm trying to use the Open Motion Planning Library (OMPL) to set up the configuration space for a robotic arm modeled after the human arm. In my model, the shoulder has 3 DOFs for translation and 3 DOFs for rotation. The elbow has only 1 DOF in rotation.

What would be the correct way to set up the state space in OMPL? I can think of the following ways:

My confusion comes from the fact that since different dimensions of my C state space will have different bounds, then, correct me if I'm wrong, shouldn't those be normalized to a common range before distance computations? Otherwise, they'll contribute disproportionately to the distance. If OMPL does this automatically, will it do this for both $$R^7$$ as well as the compound state space?

Another source of confusion is that one also needs to assign weights to individual subspaces in the compound state space. What's the rationale for having weights? Is it just to prioritize some motions over the others?

I think what you need in this case is a real vector state space.

You can have different bounds for each variable of your state space:

    ob::RealVectorStateSpace* jointSpace(new ob::RealVectorStateSpace(_ndof));
    ob::RealVectorBounds bounds(_ndof);

    std::vector<double> lower_limits(_ndof);
    std::vector<double> upper_limits(_ndof);

    bounds.setLow(0, SOME_LIMIT);
    bounds.setHigh(0, SOME_LIMIT);
    //.
    //.
    //.
    bounds.setLow(n, SOME_LIMIT);
    bounds.setHigh(n, SOME_LIMIT);

    jointSpace->setBounds(bounds);

• While I understand that the code snippet is only an example, I would suggest avoiding the `new` keyword to create raw pointers and use unique pointers instead. It helps avoid memory leaks.
– Akshay Kumar, Jan 31 at 18:08
# Probability of Compound Events

## Chance of an event occurring: # of successes divided by possible outcomes.

Basic Probability

Most are familiar with how flipping a coin or rolling dice works, and yet probability remains one of the most counterintuitive branches of mathematics for many people. The idea that flipping a coin and getting 10 heads in a row is just as unlikely as getting the following sequence of heads and tails is hard to comprehend.

$HHTHTTTHTH$

Assume a plane crashes on average once every 100 days (extremely inaccurate). Given a plane crashed today, what day in the next 100 days is the plane most likely to crash next?

#### Watch This

http://www.youtube.com/watch?v=YWt_u5l_jHs James Sousa: Introduction to Probability

#### Guidance

Probability is the chance of an event occurring. Simple probability is defined as the number of outcomes you are looking for (also called successes) divided by the total number of outcomes. The notation $P(E)$ is read "the probability of event $E$".

$P(E)=\frac{\# \ successes}{\# \ possible \ outcomes}$

Probabilities can be represented with fractions, decimals, or percents. Since the number of possible outcomes is in the denominator, the probability is always between zero and one.
A probability of 0 means the event will definitely not happen, while a probability of 1 means the event will definitely happen.

$0 \le P(E) \le 1$

The probability of something not happening is called the complement and is found by subtracting the probability from one.

$P(E^C)=1-P(E)$

You will often be looking at probabilities of two or more independent experiments. Experiments are independent when the outcome of one experiment has no effect on the outcome of the other experiment. If there are two experiments, one with outcome $A$ and the other with outcome $B$, then the probability of $A$ and $B$ is:

$P(A \ and \ B)=P(A) \cdot P(B)$

The probability of $A$ or $B$ is:

$P(A \ or \ B)=P(A)+P(B)-P(A \ and \ B)$

Example A

If you are dealt one card from a 52 card deck, what is the probability that you are dealt a heart? What is the probability that you are dealt a 3? What is the probability that you are dealt the three of hearts?

Solution: There are 13 hearts in a deck of 52 cards. $P(heart)=\frac{13}{52}=\frac{1}{4}$

There are 4 threes in the deck of 52. $P(three)=\frac{4}{52}=\frac{1}{13}$

There is only one three of hearts. $P(three \ and \ heart)=\frac{1}{52}$

Example B

Dean and his friend Randy like to play a special poker game with their friends. Dean goes home a winner 60% of the time and Randy goes home a winner 75% of the time.

1. What is the probability that they both win in the same night?
2. What is the probability that Randy wins and Dean loses?
3. What is the probability that they both lose?

Solution: First represent the information with probability symbols.

Let $D$ be the event that Dean wins. Let $R$ be the event that Randy wins. The complement of each probability is when Dean or Randy loses instead.

$P(D)=0.60, \quad P(D^C)=0.40$

$P(R)=0.75, \quad P(R^C)=0.25$

1. $P(D \ and \ R)=P(D) \cdot P(R)=0.60 \cdot 0.75=0.45$
2.
$P(R \\ and \\ D^C) =P(R) \\cdot P(D^C)=0.75 \\cdot 0.40 =0.30$\n3. $P(D^C \\ and \\ R^C)=P(D^C) \\cdot P(R^C)=0.40 \\cdot 0.25=0.10$\n\nExample C\n\nIf a plane crashes on average once every hundred days, what is the probability that the plane will crash in the next 100 days?\n\nSolution: The na\u00efve and incorrect approach would be to interpret the question as \u201cwhat is the sum of the probabilities for each of the days?\u201d Since there are 100 days and each day has a probability of 0.01 for a plane crash, then by this logic, there is a 100% chance that a plane crashes. This isn\u2019t true because if on average the plane crashes once every hundred days, some stretches of 100 days there will be more crashes and some stretches there will be no crashes. The 100% solution does not hold.\n\nIn order to solve this question, you need to rephrase the question and ask a slightly different one that will help as an intermediate step. What is the probability that a plane does not crash in the next 100 days?\n\nIn order for this to happen it must not crash on day 1 and not crash on day 2 and not crash on day 3 etc.\n\nThe probability of the plane not crashing on any day is $P(no \\ crash)=1-P(crash)=1-0.01=0.99$ .\n\nThe product of each of these probabilities for the 100 days is:\n\n$0.99^{100} \\approx 0.366$\n\nTherefore, the probability that a plane does not crash in the next 100 days is about 36.6%. To answer the original question, the probability that a plane does crash in the next 100 days is $1-0.366=0.634$ \u00a0or about\u00a0 $63.4 \\%$ .\n\nConcept Problem Revisited\n\nWhether or not a plane crashes today does not matter. The probability that a plane crashes tomorrow is\u00a0 $p=0.01$ .\u00a0The probability that it crashes any day in the next 100 days is equally\u00a0 $p=0.01$ . 
The key part of the question is the word \u201cnext\u201d.\n\nThe probability that a plane does not crash on the first day and does crash on the second day is a compound probability, which means you multiply the probability of each event.\n\n$P(Day \\ 1 \\ no \\ crash \\ AND \\ Day \\ 2 \\ crash)=0.99 \\cdot 0.01=0.0099$\n\nNotice that this probability is slightly smaller than 0.01. Each successive day has a slightly smaller probability of being the next day that a plane crashes. Therefore, the day with the highest probability of a plane crashing next is tomorrow.\n\n#### Vocabulary\n\nThe probability of an event is the number of outcomes you are looking for (called successes) divided by the total number of outcomes.\n\nThe complement of an event is the event not happening.\n\nIndependent events are events where the occurrence of the first event does not impact the probability of the second event.\n\n#### Guided Practice\n\n1. Jack is a basketball player with a free throw average of 0.77. What is the probability that in a game where he has 8 shots that he makes all 8? What is the probability that he only makes 1?\n\n2. If it has a 20% chance of raining on Tuesday, your phone has 30% chance of running out of batteries, and there is a 10% chance that you forget your wallet, what is the probability that you are in the rain without money or a phone?\n\n3. Consider the previous question with the rain, wallet and phone. What is the probability that at least one of the three events does occur?\n\n1. Let\u00a0 $J$ \u00a0represent the event that Jack makes the free throw shot and $J^C$ \u00a0represent the event that Jack misses the shot.\n\n$P(J)=0.77, \\ P(J^C)=0.23$\n\nThe probability that Jack makes all 8 shots is the same as Jack making one shot and making the second shot and making the third shot etc.\n\n$P(J)^8=0.77^8 \\approx 12.36 \\%$\n\nThere are 8 ways that Jack could make 1 shot and miss the rest. 
The probability of each of these cases occurring is:\n\n$P(J^C)^7 \\cdot P(J)=0.23^7 \\cdot 0.77$\n\nTherefore, the overall probability of Jack making 1 shot and missing the rest is:\n\n$0.23^7 \\cdot 0.77 \\cdot 8=0.0002097 =0.02097\\%$\n\n2. While a pessimist may believe that all the improbable negative events will occur at the same time, the actual probability of this happening is less than one percent:\n\n$0.20 \\cdot 0.30 \\cdot 0.1=0.006=0.6 \\%$\n\n3. The na\u00efve approach would be to simply add the three probabilities together. This is incorrect. The better way to approach the problem is to ask the question: what is the probability that none of the events occur?\n\n$0.8 \\cdot 0.7 \\cdot 0.9=0.504$\n\nThe probability that at least one occurs is the complement of none occurring.\n\n$1-0.504=0.496=49.6 \\%$","date":"2016-02-14 22:00:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 40, \"texerror\": 0, \"math_score\": 0.6633055210113525, \"perplexity\": 367.6761390663506}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-07\/segments\/1454702032759.79\/warc\/CC-MAIN-20160205195352-00328-ip-10-236-182-209.ec2.internal.warc.gz\"}"}
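The complement rule worked through in Example C and Guided Practice 3 of the lesson above is easy to check numerically. A minimal Python sketch (variable names are illustrative; all input numbers come from the lesson):

```python
# Numerically check the complement-rule examples from the lesson.
# All probabilities below come from the lesson text; days/events are
# assumed independent, as the lesson does.

# Example C: P(at least one crash in 100 days) with p = 0.01 per day.
p_no_crash_100 = 0.99 ** 100          # no crash on any of the 100 days
p_crash_100 = 1 - p_no_crash_100      # complement: at least one crash

# Guided Practice 3: rain 20%, dead phone 30%, forgotten wallet 10%.
p_none = 0.8 * 0.7 * 0.9              # none of the three events occurs
p_at_least_one = 1 - p_none           # complement: at least one occurs

print(round(p_crash_100, 3))     # ~0.634, as in the lesson
print(round(p_at_least_one, 3))  # 0.496, as in the lesson
```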
College celebrates Global Awareness Month throughout November October 25, 2018 | Global Awareness An abundance of activities are planned throughout November in celebration of Global Awareness Month. Christy Burke, Director of Education Abroad and International Recruitment, said her office, in conjunction with the office of Diversity & Inclusion, and the Asian Studies and Latin American Studies programs, has planned events starting... Marietta College Education Department Call for Comments May 16, 2017 | Accreditation The Education Department of Marietta College is hosting an accreditation visit by the Council for the Accreditation of Educator Preparation (CAEP) on October 22-24, 2017. Interested parties are invited to submit third-party comments to the visiting team. Please note that comments must address substantive matters related to the quality of... WWI speaker series continues with Dr. Doug Anderson on Jan. 26 Press Release January 14, 2015 | World War I Marietta College's World War I colloquium — Exploring the Great War — returns with its fourth presentation by Dr. Douglas Anderson at 7 p.m., Monday, Jan. 26. Anderson, Director of Legacy Library, will speak on "The Music of World War I," in Thomas 124. Anderson's presentation is free and open to the public. Marietta students present research at American Physical Society meeting April 22, 2019 | physics Marietta College's David Erzen '19 (Bethel Park, Pennsylvania) and Xuan Zhu '19 (Shandong Province, China) recently shared their research at the spring meeting of the Ohio-Region Section of the American Physical Society. The meeting was held at the College of Wooster. Erzen, who will graduate in May with a Bachelor of Science degree in... 
Marietta students place second in Peoples Bank Case Competition February 16, 2018 | Business Abby Tornes '18 (Lowell, Ohio) learned something about herself during the sixth annual Peoples Bank Undergraduate Business Case Competition on Saturday, Feb. 10. Study puts Marietta among top colleges, universities in student success September 29, 2016 | Annual Rankings Marietta College ranks No. 221 nationally in the inaugural Wall Street Journal/Times Higher Education survey of more than 1,000 U.S. colleges and universities released this week. Marietta College political science student to present at conference Feature March 28, 2008 | 2008 Marietta College's Nicholas Busch '08, who is majoring in political science, will attend the 31st annual Appalachian Studies Association Conference at Marshall University on March 28-30. "It's quite an accomplishment for Nick to have had his paper chosen for presentation at the annual Appalachian Studies meeting because this is a well known... McCoy Professor pleased to complete two publications October 18, 2018 | Faculty News Marietta College's Grace Johnson, McCoy Professor of Management and Accounting, recently completed two publications. McCoy Scholarship, hard work help senior transform life May 2, 2017 | Commencement The metamorphosis of Brian Raiff '17 (Galena, Ohio) during the past four years at Marietta College is something even he couldn't have predicted. Raiff arrived on campus in the Fall of 2013 as a 285-pound former football player, who openly admits he didn't know much about being a petroleum engineer. But four years later, it's easy to see... 
Civic Engagement proud of volunteer work being provided Feature October 20, 2014 | Civic Engagement Marietta College's Office of Civic Engagement recently completed an inventory of community service completed by students, faculty and staff and are pleased to announce the group accumulated 5,791 hours during the 2013-14 academic year. Marietta College professor receives grant from Ohio Academy of History April 10, 2019 | history When Dr. Brandon Downing was growing up in western Pennsylvania, he had no idea he would someday write about the very Native Americans who called Kittanning home centuries before him. The story of the Delaware Indians is written in American history but it is from the viewpoint of non-Native peoples. Downing, a Marietta College History... McCoy Professors recognize the talent in Marietta's entire faculty January 31, 2018 | McCoy Professor Tim Catalano and Kevin Pate were part of the same faculty cohort that arrived at Marietta College in the fall of 2001. Both of them are respected "veteran" faculty members today — Catalano in the English Department and Pate in the Chemistry Department. Recently, they learned how much their efforts and contributions in and out of the... Marietta jumps 63 spots in latest Money Magazine rankings August 1, 2016 | rankings1617 Money magazine has once again recognized Marietta College as one of The Best Colleges for Your Money, according to the publication's inaugural list that was recently released. The magazine states the rankings "are the first to combine the most accurate pricing estimates available with students' likely earnings after graduation and a unique... College's third class of Physician Assistant students graduate News September 7, 2006 | news2006 Dr. 
Gloria Stewart, director of the Marietta College Physician Assistant Studies Program, is pleased to announce that the 21 members of the third class of master's students completed their degree requirements in early August. Four Marietta students awarded scholarships from Ohio Space Grant Consortium April 10, 2017 | Student News After being awarded scholarships to conduct research throughout the year, four Marietta College students made presentations at the 2017 Ohio Space Grant Consortium (OSGC) Student Research Symposium at the Ohio Aerospace Institute in Cleveland recently. Sheldon Mullet '17 (Dundee, Ohio), Jennifer Starkey '17 (El Cajon, California), Emily... Board of Trustees approves tenure for nine Marietta faculty members Feature February 24, 2014 | Board of Trustees Marietta College's Board of Trustees recently approved tenure for nine faculty members, which will be implemented on July 1. Former PetSmart executive to deliver 2019 Commencement address April 1, 2019 | Commencement 2019 Barbara Perry Fitzgerald '73 has had many roles at Marietta College since stepping onto campus as a freshman in 1969. On Sunday, May 5th, after completing her 14th and final year as a member of Marietta's Board of Trustees, she will add to that list: the 2019 Commencement Keynote Speaker. Eight Marietta College students selected to perform with honor bands January 4, 2018 | Honors Jazz Band Eight Marietta College students, who are all members of the Symphonic Band, Wind Ensemble and Jazz Ensemble, have been selected to perform with the honor bands during the Ohio Private College Instrumental Conductors Association (OPCICA) Honor Band Festival Weekend from January 20-21 at The College of Wooster. This will be the 30th annual... Members of the Class of 2020 join the College during Matriculation ceremony August 24, 2016 | Class of 2020 President William N. 
Ruud, presiding over his first Marietta College Matriculation ceremony, thanked the more than 300 parents in attendance for entrusting the safety and future of their children with the College. Political Science professor presenting China Case Study in October Press Release September 15, 2005 | news2005 Dr. Jacqueline DeLaat, a McCoy Professor of Political Science at Marietta College, is presenting a case study, "One Step Forward, Two Steps Back" at the annual meeting of the International Academy for Case Studies, in Las Vegas from Oct. 12-16. A previous case study of DeLaat's won an award from this group in 1995. The current case study... McCoy Professor publishes a book on teaching leadership Marietta College's Dr. Gama Perruci, McCoy Professor and Dean of the McDonough Leadership Center, recently published his second book, entitled Teaching Leadership: Bridging Theory and Practice (Edward Elgar Publishing). Seven Marietta seniors to present at ACS' national meeting March 24, 2017 | ACS Seven Marietta College seniors will present their chemistry research when they attend the American Chemical Society National Meeting being held April 2-6 in San Francisco. Presenting their research at the National Meeting and Exposition will cap off the undergraduate careers of Megan Bache '17 (Westland, Michigan), Adam Garlow '17 (Vincent,... Psychology professor earns distinguished teaching award Feature December 2, 2013 | McCoy Professor Students from Marietta, Shenandoah high schools take top honors at 8th annual Honor High School Solo Recital March 12, 2019 | Music Two Marietta High School students and another from Shenandoah High School claimed the top spots in Marietta College's eighth annual Honor High School Solo Recital on March 2. 
Marietta High School's Christian MesMarteau, a tenor, earned the top spot in the vocal division, while Jodan Weber, marimba, of Scott Kitchen private studio, finished... Class of 2019 PA students honored during annual White Coat ceremony October 31, 2017 | White Coat ceremony Thirty-seven members of Marietta College's Physician Assistant Studies Class of 2019 participated in the annual White Coat ceremony held on Saturday, October 21st. The ceremony marked a day of transition for the Physician Assistant (PA) students as they progressed from being students to being officially welcomed into the PA profession,...
\section{Introduction} Long-distance quantum communication requires quantum entanglement distributed over two end nodes of a quantum communication channel \cite{ekert1991quantum, kimble2008quantum, wehner2018quantum}. Due to the optical absorption and other noise in the channel, the error of direct communication increases exponentially with the distance, thus reducing the key rates in quantum key distribution. To overcome this problem, the quantum repeater protocol has been proposed, where a series of entanglement generation and swapping operations are performed to extend the entanglement to farther and farther nodes with only polynomial cost \cite{briegel1998quantum}. The practical utilization of a quantum repeater requires quantum memories \cite{duan2001long, bussieres2013prospective}. Pioneering experiments have been performed toward the implementation of a quantum repeater with atomic quantum memory. For example, photonic qubits have been stored as collective spin wave excitations in the atomic ensemble \cite{hsiao2018highly, wang2019efficient, usmani2010mapping}; entanglement between the memory and transmitting photons has also been realized \cite{matsukevich2005entanglement, farrera2018entanglement, PhysRevX.9.041033}. Several methods have been proposed to further improve the quantum repeater protocol. One is to use multiplexed quantum memories, which significantly reduce the required time to establish entanglement in the quantum communication channel \cite{collins2007multiplexed, lan2009multiplexed, parniak2017wavevector}. Another possibility is to explore high-dimensional entanglement in the quantum network \cite{bechmann2000quantum, cerf2002security}, which increases the capacity of the communication channel and thus enhances the quantum communication efficiency \cite{simon2007quantum}.
High-dimensional entanglement also has plenty of applications beyond quantum communication, such as quantum teleportation with high capacity \cite{steinlechner2017distribution, hu2019experimental, PhysRevLett.123.070505}, quantum distillation \cite{li2014entanglement, kwiat2001experimental} and robust Bell tests \cite{dada2011experimental, wang2018multidimensional}. Recently, many efforts have been devoted to creating high-dimensional entanglement sources in different systems, like rare-earth-doped crystals \cite{kutluer2017solid, ikuta2018four, martin2017quantifying, tiranov2017quantification, edgar2012imaging, schneeloch2019quantifying, kues2017chip}, integrated devices \cite{wang2018multidimensional, kues2017chip} and atomic systems \cite{wen2019multiplexed, ding2016high, pan2019orbital}. Moreover, the photonic qudits possessing the high-dimensional entanglement have been stored in quantum memory elements, based on atomic ensembles \cite{parigi2015storage, zhang2016experimental} or rare-earth-doped crystals \cite{PhysRevLett.123.080502, usmani2010mapping,kutluer2017solid}. Despite its importance, the verification of a high-dimensional entangled state and the certification of the entanglement dimension are sophisticated tasks for the experiments. The standard method to estimate the entanglement fidelity is to reconstruct the full quantum state, for example, through quantum state tomography \cite{PhysRevA.64.052312, thew2002qudit}. It works well for low-dimensional entangled states \cite{ikuta2018four}; but the measurement costs increase significantly with the system dimension. Besides being time consuming, it also requires the setup to be stable during the whole measurements. Furthermore, some measurement settings may not be available for a given experimental platform \cite{tiranov2017quantification, schneeloch2019quantifying}. 
To overcome these difficulties, several methods are proposed and experimentally demonstrated to efficiently characterize the high-dimensional entanglement with sparse data, such as entanglement witness \cite{dada2011experimental, tiranov2017quantification, schneeloch2019quantifying, krenn2014generation} and compressed sensing \cite{gross2010quantum, riofrio2017experimental}. In this paper, we experimentally demonstrate an alternative and effective method to generate high-dimensional entanglement between a flying photon pulse and a spin wave stored in an atomic quantum memory based on the use of spatial multiplexing. Through excitation of a one-dimensional (1D) array of $10$ atomic memory cells \cite{pu2017experimental, jiang2019experimental}, we generate high-dimensional entanglement carried by different spatial modes of the photon and the atoms. These different modes are brought together for interference through an acoustic-optical deflector (AOD) to confirm the multi-dimensional entanglement. The high-dimensional entanglement is verified through different kinds of entanglement witnesses, and we confirm that at least eight-dimensional entanglement is achieved experimentally \cite{krenn2014generation}. Entanglement of formation is also measured to confirm a lower bound of 4-dimensional entanglement \cite{tiranov2017quantification}. Finally, as an application, we demonstrate the violation of high-dimensional Bell-type inequalities using the stored entangled state \cite{collins2002bell}. \begin{figure*}[htb] \centering \includegraphics[width=18cm]{figure1.pdf}\\ \caption{\textbf{Generation of high dimensional entanglement between a photon and a multiplexed atomic quantum memory (MAQM).} \textbf{(a)} Setup. HWP represents half-wave plate, QWP for quarter-wave plate, and SPD for single photon detector. 
The two programmable acousto-optic deflectors (AOD) (W-AOD/R-AOD or S-AOD/I-AOD), two lenses and the atomic ensemble in the middle under 4f configuration form the multiplexing/demultiplexing optical circuits. The write pulse is split into ten paths in the $x$ direction (4 paths shown here for clarity). The entanglement between a signal photon and a spin wave excitation in the memory array is generated by the DLCZ protocol. To verify the entanglement, the spin wave is retrieved into an idler photon. The ten optical modes of the signal/idler photon are further combined in the S-AOD/I-AOD; measurements in different bases can be performed by adjusting the amplitudes and phases of the RF signals on the AODs. \textbf{(b)} Energy diagram $|g\rangle\equiv|5S_{1/2}, F=1\rangle$, $|s\rangle\equiv|5S_{1/2}, F=2\rangle$, $|e\rangle\equiv|5P_{1/2}, F=2\rangle$. The write beam is blued-detuned by $\Delta=16\,$MHz from resonance at the central memory cell. \textbf{(c)} Basis choice for AODs. the blue (green) lines are the signal (idler) modes; the pink squares represent the individual memory cells, and the red rectangles are the S-AOD/I-AOD. The AODs can be set to measure the X-basis (upper row), or the K-basis (lower row). More details can be found in Appendix~B.} \end{figure*} \begin{figure*}[tbp] \centering \includegraphics[width=18cm]{figure2.pdf}\\ \caption{\textbf{Measurement of entanglement witness.} The sum of visibility $V_x$, $V_y$ and $V_z$ in X-space and K-space are shown respectively. Each pixel in the plot corresponds to a two-dimensional subspace spanned by $X=x_1$ and $X=x_2$ ($K=k_1$ and $K=k_2$) for both the signal and the idler photons. Every mode pair requires 12 measurement settings to obtain the visibility, and the ideal sum of visibility is 3 for each pair. } \end{figure*} \section{Experimental setup} Our experiment is illustrated in Fig.~1 schematically. A cold $^{87}$Rb ensemble is loaded into a 3D MOT with about 2-3 billion atoms. 
After a $10\,$ms compression and a $7\,$ms polarization gradient cooling (PGC) process, the temperature of the ensemble is reduced to $25\,\mu$K. Then we further prepare the atoms in the ground state $|g\rangle\equiv|5S_{1/2}, F=1\rangle$ through optical pumping. The experiment starts with a $100\,$ns write pulse, which is $16\,$MHz blue-detuned to the D1 transition $|g\rangle\rightarrow|e\rangle\equiv|5P_{1/2}, F=2\rangle$. Upon the detection of a signal photon from spontaneous Raman scattering, we know that an atom has been transferred into the storage state $|s\rangle\equiv|5S_{1/2}, F=2\rangle$, and that a spin wave has been generated in the atomic ensemble. Then it can be retrieved back into an idler photon by a $500\,$ns strong read pulse, resonant with the $|s\rangle\rightarrow|e\rangle$ transition. The time interval between the write and the read pulse, $7.9\,\mu$s, is the same as the Larmor period of the ensemble. In this way, we can achieve the highest retrieval efficiency of the spin wave excitation \cite{li2019quantum}. On the other hand, if there is no signal photon detected following the write pulse, a strong clean pulse identical to the read pulse will be applied to bring the atoms back to the ground state $|g\rangle$, and the experimental cycle will be repeated. The multiplexing/demultiplexing optical circuits consist of the acousto-optic deflectors (AODs) and lenses in a 4f configuration to address different regions of the cold $^{87}$Rb ensemble. Each region serves as an individually addressable memory cell with low cross talk errors \cite{pu2017experimental}. We use the Duan-Lukin-Cirac-Zoller (DLCZ) protocol to generate the high dimensional entanglement between the signal photon and the spin wave in the memory array \cite{duan2001long}. In this work, we divide the weak write pulse into 10 spatial modes (4 shown in Fig.~1(a) for clarity).
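As a rough order-of-magnitude check of the write-read timing: if the $7.9\,\mu$s interval equals the Larmor period set by the linear Zeeman splitting of the $^{87}$Rb ground state ($\approx 0.70$ MHz/G for $g_F=1/2$), the implied bias field is about $0.18$ G. The field value is an inference for illustration only, not a number quoted in the text:

```python
# Rough consistency check: what bias magnetic field corresponds to a
# 7.9 us Larmor period? Assumes the standard 87Rb 5S1/2 linear Zeeman
# splitting of ~0.70 MHz/G between adjacent m_F levels (g_F = 1/2).
# The resulting field is an illustrative inference, not from the paper.
T_larmor = 7.9e-6                # s, write-read interval from the text
f_larmor = 1.0 / T_larmor        # Hz, ~126.6 kHz precession frequency
B_gauss = f_larmor / 0.70e6      # implied bias field, ~0.18 G
print(f"{f_larmor / 1e3:.1f} kHz, {B_gauss:.2f} G")
```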
Since the signal photon and the spin wave come from the same spatial mode, the entangled state can be written as: \begin{equation} |\Psi\rangle = \sum_{i=0}^{9}C_i|i\rangle_s|i\rangle_a, \end{equation} where $|i\rangle_s$ $(|i\rangle_a)$ refers to the signal photon (spin wave) in the mode $i$, and the coefficients $C_i$ can be controlled by adjusting the amplitudes and phases of RF tones in the W-AOD. Ideally we want $C_i=1/\sqrt{10}$ ($i=0,\,1,\,\cdots,\,9$) for the maximally entangled state. To verify the high-dimensional entanglement, we first retrieve the spin wave excitation into an idler photon by a strong read pulse. The signal modes and the idler modes are combined by the S-AOD and I-AOD respectively for different measurement bases. Two types of bases will be used: the spatial basis $\hat{X}=|x\rangle\langle x|$ ($x=0,\,1,\,\cdots,\,9$), and the momentum basis $\hat{K}=|k\rangle\langle k|$, where $|k\rangle = \sum_{x=0}^{9} \exp (2\pi ixk/10)|x\rangle/\sqrt{10}$ ($k=0,\,1,\,\cdots,\,9$). \section{Entanglement witness} Quantum state tomography can reconstruct the full density matrix, but it is time-consuming for high-dimensional entangled states. Here, to efficiently verify the entanglement, we use the entanglement witness method \cite{ding2016high, krenn2014generation}. The entanglement witness we use here was originally designed for photons carrying orbital angular momentum (OAM). Three mutually unbiased bases (MUBs) are measured for every 2-dimensional subspace; following Ref.~\cite{ding2016high}, we call them diagonal/anti-diagonal ($\sigma_x$), left/right ($\sigma_y$), and horizontal/vertical ($\sigma_z$) bases. The visibility is defined as the correlation between the signal and the idler photons $V_i=|\langle\sigma_i^{(s)} \otimes \sigma_i^{(i)}\rangle|$ ($i=x, y, z$). We measure the visibility for all the two-dimensional subspaces in the X (K) space spanned by $X=x_1$ and $X=x_2$ ($K=k_1$ and $K=k_2$), as shown in Fig.~2.
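The spatial and momentum bases defined above are mutually unbiased: every overlap between an $X$ eigenstate and a $K$ eigenstate has modulus squared $1/10$. A quick NumPy check of this property (illustrative only, not the analysis code of the experiment):

```python
import numpy as np

D = 10
x = np.arange(D)
# Columns of F are the momentum-basis states
# |k> = (1/sqrt(D)) * sum_x exp(2*pi*1j*x*k/D) |x>
F = np.exp(2j * np.pi * np.outer(x, x) / D) / np.sqrt(D)

# The K basis is orthonormal (F is unitary) ...
assert np.allclose(F.conj().T @ F, np.eye(D))
# ... and mutually unbiased with the spatial basis: |<x|k>|^2 = 1/D
assert np.allclose(np.abs(F) ** 2, 1 / D)
print("K basis is unitary and mutually unbiased with X")
```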
If the system has at most $d$-dimensional entanglement, it can be shown that the sum of visibilities for all the mode pairs $W$ satisfies \cite{ding2016high}: \begin{equation} W \le f(d) \equiv 3D(D - 1) / 2 - D(D - d), \end{equation} where $D=10$ is the total number of modes. When the measured $W$ exceeds this bound, we can conclude that the entanglement is at least $(d+1)$-dimensional. Here we measure $W_X=111.6\pm0.8$ for the X-space, and $W_K=111.1\pm0.8$ for the K-space. Both of them are larger than $f(d)=105$ for $d=7$, which verifies that at least an 8-dimensional entangled state is generated. The major obstacle to reaching higher visibility is accidental coincidence counts \cite{de2006direct, jing2019entanglement}. They reduce the signal-to-noise ratio and hence the visibility of all the two-dimensional subspaces. If we subtract the accidental coincidence from the data, as described in Appendix~C, the calculated total visibilities are $W_{X}^{\prime}=126.5\pm 1.0$ and $W_{K}^{\prime}=125.5\pm 0.9$, which correspond to at least 10- and 9-dimensional entanglement, respectively. \begin{figure}[ptb] \centering \includegraphics[width=8.6cm]{figure4.pdf}\\ \caption{\textbf{Entanglement of formation.} The entanglement of formation is measured for both X-basis and K-basis after subtracting accidental coincidence. The maximal values are 1.793 ebits for the $X$ basis and 1.90 ebits for the $K$ basis. The red dashed line represents the results for an ideal maximally entangled state with $\log_2 d$ ebits. } \end{figure} \section{Entanglement of formation} We also bound the entanglement of formation $E_F$ to verify the entangled state. The entanglement of formation gives the minimal number of maximally entangled qubit-qubit states (ebits) that is required to get one copy of the desired entangled state through local operations and classical communication (LOCC) \cite{martin2017quantifying, tiranov2017quantification}.
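The witness bound $W \le f(d)$ above can be turned into a small certification routine: given a measured $W$, find the largest entanglement dimension it certifies. A sketch using the numbers from the text ($D=10$, $W_X=111.6$, and $W_X^{\prime}=126.5$ after accidental-coincidence subtraction):

```python
# Dimension certification from the entanglement witness:
# W <= f(d) = 3*D*(D-1)/2 - D*(D-d) for any state with at most
# d-dimensional entanglement, so W > f(d) certifies dimension d+1.

def f(d, D=10):
    return 3 * D * (D - 1) // 2 - D * (D - d)

def certified_dim(W, D=10):
    # largest d+1 such that the measured W exceeds the bound f(d)
    return max(d + 1 for d in range(1, D + 1) if W > f(d, D))

print(f(7))                   # 105, the bound quoted in the text
print(certified_dim(111.6))   # 8  (measured W_X)
print(certified_dim(126.5))   # 10 (after subtracting accidentals)
```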
A lower bound of $E_F$ is given by \cite{tiranov2017quantification} \begin{equation} E_F\ge- \log_2(1-B^2/2), \end{equation} where $B$ is defined as \begin{align} B=\frac{2}{\sqrt{|C|}} \sum_{(j,k) \in C\atop j< k} &\Big( |\langle j,j|\rho|k,k\rangle|\nonumber\\ &-\sqrt{\langle j,k|\rho|j,k\rangle\langle k,j|\rho|k,j\rangle} \Big). \end{align} In the above equation, $\rho$ is the density matrix of the entangled state; $j$ $(k)$ indicates the mode $j$ $(k)$ of the signal and the idler photons; $C$ is a set of mode pairs $(j,\,k)$ and $|C|$ denotes the number of pairs in the set. To lower bound $E_F$, we need to measure the $\langle j,k|\rho|j,k\rangle$ and $\langle j,j|\rho|k,k\rangle$ terms, which can be obtained in a similar way to the visibility in the entanglement witness experiment. Note that in Ref.~\cite{tiranov2017quantification} the $\langle j,j|\rho|k,k\rangle$ terms for $|j-k|>1$ are bounded by the nearest neighbor terms, and the bounds become looser for farther away pairs; hence the bound on the entanglement of formation ceases to improve for these pairs. In comparison, here we are able to measure all these terms directly; since the entangled state we prepare has high fidelity (even though we are not able to measure the fidelity by quantum state tomography), it will be beneficial to include all the mode pairs in Eq.~(4). In Fig.~3 we show the increase of the bound on the entanglement of formation as we consider more and more X or K modes. In this way, we get the tightest bound on the entanglement of formation as $1.79\pm0.06$ and $1.90\pm0.07$ ebits for the X-space and K-space measurements respectively (after subtraction of accidental coincidence; details can be found in Appendix~C). This result verifies genuine 4-dimensional entanglement. \section{Bell-type inequality} Finally, we use our entangled state to study the violation of high-dimensional Bell-type inequalities.
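As a consistency check of the two formulas above: for an ideal maximally entangled $d$-dimensional state, $\langle j,j|\rho|k,k\rangle=1/d$, the cross terms vanish, and summing over all $|C|=d(d-1)/2$ pairs gives $B=\sqrt{2(d-1)/d}$, so the bound saturates at $\log_2 d$ ebits (the dashed line in Fig.~3). A short numerical verification of this limiting case (illustrative only):

```python
import math

# For the maximally entangled state sum_j |j,j>/sqrt(d):
#   <j,j|rho|k,k> = 1/d, cross terms <j,k|rho|j,k> = 0 (j != k),
# so B = (2/sqrt(|C|)) * |C| * (1/d) = sqrt(2*(d-1)/d) with
# |C| = d*(d-1)/2, and the bound -log2(1 - B^2/2) equals log2(d).

def ef_bound(B):
    """Lower bound on the entanglement of formation (in ebits)."""
    return -math.log2(1 - B ** 2 / 2)

for d in (2, 4, 10):
    B = math.sqrt(2 * (d - 1) / d)
    assert abs(ef_bound(B) - math.log2(d)) < 1e-12
print("bound saturates log2(d) for the maximally entangled state")
```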
The original Bell inequality focuses on a qubit-qubit entangled state \cite{bell1964physics}. It has been generalized to higher-dimensional entangled systems with the advantage of stronger resistance to experimental noise \cite{kaszlikowski2000violations, collins2002bell}. Consider a bipartite quantum system of dimension $d\times d$. If the correlation can be described by local hidden variable theories, the Bell-type parameter $S_d$ will satisfy the CGLMP inequalities \cite{dada2011experimental, collins2002bell}: \begin{widetext} \begin{align} S_d\equiv&\sum_{l=0}^{[d/2]-1}\left(1-\frac{2l}{d-1}\right) \Big\{[P(I_0=S_0+l)-P(I_0=S_0-l-1)]+[P(S_1=I_0+l)-P(S_1=I_0-l-1)]\nonumber\\ &\qquad\qquad\qquad\qquad +[P(S_0=I_1+l+1)-P(S_0=I_1-l)]+[P(I_1=S_1+l)-P(I_1=S_1-l-1)]\Big\}\nonumber\\ \le& 2. \end{align} \end{widetext} Here two possible detector settings can be used for the signal and the idler photons respectively, and for each detector setting there are $d$ possible measurement outcomes: $S_s,\,I_i=0,\,\cdots,\,d-1$ ($s,\,i=0,\,1$ represent the measurement settings). $P(I_i=S_s)$ is the probability that the signal photon and the idler photon outcomes are the same: \begin{equation} P(S_s=I_i)=\sum_{k=0}^{d-1}P(S_s=k,I_i=k). \end{equation} Similarly \begin{equation} P(S_s=I_i+l)=\sum_{k=0}^{d-1}P(S_s=k,I_i=(k-l)\text{ mod }d). \end{equation} The measurement bases for the signal and the idler photons are \begin{equation} |k\rangle_{S,s}=\frac{1}{\sqrt{d}}\sum_{x=0}^{d-1}\mathrm{exp}[2\pi ix(k+\Theta_s)/d]|x\rangle_S, \end{equation} \begin{equation} |l\rangle_{I,i}=\frac{1}{\sqrt{d}}\sum_{x=0}^{d-1}\mathrm{exp}[2\pi ix(-l+\Phi_i)/d]|x\rangle_I, \end{equation} where $k,\,l=0,\,\cdots,\,d-1$ and $\Theta_0=0$, $\Theta_1=0.5$, $\Phi_0=0.25$, $\Phi_1=-0.25$. 
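For reference, the quantum value of $S_d$ for an ideal maximally entangled state can be computed directly from the measurement bases just defined. The sketch below groups the joint probabilities as in the original CGLMP paper \cite{collins2002bell}; with these phase settings it reproduces the textbook values $S_2=2\sqrt{2}\approx2.828$ and $S_3\approx2.873$. It is an illustrative calculation, not the analysis code of the experiment:

```python
import numpy as np

# CGLMP parameter S_d for the ideal state sum_x |x,x>/sqrt(d),
# measured in the bases above (Theta = 0, 0.5; Phi = 0.25, -0.25).

def joint_probs(d, theta, phi):
    """P[k, l]: signal outcome k (phase theta), idler outcome l (phase phi)."""
    x = np.arange(d)
    P = np.zeros((d, d))
    for k in range(d):
        for l in range(d):
            amp = np.exp(-2j * np.pi * x * (k - l + theta + phi) / d).sum()
            P[k, l] = np.abs(amp / d ** 1.5) ** 2
    return P

def corr(P, m):
    """P(signal outcome - idler outcome = m mod d)."""
    d = P.shape[0]
    return sum(P[k, (k - m) % d] for k in range(d))

def S(d):
    P00 = joint_probs(d, 0.0, 0.25)    # settings (S_0, I_0)
    P10 = joint_probs(d, 0.5, 0.25)    # (S_1, I_0)
    P01 = joint_probs(d, 0.0, -0.25)   # (S_0, I_1)
    P11 = joint_probs(d, 0.5, -0.25)   # (S_1, I_1)
    total = 0.0
    for l in range(d // 2):
        w = 1 - 2 * l / (d - 1)
        total += w * ((corr(P00, l) - corr(P00, -l - 1))
                      + (corr(P10, -l - 1) - corr(P10, l))
                      + (corr(P11, l) - corr(P11, -l - 1))
                      + (corr(P01, -l) - corr(P01, l + 1)))
    return total

print(round(S(2), 4))   # 2.8284 = 2*sqrt(2), the CHSH value
print(round(S(3), 4))   # 2.8729, the known d = 3 quantum value
print(S(10) > 2)        # True: the ideal d = 10 violation persists
```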
\begin{figure}[ptb] \centering \includegraphics[width=8.6cm]{figure3.pdf}\\ \caption{\textbf{Violation of the CGLMP inequalities.} The error bars of the Bell parameter $S_d$ are estimated assuming a Poisson distribution of the photon counting. The violation of the Bell-type inequalities is observed for up to $d=6$ (red line), and up to $d=10$ after subtracting the accidental coincidence (green line).} \end{figure} Figure~4 shows the measured $S_d$ as a function of the dimension $d$, compared with the classical bound. The violation of the CGLMP inequality remains up to $d=6$ (red line), and it can be extended to $d=10$ (green line) when the accidental coincidence is subtracted. This reveals the quantum nature of the correlation between the signal photon and the idler photon (and thus the spin wave excitation in the atomic ensemble). \section{Discussion and Conclusion} The measured entanglement dimension is mainly limited by two factors: the accidental coincidence of photon counting on the signal and the idler detectors (details in Appendix~C), and the difference in the amplitude and phase settings on different AODs (details in Appendix~B and Appendix~D). There are also higher-order excitations and background noise, but these are not the dominant errors in our experiment. To improve the visibility, the cross correlation $g_2$ between the signal and the idler photons can be increased by reducing the generation rate of the signal photon, so long as the coincidence rate is kept much higher than the background noise. In principle, a 2D array of memory cells can be used to create a higher dimensional entangled state, but there the visibility measurement will be difficult for two memory cells that are neither in the same row nor in the same column. Although the visibility can still be bounded based on the methods of Ref.~\cite{tiranov2017quantification}, the bound is not tight enough to yield a larger number of ebits or a larger witness parameter $W$.
To summarize, in this work we generate an entangled state between a signal photon and a spin wave excitation in a 1D MAQM array with 10 spatial modes. Entanglement witness and entanglement of formation are used to verify the entanglement, and with these two methods we confirm at least 8- and 4-dimensional entanglement respectively. The Bell-type inequality is studied as an application, which in turn confirms the existence of entanglement in the system. Our experiment is an important step toward quantum repeaters and quantum networks using multiplexed quantum memories and high-dimensional entanglement. If each memory cell can store a photon carrying other degrees of freedom \cite{zhang2016experimental}, we can combine the advantages of high-dimensional entanglement and multiplexed quantum memory and expect further improvement in performance. \begin{acknowledgments} This work was supported by the Ministry of Education of China, Tsinghua University, and the National Key Research and Development Program of China (2016YFA0301902). Y.K.W. acknowledges support from the Shuimu Tsinghua Scholar Program and the International Postdoctoral Exchange Fellowship Program (Talent-Introduction Program). \end{acknowledgments} \begin{figure*}[ht] \centering \includegraphics[width=18cm]{figureS1.pdf}\\ \caption{\textbf{The $g_2$ correlation and retrieval efficiency of the MAQM.} The $g_2$ correlation and retrieval efficiency of the MAQM are measured before the experiment. The collection probability of a signal photon is about $0.6\%$. We choose one $X$ row (specified by the purple box) to carry out the experiment. Its $g_2$ correlation and retrieval efficiency for each memory cell are both high enough and close to each other.} \end{figure*}
#include "config.h"

#include <math.h>
#include <stdlib.h>

#include "cache.h"

#include "vtim.h"

/*--------------------------------------------------------------------
 * TTL and Age calculation in Varnish
 *
 * RFC2616 has a lot to say about how caches should calculate the TTL
 * and expiry times of objects, but it sort of misses the case that
 * applies to Varnish: the server-side cache.
 *
 * A normal cache, shared or single-client, has no symbiotic relationship
 * with the server, and therefore must take a very defensive attitude
 * if the Data/Expiry/Age/max-age data does not make sense.  Overall
 * the policy described in section 13 of RFC 2616 results in no caching
 * happening on the first little sign of trouble.
 *
 * Varnish on the other hand tries to offload as many transactions from
 * the backend as possible, and therefore just passing through everything
 * if there is a clock-skew between backend and Varnish is not a workable
 * choice.
 *
 * Varnish implements a policy which is RFC2616 compliant when there
 * is no clockskew, and falls as gracefully as possible otherwise.
 * Our "clockless cache" model is synthesized from the bits of RFC2616
 * that talk about how a cache should react to a clockless origin server,
 * and more or less uses the inverse logic for the opposite relationship.
 *
 */

void
RFC2616_Ttl(struct busyobj *bo)
{
	unsigned max_age, age;
	double h_date, h_expires;
	char *p;
	const struct http *hp;
	struct exp *expp;

	CHECK_OBJ_NOTNULL(bo, BUSYOBJ_MAGIC);
	expp = &bo->exp;

	hp = bo->beresp;

	assert(expp->entered != 0.0 && !isnan(expp->entered));
	/* If all else fails, cache using default ttl */
	expp->ttl = cache_param->default_ttl;

	max_age = age = 0;
	h_expires = 0;
	h_date = 0;

	/*
	 * Initial cacheability determination per [RFC2616, 13.4]
	 * We do not support ranges yet, so 206 is out.
	 */

	if (http_GetHdr(hp, H_Age, &p)) {
		age = strtoul(p, NULL, 0);
		expp->age = age;
	}
	if (http_GetHdr(hp, H_Expires, &p))
		h_expires = VTIM_parse(p);

	if (http_GetHdr(hp, H_Date, &p))
		h_date = VTIM_parse(p);

	switch (http_GetStatus(hp)) {
	default:
		expp->ttl = -1.;
		break;
	case 200: /* OK */
	case 203: /* Non-Authoritative Information */
	case 300: /* Multiple Choices */
	case 301: /* Moved Permanently */
	case 302: /* Moved Temporarily */
	case 307: /* Temporary Redirect */
	case 410: /* Gone */
	case 404: /* Not Found */
		/*
		 * First find any relative specification from the backend
		 * These take precedence according to RFC2616, 13.2.4
		 */

		if ((http_GetHdrField(hp, H_Cache_Control, "s-maxage", &p) ||
		    http_GetHdrField(hp, H_Cache_Control, "max-age", &p)) &&
		    p != NULL) {

			if (*p == '-')
				max_age = 0;
			else
				max_age = strtoul(p, NULL, 0);

			if (age > max_age)
				expp->ttl = 0;
			else
				expp->ttl = max_age - age;
			break;
		}

		/* No expire header, fall back to default */
		if (h_expires == 0)
			break;

		/* If backend told us it is expired already, don't cache. */
		if (h_expires < h_date) {
			expp->ttl = 0;
			break;
		}

		if (h_date == 0 ||
		    fabs(h_date - expp->entered) < cache_param->clock_skew) {
			/*
			 * If we have no Date: header or if it is
			 * sufficiently close to our clock we will
			 * trust Expires: relative to our own clock.
			 */
			if (h_expires < expp->entered)
				expp->ttl = 0;
			else
				expp->ttl = h_expires - expp->entered;
			break;
		} else {
			/*
			 * But even if the clocks are out of whack we can still
			 * derive a relative time from the two headers.
			 * (the negative ttl case is caught above)
			 */
			expp->ttl = (int)(h_expires - h_date);
		}
	}

	/* calculated TTL, Our time, Date, Expires, max-age, age */
	VSLb(bo->vsl, SLT_TTL, "RFC %.0f %.0f %.0f %.0f %.0f %.0f %.0f %u",
	    expp->ttl, -1., -1., expp->entered, expp->age,
	    h_date, h_expires, max_age);
}

/*--------------------------------------------------------------------
 * Body existence, fetch method and close policy.
 */

enum body_status
RFC2616_Body(struct busyobj *bo, struct dstat *stats)
{
	struct http *hp;
	char *b;

	hp = bo->beresp;

	if (hp->protover < 11 && !http_HdrIs(hp, H_Connection, "keep-alive"))
		bo->should_close = 1;
	else if (http_HdrIs(hp, H_Connection, "close"))
		bo->should_close = 1;
	else
		bo->should_close = 0;

	if (!strcasecmp(http_GetReq(bo->bereq), "head")) {
		/*
		 * A HEAD request can never have a body in the reply,
		 * no matter what the headers might say.
		 * [RFC2616 4.3 p33]
		 */
		stats->fetch_head++;
		return (BS_NONE);
	}

	if (hp->status <= 199) {
		/*
		 * 1xx responses never have a body.
		 * [RFC2616 4.3 p33]
		 */
		stats->fetch_1xx++;
		return (BS_NONE);
	}

	if (hp->status == 204) {
		/*
		 * 204 is "No Content", obviously don't expect a body.
		 * [RFC2616 10.2.5 p60]
		 */
		stats->fetch_204++;
		return (BS_NONE);
	}

	if (hp->status == 304) {
		/*
		 * 304 is "Not Modified" it has no body.
		 * [RFC2616 10.3.5 p63]
		 */
		stats->fetch_304++;
		return (BS_NONE);
	}

	if (http_HdrIs(hp, H_Transfer_Encoding, "chunked")) {
		stats->fetch_chunked++;
		return (BS_CHUNKED);
	}

	if (http_GetHdr(hp, H_Transfer_Encoding, &b)) {
		stats->fetch_bad++;
		return (BS_ERROR);
	}

	if (http_GetHdr(hp, H_Content_Length, &bo->h_content_length)) {
		stats->fetch_length++;
		return (BS_LENGTH);
	}

	if (http_HdrIs(hp, H_Connection, "keep-alive")) {
		/*
		 * Keep alive with neither TE=Chunked or C-Len is impossible.
		 * We assume a zero length body.
		 */
		stats->fetch_zero++;
		return (BS_ZERO);
	}

	if (http_HdrIs(hp, H_Connection, "close")) {
		/*
		 * In this case, it is safe to just read what comes.
		 */
		stats->fetch_close++;
		return (BS_EOF);
	}

	if (hp->protover < 11) {
		/*
		 * With no Connection header, assume EOF.
		 */
		stats->fetch_oldhttp++;
		return (BS_EOF);
	}

	/*
	 * Fall back to EOF transfer.
	 */
	stats->fetch_eof++;
	return (BS_EOF);
}

/*--------------------------------------------------------------------
 * Find out if the request can receive a gzip'ed response
 */

unsigned
RFC2616_Req_Gzip(const struct http *hp)
{
	/*
	 * "x-gzip" is for http/1.0 backwards compat, final note in 14.3
	 * p104 says to not do q values for x-gzip, so we just test
	 * for its existence.
	 */
	if (http_GetHdrData(hp, H_Accept_Encoding, "x-gzip", NULL))
		return (1);

	/*
	 * "gzip" is the real thing, but the 'q' value must be nonzero.
	 * We do not care a hoot if the client prefers some other
	 * compression more than gzip: Varnish only does gzip.
	 */
	if (http_GetHdrQ(hp, H_Accept_Encoding, "gzip") > 0.)
		return (1);

	/* Bad client, no gzip. */
	return (0);
}

/*--------------------------------------------------------------------*/

int
RFC2616_Do_Cond(const struct req *req)
{
	char *p, *e;
	double ims;
	int do_cond = 0;

	/*
	 * RFC 2616 13.3.4 states we need to match both ETag
	 * and If-Modified-Since if present.
	 */

	if (http_GetHdr(req->http, H_If_Modified_Since, &p)) {
		if (!req->obj->last_modified)
			return (0);
		ims = VTIM_parse(p);
		if (ims > req->t_req)	/* [RFC2616 14.25] */
			return (0);
		if (req->obj->last_modified > ims)
			return (0);
		do_cond = 1;
	}

	if (http_GetHdr(req->http, H_If_None_Match, &p) &&
	    http_GetHdr(req->obj->http, H_ETag, &e)) {
		if (strcmp(p, e) != 0)
			return (0);
		do_cond = 1;
	}

	return (do_cond);
}
# 17 Equations that Clogged My Social-Media Timeline

An image burbled up in my social-media feed the other day, purporting to be a list of "17 Equations that Changed the World." It's actually been circulating for a while (since early 2014), and purports to summarize the book by that name written by Ian Stewart. This list is typo-ridden, historically inaccurate and generally indicative of a lousy knowledge-distribution process that lets us down at every stage, from background research to fact-checking to copy-editing.

The following comments are meant to be representative, not exhaustive.

It's not known whether Pythagoras proved the theorem we named for him—or if any of the stories about him are more than legends, really. When you go back that far, the history of mathematics and science becomes semi-legendary. The best one can typically do for "evidence" is a fragment of a lost book quoted in another book that happened to survive, and all of it dating to decades or centuries after the events ostensibly being chronicled. Did Pythagoras actually prove the theorem we named after him, or did he merely observe that it held true in a few special cases, like the 3-4-5 right triangle? Tough to say, but the latter would have been easier, and it would seem to appeal to a number mystic, for whom it's all about ~~the Benjamins~~ successive whole numbers. Pythagoras himself probably wrote nothing, and nothing in his own words survives. It's not clear whether his contemporaries viewed him as a mathematician or primarily as a propounder of an ethical code. (Even only 150 years after the time he purportedly lived, the ancient authorities disagreed about whether Pythagoras was a vegetarian, with Aristoxenus saying no and Eudoxus yes.)

If Pythagoras had never lived, and a cult had attributed their work to that name in ritual self-denial; if the stories of his visiting Egypt and being the son of a Tyrian corn merchant began as parables and were later taken as biography—it would be hard to tell the result from what we have today. (And, in fact, groups of mathematicians do sometimes publish under a collective pseudonym: witness the Bourbaki collective.)

Typical, really: Indian and Chinese people do the actual work, and the white guy who likely didn't gets all the credit.

I'll outsource the criticism of the "logarithms" part:

> Once again [the] simple attribution to John Napier is exactly that, simplistic and historically misleading. We can find the principle on which logarithms are based in the work of several earlier mathematicians. We can find forms of proto-logarithms in both Babylonian and Indian mathematics and also in the system that Archimedes invented to describe very large numbers. In the fifteenth century Triparty, of the French mathematician Nicolas Chuquet we find the comparison between the arithmetical and geometrical progressions that underlay the concept of logarithms but if Chuquet ever took the next step is not clear. In the sixteenth century the German mathematician Michael Stifel studied the same comparison of progressions in his Arithmetica integra and did take the next step outlining the principle of logarithms but doesn't seem to have developed the idea further.
>
> It was in fact John Napier who took the final step and published the first set of logarithmic tables in his book Mirifici Logarithmorum Canonis Descriptio in 1614. However the Swiss clockmaker and mathematician, Jost Bürgi developed logarithms independently of Napier during the same period although his book of tables, Arithmetische und Geometrische Progress Tabulen, was first published in 1620.

The "calculus" line is a mess. For starters, in at least one version circulating online, it's got an extra "=" thrown in, which makes the whole thing gibberish. The $df$ over $dt$ notation is due to Leibniz, but the list attributes it to Newton, his bitter enemy (and a pretty bitter guy overall, by many accounts). Pierre de Fermat understood quite a bit of the subject before Newton worked on it, getting as far as computing the maxima and minima of curves by finding where their tangent lines are horizontal. And the philosophy of setting up the subject of calculus using limits is really a nineteenth-century approach to its foundations.

Inverse-square gravity was considered before Newton, and imaginary numbers before Euler.

Credit for the normal distribution should also go to de Moivre (earlier than Gauss) and Laplace (contemporaneous).

Maxwell never wrote his Equations in that manner; that came later, with Heaviside, Gibbs, Hertz and vector calculus. The simplification provided by the vector calculus is really nothing short of astonishing.

The idea of entropy came via Clausius, who found inspiration in the work of Carnot. The statement that entropy either increases or stays the same, which we could write as $dS \geq 0$, predates Boltzmann. What Boltzmann provided was an understanding of how entropy arises in statistical physics, the study of systems with zillions of pieces whose behavior we can't study individually, but only in the aggregate. If you want to attribute an equation to Boltzmann in recognition of his accomplishments, it'd be better to use the one that is actually carved on his tombstone,
$$S = k \log W.$$

I am not sure that $E = mc^2$ is the proper way to encapsulate the essence of relativity theory. It is a consequence, not a postulate or a premise. The Lorentz transformation equations would do a better job at cutting to the heart of the subject. Note that these formulae are named after Lorentz, not Einstein; to put the history very, very briefly, Lorentz wrote the equations down first, but Einstein understood what they meant. (And the prehistory of $E = mc^2$ is pretty fascinating, too.)

Plucking out the Schrödinger equation (the list omits the umlaut because careless) does a disservice to the history of quantum mechanics. There are ways of doing quantum physics without invoking the Schrödinger equation: Heisenberg's matrix mechanics, the Dirac–Feynman path integral, and the one it's my day job to work on. In fact, not only did Heisenberg's formulation come first, but we didn't know what Schrödinger's work meant until Max Born clarified that the square of the size of Schrödinger's complex number $\Psi$ is a probability.

The number of names in that last paragraph—and I wasn't even trying—is a clue that factoids and bullet points are not a good way of learning physics.

Yes, Robert May did write about the logistic map,
$$x_{t+1} = k x_t(1-x_t),$$
but he was hardly the first to poke at it. In his influential paper "Simple mathematical models with very complicated dynamics," there's a moment which expresses pretty well how science happens sometimes:

> How are these various cycles arranged along the interval of relevant parameter values? This question has to my knowledge been answered independently by at least 6 groups of people, who have seen the problem in the context of combinatorial theory, numerical analysis, population biology, and dynamical systems theory (broadly defined).

Also, d'Alembert was not named "d'Almbert."
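For what it's worth, the logistic map is easy to play with yourself. Here's a little Python sketch (parameter choices mine) showing the period-two cycle at $k = 3.2$, one stop along the period-doubling route to the chaos May wrote about:

```python
def logistic_orbit(k, x0=0.2, skip=1000, keep=8):
    """Iterate x_{t+1} = k * x_t * (1 - x_t), discarding a transient
    of `skip` steps, then return the next `keep` values."""
    x = x0
    for _ in range(skip):
        x = k * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = k * x * (1 - x)
        orbit.append(x)
    return orbit

# At k = 3.2 the orbit settles onto an attracting 2-cycle
# (roughly 0.513 and 0.799); at k = 4.0 it wanders aperiodically.
cycle = logistic_orbit(3.2)
chaos = logistic_orbit(4.0)
```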
package fr.univ.nantes.roomanager.dao.reservation

import fr.univ.nantes.roomanager.bean.ReservationBean

/**
 * @author Pierre Gaultier
 * @author Alexis Giraudet
 */
class ReservationDaoImpl extends ReservationDao {
  private var increment: Int = 0
  private var reservations: Set[ReservationBean] = Set()

  override def get(id: Int): ReservationBean = {
    val reservation: ReservationBean =
      reservations.find((reservation: ReservationBean) => reservation.getId() == id).get
    new Reservation(reservation.getId(), reservation)
  }

  override def update(reservation: ReservationBean): Unit = {
    if (reservations.contains(reservation)) {
      val newReservation: Reservation = new Reservation(reservation.getId(), reservation)
      // check the unique constraint against every *other* stored reservation
      if (!reservations.exists((other: ReservationBean) =>
            other != reservation && newReservation.uniqueConstraint(other)))
        // remove the old bean before adding the new one: a Set keeps the
        // element already present, so `+=` alone would not update it
        reservations = reservations - reservation + newReservation
      else
        throw new Exception()
    } else
      throw new Exception()
  }

  override def delete(reservation: ReservationBean): Unit = reservations -= reservation

  override def find(predicate: (ReservationBean) => Boolean): Traversable[ReservationBean] =
    reservations.filter(predicate).map((reservation: ReservationBean) =>
      new Reservation(reservation.getId(), reservation))

  override def create(reservation: ReservationBean): ReservationBean = {
    val newReservation: Reservation = new Reservation(increment, reservation)
    if (reservations.exists((other: ReservationBean) => newReservation.uniqueConstraint(other)))
      throw new Exception()
    reservations += newReservation
    increment += 1
    new Reservation(newReservation.getId(), newReservation)
  }

  override def getAll(): Traversable[ReservationBean] =
    reservations.map((reservation: ReservationBean) =>
      new Reservation(reservation.getId(), reservation))
}
Q: Embedding LaTeX for PDF generation

I'd like to use LaTeX as a document generation backend in my application (mainly because it is well known, feature rich and output is of very high quality). Let's assume the application creates a small set of documents with content generated (or calculated) from user input.

Of course, I could require a working installation of TeX with the (relatively small) set of used packages on the installation target. Another option would be providing this during the installation process of the application itself. This would allow for feeding the generated LaTeX source to the latex command. This adds a large and perhaps unstable dependency to the (comparatively) small application. As TeX installations can easily exceed several hundred MB in size, I'd rather not bother users with this.

What I'm looking for is a way of embedding LaTeX in my application, specifically tailored to the used document class and packages in the generated documents, ideally without the need to kpathsea the required files. I assume this has been done before, but I haven't managed to find any traces of this.

A: This is fairly straight-forward:

* get a minimal (La)TeX distribution such as w32tex (if the license will allow it) or kergis (which should be okay anywhere since it's MIT licensed) http://www.kergis.com/en/kertex.html
* TeX a sample document which includes every element you plan to support
* use a utility like http://ctan.org/pkg/snapshot to get a list of the files needed
* put all of the files in a directory w/in your application and set the TeX binary to look for files there first
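For the snapshot step: the package is loaded *before* `\documentclass` and writes a `\jobname.dep` file listing every file the run touched. A minimal sample document might look like this (the package list is just illustrative; include whatever your generator actually emits):

```latex
% deps.tex -- compile once with: latex deps
% snapshot must be loaded before \documentclass;
% it records every input file in deps.dep
\RequirePackage{snapshot}
\documentclass{article}
\usepackage{graphicx}  % one \usepackage per package you plan to support
\usepackage{booktabs}
\begin{document}
A sample exercising each element you plan to support:
$E = mc^2$, a table, an image, \ldots
\end{document}
```

The resulting `deps.dep` then tells you exactly which class, package and font files to bundle.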
\section{Introduction} Repetitions and periodicities in strings are among the fundamental topics in combinatorics on words \cite{Karhumaki,Lothaire}. They are also important in other areas: lossless compression, word representation, computational biology, etc. In this paper we consider bounds on the sum of exponents of repetitions that a string of a given length may contain. In general, repetitions are also studied from other points of view, such as: the classification of words (both finite and infinite) not containing repetitions of a given exponent, efficient identification of factors that are repetitions of different types, and computing bounds on the number of various types of repetitions occurring in a string. The known results on the topic and a deeper description of the motivation can be found in a survey by Crochemore et al.~\cite{Survey}. The concept of runs (also called maximal repetitions) has been introduced to represent all repetitions in a string in a succinct manner. The crucial property of runs is that their maximal number in a string of length $n$ (denoted as $\rho(n)$) is $O(n)$, see Kolpakov \& Kucherov \cite{KolpakovKucherov}. This fact is the cornerstone of any algorithm computing all repetitions in strings of length $n$ in $O(n)$ time. Due to the work of many people, much better bounds on $\rho(n)$ have been obtained. The lower bound $0.927\, n$ was first proved by Franek \& Yang \cite{Franek08}. Afterwards, it was improved by Kusano et al.~\cite{Matsubara} to $0.944565\, n$ employing computer experiments, and very recently by Simpson~\cite{Simpson10} to $0.944575712\, n$. On the other hand, the first explicit upper bound $5\,n$ was established by Rytter~\cite{Rytter06}; afterwards it was systematically improved to $3.48\,n$ by Puglisi et al.~\cite{Puglisi08}, $3.44\, n$ by Rytter~\cite{Rytter07}, $1.6\, n$ by Crochemore \& Ilie~\cite{CrochemoreIlie,Crochemore08} and $1.52\, n$ by Giraud~\cite{Giraud08}.
The best known result $\rho(n) \le 1.029\, n$ is due to Crochemore et al.~\cite{DBLP:conf/cpm/CrochemoreIT08}, but it is conjectured \cite{KolpakovKucherov} that $\rho(n)<n$. Some results are known also for repetitions of exponent higher than 2. For instance, the maximal number of cubic runs (maximal repetitions with exponent at least 3) in a string of length $n$ (denoted $\rho_{cubic}(n)$) is known to be between $0.406\,n$ and $0.5\,n$, see Crochemore et al.~\cite{Lata10}. A stronger property of runs is that the maximal sum of their exponents in a string of length $n$ (notation: $\sigma(n)$) is linear in terms of $n$, see Kolpakov \& Kucherov~\cite{KolpakovKucherovLORIA}. It has applications to the analysis of various algorithms, such as computing branching tandem repeats: the linearity of the sum of exponents solves a conjecture of \cite{Gusfield98} concerning the linearity of the number of maximal tandem repeats and implies that all can be found in linear time. For other applications, we refer to \cite{KolpakovKucherovLORIA}. The proof that $\sigma(n) < cn$ in Kolpakov and Kucherov's paper \cite{KolpakovKucherovLORIA} is very complex and does not provide any particular value for the constant $c$. A bound can be derived from the proof of Rytter \cite{Rytter06} but he mentioned only that the bound that he obtains is ``unsatisfactory'' (it seems to be $25\,n$). The first explicit bound $5.6\,n$ for $\sigma(n)$ was provided by Crochemore and Ilie \cite{Crochemore08}, who claim that it could be improved to $2.9\,n$ employing computer experiments. As for the lower bound on $\sigma(n)$, no exact values were previously known and it was conjectured \cite{Kolpakov99,KolpakovKucherovLORIA} that $\sigma(n) < 2n$. In this paper we provide an upper bound of $4.1\,n$ on the maximal sum of exponents of runs in a string of length $n$ and also a stronger upper bound of $2.5\,n$ for the maximal sum of exponents of cubic runs in a string of length $n$. 
As for the lower bound, we refute the conjecture $\sigma(n) < 2n$ by providing an infinite family of binary strings for which the sum of exponents of runs is greater than $2.035\,n$. \section{Preliminaries} We consider \emph{words} (\emph{strings}) $u$ over a finite alphabet $\Sigma$, $u \in \Sigma^*$; the empty word is denoted by $\varepsilon$; the positions in $u$ are numbered from $1$ to $|u|$. For $u=u_1u_2\ldots u_m$, let us denote by $u[i \mathinner{\ldotp\ldotp} j]$ a \textit{factor} of $u$ equal to $u_i\ldots u_j$ (in particular $u[i]=u[i \mathinner{\ldotp\ldotp} i]$). Words $u[1 \mathinner{\ldotp\ldotp} i]$ are called prefixes of $u$, and words $u[i \mathinner{\ldotp\ldotp} |u|]$ suffixes of $u$. We say that an integer $p$ is the (shortest) \emph{period} of a word $u=u_1\ldots u_m$ (notation: $p=\textsf{per}(u)$) if $p$ is the smallest positive integer such that $u_i=u_{i+p}$ holds for all $1\le i\le m-p$. We say that words $u$ and $v$ are cyclically equivalent (or that one of them is a cyclic rotation of the other) if $u=xy$ and $v=yx$ for some $x, y \in \Sigma^*$. A \emph{run} (also called a maximal repetition) in a string $u$ is an interval $[i\mathinner{\ldotp\ldotp} j]$ such that: \begin{itemize} \item the period $p$ of the associated factor $u[i\mathinner{\ldotp\ldotp} j]$ satisfies $2p \le j-i+1$, \item the interval cannot be extended to the right or to the left without violating the above property, that is, $u[i-1] \ne u[i+p-1]$ and $u[j-p+1] \ne u[j+1]$. \end{itemize} A \emph{cubic run} is a run $[i\mathinner{\ldotp\ldotp} j]$ for which the shortest period $p$ satisfies $3p\le j-i+1$. For simplicity, in the rest of the text we sometimes refer to runs and cubic runs as occurrences of the corresponding factors of $u$. The (fractional) \emph{exponent} of a run is defined as $(j-i+1)/p$.
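As an illustration (not part of the original results), the definition of a run translates directly into a naive quadratic-time procedure; linear-time algorithms exist but are considerably more involved. In Python:

```python
def runs(u):
    """All runs [i..j] (0-based, inclusive) with their shortest period p,
    checked directly against the definition."""
    n = len(u)
    found = []
    for i in range(n):
        for j in range(i + 1, n):
            m = j - i + 1
            # shortest period of u[i..j], by brute force
            p = next(q for q in range(1, m + 1)
                     if all(u[t] == u[t + q] for t in range(i, j - q + 1)))
            if 2 * p > m:
                continue
            # maximality: u[i-1] != u[i+p-1] and u[j-p+1] != u[j+1]
            if i > 0 and u[i - 1] == u[i + p - 1]:
                continue
            if j + 1 < n and u[j - p + 1] == u[j + 1]:
                continue
            found.append((i, j, p))
    return found

def sigma(u):
    """Sum of exponents (j - i + 1) / p over all runs of u."""
    return sum((j - i + 1) / p for i, j, p in runs(u))
```

For example, `aabaabaa` contains four runs: three occurrences of `aa` (exponent 2 each) and the whole word with period 3 (exponent 8/3).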
For a given word $u \in \Sigma^*$, we introduce the following notation: \begin{itemize} \item $\rho(u)$ and $\rho_{cubic}(u)$ are the numbers of runs and cubic runs in $u$ resp. \item $\sigma(u)$ and $\sigma_{cubic}(u)$ are the sums of exponents of runs and cubic runs in $u$ resp. \end{itemize} For a non-negative integer $n$, we use the same notations $\rho(n)$, $\rho_{cubic}(n)$, $\sigma(n)$ and $\sigma_{cubic}(n)$ to denote the maximal value of the respective function for a word of length $n$. \section{Lower bound for $\sigma(n)$} Tables \ref{fig:franek} and \ref{fig:padovan} list the sums of exponents of runs for several words of two known families that contain very large number of runs: the words $x_i$ defined by Franek and Yang \cite{Franek08} (giving the lower bound $\rho(n) \ge 0.927\,n$, conjectured for some time to be optimal) and the modified Padovan words $y_i$ defined by Simpson \cite{Simpson10} (giving the best known lower bound $\rho(n) \ge 0.944575712\,n$). These values have been computed experimentally. They suggest that for the families of words $x_i$ and $y_i$ the maximal sum of exponents could be less than $2n$. \begin{table} \begin{center} \begin{tabular*}{0.6\textwidth}{@{\extracolsep{\fill}}|r|r|r|r|r|} \hline $i$ & $|x_i|$ & $\rho(x_i)/|x_i|$ & $\sigma(x_i)$ & $\sigma(x_i)/|x_i|$ \\\hline $1$ & $6$ & $0.3333$ & $4.00$ & $0.6667$ \\\hline $2$ & $27$ & $0.7037$ & $39.18$ & $1.4510$ \\\hline $3$ & $116$ & $0.8534$ & $209.70$ & $1.8078$ \\\hline $4$ & $493$ & $0.9047$ & $954.27$ & $1.9356$ \\\hline $5$ & $2090$ & $0.9206$ & $4130.66$ & $1.9764$ \\\hline $6$ & $8855$ & $0.9252$ & $17608.48$ & $1.9885$ \\\hline $7$ & $37512$ & $0.9266$ & $74723.85$ & $1.9920$ \\\hline $8$ & $158905$ & $0.9269$ & $316690.85$ & $1.9930$ \\\hline $9$ & $673134$ & $0.9270$ & $1341701.95$ & $1.9932$ \\\hline \end{tabular*} \end{center} \caption{\label{fig:franek} Number of runs and sum of exponents of runs in Franek \& Yang's \cite{Franek08} words $x_i$. 
} \end{table} \begin{table} \begin{center} \begin{tabular*}{0.6\textwidth}{@{\extracolsep{\fill}}|r|r|r|r|r|} \hline $i$ & $|y_i|$ & $\rho(y_i)/|y_i|$ & $\sigma(y_i)$ & $\sigma(y_i)/|y_i|$ \\\hline $4$ & $37$ & $0.7568$ & $57.98$ & $1.5671$ \\\hline $8$ & $125$ & $0.8640$ & $225.75$ & $1.8060$ \\\hline $12$ & $380$ & $0.9079$ & $726.66$ & $1.9123$ \\\hline $16$ & $1172$ & $0.9309$ & $2303.21$ & $1.9652$ \\\hline $20$ & $3609$ & $0.9396$ & $7165.93$ & $1.9856$ \\\hline $24$ & $11114$ & $0.9427$ & $22148.78$ & $1.9929$ \\\hline $28$ & $34227$ & $0.9439$ & $68307.62$ & $1.9957$ \\\hline $32$ & $105405$ & $0.9443$ & $210467.18$ & $1.9967$ \\\hline $36$ & $324605$ & $0.9445$ & $648270.74$ & $1.9971$ \\\hline $40$ & $999652$ & $0.9445$ & $1996544.30$ & $1.9972$ \\\hline \end{tabular*} \end{center} \caption{\label{fig:padovan} Number of runs and sum of exponents of runs in Simpson's \cite{Simpson10} modified Padovan words $y_i$. } \end{table} We show, however, a lower bound for $\sigma(n)$ that is greater than $2n$. \begin{theorem} There are infinitely many binary strings $w$ such that $$\frac{\sigma(w)}{|w|} > 2.035.$$ \end{theorem} \begin{proof} Let us define two morphisms $\phi:\{a,b,c\}\mapsto\{a,b,c\}$ and $\psi:\{a,b,c\}\mapsto\{0,1\}$ as follows: $$\phi(a)=baaba, \quad \phi(b)=ca, \quad \phi(c)=bca$$ $$\psi(a)=01011, \quad \psi(b)=\psi(c)=01001011$$ We define $w_i=\psi(\phi^i(a))$. Table~\ref{fig:lower} shows the sums of exponents of runs in words $w_i$, computed experimentally. 
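(The construction can be reproduced mechanically. The following short Python sketch, our own illustration rather than part of the proof, applies $\phi$ and $\psi$ and confirms the word lengths $|w_1|=31$, $|w_2|=119$, $|w_3|=461$, $|w_4|=1751$ reported in the table.)

```python
PHI = {"a": "baaba", "b": "ca", "c": "bca"}
PSI = {"a": "01011", "b": "01001011", "c": "01001011"}

def apply_morphism(morphism, word):
    """Apply a morphism letter by letter."""
    return "".join(morphism[ch] for ch in word)

def w(i):
    """w_i = psi(phi^i(a))."""
    word = "a"
    for _ in range(i):
        word = apply_morphism(PHI, word)
    return apply_morphism(PSI, word)
```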
Clearly, for any word $w=(w_8)^k$, $k \ge 1$, we have $$\frac{\sigma(w)}{|w|} > 2.035.$$ \begin{table} \begin{center} \begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}|r|r|r|r|} \hline $i$ & $|w_i|$ & $\sigma(w_i)$ & $\sigma(w_i)/|w_i|$ \\\hline $1$ & $31$ & $47.10$ & $1.5194$ \\\hline $2$ & $119$ & $222.26$ & $1.8677$ \\\hline $3$ & $461$ & $911.68$ & $1.9776$ \\\hline $4$ & $1751$ & $3533.34$ & $2.0179$ \\\hline $5$ & $6647$ & $13498.20$ & $2.0307$ \\\hline $6$ & $25205$ & $51264.37$ & $2.0339$ \\\hline $7$ & $95567$ & $194470.30$ & $2.0349$ \\\hline $8$ & $362327$ & $737393.11$ & $2.0352$ \\\hline $9$ & $1373693$ & $2795792.39$ & $2.0352$ \\\hline $10$ & $5208071$ & $10599765.15$ & $2.0353$ \\\hline \end{tabular*} \end{center} \caption{\label{fig:lower} Sums of exponents of runs in words $w_i$. } \end{table} \begin{comment} \bigskip \noindent \emph{TODO: Analysis of structure of runs in $w_i$. There are also some alternative families of words that could be analyzed instead:} \begin{verbatim} a -> aaca b -> aac c -> b a -> 0101101011010 b -> 01011010 c -> 11010 The limit is at least: 2.03482 a -> abababbaba b -> babbaba a -> 10100101 b -> 10101 The limit is at least: 2.03341 a -> abababbaba b -> babbaba a -> 10100101 b -> 00101 The limit is at least: 2.03343 a -> ababb b -> abbabbbabb a -> 101 b -> 00101 The limit is at least: 2.03349 a -> ababb b -> abbbabbabb a -> 101 b -> 00101 The limit is at least: 2.03350 \end{verbatim} \end{comment} \qed \end{proof} \section{Upper bounds for $\sigma(n)$ and $\sigma_{cubic}(n)$} In this section we utilize the concept of \emph{handles} of runs as defined in \cite{Lata10}. The original definition refers only to cubic runs, but here we extend it also to ordinary runs. Let $u \in \Sigma^*$ be a word of length $n$. Let us denote by $P=\{p_1,p_2,\ldots,p_{n-1}\}$ the set of inter-positions in $u$ that are located \emph{between} pairs of consecutive letters of $u$. 
We define a function $H$ assigning to each run $v$ in $u$ a set of some inter-positions within $v$ (later called \emph{handles}) --- $H$ is a mapping from the set of runs occurring in $u$ to the set $2^P$ of subsets of $P$. Let $v$ be a run with period $p$ and let $w$ be the prefix of $v$ of length $p$. Let $w_{\min}$ and $w_{\max}$ be the minimal and maximal words (in lexicographical order) cyclically equivalent to $w$. $H(v)$ is defined as follows: \begin{enumerate}[a)] \item if $w_{\min}=w_{\max}$ then $H(v)$ contains all inter-positions within $v$, \item if $w_{\min} \ne w_{\max}$ then $H(v)$ contains the inter-positions between consecutive occurrences of $w_{\min}$ in $v$ and between consecutive occurrences of $w_{\max}$ in $v$. \end{enumerate} Note that $H(v)$ can be empty for a non-cubic run $v$. \begin{figure}[th] \begin{center} \includegraphics[width=7cm]{rys3} \caption{\label{f:handles_ex} An example of a word with two highlighted runs $v_1$ and $v_2$. For $v_1$ we have $w_{\mbox{\scriptsize min1}} \ne w_{\mbox{\scriptsize max1}}$, while for $v_2$ the corresponding words are both equal to $b$ (a one-letter word). The inter-positions belonging to the sets $H(v_1)$ and $H(v_2)$ are indicated by arrows. } \end{center} \end{figure} Proofs of the following properties of handles of runs can be found in \cite{Lata10}: \begin{enumerate} \item Case (a) in the definition of $H(v)$ implies that $|w_{\min}|=1$. \item $H(v_1) \cap H(v_2) = \emptyset$ for any two distinct runs $v_1$ and $v_2$ in $u$. \end{enumerate} To prove the upper bound for $\sigma(n)$, we need to state an additional property of handles of runs. Let $\mathcal{R}(u)$ be the set of all runs in a word $u$, and let $\mathcal{R}_1(u)$ and $\mathcal{R}_{\ge 2}(u)$ be the sets of runs with period 1 and at least 2, respectively. \begin{lemma} \label{lem:propH}~ \noindent If $v \in \mathcal{R}_1(u)$ then $\sigma(v) = |H(v)|+1$.\\ If $v \in \mathcal{R}_{\ge 2}(u)$ then $\lceil\sigma(v)\rceil \le \frac{|H(v)|}2+3$. 
\end{lemma} \begin{proof} For the case of $v \in \mathcal{R}_1(u)$, the claim is straightforward from the definition of handles. In the other case, it suffices to note that both words $w_{\min}^k$ and $w_{\max}^k$ for $k=\lfloor\sigma(v)\rfloor-1$ are factors of $v$, and thus $$|H(v)| \ge 2\cdot(\lfloor\sigma(v)\rfloor-2).$$ \qed \end{proof} Now we are ready to prove the upper bound for $\sigma(n)$. In the proof we use the bound $\rho(n) \le 1.029\,n$ on the number of runs from \cite{DBLP:conf/cpm/CrochemoreIT08}. \begin{theorem}\label{thm:upper_bound} The sum of the exponents of runs in a string of length $n$ is less than $4.1\,n$. \end{theorem} \begin{proof} Let $u$ be a word of length $n$. Using Lemma \ref{lem:propH}, we obtain: \begin{eqnarray} \nonumber \sum_{v \in \mathcal{R}(u)} \sigma(v) & = & \sum_{v \in \mathcal{R}_1(u)} \sigma(v) + \sum_{v \in \mathcal{R}_{\ge 2}(u)} \sigma(v) \\ \nonumber & \le\ & \sum_{v \in \mathcal{R}_1(u)} \left(|H(v)|+1\right) + \sum_{v \in \mathcal{R}_{\ge 2}(u)} \left(\frac{|H(v)|}2+3\right) \\ \nonumber & =\ & \sum_{v \in \mathcal{R}_1(u)} |H(v)| + |\mathcal{R}_1(u)| + \sum_{v \in \mathcal{R}_{\ge 2}(u)} \frac{|H(v)|}2 + 3\cdot|\mathcal{R}_{\ge 2}(u)| \\ \label{eq:Ru} & \le\ & 3\cdot|\mathcal{R}(u)| + A + B/2, \end{eqnarray} where $A=\sum_{v \in \mathcal{R}_1(u)} |H(v)|$ and $B=\sum_{v \in \mathcal{R}_{\ge 2}(u)} |H(v)|$. Due to the disjointness of handles of distinct runs (the second property of handles), $A+B<n$, and thus $A+B/2<n$. Combining this with \eqref{eq:Ru}, we obtain: $$ \sum_{v \in \mathcal{R}(u)} \sigma(v)\ <\ 3\cdot|\mathcal{R}(u)|+n\ \le\ 3\cdot\rho(n)+n\ \le\ 3\cdot1.029\,n+n\ <\ 4.1\,n. $$ \qed \end{proof} A similar approach for cubic runs, this time using the bound of $0.5\,n$ for $\rho_{cubic}(n)$ from \cite{Lata10}, enables us to immediately provide a stronger upper bound for the function $\sigma_{cubic}(n)$. 
\begin{theorem} The sum of the exponents of cubic runs in a string of length $n$ is less than $2.5\,n$. \end{theorem} \begin{proof} Let $u$ be a word of length $n$. Using the same inequalities as in the proof of Theorem \ref{thm:upper_bound}, we obtain: $$\sum_{v \in \mathcal{R}_{cubic}(u)} \sigma(v) \ <\ 3\cdot|\mathcal{R}_{cubic}(u)|+n \ \le\ 3\cdot\rho_{cubic}(n)+n\ \le\ 3\cdot0.5\,n+n\ =\ 2.5\,n,$$ where $\mathcal{R}_{cubic}(u)$ denotes the set of all cubic runs of $u$. \qed \end{proof} \bibliographystyle{abbrv}
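As a practical footnote to the results above, both the lower-bound words $w_i$ and the basic quantities $\rho(u)$ (number of runs) and $\sigma(u)$ (sum of exponents) can be checked by brute force for short words. The sketch below is ours and is meant only for experimentation: the run enumeration is naive (roughly cubic time), and the hand-checked example is independent of the paper's tables except for the lengths $|w_1|,|w_2|,|w_3|$.

```python
from fractions import Fraction

# Morphisms phi and psi from the lower-bound proof.
PHI = {"a": "baaba", "b": "ca", "c": "bca"}
PSI = {"a": "01011", "b": "01001011", "c": "01001011"}

def w(i):
    """The word w_i = psi(phi^i(a))."""
    s = "a"
    for _ in range(i):
        s = "".join(PHI[x] for x in s)
    return "".join(PSI[x] for x in s)

def runs(u):
    """Naively enumerate runs: maximal periodicities of exponent >= 2."""
    n, found = len(u), []
    for i in range(n):
        for j in range(i + 1, n):
            m = j - i + 1
            # minimal period of the factor u[i..j]
            p = next(d for d in range(1, m + 1)
                     if all(u[i + k] == u[i + k + d] for k in range(m - d)))
            if m < 2 * p:
                continue  # exponent below 2: not a run
            # maximality: the period p extends neither to the left nor to the right
            if i > 0 and u[i - 1] == u[i - 1 + p]:
                continue
            if j < n - 1 and u[j + 1] == u[j + 1 - p]:
                continue
            found.append((i, j, p))
    return found

def sigma(u):
    """Sum of (fractional) exponents of all runs in u."""
    return sum(Fraction(j - i + 1, p) for i, j, p in runs(u))

# Hand-checked example: the runs of "aabaabaaa" are aa, aa, aaa and aabaabaa,
# so rho = 4 and sigma = 2 + 2 + 3 + 8/3 = 29/3.
assert len(runs("aabaabaaa")) == 4 and sigma("aabaabaaa") == Fraction(29, 3)
# The lengths of w_1, w_2, w_3 match the table of the words w_i: 31, 119, 461.
assert [len(w(i)) for i in (1, 2, 3)] == [31, 119, 461]
```

For the word lengths only the morphisms themselves are taken from the proof; everything else is our own scaffolding.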
\section{Divergences and divergence statistics} Many of the divergence measures used in statistics are of the $f$-divergence type introduced independently by I. Csisz\'{a}r \cite{Csiszar1963}, T. Morimoto \cite{morimoto1963}, and Ali and Silvey \cite{Ali1966}. Such divergence measures have been studied in great detail in \cite{Liese1987}. Often one is interested in inequalities for one $f$-divergence in terms of another $f$-divergence. Such inequalities are for instance needed in order to calculate the relative efficiency of two $f$-divergences when used for testing goodness of fit, but there are many other applications. In this paper we shall study the more general problem of determining the joint range of any pair of $f$-divergences. The results are useful in determining general conditions under which information divergence is a more efficient statistic for testing goodness of fit than another $f$-divergence, but this application will not be discussed in this short paper. Let $f:\left( 0,\infty\right) \rightarrow\mathbb{R}$ denote a convex function satisfying $f\left( 1\right) =0.$ We define $f\left( 0\right) $ as the limit $\lim_{t\rightarrow0}f\left( t\right) $. We define $f^{\ast}\left( t\right) =tf\left( t^{-1}\right) .$ Then $f^{\ast}$ is a convex function and $f^{\ast}\left( 0\right) $ is defined as $\lim_{t\rightarrow0}tf\left( t^{-1}\right) =\lim_{t\rightarrow\infty}\frac{f\left( t\right) }{t}.$ Assume that $P$ and $Q$ are absolutely continuous with respect to a measure $\mu,$ and that $p=\frac{dP}{d\mu}$ and $q=\frac{dQ}{d\mu}.$ For arbitrary distributions $P$ and $Q$ the $f$-divergence $D_{f}(P,Q)\geq0$ is defined by the formula \begin{equation} D_{f}(P,Q)=\int_{\left\{ q>0\right\} }f\left( \frac{p}{q}\right) ~dQ+f^{\ast}\left( 0\right) P\left( q=0\right) \label{4} \end{equation} (for details about the definition (\ref{4}) and properties of the $f$-divergences, see \cite{Liese2006}, \cite{Liese1987} or \cite{Read1988}). 
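For finite alphabets the integral in (\ref{4}) becomes a finite sum, which is easy to evaluate directly. A small sketch (function and variable names are ours, not from any library):

```python
def f_divergence(P, Q, f, f_star_0):
    """D_f(P,Q) = sum over {q_j > 0} of q_j * f(p_j/q_j) + f*(0) * P(q = 0),
    for probability vectors P, Q on a common finite alphabet, cf. (4)."""
    total = 0.0
    for pj, qj in zip(P, Q):
        if qj > 0:
            total += qj * f(pj / qj)
        elif pj > 0:
            total += f_star_0 * pj  # f*(0) may be float('inf')
    return total

# f(t) = |t-1| gives the L1-distance ||P-Q||; here f*(0) = lim t|1/t - 1| = 1.
P, Q = [0.5, 0.5], [0.25, 0.75]
assert abs(f_divergence(P, Q, lambda t: abs(t - 1), 1.0)
           - sum(abs(p - q) for p, q in zip(P, Q))) < 1e-12
```

The second branch makes the convention $f^{\ast}(0)\cdot P(q=0)$ explicit: an infinite $f^{\ast}(0)$ propagates to an infinite divergence whenever $P$ puts mass where $Q$ does not.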
With this definition \[ D_{f}\left( P,Q\right) =D_{f^{\ast}}\left( Q,P\right) . \] \begin{example} The function $f(t)=\left\vert t-1\right\vert $ defines the $L^{1}$-distance \begin{equation} \left\Vert P-Q\right\Vert =\sum_{j=1}^{k}q_{j}\,\left\vert \frac{p_{j}}{q_{j}}-1\right\vert =\sum_{j=1}^{k}\,\left\vert p_{j}-q_{j}\right\vert \text{ \ \ (cf. (\ref{4}))} \label{V} \end{equation} which plays an important role in information theory and mathematical statistics (cf. \cite{Barron1992} or \cite{Fedotov2003}). \input{pinsker.TpX} \end{example} In (\ref{4}) one often takes for $f$ one of the power functions $\phi_{\alpha}$ of order $\alpha\in\mathbb{R}$ given in the domain $t>0$ by the formula \begin{equation} \phi_{\alpha}(t)={\frac{t^{\alpha}-\alpha(t-1)-1}{\alpha(\alpha-1)}}\text{ \ \ \ when \ }\alpha(\alpha-1)\neq0 \label{4a} \end{equation} and by the corresponding limits \begin{equation} \phi_{0}(t)=-\ln t+t-1\text{ \ \ and \ \ }\phi_{1}(t)=t\ln t-t+1. \label{4b} \end{equation} The $\phi$-divergences \begin{equation} D_{\alpha}(P,Q)\overset{def}{=}D_{\phi_{\alpha}}(P,Q),\text{ \ \ }\alpha\in\mathbb{R} \label{4c} \end{equation} based on (\ref{4a}) and (\ref{4b}) are usually referred to as power divergences of orders $\alpha.$ For details about the properties of power divergences, see \cite{Liese2006} or \cite{Read1988}. Next we mention the best known members of the family of statistics (\ref{4c}), with a reference to the skew symmetry $D_{\alpha}(P,Q)=D_{1-\alpha}(Q,P)$ of the power divergences (\ref{4c}).$\medskip$ \begin{example} The $\chi^{2}$-divergence or quadratic divergence \begin{equation} D_{2}(P,Q)=D_{-1}(Q,P)={\frac{1}{2}}\sum_{j=1}^{k}{\frac{(p_{j}-q_{j})^{2}}{q_{j}}} \label{chi} \end{equation} leads to the well known Pearson and Neyman statistics. 
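The skew symmetry is also easy to confirm numerically. A sketch implementing (\ref{4a}) and (\ref{4b}) on a finite alphabet (the test distributions below are arbitrary examples of ours):

```python
import math

def phi(alpha, t):
    """Power functions (4a) with the limit cases (4b)."""
    if alpha == 0:
        return -math.log(t) + t - 1
    if alpha == 1:
        return t * math.log(t) - t + 1
    return (t ** alpha - alpha * (t - 1) - 1) / (alpha * (alpha - 1))

def D(alpha, P, Q):
    """Power divergence D_alpha(P,Q); assumes strictly positive vectors."""
    return sum(qj * phi(alpha, pj / qj) for pj, qj in zip(P, Q))

P, Q = [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]
for a in (-1.0, 0.0, 0.5, 1.0, 2.0, 3.0):
    # skew symmetry: D_alpha(P,Q) = D_{1-alpha}(Q,P)
    assert abs(D(a, P, Q) - D(1 - a, Q, P)) < 1e-12
```

For $\alpha=1/2$ the identity reduces to the symmetry of the Hellinger divergence, and for $\alpha=2$ it is exactly (\ref{chi}).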
The information divergence \begin{equation} D_{1}(P,Q)=D_{0}(Q,P)=\sum_{j=1}^{k}p_{j}\ln{\frac{p_{j}}{q_{j}}} \label{7} \end{equation} leads to the log-likelihood ratio and reversed log-likelihood ratio statistics. The symmetric Hellinger divergence \[ D_{1/2}(P,Q)=D_{1/2}(Q,P)=H(P,Q) \] leads to the Freeman--Tukey statistic. \end{example} \begin{example} The Hellinger divergence and the total variation are symmetric in the arguments $P$ and $Q.$ Non-symmetric divergences may be symmetrized. For instance the LeCam divergence is nothing but the symmetrized Pearson divergence given by \[ D_{LeCam}\left( P,Q\right) =\frac{1}{2}D_{2}\left( P,\frac{P+Q}{2}\right) +\frac{1}{2}D_{2}\left( Q,\frac{P+Q}{2}\right) . \] Another symmetrized divergence is the Jensen--Shannon divergence defined by \[ JD_{1}\left( P,Q\right) =\frac{1}{2}D\left( P\left\Vert \frac{P+Q}{2}\right. \right) +\frac{1}{2}D\left( Q\left\Vert \frac{P+Q}{2}\right. \right) . \] The joint range of the total variation and the Jensen--Shannon divergence was studied by Bri\"{e}t and Harremo\"{e}s \cite{Briet2009} and is illustrated in Figure \ref{vsjd}. \input{vsjd.TpX} \end{example} In this paper we shall prove that the joint range of any pair of $f$-divergences is essentially determined by the range over distributions on a two-element set. In special cases the significance of determining the range over a two-element set has been pointed out explicitly in \cite{Topsoe2001a}. Here we shall prove that a reduction to two-element sets can always be made. \section{\label{sec1}Joint range of $f$-divergences} In this section we are interested in the range of the map $\left( P,Q\right) \rightarrow\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) $ where $P$ and $Q$ are probability distributions on the same set. 
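Pairs achievable on a two-element set are easy to sample. For the pair $(\Vert P-Q\Vert ,JD_{1})$ of the preceding example, a sketch evaluating $JD_{1}$ straight from its definition (the grid resolution is an arbitrary choice of ours):

```python
import math

def kl(P, Q):
    """Information divergence D(P||Q); terms with p = 0 contribute 0."""
    return sum(p * math.log(p / q) for p, q in zip(P, Q) if p > 0)

def pair(p, q):
    """(V, JD_1) for the two-point distributions P=(1-p,p), Q=(1-q,q)."""
    P, Q = (1 - p, p), (1 - q, q)
    M = tuple((a + b) / 2 for a, b in zip(P, Q))   # midpoint (P+Q)/2
    V = sum(abs(a - b) for a, b in zip(P, Q))      # total variation ||P-Q||
    JD = 0.5 * kl(P, M) + 0.5 * kl(Q, M)           # Jensen--Shannon divergence
    return V, JD

pts = [pair(i / 50, j / 50) for i in range(51) for j in range(51)]
# Identical distributions give (0,0); disjoint supports give (2, ln 2).
assert pair(0.3, 0.3) == (0.0, 0.0)
V, JD = pair(0.0, 1.0)
assert abs(V - 2.0) < 1e-12 and abs(JD - math.log(2)) < 1e-12
assert all(0 <= V <= 2 and 0 <= JD <= math.log(2) + 1e-12 for V, JD in pts)
```

Plotting `pts` reproduces the lower part of the region of Figure \ref{vsjd}; by the results below, the full joint range is obtained by taking the convex hull of such samples.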
\begin{definition} A point $\left( x,y\right) \in\mathbb{R}^{2}$ is an $(f,g)$\emph{-divergence pair} if there exists a Borel space $\left( \mathcal{X},\mathcal{F}\right) $ with probability measures $P$ and $Q$ such that $\left( x,y\right) =\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) .$ An $(f,g)$-divergence pair $\left( x,y\right) $ is \emph{achievable in }$\mathbb{R}^{d}$ if there exist probability vectors $P,Q\in\mathbb{R}^{d}$ such that \[ \left( x,y\right) =\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) . \] \end{definition} \begin{lemma} Assume that \[ P_{0}\left( A\right) =Q_{0}\left( A\right) =1 \] and \[ P_{1}\left( B\right) =Q_{1}\left( B\right) =1 \] and that $A\cap B=\varnothing.$ If $P_{\alpha}=\left( 1-\alpha\right) P_{0}+\alpha P_{1}$ and $Q_{\alpha}=\left( 1-\alpha\right) Q_{0}+\alpha Q_{1}$ then \[ D_{f}\left( P_{\alpha},Q_{\alpha}\right) =\left( 1-\alpha\right) D_{f}\left( P_{0},Q_{0}\right) +\alpha D_{f}\left( P_{1},Q_{1}\right) . \] \end{lemma} \begin{theorem} \label{TheoremConvex}The set of $(f,g)$-divergence pairs is convex. \end{theorem} \begin{proof} Assume that $\left( P,Q\right) $ and $\left( \tilde{P},\tilde{Q}\right) $ are two pairs of probability distributions on a space $\left( \mathcal{X},\mathcal{F}\right) .$ Introduce a two-element set $B=\left\{ 0,1\right\} $ and the product space $\mathcal{X\times}B$ as a measurable space. 
Let $\phi$ denote the projection on $B.$ Now we define a pair $\left( P_{\alpha},Q_{\alpha}\right) $ of distributions on $\mathcal{X\times}B.$ The marginal distribution of both $P_{\alpha}$ and $Q_{\alpha}$ on $B$ is $\left( 1-\alpha,\alpha\right) .$ The conditional distributions are given by $P_{\alpha}\left( \cdot\mid\phi=i\right) =P_{i}$ and $Q_{\alpha}\left( \cdot\mid\phi=i\right) =Q_{i}$ for $i=0,1,$ where $\left( P_{0},Q_{0}\right) =\left( P,Q\right) $ and $\left( P_{1},Q_{1}\right) =\left( \tilde{P},\tilde{Q}\right) .$ Then \begin{multline*} \left( \begin{array}[c]{c} D_{f}\left( P_{\alpha},Q_{\alpha}\right) \\ D_{g}\left( P_{\alpha},Q_{\alpha}\right) \end{array} \right) =\\ \left( \begin{array}[c]{c} \left( 1-\alpha\right) D_{f}\left( P_{0},Q_{0}\right) +\alpha D_{f}\left( P_{1},Q_{1}\right) \\ \left( 1-\alpha\right) D_{g}\left( P_{0},Q_{0}\right) +\alpha D_{g}\left( P_{1},Q_{1}\right) \end{array} \right) \\ =\left( 1-\alpha\right) \left( \begin{array}[c]{c} D_{f}\left( P_{0},Q_{0}\right) \\ D_{g}\left( P_{0},Q_{0}\right) \end{array} \right) +\alpha\left( \begin{array}[c]{c} D_{f}\left( P_{1},Q_{1}\right) \\ D_{g}\left( P_{1},Q_{1}\right) \end{array} \right) \\ =\left( 1-\alpha\right) \left( \begin{array}[c]{c} D_{f}\left( P,Q\right) \\ D_{g}\left( P,Q\right) \end{array} \right) +\alpha\left( \begin{array}[c]{c} D_{f}\left( \tilde{P},\tilde{Q}\right) \\ D_{g}\left( \tilde{P},\tilde{Q}\right) \end{array} \right) . \end{multline*} \end{proof} \begin{example} For the joint range of total variation and Jensen--Shannon divergence illustrated in Figure \ref{vsjd}, the set of pairs achievable in $\mathbb{R}^{2}$ is not convex, but the set of pairs achievable in $\mathbb{R}^{3}$ is convex and equals the set of all $(f,g)$-divergence pairs. \end{example} \begin{theorem} Any $(f,g)$-divergence pair is a convex combination of two $(f,g)$-divergence pairs, both of them achievable in $\mathbb{R}^{2}$. Consequently, any $(f,g)$-divergence pair is achievable in $\mathbb{R}^{4}$. \end{theorem} \begin{proof} Let $P$ and $Q$ denote probability measures on the same measurable space. 
Define the set $A=\left\{ q>0\right\} $ and the function $X=p/q$ on $A.$ Then $Q$ satisfies \begin{align} Q\left( A\right) & =1,\label{norm}\\ \int_{A}X~dQ & \leq1.\nonumber \end{align} Now we fix $X$ and $A.$ The formulas for the divergences become \begin{align*} D_{f}\left( P,Q\right) & =\int_{A}f\left( X\right) ~dQ+f^{\ast}\left( 0\right) P\left( \complement A\right) \\ & =\int_{A}f\left( X\right) ~dQ+f^{\ast}\left( 0\right) \left( 1-\int_{A}X~dQ\right) \\ & =\int_{A}\left( f\left( X\right) ~+f^{\ast}\left( 0\right) \left( 1-X\right) \right) ~dQ\\ & =\mathrm{E}\left[ f\left( X\right) +f^{\ast}\left( 0\right) \left( 1-X\right) \right] \end{align*} and similarly \[ D_{g}\left( P,Q\right) =\mathrm{E}\left[ g\left( X\right) ~+g^{\ast}\left( 0\right) \left( 1-X\right) \right] . \] Hence, the divergences only depend on the distribution of $X.$ Therefore we may without loss of generality assume that $Q$ is a probability measure on $\left[ 0,\infty\right) $. Define $C$ as the set of probability measures on $\left[ 0,\infty\right) $ satisfying $\mathrm{E}\left[ X\right] \leq1.$ Let $C^{+}$ be the set of additive measures $\mu$ on $\left[ 0,\infty\right) $ satisfying $\mu\left( A\right) \leq1$ and $\int_{A}X~d\mu\leq1.$ Then $C^{+}$ is convex and compact under setwise convergence. According to the Choquet--Bishop--de Leeuw theorem \cite[Sec. 4]{Phelps2001}, any point in $C^{+}$ is the barycenter of a probability measure over the extreme points of $C^{+}.$ In particular an element $Q\in C$ is the barycenter of a probability measure $P_{bary}$ over extreme points of $C^{+},$ and these extreme points must in addition be probability measures with $P_{bary}$-probability 1. Hence $Q\in C$ is a barycenter of a probability measure over extreme points in $C.$ Let $Q$ be an element in $C.$ Let $A_{i},i=1,2,3$ be a disjoint cover of $\left[ 0,\infty\right) $ and assume that $Q\left( A_{i}\right) >0.$ Then \[ Q=\sum_{i=1}^{3}Q\left( A_{i}\right) Q\left( \cdot\mid A_{i}\right) . 
\] For a probability vector $\lambda=\left( \lambda_{1},\lambda_{2},\lambda_{3}\right) $ let $Q_{\lambda}$ denote the distribution \[ Q_{\lambda}=\sum_{i=1}^{3}\lambda_{i}Q\left( \cdot\mid A_{i}\right) . \] Then $Q_{\lambda}$ is an element of $C$ if and only if \begin{equation} \sum_{i=1}^{3}\lambda_{i}\int_{A}X~dQ\left( \cdot\mid A_{i}\right) \leq1. \label{reduceret} \end{equation} An extreme probability vector $\lambda$ that satisfies (\ref{reduceret}) has one or two of its weights equal to 0. Hence, if $Q$ is extreme in $C$ and $A_{i},i=1,2,3$ is a disjoint cover of $A,$ then at least one of the three sets satisfies $Q\left( A_{i}\right) =0.$ Therefore an extreme point $Q\in C$ is of one of the following two types: \begin{enumerate} \item $Q$ is concentrated in one point. \item $Q$ has support on two points. In this case the inequality $\int_{A}X~dQ\leq1$ holds with equality and $P\left( A\right) =1$ so that $P$ is absolutely continuous with respect to $Q$ and therefore supported by the same two-element set. \end{enumerate} The formulas for the divergences are linear in $Q.$ Hence any $(f,g)$-divergence pair is the barycenter of a probability measure $P_{bary}$ over pairs generated by extreme distributions $Q\in C.$ The extreme distributions of type $2$ generate pairs achievable in $\mathbb{R}^{2}$. For extreme points $Q$ concentrated in a single point we can reverse the argument and make a barycentric decomposition with respect to $P$. If an extreme $P$ has a two-point support then $Q$ is absolutely continuous with respect to $P$ and generates an $(f,g)$-divergence pair achievable in $\mathbb{R}^{2}$. If $P$ is concentrated in a point then this point may either be identical with the support of $Q$, in which case the two probability measures are identical, or the support points are different, in which case $P$ and $Q$ are singular but $\left( P,Q\right) $ is still supported on two points. 
Therefore any $(f,g)$-divergence pair has a barycentric decomposition into pairs achievable in $\mathbb{R}^{2}.$ \input{trekant.TpX} Let $\mathbf{y}=\left( y,z\right) $ be an $(f,g)$-divergence pair. As we have seen, $\mathbf{y}$ is a barycenter of $(f,g)$-divergence pairs achievable in $\mathbb{R}^{2}$. According to Carath\'{e}odory's theorem \cite{Boltyanski2001} any barycentric decomposition in two dimensions may be obtained as a convex combination of at most three points $\mathbf{y}_{i},~i=1,2,3,$ as illustrated in Figure \ref{trekant}. Assume that all three points have positive weight. Let $\ell_{i}$ be the line through $\mathbf{y}$ and $\mathbf{y}_{i}.$ The point $\mathbf{y}$ divides the line $\ell_{i}$ into two half-lines $\ell_{i}^{+}$ and $\ell_{i}^{-}~,$ where $\ell_{i}^{-}$ denotes the half-line that contains $\mathbf{y}_{i}.$ The lines $\ell_{i}^{+},i=1,2,3$ divide $\mathbb{R}^{2}$ into three sectors, each of them containing one of the points $\mathbf{y}_{i},i=1,2,3.$ The set of $(f,g)$-divergence pairs achievable in $\mathbb{R}^{3}$ is path-connected, so there exists a continuous curve of $(f,g)$-divergence pairs achievable in $\mathbb{R}^{2}$ from $\mathbf{y}_{1}$ to $\mathbf{y}_{2}$ that must intersect $\ell_{1}^{+}\cup\ell_{3}^{+}$ in a point $\mathbf{z}.$ If $\mathbf{z}$ lies on $\ell_{i}^{+}$ then $\mathbf{y}$ is a convex combination of the two points $\mathbf{y}_{i}$ and $\mathbf{z}.$ Hence, any $(f,g)$-divergence pair is a convex combination of two points that are $(f,g)$-divergence pairs achievable in $\mathbb{R}^{2}$. From the construction in the proof of Theorem \ref{TheoremConvex} we see that any $(f,g)$-divergence pair is achievable in $\mathbb{R}^{4}$. \end{proof} \begin{remark} We do not have any example of functions $(f,g)$ such that the set of pairs achievable in $\mathbb{R}^{3}$ is not convex. 
\end{remark} \begin{remark} An $f$-divergence on an arbitrary $\sigma$-algebra can be approximated by $f$-divergences on its finite sub-algebras. Any finite $\sigma$-algebra is a Borel $\sigma$-algebra for a discrete space, so for probability measures $P,Q$ on a $\sigma$-algebra the point $\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) $ is in the closure of the pairs achievable in $\mathbb{R}^{4}$. For many function pairs $(f,g)$ the set of pairs achievable in $\mathbb{R}^{2}$ is closed, and then the set of all $(f,g)$-divergence pairs is closed and contains $\left( D_{f}\left( P,Q\right) ,D_{g}\left( P,Q\right) \right) $ even if $P,Q$ are measures on a non-atomic $\sigma$-algebra. \end{remark} The set of $(f,g)$-divergence pairs that are achievable in $\mathbb{R}^{2}$ can be parametrized as $P=\left( 1-p,p\right) $ and $Q=\left( 1-q,q\right) .$ If we define $\overline{\left( 1-p,p\right) }=\left( p,1-p\right) $ then $D_{f}\left( P,Q\right) =D_{f}\left( \overline{P},\overline{Q}\right) .$ Hence we may assume without loss of generality that $p\leq q$, and we just have to determine the image of the triangle $\Delta=\left\{ \left( p,q\right) \mid0\leq p\leq q\leq1\right\} .$ This result makes it very easy to make a numerical plot of the $(f,g)$-divergence pairs achievable in $\mathbb{R}^{2}$, and the joint range is just the convex hull. \section{Image of the triangle} In order to determine the image of the triangle $\Delta$ we have to check what happens at inner points and what happens at or near the boundary. Most inner points are mapped into inner points of the range. On subsets of $\Delta$ where the derivative matrix is non-singular the mapping $\left( P,Q\right) \rightarrow\left( D_{f},D_{g}\right) $ is open according to the open mapping theorem from calculus. 
Hence, all inner points that are not mapped into interior points of the range must satisfy \[ \left\vert \begin{array}[c]{cc} \frac{\partial D_{f}}{\partial p} & \frac{\partial D_{g}}{\partial p}\\ \frac{\partial D_{f}}{\partial q} & \frac{\partial D_{g}}{\partial q} \end{array} \right\vert =0. \] Depending on the functions $f$ and $g$ this equation may be easy or difficult to solve, but in most cases the solutions will lie on a 1-dimensional manifold that cuts the triangle $\Delta$ into pieces, such that each piece is mapped injectively onto a subset of the range of $\left( P,Q\right) \rightarrow\left( D_{f},D_{g}\right) .$ Each pair of functions $(f,g)$ will require its own analysis. The diagonal $p=q$ in $\Delta$ is easy to analyze. It is mapped into $\left( D_{f},D_{g}\right) =\left( 0,0\right) .$ \begin{lemma} If $f\left( 0\right) =\infty$ and $\liminf_{t\rightarrow0}\frac{g\left( t\right) }{f\left( t\right) }=\beta_{0},$ then the supremum of \[ \beta\cdot D_{f}\left( P,Q\right) -D_{g}\left( P,Q\right) \] over all distributions $P,Q$ is $\infty$ if $\beta>\beta_{0}.$ If $f^{\ast}\left( 0\right) =\infty$ and $\liminf_{t\rightarrow\infty}\frac{g\left( t\right) }{f\left( t\right) }=\beta_{0},$ then the supremum of \[ \beta\cdot D_{f}\left( P,Q\right) -D_{g}\left( P,Q\right) \] over all distributions $P,Q$ is $\infty$ if $\beta>\beta_{0}.$ If $g\left( 0\right) =\infty$ and $\limsup_{t\rightarrow0}\frac{g\left( t\right) }{f\left( t\right) }=\gamma_{0},$ then the supremum of \[ D_{g}\left( P,Q\right) -\gamma D_{f}\left( P,Q\right) \] over all distributions $P,Q$ is $\infty$ if $\gamma<\gamma_{0}.$ If $g^{\ast}\left( 0\right) =\infty$ and $\limsup_{t\rightarrow\infty}\frac{g\left( t\right) }{f\left( t\right) }=\gamma_{0},$ then the supremum of \[ D_{g}\left( Q,P\right) -\gamma D_{f}\left( Q,P\right) \] over all distributions $P,Q$ is $\infty$ if $\gamma<\gamma_{0}.$ \end{lemma} \begin{proof} Assume that \[ f\left( 0\right) =\infty\text{ \ and \ }\liminf_{t\rightarrow0}\frac{g\left( t\right) }{f\left( t\right) }=\beta_{0}. \] The first condition implies \[ D_{f}\left( \left( 1,0\right) ,\left( 1/2,1/2\right) \right) =\infty \] and the second condition implies that $g\left( 0\right) =\infty$ and \[ D_{g}\left( \left( 1,0\right) ,\left( 1/2,1/2\right) \right) =\infty. \] We have \begin{multline*} \frac{D_{g}\left( \left( p,1-p\right) ,\left( 1/2,1/2\right) \right) }{D_{f}\left( \left( p,1-p\right) ,\left( 1/2,1/2\right) \right) }\\ =\frac{g\left( 2p\right) /2+g\left( 2\left( 1-p\right) \right) /2}{f\left( 2p\right) /2+f\left( 2\left( 1-p\right) \right) /2}\\ =\frac{g\left( 2p\right) +g\left( 2\left( 1-p\right) \right) }{f\left( 2p\right) +f\left( 2\left( 1-p\right) \right) }. \end{multline*} Let $\left( t_{n}\right) _{n}$ be a sequence such that $\frac{g\left( t_{n}\right) }{f\left( t_{n}\right) }\rightarrow\beta_{0}$ for $n\rightarrow\infty.$ Then \[ \frac{D_{g}\left( \left( \frac{t_{n}}{2},1-\frac{t_{n}}{2}\right) ,\left( 1/2,1/2\right) \right) }{D_{f}\left( \left( \frac{t_{n}}{2},1-\frac{t_{n}}{2}\right) ,\left( 1/2,1/2\right) \right) }\rightarrow\beta_{0} \] and the first result follows. The other three cases follow by interchanging $f$ and $g$, and/or replacing $f$ by $f^{\ast}$ and $g$ by $g^{\ast}.$ We have used that \[ \liminf_{t\rightarrow0}\frac{g^{\ast}\left( t\right) }{f^{\ast}\left( t\right) }=\liminf_{t\rightarrow0}\frac{tg\left( t^{-1}\right) }{tf\left( t^{-1}\right) }=\liminf_{t\rightarrow\infty}\frac{g\left( t\right) }{f\left( t\right) }. \] \end{proof} \begin{proposition} Assume that $f$ and $g$ are $C^{2}$ and that $f^{\prime\prime}\left( 1\right) >0$ and $g^{\prime\prime}\left( 1\right) >0.$ Assume that $\liminf_{t\rightarrow0}\frac{g\left( t\right) }{f\left( t\right) }>0$ and that $\liminf_{t\rightarrow\infty}\frac{g\left( t\right) }{f\left( t\right) }>0.$ Then there exists $\beta>0$ such that \begin{equation} D_{g}\left( P,Q\right) \geq\beta\cdot D_{f}\left( P,Q\right) \label{nederen} \end{equation} for all distributions $P,Q.$ \end{proposition} \begin{proof} The inequality $\liminf_{t\rightarrow0}\frac{g\left( t\right) }{f\left( t\right) }>0$ implies that there exist $\beta_{0},t_{0}>0$ such that $g\left( t\right) \geq\beta_{0}f\left( t\right) $ for $t<t_{0}.$ The inequality $\liminf_{t\rightarrow\infty}\frac{g\left( t\right) }{f\left( t\right) }>0$ implies that there exist $\beta_{\infty}>0$ and $t_{\infty}>0$ such that $g\left( t\right) \geq\beta_{\infty}f\left( t\right) $ for $t>t_{\infty}.$ According to Taylor's formula (note that we may assume $f^{\prime}\left( 1\right) =g^{\prime}\left( 1\right) =0$, since adding a multiple of $t-1$ to $f$ or $g$ does not change the corresponding divergence) we have \begin{align*} f\left( t\right) & =\frac{f^{\prime\prime}\left( \theta\right) }{2}\left( t-1\right) ^{2},\\ g\left( t\right) & =\frac{g^{\prime\prime}\left( \eta\right) }{2}\left( t-1\right) ^{2} \end{align*} for some $\theta$ and $\eta$ between $1$ and $t.$ Hence \[ \frac{g\left( t\right) }{f\left( t\right) }=\frac{g^{\prime\prime}\left( \eta\right) }{f^{\prime\prime}\left( \theta\right) }\rightarrow\frac{g^{\prime\prime}\left( 1\right) }{f^{\prime\prime}\left( 1\right) }\text{ for }t\rightarrow1. \] Therefore there exist $\beta_{1}>0$ and an interval $\left] t_{-},t_{+}\right[ $ around $1$ such that $\frac{g\left( t\right) }{f\left( t\right) }\geq\beta_{1}$ for $t\in\left] t_{-},t_{+}\right[ .$ The function $t\rightarrow\frac{g\left( t\right) }{f\left( t\right) }$ is continuous on the compact set $\left[ t_{0},t_{-}\right] \cup\left[ t_{+},t_{\infty}\right] $ so it has a minimum $\tilde{\beta}>0$ on this set. 
Inequality \ref{nederen} holds for $\beta=\min\left\{ \beta_{0},\beta_{1},\beta_{\infty},\tilde{\beta}\right\} .$ \end{proof} \section{Bounds for power divergences} As an example we shall determine the exact range for a pair of power divergences. We take \begin{align*} f\left( t\right) & =\phi_{2}(t),\\ g\left( t\right) & =\phi_{3}(t). \end{align*} In this case we have \begin{align*} D_{f}\left( \left( p,1-p\right) ,\left( q,1-q\right) \right) & =\frac{1}{2}\left( \frac{\left( p-q\right) ^{2}}{q}+\frac{\left( p-q\right) ^{2}}{1-q}\right) ,\\ D_{g}\left( \left( p,1-p\right) ,\left( q,1-q\right) \right) & =\frac{1}{6}\left( \left( \frac{p}{q}\right) ^{3}q+\left( \frac{1-p}{1-q}\right) ^{3}\left( 1-q\right) -1\right) . \end{align*} First we determine the image of the triangle. The derivatives are \begin{align*} \frac{\partial D_{f}}{\partial p} & =\frac{2}{2}\cdot\frac{p-q}{\left( 1-q\right) q}~,\\ \frac{\partial D_{f}}{\partial q} & =\frac{1}{2}\cdot\frac{\left( 2pq-q-p\right) \left( p-q\right) }{\left( 1-q\right) ^{2}q^{2}}~,\\ \frac{\partial D_{g}}{\partial p} & =\frac{-3}{6}\cdot\frac{\left( 2pq-q-p\right) \left( p-q\right) }{\left( 1-q\right) ^{2}q^{2}}~,\\ \frac{\partial D_{g}}{\partial q} & =\frac{2}{6}\cdot\frac{\left( \begin{array}[c]{c} pq+p^{2}+q^{2}-\\ 3pq^{2}-3p^{2}q+3p^{2}q^{2} \end{array} \right) \left( p-q\right) }{\left( q-1\right) ^{3}q^{3}}~. 
\end{align*} The determinant of the derivatives is \begin{multline*} \left\vert \begin{array}[c]{cc} \frac{\partial D_{f}}{\partial p} & \frac{\partial D_{g}}{\partial p}\\ \frac{\partial D_{f}}{\partial q} & \frac{\partial D_{g}}{\partial q} \end{array} \right\vert =\\ \frac{\left( p-q\right) ^{2}}{12q^{4}\left( 1-q\right) ^{4}}\left\vert \begin{array}[c]{cc} 2 & 3p+3q-6pq\\ 2pq-q-p & \left( \begin{array}[c]{c} 6pq^{2}-2p^{2}-2q^{2}\\ -2pq+6p^{2}q-6p^{2}q^{2} \end{array} \right) \end{array} \right\vert \\ =-\frac{1}{12}\left( \frac{p-q}{q\left( 1-q\right) }\right) ^{4}. \end{multline*} We see that the determinant of the derivatives is different from zero for $p\neq q$, so the interior of $\Delta$ is mapped one-to-one to the image. Hence we just have to determine the image of points on the boundary of $\Delta$ (or near the boundary if the map is undefined on the boundary). For $P=\left( 1,0\right) $ and $Q=\left( 1-q,q\right) $ we get \begin{align*} D_{f}\left( P,Q\right) & =\frac{1}{2}\left( q+\frac{q^{2}}{1-q}\right) =\frac{1}{2}\left( \frac{1}{1-q}-1\right) ,\\ D_{g}\left( P,Q\right) & =\frac{1}{6}\left( \frac{1}{\left( 1-q\right) ^{2}}-1\right) =\frac{1}{6}\frac{\left( 2-q\right) q}{\left( 1-q\right) ^{2}}. \end{align*} The first equation leads to \[ q=1-\frac{1}{2D_{f}+1} \] and hence \[ D_{g}=\frac{2}{3}D_{f}\left( D_{f}+1\right) . \] We have \[ \frac{g\left( t\right) }{f\left( t\right) }=\frac{{\frac{t^{3}-3(t-1)-1}{6}}}{{\frac{t^{2}-2(t-1)-1}{2}}}\rightarrow\infty\text{ for }t\rightarrow\infty. \] Hence all points $\left( 0,s\right) ,s\in\left[ 0,\infty\right) $ are in the closure of the range of $\left( P,Q\right) \rightarrow\left( D_{f},D_{g}\right) .$ By combining these two results we see that the range consists of the point $\left( 0,0\right) ,$ all points on the curve $\left( x,\frac{2}{3}x\left( x+1\right) \right) ,x\in\left( 0,\infty\right) $, and all points above this curve. 
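The derived shape of the range can be double-checked numerically: on a grid of two-point distributions every achieved pair lies on or above the curve, and the boundary is attained for $P=(1,0)$. A sketch (grid resolution is an arbitrary choice of ours):

```python
def d2(P, Q):
    """D_2(P,Q) = (1/2) sum (p-q)^2 / q, the phi_2-divergence."""
    return 0.5 * sum((p - q) ** 2 / q for p, q in zip(P, Q))

def d3(P, Q):
    """D_3(P,Q) = (sum p^3/q^2 - 1)/6, the phi_3-divergence."""
    return (sum(p ** 3 / q ** 2 for p, q in zip(P, Q)) - 1.0) / 6.0

def curve(x):
    """Boundary curve D_g = (2/3) x (x+1) derived above."""
    return 2.0 * x * (x + 1.0) / 3.0

for i in range(101):
    for j in range(1, 100):            # keep q in (0,1): both entries of Q positive
        P = (i / 100, 1 - i / 100)
        Q = (j / 100, 1 - j / 100)
        x, y = d2(P, Q), d3(P, Q)
        assert y >= curve(x) - 1e-6    # every sampled pair lies on or above the curve

# P = (1,0) attains the boundary exactly:
x, y = d2((1.0, 0.0), (0.25, 0.75)), d3((1.0, 0.0), (0.25, 0.75))
assert abs(y - curve(x)) < 1e-6
```

The tolerance only absorbs floating-point error; mathematically the inequality $D_{3}\geq\frac{2}{3}D_{2}(D_{2}+1)$ is an equality exactly on the boundary family $P=(1,0)$ (and, by symmetry, $P=(0,1)$).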
Similar results hold for any pair of power divergences, but for pairs other than $\left( D_{2},D_{3}\right) $ the computations become much more involved. Note that the R\'{e}nyi divergences are monotone functions of the power divergences, so our results translate easily into results on R\'{e}nyi divergences. More details on R\'{e}nyi divergences can be found in \cite{Erven2010}. \section{Acknowledgement} The authors thank Job Bri\"{e}t and Tim van Erven for comments on a draft of this paper. This work was supported by the European Network of Excellence and the GA\v{C}R grants 102/07/1131 and 202/10/0618. \setlength{\itemsep}{5pt} \bibliographystyle{ieeetr}
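The monotone relation just mentioned can be made explicit on a finite alphabet: by (\ref{4a}) we have $\sum_{j}p_{j}^{\alpha}q_{j}^{1-\alpha}=1+\alpha(\alpha-1)D_{\alpha}(P,Q)$, so the R\'{e}nyi divergence of order $\alpha$ (for $\alpha(\alpha-1)\neq0$) equals $\frac{1}{\alpha-1}\ln\left( 1+\alpha(\alpha-1)D_{\alpha}(P,Q)\right) $. A numerical sketch (the example distributions are ours):

```python
import math

def power_div(P, Q, alpha):
    # D_alpha(P,Q) for alpha(alpha-1) != 0; on a finite alphabet the
    # definition via phi_alpha collapses to this closed form.
    s = sum(p ** alpha * q ** (1 - alpha) for p, q in zip(P, Q))
    return (s - 1.0) / (alpha * (alpha - 1.0))

def renyi_div(P, Q, alpha):
    # Renyi divergence of order alpha != 1, from its usual definition.
    s = sum(p ** alpha * q ** (1 - alpha) for p, q in zip(P, Q))
    return math.log(s) / (alpha - 1.0)

P, Q = [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]
for a in (0.5, 2.0, 3.0):
    lhs = renyi_div(P, Q, a)
    rhs = math.log(1.0 + a * (a - 1.0) * power_div(P, Q, a)) / (a - 1.0)
    assert abs(lhs - rhs) < 1e-12
```

Since $x\mapsto\frac{1}{\alpha-1}\ln\left( 1+\alpha(\alpha-1)x\right) $ is monotone in $x$, joint-range statements for power divergences carry over to R\'{e}nyi divergences directly.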
Martinsa-Fadesa, S.A. was one of the largest real-estate companies in Spain, born from the 2007 merger of Fadesa and Martinsa. In 2008, barely a year after its formation, it filed the largest insolvency proceeding in the history of Spain, with a debt of 7,000 million euros. It was part of the G-14. In 2011 it entered insolvency proceedings again. In 2015, after failing to reach an agreement with its creditor banks, it entered liquidation, a process that began in September of that same year. History Purchase of Fadesa The company arose in 2007 from the merger of the businesses of Martinsa and Fadesa, following Martinsa's takeover bid (OPA) for Fadesa worth 4,045 million euros. Insolvency proceedings In July 2008 it filed for voluntary insolvency, with liabilities of around 7,000 million euros, setting a record in the history of Spain. Commercial Court number 1 of La Coruña was in charge of processing the case. Its main creditors were Spanish financial institutions. The creditors' agreement established two payment options: the first offered a period of 8 years to pay off the debt, plus a 2-year extension, with a payment calendar under which 15% of the debt would be converted into participating loans; the second offered a payment period of 5 years and a 70% write-off. On 6 January 2011 it exited insolvency proceedings, after two and a half years, once the agreement had been accepted by 73.69% of the creditor body. During the proceedings it delivered 5,700 homes and invoiced 1,340 million euros. In 2015 the creditor banks did not accept the financial restructuring offer designed by the company, and in September 2015 its dissolution began, ordered by an insolvency judge of a court in La Coruña. Financial statements Total debt: 6,905 million euros. Debt with financial institutions: 5,202 million euros (quarter of 2010). Shareholders The shareholder controlling the company is Fernando Martín. 
Minority shareholders include savings banks (Bancaja and Ahorro Corporación) and major business figures such as Dolores Ortega (niece of Amancio Ortega) and Jesús Salazar. Subsidiaries References External links The unbuilt homes Martinsa Fadesa's subsidiary in Mexico, in insolvency proceedings Real-estate companies of Spain Companies of Galicia Companies based in La Coruña
namespace DotNet.Highcharts.Enums
{
    #region DefaultSeriesType enum

    /// <summary>
    /// The default series type for the chart.
    /// </summary>
    public enum ChartTypes
    {
        Line,
        Spline,
        Area,
        Areaspline,
        Column,
        Bar,
        Pie,
        Scatter,
        Arearange,
        Areasplinerange,
        Columnrange,
        Gauge,
        Solidgauge,
        Boxplot,
        Waterfall,
        Funnel,
        Bubble,
        Heatmap
    }

    #endregion
}
The Voice & Data Warehouse is here to help increase the return on your Avaya IP500 BRI So8 Module investment. Avaya maintenance programs have their benefits, but these programs can be very costly, have long turnaround times, and the manufacturer will eventually stop supporting your equipment. If you are self-maintained, looking for an alternative to your current Avaya maintenance program, or Avaya has released an end-of-support notice for your equipment and you need an Avaya repair vendor, please contact us and let The Voice & Data Warehouse help you extend the useful life cycle of your Avaya IP Office equipment. The Voice & Data Warehouse will repair your Avaya IP500 BRI So8 Module at component level, making your expansion module look and perform like brand new. You may be having a problem with a port, or possibly your module will not power up. If you are experiencing these or any other problems with your Avaya IP500 BRI So8 Module and need a reliable source to repair your Avaya IP Office expansion module, protect your investment, and extend the life of your module, The Voice & Data Warehouse can help. We take our IP Office repair services seriously and work to provide the best customer service possible. Our expertise and knowledge of manufacturer's defects help prevent future problems with your IP500 BRI So8 Module. The IP500 BRI So8 module provides 8 S-Bus interfaces for Basic Rate ISDN devices, such as video conferencing, fax servers or ISDN telephones. For installations in a rack, this module requires the IP500 Rack Mounting Kit. The IP500 BRI So8 Module is functionally identical to the IP400 So8 Module. The IP500 BRI So8 expansion module supports both point-to-point and point-to-multipoint connections. A maximum of 10 terminal endpoint identifiers (TEIs) are supported on each bus.
Q: Can I set which .so to link in PHP scripts? I have several copies of the same ".so" file under different system directories. Can I set the link path in my PHP scripts so that I can link to a certain ".so"? A: See dl() and the Extension Loading Directives. A: You should specify which .so you want to load in your /etc/php.ini file.
You can read all of this at [Brain Discount](http://brain.discount). ## License Yeah, you can copy and republish any code under the MIT license. I retain copyright on the styling of the website. Feel free to learn anything from my code though.
<?php

namespace Google\Service\ManagedServiceforMicrosoftActiveDirectoryConsumerAPI\Resource;

/**
 * The "projects" collection of methods.
 * Typical usage is:
 *  <code>
 *   $managedidentitiesService = new Google\Service\ManagedServiceforMicrosoftActiveDirectoryConsumerAPI(...);
 *   $projects = $managedidentitiesService->projects;
 *  </code>
 */
class Projects extends \Google\Service\Resource
{
}

// Adding a class alias for backwards compatibility with the previous class name.
class_alias(Projects::class, 'Google_Service_ManagedServiceforMicrosoftActiveDirectoryConsumerAPI_Resource_Projects');
Jul 14, 2021 6:40 pm Netflix reportedly aiming to expand into video games in 2022, hires former EA and Facebook executive The delivery method is still a mystery. Cale Michael Image via Netflix Netflix is reportedly taking its first steps into the games industry, hiring former Electronic Arts Inc. and Facebook executive Mike Verdu to lead the effort, according to Bloomberg. Building on previous reports, this is something Netflix has looked into for a while now; it approached multiple video game industry experts back in May when it was considering a "bundle" of games. Now, the company plans to add video games to its service in some form starting in 2022. Bloomberg reports that Verdu, who previously worked at Facebook with developers to bring games to Oculus devices, will join Netflix as vice president of game development. He will work under the company's chief operating officer, Greg Peters. This move is an expansion that Netflix has likely been planning for years; back in 2019, the company said its main competition was Epic Games' Fortnite, not other streaming services. The company has also been expanding with more content based on video games, such as the already released DOTA: Dragon's Blood and The Witcher series, with a League of Legends animated series, Arcane, coming in the near future. Related: Netflix releases first clip from League of Legends animated series, Arcane The specifics of Netflix's plans in the games industry haven't been shared, but whatever this expansion adds to the streaming service will likely enter a cloud-streaming market that already features Amazon, Google, Microsoft, and potentially Walmart, too. If Netflix wants to compete against these pre-existing services, specifically Xbox Game Pass, it will need to launch with some heavy-hitting offers and a plan to grow at a sustainable rate. 
This could also complicate how Netflix operates on iOS devices, since Apple doesn't allow Game Pass or other similar services to host apps in the App Store. Rather, the services have to offer alternative access through iOS browsers.
Q: MOSS2007, set Search Incremental Crawl to night I want to set the incremental crawl to run at night but still don't fully understand how to do it. Microsoft says: If you want to crawl more frequently than once a day, select the Repeat within the day box and type the number of minutes to wait between crawls in the Every box and type the total number of minutes in which to repeat the crawl within the same day in the For box. For example, if you select Repeat within the day and type 60 in the Every box and 720 in the For box, an incremental crawl of the content source you are configuring starts every 60 minutes up to 720 minutes (12 hours) after the first scheduled incremental crawl for this content source ran. https://technet.microsoft.com/en-us/library/cc263373(v=office.12).aspx But I don't understand it. I want to * *Turn off full crawl (it is already indexed) *Set incremental crawl to night - for example, between 9 PM and 8 AM How can I realize this with these options? A: The configuration seems to be alright. Full crawl schedule is set to 'none' and for incremental crawl, you specified everything correctly. If you want it to run just once per day, untick the "Repeat within the day" box. If you want it to run from 9 PM to 8 AM, keep the box ticked and set values for "Every" and "For". For example: every 5 minutes for 660 minutes (11 hours, from 9 PM to 8 AM).
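As a quick sanity check of the Every/For arithmetic above, here is a small Python sketch (a hypothetical helper, not part of SharePoint) that lists the minute offsets at which incremental crawls would start:

```python
def crawl_start_offsets(every_min, for_min):
    # Minute offsets (after the first scheduled crawl) at which
    # incremental crawls begin, per the "Every" and "For" boxes.
    return list(range(0, for_min, every_min))

# Microsoft's example: every 60 minutes for 720 minutes
print(len(crawl_start_offsets(60, 720)))  # 12 crawl starts (0, 60, ..., 660)

# The answer's night schedule: every 5 minutes for 660 minutes (9 PM to 8 AM)
print(len(crawl_start_offsets(5, 660)))   # 132 crawl starts
```

This models the wording "starts every 60 minutes up to 720 minutes after the first scheduled crawl"; whether SharePoint also fires a crawl exactly at the 720-minute mark is an assumption not settled by the documentation quoted above.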
Does a NASA Photo Show 'Blue Dunes' on Mars? The picture was produced by Odyssey, the longest-serving spacecraft orbiting Mars. Bethania Palma Image via NASA A NASA photograph shows actual blue-colored dunes on Mars. NASA released a picture in April 2021 depicting dunes colored bright blue surrounded by the red surface of Mars. The picture didn't show actual colors on Mars — the colors represented variance in temperature on the surface. On April 8, 2021, NASA released a photograph that appeared to show bright, cerulean blue dunes surrounded by the red surface of Mars. The picture can be seen in the tweet posted by Space.com, a space and astronomy news website: Weird 'blue' dunes speckle the surface of Mars in NASA photo https://t.co/Pu7LnCm5Eo pic.twitter.com/QuGQez7gRX — SPACE.com (@SPACEdotcom) April 14, 2021 The picture is beautiful, but it doesn't show the actual colors on the surface of the Red Planet. NASA pointed out that the picture is a "false color image" that depicts temperature, not color. "In this false-color image, areas with cooler temperatures are recorded in bluer tints, while warmer features are depicted in yellows and oranges. Thus, the dark, sun-warmed dunes glow with a golden color," per NASA. Per Space.com, the picture was presented as part of a series marking the 20th anniversary of the Odyssey, the "longest-working Mars exploration craft in history." The picture is a "combination of images taken between December 2002 and November 2004 by the Thermal Emission Imaging System instrument on the Mars Odyssey orbiter."
The Retreat #3 By: Joe McKinney, Craig DiLouie, Stephen Knight Narrated by: R.C. Bray Series: The Retreat Series , Book 3 Categories: Mysteries & Thrillers, Military By: Joe McKinney,Craig DiLouie,Stephen Knight By: Stephen Knight, Craig DiLouie, Joe McKinney Emerging from the smoking ruins of Boston, Lieutenant Colonel Harry Lee leads the First Battalion, 55th Infantry Regiment on a perilous trek to its besieged home post of Fort Drum. Along the way, the unit must battle through the legions of diseased killers lying in wait, evading clever ambushes and fighting through terrifying attacks. Lee struggles to hold the battalion together while epitomizing its motto, "Bounding Forward". Extinction Ashes Extinction Cycle: Dark Age, Book 3 By: Nicholas Sansbury Smith, Anthony J. Melchiorri Beyond the reach of the nuclear infernos, the masterminds survived and continue their assault on outposts across the Allied States. Timothy Temper and Sergeant Ruckley narrowly escaped one of those attacks. On the run from hostile forces, they fight to get an urgent message to command - the monsters aren't the only threat. Their human allies have a nuclear weapon. Refusing to hide underground, President Jan Ringgold heads to New York. Doctor Kate Lovato and her team are there working to infiltrate and sabotage enemy communications. Phenomenal book!! By Natalie @ ABookLoversLife on 01-01-20 Extinction Inferno The government said the Variants were dying off.... That the beasts would be extinct in a matter of years.... That the Allied States had returned to prosperity and freedom.... The government was dead wrong. Deep under the cities, the Variants weren't just hiding, they were breeding. While the human survivors of the Extinction Cycle built outposts and brought back industries, the monsters were building something of their own. Phenomenal!!! 
The Retreat, Book 1 By: Craig DiLouie, Stephen Knight, Joe McKinney In Boston, a battalion of light infantry struggles to maintain order as a new disease turns people into sadistic, laughing killers. As the numbers of infected grow, the battalion loses control, and the soldiers find themselves fighting for their lives against the very people they once swore an oath to protect. During the ensuing collapse, the lost battalion learns the Army is still holding out in Florida, which has been cleared of the infected. Harry Lee, its commander, decides the only hope for his men is to get there. A great book helped by the "Bray Effect" An Arisen Standalone By: Michael Stephen Fuchs One Tier-1 Naval Special Warfare Operator, trained at the highest levels of any military, with 20-plus years of operational experience. One ex-cop civilian survivalist, who lived through two years of post-apocalyptic hell, unlike seven billion of her fellow humans. And an entire continent heaving with 400 million dead guys. To save the lives of his children, and rejoin Alpha team and the fight to save humanity, Homer must journey across a thousand miles of undead North America with Sarah Cameron, battling heavily armed bands of marauders who shoot first and ask questions never. Homer's odyssey revealed!! By Areana S on 07-27-19 Extinction Shadow Narrated by: R. C. Bray Eight years ago, an engineered virus ravaged the globe, infecting and transforming humans into apex predators called Variants. Now, almost a decade after the end of the war, civilization has slowly clawed toward recovery. But evil and intelligent forces dwell in the shadows with the starving beasts, scheming to restart the extinction cycle and end humanity forever. And once again, Beckham, Fitz, and Kate will rise to fight them, joining forces with new heroes to try and save what's left of the world. Highly, Highly Recommend!! 
Tin Man: A Galaxy's Edge Prequel By: Jason Anspach, Nick Cole In the wilds of a jungle planet, the Legion fights in brutal combat as Republic marines fly their SLICS from one tragedy to the next. H292, a repurposed warbot, shows the heart of a hero as he wades into the battle not to destroy - but to save. Arisen series, Book 14 This is it. Alpha team goes once more into the breach. On what will be their final missions ever in the Zulu Alpha - and the fight of their lives — Homer & Ali, and Predator & Juice, launch out into the terminal post-Apocalypse, as London is overrun by the very last rush of death covering the entire planet. Neither mission can fail, but can all four teammates survive? Or will some go down safeguarding the lives of those they love more than life itself? Just Stop at Book 13 - This Nearly Ruins The Serie By Rochambeaux on 07-08-18 The Retreat Series This set includes the first three audiobooks from the Retreat series: Pandemic, Slaughterhouse, and Die Laughing. Frighteningly Not Funny Clowns By John on 07-23-18 By: Jeremy Robinson For Henry, a 17-year-old who feels no fear, the day starts like any other - homeless and alone on the streets of Boston. For Sarah, a 20-year-old college dropout, it's an early morning serving donuts and coffee to commuters at North Station. Fate brings them together at the scene of a bank robbery, which they foil together, along with a mysterious and wealthy woman named Helen, who offers to reward them for their bravery. THIS... IS... SPARTA! By Alex on 11-26-19 Arisen, Book 13 In the north, the collapsed section of the Wall is defended only by the decimated remains of 2 PARA. Even reinforced by Jameson and the exhausted Royal Marines of One Troop, it's a doomed delaying action at best - and the last choke point in the entire ZA faces an incoming tide of dead stretching to the ends of the Earth. 
Meanwhile, hunkered down at CentCom, the unified ranks of our heroes, led by Ali and Fick, work furiously to fortify and reinforce what will be humanity's final fallback position - forever. Frick & Pred By J. Gordon on 03-08-18 These Dead Lands: Immolation By: Stephen Knight, Scott Wolf The United States of America is falling before the armies of the dead. Leading the sole survivors of the US Army's 10th Mountain Division out of the overrun city of New York, Captain Phil Hastings heads for the safety of Fort Indiantown Gap, a National Guard training facility deep in the woodlands of Pennsylvania. Joining with other remnants of the military, government, and civilian communities, Hastings and his men must try to keep the tsunami of corpses from taking over the world and plan the resurrection of the nation. An "Arisen-ish" great listen, keep em coming! By M. Bowles on 03-02-18 Expeditionary Force, Book 9 By: Craig Alanson After saving the world many times, the Merry Band of Pirates have accepted the inevitable: Earth is doomed. All they can do is try to bring a few thousand people to safety, before vicious aliens arrive to destroy humanity's homeworld. No. There is one other thing they can do: hit the enemy so hard that the aliens will regret they ever heard of humans. Lee Harden, Book 1 By: D.J. Molles Narrated by: Christian Rummel It's been three years since Lee Harden emerged from his bunker into a world gone mad. Governments have fallen. And new ones have taken their place. In the United Eastern States, Lee and his team of battle-hardened operatives walk a tightrope of survival, keeping their fledgling society safe from the creatures beyond their gates - and from the enemies within. Then, an ambush in hostile territory reveals a traitor in their midst - and leaves Lee barely clinging to life. Wounded and on the run, Lee and his team race to uncover the leak before it's too late. 
Truly Exceptional By smb072 on 10-27-18 Extinction Red Line The Extinction Cycle By: Nicholas Sansbury Smith, Tom Abrahams Narrated by: Bronson Pinchot For a dozen years, villagers along the ancient Da River in Vietnam have feared a nightmarish creature who hunts and consumes human flesh. Only in whispers do they mention its name, The White Ghost. To the United States military, this creature has a different name - Marine Lieutenant Trevor Brett, the chemically engineered experiment gone wrong that they will do anything to hide. Sole survivor of his platoon, Brett has stalked the jungle for prey. But the men who made him into a monster are searching for him, and when they find him, the line between hunter and hunted will be blurred. The Prequel we have been waiting for! By Jorge on 12-20-18 The Age of Embers (A Post-Apocalyptic Survival Thriller) The Age of Embers, Book 1 By: Ryan Schow Narrated by: Kevin Pierce The end is almost here, are you ready? A high-tech nightmare. A crippling attack on America and her infrastructure. Are humans in their final days? When a sudden, unexplained spasm of violence rocks the country from coast to coast, when our military is hijacked and the president can't be found, the survivors will be left to wonder if this is the start of a new world war, or if this is the last chapter of humanity itself. Undercover DEA agent, Fire Dimas, is drowning in problems: an unrelenting job, a difficult marriage and a teenage daughter with suitors of the worst kind. Very gritty and explicit, BUT SO WORTH IT! By hamandeggs74 on 09-03-19 Riker's Apocalypse, Book 1 By: Shawn Chesser Narrated by: Adam Paul Army veteran Lee Riker is staying in an Atlanta shelter and supporting himself with the odd carpentry job when his sister, Tara, summons him home to Indiana for the reading of their mother's will. Riker boards a Greyhound bus in Atlanta with his duffel bag, less than $200 to his name, and a secret he must protect at all costs. 
Riker makes it to Middletown only to learn his sister has recently witnessed a gruesome death. Insisting she saw the victim rise from a pool of his own blood to attack the Samaritan rendering aid, Tara floats the idea that the man may have been a zombie. Great Book, ... what a Promise By Brian Adler on 08-04-18 Lost Valley By: Walt Browning John Eric Carver and Shrek are a retired Navy SEAL war dog team, now living in the mountains outside of San Diego. Both man and dog thought their life was settled, finding peace on a 40-acre ranch. But a mutated virus changes everything. Taking refuge in a nearby Boy Scout camp, John leads a group of teens and their parents as they are forced to deal with infected creatures that are consuming the world. Lost Valley takes the listener through the first months of the apocalypse that threatens humanity. took a chance By lawrence on 04-24-19 Episode three of the best-selling The Retreat series, from acclaimed horror writers Joe McKinney, Craig DiLouie, and Stephen Knight. The long flight out of Boston. After weeks of non-stop fighting across an America overrun by fearless, shrieking, mad killers, and a daring assault to retake their unit's home base of Fort Drum, Lt. Colonel Harry Lee and the men of the First Battalion, 55th Infantry Regiment have made it to the outskirts of Philadelphia. The blood of the innocent. Along the way, they've picked up thousands of refugees, innocents in need of protection against impossible odds. Shelter from the storm. The City of Brotherly Love is currently under the protection of General Anthony Bell, commander of the famed 56th Stryker Brigade, the Independence Brigade. But in a world gone mad, nothing is as it seems. And when Bell has Lee arrested and put up on charges, it's up to Lee's right-hand man, Major Chris Walker, to pick up the pieces. Can he snatch Lee from the hangman's noose, and can he guide his troops and his civilian charges through the gathering horde of maniacal killers? 
Or will they all die laughing? ©2015 Craig DiLouie, Stephen Knight, Joe McKinney (P)2018 Blue Heron Audio Dead City Flesh Eaters The Hospital: The FREE Short Story: The First Mountain Man Story Russell Austin Arenz III Great story but way to short ! Can't wait to get the next one. Dr A
Q: update_attributes succeeds even if the password is nil I have a reset password page on which the user has to fill in a password and password confirmation, but even if he leaves them blank and clicks submit, he is redirected to the success page. My second problem: if the user fills in only the confirm password field and skips the password field, he is still redirected. I don't understand why @user.update_attributes is not failing validation.

[user.rb]

class User < ActiveRecord::Base
  validates_presence_of :password
end

[users_controller]

def change_password
  @user = User.find_by(reset_password_token: params[:users_reset_password_path][:token])
  if @user.update_attributes(:name => @user.name, :email => @user.email, :status => @user.status,
                             :password => params[:users_reset_password_path][:password],
                             :password_confirmation => params[:users_reset_password_path][:password_confirmation])
    flash[:notice] = "password successfully updated"
    redirect_to users_path
  else
    @token = @user.reset_password_token
    render users_reset_password_path
  end
end

[users/_reset_password.html.erb]

<div id="nav-col-submenu"></div>
</div>
<div id="content-wrapper">
  <div class="row">
    <div class="col-lg-12">
      <div class=" clearfix">
        <div id="login-box">
          <%= render :partial => "shared/error_messages", :locals => { :errors => @user.errors } %>
          <div id="login-box-holder">
            <div class="row">
              <div class="col-xs-12">
                <header id="login-header">
                  <div id="login-logo">
                    <img src="/assets/gionee_logo1.png" alt=""/>
                  </div>
                </header>
                <div id="login-box-inner">
                  <%= form_for :users_reset_password_path do |f| %>
                    <div class="input-group">
                      <span class="input-group-addon"><i class="fa fa-user"></i></span>
                      <%= f.password_field :password, class: "form-control", placeholder: "Password" %>
                    </div>
                    <div class="input-group">
                      <span class="input-group-addon"><i class="fa fa-key"></i></span>
                      <%= f.password_field :password_confirmation, class: "form-control", placeholder: "Confirm Password" %>
                    </div>
                    <%= f.hidden_field :token, value: if params[:token] != nil then params[:token] else @token end %>
                    <div class="row">
                      <div class="col-xs-12">
                        <%= f.submit "Reset Password", class: "btn btn-success col-xs-12" %>
                      </div>
                    </div>
                  <% end %>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
<%= render 'page_js/user_search'%>

A: Try something like this:

validates :password, presence: true, length: { minimum: 6 }
DIR=$1
COMMIT=$2

function do_commit {
    if [ $COMMIT == 1 ]; then
        git commit $1 --author="uncrustify <pat@moreproductive.org>" \
            --message="uncrustify $1";
    fi
}

function format_cpp {
    echo 'format_cpp' $1
    uncrustify --no-backup -c uncrustify_cpp.cfg $1
    do_commit $1
}

function format_header {
    echo 'format_header' $1
    uncrustify --no-backup -c uncrustify_headers.cfg $1
    do_commit $1
}

PDEFILES=`find $DIR -name '*.pde' -print`
CPPFILES=`find $DIR -name '*.cpp' -print`
CFILES=`find $DIR -name '*.c' -print`
HFILES=`find $DIR -name '*.h' -print`

for f in $PDEFILES $CPPFILES $CFILES; do
    format_cpp $f
done

for f in $HFILES; do
    format_header $f
done
IETF > About Essential Tremor > Publications Library > Tremor Gram – January 2020 Tremor Gram – January 2020 More Money Dedicated to ET Research Research is key to finding better treatments and a cure for essential tremor, and the IETF is dedicated to encouraging and supporting these efforts. Recently, the IETF Board of Directors approved increasing the cap on research grants offered by the IETF from $25,000 to $50,000 (or up to $100,000 annually). This increase may incentivize more research in the area of essential tremor. Since 2001, the IETF has dedicated $850,000 toward ET research. Research grants have funded studies addressing the nosology, etiology, pathogenesis and treatment of ET. Scientists and researchers have delved into the areas of genetic variants, the role of ion channels in ET, understanding how and why neurons in ET patients work and the effect of cannabidiol on ET. They have helped support a brain bank where the brains of people with ET can be studied post mortem. An overview of all IETF funded research is available on the website. Applications are currently being taken for 2020 research grants. Grants will be announced in June. Congrats to Our Latest Scholarship Recipients! The IETF is proud to announce its spring 2020 scholarship recipients through the Catherine Rice Scholarship Program: Robbie Holder, Georgia Southern University, Statesboro Alyssa Jones, Trinity University, San Antonio, TX Colin Pool, University of California Berkeley Madison Young, Arkansas Tech University, Russellville, AR The IETF's scholarship program started in 2011 to recognize the achievements of students dealing with essential tremor. It was named to honor longtime IETF Executive Director Catherine Rice who was passionate about providing support for students with ET. To date, the IETF has awarded 63 college scholarships totaling more than $43,000. Applications are open now for fall scholarships. 
Any student with essential tremor, pursuing a higher education degree (including a master's or doctorate) from a licensed, accredited educational institution or trade school may apply. 17,000 References to ET in Wiley Online Library If you're looking for studies and research on movement disorders, you might check out the Wiley Online Library. It includes one of the largest and most authoritative collections of online journals, books and research resources covering health, life, social and physical sciences. There are more than 1,600 journals, 22,000 online books and hundreds of reference works. Key in "essential tremor" in the search feature on the homepage and you will see there are more than 17,000 references to ET. You can read summaries and abstracts for free. Some journal articles may require a fee to download, if you're looking for the full study. The key takeaway here is that research into ET is happening. We have included a link to this site on our website under our research section. Medtronic Receives CE Mark Approval for the Percept™ PC Neurostimulator DBS System Should You Disclose Your Illness to Your Employer? How Essential Tremor is Diagnosed and Treated
Q: Getting DLL load failed while importing keras pip install keras import keras gives the following error: Using TensorFlow backend. ERROR:root:Internal Python error in the inspect module. Below is the traceback from this internal error. ERROR:root:Internal Python error in the inspect module. Below is the traceback from this internal error. Traceback (most recent call last): File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\ProgramData\Anaconda3\lib\imp.py", line 242, in load_module return load_dynamic(name, filename, file) File "C:\ProgramData\Anaconda3\lib\imp.py", line 342, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "", line 1, in import keras File "C:\ProgramData\Anaconda3\lib\site-packages\keras__init__.py", line 3, in from . import utils File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils__init__.py", line 6, in from . import conv_utils File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\conv_utils.py", line 9, in from .. 
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 1, in <module>
    import keras
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\__init__.py", line 3, in <module>
    from . import utils
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\__init__.py", line 6, in <module>
    from . import conv_utils
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\conv_utils.py", line 9, in <module>
    from .. import backend as K
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\__init__.py", line 1, in <module>
    from .load_backend import epsilon
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\load_backend.py", line 90, in <module>
    from .tensorflow_backend import *
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 5, in <module>
    import tensorflow as tf
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py", line 101, in <module>
    from tensorflow_core import *
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\__init__.py", line 40, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py", line 50, in __getattr__
    module = self._load()
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow\__init__.py", line 44, in _load
    module = _importlib.import_module(self.name)
  File "C:\ProgramData\Anaconda3\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\User\AppData\Roaming\Python\Python37\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\ProgramData\Anaconda3\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\ProgramData\Anaconda3\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.

Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors for some common reasons and solutions.
Include the entire stack trace above this error message when asking for help.

A: Nothing to do with Keras — it's your installation of TensorFlow that's not correct. Try to run python -c "import tensorflow". It should fail in your current situation.
If you're using TensorFlow-GPU, installation of the required third-party software can be tricky. You'll find instructions here: https://www.tensorflow.org/install/gpu

For TensorFlow-CPU, installation may sometimes fail if your processor does not support AVX instructions (at a certain point, TensorFlow builds started using AVX, so if your processor does not support it, the import will crash). In this case, you need to install an older version. I'm not sure, but I think TensorFlow 1.12 is the latest version from before they enabled AVX. Give it a try with: pip install tensorflow==1.12
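Before downgrading, you can test the AVX hypothesis by inspecting the CPU's feature flags. A minimal sketch (the helper function below is my own illustration; it parses Linux-style /proc/cpuinfo text, and on Windows you would obtain the equivalent flag list through the third-party py-cpuinfo package instead):

```python
import os

def cpu_supports_avx(cpuinfo_text: str) -> bool:
    """Return True if a 'flags' line in /proc/cpuinfo-style text
    lists the AVX instruction-set extension."""
    for line in cpuinfo_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "flags":
            # Flags are space-separated tokens, e.g. 'fpu vme avx sse2'
            return "avx" in value.split()
    return False

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("AVX supported:", cpu_supports_avx(f.read()))
```

If this reports no AVX support, the DLL load failure is consistent with the AVX explanation and an older (or AVX-free community) build is the way to go.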
World 'must commit to ending South Sudan war' as violence continues and hunger looms

4.6 million people face hunger in war-torn South Sudan, where civil war has killed thousands.

By Ludovica Iaccino, November 21, 2016 15:03 GMT

The humanitarian crisis in South Sudan is worsening due to the deterioration of a conflict that erupted in 2013. Latest estimates suggest at least 4.6 million people across the country are facing hunger, while hundreds of thousands keep crossing into neighbouring nations to flee the violence.

South Sudan became the world's newest nation when it declared independence from Sudan in 2011. However, the country descended into civil war in 2013, when President Salva Kiir, of the Dinka ethnic group, fired his deputy Riek Machar – from the Nuer group – and his cabinet. Ethnic violence spread, with militia groups carrying out attacks in villages and areas known to be inhabited by either the Dinka or the Nuer. An estimated 50,000 people have been killed, amid allegations of crimes against humanity committed by both sides, including rape, torture and the use of child soldiers.

Earlier this month, the World Food Programme (WFP) warned that the country was witnessing an "unprecedented" level of malnutrition, already above the 15% "emergency" threshold in seven of South Sudan's 10 states. In Unity and Northern Bahr el Ghazal states, the malnutrition level was about 30%. People fleeing the violence are leaving their crops to rot in the fields. In addition, the heavy rainy season has made some roads inaccessible, hindering food deliveries. Many fear hunger will deepen as the conflict has now spilled into the Equatoria region, considered one of South Sudan's breadbaskets.
Photo: People receive food aid and other items such as soap, plastic mats and buckets from a recent ICRC (International Committee of the Red Cross) delivery in Minkammen, Awerial County, 16 miles south of Bor (Nichole Sobecki/AFP/Getty).

'There is hope in South Sudan'

"Food insecurity is not only getting larger in numbers year on year, but it is getting deeper and it is spreading to areas that were previously considered stable," Jeremiah Young, policy, advocacy and peace-building advisor at World Vision South Sudan, told IBTimes UK.

Kiir and Machar have agreed on several peace deals, but have failed to control their troops, who have broken every ceasefire signed since 2014. Machar, who leads the opposing faction, the Sudan People's Liberation Movement-in-Opposition (SPLM-IO), originally left South Sudan in 2013. His return, and his reinstatement as vice president in April, had restored hopes for the implementation of the peace process. However, Machar fled again following deadly fighting in July and was replaced by Taban Deng Gai. A recent UN probe concluded that its mission in South Sudan (Unmiss) had failed to protect civilians in July due to "a lack of leadership on the part of key senior mission personnel". The probe resulted in the sacking of the Unmiss chief. The dismissal angered Kenya, which decided to withdraw its troops from the UN peacekeeping mission.

"The western Equatoria region has vast potential to not only provide food for the country itself, but also to be a food exporter to the region," he continued. "The longer the insecurity remains, the less we are going to be able to develop those potential opportunities for the country to not have to always be suffering from food insecurity, relying on aid and developing as a new nation."
Young explained that the South Sudanese section of the global charity is assisting civilians with holistic programmes ranging from food distribution to community capacity building. "We do cover the whole sphere of the various types of intervention that match the vulnerabilities that are being experienced in South Sudan," he said. Young added that international donors and humanitarian organisations were at a crucial point in their response to the crisis, and called on stakeholders to "really commit themselves to South Sudan". "There is hope, and South Sudanese people express this hope," he concluded.
Fanangat is an island in the Federated States of Micronesia. It lies in the municipality of Weno-Choniro, Chuuk State, in the western part of the country, 700 km west of Palikir. The climate is moderate. The average temperature is 20 °C. The warmest month is January, at 24 °C, and the coldest is August, at 17 °C. Average precipitation is 3,807 millimetres per year. The wettest month is August, with 448 millimetres of rain, and the driest is January, with 165 millimetres.

References

Islands of Chuuk State
The Higgs boson is a massive scalar elementary particle in the Standard Model of particle physics. It plays a key role in explaining the origin of the mass of the other elementary particles, in particular the difference between the massless photon and the very heavy W and Z bosons. The masses of elementary particles, and the differences between electromagnetism and the weak interaction, are decisive in many microscopic processes, and so the Higgs boson has a significant influence on our universe.

History

The Higgs boson was first predicted in 1964 by the British physicist Peter Higgs, building on ideas developed with Philip Anderson and, independently, by several other physicists. On 4 July 2012, at the ICHEP2012 conference in Melbourne, Australia, the discovery of a new boson whose properties are consistent with the Higgs boson was announced, based on data from the ATLAS and CMS experiments at CERN. On 14 March 2013, CERN scientists confirmed the discovery on the basis of further experiments. According to their findings, the Higgs boson decays through four modes: into photons, W bosons, Z bosons, and tau leptons. In August 2018, physicists of the ATLAS experimental programme at the CERN research centre demonstrated the decay of the Higgs boson into a b quark and a b antiquark.

Name

The Higgs boson is often referred to in the mass media as the "God particle", after the title of Leon Lederman's book (The God Particle: If the Universe Is the Answer, What Is the Question?). While such a name increases media interest in particle physics and the Large Hadron Collider, many scientists reject it. In an effort to rename the particle, a committee of physicists chose the name "champagne boson" as its most popular designation.

Properties

The Higgs boson cannot be observed directly, because its lifetime is too short. It is, however, possible to detect the products of its decay and reconstruct the properties of the original particle from them. Without the Higgs boson, all the particles of the Standard Model would fly through the universe at the speed of light, and they could not form atoms, objects, planets, stars, and so on.

Links

References

Related articles
Classical Higgs field

External links
Confirmed! Newfound Particle Is the Higgs
CARENA, M.; GROJEAN, C.; KADO, M.; SHARMA, V.: Status of Higgs Boson Physics () – a current overview of the physics of the Higgs boson (in English)

Elementary particles
At first, the logic of my friend's argument was quite compelling. Their reasoning was that four cores would consume more power than a single core and, because multithreaded performance gains aren't linear, power must be wasted. Not wanting to concede defeat without at least looking into the theory a bit, I did some research into power management in modern consumer electronics processors. I expected to find that processor power consumption increases with operating frequency and, indeed, that is the case. What surprised me was that, thanks to some very clever power management techniques built into modern silicon, the internal voltage is also varied depending upon the processing load. The combined effect of managing both the voltage and the frequency is that running four cores at a little under half the speed of one core running flat out uses less energy overall. Although this halves the performance of each core, which would reduce the speed of a traditional browser, Flow's ability to use all four cores means that it can deliver the same user experience using less energy. SoC providers don't publish their power consumption data, but I was able to find fairly detailed power measurements for an ARM based mobile phone device. I used this data to calculate the power consumption of a single core when providing enough DMIPS for a traditional browser to deliver a good user experience. I then compared this with the power that Flow would consume when delivering the same user experience. Because Flow uses all four cores, it requires a lower level of DMIPS per core which means the same user experience is delivered with less power. The results showed that the power saving delivered through Flow's efficient use of multicore silicon was just over 36% – or, in the case of the smart watch that started the conversation, four days between recharges rather than three. 
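The arithmetic behind this can be sketched with the standard CMOS dynamic-power model, P ≈ cores × C × V² × f, under the idealised assumption (mine, not taken from the measured handset data) that DVFS scales the supply voltage linearly with frequency:

```python
def dynamic_power(cores: int, freq: float, capacitance: float = 1.0) -> float:
    """Idealised CMOS dynamic power: P = cores * C * V^2 * f,
    with supply voltage assumed proportional to clock frequency (DVFS),
    in units where V = freq and C = 1."""
    voltage = freq  # idealised DVFS: voltage tracks frequency
    return cores * capacitance * voltage ** 2 * freq

# One core flat out vs. four cores at half the clock (same total throughput):
single_core = dynamic_power(cores=1, freq=1.0)  # relative power 1.0
quad_core = dynamic_power(cores=4, freq=0.5)    # 4 * 0.5^2 * 0.5 = 0.5
saving = 1.0 - quad_core / single_core          # 50% in the ideal model
```

The idealised model predicts a 50% saving; the measured figure above is lower (about 36%), plausibly because real devices also have static leakage power and the voltage cannot scale all the way down with frequency.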
This was more than enough to convince my friend that our new browser would benefit industries beyond Ekioh's established TV market. A longer-form version of my findings, including a useful graph of power consumption versus processing performance, is detailed here.
\section{Introduction} \longpaper{\subsection{Motivation}} \subsubsection{The main questions.} Quantum communication offers the possibility of distributed computation with extraordinary \emph{provable\/} savings in communication as compared with classical communication (see, e.g., \cite{Regev:2011} and the references therein). Most often, if not always, the savings are achieved by protocols that assume access to \emph{noiseless\/} communication channels. In practice, though, imperfection in channels is inevitable. Is it possible to make the protocols robust to noise while maintaining the advantages offered by quantum communication? If so, what is the cost of making the protocols robust, and how much noise can be tolerated? In this article, we address these questions in the context of quantum communication protocols involving two parties, in the low noise regime. Following convention, we call the two parties Alice and Bob. \subsubsection{Channel coding theory as a special case.} \label{sec:channel-coding} In the special case when the communication is one-way (say, from Alice to Bob), techniques for making the message noise-tolerant, via error correcting codes, have been studied for a long time. Coding allows us to simulate a noiseless communication protocol using a noisy channel, under certain assumptions about the noise process (such as having a memoryless channel). Typically, such simulation is possible when the \emph{error rate\/} (the fraction of the messages corrupted) is lower than a certain threshold. A desirable goal is to also maximize the \emph{communication rate\/} (also called the \emph{information rate\/}), which is the length of the original message, as a fraction of the length of its encoding.
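Concretely, in the standard notation of block codes, a code that encodes $k$-symbol messages into $\ell$-symbol codewords has communication rate
\[
R \;=\; \frac{k}{\ell}\,,
\]
and an error rate of $\epsilon$ means that up to an $\epsilon$ fraction of the $\ell$ transmitted symbols may be corrupted.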
In the classical setting, Shannon established the capacity (i.e., the optimal communication rate) of \emph{arbitrarily accurate\/} transmission, in the limit of \emph{asymptotically large\/} number of channel uses, through the Noisy Coding Theorem~\cite{Shannon48a}. Since then, researchers have discovered many explicit codes with desirable properties such as good rate, and efficient encoding and decoding procedures (see, for example,~\cite{Stolte:2002,Arikan:2009}). Analogous results have been developed over the past two decades in the quantum setting. In particular, capacity expressions for a quantum channel transmitting classical data~\cite{Hol98,SW97} or quantum data~\cite{Lloyd97,Shor02,Dev05} have been derived. Even though it is not known how we may evaluate these capacity expressions for a general quantum channel, useful error correcting codes have been developed for many channels of interest (see, for example,~\cite{CS96-goodcode,CRSS97,BDSW96,Bombin15}). Remarkably, quantum effects give rise to surprising phenomena without classical counterparts, including \emph{superadditivity\/}~\cite{DSS98,Hastings2009}, and \emph{superactivation\/}~\cite{SY09}. All of these highlight the non-trivial nature of coding for noisy quantum channels. \subsubsection{Communication complexity as a special case.} In general two-party protocols, data are transmitted in each direction alternately, potentially over a number of rounds. In a computation problem, the number of rounds may grow as a function of the input size. Such protocols are at the core of several important areas including distributed computation, cryptography, interactive proof systems, and communication complexity. For example, in the case of the Disjointness function, a canonical task in the two-party communication model, an $n$-bit input is given to each party, who jointly compute the function with as little communication as possible. 
The optimal quantum protocol for this task consists of~$\Theta\br{\sqrt{n\,}}$ rounds of communication, each with a constant length message~\cite{BCW98, HdW:2002, aaronson:2003}, and such a high level of interaction has been shown to be necessary~\cite{KNTZ07, JainRS:09, BGK+15}. Furthermore, quantum communication leads to provable advantages over the classical setting, without any complexity-theoretic assumptions. For example, some specially crafted problems (see, for example,~\cite{Raz99,Regev:2011}) exhibit exponential quantum advantages, and others display the power of quantum interaction by showing that just one additional round can sometimes lead to exponential savings~\cite{KNTZ07}. \subsubsection{The problem, and motivation for the investigation.} \label{sec:problem} In this paper, we consider two-party interactive communication protocols using \emph{noisy\/} communication. The goal is to effectively implement an interactive communication protocol to arbitrary accuracy despite noise in the available channels. We want to minimize the number of uses of the noisy channel, and the complexity of the coding operations. The motivation is two-fold and applies to both the classical and the quantum setting. First, this problem is a natural generalization of channel coding from the 1-way to the 2-way setting, with the ``capacity'' being the best ratio of the number of channel uses in the original protocol divided by that needed in the noisy implementation. Here, we consider the combined number of channel uses in both directions. Note that this scenario is different from ``assisted capacities'' where some auxiliary noiseless resources such as a classical side channel for quantum transmission are given to the parties for free. Second, we would like to generalize interactive protocols to the noisy communication regime. 
If an interactive protocol can be implemented using noisy channels while preserving the complexity, then the corresponding communication complexity results become robust against channel noise. In particular, an important motivation is to investigate whether the quantum advantage in interactive communication protocols is robust against quantum noise. Due to the ubiquitous nature of quantum noise and fragility of quantum data, noise-resilience is of fundamental importance for the realization of quantum communication networks. The coding problem for interactive quantum communication was first studied in~\cite{BNTTU14}. In Section~\ref{sec:intro-prior}, we elaborate on this work and the questions that arise from it. \longpaper{\subsection{Fundamental difficulties in coding for quantum interactive communication}\label{sec:intro-diff}} For some natural problems the optimal interactive protocols require a lot of interaction. For example, distributed quantum search over $n$ items~\cite{BCW98, HdW:2002, aaronson:2003} requires $\Theta \br{\sqrt{n}}$ rounds of constant-sized messages~\cite{KNTZ07,JainRS:09, BGK+15}. How can we implement such highly interactive protocols over noisy channels? What are the major obstacles? \subsubsection{Standard error correcting codes are inapplicable.} \label{sec:ecc-inapp} In both the classical and quantum settings, standard error correcting codes are inapplicable. To see this, first suppose we encode each message separately. Then the corruption of even a single encoded message can already derail the rest of the protocol. Thus, for the entire protocol to be simulated with high fidelity, we need to reduce the decoding error for each message to be inversely proportional to the length of the protocol, say $n$. For constant size messages, the overhead of coding then grows with the problem size $n$, increasing the complexity and suppressing the rate of simulation to $0$ as $n$ increases. 
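To make the overhead concrete (a rough, illustrative estimate): if each of the $n$ messages is decoded incorrectly with probability at most $p$, a union bound shows the simulation fails with probability at most $np$, forcing $p = O\br{1/n}$; since a block code achieving error $p$ on a constant-size message has length $\Theta\br{\log(1/p)}$, the rate of the simulation is at most
\[
\frac{n \cdot O(1)}{n \cdot \Theta(\log n)} \;=\; O\!\left(\frac{1}{\log n}\right) \;\longrightarrow\; 0
\quad \mbox{as } n \to \infty .
\]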
The situation is even worse with adversarial errors: the adversary can invest the entire error budget to corrupt the shortest critical message, and it is impossible to tolerate an error rate exceeding roughly the reciprocal of the number of rounds, no matter what the rate of communication is. To circumvent this barrier, one must employ a coding strategy acting collectively over many messages. However, most of these messages are generated dynamically during the protocol and are not known to the sender in advance. Furthermore, error correction or detection may require communication between the parties, which is also corruptible. The problem is thus reminiscent of fault-tolerant computation in that the steps needed to implement error correction are themselves subject to errors. \subsubsection{The no-cloning quantum problem.} A fundamental property of quantum mechanics is that learning about an unknown quantum state from a given specimen disturbs the state~\cite{BBJMPSW94}. In particular, an unknown \longpaper{quantum} state cannot be cloned~\cite{Dieks82, WZ82}. This affects our problem in two fundamental ways. First, any logical quantum data leaked into the environment due to the noisy channel cannot be recovered by the communicating parties. Second, the parties hold a joint quantum state that evolves with the protocol, but they cannot make copies \longpaper{of the joint state} without corrupting it. \longpaper{\subsection{Prior classical and quantum work}\label{sec:intro-prior}} \blurb{\vspace*{1.5ex}{\bf \large 3. Prior classical and quantum work}\vspace*{0.5ex}} Despite the difficulties in coding for interactive communication, many interesting results have been discovered over the last 25 years, with a notable extension in the quantum setting. \subsubsection{Classical results showing positive rates.} Schulman first raised the question of simulating noiseless interactive communication protocols using noisy channels in the classical setting~\cite{Sch92,Sch93,Sch96}.
He developed \emph{tree codes\/} to work with messages that are determined one at a time, and generated dynamically during the course of the interaction. These codes have constant overhead, and the capacity is thus a positive constant. Furthermore, these codes protect data against adversarial noise that corrupts up to a $\frac{1}{240}$ fraction of the channel uses. This tolerable noise rate was improved by subsequent work, culminating in the results of Braverman and Rao~\cite{BR11}. They showed that an adversarial error rate below $\frac{1}{4}$ can be tolerated provided one can use large constant alphabet sizes, and that this bound on the noise rate is optimal. \subsubsection{Classical results with efficient encoding and decoding.} The aforementioned coding schemes are not known to be computationally efficient, as they are built on tree codes; the computational complexity of encoding and decoding tree codes is unknown. Other computationally efficient encoding schemes have been developed~\cite{BK12,BN13,BrakerskiKN:2014,GMS12,GellesMS:2014,GH14}. The communication rates under various scenarios have also been studied~\cite{BravermanK:2017,GhaffariHS:2014,EfremenkoGH:2015,FranklinGOS:2015}. However, the rates do not approach the capacity expected of the noise rate. \subsubsection{Classical results with optimal rates.} Kol and Raz~\cite{KR13} first established coding with rate approaching $1$ as the noise parameter goes to $0$, for the binary symmetric channel. Haeupler~\cite{Haeupler:2014} extended the above result to adversarial binary channels corrupting at most an $\epsilon$ fraction of the symbols, with communication rate $1 - O\,\br{\!\sqrt{\epsilon \log \log \br{\frac{1}{\epsilon}}\!}\,}$, which is conjectured to be optimal. For oblivious adversaries, this increases to $1 - O(\sqrt{\epsilon})$. Further studies of capacity have been conducted, for example, in~\cite{HaeuplerV:2017,BenYishaiSK:2017}.
For further details about recent results on interactive coding, see the extensive survey by Gelles~\cite{Gelles17}. \subsubsection{Quantum results showing positive rates.} All coding for classical interactive protocols relies on ``backtracking'': if an error is detected, the parties go back to an earlier stage of the protocol and resume from there. Backtracking is impossible in the quantum setting due to the no-cloning principle described in the previous subsection. There is no generic way to make copies of the quantum state at earlier stages without restarting the protocol. Brassard, Nayak, Tapp, Touchette, and Unger~\cite{BNTTU14} provided the first coding scheme with constant overhead by using two ideas. The first idea is to teleport each quantum message. This splits the quantum data into a protected quantum share and an unprotected classical share that is transmitted through the noisy channels using tree codes. Second, backtracking is replaced by \emph{reversing\/} of steps to \emph{return\/} to a desirable earlier stage; i.e., the joint quantum state is evolved back to that of an earlier stage, which circumvents the no-cloning theorem. This is possible since local operations can be made unitary, and communication can be reversed (up to more noise). Together, a positive simulation rate (or constant overhead) can be achieved. In the noisy analogue to the Cleve-Buhrman communication model where entanglement is free, an error rate below $\frac{1}{2}$ can be tolerated. In the noisy analogue to the Yao (plain) model, a noisy quantum channel with one-way quantum capacity $Q > 0$ can be used to simulate an $n$-message protocol given $O \br{\frac{1}{Q} n}$ uses. However, the rate can be suboptimal and the coding complexity is unknown due to the use of tree codes. The rate is further reduced by a large constant in order to match the quantum and classical data in teleportation, and in coordinating the action of the parties (advancing or reversing the protocol).
\longpaper{\subsection{Results in this paper, overview of techniques, and our contributions}} \blurb{\vspace*{1.5ex}{\bf \large 4. Results in this paper, overview of techniques, and our contributions}\vspace*{0.5ex}} Inspired by the recent results on rate optimal coding for the classical setting~\cite{KR13,Haeupler:2014} and the rate suboptimal coding in the quantum setting~\cite{BNTTU14}, a fundamental question is: can we likewise avoid the loss of communication rate for interactive \emph{quantum\/} protocols? In particular, is it possible to protect quantum data without pre-shared free entanglement, and if we have to generate it at a cost, can we still achieve rate approaching $1$ as the error rate vanishes? Further, can erroneous steps be reversed with noisy resources, and with negligible overhead as the error rate vanishes? What is the complexity of rate optimal protocols, if they exist? Are there other new obstacles? \suppress{Our main result addresses all these questions.} To address all these questions, in this paper we start by studying a simpler setting where the input protocol $\Pi$ and the noisy communication channel operate on the same communication alphabet of polynomial size in the length of $\Pi$. This simplifies the algorithm while still capturing the main challenges we need to address. The analysis is easier to follow and shares the same outline and structure as our main result, namely simulation of noiseless interactive communication over constant-size alphabets, which we will present in an upcoming paper. The framework we develop in this work sets the stage for a smooth transition to the small alphabet case. We focus on alternating protocols, in which Alice and Bob exchange qudits back and forth in alternation.
Our main result in this paper is the following: \begin{theorem} Consider any alternating communication protocol $\Pi$ in the plain quantum model, communicating $n$ messages over a noiseless channel with an alphabet $\Sigma$ of bit-size $\Theta\br{\log n}$. We provide a simulation protocol $\Pi'$ which, given $\Pi$, simulates it with probability at least $1-2^{-\Theta\br{n\epsilon}}$ over any fully adversarial error quantum channel with alphabet $\Sigma$ and error rate $\epsilon$. The simulation uses $n\br{1+\Theta\br{\sqrt{\epsilon}}}$ rounds of communication, and therefore achieves a communication rate of $1-\Theta\br{\sqrt{\epsilon}}$. \end{theorem} Our rate optimal protocol requires a careful combination of ideas to overcome various obstacles. Some of these ideas are well-established, some are not so well known, some require significant modifications, and some are new. A priori, it is not clear whether these previously developed tools would be useful in the context of the problem. For clarity of presentation, we first introduce our main ideas in a simpler communication model, where Alice and Bob have access to free entanglement and communicate over a fully adversarial error classical channel. We introduce several key ideas while developing a basic solution to approach the optimal rate in this scenario. Inspired by~\cite{BNTTU14}, we use teleportation to protect the communication, and the simulation is actively rewound whenever an error is detected. We develop a framework which allows the two parties to obtain a global view of the simulation by locally maintaining a classical data structure. We adapt ideas due to Haeupler~\cite{Haeupler:2014} to efficiently update this data structure over the noisy channel and evolve the simulation. Then, we extend these ideas to the plain model of quantum communication with large alphabet size.
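As an illustration of the teleportation step underlying the simulation, the following numerical sketch (using \textsf{numpy}) checks that each of the $d^2$ Bell-measurement outcomes lets the receiver recover the message exactly. The qudit dimension $d=3$ is an arbitrary illustrative choice; the conventions $\ket{\phi^{j,k}} = ({\mathrm X}^j{\mathrm Z}^k\otimes\id)\ket{\phi}$ are those of our preliminaries, and the correction ${\mathrm Z}^k{\mathrm X}^j$ is derived under these conventions.

```python
import numpy as np

d = 3                                        # qudit dimension; any d >= 2
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)            # X|k> = |k+1 mod d>
Z = np.diag(w ** np.arange(d))               # Z|k> = w^k |k>
mes = np.eye(d) / np.sqrt(d)                 # |phi> = (1/sqrt d) sum_i |i,i>

def bell(j, k):
    """|phi^{j,k}> = (X^j Z^k (x) I)|phi>, stored as a (d, d) tensor."""
    op = np.linalg.matrix_power(X, j) @ np.linalg.matrix_power(Z, k)
    return op @ mes

rng = np.random.default_rng(0)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                   # random message qudit

# Joint state psi_M (x) phi_AB as a rank-3 tensor with indices (M, A, B).
state = np.einsum('m,ab->mab', psi, mes)

for j in range(d):
    for k in range(d):
        # Alice's Bell measurement on (M, A) with outcome (j, k) ...
        bob = np.einsum('ma,mab->b', bell(j, k).conj(), state)
        bob /= np.linalg.norm(bob)
        # ... after which Bob's correction Z^k X^j restores the message.
        corr = np.linalg.matrix_power(Z, k) @ np.linalg.matrix_power(X, j)
        out = corr @ bob
        assert abs(np.vdot(psi, out)) > 1 - 1e-9   # equal up to global phase
print("all", d * d, "outcomes recover the message")
```

The point of the split is visible in the code: the quantum share (the MES) is consumed locally, while only the classical outcome $(j,k)$ has to travel over the noisy channel.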
In the plain quantum model, Alice and Bob communicate over a fully adversarial error quantum channel and do not have access to any pre-shared resources such as entanglement or shared randomness. As a result, any such resources need to be established through extra communication. This in particular makes it more challenging to achieve a high communication rate in this setting. Surprisingly, an adaptation of an old technique called the Quantum Vernam Cipher (QVC)~\cite{Leung:2002} turns out to be the perfect method to protect quantum data in our application. QVC allows the two parties to recycle and reuse entanglement as needed throughout the simulation. Building on the ideas introduced in the teleportation-based protocol, one of our main contributions in this model is developing a mechanism to reliably recycle entanglement in a communication-efficient way. \suppress{ \subsubsection{Remarks on our main result.} Besides resolving the question concerning rate optimal coding for quantum interactive communication in the low-noise regime, our work achieves a few additional goals. First, the above result is achieved in the plain quantum model, where the two parties have no pre-shared resource (such as secret key or entanglement). Remarkably, our rate outperforms the conjectured optimal bound in the corresponding plain classical model! Intuitively, this is possible in the quantum setting because a secret key can be obtained from low-noise quantum communication (or from entanglement) and then more efficient hashing can be performed. Second, our work provides the first computationally efficient interactive coding scheme in the quantum setting. Third, our result is the first of its kind for establishing the capacity for a noisy quantum channel used in both directions to leading order. \subsubsection{Outline of the ideas and our contributions.} Our rate optimal protocol requires a careful combination of ideas to overcome various obstacles.
Some of these ideas are well-established, some are not so well known, some require significant modifications, and some are new. A priori, it is not clear whether any of the previously developed tools would be useful in the context of the problem. For clarity of presentation, we start with two simplifications, namely free entanglement and large alphabet size $d={\rm poly}\br{n}$. We introduce several key ideas while developing a basic solution to approach the optimal rate in this scenario. Then, we extend these ideas to the plain model with large alphabet size. Finally, we adapt our protocols to the binary alphabet in both settings. In the process, we solve the coding problem in all four scenarios. The relations between these scenarios and what ideas go in which scenarios are summarized \longpaper{in Figure~\ref{fig:diagram}.} \blurb{in Figure~1 at the end of this extended abstract.} We also illustrate our contributions versus existing techniques by different colors. \longpaper{ \begin{figure}[!t] \centering \includegraphics[width=400pt]{./../drawings/Diagram.pdf} \caption{Diagram representing the sequence of cases that we prove, and what main ingredients go into each case.} \label{fig:diagram} \end{figure} } A priori, there is little reason to expect that the simulation framework and the tools developed for each successive case extend to the next. However, the extensions are surprisingly seamless and without serious obstacles, culminating in the final result. This testifies to the power of the framework and choice of tools we deploy. \ul{\bf Ideas and solution with free entanglement and large alphabet size}~We adapt from~\cite{BNTTU14} the ideas to teleport each quantum message and to rewind the protocol instead of backtracking. (These techniques are relatively well known.)
We also adapt Haeupler's template \cite{Haeupler:2014} to make a conversation robust to noise: Both parties conduct their original conversation as if there were no noise, except for the following: \vspace*{-2ex} \begin{itemize} \item At regular intervals they exchange concise summaries (a $\Theta (1)$ or $\Theta (\log \log n)$-bit hash value) of the conversation up to the point of the exchange. \vspace*{-1ex} \item If the summary is consistent, they continue the conversation. \vspace*{-1ex} \item If the summary is inconsistent, an error is detected. The parties backtrack to an earlier stage of the conversation and resume from there. \end{itemize} \vspace*{-2ex} This template can be interpreted as an error-correcting code over many messages, with trivial (and most importantly \emph{message-wise\/}) encoding. The 2-way summaries measure the error syndromes over a large number of messages, thereby preserving the rate. It works (in the classical setting) by limiting the maximum amount of communication wasted by a single error to $O_\epsilon (1)$. The worst case error disrupts the consistency checks, but Alice and Bob agree to backtrack a constant amount when an inconsistency is detected. As the error fraction vanishes, the communication rate goes to $1$. In addition, these consistency tests are efficient, consisting of evaluation of hash functions. {\bf Insufficiency of simply combining \cite{BNTTU14} and \cite{Haeupler:2014}.}~ Suppose we have to simulate an interactive protocol $\Pi$ that uses noiseless classical channels in the Cleve-Buhrman model. When implementing $\Pi$ with noisy classical channels, it is \emph{not sufficient\/} to apply Haeupler's template to the classical messages used in teleportation, and rewind as in \cite{BNTTU14} when an error is detected. The reason is that, in \cite{BNTTU14}, each message is expanded to convey different types of actions in one step (simulating the protocol forward or reversing it).
This also maintains the matching between classical data and the corresponding MES, and the matching between systems containing MESs. However, this method incurs a large constant factor overhead which we cannot afford. {\bf New difficulty in rate-optimal simulations.}~ Due to errors in communication, the parties need to actively rewind the simulation to correct errors on their joint quantum state. This itself can lead to a situation where the parties may not agree on how they proceed with the simulation (to rewind the simulation or to proceed forward). In order to move on, both parties first need to know what the other party has done so far in the simulation. This allows them to obtain a global view of the current joint state and decide on their next action. In Ref.~\cite{BNTTU14}, this reconciliation step was facilitated by the extra information sent by each party and the use of tree codes. This mechanism is not available to us. {\bf Our first new idea (a framework)} is to introduce a sufficient yet concise data structure so that the parties can detect inconsistencies in (1) the stage in which they are in the protocol, (2) what type of action they should be taking, (3) histories leading to the above, (4) histories of measurement outcomes generated by one party versus the potentially different (corrupted) received instruction for teleportation decoding, (5) which system contains the next MES to be used, (6) a classical description of the joint quantum state, which is only partially known to each party. Each of Alice and Bob maintains her/his data (collectively denoted $D_A$ and $D_B$, respectively), and also an estimate of the other party's data ($\widetilde{D_B}$ and $\widetilde{D_A}$, respectively). Without channel noise, these data are equal to their estimates.
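Stripped of the quantum registers, the hash-compare-rewind loop that our framework builds on can be sketched as a toy classical simulation. The block length, hash size, and corrupted blocks below are all illustrative choices, not the parameters of our protocol.

```python
import hashlib

def summary(transcript):
    """Concise transcript summary; the actual protocols use O(1)- or
    O(log log n)-bit hashes, here a truncated SHA-256 for simplicity."""
    return hashlib.sha256(repr(transcript).encode()).digest()[:4]

def simulate(n_blocks=50, block=4, corrupted={7, 23}):
    """Toy hash-compare-rewind loop: run `block` steps, exchange
    summaries, and rewind one block on a mismatch.  Noise is modelled
    by corrupting Bob's copies of the messages in the listed blocks."""
    alice, bob, rounds, b = [], [], 0, 0
    while b < n_blocks:
        for step in range(block):            # conduct the conversation
            m = (b, step)
            alice.append(m)
            bob.append(('corrupt', step) if b in corrupted else m)
            rounds += 1
        rounds += 1                          # the summary exchange
        if summary(alice) == summary(bob):   # consistent: keep going
            b += 1
        else:                                # error detected: rewind
            del alice[-block:], bob[-block:]
            corrupted = corrupted - {b}      # adversary's budget is spent
    return rounds

print(simulate())   # only slightly more than the noiseless round count
```

Each error wastes only one block plus one check, which is the mechanism by which the rate approaches $1$ as the error fraction vanishes.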
\longpaper{Figure~\ref{fig:teleportation-representation} in Section~\ref{sec:general-descrpition-largeclasscial}} \blurb{Figure~3 at the end of this extended abstract} contains a graphical representation of the operations being performed in one block of simulation, and \longpaper{Figure~\ref{fig:flow-telep} in Section~\ref{sec:general-descrpition-largeclasscial}} \blurb{Figure~4 at the end of this extended abstract} contains a flowchart depicting the main interactions in our framework. {\bf A major new obstacle: out-of-sync teleportation.}~ Now, at every step in the simulation protocol $\Pi'$, Alice and Bob may engage in one of three actions: a forward step in $\Pi$, a step in reverse, or the exchange of classical summaries. However, the summaries can also be corrupted. This leads to a new difficulty: errors in the summaries can trigger Alice and Bob to engage in different actions. In particular, it is possible that one party tries to teleport while the other expects classical communication, with only one party consuming his/her half of an MES. They then become out-of-sync over which MESs to use. This kind of problem, to the best of our knowledge, has not been encountered before, and it is not clear if quantum data can be protected from such errors. (For example, Alice may try to teleport a message into an MES that Bob already ``used'' earlier.) One of our main technical contributions is to show that the quantum data can always be located and recovered when Alice and Bob resolve the inconsistencies in their data $(D_A, \widetilde{D_B})$ and $(\widetilde{D_A},D_B)$ in the low-noise regime. This is particularly surprising since quantum data can potentially leak irreversibly to the environment (or the adversary): Alice and Bob potentially operate in an open system due to channel noise, and out-of-sync teleportation a priori does not protect the messages so sent. {\bf The intuition for why this works} is as follows.
Even mismatched MESs form a ``closed system'' where the quantum data continue to reside; the quantum data are simply mislocated and rotated. (See \longpaper{Figure~\ref{fig:EPR_registers}.)} \blurb{Figure~2 at the end of this extended abstract.)} Our data structure endows each of Alice and Bob with his/her half of the correct classical information ($D_A, D_B$) at every stage. They estimate the other half of the information $\widetilde{D_A}, \widetilde{D_B}$ based on noisy communication and these can potentially be corrupted. We then apply Haeupler's template to regain consistency in their views, so that both views ($(D_A, \widetilde{D_B})$ for Alice and $(\widetilde{D_A},D_B)$ for Bob) can be progressively corrected towards $(D_A, D_B)$ as the protocol proceeds. The shared joint quantum state, including the MESs, is completely determined by $(D_A, D_B)$, so that they can resynchronize their moves. {\bf Tightrope between robustness and rate.}~The simulation maintains sufficient data structures to store information about each party's view so that Alice and Bob can overcome all the obstacles described above. The simulation makes progress so long as Alice's and Bob's views are consistent. The robustness of the simulation requires that the consistency checks be frequent and sensitive enough so that errors are caught quickly. On the other hand, to optimize interactive channel capacity, the checks have to remain communication-efficient and not too frequent either. This calls for a delicate analysis in which we balance the two. We also put in some redundancy in the data structures to simplify the analysis. A high-level description of this solution is included in \longpaper{Section~\ref{sec:general-descrpition-largeclasscial}. Section~\ref{sec:general-discription-large-alphabet-cleveburhman} gives the general description of our approach, while Section~\ref{sec:Out-of-sync teleportation} gives more details about out-of-sync teleportation and how to correct for it.
Section~\ref{subsec:polysizeclassicalanalysis} contains a detailed analysis and proof of correctness.} \blurb{Section 3.2 of the full manuscript: Section 3.2.1 gives the general description of our approach, while Section 3.2.2 gives more details about out-of-sync teleportation and how to correct for it. Section~3.4 contains a detailed analysis and proof of correctness.} \ul{\bf Ideas and solution in the plain model with large alphabet size} {\bf Teleportation is inapplicable.}~ Switching from the Cleve-Buhrman model to the Yao model, suppose we are given a protocol $\Pi$ using noiseless quantum communication, and we are asked to provide a protocol $\Pi'$ using noisy quantum channels under the strongly adversarial model described earlier. In the absence of free entanglement, how can we protect quantum data from leaking to the environment without incurring a non-negligible overhead? First, note that some form of protection is necessary, as discussed in \longpaper{Section~\ref{sec:intro-diff}.} \blurb{Section~2 of this extended abstract.} Second, teleportation would be too expensive to use, since it incurs an overhead of at least $3$: we have to pay for the MES as well as the classical communication required. Surprisingly, an old and relatively unknown idea called the Quantum Vernam Cipher (QVC)~\cite{Leung:2002} turns out to be a perfect alternative method to protect quantum data with negligible overhead as the noise rate approaches 0. {\bf The Quantum Vernam Cipher (QVC)~\cite{Leung:2002}.}~ Suppose Alice and Bob share two copies of MESs, each over two $d$-dimensional systems. For Alice to send a message to Bob, she applies a controlled $X^k$ Pauli operation with her half of the first MES as control (when the control qudit is in state $k$), and the message as the target. She then applies a controlled $Z^k$ Pauli operation from her half of the second MES to the message. When Bob receives the message, he reverses the controlled operations using his halves of the MESs.
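For the special case of qubits ($d=2$, where the controlled operations are their own inverses), the cipher and its key property can be checked numerically: a channel error is imprinted on the MESs as a detectable Bell-state shift, while the message is affected only by a correctable Pauli. The injected ${\mathrm X}$ error and the register layout below are illustrative.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi = np.eye(2) / np.sqrt(2)                # MES (|00> + |11>)/sqrt(2)

def single(state, ax, op):
    """Apply a one-qubit operator to the given tensor axis."""
    s = np.tensordot(op, np.moveaxis(state, ax, 0), axes=([1], [0]))
    return np.moveaxis(s, 0, ax)

def ctrl(state, c_ax, t_ax, op):
    """Apply op to t_ax on the branch where qubit c_ax is |1>."""
    s = np.moveaxis(state, [c_ax, t_ax], [0, 1]).copy()
    s[1] = np.tensordot(op, s[1], axes=([1], [0]))
    return np.moveaxis(s, [0, 1], [c_ax, t_ax])

# Axes: a1, b1 (halves of MES 1), a2, b2 (halves of MES 2), m (message).
psi = np.array([0.6, 0.8], dtype=complex)
state = np.einsum('ab,cd,m->abcdm', phi, phi, psi)

state = ctrl(state, 0, 4, X)    # Alice: controlled-X from her half of MES 1
state = ctrl(state, 2, 4, Z)    # Alice: controlled-Z from her half of MES 2
state = single(state, 4, X)     # adversarial X error on the channel
state = ctrl(state, 3, 4, Z)    # Bob undoes the controlled-Z ...
state = ctrl(state, 1, 4, X)    # ... and then the controlled-X

# The error reaches the message only as the known Pauli X, and it is
# recorded in MES 2, which is shifted to (|00> - |11>)/sqrt(2); measuring
# this Z-type syndrome reveals the error, so Bob can correct the message.
phi_shift = np.diag([1.0, -1.0]) / np.sqrt(2)
expected = np.einsum('ab,cd,m->abcdm', phi, phi_shift, X @ psi)
assert np.allclose(state, expected)
print("error imprinted on MES 2; message correctable")
```

Note that in the error-free run the MESs come back to $\ket{\phi}\ket{\phi}$ untouched, which is what makes recycling them possible.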
(The operations are similar for the opposite direction of communication.) A detailed description is provided in \longpaper{Section~\ref{subsec:protocoloverqudits}.} \blurb{Section~2.5 of the full manuscript; Figure~7 at the end of this extended abstract depicts the protocol.} The QVC is designed so that {\bf if} Alice and Bob have access to an authenticated classical channel from Alice to Bob, they can determine and correct any error in the transmission. This can simply be done by measuring $Z^l$-type changes to their halves of the two MESs. They can also run the QVC many times, determine the errors in a large block using a method called ``random hashing'', and recycle the MESs if the error rate (as defined in our adversarial model) is low. This is a crucial property of QVC and leads to one of the earliest (quantum) key recycling results known. In fact, this was the reason why it was studied in Ref.~\cite{Leung:2002}. What makes QVC particularly suitable for our problem is that encoding and decoding are performed message-wise, while error detection can be done in large blocks, and entanglement can be recycled if no error is detected. It may thus be viewed as a natural quantum generalization of Haeupler's consistency checks. As an aside, in Appendix E of Ref.~\cite{Leung:2002}, the relative merits of teleportation and QVC were compared (those are the only two ciphers with complete quantum reliability), and it was determined that entanglement generation over an insecure noisy quantum channel followed by teleportation is more entanglement-efficient than QVC with entanglement recycling in some test settings. However, this difference vanishes for low noise. Furthermore, the comparison assumes authenticated noiseless classical communication to be free. QVC requires an amount of classical communication for the consistency checks which vanishes with the noise parameter (but this cost was not a concern in that study).
Furthermore, QVC was also proposed as an authentication scheme, but the requirement for interaction to authenticate and to recycle the key or entanglement was considered a disadvantage, compared to non-interactive schemes. (Non-interactive schemes are efficient only for large block lengths and cannot identify the error when one is detected, so they are inapplicable here.) We thus provide renewed insight into QVC when interaction is natural (while it is considered expensive in many other settings). {\bf Adaptations of QVC for the current problem.}~ In the current scenario, we have neither free MESs nor an authenticated classical channel. Instead, Alice and Bob start the protocol by generating $O(\sqrt{\epsilon}n)$ near-perfect MESs, using high-rate quantum error-correcting codes over the low-noise channel, where $n$ is the total length of the original protocol $\Pi$, and $\epsilon$ is the noise parameter. Then, they occasionally check for errors and recycle MESs in a communication-efficient way, using noisy quantum channels instead of an authenticated classical channel. If they detect an inconsistency, they try to determine the error in a small block in the recent past, and rewind to correct the error. Otherwise, they perform ``quantum hashing''~\cite{BDSW96,Leung:2002} to efficiently recycle the entanglement to be reused. {\bf Additional out-of-sync problems using QVC and entanglement recycling.}~ As in the previous scenario, one of Alice and Bob can take a step forward while the other takes a step in reverse. They can also go out of sync about which MESs they are using. Furthermore, the parties may not agree on which MESs to recycle, how much to recycle, and whether they can even recycle! In particular, corruptions that lead only one party to recycle can cause a significant discrepancy in how many MESs the two parties are holding. It is much more involved to analyse the joint quantum state.
\longpaper{Figure~\ref{fig:flow-qvc} in Section~\ref{sec:description-large-quantum}} \blurb{Figure~6 at the end of this extended abstract} contains a flowchart depicting the main modifications versus our Cleve-Buhrman framework which are needed for entanglement recycling. To tackle these problems, we develop further data structures and adapt the ``quantum hashing'' procedure of Refs.~\cite{BDSW96,Leung:2002} to our setting. Surprisingly, once again, the quantum data can be recovered as Alice and Bob reconcile the differences in the data structures developed for the task. This is in spite of the fact that there is no reason to expect the out-of-sync QVC to be sufficient to protect the potentially incorrectly encoded quantum data sent via noisy quantum channels. (See \longpaper{Figure~\ref{fig:EPR_circle}.)} \blurb{Figure~5 at the end of this extended abstract.)} We note that generation of $O(\sqrt{\epsilon} n)$ MESs is sufficient to last through the whole protocol. Intuitively, this number of MESs is still much larger than the number of adversarial errors allowed, even after taking into account the entanglement lost due to a single channel error. Due to entanglement recycling, there is no need to generate more entanglement mid-protocol, unlike the randomness generated in the plain classical setting that has to be regenerated mid-protocol. A high-level description of the solution in this case can be found in \longpaper{Section~\ref{sec:description-large-quantum}. Section~\ref{subsec:description-large-quantum} gives the general description of our approach, while Sections~\ref{sec:Qhashing} and~\ref{sec:out-of-sync QVC} } \blurb{Section 4.2 in the full manuscript: Section 4.2.1 gives the general description of our approach, while Sections 4.2.3 and 4.2.4 } give more detail about out-of-sync Quantum Hashing and Quantum Vernam Cipher, respectively.
\underline{\smash{\bf Transitioning to small alphabet size}} We can witness the power of the framework when going from the two previous cases to work with small alphabet size. Great care is taken when establishing the framework in the large alphabet setting so as to make the transition to small alphabet largely seamless. One difficulty of applying the large alphabet coding scheme in the small alphabet case is that $O(\log n)$ messages are now required to exchange position information that is used for resynchronization. Following~\cite{Haeupler:2014}, we instead use the meeting point mechanism. {\bf Haeupler's meeting point mechanism.}~ In Haeupler's meeting point mechanism, a set of positions (called meeting points) is specified, and Alice and Bob can rewind to these. In the presence of an observed inconsistency, the error is more likely to be recent than far back in the past. So, accordingly, the meeting points are spaced more closely near the current position, and are sparse back in the past so Alice and Bob typically only rewind a small number of steps (this is needed to limit the wasted communication caused by one error, as in Haeupler's general template described above). At the same time, there are only two meeting points considered at once by each party (with more distant ones considered iteratively if closer ones are believed to be \emph{invalid\/}), so, they can be compared with $O(1)$ hashes. {\bf Combining the meeting point mechanism with our framework.} Combining this meeting point idea with the framework we developed to solve the large alphabet cases leads to solutions for the small alphabet cases. 
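A simplified version of the meeting point computation can be sketched as follows. The scales and the reconciliation loop below are illustrative simplifications of Haeupler's mechanism, which additionally maintains a meeting-point counter and compares the candidates with $O(1)$ hashes per attempt.

```python
def meeting_points(pos, j):
    """The two candidate rewind positions at scale 2**j: the largest
    multiples of 2**j not exceeding pos (simplified from Haeupler's
    scheme; the real mechanism also iterates hash comparisons)."""
    mp1 = (pos >> j) << j
    return mp1, max(mp1 - (1 << j), 0)

def reconcile(pos_a, pos_b):
    """Raise the scale until the parties share a candidate meeting
    point; returns (common position, number of scales tried)."""
    for j in range(64):
        common = set(meeting_points(pos_a, j)) & set(meeting_points(pos_b, j))
        if common:
            return max(common), j + 1
    raise RuntimeError("positions out of range")

# A divergence of distance D is bridged after O(log D) scale increases,
# so the communication wasted by a single error stays bounded.
print(reconcile(1000, 1013))
```

Because the candidate points are dense near the current position and sparse far back, a recent error costs only a short rewind, exactly as described above.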
The protocol for the Cleve-Buhrman model is described in detail in \longpaper{Section~\ref{subsec:alg-small-alphabet-classical}} \blurb{Section 6.2 of the full manuscript} and fully analysed in \longpaper{Section~\ref{sec:small-alphabet-CB-analysis},} \blurb{Section 6.3,} while the protocol for the Yao model is described in detail in \longpaper{Section~\ref{sec:small-alphabet-Yao-algo}} \blurb{Section 7.1} and fully analysed in \longpaper{Section~\ref{sec:small-alphabet-Yao-analysis}.} \blurb{Section 7.2.} When entanglement is free, we use the given entanglement to generate useful secret keys. In the plain model, we adapt the protocol to prevent the adversary from injecting too many collisions in the hashes. } \section{Preliminaries} We assume that the reader is familiar with the quantum formalism for finite dimensional systems; for a thorough treatment, we refer the interested reader to good introductions in a quantum information theory context \cite[Chapter 2]{NC00}, \cite[Chapter 2]{Wat08}, \cite[Chapters 3, 4, 5]{Wilde11}. Let $\CMcal{A}$ be a~$d$-dimensional Hilbert space with computational basis~$\set{\ket{0},\ldots, \ket{d-1}}$. Let ${\mathrm X}$ and ${\mathrm Z}$ be the operators such that~${\mathrm X}\ket{k}\defeq\ket{k+1}$ and~${\mathrm Z}\ket{k}\defeq e^{{\mathrm i} \cdot 2\pi\frac{k}{d}}\ket{k}$. The generalized Pauli operators, also known as the Heisenberg-Weyl operators, are defined as~$\set{{\mathrm X}^j{\mathrm Z}^k}_{0\leq j,k\leq d-1}$. Let~$\Sigma=\set{0,\ldots,d-1}$. For~$N\in \mathbb{N}$, the operators in \begin{equation}\label{eqn:set-of-Pauli-ops} \CMcal{P}_{d,N} \defeq \{ {\mathrm X}^{j_1} {\mathrm Z}^{k_1} \otimes \cdots \otimes {\mathrm X}^{j_N} {\mathrm Z}^{k_N} \}_{j_l k_l \in \Sigma^2,l\in \Br{N}} \end{equation} form a basis for the space of operators on $\CMcal{A}^{\otimes N}$.
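The operators just defined are straightforward to construct explicitly; the following numerical check (illustrative, using \textsf{numpy}) verifies the commutation relation, the action of the Fourier transform, and the fact that the $d^2$ products ${\mathrm X}^j{\mathrm Z}^k$ are orthogonal under the Hilbert-Schmidt inner product and hence span the operator space.

```python
import numpy as np

d = 5
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)           # X|k> = |k+1 mod d>
Z = np.diag(w ** np.arange(d))              # Z|k> = w^k |k>
Xp = lambda j: np.linalg.matrix_power(X, j)
Zp = lambda k: np.linalg.matrix_power(Z, k)

# Commutation relation:  X^j Z^k = e^{-i 2 pi jk/d} Z^k X^j.
for j in range(d):
    for k in range(d):
        assert np.allclose(Xp(j) @ Zp(k), w ** (-j * k) * Zp(k) @ Xp(j))

# Fourier transform F|j> = (1/sqrt d) sum_k w^{jk} |k> conjugates X into Z.
F = w ** np.outer(np.arange(d), np.arange(d)) / np.sqrt(d)
assert np.allclose(F @ X @ F.conj().T, Z)

# The d^2 operators X^j Z^k are mutually orthogonal under the
# Hilbert-Schmidt inner product, hence form an operator basis.
ops = np.array([(Xp(j) @ Zp(k)).ravel() for j in range(d) for k in range(d)])
gram = ops @ ops.conj().T
assert np.allclose(gram, d * np.eye(d * d))
print("generalized Pauli relations verified for d =", d)
```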
For~$E\in \CMcal{P}_{d,N}$, we denote by~$\mathrm{wt}\br{E}$ the weight of~$E$, i.e., the number of~$\CMcal{A}$ subsystems on which~$E$ acts non-trivially. For $j,k\in\Sigma$, we represent the single qudit Pauli error ${\mathrm X}^j{\mathrm Z}^k$ by the string $jk\in\Sigma^2$. Similarly, a Pauli error on multiple qudits is represented by a string in ${\br{\Sigma^2}}^*$. The Fourier transform operator $F$ is defined to be the operator such that~$F\ket{j}\defeq\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}e^{{\mathrm i}\cdot 2\pi \frac{jk}{d}}\ket{k}$. \begin{prop}\label{fac:paulioperatorcommute} Let $\set{{\mathrm X}^j{\mathrm Z}^k}_{0\leq j,k\leq d-1}$ be the set of generalized Pauli operators on a $d$-dimensional Hilbert space. It holds that ${\mathrm X}^j{\mathrm Z}^k=e^{-{\mathrm i} \cdot 2\pi\frac{jk}{d}}{\mathrm Z}^k{\mathrm X}^j$, $F{\mathrm X}^jF^{\dagger}={\mathrm Z}^j$, and $F{\mathrm Z}^jF^{\dagger}={\mathrm X}^{-j}$ for every $j,k\in\Sigma$. \end{prop} \begin{definition}\label{def:Bellstates} Let $\CMcal{A},\CMcal{B}$ be $d$-dimensional Hilbert spaces with computational bases~$\set{\ket{i}_A}_{0\leq i\leq d-1}$ and~$\set{\ket{i}_B}_{0\leq i\leq d-1}$, respectively. The set of Bell states in $\CMcal{A}\otimes\CMcal{B}$ is defined as \[\set{\ket{\phi^{j,k}}_{AB}\defeq\br{{\mathrm X}_A^j{\mathrm Z}_A^k\otimes\id}\ket{\phi}_{AB}:0\leq j, k\leq d-1} \enspace,\] where $\ket{\phi}_{AB}\defeq\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}\ket{i}_A\ket{i}_B$. For~$j,k\in\mathbb{Z}$, we define $\ket{\phi^{j,k}}\defeq\ket{\phi^{j~\textsf{mod}~d, k~\textsf{mod}~d}}$. \end{definition} It is easy to see that $\ket{\phi^{j,k}}=\frac{1}{\sqrt{d}}\sum_{t=0}^{d-1}e^{{\mathrm i} \cdot 2\pi\frac{tk}{d}}\ket{t+j, t}$. \begin{prop}\label{fac:bellbasis} The Bell states $\set{\ket{\phi^{j,k}}}_{0\leq j,k\leq d-1}$ form an orthonormal basis in $\CMcal{A}\otimes\CMcal{B}$.
\end{prop} \begin{prop}\label{fac:uuepr} For any unitary operator $U$ on register $A$, it holds that $$(U\otimes\id)\ket{\phi}_{AB}=(\id\otimes U^T)\ket{\phi}_{AB}\enspace,$$ where $U^T=\sum_{j,k}\bra{j}U\ket{k}\ketbratwo{k}{j}$. In particular, $(F\otimes\id)\ket{\phi}_{AB}=(\id\otimes F)\ket{\phi}_{AB}$. \end{prop} \subsection{Quantum Communication Model} \label{sec:qucomm} The definitions for the noiseless and noisy quantum communication models are copied from Ref.~\cite{BNTTU14}. We refer the reader there for a more formal definition of the noisy quantum communication model, as well as the relationship of the noiseless quantum communication model to well-studied quantum communication complexity models such as Yao's model and the Cleve-Buhrman model. \subsubsection{Noiseless Communication Model} \label{sec:nslss} In the \emph{noiseless quantum communication model\/} that we want to simulate, there are five quantum registers: the $A$ register held by Alice, the $B$ register held by Bob, the $C$ register, which is the communication register exchanged back-and-forth between Alice and Bob and initially held by Alice, the $E$ register held by a potential adversary Eve, and finally the $R$ register, a reference system which purifies the state of the $ABCE$ registers throughout the protocol. The initial state $\ket{\psi_\mathrm{init}}^{ABCER} \in \CMcal{H} (A \otimes B \otimes C \otimes E \otimes R)$ is chosen arbitrarily from the set of possible inputs, and is fixed at the outset of the protocol, but possibly unknown (totally or partially) to Alice and Bob. Note that to allow for composition of quantum protocols in an arbitrary environment, we consider arbitrary quantum states as input, which may be entangled with systems $RE$.
A protocol $\Pi$ is then defined by the sequence of unitary operations $U_1, U_2, \cdots , U_{n + 1}$, with $U_{i}$ for odd~$i$ known at least to Alice (or given to her in a black box) and acting on registers~$AC$, and $U_{i}$ for even~$i$ known at least to Bob (or given to him in a black box) and acting on registers~$BC$. For simplicity, we assume that $n$ is even. We can modify any protocol to satisfy this property, while increasing the total cost of communication by at most one communication of the $C$ register. The unitary operators of protocol $\Pi$ can be assumed to be public information, known to Eve. On a particular input state $\ket{\psi_\mathrm{init}}$, the protocol generates the final state $\ket{\psi_\mathrm{final}}^{ABCER} = U_{n + 1} \cdots U_1 \ket{\psi_\mathrm{init}}^{ABCER}$, for which at the end of the protocol the $A$ and $C$ registers are held by Alice, the $B$ register is held by Bob, and the $E$ register is held by Eve. The reference register $R$ is left untouched throughout the protocol. The output of the protocol resides in systems $ABC$, i.e., $\Pi (\ket{\psi_\mathrm{init}}) = \partrace{ER}{\kb{\psi_\mathrm{final}}{\psi_\mathrm{final}}^{ABCER}}$, and by a slight abuse of notation we also represent the induced quantum channel from $ABCE$ to $ABC$ simply by $\Pi$. This is depicted in Figure~\ref{fig:int_mod}. Note that while the protocol only acts on $ABC$, we wish to maintain correlations with the reference system $R$, while we simply disregard what happens on the $E$ system assumed to be in Eve's hand. Since we consider local computation to be free, the sizes of $A$ and $B$ can be arbitrarily large, but still of finite size, say $m_A$ and $m_B$ qubits, respectively. Since we are interested in high communication rate, we do not want to restrict ourselves to the case of a single-qubit communication register~$C$, since converting a general protocol to one of this form can incur a factor of two overhead. 
We thus consider alternating protocols in which the register $C$ is of fixed size, say $d$ dimensions, and is exchanged back-and-forth. We believe that non-alternating protocols can also be simulated by adapting our techniques, but we leave this extension to future work. Note that both the Yao and the Cleve-Buhrman models of quantum communication complexity can be recast in this framework; see Ref.~\cite{BNTTU14}. \suppress{ \begin{figure} \begin{overpic}[width=1\textwidth]{int_noiseless_prot_no_label.pdf} \put(0,66.5){Reference} \put(0,45){Alice} \put(0,22){Bob} \put(0,4.8){Eve} \put(6,31){ $\ket{\psi_{\mathrm{init}}}$} \put(15,66.5){\footnotesize$R$} \put(15,4.8){\footnotesize$E$} \put(15,54){\footnotesize$A$} \put(15,48){\footnotesize$C$} \put(15,13.7){\footnotesize$B$} \put(22.1,49.5){\footnotesize$U_1$} \put(26.2,54){\footnotesize$A$} \put(26.2,48){\footnotesize$C$} \put(33,15.5){\footnotesize$U_2$} \put(37.2,17.4){\footnotesize$C$} \put(37.2,13.7){\footnotesize$B$} \put(44.2,49.5){\footnotesize$U_3$} \put(48.2,54){\footnotesize$A$} \put(48.2,48){\footnotesize$C$} \put(55,33){\footnotesize$\cdots$} \put(59.5,54){\footnotesize$A$} \put(59.5,48){\footnotesize$C$} \put(59.5,13.7){\footnotesize$B$} \put(66.8,15.5){\footnotesize$U_{N}$} \put(73.5,23){\footnotesize$C$} \put(73.5,13.7){\footnotesize$B$} \put(75.9,49.5){\footnotesize$U_{N+1}$} \put(81.3,54){\footnotesize$A$} \put(81.3,48){\footnotesize$C$} \put(92,31){ $\ket{\psi_{\mathrm{final}}}$} \end{overpic} \caption{Depiction of a quantum protocol in the noiseless communication model, adapted from the long version of~\cite[Figure 1]{Tou15}.} \label{fig:int_mod} \end{figure} } \begin{figure}[ht] \begin{center} \includegraphics[width=440pt]{noiseless-protocol.pdf} \vspace*{2ex} \caption{Depiction of a quantum protocol in the noiseless communication model.} \label{fig:int_mod} \end{center} \end{figure} We later embed length $n$ protocols into others of larger length $n^\prime > n$. 
To perform such \emph{noiseless protocol embedding\/}, we define some dummy registers $\tilde{A}$, $\tilde{B}$, $\tilde{C}$ isomorphic to $A$, $B$, $C$, respectively. $\tilde{A}$ and $\tilde{C}$ are part of Alice's scratch register and $\tilde{B}$ is part of Bob's scratch register. Then, for any isomorphic quantum registers $D$, $\tilde{D}$, let SWAP$_{D \leftrightarrow \tilde{D}}$ denote the unitary operation that swaps the $D, \tilde{D}$ registers. Recall that~$n$ is assumed to be even. In a noiseless protocol embedding, for $i \in \{1, 2, \cdots n-1 \}$, we leave $U_i$ untouched. We replace $U_{n}$ by (SWAP$_{B \leftrightarrow \tilde{B}} U_{n})$ and $U_{n + 1}$ by (SWAP$_{AC \leftrightarrow \tilde{A} \tilde{C}} U_{n + 1})$. Finally, for $i \in \{n+2, n+3, \cdots n^\prime + 1 \}$, we define $U_i = {\mathrm I}$, the identity operator. This embedding is important in the setting of interactive quantum coding for the following reasons. First, adding these $U_i$ for $i > n$ makes the protocol well defined for $n^\prime +1$ steps. Then, swapping the important registers into the safe registers $\tilde{A}$, $\tilde{B}$, $\tilde{C}$ ensures that the important registers are never affected by noise arising after the first $n+1$ steps have been applied. Hence, in our simulation, as long as we succeed in implementing the first $n+1$ steps without errors, the simulation will succeed since the $\tilde{A}$, $\tilde{B}$, $\tilde{C}$ registers will then contain the output of the simulation, with no error acting on these registers. \subsubsection{Noisy Communication Model} \label{sec:noisy_comm_model} There are many possible models for noisy communication. For our main results, we focus on one in particular, analogous to the Yao model with no shared entanglement but noisy quantum communication, which we call the \emph{plain quantum model\/}. In Section~\ref{sec:BKlarge}, we consider and define an alternative model. 
For simplicity, we formally define in this section what we sometimes refer to as \emph{alternating\/} communication models, in which Alice and Bob take turns in transmitting the communication register to each other, and this is the model in which most of our protocols are defined. Our definitions easily adapt to somewhat more general models which we call \emph{oblivious\/} communication models, following Ref.~\cite{BR11}. In these models, Alice and Bob do not necessarily transmit their messages in alternation, but nevertheless in a fixed order and of fixed sizes known to all (Alice, Bob and Eve) depending only on the round, and not on the particular input or the actions of Eve. Communication models with a dependence on inputs or actions of Eve are called \emph{adaptive\/} communication models. \paragraph{Plain Quantum Model} In the \emph{plain quantum model\/}, Alice has workspace $A^\prime$, Bob has workspace $B^\prime$, the adversary Eve has workspace $E^\prime$, and there is some quantum communication register $C^\prime$ of some fixed size $d^\prime$ dimensions (we will consider only $d^\prime = d$ in this work), exchanged back and forth between them $n^\prime$ times, passing through Eve's hand each time. Alice and Bob can perform arbitrary local processing between each transmission, whereas Eve's processing when the $C^\prime$ register passes through her hand is limited by the noise model as described below. The input registers $ABCE$ are shared between Alice ($AC$), Bob ($B$) and Eve ($E$) and the output registers $\tilde{A} \tilde{B} \tilde{C}$ are shared between Alice ($\tilde{A} \tilde{C}$) and Bob ($\tilde{B}$). The reference register $R$ containing the purification of the input is left untouched throughout. Alice and Bob also possess registers $C_{\mathsf A}$ and $C_{\mathsf B}$, respectively, acting as virtual communication register $C$ from the original protocol $\Pi$ of length $n$ to be simulated. 
The communication rate of the simulation is given by the ratio $\frac{n \log d}{n^\prime \log d^\prime}$. We are interested in two models of errors, adversarial and random noise. In the \emph{adversarial\/} noise model, we are mainly interested in an adversary Eve with a bound $\delta n^\prime$ on the number of errors that she introduces on the quantum communication register $C^\prime$ that passes through her hand. The fraction $\delta$ of corrupted transmissions is called the error rate. More formally, an adversary in the quantum model with error rate bounded by~$\delta\in \Br{0,1}$ is specified by a sequence of instruments~$\CMcal{N}_1^{E^\prime C_1^\prime},\ldots,\CMcal{N}_{n^\prime}^{E^\prime C_{n^\prime}^\prime}$ acting on register $E^\prime$ of arbitrary dimension $d''$ and the communication register $C^\prime$ of dimension $d^\prime$ in protocols of length $n^\prime$. For any density operator~$\rho$ on~$\CMcal{H}( E^\prime \otimes C^{\prime \otimes {n^{\prime}}} )$, the action of such an adversary is \begin{equation}\label{eqn:noise-model-1} \CMcal{N}_1^{E^\prime C_1^\prime}\cdots\CMcal{N}_{n^\prime}^{E^\prime C_{n^\prime}^\prime} \br{\rho} = \sum_{i} G_i \rho G_i^\dagger \enspace, \end{equation} for~$i$ ranging over some finite set, subject to~$\sum_i G_i^\dagger G_i = \id^{E^\prime C^{\prime \otimes {n^{\prime}}}}$, where each $G_i$ is of the form \begin{equation}\label{eqn:noise-model-2} G_i = \sum_{F\in \CMcal{P}_{d'',1}} \sum_{\substack{H\in \CMcal{P}_{d',n'} \\ \mathrm{wt}(H)\leq \delta n^\prime}} \alpha^i_{F,H} F^{E^\prime} \otimes H^{C^{\prime \otimes n^\prime}} \enspace.
\end{equation} \suppress{and is assessed by requiring that there exists a representation of the global action of Eve on the $n^\prime$ quantum communication registers with Kraus operators of weight at most $\delta n^\prime$.} In the random noise model, we consider $n^\prime$ independent and identically distributed uses of a noisy quantum channel acting on register $C^\prime$, half the time in each direction. Eve's workspace register $E^\prime$ (including her input register $E$) can be taken to be trivial in this noise model. Note that the adversarial noise model includes the random noise model as a special case. For both noise models, we say that the simulation succeeds with error $\epsilon$ if for any input, the output in register $\tilde{A} \tilde{B} \tilde{C}$ corresponds to that of running protocol $\Pi$ on the same input, while also maintaining correlations with system $R$, up to error $\epsilon$ in trace distance. Note that adversaries in the quantum model can inject fully quantum errors since the messages are quantum, in contrast to adversaries corrupting classical messages which are restricted to be modifications of classical symbols. On the other hand, for classical messages the adversary can read all the messages without the risk of corrupting them, whereas in the quantum model, any attempt to ``read'' messages will result in an error in general on some quantum message. \subsection{Entanglement distribution}\label{subsec:entanglement-dist} In our algorithm in the plain quantum model, Alice and Bob need to use MESs as a resource in order to simulate the input protocol. To establish the shared MESs, one party creates the states locally and sends half of each MES to the other party using an appropriate error correcting code of distance~$4n\epsilon$, as described in Algorithm~\ref{algo:Robust Entanglement Distribution}. 
\begin{algorithm} \Input{$\ell$ (desired number of MESs)} $C\leftarrow$ Error Correcting Code with rate $1-\Theta(H(\epsilon))$ guaranteed by quantum Gilbert-Varshamov bound~\cite{FMa:2004}\; \If {$\mathrm{Alice}$} { Prepare $\ell$ MESs in registers $A,B'$ each holding half of every MES\; Transmit $C\br{B'}$ to Bob\; } \ElseIf {$\mathrm{Bob}$} { Receive $C'(B')$\; Decode $C'(B')$ into register $B$\; } \Return{\textup{\textbf{\textsf{Robust Entanglement Distribution}}}}\; \caption{\textbf{\textsf{Robust Entanglement Distribution}($\ell$)}} \label{algo:Robust Entanglement Distribution} \end{algorithm} \subsection{Hashing for string comparison}\label{subsec:hash} We use randomized hashes to compare strings and catch disagreements probabilistically. The hash values can be viewed as summaries of the strings to be compared. A random bit string called the \emph{seed\/} is used to select a function from the family of hash functions. We say a \emph{hash collision\/} occurs when a hash function outputs the same value for two unequal strings. In this paper we use the following family of hash functions based on the $\epsilon$-biased probability spaces constructed in \cite{NaorNaor}. \begin{lemma}[from \cite{NaorNaor}] \label{lem:hashes} For any $l$, any alphabet $\Sigma$, and any probability $0<p<1$, there exist $s = \Theta(\log (l \log |\Sigma|) + \log \frac{1}{p})$, $o = \Theta(\log \frac{1}{p})$, and a simple function $h$, which given an $s$-bit uniformly random seed $S$ maps any string over $\Sigma$ of length at most $l$ into an $o$-bit output, such that the collision probability of any two $l$-symbol strings over $\Sigma$ is at most $p$. 
In short: \begin{gather*} \forall\, l,\Sigma, 0<p<1: \quad \exists\, s = \Theta(\log (l \log |\Sigma|) + \log \tfrac{1}{p}) \;,\; o = \Theta(\log \tfrac{1}{p}) \;,\; h: \{0,1\}^s \times \Sigma^{\leq l} \to \{0,1\}^o\ \mathrm{s.t.}\\ \forall\, \texttt{X},\texttt{Y} \in \Sigma^{\leq l},\ \texttt{X} \neq \texttt{Y},\ \texttt{S} \in \{0,1\}^s \ \mathrm{i.i.d.}\ \mathrm{Bernoulli}(1/2): \quad \Pr[h_\texttt{S}(\texttt{X}) = h_\texttt{S}(\texttt{Y})] \leq p \end{gather*} \end{lemma} In our application, the hash family of Lemma~\ref{lem:hashes} is used to compare~$\Theta\br{n}$-bit strings, where $n$ is the length of the input protocol. Therefore, in the large alphabet setting, the collision probability can be chosen to be as low as~$p=1/\mathrm{poly}\br{n}$, while still allowing the hash values to be exchanged using only a constant number of symbols. In the teleportation-based model, where Alice and Bob have access to free pre-shared entanglement, they generate the seeds by measuring the MESs they share in the computational basis. In our simulation protocol in the plain quantum model, where Alice and Bob do not start with pre-shared entanglement, they use Algorithm~\ref{algo:Robust Entanglement Distribution} at the outset of the simulation to distribute the MESs they need for the simulation. A fraction of these MESs are measured by both parties to obtain the seeds. One advantage of generating the seeds in this way is that the seeds are unknown to the adversary. This is in contrast to the corresponding classical model with no pre-shared randomness, where the seeds need to be communicated over the classical channel and the adversary gets to know the seeds. The knowledge of the seeds enables the adversary to introduce errors which remain undetected with certainty.
As a result, Haeupler~\cite{Haeupler:2014} adds another layer of hashing to his algorithm for the oblivious noise model to protect against fully adversarial noise, dropping the simulation rate from~$1-\Theta\br{\sqrt{\epsilon}}$ to~$1-\Theta\br{\sqrt{\epsilon \log \log 1/\epsilon}}$. \subsection{Extending randomness to pseudo-randomness}\label{subsec:extending-randomness} In our simulation algorithm in the plain quantum model, Alice and Bob need to share a very long random string which they need to establish through communication. A direct approach would be for them to distribute enough MESs and measure them in the computational basis to obtain a uniformly random shared string. However, this would lead to a vanishing simulation rate. Instead, they distribute a much shorter i.i.d. random bit string~$R'$ and, using Lemma~\ref{lem:stretch} below, stretch it to a pseudo-random string~$R$ of the desired length which is statistically close to being $k$-wise independent. Before stating Lemma~\ref{lem:stretch}, we need the following definitions and propositions. \begin{definition} Let $X$ be a random variable distributed over $\{0,1\}^n$ and $J\subseteq \Br{n}$ be a non-empty set. The \emph{bias\/} of $J$ with respect to distribution $X$, denoted $\mathrm{bias}_J\br{X}$, is defined as \[ \mathrm{bias}_J\br{X} \defeq \left| \Pr\br{\sum_{i\in J}X_i=1}-\Pr\br{\sum_{i\in J}X_i=0} \right|\enspace, \] where the summation is mod $2$. For $J=\emptyset$, the bias is defined to be zero, i.e., $\mathrm{bias}_{\emptyset} \br{X}=0$. \end{definition} \begin{definition} Let $\delta\in\Br{0,1}$. A distribution $X$ over $\{0,1\}^n$ is called a \emph{$\delta$-biased sample space\/} if $\mathrm{bias}_J\br{X}\leq \delta$, for all non-empty subsets $J\subseteq \Br{n}$. \end{definition} Intuitively, a small-bias random variable is statistically close to being uniformly distributed. The following lemma quantifies this statement.
\begin{prop} \label{prop:uniform-vs-delta-biased} Let $X$ be an arbitrary distribution over $\{0,1\}^n$ and let $U$ denote the uniform distribution over $\{0,1\}^n$. Then we have \[ \|X-U\|_2^2 \leq 2^{-n}\sum_{J\subseteq\Br{n}} \mathrm{bias}^2_J\br{X}\enspace. \] In particular, for $\delta$-biased $X$ we have~$\|X-U\|_2 \leq \delta$. \end{prop} We will make use of the following proposition providing an alternative characterization of the $L_1$-distance between two probability distributions. \begin{prop} \label{prop:l1-distance} Let $p$ and $q$ be probability distributions over some (countable) set $\mathcal{Z}$. Then \[ \|p-q\|_1 = 2 \sup_{A\subseteq \mathcal{Z}} \left|p(A)-q(A)\right|\enspace. \] \end{prop} We use the following lemma in our algorithms. \begin{lemma}[\cite{NaorNaor}]\label{lem:stretch} For every $\delta\in(0,1)$, there exists a deterministic algorithm which given $O\br{\log n+\log\frac{1}{\delta}}$ uniformly random bits outputs a $\delta$-biased pseudo-random string of $n$ bits. Any such $\delta$-biased string is also $\epsilon$-statistically close to being $k$-wise independent for $\epsilon=\delta^{\Theta\br{1}}$ and $k=\Theta\br{\log \frac{1}{\delta}}$. \end{lemma} \suppress{ We use the same hash functions as in~\cite{Haeupler:2014}. \begin{definition}~\cite{Haeupler:2014} (\textbf{Inner Product Hash Function})\label{def:innerproducthash} For any input of length $s$ and any output length $o$, the inner product hash function $h_S\br{\cdot}$ is defined as follows: for a given binary seed $S$ of length at least $2os$ it takes any binary input $X$ of length $\ell\leq s$, concatenates this input with its length $\tilde{X}=\br{X,\abs{X}}$ to form a string of length $\tilde{\ell}\leq2s$ and then outputs the $o$ inner products $\left\langle \tilde{X}, S[i\cdot 2s+1, i\cdot 2s+\tilde{\ell}] \right\rangle$ for every $i\in[0,o-1]$.
\end{definition} \begin{lemma} ~\cite{Haeupler:2014} Let $X\neq Y$ be a pair of binary strings of length $s$ and $h$ be the inner product hash function given in Definition~\ref{def:innerproducthash} for input length $s$ and output length $o$. Suppose $S$ is a seed string of length at least $n\cdot 2os$ sampled independently from $XY$. Then the collision probability $\prob{h_S\br{X}=h_S\br{Y}}=2^{-o}$ if $S$ is sampled from the uniform distribution. Furthermore, if the seed $S$ is sampled from a $\delta$-biased distribution the collision probability is at most $2^{-o}+\delta$. \end{lemma} \begin{lemma}~\cite{Haeupler:2014}\label{fac:hash} Consider $n$ pairs of binary strings $\br{X_1,Y_1},\ldots,\br{X_n,Y_n}$ where each string is of length at most $s$, and suppose $h$ is the inner product hash function for input length $s$ and output length $o$. Suppose furthermore that $S=\br{S_1,\ldots, S_n}$ is a random seed string of length at least $n\cdot 2os$ which is sampled independently of $XY$. Then the output distribution $\br{h_{S_1}\br{X_1}-h_{S_1}\br{Y_1},\ldots,h_{S_n}\br{X_n}-h_{S_n}\br{Y_n}}$ for an $S$ sampled from a $\delta$-biased distribution is $\delta$-statistically close to the output distribution for a uniformly sampled $S$, for which each $x_i$ is equal to $0$ if $X_i=Y_i$ and independently uniformly random, i.e., $\prob{x_i=0}=2^{-o}$, otherwise. \end{lemma} } \subsection{Protocols over qudits}\label{subsec:protocoloverqudits} In this section, we revisit two quantum communication protocols, both of which are essential to our simulation algorithms, and analyze the effect of noise on these protocols. \subsubsection{Quantum teleportation over noisy channels} The protocol given here is an extension of quantum teleportation to qudits. Readers may refer to Chapter 6 in~\cite{Wilde11} for more details.
\begin{definition}\label{def:teleportation}\textbf{Quantum teleportation protocol} Alice possesses an arbitrary $d$-dimensional qudit in state $\ket{\psi}_{A}$, which she wishes to communicate to Bob. They share an MES in the state $\ket{\phi}_{A_1B_1}$. \begin{enumerate} \item Alice performs a measurement on registers $AA_1$ with respect to the Bell basis $\set{\ket{\phi^{j,k}}}_{j,k}$. \item She transmits the measurement outcome $\br{j,k}$ to Bob. \item Bob applies the unitary transformation ${\mathrm Z}_{B_1}^k{\mathrm X}_{B_1}^j$ on his state to recover $\ket{\psi}$. \end{enumerate} \end{definition} \vspace{3mm} In the rest of the paper the measurements implemented in Definition~\ref{def:teleportation} are referred to as the {\em teleportation measurements} and the receiver's unitary transformation to recover the target state is referred to as the {\em teleportation decoding operation}. If Bob receives $\br{j',k'}$ due to a corruption on Alice's message, the state he gets after decoding will be the following: \begin{equation}\label{eqn:corruptedteleportation} {\mathrm Z}_B^{k'}{\mathrm X}_B^{j'}{\mathrm X}_B^{d-j}{\mathrm Z}_B^{d-k}\ket{\psi}=e^{{\mathrm i} \cdot \frac{2\pi}{d}(j'-j)k'} {\mathrm X}^{j'-j}{\mathrm Z}^{k'-k}\ket{\psi}\enspace. \end{equation} \subsubsection{Quantum Vernam cipher over noisy qudit channels}\label{sec:qvc} In this section, we revisit the {\em quantum Vernam cipher} (QVC) introduced by Leung~\cite{Leung:2002}, which is a quantum analog of the Vernam cipher (one-time pad). For a unitary operation $U$, the controlled gate $\control{U}$ is defined as \[\br{\control{U}}_{AB}\ket{j}_A\ket{k}_B\defeq\ket{j}U^j\ket{k} \enspace.\] The extension of the quantum Vernam cipher to qudit systems goes as follows. \begin{definition}\label{def:Quantum Vernam Cipher}\textbf{Quantum Vernam cipher} Alice possesses an arbitrary $d$-dimensional qudit in state $\ket{\psi}_{A}$, which she wishes to communicate to Bob.
They share an MES pair in the state $\ket{\phi}_{A_1B_1}\ket{\phi}_{A_2B_2}$, with Alice and Bob holding registers $A_1A_2$ and $B_1B_2$, respectively. \begin{enumerate} \item Alice applies the unitary transformation $\br{\control{{\mathrm Z}}}_{A_2A}\br{\control{{\mathrm X}}}_{A_1A}$. \item She transmits the register $A$ to Bob. \item Bob applies the unitary transformation $\br{\control{{\mathrm X}^{-1}}}_{B_1B}\br{\control{{\mathrm Z}^{-1}}}_{B_2B}$. \end{enumerate} \end{definition} \vspace{3mm} The quantum Vernam cipher uses entanglement as the key to encrypt quantum information sent through an insecure quantum channel. In sharp contrast with the classical Vernam cipher, the quantum key can be recycled securely. Note that if no error occurs on Alice's message, then Bob recovers the state $\ket{\psi}$ perfectly, and at the end of the protocol the MES pair remains intact. The scheme detects and corrects arbitrary transmission errors, and the detection and correction of errors require only local operations and classical communication between the sender and the receiver.
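Equation~\eqref{eqn:corruptedteleportation} can be checked numerically. The following self-contained Python sketch (for the illustrative choice $d=3$) verifies, for all $j,k,j',k'$, that decoding with a corrupted outcome $(j',k')$ leaves exactly the residual Pauli error ${\mathrm X}^{j'-j}{\mathrm Z}^{k'-k}$ up to the stated global phase.

```python
import cmath

# Check of Eq. (eqn:corruptedteleportation) for d = 3 (illustrative): decoding
# with outcome (j', k') when Alice measured (j, k) leaves the residual error
# X^{j'-j} Z^{k'-k}, up to the global phase exp(i 2pi (j'-j) k' / d).
d = 3
w = cmath.exp(2j * cmath.pi / d)
I = [[1 if r == c else 0 for c in range(d)] for r in range(d)]
X = [[1 if r == (c + 1) % d else 0 for c in range(d)] for r in range(d)]
Z = [[w**r if r == c else 0 for c in range(d)] for r in range(d)]

def matmul(A, B):
    return [[sum(A[r][t] * B[t][c] for t in range(d)) for c in range(d)]
            for r in range(d)]

def mpow(A, n):
    R = I
    for _ in range(n % d):
        R = matmul(R, A)
    return R

def close(A, B):
    return all(abs(A[r][c] - B[r][c]) < 1e-9
               for r in range(d) for c in range(d))

for j in range(d):
    for k in range(d):
        for jp in range(d):
            for kp in range(d):
                lhs = matmul(mpow(Z, kp), matmul(mpow(X, jp),
                      matmul(mpow(X, d - j), mpow(Z, d - k))))
                phase = w ** ((jp - j) * kp)
                rhs = [[phase * v for v in row]
                       for row in matmul(mpow(X, jp - j), mpow(Z, kp - k))]
                assert close(lhs, rhs)
```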
\begin{figure}[ht] \begin{center} \includegraphics[width=250pt]{QVC.jpg} \vspace*{2ex} \caption{sending one qudit through quantum channel $\mathcal{E}$ using quantum Vernam cipher.} \label{fig:scheme1} \end{center} \end{figure} In particular, if Alice's message is corrupted by the Pauli error ${\mathrm X}^j{\mathrm Z}^k$, the joint state after Bob's decryption is \begin{eqnarray} \label{QVC with error} &&\br{\control{{\mathrm X}^{-1}}}_{B_1A}\br{\control{{\mathrm Z}^{-1}}}_{B_2A}{\mathrm X}^j{\mathrm Z}^k\br{\control{{\mathrm Z}}}_{A_2A}\br{\control{{\mathrm X}}}_{A_1A}\ket{\phi}_{A_1B_1}\ket{\phi}_{A_2B_2}\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}\br{\control{{\mathrm X}^{-1}}}_{B_1A}\br{\control{{\mathrm Z}^{-1}}}_{B_2A}{\mathrm X}^j{\mathrm Z}^k\br{\control{{\mathrm Z}}}_{A_2A}\br{\control{{\mathrm X}}}_{A_1A} \ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}\ket{\psi}_{A} \nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}\ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}{\mathrm X}^{-t}{\mathrm Z}^{-t'}{\mathrm X}^j{\mathrm Z}^k{\mathrm Z}^{t'}{\mathrm X}^t\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}e^{{\mathrm i} \cdot \frac{2\pi}{d} (kt-jt')}\ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}{\mathrm X}^j{\mathrm Z}^k\ket{\psi}\nonumber\\ &=&\ket{\phi^{0,k}}_{A_1B_1}\ket{\phi^{0,-j}}_{A_2B_2}\otimes {\mathrm X}^j{\mathrm Z}^k\ket{\psi}.\label{eq:messagecorrupt} \end{eqnarray} Note that by Equation \eqref{eq:messagecorrupt}, there is a one-to-one correspondence between the Pauli errors and the state of the maximally entangled pair. An ${\mathrm X}^{j}$ error on the cipher-text is reflected in the state of the second MES as a ${\mathrm Z}^{-j}$ error and a ${\mathrm Z}^{k}$ error on the cipher-text is reflected in the state of the first MES as a ${\mathrm Z}^{k}$ error. 
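The derivation in Equation~\eqref{eq:messagecorrupt} can likewise be verified by direct simulation. The sketch below (for $d=3$ and an arbitrarily chosen input qudit, both illustrative assumptions) tracks amplitudes over computational-basis tuples $(a_1,b_1,a_2,b_2,a)$ and checks that a channel error ${\mathrm X}^j{\mathrm Z}^k$ leaves the MES pair in $\ket{\phi^{0,k}}\ket{\phi^{0,-j}}$ with the error ${\mathrm X}^j{\mathrm Z}^k$ on the message.

```python
import cmath, math

d = 3
w = cmath.exp(2j * cmath.pi / d)
psi = [0.5, 0.5j, math.sqrt(0.5)]  # an arbitrary normalized input qudit

def apply(state, f):
    """Apply f: (basis_key, amplitude) -> (new_key, new_amplitude) linearly."""
    new = {}
    for key, amp in state.items():
        nk, namp = f(key, amp)
        new[nk] = new.get(nk, 0) + namp
    return new

def qvc_with_error(j, k):
    """QVC on |psi> with Pauli error X^j Z^k on the qudit in transit.
    Keys are computational-basis tuples (a1, b1, a2, b2, a)."""
    st = {(t, t, u, u, a): psi[a] / d
          for t in range(d) for u in range(d) for a in range(d)}
    # Alice encrypts: controlled-X from A1 onto A, then controlled-Z from A2.
    st = apply(st, lambda s, v: (s[:4] + ((s[4] + s[0]) % d,), v))
    st = apply(st, lambda s, v: (s, v * w ** (s[2] * s[4])))
    # Channel error X^j Z^k (Z^k acts first, then X^j).
    st = apply(st, lambda s, v: (s[:4] + ((s[4] + j) % d,), v * w ** (k * s[4])))
    # Bob decrypts: controlled-Z^{-1} from B2, then controlled-X^{-1} from B1.
    st = apply(st, lambda s, v: (s, v * w ** (-s[3] * s[4])))
    st = apply(st, lambda s, v: (s[:4] + ((s[4] - s[1]) % d,), v))
    return st

def expected(j, k):
    """|phi^{0,k}>_{A1B1} |phi^{0,-j}>_{A2B2} (x) X^j Z^k |psi>,
    per the derivation in Eq. (eq:messagecorrupt)."""
    return {(t, t, u, u, (a + j) % d):
            w ** (t * k) * w ** (-u * j) * w ** (a * k) * psi[a] / d
            for t in range(d) for u in range(d) for a in range(d)}

for j in range(d):
    for k in range(d):
        got, exp = qvc_with_error(j, k), expected(j, k)
        keys = set(got) | set(exp)
        assert all(abs(got.get(x, 0) - exp.get(x, 0)) < 1e-9 for x in keys)
```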
Note that for every integer $s$ we have $$\br{F\otimes F^{\dagger}} \ket{\phi^{0,s}} \quad=\quad \br{F{\mathrm Z}^{s}\otimes F^{\dagger}}\ket{\phi} \quad=\quad \br{F{\mathrm Z}^{s}F^{\dagger}\otimes\id}\ket{\phi} \quad=\quad \br{{\mathrm X}^{-s}\otimes \id}\ket{\phi}.$$ Therefore, in order to extract the error syndrome, it suffices for Alice and Bob to apply $F$ and $F^{\dagger}$, respectively, on their marginals of the MESs and measure them in the computational basis. By comparing their measurement outcomes they can determine the Pauli error. When the quantum Vernam cipher is used for communication of multiple messages, it is possible to detect errors without disturbing the state of the MES pairs at the cost of an additional fresh MES. This error detection procedure allows for recycling of MESs, which is crucial in order to achieve a high communication rate, as explained in Section~\ref{sec:Qhashing}. Here we describe a simplified version of the detection procedure. We first need the following proposition.
\begin{prop}\label{lem:cnotbell} It holds that \[\br{\control{{\mathrm X}}}_{A_1A_2} \cdot \br{\control{{\mathrm X}} }_{B_1B_2}\ket{\phi^{j_1,k_1}}_{A_1B_1}\ket{\phi^{j_2,k_2}}_{A_2B_2}=\ket{\phi^{j_1,k_1-k_2}}_{A_1B_1}\ket{\phi^{j_1+j_2,k_2}}_{A_2B_2}.\] In particular, \[\br{\control{{\mathrm X}}}_{A_1A_2} \cdot \br{ \control{{\mathrm X}} }_{B_1B_2}\ket{\phi^{0,k_1}}_{A_1B_1}\ket{\phi^{0,k_2}}_{A_2B_2}=\ket{\phi^{0,k_1-k_2}}_{A_1B_1}\ket{\phi^{0,k_2}}_{A_2B_2}.\] \end{prop} \begin{proof} \begin{eqnarray*} &&\br{\control{{\mathrm X}}}_{A_1A_2}\cdot \br{\control{{\mathrm X}}}_{B_1B_2}\ket{\phi^{j_1,k_1}}_{A_1B_1}\ket{\phi^{j_2,k_2}}_{A_2B_2}\\&=&\frac{1}{d}\sum_{t_1,t_2=0}^{d-1}\br{\control{{\mathrm X}}}_{A_1A_2}\cdot \br{\control{{\mathrm X}}}_{B_1B_2}e^{{\mathrm i} \cdot \frac{2\pi}{d} (t_1k_1+t_2k_2) }\ket{t_1+j_1}_{A_1}\ket{t_1}_{B_1}\ket{t_2+j_2}_{A_2}\ket{t_2}_{B_2}\\ &=&\frac{1}{d}\sum_{t_1,t_2=0}^{d-1}e^{{\mathrm i} \cdot \frac{2\pi}{d}(t_1k_1+t_2k_2)}\ket{t_1+j_1}_{A_1}\ket{t_1}_{B_1}\ket{t_2+j_2+t_1+j_1}_{A_2}\ket{t_2+t_1}_{B_2}\\ &=&\ket{\phi^{j_1,k_1-k_2}}_{A_1B_1}\ket{\phi^{j_1+j_2,k_2}}_{A_2B_2}. \end{eqnarray*} \end{proof} \suppress{ \begin{lemma}\label{lem:hashorder} Given registers $A_0,A_1,A_2,B_0,B_1,B_2$, the operators $$\br{\control{{\mathrm X}}}_{A_0A_1}, \br{\control{{\mathrm X}}}_{B_0B_1}, \br{\control{{\mathrm X}}}_{A_0A_2}, \br{\control{{\mathrm X}}}_{B_0B_2}$$ commute with each other. \end{lemma} \begin{proof} $\br{\control{{\mathrm X}}}_{A_0A_1}$ commutes with $\br{\control{{\mathrm X}}}_{B_0B_1}$ and $\br{\control{{\mathrm X}}}_{B_0B_2}$ as they act on disjoint registers. For the same reason, $\br{\control{{\mathrm X}}}_{A_0A_2}$ commutes with $\br{\control{{\mathrm X}}}_{B_0B_1}$ and $\br{\control{{\mathrm X}}}_{B_0B_2}$. Thus, it suffices to show that $\br{\control{{\mathrm X}}}_{A_0A_1}$ commute with $\br{\control{{\mathrm X}}}_{A_0A_2}$. 
Let $\ket{a_0},\ket{a_1},\ket{a_2}$ be an element of the computational basis in the registers $A_0,A_1,A_2$, respectively. Then \begin{eqnarray*} &&\br{\control{{\mathrm X}}}_{A_0A_1}\cdot\br{\control{{\mathrm X}}}_{A_0A_2}\ket{a_0,a_1,a_2}\\ &=&\br{\control{{\mathrm X}}}_{A_0A_2}\cdot\br{\control{{\mathrm X}}}_{A_0A_1}\ket{a_0,a_1,a_2}\\ &=&\ket{a_0, \, a_0{+}a_1 \bmod d, \, a_0{+}a_2 \bmod d}. \end{eqnarray*} \end{proof} } Suppose that Alice and Bob start with $m$ copies of the MES $\ket{\phi}$ and use them in pairs to communicate messages using QVC over a noisy channel. By Equation~\eqref{eq:messagecorrupt} all the MESs remain in $\textsf{span}\set{\ket{\phi^{0,k}}:0\leq k\leq d-1}$. This invariance is crucial to the correctness of our simulation. Let $\ket{\phi^{0,k_i}}_{A_iB_i}$ be the state of the $i$-th MES after the communication is done. In order to detect errors, Alice and Bob use an additional MES $\ket{\phi}_{A_0B_0}$. For $i=1,...,m$, Alice and Bob apply $\br{\control{{\mathrm X}}}_{A_0A_i}$ and $\br{\control{{\mathrm X}}}_{B_0B_i}$, respectively. By Proposition~\ref{lem:cnotbell}, the joint state of the register $A_0B_0$ will be $\ket{\phi^{0,-\!\sum_{i=1}^m \! k_i}}_{A_0B_0}$. Now, all Alice and Bob need to do is to apply $F$ and $F^{\dagger}$ on registers $A_0$ and $B_0$, respectively, and measure their marginal states in the computational basis. By comparing their measurement outcomes they can decide whether any error has occurred. In this procedure the MESs used as the keys in QVC are not measured. Note that if the corruptions are chosen so that~$\sum_{i=1}^m \! k_i=0 \bmod d$ then this procedure fails to detect the errors. We will analyze a modified version of this error detection procedure in detail in Section~\ref{sec:Qhashing} which allows error detection with high probability independent of the error syndrome. 
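The classical bookkeeping behind this detection step is simple: if MES $i$ ends up in $\ket{\phi^{0,k_i}}$, the accumulator MES ends up in $\ket{\phi^{0,-\sum_i k_i}}$, and comparing the Fourier-basis measurement outcomes reveals exactly this exponent. The following sketch (with an illustrative choice of $d$) makes the failure mode explicit.

```python
d = 5  # illustrative qudit dimension

def detect(syndromes):
    """Given the Z-syndromes k_i on the MESs after QVC, the controlled-X
    gates fold -sum(k_i) mod d onto the fresh MES A0B0; the Fourier-basis
    measurements reveal this exponent, so detection succeeds iff it is
    nonzero."""
    acc = (-sum(syndromes)) % d
    return acc != 0  # True iff an error is detected

assert detect([1, 2])          # some corruption: detected
assert not detect([0, 0, 0])   # no corruption: nothing to detect
assert not detect([2, 3])      # corruptions summing to 0 mod d evade detection
```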
\suppress{ In our application of the quantum Vernam cipher, we may not have the above perfect scenario, due to corruptions by the adversary. In the following, we analyze several possibilities we encounter. \begin{itemize} \item If the pre-shared MES pair are initially in the state $\ket{\phi^{0,s}}_{A_1B_1}\ket{\phi^{0,s'}}_{A_2B_2}$, then the state after Bob's decryption is \begin{eqnarray} &&\br{\control{{\mathrm X}^{-1}}}_{B_1A}\br{\control{{\mathrm Z}^{-1}}}_{B_2A}{\mathrm X}^j{\mathrm Z}^k\br{\control{{\mathrm Z}}}_{A_2A}\br{\control{{\mathrm X}}}_{A_1A}\ket{\phi^{0,s}}_{A_1B_1}\ket{\phi^{0,s'}}_{A_2B_2}\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}e^{{\mathrm i} \cdot \frac{2\pi}{d} (ts+t's') }\br{\control{{\mathrm X}^{-1}}}_{B_1A}\br{\control{{\mathrm Z}^{-1}}}_{B_2A}{\mathrm X}^j{\mathrm Z}^k\br{\control{{\mathrm Z}}}_{A_2A}\br{\control{{\mathrm X}}}_{A_1A} \nonumber\\ && \hspace*{55ex} \ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}e^{{\mathrm i} \cdot \frac{2\pi}{d}(ts+t's')}\ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}{\mathrm X}^{-t}{\mathrm Z}^{-t'}{\mathrm X}^j{\mathrm Z}^k{\mathrm Z}^{t'}{\mathrm X}^t\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}e^{{\mathrm i} \cdot \frac{2\pi}{d} (\br{s+k}t+\br{s'-j}t') }\ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}{\mathrm X}^j{\mathrm Z}^k\ket{\psi}\nonumber\\ &=&\ket{\phi^{0,s+k}}_{A_1B_1}\ket{\phi^{0,s'-j}}_{A_2B_2}\otimes {\mathrm X}^j{\mathrm Z}^k\ket{\psi}.\label{eqn:quantumvernamcipher} \end{eqnarray} \item If Bob performs his operation in Quantum Vernam Cipher before Alice, then \begin{eqnarray} &&\br{\control{{\mathrm Z}}}_{A_2A}\br{\control{{\mathrm X}}}_{A_1A}{\mathrm X}^j{\mathrm Z}^k\br{\control{{\mathrm X}^{-1}}}_{B_1A}\br{\control{{\mathrm Z}^{-1}}}_{B_2A}\ket{\phi^{0,s}}_{A_1B_1}\ket{\phi^{0,s'}}_{A_2B_2}\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1} e^{{\mathrm i} \cdot 
\frac{2\pi}{d}(ts+t's')} \br{\control{{\mathrm Z}}}_{A_2A}\br{\control{{\mathrm X}}}_{A_1A}{\mathrm X}^j{\mathrm Z}^k\br{\control{{\mathrm X}^{-1}}}_{B_1A}\br{\control{{\mathrm Z}^{-1}}}_{B_2A}\nonumber\\ &&\hspace*{55ex}\ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}e^{{\mathrm i} \cdot \frac{2\pi}{d} (ts+t's')}\ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}{\mathrm Z}^{t'}{\mathrm X}^t{\mathrm X}^j{\mathrm Z}^k{\mathrm X}^{-t}{\mathrm Z}^{-t'}\ket{\psi}_{A}\nonumber\\ &=&\frac{1}{d}\sum_{t,t'=0}^{d-1}e^{{\mathrm i} \cdot \frac{2\pi}{d}(\br{s-k}t+\br{s'+j}t')}\ket{t}_{A_1}\ket{t}_{B_1}\ket{t'}_{A_2}\ket{t'}_{B_2}{\mathrm X}^j{\mathrm Z}^k\ket{\psi}\nonumber\\ &=&\ket{\phi^{0,s-k}}_{A_1B_1}\ket{\phi^{0,s'+j}}_{A_2B_2}\otimes {\mathrm X}^j{\mathrm Z}^k\ket{\psi}.\label{eqn:reverseorderquantumvernamcipher} \end{eqnarray} \end{itemize} } \section{Teleportation-based protocols via classical channel with large alphabet}\label{sec:BKlarge} \subsection{Overview} We adapt from ~\cite{BNTTU14} the ideas to teleport each quantum message and to rewind the protocol instead of backtracking. We also adapt Haeupler's template~\cite{Haeupler:2014} to make a conversation robust to noise: Both parties conduct their original conversation as if there were no noise, except for the following: \begin{itemize} \item At regular intervals they exchange concise summaries (a $\Theta \br{1}$\suppress{ or $\Theta \br{\log \log n}$}-bit hash value) of the conversation up to the point of the exchange. \item If the summary is consistent, they continue the conversation. \item If the summary is inconsistent, an error is detected. The parties backtrack to an earlier stage of the conversation and resume from there. \end{itemize} This template can be interpreted as an error correcting code over many messages, with trivial (and most importantly \emph{message-wise\/}) encoding. 
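In the classical setting, the template can be illustrated with a toy simulation. The sketch below is our own drastic simplification: it is one-directional, the summaries themselves are exchanged noiselessly, and a mismatch always triggers a backtrack of exactly one block.

```python
import hashlib

def summary(view):
    """A concise summary of a transcript (here: four bytes of SHA-256)."""
    return hashlib.sha256(bytes(view)).digest()[:4]

def simulate(msgs, channel, block=4):
    """Toy Haeupler-style loop: converse as if noiseless, compare summaries
    after each block, and backtrack one block on a mismatch."""
    a_view, b_view = [], []   # the transcript as each party recorded it
    i, rounds = 0, 0          # next original message; symbols exchanged
    while i < len(msgs):
        for _ in range(min(block, len(msgs) - i)):
            a_view.append(msgs[i])
            b_view.append(channel(msgs[i]))   # possibly corrupted in transit
            i += 1
            rounds += 1
        rounds += 1           # one round for the summary exchange
        if summary(a_view) != summary(b_view):
            n = min(block, len(a_view))       # inconsistency detected:
            del a_view[-n:], b_view[-n:]      # backtrack one block
            i -= n
    return b_view == list(msgs), rounds
```

With an error-free channel the overhead is one check per block; a single corruption costs only the retransmission of one block plus one extra check, matching the $O_\epsilon\br{1}$ waste per error discussed below.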
The 2-way summaries measure the error syndromes over a large number of messages, thereby preserving the rate. It works (in the classical setting) by limiting the maximum amount of communication wasted by a single error to $O_\epsilon \br{1}$. The worst-case error disrupts the consistency checks, but Alice and Bob agree to backtrack a constant amount when an inconsistency is detected. As the error fraction vanishes, the communication rate goes to~$1$. In addition, these consistency tests are efficient, consisting of evaluations of hash functions. \subsubsection{Insufficiency of simply combining \cite{BNTTU14} and~\cite{Haeupler:2014}.} Suppose we have to simulate an interactive protocol $\Pi$ that uses noiseless classical channels in the teleportation-based model. When implementing $\Pi$ with noisy classical channels, it is \emph{not sufficient\/} to apply Haeupler's template to the classical messages used in teleportation, and rewind as in~\cite{BNTTU14} when an error is detected. The reason is that, in~\cite{BNTTU14}, each message is expanded to convey different types of actions in one step (simulating the protocol forward or reversing it). This also maintains the matching between the classical data and the corresponding MES, and the matching between systems containing MESs. However, this method incurs a large constant-factor overhead, which we cannot afford. \subsubsection{New difficulties in rate-optimal simulations.} Due to errors in communication, the parties need to actively rewind the simulation to correct errors on their joint quantum state. This itself can lead to a situation where the parties may not agree on how to proceed with the simulation (to rewind the simulation or to proceed forward). In order to move on, both parties first need to know what the other party has done so far in the simulation. This allows them to obtain a global view of the current joint state and decide on their next action. In Ref.
\cite{BNTTU14}, this reconciliation step was facilitated by the extra information sent by each party and the use of tree codes. This mechanism is not available to us. \subsubsection{Framework.} Our first new idea is to introduce sufficient yet concise data structures so that the parties can detect inconsistencies in (1) the stage in which they are in the protocol, (2) what type of action they should be taking, (3) histories leading to the above, (4) histories of measurement outcomes generated by one party versus the potentially different (corrupted) received instruction for teleportation decoding, (5) which system contains the next MES to be used, and (6) a classical description of the joint quantum state, which is only partially known to each party. Each of Alice and Bob maintains her/his data (we collectively call these $D_{\mathsf A}, D_{\mathsf B}$ respectively, here), and also an estimate of the other party's data ($\widetilde{D_{\mathsf B}}, \widetilde{D_{\mathsf A}}$ respectively). Without channel noise, these data are equal to their estimates. \subsubsection{A major new obstacle: out-of-sync teleportation.} At every step in the simulation protocol $\Pi'$, Alice and Bob may engage in one of three actions: a forward step in $\Pi$, a step in reverse, or an exchange of classical summaries. However, the summaries can also be corrupted. This leads to a new difficulty: errors in the summaries can trigger Alice and Bob to engage in different actions. In particular, it is possible that one party tries to teleport while the other expects classical communication, with only one party consuming his/her half of an MES. They then become out-of-sync over which MESs to use. This kind of problem, to the best of our knowledge, has not been encountered before, and it is not clear if quantum data can be protected from such errors. (For example, Alice may try to teleport a message into an MES that Bob already ``used'' earlier.)
One of our main technical contributions is to show that the quantum data can always be located and recovered when Alice and Bob resolve the inconsistencies in their data $(D_{\mathsf A}, \widetilde{D_{\mathsf B}})$ and $(\widetilde{D_{\mathsf A}},D_{\mathsf B})$ in the low-noise regime. This is particularly surprising since quantum data can potentially leak irreversibly to the environment (or the adversary): Alice and Bob potentially operate in an open system due to channel noise, and out-of-sync teleportation a priori does not protect the messages so sent. \subsubsection{Tight rope between robustness and rate.} The simulation maintains sufficient data structures to store information about each party's view so that Alice and Bob can overcome all the obstacles described above. The simulation makes progress so long as Alice's and Bob's views are consistent. The robustness of the simulation requires that the consistency checks be frequent and sensitive enough so that errors are caught quickly. On the other hand, to optimize interactive channel capacity, the checks have to remain communication-efficient and not too frequent. This calls for delicate analysis in which we balance the two. We also put in some redundancy in the data structures to simplify the analysis. \subsection{Result} In this section, we focus on the teleportation-based quantum communication model with polynomial-size alphabet. In more detail, Alice and Bob share an unlimited number of copies of an MES before the protocol begins. The parties effectively send each other a qudit by using an MES and communicating two classical symbols from the communication alphabet. The complexity of the protocol is the number of classical symbols exchanged, while the MESs used are available for free. We call this model noiseless if the classical channel is noiseless.
The following is our main result in this model for simulation of an $n$-round noiseless communication protocol over an adversarial channel that corrupts any $\epsilon$ fraction of the transmitted symbols. \suppress{First, we state the result for large alphabets.} \begin{theorem} Consider any $n$-round alternating communication protocol $\Pi$ in the teleportation-based model, communicating messages over a noiseless channel with an alphabet $\Sigma$ of bit-size $\Theta\br{\log n}$. Algorithm~\ref{algo:Mainalgorithm} is a computationally efficient coding scheme which, given $\Pi$, simulates it with probability at least $1-2^{-\Theta\br{n\epsilon}}$, over any fully adversarial error channel with alphabet $\Sigma$ and error rate $\epsilon$. The simulation uses $n\br{1+\Theta\br{\sqrt{\epsilon}}}$ rounds of communication, and therefore achieves a communication rate of $1-\Theta\br{\sqrt{\epsilon}}$. Furthermore, the computational complexity of the coding operations is~$O\br{n^2}$. \end{theorem} \suppress{Consider teleportation-based noiseless communication protocols of length~$n$ defined over a channel with a $\Theta(\log n)$-bit alphabet, and the problem of simulating them with a noisy version of the channel over the same alphabet. There is a protocol that with probability at least~$1 - 2^{-\Omega(\epsilon n)}$, simulates any $n$-symbol teleportation-based noiseless communication protocol using $n(1+\Theta(\sqrt{\epsilon}))$ symbols over any fully adversarial error channel with error rate at most~$\epsilon$. In other words, the simulation achieves information rate $1-\Theta(\sqrt{\epsilon})$. The simulation of channels over constant-size alphabets is more challenging. Nonetheless, we show that a similar simulation is possible in this case as well.
\begin{theorem} Consider teleportation-based noiseless communication protocols of length~$n$ defined over a channel with a constant-size alphabet, and the problem of simulating them with a noisy version of the channel over the same alphabet. There is a protocol that with probability at least~$1 - 2^{-\Omega(\epsilon n)}$, simulates any $n$-symbol teleportation-based noiseless communication protocol using $n(1+\Theta(\sqrt{\epsilon}))$ symbols over any fully adversarial error channel with error rate at most~$\epsilon$. In other words, the simulation achieves information rate $1-\Theta(\sqrt{\epsilon})$. \end{theorem} We prove this in Section~\ref{sec:small-alphabet-CB}. } \subsection{Description of Protocol}\label{sec:general-descrpition-largeclasscial} We follow the notation associated with quantum communication protocols introduced in Section~\ref{sec:qucomm} in the description below. Recall that in the teleportation-based quantum communication model, Alice and Bob implement a protocol~$\Pi_0$ with prior shared entanglement and quantum communication by substituting teleportation for quantum communication. For simplicity, we assume that~$\Pi_0$ is alternating, and begins with Alice. In the implementation~$\Pi$ of~$\Pi_0$, the message register~$C$ from~$\Pi_0$ has two counterparts, $C_{\mathsf A}$ and~$C_{\mathsf B}$, held by Alice and Bob, respectively. The unitary operations on~$AC$ in~$\Pi_0$ are applied by Alice on~$A C_{\mathsf A}$ in~$\Pi$. When Alice sends the qudit in~$C$ to Bob in~$\Pi_0$, she applies the teleportation measurement to~$C_{\mathsf A}$ and her share of the next available MES, and sends the measurement outcome to Bob in~$\Pi$. Then Bob applies a decoding operation on his share of the MES, based on the message received, and swaps the MES register with~$C_{\mathsf B}$. Bob and Alice's actions in~$\Pi$ when Bob wishes to do a local operation and send a qudit to Alice in~$\Pi_0$ are analogously defined. 
For ease of comparison with the joint state in~$\Pi_0$, we describe the joint state of the registers in~$\Pi$ (or its simulation over a noisy channel) in terms of registers~$ABC$. There, $C$ stands for~$C_{\mathsf A}$ if Alice is to send the next message or all messages have been sent, and for~$C_{\mathsf B}$ if Bob is to send the next message. Starting with such a protocol~$\Pi$ in the teleportation-based model, we design a simulation protocol~$\Pi'$ which uses a noisy classical channel. The simulation works with \emph{blocks\/} of an even number of messages. By a \emph{block\/} of size~$r$ (for even~$r$) of~$\Pi$, we mean a sequence of~$r$ local operations and messages alternately sent in~$\Pi$ by Alice and Bob, starting with Alice. Roughly speaking, Alice and Bob run the steps of the original protocol~$\Pi$ as is, in blocks of size $r \defeq \Theta (\frac{1}{\sqrt{\epsilon}})$, with $r$ even. They exchange summary information between these blocks, in order to check whether they agree on the operations that have been applied to the quantum registers $A B C$ in the simulation. The MESs used for teleportations are correspondingly divided into blocks of $r$ MESs, implicitly numbered from $1$ to $r$: the odd-numbered ones are used to simulate quantum communication from Alice to Bob, and the even-numbered ones from Bob to Alice. If either party detects an error in transmission, they may run a block of~$\Pi$ in reverse, or simply communicate classically to help recover from the error. The classical communication is also conducted in sequences equal in length to the ones involving a block of~$\Pi$. A block of~$\Pi'$ refers to any of these types of sequences.
\subsubsection{Metadata} In more detail, Alice uses an iteration in~$\Pi'$ for one out of four different types of operations: evolving the simulation by running a block of~$\Pi$ in the forward direction (denoted a ``$+1$'' block); reversing the simulation by applying inverses of unitary operations of~$\Pi$ (denoted a ``$-1$'' block); synchronizing with Bob on the number of MESs used so far by applying identity operators between rounds of teleportation or reversing such an iteration (denoted a ``$0$'' block, with $0$ standing for the application of unitary operations~$U_i^0$ which are~$\id^{AC}$); catching up on the description of the protocol so far by exchanging classical data with Bob (denoted a ``$\sC$'' block, with $\sC$ standing for ``classical''). Alice records the sequence of types of iterations as her ``metadata'' in the string $\mathit{FullMA} \in \{\pm1, 0, \sC \}^*$. $\mathit{FullMA}$ gets extended by one symbol for each new iteration of the simulation protocol~$\Pi'$. The number of blocks of $r$ MESs Alice has used is denoted $q_\mathit{MA}$, which corresponds to the number of non-$\sC$ symbols in $\mathit{FullMA}$. Similarly, Bob maintains data $\mathit{FullMB}$ and $q_\mathit{MB}$. $\mathit{FullMA}$ and $\mathit{FullMB}$ may not agree due to transmission errors. To counter this, the two players exchange information about their metadata at the end of each block. Hence, Alice also holds $\widetilde{\MB}$ and $q_{\widetilde{\MB}}$ as her best estimate of Bob's metadata and the number of MESs he has used, respectively. Similarly, Bob holds $\widetilde{\MA}$ and $q_{\widetilde{\MA}}$. We use these data to control the simulation; before taking any action in~$\Pi'$, Alice checks if her guess $\widetilde{\MB}$ equals~$\mathit{FullMB}$. Bob does the analogous check for his data.
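The metadata bookkeeping can be sketched as follows (a hypothetical encoding of the symbols; the actual protocol only maintains $\mathit{FullMA}$ and derives $q_\mathit{MA}$ in this way conceptually):

```python
# Hypothetical encoding of the metadata string FullMA: one symbol per
# iteration of Pi', drawn from {+1, -1, 0, C}.
CLASSICAL = "C"
BLOCK_TYPES = ("+1", "-1", "0", CLASSICAL)

class Metadata:
    def __init__(self):
        self.full = []                      # FullMA (resp. FullMB)

    def record(self, block_type):
        """Extend FullMA by one symbol for the new iteration of Pi'."""
        assert block_type in BLOCK_TYPES
        self.full.append(block_type)

    @property
    def q(self):
        """q_MA: blocks of r MESs used = number of non-C symbols in FullMA."""
        return sum(1 for s in self.full if s != CLASSICAL)

alice = Metadata()
for t in ["+1", "+1", "C", "-1", "0", "+1"]:
    alice.record(t)
assert alice.q == 5   # five non-C iterations consumed five MES blocks
```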
\subsubsection{Number of MESs used} Once both parties reconcile their view of each other's metadata with the actual data, they might detect a discrepancy in the number of MESs they have used. The three drawings in Figure~\ref{fig:EPR_registers} represent the $\lceil \frac{n}{2r}(1 + O(r \epsilon))\rceil$ blocks of $r=O(\sqrt{1/\epsilon})$ MESs at different points in the protocol: first, before the protocol begins; second, when Alice and Bob have used the same number of MESs; and third, when they are not synchronized, say, Alice has used more blocks of MESs than Bob. A difference in~$q_{\mathit{MA}}$ and~$q_{\mathit{MB}}$ indicates that the joint state of the protocol~$\Pi$ can no longer be recovered from registers~$A C_{\mathsf A} C_{\mathsf B} B$ alone. Since one party did not correctly complete the teleportation operations, the (possibly erroneous) joint state may be thought of as having ``leaked'' into the partially measured MESs which were used by only one party. We will elaborate on this scenario in Section~\ref{sec:Out-of-sync teleportation}. \begin{figure}[!t] \centering \includegraphics[width=480pt]{EPR_register.jpg} \caption{These figures represent the MES blocks at different stages of the protocol. The systems depicted by circles have not been used yet for teleportation; those depicted by squares have been used already (either measured or teleportation-decoded). Figure (a) represents the MES blocks at the beginning of the protocol, when none have been used. Figure (b) represents them when Alice and Bob have used the same number of them; this is the desired situation. Figure (c) represents a situation when Alice and Bob are out of sync; e.g., Alice has used more MES blocks than Bob.
They then work to get back in sync before resuming the simulation.} \label{fig:EPR_registers} \end{figure} \subsubsection{Pauli data} The last piece of information required to complete the description of what has happened so far on the quantum registers $A B C$ is about the Pauli operators corresponding to teleportation, which we call the ``Pauli data''. These Pauli data contain information about the teleportation measurement outcomes as well as about the teleportation decoding operations. Since incorrect teleportation decoding may arise due to transmission errors, we must allow the parties to apply Pauli corrections at some point. We choose to concentrate such Pauli corrections on the receiver's side at the end of each teleportation. These Pauli corrections are computed from the history of all classical data available, before the evolution or reversal of~$\Pi$ in a block starts.\suppress{ whereas the measurement and decoding Pauli data are exchanged online during the computation.} The measurement data are directly transmitted over the noisy classical communication channel and the decoding data are directly taken to be the data received over the noisy channel. If there is no transmission error, the decoding Pauli operation should correspond to the inverse of the effective measurement Pauli operation and cancel out to yield a noiseless quantum channel. Figure~\ref{fig:teleportation-representation} depicts the different types of Pauli data in blocks of type $+1$ and of type $-1$.
The Pauli operations applied on Alice's side are in the following order: \begin{quote} teleportation measurement for the first qudit she sends, \\ decoding operation for the first qudit she receives, \\ correction operation for the same qudit (the first qudit she receives); teleportation measurement for the second qudit she sends, \\ decoding operation for the second qudit she receives, \\ correction operation for the same qudit (the second qudit she receives); and so on. \end{quote} The Pauli operations applied on Bob's side are in a different order: \begin{quote} decoding operation for the first qudit he receives, \\ correction operation for the same qudit (the first qudit he receives), \\ teleportation measurement for the first qudit he sends; decoding operation for the second qudit he receives, \\ correction operation for the same qudit (the second qudit he receives), \\ teleportation measurement for the second qudit he sends; and so on. \end{quote} \noindent Alice records as her Pauli data, in the string $\mathit{FullPA} \in (\Sigma^{3r})^*$, the sequence of Pauli operators that are applied on the quantum register on her side. Each block of $\mathit{FullPA}$ is divided into three parts of $r$ symbols from the alphabet $\Sigma$. The first part corresponds to the $\frac{r}{2}$ teleportation measurement outcomes, with two symbols for each measurement outcome. Each of the $\frac{r}{2}$ teleportation decoding operations is represented by two symbols in the second part. Finally, the third part contains two symbols for each of the $\frac{r}{2}$ Pauli corrections. Similarly, Bob records the sequence of Pauli operators applied on his side in $\mathit{FullPB}$. As described above, the measurement outcome and the decoding Pauli operations are available to the sender and the receiver, respectively.
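The layout of one block of $\mathit{FullPA}$ can be made concrete with a small parser (our own token encoding: each Pauli ${\mathrm X}^a{\mathrm Z}^b$ is a pair of symbols):

```python
def split_pauli_block(block, r):
    """Split one 3r-symbol block of FullPA into its three parts:
    measurement outcomes, decoding operations, Pauli corrections,
    each holding r/2 Pauli operators encoded as two symbols apiece."""
    assert len(block) == 3 * r and r % 2 == 0
    meas, dec, corr = block[:r], block[r:2 * r], block[2 * r:]
    pairs = lambda part: [tuple(part[2 * i:2 * i + 2]) for i in range(r // 2)]
    return pairs(meas), pairs(dec), pairs(corr)

# a toy block with r = 4: symbols 0..11 stand in for elements of Sigma
meas, dec, corr = split_pauli_block(list(range(12)), r=4)
assert meas == [(0, 1), (2, 3)]      # r/2 measurement outcomes
assert dec == [(4, 5), (6, 7)]       # r/2 decoding operations
assert corr == [(8, 9), (10, 11)]    # r/2 Pauli corrections
```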
Based on the message transcript in~$\Pi'$ so far, Alice maintains her best guess $\widetilde{\PB}$ for Bob's Pauli data and Bob maintains his best guess $\widetilde{\PA}$ for Alice's Pauli data. These data also play an important role in the simulation. Before taking any action in~$\Pi'$, Alice checks if her guess~$\widetilde{\PB}$ equals $\mathit{FullPB}$. Bob does the analogous check for his data. Alice and Bob check and synchronize their classical data, i.e., the metadata and Pauli data, by employing the ideas underlying the Haeupler algorithm~\cite{Haeupler:2014}. Once they agree on each other's metadata and Pauli data, they both possess enough information to compute the content of the quantum register (to the best of their knowledge). \begin{figure}[!t] \centering \includegraphics[width=350pt]{teleportation-Representation.jpg} \caption{Representation of the teleportation scheme for a size-$r$ block. The figure on the left corresponds to Alice and Bob having blocks of type $+1$, the most common block type, and the one on the right to a block of type $-1$ for both. The large rectangles correspond to unitary operations of the original protocol or their inverses, or even an identity operator, being applied by Alice or by Bob to $A C$ or $B C$, respectively. Bob has $r/2$ rectangles and applies a unitary operation or an inverse in each of them whenever he has a block of type $\pm 1$. Alice has $r/2 + 1$ rectangles: in a block of type $+1$ she uses the first $r/2$ to apply unitary operations and applies an identity on the last one, while in a block of type $-1$ she applies an identity in the first one and inverses of unitary operations in the last $r/2$. This is so that a $-1$ block for Alice can be the inverse of a $+1$ block for Alice, and vice-versa.
The small circles correspond to the Pauli operations due to teleportation measurement and teleportation decoding, with the teleportation being from Alice to Bob on odd-numbered MESs and from Bob to Alice on even-numbered MESs. The small squares on the receiver's side, right after the teleportation-decoding circles, correspond to the Pauli corrections made in order to try to correct errors in previous blocks.} \label{fig:teleportation-representation} \end{figure} \subsubsection{Out-of-Sync Teleportation} \label{sec:Out-of-sync teleportation} \suppress{ Consider the situation where the classical data on both sides are of full length but do not match at the beginning of an iteration of the simulation. Suppose that the adversary corrupts the classical data that the parties communicate to each other in this iteration. If no error or hash collision occurs in the next round, the parties will realize there is an error by checking the hash values and will try to fix their incorrect information. Now if a hash collision or corruption of the hashes by the adversary makes only Alice believe that her current guess of Bob's local data is correct, she will continue the simulation of the input protocol for another round, consuming the next block of MESs. On the other hand, Bob will try to resolve the inconsistency in their classical data, without accessing the quantum registers.} \subsubsection*{Basic out-of-sync scenario} Consider an iteration in which Alice believes she should implement a~$+1$ block, while Bob believes he has to resolve an inconsistency in their classical data. Alice will simulate one block of the input protocol~$\Pi$, consuming the next block of MESs. On the other hand, Bob will try to resolve the inconsistency through classical communication alone, and not access the quantum registers. Thus Alice will treat Bob's messages as the outcomes of his teleportation measurements, and she will perform the teleportation decoding operations according to these messages.
The situation is even worse, since Alice sends quantum information to Bob through teleportation of which Bob is unaware, and Bob views the teleportation measurement outcomes sent by Alice as classical information about Alice's local Pauli data and metadata corresponding to previous iterations. Note that at this point the quantum state in registers~$ABC$ may potentially be lost. This scenario could continue for several iterations and derail the simulation completely. To recover from such a situation, especially to retrieve the quantum information in the unused MESs at Bob's end, it would seem that Alice and Bob would have to rewind the simulation steps in~$\Pi'$ (and not only the steps of the original protocol~$\Pi$) to an appropriate point in the past. This rewinding itself would be subject to error, and the situation seems hopeless. Nonetheless, we provide a simple solution to address this kind of error, which translates out-of-sync teleportation to errors in implementing the forward simulation or rewinding of the original protocol~$\Pi$. As explained in the previous subsection, Alice and Bob first reconcile their view of the history of the simulation stored in their metadata. Through this, suppose they both discover the discrepancy in the number of MESs used. (There are other scenarios as well; for example, they may both think that~$q_\mathit{MA} = q_\mathit{MB}$. These scenarios lead to further errors, but the simulation protocol~$\Pi'$ eventually discovers the difference in MESs used.) In the scenario in which Alice and Bob both discover that~$q_\mathit{MA} \neq q_\mathit{MB}$, they try to ``gather'' the quantum data hidden in the partially used MESs back into the registers~$A B C$. In more detail, suppose Bob has used fewer MESs than Alice, and he discovers this at the beginning of the~$i$-th iteration. Let~$E_1 E_2 \dotsb E_r$ be registers with Bob that hold the halves of the \emph{first\/} block of MESs that Alice has used but Bob has not.
Note that~$E_1, E_3, \dotsc, E_{r-1}$ contain quantum information teleported by Alice, and~$E_2, E_4, \dotsc, E_r$ are MES-halves intended for teleportation by Bob. The MES-halves corresponding to~$E_2, E_4, \dotsc, E_r$ have already been used by Alice to ``complete'' the teleportations she assumed Bob has performed. Say Alice used this block of MESs in the~$i'$-th iteration. In the~$i$-th iteration, Bob teleports the qudit~$E_1$ using the MES-half~$E_2$, $E_3$ with~$E_4$, and so on. That is, Bob teleports qudit~$E_j$ using the MES-half~$E_{j+1}$ in increasing order of~$j$, for all odd~$j \in [r]$, as if the even numbered MESs had not been used by Alice. The effect of this teleportation is the same as if Alice and Bob had \emph{both\/} tried to simulate the local operations and communication from the original protocol in the~$i'$-th iteration (in the forward direction or to correct the joint state), \emph{except that the following also happened independently of channel error\/}: \begin{enumerate} \item the Pauli operations used by Bob to decode~$E_1, E_3, \dotsc, E_{r-1}$ were all the identity, \item the unitary operations used by Bob on the registers~$B C$ were all the identity, and \item the Pauli operations applied by Alice for decoding Bob's teleportation were unrelated to the outcome of Bob's teleportation measurements. \end{enumerate} This does not guarantee correctness of the joint state in~$A B C$, but has the advantage that quantum information in the MES-halves~$E_1, E_3, \dotsc, E_{r-1}$ that is required to restore correctness is redirected back into the registers~$A B C$. In particular, the difference in the number of MESs used by the two parties is reduced, while the errors in the joint quantum state in~$A B C$ potentially increase. The errors in the joint state are eventually corrected by reversing the incorrect unitary operations, as in the case when the teleportations are all synchronized. 
To understand the phenomenon described above, consider a simpler scenario where Bob wishes to teleport a qudit~$\ket{\xi}$ in register~$B_1$ to Alice using an MES in registers~$E_1' E_1$, after which Alice applies the unitary operation~$V$ to register~$E_1'$. If they follow the corresponding sequence of operations, the final state would be~$V\ket{\xi}$, stored in register~$E_1'$. Instead, suppose they do the following. First, Alice applies~$V$ to register~$E_1'$, \emph{then\/} Bob measures registers~$B_1 E_1$ in the generalized Bell basis and gets measurement outcome~$(j,k)$. He sends this outcome to Alice. We may verify that the state of register~$E_1'$ conditioned on the outcome is~$V({\mathrm X}^j {\mathrm Z}^k)\ket{\xi}$. Thus, the quantum information in~$\xi$ is redirected to the correct register, albeit with a Pauli error (that is known to Alice because of Bob's message). In particular, Alice may later reverse~$V$ to correctly decode the teleported state. The chain of teleportation steps described in the previous paragraph has a similar effect. \subsubsection{First representation of the quantum registers} \label{sec:large-classical-first-rep} A first representation for the content of the quantum registers~$ABC$ in~$\Pi'$ can be obtained directly and explicitly from the metadata and the Pauli data, and is denoted $\mathit{JS}1$, as in Eq.~\eqref{eqn:JS1} below, with $\mathit{JS}$ standing for ``joint state''. We emphasize that this is the state conditioned on the outcomes of the teleportation measurements as well as the transcript of classical messages received by the two parties. However, the form $\mathit{JS}1$ is essentially useless for deciding the next action that the simulation protocol~$\Pi'$ should take, but it can be simplified into a more useful representation.
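The simpler scenario above can be verified numerically for qubits ($d=2$). The sketch below uses our own labeling of the generalized Bell basis and checks agreement with $V{\mathrm X}^j{\mathrm Z}^k\ket{\xi}$ up to a global phase, for every outcome $(j,k)$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bell(j, k):
    """|phi^{j,k}> for d = 2: (1/sqrt 2) sum_t (-1)^{tk} |t+j>|t>."""
    v = np.zeros(4, dtype=complex)
    for t in range(2):
        v[((t + j) % 2) * 2 + t] = (-1) ** (t * k)
    return v / np.sqrt(2)

rng = np.random.default_rng(7)
xi = rng.normal(size=2) + 1j * rng.normal(size=2)
xi /= np.linalg.norm(xi)
# a Haar-ish random unitary V (QR of a random complex matrix)
V, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

# registers ordered (B1, E1, E1'); the MES |phi^{0,0}> is shared on (E1, E1')
state = np.kron(xi, bell(0, 0))
# Alice applies V to E1' *before* Bob's teleportation measurement
state = np.kron(np.eye(4), V) @ state

for j in range(2):
    for k in range(2):
        # partial inner product: project (B1, E1) onto the Bell outcome (j, k)
        post = bell(j, k).conj() @ state.reshape(4, 2)   # unnormalized E1' state
        want = V @ np.linalg.matrix_power(X, j) @ np.linalg.matrix_power(Z, k) @ xi
        # proportionality, i.e., equality up to a global phase
        assert np.isclose(abs(np.vdot(want, post)),
                          np.linalg.norm(post) * np.linalg.norm(want))
```

Each outcome occurs with probability $1/4$ (the unnormalized post-measurement state has norm $1/2$), so the teleported data is recoverable for every measurement outcome, as claimed.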
This latter form, denoted $\mathit{JS}2$, as in Eq.~\eqref{eqn:JS2_1} below, directly corresponds to the further actions we may take in order to evolve the simulation of the original protocol or to actively reverse previous errors. We first consider $\mathit{JS}1$ and $\mathit{JS}2$ in the case when $q_{\mathit{MA}} = q_{\mathit{MB}}$. We sketch how to obtain $\mathit{JS}1$ from $\mathit{FullMA}$, $\mathit{FullMB}$, $\mathit{FullPA}$ and $\mathit{FullPB}$ (when~$q_\mathit{MA} = q_\mathit{MB}$). Each block of $r$ MESs which have been used by both Alice and Bob corresponds to a bracketed expression~$[*j]$ for some content ``$*j$'' corresponding to the $j$-th block that we describe below. The content of the quantum registers is then the $A B C$ part of \begin{align}\label{eqn:JS1} \mathit{JS}1 = [*q_{\mathit{MA}}] \cdots [*2] [*1] \ket{\psi_{\mathrm{init}}}^{A B C E R}, \end{align} with $\ket{\psi_{\mathrm{init}}}^{ABCER}$ being the initial state of the original protocol. (To be accurate, the representation corresponds to the sequence of operations that have been applied to $\ket{\psi_{\mathrm{init}}}$, and knowledge of $\ket{\psi_{\mathrm{init}}}$ is not required to compute the representation.) It remains to describe the content $*j$ of the $j$-th bracket. It contains from right to left $\frac{r}{2}$ iterations of the following: \begin{quote} Alice's unitary operation - Alice's teleportation measurement outcome - \\ Bob's teleportation decoding - Bob's Pauli correction - Bob's unitary operation - Bob's teleportation measurement outcome - \\ Alice's teleportation decoding - Alice's Pauli correction. \end{quote} It also allows for an additional unitary operation of Alice on the far left when she is implementing a block of type~$-1$; we elaborate on this later. 
If Alice's block type is $+1$, all her unitary operations are consecutive unitary operations from the original protocol (with the index of the unitary operations depending on the number of $\pm 1$ in $\mathit{FullMA}$), while if it is $-1$, they are inverses of such unitary operations. If Alice's block type is $0$, all unitary operations are equal to the identity on registers $A C_{\mathsf A}$. Similar properties hold for Bob's unitary operations on registers $BC$. Alice's block type corresponds to the content of the $j$-th non-$\sC$ element in $\mathit{FullMA}$, and Bob's to the content of the $j$-th non-$\sC$ element in $\mathit{FullMB}$. Alice's Pauli data corresponds to the content of the $j$-th block in $\mathit{FullPA}$, and Bob's to the content of the $j$-th block in $\mathit{FullPB}$. The precise rules by which Alice and Bob determine their respective types for a block in~$\Pi'$, and which blocks of~$\Pi$ (if any) are involved, are deferred to the next section. Note that when~$q_\mathit{MA} = q_\mathit{MB}$, the first $q_\mathit{MA}$ MES blocks have been used by both parties, but not necessarily in the same iterations. Nevertheless, the remedial actions the parties have taken to recover from out-of-sync teleportation reduce the error on the joint state to transmission errors, as if all the teleportations were synchronized and the adversary had introduced these additional errors; see Section~\ref{sec:Out-of-sync teleportation}. To give a concrete example, suppose from her classical data, Alice determines that in \emph{her\/} $j$-th non-$\sC$ block of~$\Pi'$, she should actively reverse the unitary operations of block~$k$ of~$\Pi$ to correct some error in the joint state. So her~$j$-th non-$\sC$ block of~$\Pi'$ is of type~$-1$. Suppose Alice's Pauli data in the~$j$-th block of $\mathit{FullPA}$ correspond to Pauli operators~$p_{{\mathsf A},1} p_{{\mathsf A},2} \cdots p_{{\mathsf A},3r/2}$ in the order affecting the joint state.
I.e., the Pauli operators~$p_{{\mathsf A},1} \,,\, p_{{\mathsf A},4} \,,\, \ldots \,,\, p_{{\mathsf A}, 3 (r/2 -1) + 1}$ correspond to the sequence of Alice's teleportation measurement outcomes, the Pauli operators~$p_{{\mathsf A},2} \,,\, p_{{\mathsf A},5} \,,\, \ldots \,,\, p_{{\mathsf A}, 3 (r/2 -1) + 2}$ are her teleportation decoding operations and~$p_{{\mathsf A},3} \,,\, p_{{\mathsf A},6} \,,\, \ldots \,,\, p_{{\mathsf A},3r/2}$ are her Pauli corrections, respectively. Consider Bob's~$j$-th non-$\sC$ block of~$\Pi'$. Note that this may be a different block of~$\Pi'$ than Alice's $j$-th non-$\sC$ block. Suppose from \emph{his\/} classical data, Bob determines that in his~$j$-th non-$\sC$ block of~$\Pi'$, he should apply the unitary operations of block~$l$ of~$\Pi$ to evolve the joint state further. So his~$j$-th non-$\sC$ block of~$\Pi'$ is of type~$+1$. Suppose Bob's Pauli data in the~$j$-th block of $\mathit{FullPB}$ correspond to Pauli operators~$p_{{\mathsf B},1} p_{{\mathsf B},2} \cdots p_{{\mathsf B},3r/2} $, in the order affecting the joint state. I.e., the Pauli operators~$p_{{\mathsf B},1} \,,\, p_{{\mathsf B},4} \,,\, \ldots \,,\, p_{{\mathsf B}, 3 (r/2 -1) + 1}$ are Bob's decoding operations and~$p_{{\mathsf B},2} \,,\, p_{{\mathsf B},5} \,,\, \ldots \,,\, p_{{\mathsf B}, 3 (r/2 -1) + 2}$ are his Pauli corrections and~$p_{{\mathsf B},3} \,,\, p_{{\mathsf B},6} \,,\, \ldots \,,\, p_{{\mathsf B},3r/2}$ correspond to his teleportation measurement outcomes, respectively. 
Then from $\mathit{FullMA},\mathit{FullMB},\mathit{FullPA},\mathit{FullPB}$, we can compute a description of the joint state as in Eq.~\eqref{eqn:JS1}, with~$*j$ equal to \suppress{ \begin{align} \sigma_{B,en (i+1)r}U^{m_{b,i}}_{B,(i+1)r}\tilde{\sigma}_{ir+r-1}\ldots\hat{\sigma}_{ir+s}U^{m_{b,i}}_{B,ir+s}\tilde{\sigma}_{ir+s}U^{m_{a,i}}_{A,ir+s}\hat{\sigma}_{ir+s-1}\nonumber\\ \cdots\hat{\sigma}_{ir+1}U^{m_{b,i}}_{B,ir+1}\tilde{\sigma}_{ir+1}U^{m_{a,i}}_{A,ir+1}\sigma_{A,pc,ir+1}\sigma_{A,de,ir+1}, \end{align} where \[\tilde{\sigma}_{ir+s}=\sigma_{B,pc,ir+s}\sigma_{B,de,ir+s}\sigma_{A,en,ir+s},\] and \[\hat{\sigma}_{ir+s}=\sigma_{A,pc,ir+s+1}\sigma_{A,de,ir+s+1}\sigma_{B,en,ir+s}.\] } \begin{align*} & U_{kr + 1}^{-1} \\ \times & \left( p_{{\mathsf A}, 3 (r/2 -1) + 3}\;\; p_{{\mathsf A}, 3(r/2 -1) + 2} \right) \left( p_{{\mathsf B}, 3 (r/2 -1) + 3}\;\; U_{lr + r} \;\; p_{{\mathsf B}, 3(r/2 -1) + 2} \;\; p_{{\mathsf B}, 3(r/2 -1) + 1} \right) \\ & \qquad \times \left( p_{{\mathsf A}, 3 (r/2 -1) + 1} \;\; U_{kr + 3}^{-1} \right) \\ \times & \dotsb \\ \times & \left( p_{{\mathsf A}, 3 (s-1) + 3}\;\; p_{{\mathsf A}, 3(s - 1) + 2} \right) \left( p_{{\mathsf B}, 3 (s -1) + 3}\;\; U_{lr + 2s} \;\; p_{{\mathsf B}, 3(s -1) + 2} \;\; p_{{\mathsf B}, 3(s -1) + 1} \right) \\ & \qquad \times \left( p_{{\mathsf A}, 3 (s -1) + 1} \;\; U_{kr + (r - 2s + 3)}^{-1} \right) \\ \times & \dotsb \\ \times & \left( p_{{\mathsf A}, 6}\;\; p_{{\mathsf A}, 5} \right) \left( p_{{\mathsf B}, 6}\;\; U_{lr + 4} \;\; p_{{\mathsf B}, 5} \;\; p_{{\mathsf B}, 4} \right) \left( p_{{\mathsf A}, 4} \;\; U_{kr + (r - 1)}^{-1} \right) \\ \times & \left( p_{{\mathsf A}, 3}\;\; p_{{\mathsf A}, 2} \right) \left( p_{{\mathsf B}, 3}\;\; U_{lr + 2} \;\; p_{{\mathsf B}, 2} \;\; p_{{\mathsf B}, 1} \right) \left( p_{{\mathsf A}, 1} \;\; \id \right) \enspace. \end{align*} Note that Alice and Bob are not necessarily able to compute the state $\mathit{JS}1$. 
Instead, they use their best guess for the other party's metadata and Pauli data in the procedure described in this section to compute their estimates $\JSoneA$ and $\mathit{JS}1^\mathrm{B}$ of $\mathit{JS}1$, respectively. Note that Alice and Bob will not compute their estimates of $\mathit{JS}1$ unless they believe that they both know each other's metadata and Pauli data and have used the same number of MES blocks.

\subsubsection{Second representation of the quantum registers}
\label{sec:large-classical-second-rep}

To obtain $\mathit{JS}2$ from $\mathit{JS}1$, we first look inside each bracket and recursively cancel consecutive Pauli operators inside the bracket. If a bracket evaluates to the identity operator on registers $A B C$, we remove it. Once each bracket has been cleaned up in this way, we recursively try to cancel consecutive brackets whose contents are inverses of one another (assuming that no two $U_i$ of the original protocol are the same or inverses of one another). Once no further cancellation is possible, what remains is the representation $\mathit{JS}2$, which is of the following form (when $q_{\mathit{MA}} = q_{\mathit{MB}}$):
\begin{align}\label{eqn:JS2_1}
\mathit{JS}2 = [\#b] \cdots [\#1] [U_{gr} \cdots U_{(g-1)r + 2} U_{(g-1)r + 1} ] \cdots [U_r \cdots U_2 U_1] \ket{\psi_{\mathrm{init}}}^{A B C E R}.
\end{align}
Here, the first $g$ brackets starting from the right correspond to the ``good'' part of the simulation, while the remaining $b$ brackets correspond to the ``bad'' part of the simulation, the part that Alice and Bob have to actively rewind later. The integer~$g$ is determined by the left-most bracket such that its contents, together with those of the brackets to its right, equal the sequence of unitary operations~$U_1, U_2, \dotsc, U_{gr}$ from the original protocol~$\Pi$, in reverse order. The brackets to the left of these rightmost~$g$ brackets are all considered bad blocks.
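The clean-up procedure just described can be sketched with symbolic operators. This is a toy model under our own naming assumptions (the actual procedure operates on the bracket representation of the quantum state, not on strings): adjacent inverse pairs cancel, empty brackets are dropped, and adjacent mutually inverse brackets cancel.

```python
# Hypothetical sketch of the clean-up turning JS1 into JS2. Operators are
# opaque strings; "inv(x)" denotes the inverse of "x". Brackets are given as
# lists of such strings.

def inv(op):
    """Symbolic inverse: inv(inv(x)) == x."""
    return op[4:-1] if op.startswith("inv(") else "inv(" + op + ")"

def cancel(seq):
    """Recursively cancel adjacent inverse pairs (stack-based, one pass)."""
    out = []
    for op in seq:
        if out and out[-1] == inv(op):
            out.pop()          # op cancels the previous operator
        else:
            out.append(op)
    return out

def js1_to_js2(brackets):
    # Step 1: clean up the inside of each bracket; drop brackets that
    # reduce to the identity (empty lists).
    cleaned = [c for b in brackets if (c := cancel(b))]
    # Step 2: repeatedly cancel adjacent brackets whose contents are
    # inverses of one another.
    progress = True
    while progress:
        progress = False
        for i in range(len(cleaned) - 1):
            if cleaned[i] == [inv(op) for op in reversed(cleaned[i + 1])]:
                del cleaned[i:i + 2]
                progress = True
                break
    return cleaned
```

The assumption stated in the text, that no two $U_i$ are equal or inverse to each other, is what makes the symbolic equality test in step 2 faithful.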
Thus, the content of $[\#1]$ is not $[U_{(g+1)r} \cdots U_{gr + 1}]$, while the contents of $[\#2]$ to $[\#b]$ are arbitrary and have to be actively rewound before Alice and Bob can reverse the content of $[\# 1]$. Once the two parties synchronize their metadata, the number of MESs they have used and their Pauli data, they compute their estimates of $\mathit{JS}1$. Alice uses $\JSoneA$ in the above procedure to compute her estimate $\JStwoA$ of $\mathit{JS}2$. Similarly, Bob computes $\mathit{JS}2^\mathrm{B}$ from $\mathit{JS}1^\mathrm{B}$. These in turn determine their course of action in the simulation as described next. If $b > 0$, they actively reverse the incorrect unitary operators in the last bad block, while assuming the other party does the same. They start by applying the inverse of $[\#b]$, choosing appropriately whether to have a type $\pm 1$ or $0$ block, and also choosing appropriate Pauli corrections. Else, if $b=0$, they continue implementing unitary operations $U_{gr+1}$ to $U_{(g+1)r}$ of the original input protocol~$\Pi$ to evolve the simulation. Note that each player has their independent view of the joint state, and takes actions assuming that their view is correct. In this process, Alice and Bob use their view of the joint state to predict each other's next action in the simulation and extend their estimates of each other's metadata and Pauli data accordingly. We describe a few additional subtleties on how the parties access the quantum register in a given block, as represented in Figure~\ref{fig:teleportation-representation}. First, each block begins and ends with Alice holding register $C$ and being able to perform a unitary operation. In $+1$ blocks, she applies a unitary operation at the beginning and not at the end, whereas in $-1$ blocks she applies the inverse of a unitary operation at the end and not at the beginning. This is in order to allow a $-1$ block to be the inverse of a $+1$ block, and vice-versa. 
Second, whenever Alice and Bob are not synchronized in the number of MESs they have used so far, as explained in Section~\ref{sec:Out-of-sync teleportation}, the party who has used more will wait for the other to catch up by creating a new type $\sC$ block while the party who has used less will try to catch up by creating a type $0$ block, sequentially feeding the $C$ register at the output of a teleportation decoding to the input of the next teleportation measurement. Notice that due to errors in communication, it might happen that $+1$ blocks are used to correct previous erroneous $-1$ blocks and $0$ blocks are used to correct previous erroneous $0$ blocks. As illustrated in Figure~\ref{fig:teleportation-representation}, the block on the right is the inverse of the one on the left if the corresponding Pauli operators are inverses of each other. \subsubsection{Representations of quantum registers while out-of-sync} We now define the $\mathit{JS}1$ and $\mathit{JS}2$ representations of the joint state in the case when $q_{\mathit{MA}} \neq q_{\mathit{MB}}$. Note that in this case, conditioned on the classical data with the two parties, $\mathit{JS}1$ and $\mathit{JS}2$ represent a pure state. However, in addition to the $ABCER$ registers, we must also include the half-used MES registers in the representation. Let $u\defeq |q_{\mathit{MA}}-q_{\mathit{MB}}|$. For concreteness, suppose that $q_{\mathit{MA}} > q_{\mathit{MB}}$. Then the $\mathit{JS}1$ representation is of the following form: \begin{align}\label{eqn:JS1-OoS} \mathit{JS}1 = [*q_\mathit{MA}] \cdots [*q_\mathit{MB}] \cdots [*2] [*1] \ket{\psi_{\mathrm{init}}}^{A B C E R}\enspace. \end{align} The content of the first $q_\mathit{MB}$ brackets from the right, corresponding to the MES blocks which have been used by both parties are obtained as described in Subsection~\ref{sec:large-classical-first-rep}. The leftmost $u$ brackets correspond to the MES blocks which have been used only by Alice. 
We refer to these blocks as the \emph{ugly\/} blocks. These brackets contain Alice's unitary operations from the input protocol, her teleportation decoding operations and Pauli correction operations in her last~$u$ non-classical iterations of the simulation. Additionally, they contain the $u$ blocks of MES registers used only by Alice. In each of these blocks, the registers indexed by an odd number have been measured on Alice's side and the state of the MES register has collapsed to a state which is obtained from Alice's Pauli data. The representation $\mathit{JS}2$ is obtained from $\mathit{JS}1$ as follows: We denote by $[@u]\cdots [@1]$ the leftmost $u$ brackets corresponding to the ugly blocks. We use the procedure described in Subsection~\ref{sec:large-classical-second-rep} on the rightmost $q_{\mathit{MB}}$ brackets in $\mathit{JS}1$ to obtain $\mathit{JS}2$ of the following form: \begin{align} \label{eqn:JS2_OoS} \mathit{JS}2 = [@u] \cdots [@1] [\#b] \cdots [\#1] [U_{gr} \cdots U_{(g-1)r + 2} U_{(g-1)r + 1} ] \cdots [U_r \cdots U_2 U_1] \ket{\psi_{\mathrm{init}}}^{A B C E R}\enspace, \end{align} with~$g$ \emph{good\/} blocks, and~$b$ \emph{bad\/} blocks, for some non-negative integers~$g,b$. \suppress{ The representations are obtained in the manner described in Sections~\ref{sec:large-classical-first-rep} and~\ref{sec:large-classical-second-rep}, except that we account for the additional blocks of unitary operations along with blocks of MES halves $F_1, F_2, \ldots, F_u$ which arise from MES blocks that have been used by only one party. We refer to these blocks of unitary operators as the \emph{ugly\/} blocks. More formally, we denote the final (leftmost)~$u$ blocks in the $\mathit{JS}1$ representation as~$[@ u] \cdots [@ 1]$. These are the blocks of unitary operations performed in the last~$u$ non-classical iterations for Alice in the simulation and they contain untouched MES registers $F_1, F_2, \ldots, F_u$ on Bob's side. 
Let $\mathit{JS}1'$ denote the state obtained from $\mathit{JS}1$ with these~$u$ blocks removed. Let $\mathit{JS}2'$ denote the state obtained from $\mathit{JS}1'$ by following the procedure described in Section~\ref{sec:large-classical-second-rep}. It is of the form \begin{align}\label{eqn:JS2goodbadform1} \mathit{JS}2' = [\#b] \cdots [\#1] [U_{gr} \cdots U_{(g-1)r+ 1 }] \cdots [U_r \cdots U_1] \ket{\psi_{init}}^{ABCER} \enspace, \end{align} with~$g$ ``good'' blocks, and~$b$ ``bad'' blocks, for some non-negative integers~$g,b$. Then, $\mathit{JS}2$ is defined as $\mathit{JS}2\defeq [@ u] \cdots [@ 1] \mathit{JS}2'$. } Thus, in the rest of this section, we assume that $\mathit{JS}2$ is of the form of Equation~\eqref{eqn:JS2_OoS} at the end of each iteration for some non-negative integers~$g,b,u$ which are given by \begin{align} &g \defeq \text{the number of good unitary blocks in $\mathit{JS}2$,}\label{eqn:g}\\ &b \defeq \text{the number of bad unitary blocks in $\mathit{JS}2$, and}\label{eqn:b}\\ &u\defeq |q_{\mathit{MA}}-q_{\mathit{MB}}|.\label{eqn:u} \end{align} We point out that Alice and Bob compute their estimates of $\mathit{JS}1$ and $\mathit{JS}2$ only if, based on their view of the simulation so far, they believe that they have used the same number of MES blocks. Therefore, whenever computed, $\JSoneA,\mathit{JS}1^\mathrm{B}$ and $\JStwoA,\mathit{JS}2^\mathrm{B}$ are always of the forms described in Subsections~\ref{sec:large-classical-first-rep} and~\ref{sec:large-classical-second-rep}, respectively. Notice that if there are no transmission errors or hash collisions and Alice and Bob do as described earlier in this section after realizing that $q_{\mathit{MA}} > q_{\mathit{MB}}$, then the ugly blocks $[@ u] \cdots [@ 2]$ remain as they were while block $[@ 1]$ becomes a standard block of unitary operations acting on registers $ABC$ only, quite probably being a new bad block, call it $[\#b+1]$. 
More generally, if there is either a transmission error or a hash collision, Bob might not realize that $q_{\mathit{MA}} > q_{\mathit{MB}}$. Then he might either have a $\sC$ type of iteration, in which case block $[@ 1]$ also remains as is, or else a $+1$, $-1$ or $0$ (non-$\sC$) type of iteration, in which case he may apply non-identity Pauli operations and unitary operations on registers $BC$, which still results in block $[@ 1]$ becoming a standard block of unitary operations acting on registers $ABC$ only. Similarly, if there is either a transmission error or a hash collision, Alice might not realize that $q_{\mathit{MA}} > q_{\mathit{MB}}$. Then she might have a non-$\sC$ type of iteration, in which case a new ugly block, call it $[@ u+1]$, would be added to the left of $[@ u]$.

\subsubsection{Summary of main steps}

The different steps that Alice and Bob follow in the simulation protocol~$\Pi'$ are summarized in Algorithm~\ref{summary-TP-based}. Recall that each party runs the simulation algorithm based on their view of the simulation so far.
\suppress{In one iteration of the simulation, only one step involving communication is conducted (and this constitutes one block of operations).}
\RestyleAlgo{boxruled}
\begin{algorithm}\label{summary-TP-based}
Agree on the history of the simulation contained in the metadata, i.e., ensure $\mathit{FullMA} = \widetilde{\MA}$ and $\mathit{FullMB} = \widetilde{\MB}$. This involves Algorithm \ref{algo:rewindMD}---\textbf{\textsf{rewindMD}}, and Algorithm \ref{algo:extendMD}---\textbf{\textsf{extendMD}}.\\
Synchronize the number of MESs used, in particular, ensure $q_{\mathit{MA}} = q_{\widetilde{\MB}}$ and $q_{\mathit{MB}} = q_{\widetilde{\MA}}$. This involves Algorithm \ref{algo:syncMES}---\textbf{\textsf{syncMES}}. \\
Agree on Pauli data for all the teleportation steps and additional Pauli corrections for addressing channel errors, i.e., ensure $\mathit{FullPA} = \widetilde{\PA}$ and $\mathit{FullPB} = \widetilde{\PB}$.
This is done via Algorithm \ref{algo:rewindPD}---\textbf{\textsf{rewindPD}} and Algorithm \ref{algo:extendPD}---\textbf{\textsf{extendPD}}. \\
Compute the best guess for $\mathit{JS}1$ and $\mathit{JS}2$. If there are any ``bad'' blocks in the guess for $\mathit{JS}2$, reverse the last bad block of unitary operations. That is, implement quantum rewinding so that~$b = 0$ in $\mathit{JS}2$. This is done in Algorithm \ref{algo:simulate}---\textbf{\textsf{simulate}}. \\
If no ``bad'' blocks remain, implement the next block of the original protocol. This results in an increase in $g$ in $\mathit{JS}2$, and is also done through Algorithm \ref{algo:simulate}---\textbf{\textsf{simulate}}. \\
\caption{Main steps in one iteration of the simulation for the large alphabet teleportation-based model}
\label{algo:Main steps-large-alphabet-cleve-burhman}
\end{algorithm}
\RestyleAlgo{ruled}
The algorithms mentioned in each step are presented in the next section. Figure~\ref{fig:flow-telep} summarizes the main steps in flowchart form. In every iteration exactly one of the steps listed in Algorithm~\ref{summary-TP-based} is conducted. Alice and Bob move from one step to the next only if the goal of the step has been achieved through the \emph{previous\/} iterations. The simulation protocol is designed so that unless there is a transmission error or a hash collision in comparing a given type of data, Alice and Bob will go down these steps in tandem, while never returning to a previous step. For instance, once Alice and Bob achieve the goal of step $1$, as long as no transmission error or hash collision occurs, their metadata will remain synchronized while they are conducting any of the next steps. This is in fact a crucial property which we utilize in the analysis of the algorithm. In particular, to ensure this property, Alice and Bob need to synchronize the number of MESs they have used \emph{before\/} synchronizing their Pauli data.
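The step-by-step priority above — metadata first, then the MES count, then Pauli data, then simulation — can be sketched as a single dispatch function. The boolean flags below are assumptions standing in for the hash and length comparisons of the actual pseudo-code; they are not part of the protocol's data structure.

```python
# Minimal sketch (assumed flag names) of the priority order in which one
# party chooses its single action for an iteration of the simulation.

def pick_step(md_agree, md_full, mes_agree, pd_agree, pd_full, bad_blocks):
    if not md_agree:      # metadata mismatch detected: rewind it
        return "rewindMD"
    if not md_full:       # metadata agrees but is incomplete: extend it
        return "extendMD"
    if not mes_agree:     # parties used different numbers of MES blocks
        return "syncMES"
    if not pd_agree:      # Pauli data mismatch detected: rewind it
        return "rewindPD"
    if not pd_full:       # Pauli data agrees but is incomplete: extend it
        return "extendPD"
    if bad_blocks > 0:    # classical data synchronized: undo a bad block
        return "simulate (reverse last bad block)"
    return "simulate (next block of the original protocol)"
```

The ordering encodes the property stressed above: a party only reaches the Pauli-data steps once it believes the metadata and the MES counts are synchronized, and only reaches \textsf{simulate} once all classical data agree.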
\begin{figure}[!t]
\centering
\includegraphics[width=475pt]{Flowchart-Sec3.jpg}
\caption{Flowchart of the teleportation-based scheme for high rate noisy interactive quantum communication. Most of the communication is spent actually trying to simulate the protocol, in the \textsf{simulate} subroutine.}
\label{fig:flow-telep}
\end{figure}
\suppress{Notice that unless there is a transmission error or a hash collision in comparing a given type of data (as in Ref.~\cite{Haeupler:2014}), Alice and Bob cycle through these steps in tandem.}

\subsection{Algorithm}\label{subsec:algforlargealphabet}

In this section, we present our simulation protocol $\Pi'$ in the teleportation-based model when the communication alphabet is polynomial-size. We first introduce the data structure used in our algorithm in this model, which summarizes the definition of the variables appearing in the pseudocodes.

\subsubsection{Data structure}\label{sec:datastructurelargealphabetcleveburhman}

\begin{itemize}
\item \textbf{Metadata:} In every iteration $\mathit{NewMetaA} \in {\{\pm 1,0,\sC\}}$ corresponds to Alice's block type, which determines how the simulation of the input protocol proceeds locally on Alice's side. $\mathit{NewMetaA}=\sC$ corresponds to a classical iteration, in which Alice does not access the quantum registers. $\mathit{NewMetaA} \in {\{\pm 1,0\}}$ determines the exponent of the unitary operators from the input protocol $\Pi$ applied by Alice in the current iteration of the simulation. Alice records her metadata in $\mathit{FullMA}$, which is concatenated with $\mathit{NewMetaA}$ in every iteration and has length $i$ after $i$ iterations. Her best guess of Bob's block type in the current iteration is denoted by $\widetilde{\NewMetaB}$. Alice maintains a guess for Bob's metadata in $\widetilde{\MB}$, which gets modified or corrected as she gains more information through interaction with Bob.
Note that $\widetilde{\MB}$ is not necessarily full-length in every iteration and its length may decrease. $\ell_{\MBtilde}$ denotes the length of $\widetilde{\MB}$. Bob's local data, $\mathit{NewMetaB}$, $\mathit{FullMB}$, $\widetilde{\NewMetaA}$, $\widetilde{\MA}$ and $\ell_{\MAtilde}$, are defined similarly. Alice maintains a guess~$\ell_{\MA}$ for the length of~$\widetilde{\MA}$, which is held by Bob. We define~$\mathit{MA}$ to be the prefix of $\mathit{FullMA}$ of length~$\ell_{\MA}$, i.e.,~$\mathit{MA} \defeq \mathit{FullMA}\Br{1:\ell_{\MA}}$. When $\mathit{MA}$ appears in any of the algorithms in this section, it is implicitly computed by Alice from $\mathit{FullMA}$ and $\ell_{\MA}$. The number of MES blocks used by Alice for teleportation is denoted by $q_{\mathit{MA}}$. We use $q_{\widetilde{\MB}}$ to denote Alice's guess of the number of MES blocks used by Bob. Note that $q_{\mathit{MA}}$ and $q_{\widetilde{\MB}}$ are the number of $0$, $1$ and $-1$ symbols in $\mathit{MA}$ and $\widetilde{\MB}$, respectively. Bob's $\mathit{MB}$, $\ell_{\MB}$, $q_{\mathit{MB}}$ and $q_{\widetilde{\MA}}$ are defined similarly.

\item \textbf{Pauli data:} In every iteration $\mathit{NewPauliA} \in {\br{\Sigma^{r}}^3}$ consists of three parts: the first part corresponds to the outcomes of Alice's teleportation measurements in the current iteration; the second part corresponds to the received transmissions, which determine the teleportation decoding operation; and the last part corresponds to Pauli corrections. The Pauli data are recorded locally by Alice in $\mathit{FullPA}$. Starting from the empty string, $\mathit{FullPA}$ is concatenated with $\mathit{NewPauliA}$ whenever Alice implements a non-$\sC$ iteration. Alice's best guess for Bob's $\mathit{NewPauliB}$ in each iteration is denoted by $\widetilde{\NewPauliB}$. She maintains a string $\widetilde{\PB}$ as an estimate of Bob's Pauli data. The length of $\widetilde{\PB}$ is denoted by $\ell_{\PBtilde}$.
Alice also maintains $\ell_{\PA}$, her estimate for the length of $\widetilde{\PA}$, which is held by Bob. $\mathit{PA}$ denotes the prefix of $\mathit{FullPA}$ of length $\ell_{\PA}$, i.e., $\mathit{PA}\defeq \mathit{FullPA}\Br{1:\ell_{\PA}}$. When $\mathit{PA}$ appears in any of the algorithms in this section, it is implicitly computed by Alice from $\mathit{FullPA}$ and $\ell_{\PA}$. Bob's local Pauli data $\mathit{NewPauliB}\;,\;\mathit{FullPB}\;,\;\widetilde{\NewPauliA}\;,\;\widetilde{\PA}\;,\;\ell_{\PAtilde}\;,\;\ell_{\PB}\;,\;\mathit{PB}$ are defined similarly. A critical difference between the metadata and the Pauli data is that the metadata assigns one symbol for each block while the Pauli data assigns $3r$ symbols for each block.

\item We use $H$ with the corresponding data as subscript to denote the hashed data, e.g., $H_{\MA}$ denotes the hash value of the string $\mathit{MA}$.

\item Data marked with a prime~$'$ denote the received data after transmission over the noisy channel, e.g., $\ell_{\MB}'$ denotes what Alice receives when Bob sends $\ell_{\MB}$.

\item The variable $\mathit{Itertype} \in \{\mathrm{MD},\mathrm{PD},\mathrm{MES},\mathrm{SIM}\}$ determines the iteration type for the party: $\mathrm{MD}$ and $\mathrm{PD}$ correspond to iterations where metadata and Pauli data are processed or modified, $\mathrm{MES}$ is used for iterations where the party is trying to catch up on the number of used MESs, and $\mathrm{SIM}$ corresponds to iterations where the party proceeds with evolving the simulation of $\Pi$ by applying a block of unitary operators from $\Pi$, or the inverse of such a block of unitary operators in order to fix an earlier error.

\item The variable $\RewindExtend \in \{\sR,\sE\}$ determines, in classical iterations, whether the local metadata or Pauli data string is extended or rewound in the current iteration.
\end{itemize}

\subsubsection{Pseudo-codes}

This section contains the pseudo-codes for the main algorithm and the subroutines that each party runs locally in the simulation protocol. The subroutines are the following: \textsf{Preprocess}, which determines what will happen locally to the classical and quantum data in the current iteration of the simulation; \textsf{rewindMD} and \textsf{extendMD}, which process the local metadata; \textsf{syncMES}, which handles the case when the two parties do not agree on the number of MES blocks they have used; \textsf{rewindPD} and \textsf{extendPD}, which process the local Pauli data; and finally, \textsf{simulate}, in which the player moves on with the simulation of the input protocol according to the information from subroutine \textsf{Computejointstate} of \textsf{Preprocess}. When the party believes that the classical data are fully synchronized, he or she uses the subroutine \textsf{Computejointstate} to extract the necessary information to decide how to evolve the joint quantum state next. This information includes $\mathit{JS}1$ and $\mathit{JS}2$ defined in~\eqref{eqn:JS1} and~\eqref{eqn:JS2_1}, respectively; $\mathit{NewMetaA}$; $\RewindExtend$; $\widetilde{\NewMetaB}$; $\mathit{Block}$, which represents the index of the block of unitary operations from the input protocol $\Pi$ that the party will perform; $\mathit{P}_\mathrm{Corr}$, representing Alice's Pauli corrections; and $\widetilde{\mathit{P}_\mathrm{Corr}}$, representing Alice's guess of Bob's Pauli corrections.
\suppress{ To get $\mathit{JS}2$ from $\mathit{JS}1$, Alice and Bob follow the procedure explained in Section~\ref{sec:general-discription-large-alphabet-cleveburhman}.
The computed joint state will be of the following form \begin{equation}\label{eqn:JS2} \mathit{JS}2=\prod_i\br{\sigma_{\br{2r+1}i}U^{b_i}_{B,\br{i+1}r}\sigma_{\br{2r+1}i-1}U^{a_i}_{A,\br{i+1}r}\sigma_{\br{2r+1}i-1}\ldots\sigma_{2ri+3}U^{b_i}_{B,ir+1}\sigma_{2ri+2}U^{a_i}_{A,ir+1}\sigma_{2ri+1}}\ket{\Psi_{\mathrm{init}}}. \end{equation} } For the subroutines used in the simulation protocol, we list all the global variables accessed by the subroutine as the \textbf{Input} at the beginning of the subroutine. Whenever applicable, the relation between the variables when the subroutine is called is stated as the \textbf{Promise} and the global variables which are modified by the subroutine are listed as the \textbf{Output}. \begin{remark} The amount of communication in each iteration of Algorithm \ref{algo:Mainalgorithm} is independent of the iteration type. \end{remark} \begin{remark} Since in every iteration of Algorithm \ref{algo:Mainalgorithm} the lengths of $\mathit{FullMA}$ and $\mathit{FullMB}$ increase by $1$, in order to be able to catch up on the metadata, Alice and Bob need to communicate two symbols at a time when extending the metadata. This is done by encoding the two symbols into strings of length $r$ of the channel alphabet $\Sigma$ using the mapping encodeMD in Algorithm \ref{algo:Preprocess} and decoding it using the mapping decodeMD in Algorithm \ref{algo:extendMD}. 
\end{remark} \begin{algorithm} \Input{$n$ round protocol $\Pi$ in teleportation-based model over polynomial-size alphabet $\Sigma$} \BlankLine Initialize \\ \nonl $\qquad r \leftarrow \Theta\br{1/\sqrt{\epsilon}}$ \; \nonl $\qquad R_\mathrm{total} \leftarrow \lceil \frac{n}{2r}+\Theta(n\epsilon)\rceil$ \; \nonl $\qquad q_{\mathit{MA}},\ell_{\MA},\ell_{\MBtilde},\ell_{\PA},\ell_{\PBtilde}\leftarrow 0$ \; \nonl $\qquad \mathit{MA},\widetilde{\MB},\mathit{PA},\widetilde{\PB}\leftarrow \emptyset$ \; \BlankLine $h\leftarrow$ hash function of Lemma~\ref{lem:hashes} with $p=1/n^5$ and $o=s=\Theta(\log n)$ \; Measure $\Theta\br{R_\mathrm{total}}$ MESs in the computational basis and record the binary representation of the outcomes in $S_1,\ldots,S_{4R_\mathrm{total}}$ \; \tcp*[f]{\textbf{$4R_\mathrm{total}$ seeds of length $s$ for the hash function $h$}}\\ \SetKwProg{ForLoop}{For}{}{} \SetAlgoLined \ForLoop{$i = 1 \to R_\mathrm{total}$} { \SetAlgoVlined \Comment*[f] {\textbf{Preprocessing phase}}\\ $H_{\MA} \leftarrow h_{S_{4i-3}}\br{\mathit{MA}}$ \; $H_{\MBtilde} \leftarrow h_{S_{4i-2}}\br{\widetilde{\MB}}$ \; $H_{\PA} \leftarrow h_{S_{4i-1}}\br{\mathit{PA}}$ \; $H_{\PBtilde} \leftarrow h_{S_{4i}}\br{\widetilde{\PB}}$ \; } \caption{\textbf{\textsf{Main algorithm }}(Alice's side)}\label{algo:Mainalgorithm} \end{algorithm} \setcounter{algocf}{2} \begin{algorithm} \setcounter{AlgoLine}{8} \SetKwBlock{Begin}{}{} \Begin{ Send $$\br{H_{\MA},\ell_{\MA},H_{\MBtilde},\ell_{\MBtilde},H_{\PA},\ell_{\PA},H_{\PBtilde},\ell_{\PBtilde}};$$\\ Receive $$\br{H_{\MAtilde}',\ell_{\MAtilde}',H_{\MB}',\ell_{\MB}',H_{\PAtilde}',\ell_{\PAtilde}',H_{\PB}',\ell_{\PB}'};$$\\ \BlankLine \textbf{\textsf{Preprocess}}\; \If {$\mathit{Itertype} \neq \mathrm{SIM}$} { Send $\mathit{msg}$\; Receive $\mathit{msg}'$\; } \tcp*[f]{\textbf{messages are communicated alternately}}\\ \BlankLine \Comment*[f] {\textbf{Case i.A}}\\ \If {$\mathit{Itertype} = \mathrm{MD} \;\mathrm{and}\; \RewindExtend = \sR$} { 
\textbf{\textsf{rewindMD}}\;
}
\Comment*[f] {\textbf{Case i.B}}\\
\ElseIf {$\mathit{Itertype} = \mathrm{MD} \;\mathrm{and}\; \RewindExtend = \sE$}
{
\textbf{\textsf{extendMD}}\;
}
\Comment*[f] {\textbf{Case ii.A}}\\
\ElseIf{$\mathit{Itertype} = \mathrm{MES} \;\mathrm{and}\; \mathit{NewMetaA}=\sC$}
{
return\;
}
\Comment*[f] {\textbf{Case ii.B}}\\
\ElseIf{$\mathit{Itertype} = \mathrm{MES} \;\mathrm{and}\; \mathit{NewMetaA}=\mathsf{0}$}
{
\textbf{\textsf{syncMES}}\;
}
\Comment*[f] {\textbf{Case iii.A}}\\
\ElseIf {$\mathit{Itertype} = \mathrm{PD} \;\mathrm{and}\; \RewindExtend = \sR$}
{
\textbf{\textsf{rewindPD}}\;
}
\Comment*[f] {\textbf{Case iii.B}}\\
\ElseIf{$\mathit{Itertype} = \mathrm{PD} \;\mathrm{and}\; \RewindExtend = \sE$}
{
\textbf{\textsf{extendPD}}\;
}
\tcp*[f] {\textbf{Classical data are synchronized}}\\
\Comment*[f] {\textbf{Case iv}}\\
\Else{\textbf{\textsf{simulate}}\;}
}
\Return{\textup{\textbf{\textsf{Main algorithm}}}}\;
\caption{\textbf{\textsf{Main algorithm }}(Alice's side, cont.
from previous page)} \end{algorithm} \begin{algorithm} \Input{ $$\br{ \begin{array}{c} H_{\MA},\ell_{\MA},H_{\MBtilde},\ell_{\MBtilde},H_{\PA},\ell_{\PA},H_{\PBtilde},\ell_{\PBtilde} \\ H_{\MAtilde}',\ell_{\MAtilde}',H_{\MB}',\ell_{\MB}',H_{\PAtilde}',\ell_{\PAtilde}',H_{\PB}',\ell_{\PB}' \\ \mathit{FullMA},\widetilde{\MB},\mathit{FullPA},\widetilde{\PB},q_{\mathit{MA}} \end{array} }$$ } \Output{ $\br{\mathit{Itertype},\RewindExtend,\mathit{NewMetaA},\mathit{FullMA},\ell_{\MA}, \widetilde{\NewMetaB},\ell_{\MBtilde},\mathit{msg}}$ } \BlankLine \If { $\br{H_{\MA},H_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}'} \;\mathrm{and}\; \ell_{\MA}=\ell_{\MAtilde}'=\ell_{\MBtilde}=\ell_{\MB}'=i-1$ } { Compute $q_{\widetilde{\MB}}$\; } \Comment*[f] {\textbf{Processing metadata}}\\ \Comment*[f] {\textbf{Case i.A}}\\ \If { $\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}\neq \br{H_{\MAtilde}',H'_{\mathrm{MB}},\ell_{\MAtilde}',\ell_{\MB}'}$ } { $\mathit{Itertype} \leftarrow \mathrm{MD}$\; $\RewindExtend \leftarrow \sR$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \left(\mathit{FullMA},\mathit{NewMetaA}\right)$\; $\mathit{msg} \leftarrow \text{dummy message of length } r$\; } \Comment*[f] {\textbf{Case i.B}}\\ \ElseIf { $\br{\ell_{\MA} < i-1} \;\mathrm{or}\; \br{\ell_{\MBtilde} < i-1}$ } { $\mathit{Itertype} \leftarrow \mathrm{MD}$\; $\RewindExtend \leftarrow \sE$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \left(\mathit{FullMA},\mathit{NewMetaA}\right)$\; \If {$\ell_{\MA} < i-1$} { $\mathit{msg} \leftarrow \mathrm{encodeMD}\br{\mathit{FullMA}\Br{\ell_{\MA}+1,\ell_{\MA}+2}}\!;$ \tcp*[f]{\textbf{Encode MD in $\Sigma^r$}}\\ } \Else { $\mathit{msg} \leftarrow \text{dummy message of length } r$\; } } \Comment*[f] {\textbf{Comparing number of used MES blocks}}\\ \Comment*[f] {\textbf{Case ii.A}}\\ \ElseIf {$q_{\mathit{MA}} > q_{\widetilde{\MB}}$} { $\mathit{Itertype} \leftarrow \mathrm{MES}$\; $\mathit{NewMetaA} \leftarrow \sC$\; 
$\mathit{FullMA} \leftarrow \left(\mathit{FullMA},\mathit{NewMetaA}\right)$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow 0$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length } r$\; } \caption{\textbf{\textsf{Preprocess }} (Alice's side)} \label{algo:Preprocess} \end{algorithm} \setcounter{algocf}{3} \begin{algorithm} \setcounter{AlgoLine}{25} \Comment*[f] {\textbf{Case ii.B}}\\ \ElseIf {$q_{\mathit{MA}} < q_{\widetilde{\MB}}$} { $\mathit{Itertype} \leftarrow \mathrm{MES}$\; $\mathit{NewMetaA} \leftarrow \mathsf{0}$\; $\mathit{FullMA} \leftarrow \left(\mathit{FullMA},\mathit{NewMetaA}\right)$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sC$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length }r$\; } \Comment*[f] {\textbf{Processing Pauli data}}\\ \Comment*[f] {\textbf{Case iii.A}}\\ \ElseIf {$\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}\neq \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$ } { $\mathit{Itertype} \leftarrow \mathrm{PD}$\; $\RewindExtend \leftarrow \sR$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \left(\mathit{FullMA},\mathit{NewMetaA}\right)$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sC$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length }r$\; } \Comment*[f] {\textbf{Case iii.B}}\\ \ElseIf {$\br{\ell_{\PA} < 3q_{\mathit{MA}} \cdot r} \;\mathrm{or}\; \br{\ell_{\PBtilde} < 3q_{\widetilde{\MB}} \cdot r}$} { $\mathit{Itertype} \leftarrow \mathrm{PD}$\; $\RewindExtend \leftarrow \sE$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} 
\leftarrow \left(\mathit{FullMA},\mathit{NewMetaA}\right)$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sC$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; \If {$\ell_{\PA} < 3q_{\mathit{MA}} \cdot r$} { $\mathit{msg} \leftarrow {\mathit{FullPA}}\Br{\ell_{\PA}+1,\ell_{\PA}+r}$ } } \caption{\textbf{\textsf{Preprocess}} (Alice's side, cont. from previous page)} \end{algorithm} \setcounter{algocf}{3} \begin{algorithm} \setcounter{AlgoLine}{55} \Comment*[f] {\textbf{Processing joint quantum state}}\\ \Comment*[f] {\textbf{Case iv}}\\ \Else { $\mathit{Itertype} \leftarrow \mathrm{SIM}$\; \textsf{\textbf{computejointstate}}\; $\mathit{FullMA}=\left(\mathit{FullMA},\mathit{NewMetaA}\right)$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; } \Return{\textup{\textbf{\textsf{Preprocess}}}}\; \caption{\textbf{\textsf{Preprocess}} (Alice's side, cont. 
from previous page)} \end{algorithm} \begin{algorithm} \Input{$\br{H_{\MA},\ell_{\MA},H_{\MBtilde},\ell_{\MBtilde},H_{\MAtilde}',\ell_{\MAtilde}',H_{\MB}',\ell_{\MB}'}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}\neq \br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}',\ell_{\MB}'}$.} \Output{$\br{\ell_{\MA},\ell_{\MBtilde}}$} \If {$\ell_{\MA} \neq \ell_{\MAtilde}' \;\mathrm{or}\; \ell_{\MBtilde} \neq \ell_{\MB}'$} { \If {$\ell_{\MA} > \ell_{\MAtilde}'$} {$\ell_{\MA} \leftarrow \ell_{\MA}-1$\;} \If {$\ell_{\MBtilde} > \ell_{\MB}'$} {$\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}-1$\;} } \Else { \If {$H_{\MA} \neq H_{\MAtilde}'$} {$\ell_{\MA}\leftarrow\ell_{\MA}-1$\;} \If {$H_{\MBtilde} \neq H_{\MB}'$} {$\ell_{\MBtilde}\leftarrow\ell_{\MBtilde}-1$\;} } \Return{\textup{\textbf{\textsf{rewindMD}}}}\; \caption{\textbf{\textsf{rewindMD}} (Alice's side)} \label{algo:rewindMD} \end{algorithm} \begin{algorithm} \Input{$\br{\ell_{\MA},\ell_{\MBtilde},\widetilde{\MB},\mathit{msg}',i}$} \Promise{ $\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}',\ell_{\MB}'}$, $\ell_{\MA} < i-1 \quad \mathrm{or} \quad \ell_{\MBtilde} < i-1$. 
} \Output{$\br{\ell_{\MA},\widetilde{\MB},\ell_{\MBtilde}}$} \If {$\ell_{\MA} < i-1$} { $\ell_{\MA} \leftarrow \ell_{\MA}+2$\; } \ElseIf {$\ell_{\MA} = i-1$} { $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; } \If {$\ell_{\MBtilde} < i-1$} { $\widetilde{\MB}\Br{\ell_{\MBtilde}+1,\ell_{\MBtilde}+2} \leftarrow \mathrm{decodeMD}\br{\mathit{msg}'}\!;$ \tcp*[f]{\textbf{decode MD from $\Sigma^r$}}\\ $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+2$\; } \ElseIf {$\ell_{\MBtilde}=i-1$} { $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\sC}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; } \Return{\textup{\textbf{\textsf{extendMD}}}}\; \caption{\textbf{\textsf{extendMD}} (Alice's side) } \label{algo:extendMD} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{FullPA},q_{\mathit{MA}}}$} \Promise{ $\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1}$, $\ell_{\MA}=\ell_{\MBtilde}=i \;,\; q_{\mathit{MA}} < q_{\widetilde{\MB}}$.} \Output{$q_{\mathit{MA}},\mathit{NewPauliA},\mathit{FullPA}$} Recall that~$A' B' C'$ are the registers that are used to generate the joint quantum state of the protocol being simulated, and~$C'$ is the communication register\; Let~$E_1 E_2 \dotsb E_r$ be the~$r$ registers with Alice containing halves of the block of~$r$ MESs with indices in the interval~$(q_{\text{MA}} \cdot r, \: (q_{\text{MA}} + 1)\cdot r]$ \; Teleport~$C'$ using~$E_1$; then teleport~$E_2$ using~$E_3$, $E_4$ using~$E_5$, and so on (i.e., teleport~$E_j$ using $E_{j+1}$ for even~$j \in [r-2]$), and then store~$E_r$ in register~$C'$ \; \tcp*[f]{\textbf{See Section~\ref{sec:Out-of-sync teleportation} for the rationale, and Bob's analogue of this step}} \\ Store the teleportation measurement outcomes in $m\in{\Sigma}^r$\; $\mathit{NewPauliA} \leftarrow \br{m,0^r,0^r}$\; $\mathit{FullPA} \leftarrow \br{\mathit{FullPA},\mathit{NewPauliA}}$\; $q_{\mathit{MA}} \leftarrow q_{\mathit{MA}}+1$\; \Return{\textup{\textbf{\textsf{syncMES}}}}\; 
\caption{\textbf{\textsf{syncMES}} (Alice's side)} \label{algo:syncMES} \end{algorithm} \begin{algorithm} \Input{$\br{H_{\PA},\ell_{\PA},H_{\PBtilde},\ell_{\PBtilde},H_{\PAtilde}',\ell_{\PAtilde}',H_{\PB}',\ell_{\PB}'}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1}$ , $\ell_{\MA}=\ell_{\MBtilde}=i \;,\; q_{\mathit{MA}}=q_{\widetilde{\MB}}$ , $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}\neq \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$.} \Output{$\br{\ell_{\PA},\ell_{\PBtilde}}$} \If {$\ell_{\PA} \neq \ell_{\PAtilde}' \;\mathrm{or}\; \ell_{\PBtilde} \neq \ell_{\PB}'$} { \If {$\ell_{\PA} > \ell_{\PAtilde}'$} {$\ell_{\PA} \leftarrow \ell_{\PA}-r$\;} \If {$\ell_{\PBtilde} > \ell_{\PB}'$} {$\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}-r$\;} } \Else { \If {$H_{\PA} \neq H_{\PAtilde}'$} {$\ell_{\PA} \leftarrow \ell_{\PA}-r$\;} \If {$H_{\PBtilde} \neq H_{\PB}'$} {$\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}-r$\;} } \Return{\textup{\textbf{\textsf{rewindPD}}}}\; \caption{\textbf{\textsf{rewindPD}} (Alice's side)} \label{algo:rewindPD} \end{algorithm} \begin{algorithm} \Input{$\br{\ell_{\PA},\ell_{\PBtilde},\widetilde{\PB},q_{\mathit{MA}},q_{\widetilde{\MB}},\mathit{msg}'}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1}$ , $\ell_{\MA}=\ell_{\MBtilde}=i \;,\; q_{\mathit{MA}}=q_{\widetilde{\MB}}$ , $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}= \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$ , $\ell_{\PA} < 3q_{\mathit{MA}} \cdot r \quad \mathrm{or} \quad \ell_{\PBtilde} < 3q_{\widetilde{\MB}} \cdot r$.} \Output{$\br{\ell_{\PA},\widetilde{\PB},\ell_{\PBtilde}}$} \If {$\ell_{\PA} < 3q_{\mathit{MA}} \cdot r$} {$\ell_{\PA} \leftarrow \ell_{\PA}+r$\;} \If {$\ell_{\PBtilde} < 3q_{\widetilde{\MB}} \cdot r$} { $\widetilde{\PB}\Br{\ell_{\PBtilde}+1:\ell_{\PBtilde}+r} \leftarrow \mathit{msg}'$\; 
$\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}+r$\; } \Return{\textup{\textbf{\textsf{extendPD}}}}\; \caption{\textbf{\textsf{extendPD}} (Alice's side) } \label{algo:extendPD} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{FullMA},\widetilde{\MB},\mathit{FullPA},\widetilde{\PB}}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}',\ell_{\MB}'}$, $\ell_{\MA}=\ell_{\MBtilde}=i-1 \;,\; q_{\mathit{MA}}=q_{\widetilde{\MB}}$, $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}= \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$, $\ell_{\PA}= \ell_{\PAtilde}'=3q_{\mathit{MA}} \cdot r\;,\; \ell_{\PBtilde} =\ell_{\PB}'=3q_{\widetilde{\MB}} \cdot r$.} \Output{$\br{\JSoneA, \JStwoA,\mathit{NewMetaA},\widetilde{\NewMetaB}, \mathit{Block},\RewindExtend,\mathit{P}_\mathrm{Corr},\widetilde{\mathit{P}_\mathrm{Corr}}}$} Compute $\JSoneA$\; Compute $\JStwoA$\; Compute $\mathit{NewMetaA}$\; Compute $\RewindExtend$\; Compute $\widetilde{\NewMetaB}$\; Compute $\mathit{Block}$\; Compute $\mathit{P}_\mathrm{Corr}$\; Compute $\widetilde{\mathit{P}_\mathrm{Corr}}$\; \tcp*[f]{\textbf{Refer to Sections~\ref{sec:large-classical-first-rep},~\ref{sec:large-classical-second-rep} to see how these variables are computed}} \Return{\textup{\textbf{\textsf{Computejointstate}}}}\; \caption{\textbf{\textsf{Computejointstate}} (Alice's side)} \label{algo:Computejointstate} \end{algorithm} \begin{algorithm} \Input{$\br{q_{\mathit{MA}},\mathit{FullPA},\ell_{\PA},\widetilde{\PB},\ell_{\PBtilde},\RewindExtend,\mathit{NewMetaA},\mathit{Block},\mathit{P}_\mathrm{Corr},\widetilde{\mathit{P}_\mathrm{Corr}}}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},q_{\mathit{MA}}}=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1,q_{\widetilde{\MB}}}$, $\ell_{\MA}=\ell_{\MBtilde}=i$, $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}= \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$, 
$\ell_{\PA}=\ell_{\PBtilde}=3q_{\mathit{MA}} \cdot r$} \Output{$\br{\mathit{FullPA},\ell_{\PA},\widetilde{\PB},\ell_{\PBtilde}}$} Continue the simulation of the input protocol according to $\mathit{Block}$, $\mathit{NewMetaA}$ and $\mathit{P}_\mathrm{Corr}$\; Record all teleportation measurement outcomes in $\alpha$\; Record all teleportation measurement outcomes received from Bob in $\beta$\; $\mathit{NewPauliA} \leftarrow \br{\alpha,\beta,\mathit{P}_\mathrm{Corr}}$\; $\mathit{FullPA} \leftarrow \br{\mathit{FullPA},\mathit{NewPauliA}}$\; $\ell_{\PA} \leftarrow \ell_{\PA}+3r$\; $\widetilde{\NewPauliB}\leftarrow \br{\beta,\alpha,\widetilde{\mathit{P}_\mathrm{Corr}}}$\; $\widetilde{\PB} \leftarrow \br{\widetilde{\PB},\widetilde{\NewPauliB}}$\; $\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}+3r$\; $q_{\mathit{MA}} \leftarrow q_{\mathit{MA}}+1$\; \Return{\textup{\textbf{\textsf{simulate}}}}\; \caption{\textbf{\textsf{simulate}} (Alice's side)} \label{algo:simulate} \end{algorithm} \subsection{Analysis}\label{subsec:polysizeclassicalanalysis} In order to show the correctness of the above algorithm, we condition on some view of the metadata and Pauli data, i.e., $\mathit{FullMA}$, $\mathit{MA}$, $\widetilde{\MA}$, $\mathit{FullMB}$, $\mathit{MB}$, $\widetilde{\MB}$, $\mathit{FullPA}$, $\mathit{PA}$, $\widetilde{\PA}$, $\mathit{FullPB}$, $\mathit{PB}$ and $\widetilde{\PB}$. We define a potential function $\Phi$ as \begin{align*} \Phi\defeq \Phi_{\mathrm{Q}}+\Phi_{\mathrm{MD}}+\Phi_{\mathrm{PD}}\enspace, \end{align*} where $\Phi_{\mathrm{MD}}$ and $\Phi_{\mathrm{PD}}$ measure the correctness of the two parties' current estimate of each other's metadata and Pauli data, respectively, and $\Phi_{\mathrm{Q}}$ measures the progress in reproducing the joint state of the input protocol. 
We define \begin{align} &md_\mathrm{A}^+ \defeq~\text{the length of the longest prefix where $\mathit{MA}$ and $\widetilde{\MA}$ agree;}\label{eqn:mda+}\\ &md_\mathrm{B}^+ \defeq~\text{the length of the longest prefix where $\mathit{MB}$ and $\widetilde{\MB}$ agree;}\label{eqn:mdb+}\\ &md_\mathrm{A}^- \defeq \max\{\ell_{\MA},\ell_{\MAtilde}\}-md_\mathrm{A}^+;\label{eqn:mda-}\\ &md_\mathrm{B}^- \defeq \max\{\ell_{\MB},\ell_{\MBtilde}\}-md_\mathrm{B}^+;\label{eqn:mdb-}\\ &pd_\mathrm{A}^+ \defeq \lfloor\frac{1}{r} \times~\text{the length of the longest prefix where $\mathit{PA}$ and $\widetilde{\PA}$ agree}\rfloor;\label{eqn:pda+}\\ &pd_\mathrm{B}^+ \defeq \lfloor\frac{1}{r} \times~\text{the length of the longest prefix where $\mathit{PB}$ and $\widetilde{\PB}$ agree}\rfloor;\label{eqn:pdb+}\\ &pd_\mathrm{A}^- \defeq \frac{1}{r} \max\{\ell_{\PA},\ell_{\PAtilde}\}-pd_\mathrm{A}^+;\label{eqn:pda-}\\ &pd_\mathrm{B}^- \defeq \frac{1}{r} \max\{\ell_{\PB},\ell_{\PBtilde}\}-pd_\mathrm{B}^+.\label{eqn:pdb-} \end{align} Also, recall that \begin{align} &g \defeq \text{the number of good unitary blocks in $\mathit{JS}2$,}\label{eqn:g-analysis}\\ &b \defeq \text{the number of bad unitary blocks in $\mathit{JS}2$, and}\label{eqn:b-analysis}\\ &u\defeq |q_{\mathit{MA}}-q_{\mathit{MB}}|,\label{eqn:u-analysis} \end{align} with $q_{\mathit{MA}}$ and $q_{\mathit{MB}}$ the number of non-$\sC$ iterations for Alice and Bob, respectively. \suppress{ Note that by the following lemma, the $\mathit{JS}2$ representation defined above is well-defined. \begin{lemma}\label{lem:invariance} In the end of any iteration, the joint state is of the following form. \[\frac{1}{2^{\abs{(q_{\mathit{MA}}-q_{\mathit{MB}})\cdot r}}} \sum_{l\in\set{0,\ldots,d-1}^{\abs{(q_{\mathit{MA}}-q_{\mathit{MB}})\cdot r}}}\ketbra{l}\otimes\ketbra{\psi_l},\] where $\ket{\psi_l}$ is of the form JS1, and $d=|\Sigma|$. 
Moreover, the first $\min\br{q_{\mathit{MA}},q_{\mathit{MB}}}$ blocks (from right to left) of $\ket{\psi_l}$ are the same for all $l$. If $q_{\mathit{MA}}>q_{\mathit{MB}}$ (resp. $q_{\mathit{MA}}<q_{\mathit{MB}}$), then in the last $q_{\mathit{MA}}-q_{\mathit{MB}}$ (resp. $q_{\mathit{MB}}-q_{\mathit{MA}}$) blocks, all the powers of $U_{B,i}$ (resp. $U_{A,i}$) are $0$. \end{lemma} \begin{proof} The above form is closed under all quantum operations that occur in the algorithms. \end{proof} } Now we are ready to define the components of the potential function. At the end of the $i$-th iteration, we let \begin{align} &\Phi_{\mathrm{Q}}\defeq g-b-5u, \label{eqn:phiQ}\\ &\Phi_{\mathrm{MD}}\defeq md_\mathrm{A}^+-3md_\mathrm{A}^-+md_\mathrm{B}^+-3md_\mathrm{B}^--2i,\label{eqn:phimd}\\ &\Phi_{\mathrm{PD}}\defeq pd_\mathrm{A}^+-pd_\mathrm{A}^-+pd_\mathrm{B}^+-pd_\mathrm{B}^--3q_{\mathit{MA}}-3q_{\mathit{MB}},\label{eqn:phipd}\\ &\Phi\defeq\Phi_{\mathrm{Q}}+\Phi_{\mathrm{MD}}+\Phi_{\mathrm{PD}},\label{eqn:phi} \end{align} where $g$, $b$, and $u$ are defined in Eqs.~\eqref{eqn:g-analysis}, \eqref{eqn:b-analysis}, and \eqref{eqn:u-analysis}. \begin{lemma}\label{lem:phimdpdnegativelargeclassical} Throughout the algorithm, it holds that \begin{itemize} \item $\Phi_{\mathrm{MD}} \leq 0$ with equality if and only if Alice and Bob have full knowledge of each other's metadata, i.e., $md_\mathrm{A}^+ = md_\mathrm{B}^+ = i$ and $md_\mathrm{A}^- = md_\mathrm{B}^- = 0$. \item $\Phi_{\mathrm{PD}} \leq 0$ with equality if and only if Alice and Bob have full knowledge of each other's Pauli data, i.e., $pd_\mathrm{A}^+ = 3q_{\mathit{MA}}$, $pd_\mathrm{B}^+ = 3q_{\mathit{MB}}$, and $pd_\mathrm{A}^- = pd_\mathrm{B}^- = 0$. \end{itemize} \end{lemma} \begin{proof} The first statement follows from the property that $md_\mathrm{A}^+,md_\mathrm{B}^+ \leq i$, and the second statement holds since $pd_\mathrm{A}^+ \leq 3q_{\mathit{MA}}$ and $pd_\mathrm{B}^+ \leq 3q_{\mathit{MB}}$. 
\end{proof} Note that if $g-b-u\geq n/{2r}$, the noiseless protocol embedding described in Section \ref{sec:nslss} guarantees that not only is the correct final state of the original protocol produced and swapped into the safe registers $\tilde{A}$, $\tilde{B}$ and $\tilde{C}$, but also that these registers remain untouched by the bad and ugly blocks of the simulation. Therefore, by Lemma \ref{lem:phimdpdnegativelargeclassical}, for successful simulation of an $n$-round protocol it suffices to have $\Phi \geq n/{2r}$ at the end of the simulation. The main result of this section is the following: \setcounter{theorem}{0} \begin{theorem}[\textbf{Restated}]\label{theorem:simplealg} Consider any $n$-round alternating communication protocol $\Pi$ in the teleportation-based model, communicating messages over a noiseless channel with an alphabet $\Sigma$ of bit-size $\Theta\br{\log n}$. Algorithm \ref{algo:Mainalgorithm} is a computationally efficient coding scheme which, given $\Pi$, simulates it with probability at least $1-2^{-\Theta\br{n\epsilon}}$, over any fully adversarial error channel with alphabet $\Sigma$ and error rate $\epsilon$. The simulation uses $n\br{1+\Theta\br{\sqrt{\epsilon}}}$ rounds of communication, and therefore achieves a communication rate of $1-\Theta\br{\sqrt{\epsilon}}$. Furthermore, the computational complexity of the coding operations is~$O\br{n^2}$. \end{theorem} {\bf Proof Outline.} We prove that any iteration without an error or hash collision increases the potential by at least one, while any iteration with an error or hash collision reduces the potential by at most some fixed constant. As in Ref.~\cite{Haeupler:2014}, with very high probability the number of hash collisions is at most $O(n\epsilon)$, which is the same order of magnitude as the number of errors and therefore negligible. 
Finally, our choice of the total number of iterations, $R_\mathrm{total} \defeq \lceil n/2r + \kappa n \epsilon \rceil$ (for a sufficiently large constant $\kappa$), guarantees an overall potential increase of at least~$n/2r$. As explained above, this suffices to prove successful simulation of the input protocol. \setcounter{theorem}{4} \begin{lemma}\label{lem:potential increase} Each iteration of the Main Algorithm (Algorithm~\ref{algo:Mainalgorithm}) without a hash collision or error increases the potential $\Phi$ by at least $1$. \end{lemma} \begin{proof} Note that in an iteration with no error or hash collision, Alice and Bob agree on the iteration type. Moreover, if $\mathit{Itertype}=\mathrm{MD}$ or $\mathrm{PD}$ (Case i or iii), they also agree on whether they extend or rewind the data (subcase A or B), and if $\mathit{Itertype}=\mathrm{MES}$ (Case ii), then exactly one of them is in \textsf{Case A} and the other one is in \textsf{Case B}. We analyze the potential function in each of the cases, keeping in mind that we only encounter Case~ii or later cases once the metadata of the two parties are consistent and of full length, and similarly, that we encounter Case~iv once the parties have used the same number of MESs and the Pauli data of the two parties are consistent and of full length. Lemma~\ref{lem:phimdpdnegativelargeclassical} guarantees that~$\Phi_{\mathrm{MD}}$ becomes~0 on entering Case~ii, and that~$\Phi_{\mathrm{MD}} = \Phi_{\mathrm{PD}} = 0$ on entering Case~iv. \begin{itemize} \item Alice and Bob are in \textsf{Case i.A}: \begin{itemize} \item $\Phi_{\mathrm{PD}}$ and $\Phi_{\mathrm{Q}}$ stay the same. \item $i$ increases by $1$. \item $md_\mathrm{A}^+$ and $md_\mathrm{B}^+$ stay the same. \item Neither $md_\mathrm{A}^-$ nor $md_\mathrm{B}^-$ increases, and at least one decreases by $1$. \end{itemize} Therefore, $\Phi_{\mathrm{MD}}$ increases by at least $3-2=1$, and so does $\Phi$. 
\item Alice and Bob are in \textsf{Case i.B}: \begin{itemize} \item $\Phi_{\mathrm{PD}}$ and $\Phi_{\mathrm{Q}}$ stay the same. \item $i$ increases by $1$. \item $md_\mathrm{A}^-$ and $md_\mathrm{B}^-$ stay at $0$. \item At least one of $\ell_{\MA}$ or $\ell_{\mathit{MB}}$ is smaller than $i-1$; If only $\ell_{\MA} < i-1$, then $md_\mathrm{A}^+$ increases by $2$, and $md_\mathrm{B}^+$ by $1$. The case where only $\ell_{\mathit{MB}} < i-1$ is similar. If both are smaller than $i-1$, then $md_\mathrm{A}^+$ and $md_\mathrm{B}^+$ both increase by $2$. \end{itemize} Therefore, $\Phi_{\mathrm{MD}}$ increases by at least $3-2=1$, and so does $\Phi$. \item Alice is in \textsf{Case ii.A}, Bob is in \textsf{Case ii.B}: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ stays at $0$. \item $q_{\mathit{MB}}$ increases by $1$. \item $q_{\mathit{MA}}$, $pd_\mathrm{A}^+$, $pd_\mathrm{A}^-$, $pd_\mathrm{B}^+$, $pd_\mathrm{B}^-$ all stay the same. \item $g$ remains the same, $b$ increases by at most $1$, and $u$ decreases by 1. \end{itemize} Therefore, $\Phi_{\mathrm{Q}}$ increases by at least $5-1=4$, and $\Phi_{\mathrm{PD}}$ decreases by $3$. So $\Phi$ increases by at least $1$. \item Alice is in \textsf{Case ii.B}, Bob is in \textsf{Case ii.A}: This case is similar to the above one. \item Alice and Bob are in \textsf{Case iii.A} \begin{itemize} \item $\Phi_{\mathrm{MD}}$ stays at $0$, and $\Phi_{\mathrm{Q}}$ stays the same \item $pd_\mathrm{A}^+$, $pd_\mathrm{B}^+$, $q_{\mathit{MA}}$ and $q_{\mathit{MB}}$ stay the same. \item None of $pd_\mathrm{A}^-$ and $pd_\mathrm{B}^-$ increases, and at least one decreases by $1$. \end{itemize} Therefore, $\Phi_{\mathrm{PD}}$ increases by at least $1$, and so does $\Phi$. \item Alice and Bob are in \textsf{Case iii.B} \begin{itemize} \item $\Phi_{\mathrm{MD}}$ stays at $0$, and $\Phi_{\mathrm{Q}}$ stays the same. \item $pd_\mathrm{A}^-$, $pd_\mathrm{B}^-$ stay at $0$, and $q_{\mathit{MA}}$, $q_{\mathit{MB}}$ stay the same. 
\item At least one of the following holds: $\ell_{\PA} < 3q_{\mathit{MA}}\cdot r$, in which case $pd_\mathrm{A}^+$ increases by $1$ (otherwise it remains unchanged), or $\ell_{\PB} < 3q_{\mathit{MB}}\cdot r$, and then $pd_\mathrm{B}^+$ increases by $1$ (otherwise it remains unchanged). \end{itemize} Therefore, $\Phi_{\mathrm{PD}}$ increases by at least $1$, and so does $\Phi$. \item Alice and Bob are in \textsf{Case iv}: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ and $\Phi_{\mathrm{PD}}$ stay at $0$. \item $u$ stays at $0$. \item Either $g$ stays the same and $b$ decreases by $1$ (when $b\neq0$), or $b$ stays at $0$ and $g$ increases by $1$. \end{itemize} Therefore, $\Phi_{\mathrm{Q}}$ increases by $1$, and so does $\Phi$. \end{itemize} Hence $\Phi$ increases by at least $1$ in each iteration of the algorithm without a hash collision or error. \end{proof} \begin{lemma}\label{lem:simpleerrorpotential} Each iteration of Algorithm \ref{algo:Mainalgorithm}, regardless of the number of hash collisions and errors, decreases the potential $\Phi$ by at most $45$. \end{lemma} \begin{proof} At each step, $i$ increases by $1$ while, in the worst case, $g$, $md_\mathrm{A}^+$, $md_\mathrm{B}^+$, $pd_\mathrm{A}^+$ and $pd_\mathrm{B}^+$ decrease by at most $1$, $b$, $u$, $q_{\mathit{MA}}$ and $q_{\mathit{MB}}$ increase by at most $1$, $md_\mathrm{A}^-$ and $md_\mathrm{B}^-$ increase by at most $3$, and $pd_\mathrm{A}^-$ and $pd_\mathrm{B}^-$ increase by at most $4$. Hence, $\Phi_{\mathrm{Q}}$, $\Phi_{\mathrm{MD}}$ and $\Phi_{\mathrm{PD}}$ decrease by at most $7$, $22$, and $16$, respectively. So in total, $\Phi$ decreases by at most $45$. \end{proof} The following lemma is from \cite{Haeupler:2014}. \begin{lemma}\label{lem:simplehashcollisions} The number of iterations of Algorithm \ref{algo:Mainalgorithm} suffering from a hash collision is at most $6n\epsilon$ with probability at least $1 - 2^{-\Theta(\epsilon n)}$. 
\end{lemma} \begin{proofof} {Theorem \ref{theorem:simplealg}} Let $R_\mathrm{total}=\lceil\frac{n}{2r}\rceil+368n\epsilon$. The total number of iterations is less than $2n$, so the total number of iterations with an error is at most $2n\epsilon$. By Lemma \ref{lem:simplehashcollisions}, with probability at least $1 - 2^{-\Theta(\epsilon n)}$, the number of iterations with a hash collision is at most $6n\epsilon$. Therefore, by Lemma \ref{lem:potential increase}, in each of the remaining $R_\mathrm{total}-8n\epsilon=\lceil\frac{n}{2r}\rceil+360n\epsilon$ iterations, the potential $\Phi$ increases by at least one. The potential decreases only when there is an error or a hash collision, and it decreases by at most $45$. So at the end of the simulation, we have \[g-b-u\geq\Phi_{\mathrm{Q}}\geq\Phi\geq R_\mathrm{total}-8n\epsilon-45\times 8n\epsilon\geq\frac{n}{2r}.\] Hence the simulation is successful. Furthermore, note that the amount of communication in each iteration is independent of the iteration type and is always $2r+\Theta(1)$ symbols: in every iteration each party sends $\Theta(1)$ symbols to communicate the hash values and the lengths of the metadata and Pauli data in line 9 of Algorithm \ref{algo:Mainalgorithm}; each party sends another $r$ symbols, either in line 13 of Algorithm \ref{algo:Mainalgorithm}, if $\mathit{Itertype} \neq \mathrm{SIM}$, or in Algorithm \ref{algo:simulate}, to communicate the teleportation measurement outcomes. So the total number of communicated symbols is \begin{equation} R_\mathrm{total}\cdot(2r+\Theta(1))=\left(\lceil\frac{n}{2r}\rceil+\Theta(n\epsilon)\right)\left(2r+\Theta(1)\right)=n(1+\Theta(\sqrt{\epsilon})), \end{equation} as claimed. 
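The arithmetic behind this choice of constants can be checked mechanically. The following sketch (an illustration of ours, not part of the coding scheme; the sample values of $n$, $\epsilon$ and $r$ are arbitrary, chosen so the floating-point arithmetic is exact) instantiates the accounting used above: at most $8n\epsilon$ iterations are bad (errors plus hash collisions), each bad iteration costs at most $45$, and every other iteration gains at least $1$:

```python
import math

def final_potential_lower_bound(n, eps, r):
    """Worst-case lower bound on the potential Phi at the end of the
    simulation: at most 2*n*eps iterations with an error plus at most
    6*n*eps with a hash collision (8*n*eps bad iterations in total),
    each losing at most 45, and every remaining iteration gaining
    at least 1."""
    R_total = math.ceil(n / (2 * r)) + 368 * n * eps
    bad = 8 * n * eps                  # errors + hash collisions
    return (R_total - bad) - 45 * bad  # gains minus worst-case losses

# The bound must dominate the n/(2r) threshold needed for success;
# indeed it collapses to ceil(n/(2r)), since 368 = 8 + 45 * 8.
n, eps = 10**6, 1 / 64                 # eps is exactly representable
r = math.ceil(1 / math.sqrt(eps))      # r = Theta(1/sqrt(eps))
assert final_potential_lower_bound(n, eps, r) >= n / (2 * r)
```

The check merely confirms that the constant $368$ is chosen as $8 + 45\times 8$, so the losses of the bad iterations are exactly absorbed.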
\end{proofof} \section{ Ideas and solution in the plain model with large alphabet size} \label{sec:BriefQuLarge} \subsection{Overview} \subsubsection{Teleportation is inapplicable} Switching from the teleportation-based model to the plain quantum model, suppose we are given a protocol $\Pi$ using noiseless quantum communication, and we are asked to provide a protocol $\Pi'$ using noisy quantum channels under the strongly adversarial model described earlier. In the absence of free entanglement, how can we protect quantum data from leaking to the environment without incurring a non-negligible overhead? First, note that some form of protection is necessary, as discussed in \longpaper{Section~\ref{sec:intro-diff}.} \blurb{Section~2 of this extended abstract.} Second, teleportation would be too expensive to use, since it incurs an overhead of at least $3$: we have to pay for the MES as well as the classical communication required. Surprisingly, an old and relatively unknown idea called the Quantum Vernam Cipher (QVC)~\cite{Leung:2002} turns out to be a perfect alternative method to protect quantum data with negligible overhead as the noise rate approaches 0. \subsubsection{Quantum Vernam Cipher (QVC)} Suppose Alice and Bob share two copies of MESs, each over two $d$-dimensional systems. For Alice to send a message to Bob, she applies a controlled-${\mathrm X}$ operation with her half of the first MES as control, and the message as the target. She applies a controlled-${\mathrm Z}$ operation from her half of the second MES to the message. When Bob receives the message, he reverses the controlled operations using his halves of the MESs. The operations are similar for the opposite direction of communication. A detailed description is provided in Section~\ref{sec:qvc}. \suppress{the full paper. 
\blurb{Section~2.5 of the full manuscript; Figure~7 at the end of this extended abstract depicts the protocol.}} QVC is designed so that, given access to an authenticated classical channel from Alice to Bob, Bob can determine and correct any error in the transmission of the quantum message. This can simply be done by measuring ${\mathrm Z}^l$-type changes to one half of the two MESs. They can also run QVC many times to send multiple messages and determine the errors in a large block using a method called ``random hashing'', and recycle the MESs if the error rate (as defined in our adversarial model) is low. This is a crucial property of QVC and leads to one of the earliest (quantum) key recycling results known. What makes QVC particularly suitable for our problem is that encoding and decoding are performed message-wise, while error detection can be done in large blocks, and entanglement can be recycled if no error is detected. It may thus be viewed as a natural quantum generalization of Haeupler's consistency checks. As an aside, in Appendix E of Ref.~\cite{Leung:2002}, the relative merits of teleportation and QVC were compared (those are the only two ciphers with complete quantum reliability), and it was determined that entanglement generation over an insecure noisy quantum channel followed by teleportation is more entanglement-efficient than QVC with entanglement recycling in some test settings. However, this difference vanishes for low noise. Furthermore, the comparison assumes authenticated noiseless classical communication to be free. QVC requires an amount of classical communication for the consistency checks that vanishes with the noise parameter (but this cost was not a concern in that study). In addition, QVC was also proposed as an authentication scheme, but the requirement for interaction to authenticate and to recycle the key or entanglement was considered a disadvantage compared to non-interactive schemes. 
(Those non-interactive schemes are only efficient for large block lengths and cannot identify the error when one is detected, so they are inapplicable here.) We thus provide renewed insight into QVC when interaction is natural (while it is considered expensive in many other settings). \subsubsection{Entanglement recycling and adaptations of QVC for the current problem} In the current scenario, we have neither free MESs nor an authenticated classical channel. Instead, Alice and Bob start the protocol by distributing the MESs they need, using a high-rate quantum error-correcting code over the low-noise channel. Then, they run the input protocol $\Pi$ as is over the noisy channel, while frequently checking for errors by performing \emph{quantum hashing\/}~\cite{BDSW96,Leung:2002}, using the same noisy quantum channel instead of an authenticated classical channel. If they detect an inconsistency, assuming that the errors are most likely recent, they measure a small block of MESs in the recent past to determine the errors. They continue this process until they get matching quantum hash values indicating (with constant probability) that they have located and identified all the errors and that the remaining MESs can be recycled and reused to encrypt the messages. Frequent quantum hashing allows Alice and Bob to boost their confidence in the recyclability of the earlier MESs and reuse MESs in a cyclic way. Note that for successful simulation it is crucial to ensure that the recycled MESs are indeed not corrupted and that Alice and Bob recycle the same sequence of MESs. One of our main contributions in this section is developing a framework for recycling entanglement in a communication-efficient way. We show that entanglement generation of $O\br{n\sqrt{\epsilon}}$ MESs, where $n$ is the length of the input protocol $\Pi$ and $\epsilon$ is the noise parameter, is sufficient to last through the whole simulation. 
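To make the cipher and the error-localization step concrete, the following qubit toy model (an illustrative sketch of ours using numpy; the protocol itself operates on qudits of dimension $|\Sigma|$, and all function names here are ours) simulates one QVC transmission. With no channel error, Bob recovers the message and both shared pairs return to $\ket{\Phi^+}$, so they can be recycled; a channel Pauli error passes through to the message but leaves a ${\mathrm Z}$-type imprint on one of the shared pairs, which is exactly the kind of change the parties detect by hashing:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def apply_ctrl(state, n, ctrl, tgt, U):
    """Apply controlled-U (control qubit `ctrl`, target `tgt`) to an
    n-qubit state vector (qubit 0 is the most significant axis)."""
    psi = np.moveaxis(state.reshape([2] * n), [ctrl, tgt], [0, 1]).reshape(4, -1)
    G = np.eye(4, dtype=complex)
    G[2:, 2:] = U                       # U fires only when the control is |1>
    psi = (G @ psi).reshape([2, 2] + [2] * (n - 2))
    return np.moveaxis(psi, [0, 1], [ctrl, tgt]).reshape(-1)

def qvc_round(msg, channel_error=None):
    """One QVC transmission of a single message qubit.

    Qubit layout: 0 = message, (1, 2) = first shared Bell pair
    (Alice's half, Bob's half), (3, 4) = second shared Bell pair."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(msg, np.kron(bell, bell))
    state = apply_ctrl(state, 5, 1, 0, X)   # Alice: controlled-X from pair 1
    state = apply_ctrl(state, 5, 3, 0, Z)   # Alice: controlled-Z from pair 2
    if channel_error is not None:           # adversarial Pauli on the channel
        state = (channel_error @ state.reshape(2, -1)).reshape(-1)
    state = apply_ctrl(state, 5, 4, 0, Z)   # Bob: undo in reverse order
    state = apply_ctrl(state, 5, 2, 0, X)
    return state

msg = np.array([0.6, 0.8], dtype=complex)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
phi_minus = np.array([1, 0, 0, -1], dtype=complex) / np.sqrt(2)

# No error: message and both pairs come back intact, so the pairs can
# be recycled.
assert np.allclose(qvc_round(msg), np.kron(msg, np.kron(bell, bell)))
# A channel Z error leaves a Z-type imprint (phi+ -> phi-) on pair 1.
assert np.allclose(qvc_round(msg, Z),
                   np.kron(Z @ msg, np.kron(phi_minus, bell)))
```

An ${\mathrm X}$ error analogously marks the second pair, so measuring ${\mathrm Z}$-type changes on the two pairs identifies the channel Pauli error, as described above.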
\subsubsection{Framework} As in the case of the teleportation-based protocols, due to transmission errors and collisions, Alice and Bob do not necessarily always agree on their actions in the simulation. Therefore, in every iteration both parties need to obtain a global view of the history of the simulation so far to correctly decide their next actions. They achieve this goal by maintaining a data structure similar to the one in the teleportation-based case. The data structure now contains additional information to keep track of their measurements, which Alice and Bob use in the recycling process. \subsubsection{Additional out-of-sync problems} Due to transmission errors introduced by the adversary, Alice and Bob may get out of sync in QVC. In such a scenario, the QVC operations are performed by only one party and the quantum data intended to be sent to the other party leaks into the MES registers used to encrypt the messages. Furthermore, the parties may not agree on the subset of MESs they have already measured when they perform quantum hashing. As we will explain in Section~\ref{sec:out-of-sync QH}, in the worst case, this can further lead to the leakage of the quantum data into all the MES registers involved in the quantum hashing procedure. \suppress{In particular, corruptions that lead only one party to recycle an MES can cause a significant discrepancy in how many MESs the two parties are holding. It is much more involved to analyse the joint quantum state.} \suppress{To tackle these problems, we develop further data structures and adapt the ``quantum hashing'' procedure of Ref.~\cite{BDSW96,Leung:2002} to our setting.} We show that, surprisingly, once again the quantum data can be recovered once Alice and Bob reconcile the differences in the data structure developed for the task. 
This is despite the fact that there is no reason to expect out-of-sync QVC to protect the quantum data from leaking to the environment when the encoding and decoding operations are performed incorrectly and the quantum data is sent over the noisy quantum channel. \subsection{Result} Our main result in the plain quantum model with polynomial-size communication alphabet is the simulation of an $n$-round noiseless communication protocol over a fully adversarial channel of error-rate $\epsilon$ defined in Section~\ref{sec:noisy_comm_model}. \suppress{First, we state the result in the large alphabet case.} \begin{theorem} \label{thm:Qmessagelargealphabet} Consider any $n$-round alternating communication protocol $\Pi$ in the plain quantum model, communicating messages over a noiseless channel with an alphabet $\Sigma$ of bit-size $\Theta\br{\log n}$. Algorithm~\ref{algo:MainalgorithmQMessage} is a quantum coding scheme which, given $\Pi$, simulates it with probability at least $1-2^{-\Theta\br{n\epsilon}}$, over any fully adversarial error channel with alphabet $\Sigma$ and error rate $\epsilon$. The simulation uses $n\br{1+\Theta\br{\sqrt{\epsilon}}}$ rounds of communication, and therefore achieves a communication rate of $1-\Theta\br{\sqrt{\epsilon}}$. \end{theorem} \suppress{ The simulation when the communication alphabet is constant-sized is more complicated. Nonetheless, in Section~\ref{sec:small-alphabet-Yao}, we prove that the same simulation rate is achievable in this case as well. \begin{theorem} \label{thm:Qmessagesmallalphabet} Consider any $n$-round alternating communication protocol $\Pi$ in the plain quantum model, communicating messages over a noiseless channel with an alphabet $\Sigma$ of constant bit-size.
Algorithm \ref{algo:MainalgorithmQMessage} is a \textbf{(computationally efficient?)} quantum coding scheme which given $\Pi$, simulates it with probability at least $1-2^{-\Theta\br{n\epsilon}}$, over any fully adversarial error channel with alphabet $\Sigma$ and error rate $\epsilon$. The simulation uses $n\br{1+\Theta\br{\sqrt{\epsilon}}}$ rounds of communication, and therefore achieves a communication rate of $1-\Theta\br{\sqrt{\epsilon}}$. \end{theorem} Note that in the classical setting with constant-size alphabet and no pre-shared randomness, the current best simulation rate is $1-\Theta\br{\sqrt{\epsilon}}$ for oblivious channels and $1-\Theta\br{\sqrt{\epsilon\log\log\frac{1}{\epsilon}}}$ for fully adversarial channels~\cite{Haeupler:2014}. Our simulation in the plain quantum model outperforms the best known protocol in the corresponding classical setting. This advantage may be interpreted as follows. When using quantum communication, Alice and Bob can establish MESs and measure them to generate the hash seeds. The advantage of establishing the shared randomness in this way is that the seeds remain unknown to the adversary. Thus the adversary is not able to create hash collisions purposely. Therefore, we do not need to modify the simulation algorithm for oblivious errors to handle fully adversarial errors, in contrast to~\cite{Haeupler:2014}. } \subsection{Description of Protocol}\label{sec:description-large-quantum} \subsubsection{General Description}\label{subsec:description-large-quantum} Our simulation of noiseless protocols in the plain quantum model of communication proceeds using the same idea of running $O_\epsilon\br{1}$ rounds of the input protocol as is, while checking if the adversary has corrupted the communication during the previous iterations and if necessary, actively rewinding the simulation to correct errors. The quantum messages are protected using QVC against corruptions by the adversary. 
In order to detect potential transmission errors, the MES pairs used as the key in QVC may be measured after each communication round. The measurement outcomes may be stored and later on compared to obtain the error syndrome. Therefore, using a data structure similar to the one introduced in the previous section, one can obtain a coding scheme for simulating any protocol in the plain quantum model. However, this approach is not efficient in using the entanglement. Recall that in the plain quantum model, the parties do not pre-share any entanglement, hence they need to establish the shared MESs through extra communication. Rather than measuring the MES pairs immediately after each round of communication, we use the quantum hashing procedure described in Section~\ref{sec:Qhashing} to check whether any transmission error has occurred so far. Note that if Alice and Bob detect an error, they need to determine the error and eventually actively rewind the simulation to correct it and resume the simulation from there. However, similar to the teleportation-based protocol, due to transmission errors Alice and Bob may not always agree on how they proceed with the simulation in every iteration. Thus, in every iteration before taking further actions, each party needs to know the actions of the other party so far. More accurately, they first need to obtain a global view of their joint quantum state. Alice and Bob locally maintain a data structure similar to the one in the teleportation-based protocol, containing metadata and Pauli data, which needs to be synchronized in the algorithm. They first need to ensure they have full knowledge of each other's metadata. Then, similar to the teleportation-based case, in order to avoid out-of-sync scenarios in communication using QVC, it is crucial for them to synchronize the number of MES blocks they have used (see Section~\ref{subsec:out-of-sync QVC}).
We denote by $\ellQVCA$ and $\ell_\mathrm{QVC}^\mathrm{B}$, the number of blocks of MES pairs used by Alice and Bob, respectively. After ensuring $\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}$ (to the best of their knowledge), they compare their quantum hash values to check for errors. Note that quantum hashing does not indicate in which round of communication the error has occurred. If the hash values do not match, they measure the last block of MES pairs which has not gained as much trust as the older blocks through quantum hashing. This effectively collapses the adversary's action on this block to Pauli errors. The effective errors can be determined jointly from the measurement outcomes, which are recorded by Alice and Bob as part of their local Pauli data. Otherwise, if the hashes do match, then Alice and Bob synchronize their Pauli data. Note that similar to the teleportation-based protocol, the Pauli corrections performed by Alice and Bob are also recorded as part of the Pauli data. Together with the metadata, this gives Alice and Bob all the information they need to compute their estimate of the current joint state. Hence, they can determine how to proceed with the simulation next. \paragraph{Recycling entanglement and recycling data.} An important complication arising in simulation of protocols in the plain quantum model is that in order to achieve the simulation rate of $1-\Theta(\sqrt{\epsilon})$, we cannot afford to access a new MES pair in every round of communication using QVC. This is where we use a crucial property of QVC, namely the key recycling property. Note that in communication using QVC if no error occurs on the message then the pair of MESs used will remain intact, hence they can be recycled and used in later rounds. Otherwise, at some point Alice and Bob need to measure the MES pair to get the error syndrome, in which case the pair cannot be recycled. 
By performing quantum hashing regularly and carefully keeping track of the measured MES blocks, Alice and Bob can recycle MES pairs as needed and run the simulation by establishing a smaller number of MESs at the beginning of the protocol. In order to correctly simulate the input protocol, we need to ensure that the recycling is successful in every iteration, namely that the same sequences of MES blocks are recycled by the two parties and that they are indeed not corrupted when being recycled. Note that if the two parties reuse an MES pair which has been corrupted due to an earlier transmission error in QVC, then even if Alice and Bob detect the error and measure the MES pair, they have no way of knowing whether the error has occurred the last time the MES pair was used or in an earlier round. Moreover, if a block of MES pairs has been locally measured by only one party, say Alice, then the other party, Bob, needs to measure the block and avoid this block when encrypting future messages using QVC. In order to achieve successful recycling, we modify the metadata so that it contains additional information to keep track of each party's measurements. Alice maintains a string $\mathit{RA}\in \{\sS,\sM\}^*$, where each $\sM$ symbol corresponds to a measured MES block and $\sS$ is used for an MES block still in superposition. In each iteration, Alice finds the next reusable MES block, records the index of the block in a string, $\mathit{IndexA}$, of recycled MESs and concatenates $\mathit{RA}$ with a new $\sS$ symbol. The string $\mathit{IndexA}$ serves as Alice's queue of reusable MES blocks. Moreover, if she measures an MES block in the current iteration, she changes the corresponding $\sS$ symbol in $\mathit{RA}$ to $\sM$. Similarly, Bob maintains the strings $\mathit{RB}$ and $\mathit{IndexB}$. Note that the recycled MES blocks are not necessarily reused immediately or even in the same iteration by the two parties.
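The bookkeeping for $\mathit{RA}$ and $\mathit{IndexA}$ described above can be sketched as a small class. This is a simplified, hypothetical rendering in which the eligibility rule is reduced to "trusted, unmeasured, and not already requeued"; it is not the data structure of the actual algorithm.

```python
class RecyclingData:
    """Toy bookkeeping mirroring the strings RA/IndexA described above:
    R[i] is 'S' (block still in superposition) or 'M' (measured) for the
    i-th entry of the queue Index.  A simplified, illustrative sketch."""

    def __init__(self, fresh_blocks):
        self.fresh = list(fresh_blocks)  # MES blocks never used so far
        self.R = []                      # one 'S'/'M' symbol per queue entry
        self.Index = []                  # queue of (re)usable block ids

    def next_block(self, trusted):
        # Recycle the earliest block in the trusted prefix of the queue
        # (verified through frequent quantum hashing) that is unmeasured
        # and not already requeued; otherwise fall back to a fresh block.
        for i in range(trusted):
            b = self.Index[i]
            if self.R[i] == 'S' and b not in self.Index[i + 1:]:
                break
        else:
            b = self.fresh.pop(0)        # raises IndexError if exhausted
        self.Index.append(b)
        self.R.append('S')
        return b

    def measure(self, block):
        # A measured block can never be recycled again.
        for i, b in enumerate(self.Index):
            if b == block:
                self.R[i] = 'M'
```

Both parties run the same rule on their own copies, so as long as their recycling data match, they dequeue the same sequence of blocks.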
Alice and Bob need to ensure that the strings $\mathit{RA}$ and $\mathit{RB}$ are the same. Using her full-length estimate $\widetilde{\MB}$ of Bob's metadata $\mathit{FullMB}$, Alice computes an estimate $\widetilde{\RB}$ of $\mathit{RB}$. Note that if $\widetilde{\MB}=\mathit{FullMB}$ then $\widetilde{\RB}=\mathit{RB}$. Similarly, Bob computes his estimate $\widetilde{\RA}$ of $\mathit{RA}$ from his full-length estimate $\widetilde{\MA}$ of Alice's metadata $\mathit{FullMA}$. After synchronizing their metadata and ensuring that they have used the same number of MES blocks, they synchronize their recycling data. Alice recycles MES blocks using $\mathit{RA}$ and $\mathit{IndexA}$ and Bob using $\mathit{RB}$ and $\mathit{IndexB}$. Frequent hashing of the metadata allows them to be highly confident that their recycling data agree in a sufficiently long prefix. Furthermore, quantum hashing ensures that with high probability all the recycled MES blocks are indeed reusable. Note that the synchronization of the recycling data is slightly different from the synchronization of the metadata and the Pauli data. For the latter, Alice and Bob just need to know each other's local data, i.e., Alice needs to know $\mathit{FullMB}$ and $\mathit{FullPB}$ and Bob needs to know $\mathit{FullMA}$ and $\mathit{FullPA}$, and the corresponding data need not be the same. For the recycling data, in contrast, Alice and Bob need to learn each other's data and match them, i.e., they need to ensure $\mathit{RA}=\mathit{RB}$. \paragraph{Entanglement distribution.} At the outset of the simulation, Alice and Bob use the Robust Entanglement Distribution protocol (Algorithm~\ref{algo:Robust Entanglement Distribution}) introduced in Section~\ref{subsec:hash} to share $\Theta(n\sqrt{\epsilon})$ copies of the MES $\ket{\phi^{0,0}}$, defined in Definition~\ref{def:Bellstates}.
The shared MESs are used as follows: \begin{itemize} \item $\Theta\br{n\sqrt{\epsilon}}$ MESs are used in pairs to serve as the key for encryption of messages using QVC. They are divided into $\mathit{L}_{\mathrm{QVC}}=\Theta\br{n\epsilon}$ blocks of $2r$ MES pairs, where $r=\Theta(\frac{1}{\sqrt{\epsilon}})$. In each block the MES pairs are implicitly numbered from $1$ to $2r$. The odd-numbered pairs are used in QVC to send messages from Alice to Bob and the even-numbered pairs are used to send messages from Bob to Alice. \item $\Theta(n\sqrt{\epsilon})$ MESs are reserved to be used in quantum hashing. \item The remaining $\Theta(n\sqrt{\epsilon})$ MESs are measured in the computational basis by both parties to obtain a common random string to be used as the seed for classical and quantum hashing. \end{itemize} We show that with the limited error budget of the adversary, the $\Theta(n\sqrt{\epsilon})$ MESs established at the beginning of the simulation are sufficient to successfully simulate the input protocol. This allows us to achieve a simulation rate of $1-\Theta(\sqrt{\epsilon})$. \begin{figure}[!t] \centering \includegraphics[width=480pt]{EPR-circle.jpg} \caption{These figures represent the blocks of MES pairs at different stages of the protocol. To simplify the figure, we represent each block by a single MES. Note that these are used in a circular pattern, corresponding to recycling some of the previously used blocks of MES pairs. Those depicted as circles are assumed to be good and usable for QVC; those depicted as squares have been measured already in order to extract the error syndrome. Figure (a) represents the MES blocks at the beginning of the protocol, when none have been measured. Figure (b) represents them when Alice and Bob agree on which ones have been measured and have used the same number of them for QVC, which is the desired state.
Figure (c) represents a situation when Alice and Bob have gotten out-of-sync, e.g., Alice has measured some blocks that Bob has not (and maybe used QVC more often than Bob). They then work to get back in sync before resuming the simulation. } \label{fig:EPR_circle} \end{figure} \subsubsection{Quantum Hashing}\label{sec:Qhashing} By performing local measurements on the MES pairs used as the keys for encrypting the messages in QVC and comparing the measurement outcomes on both sides, one can extract the error syndrome corresponding to the corruptions introduced over the noisy communication channel. As explained in the previous subsection, although this allows the two parties to detect errors immediately, it is not efficient for our application. In Subsection~\ref{sec:qvc}, we introduced an error detection procedure which allows the parties to check for corruptions when QVC is used over several rounds of communication to send multiple messages at the cost of losing only one MES which is measured at the end. However, this error detection procedure is not directly useful in our application since the adversary can always choose the corruptions in a way that makes it impossible for Alice and Bob to detect errors; see Subsection~\ref{sec:qvc}. Instead, Alice and Bob use the \emph{quantum hashing\/} procedure described below to check whether there is an undetected error. To prevent the adversary from hiding her corruptions from the detection procedure above, Alice and Bob choose a random subset of the MESs and try to detect errors in this subset rather than in all the MESs used in QVC. More precisely, quantum hashing involves the following steps in our algorithm. At the beginning of the $i$-th iteration, Alice and Bob pick a fresh MES serving as the control system used in the error detection procedure. Recall that Alice and Bob locally maintain the strings $\mathit{IndexA}$ and $\mathit{IndexB}$, respectively, corresponding to their recycled MES blocks.
Alice uses the shared randomness established at the outset of the protocol to choose a random subset of the MES registers contained in the blocks specified by $\mathit{IndexA}\Br{i-t+1:\ellQVCA}$, for $t\in \Theta\br{n\epsilon}$. Using her recycling data $\mathit{RA}$, she locally determines the MESs in this random subset which she has not measured already. She performs her operations in the detection procedure described in Subsection~\ref{sec:qvc} only on these MESs. Bob does the same locally, based on $\mathit{IndexB}$ and $\mathit{RB}$. Alice and Bob store their measurement outcomes in $\QHA$ and $\mathit{QHB}$, respectively, and exchange the values. We prove that, except with exponentially small probability, recycling is successful throughout the execution of the algorithm. Therefore we can assume $\mathit{IndexA}=\mathit{IndexB}$ in every iteration. But $\mathit{RA}$ and $\mathit{RB}$ do not necessarily match in every iteration. Therefore, in some iterations, Alice and Bob may perform the quantum hashing on different subsets of the MESs. Moreover, if the two parties are not synchronized in the number of MES blocks they have used, they might perform quantum hashing on an MES that has been used by only one party. We discuss these out-of-sync quantum hashing scenarios in Subsection~\ref{sec:out-of-sync QH}. Alice and Bob compare their quantum hash values only when they believe they have full knowledge of each other's metadata, have used the same number of MES blocks and agree on their recycling data. If they get the same hash values, they assume no error has occurred. Otherwise, they believe they have detected an error. Note that similar to all other types of data that are communicated over the noisy channel, the quantum hash values may get corrupted by the adversary.
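Since, by Lemma~\ref{lem:quantumhash} below, the exchanged outcomes $\QHA$ and $\mathit{QHB}$ agree exactly when $\sum_{i\in S}k_i=0 \pmod d$ for the residual phase exponents $k_i$ on the hashed MESs, the detection step can be mimicked classically. The sketch below is illustrative only; the function names and the repetition policy are ours, not the paper's.

```python
import random

def hashes_match(k, S, d):
    # By the lemma proved in this subsection, Alice's and Bob's hash
    # outcomes agree exactly when sum_{i in S} k_i = 0 (mod d), where
    # k[i] is the residual phase exponent on the i-th MES (0 = no error).
    return sum(k[i] for i in S) % d == 0

def error_detected(k, d, repetitions, rng):
    """Repeat the hash with independent random subsets drawn from the
    shared seed; any mismatch flags an error.  Illustrative names."""
    m = len(k)
    for _ in range(repetitions):
        S = [i for i in range(m) if rng.random() < 0.5]
        if not hashes_match(k, S, d):
            return True
    return False
```

With no residual error ($k = 0^m$) the hashes always match; with any error, each repetition detects it with probability at least $1/2$, so a constant number of repetitions drives the miss probability down exponentially.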
We say a \emph{quantum hash collision\/} has occurred in an iteration only when recycling has been successful so far; Alice and Bob are indeed synchronized in their metadata, in the number of MES blocks they have used, and in their recycling data; and $\QHA=\mathit{QHB}$ despite the fact that there are non-measured MES blocks in the last $t$ blocks of $\mathit{IndexA}=\mathit{IndexB}$ which are not in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state. The following lemma shows that in every iteration of the algorithm, assuming successful recycling up to that point, the probability of a quantum hash collision is at most $1/2$. \suppress{ Note that by Equations~\eqref{eq:messagecorrupt},\eqref{eqn:quantumvernamcipher} and \eqref{eqn:reverseorderquantumvernamcipher}, even if Alice and Bob reuse a block of MES pairs corresponding to an iteration in which communication is corrupted, all the MESs always stay in $\textsf{span}\set{\ket{\phi^{0,k}}:0\leq k\leq d-1}$. } \begin{lemma}\label{lem:quantumhash} Let $m\in \mathbb{N}$ and $k=k_1 \ldots k_m \in \{0,1,\ldots,d-1\}^m$. Suppose that Alice and Bob share the states $\ket{\phi^{0,0}}_{A_0B_0},\ket{\phi^{0,k_1}}_{A_1B_1},\ldots,\ket{\phi^{0,k_m}}_{A_mB_m}$ and a random variable $S$ distributed over $\set{0,1}^m$, interpreted as a subset of $[m]$. Alice and Bob apply $\control{{\mathrm X}}_{A_0A_i}$ and $\control{{\mathrm X}}_{B_0B_i}$, for all $i\in S$. Then they apply the quantum Fourier transform operator ${\mathrm F}$ and its inverse ${\mathrm F}^{\dagger}$ on $A_0$ and $B_0$, respectively. They measure the registers in the computational basis with outcomes $\QHA$ and $\mathit{QHB}$, respectively. Then for $k=0^m$, independent of the random variable $S$, we have \[ \prob{\QHA=\mathit{QHB}}=1\enspace. \] Moreover, for uniformly random $S$, for all $k\neq0^m$, we have \[ \prob{\QHA=\mathit{QHB}}\leq \frac{1}{2}\enspace.
\] \suppress{ \[ \prob{\QHA=\mathit{QHB}}\begin{cases} =1 & \text{if $k_1=\ldots=k_m=0$}\\ \leq \frac{1}{2} &\text{otherwise}\enspace, \end{cases} \] and if $S$ is $\delta$-biased, then \[ \prob{\QHA=\mathit{QHB}} \begin{cases} =1 & \text{if $k_1=\ldots=k_m=0$} \\ \leq \frac{1}{2}+\delta & \text{otherwise}. \end{cases} \] Furthermore, the states in $A_1B_1,\ldots, A_mB_m$ remain unchanged. } \end{lemma} \begin{proof} Let $S=S_1S_2\ldots S_m$. By Lemma~\ref{lem:cnotbell}, the state in register $A_0B_0$ before the measurements is \begin{align*} \br{{\mathrm F}\otimes {\mathrm F}^\dagger} \ket{\phi^{0,-\sum_{i=1}^mS_ik_i}}_{A_0B_0} &=\br{{\mathrm F} {\mathrm Z}^{-\sum_{i=1}^mS_ik_i}\otimes {\mathrm F}^\dagger}\ket{\phi^{0,0}} =\br{{\mathrm F} {\mathrm Z}^{-\sum_{i=1}^mS_ik_i}{\mathrm F}^\dagger\otimes \id}\ket{\phi^{0,0}}\\ &=\br{{\mathrm X}^{-\sum_{i=1}^mS_ik_i}\otimes \id}\ket{\phi^{0,0}} =\ket{\phi^{-\sum_{i=1}^mS_ik_i, 0}}\enspace, \end{align*} where the first and the last equality follow from Definition~\ref{def:Bellstates}; the second equality holds by Proposition~\ref{fac:uuepr} and the fact that~${\mathrm F}={\mathrm F}^T$. The third equality follows from Proposition~\ref{fac:paulioperatorcommute}. Hence \[\prob{\QHA=\mathit{QHB}}=\prob{\sum S_ik_i=0\mod d}\enspace.\] If $k=0^m$ then the above probability equals $1$. Suppose that $k$ is non-zero in a non-empty subset $J$ of coordinates in $\Br{m}$. Consider the set $Z$ of all $s\in \{0,1\}^m$ such that $\sum s_ik_i=0 \mod d$. Note that the minimum Hamming distance of elements of $Z$ restricted to $J$ is at least $2$, since otherwise there exists $j\in J$ such that $d$ divides $k_j$, contradicting $k_j\in \Br{d-1}$. Fix $j\in J$ and let $e_j\in \{0,1\}^m$ be the string which is $1$ in the $j$-th coordinate and zero everywhere else. For every $s\in Z$, the string $s+e_j$ is not in $Z$. 
Therefore, $|Z|\leq 2^{m-1}$ and for $S$ uniformly distributed over $\{0,1\}^m$ we have \[ \prob{\sum S_ik_i=0\mod d}\leq\frac{1}{2}. \] Note that the above bound is tight when $|J|=1$. Finally, by Lemma~\ref{lem:cnotbell} the state in the registers $A_1B_1,\ldots, A_mB_m$ remains unchanged. \end{proof} In order to reduce the collision probability to a smaller constant, quantum hashing may be repeated a constant number of times in every iteration with fresh control MESs and independent random subsets $S$. \paragraph{Classical seeds needed for quantum hashing.} Alice and Bob perform quantum hashing and communicate the hash values in every iteration but they only compare their hash values in a subset of iterations. We choose to do so in order to prevent the two parties from getting out of sync on which MES register to use in quantum hashing. As the hashing procedure only consumes a constant number of MESs in each iteration, the total number of MESs used in quantum hashing in the entire simulation is $\Theta\br{R_\mathrm{total}}=\Theta\br{n\sqrt{\epsilon}}$, and they constitute a constant fraction of the MESs distributed at the outset of the protocol; see Subsection~\ref{subsec:description-large-quantum}. On the other hand, generating independent $\Theta\br{rt}$-bit seeds, with~$r\in \Theta\br{1/\sqrt{\epsilon}}$ and $t\in \Theta\br{n\epsilon}$, for each of the $R_\mathrm{total}$ iterations would require $\Theta\br{n^2\epsilon}$ bits of shared randomness. The shared randomness is obtained by measuring a fraction of the MESs established at the beginning of the algorithm. Even in the large-alphabet case, sharing $\Theta\br{n^2\epsilon}$ bits of randomness would require too much communication. To circumvent this obstacle, Alice and Bob start with a smaller number of i.i.d. random bits and extend them to a much longer pseudo-random string.
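As a sanity check, the collision bound of Lemma~\ref{lem:quantumhash} (collision probability exactly $1$ for $k=0^m$, and at most $1/2$ otherwise under a uniformly random subset $S$) can be verified exhaustively for small $m$ and $d$. The snippet below is illustrative only.

```python
from itertools import product

def collision_prob(k, d):
    """Exact collision probability over a uniformly random subset
    S in {0,1}^m: the fraction of S with sum_i S_i * k_i = 0 (mod d)."""
    m = len(k)
    hits = sum(1 for S in product((0, 1), repeat=m)
               if sum(s * ki for s, ki in zip(S, k)) % d == 0)
    return hits / 2 ** m

# Exhaustive check of the lemma for small parameters.
d, m = 3, 4
for k in product(range(d), repeat=m):
    p = collision_prob(k, d)
    assert p == 1.0 if all(ki == 0 for ki in k) else p <= 0.5
```

The tight case $|J|=1$ from the proof is visible here as well: a single non-zero $k_j$ yields collision probability exactly $1/2$.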
In more detail, they measure $\Theta\br{n\sqrt{\epsilon}}$ MESs in the computational basis and record the binary representation of the outcomes in a string $R$. Then they each apply the deterministic algorithm of Lemma~\ref{lem:stretch} with $\delta=2^{-\Theta\br{n\sqrt{\epsilon}}}$ to $R$, obtaining a shared $\delta$-biased string $R'$ of length $\Theta\br{rtR_\mathrm{total}}=\Theta\br{n^2\epsilon}$. The following lemma bounds the collision probability when instead of a uniformly random seed, a $\delta$-biased seed is used in quantum hashing. Note that in our application of Lemma~\ref{lem:quantum hash-delta biased}, we have $m=O\br{rt}=O\br{n\sqrt{\epsilon}}$. \begin{lemma} \label{lem:quantum hash-delta biased} Suppose that the random variable $S$ in Lemma~\ref{lem:quantumhash} is $\delta$-biased. Then for all $k\neq 0^m$, we have \[ \prob{\QHA=\mathit{QHB}}\leq \frac{1}{2}+2^{m/2}\delta \enspace. \] \end{lemma} \begin{proof} Let $U$ denote the uniform distribution on $\{0,1\}^m$ and $Z$ be the subset of all $s\in\{0,1\}^m$ such that $\sum s_ik_i=0 \mod d$. By Propositions~\ref{prop:uniform-vs-delta-biased} and~\ref{prop:l1-distance}, we have \[ |U(Z)-S(Z)|\leq \frac{1}{2}\|U-S\|_1 \leq \frac{1}{2} \times 2^{m/2} \|U-S\|_2 \leq 2^{m/2}\delta\enspace. \] Therefore, by Lemma~\ref{lem:quantumhash}, we have \[ \prob{\QHA=\mathit{QHB}} = \prob{\sum S_ik_i=0\mod d} = S(Z) \leq \frac{1}{2}+2^{m/2}\delta \enspace. \] \end{proof} \subsubsection{Out-of-Sync Quantum Vernam Cipher}\label{subsec:out-of-sync QVC} Consider the scenario where Alice, based on her view of the simulation so far, implements a $+1$ block, while Bob believes their classical data are not consistent and therefore implements a $\sC$ iteration. Alice simulates a block of the input protocol $\Pi$ while using the next block of MES pairs in the queue $\mathit{IndexA}$ to encrypt her messages using QVC.
At the same time, Bob tries to reconcile the inconsistency through classical communication and does not send his messages using QVC. In this scenario, they also interpret the messages they receive incorrectly. Alice believes Bob is also encrypting his messages using QVC and she applies QVC decoding operations and her Pauli corrections on Bob's messages. Moreover, she potentially applies unitary operations of the input protocol on her local registers. Meanwhile, Bob treats Alice's messages as classical information about the data he believes they need to synchronize. More importantly, since he does not perform the QVC operations (decoding operations in odd rounds and encoding operations in even rounds) on his side, in each round the corresponding MES pair becomes entangled with the message register. So crucial information for continuing the simulation spreads to multiple registers. Moreover, this scenario could continue for several iterations. Nonetheless, we provide a simple way to redirect the quantum information back to the $ABC$ registers, while effectively reducing this type of error to corruptions introduced in the joint state due to transmission errors by the adversary. Once reduced to such errors, Alice and Bob can actively rewind the incorrect part of the simulation and resume from there. As explained earlier, the first step for Alice and Bob is to ensure they have full knowledge of each other's metadata. Once they both achieve this goal, they discover the discrepancy in the number of MES blocks they have used. Suppose Bob has used fewer blocks of MES pairs than Alice, i.e., $\ellQVCA>\ell_\mathrm{QVC}^\mathrm{B}$ and he discovers this at the beginning of the $i$-th iteration. Let $E_1E_2\ldots E_{4r}$ be the registers with Bob containing halves of the $4r$ MESs in the first MES block that Alice has used, say in the $i'$-th iteration, but Bob has not so far. 
Note that in iteration $i'$, Alice has used the MES pairs corresponding to $E_1E_2,E_5E_6,\ldots,E_{4r-3}E_{4r-2}$ on her side to encrypt quantum information using QVC and she has performed QVC decoding operations on Bob's messages and her marginal of MES pairs corresponding to $E_3E_4,E_7E_8,\ldots,E_{4r-1}E_{4r}$. In the $i$-th iteration, Alice and Bob both send dummy messages to each other. Let $C_1,C_2,\ldots,C_r$ denote the $r$ message registers sent from Alice to Bob after communication over the noisy channel. For every $j\in[r]$, upon receiving $C_j$, Bob applies QVC decoding operations on $C_j$ and $E_{4j-3}E_{4j-2}$, and then applies QVC encoding operations on $C_j$ and $E_{4j-1}E_{4j}$, i.e., he applies \[ \br{\control{{\mathrm Z}}}_{E_{4j}C_j}\br{\control{{\mathrm X}}}_{E_{4j-1}C_j}\br{\control{{\mathrm X}}^{-1}}_{E_{4j-3}C_j}\br{\control{{\mathrm Z}}^{-1}}_{E_{4j-2}C_j}, \] and then he discards the message register $C_j$. The effect of these operations is the same as if Alice and Bob had both used the MES block in sync, i.e., in the $i'$-th iteration, \emph{except the following also happened independently of channel error\/}: \begin{enumerate} \item Alice's messages in the $i'$-th iteration were replaced by $C_1,\ldots,C_r$ and Bob applied his QVC decoding operations on these dummy messages rather than the messages Alice intended to communicate, \item the unitary operations used by Bob on the registers $BC$ were all identity, and \item Bob's messages were replaced by his messages of the $i'$-th iteration and Alice's QVC decoding operations were applied on these (classical) messages. \end{enumerate} The above procedure redirects the quantum information leaked to the MES registers back to the $ABC$ registers, while introducing errors which act exactly the same as transmission errors introduced by the adversary. 
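The claim that these effective errors behave exactly like transmission errors can be made concrete in the qubit case. For $d=2$, a single QVC round in which the channel applies ${\mathrm X}^a{\mathrm Z}^b$ to the message should leave the message register carrying ${\mathrm X}^a{\mathrm Z}^b$ and deposit the syndrome phases $b$ and $-a$ in the two key MESs, cf.\ Equation~\eqref{QVC with error}. The following NumPy sketch checks this numerically; it is a self-contained illustration, not part of the protocol.

```python
import numpy as np

I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)   # |phi^{0,0}> for d = 2

def apply_1q(psi, U, q, n):
    # apply the single-qubit unitary U on qubit q of an n-qubit state
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(U, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_ctrl(psi, U, c, t, n):
    # apply U on qubit t in the branch where control qubit c is |1>
    psi = psi.reshape([2] * n).copy()
    sel = [slice(None)] * n
    sel[c] = 1
    t_rel = t - (t > c)          # target axis index inside the branch
    block = np.moveaxis(np.tensordot(U, psi[tuple(sel)],
                                     axes=([1], [t_rel])), 0, t_rel)
    psi[tuple(sel)] = block
    return psi.reshape(-1)

def qvc_round(msg, a, b):
    """One QVC round for qubits (d = 2): Alice encodes with key MESs in
    A1B1 and A2B2, the channel applies X^a Z^b to the message, and Bob
    decodes (for d = 2, X^{-1} = X and Z^{-1} = Z).  Returns the final
    5-qubit state on registers A1 B1 A2 B2 C.  Illustrative sketch."""
    n = 5                                    # A1=0, B1=1, A2=2, B2=3, C=4
    psi = np.kron(bell, np.kron(bell, msg))
    psi = apply_ctrl(psi, X, 0, 4, n)        # Alice: controlled-X, key A1
    psi = apply_ctrl(psi, Z, 2, 4, n)        # Alice: controlled-Z, key A2
    err = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
    psi = apply_1q(psi, err, 4, n)           # adversary corrupts C
    psi = apply_ctrl(psi, Z, 3, 4, n)        # Bob: controlled-Z^{-1}, key B2
    psi = apply_ctrl(psi, X, 1, 4, n)        # Bob: controlled-X^{-1}, key B1
    return psi
```

The final state factorizes as $\ket{\phi^{0,b}}_{A_1B_1}\otimes\ket{\phi^{0,-a}}_{A_2B_2}\otimes {\mathrm X}^a{\mathrm Z}^b\ket{\psi}_C$ (with $-a \equiv a$ for $d=2$), so measuring both halves of each key pair in the Fourier basis reveals the syndrome, exactly the information Alice and Bob record in their Pauli data.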
As in the case of corruptions by the adversary, once Alice and Bob measure the MES block, the error collapses to a Pauli error which can be determined by comparing the measurement outcomes by Alice and Bob. We choose to perform the measurements at the end of the $i$-th iteration, rather than leaving the algorithm to detect the error through quantum hashing (as in the case of transmission errors). \subsubsection{Out-of-Sync Quantum Hashing} \label{sec:out-of-sync QH} Consider the scenario in which Alice and Bob have used the same number of MES blocks for communication using QVC but have measured different subsets of MES blocks. Suppose that when they perform quantum hashing, the random subset of MESs they choose contains an MES in registers $A_1B_1$, which has been measured by only one party, say Alice. Let $V_\mathrm{A}V_\mathrm{B}$ be the registers used as the control registers by Alice and Bob in quantum hashing. Alice and Bob compare their quantum hash values only if they believe they have measured the same subset of MES blocks. Therefore, if they compare their hash values in this iteration, it is due to a transmission error or a metadata hash collision. Note that in this scenario Bob applies a controlled-${\mathrm X}$ operation on the partially measured MES, while Alice who has measured her marginal does not. Since $A_1$ is already measured by Alice, after Bob's controlled-${\mathrm X}$ operation the registers $A_1B_1$ do not get entangled with $V_\mathrm{A}V_\mathrm{B}$. However, the state in the $V_\mathrm{A}V_\mathrm{B}$ registers will be mapped to $\ket{\phi^{0,a}}$, for a random $a\in\{0,1,\ldots,d-1\}$ corresponding to Alice's measurement outcome. This (quite probably) results in Alice and Bob taking incorrect actions from which they can recover once they realize the inconsistency in their classical data. The algorithm is designed to ensure that the register $B_1$ is measured by Bob and $A_1B_1$ is not reused by the two parties in future iterations. 
Moreover, Bob's controlled-${\mathrm X}$ operation does not change the outcome of his measurement on $B_1$. This ensures that Alice and Bob can correctly learn any potential error on the message register in the corresponding communication round once they learn each other's Pauli data. A subtler scenario occurs when Alice and Bob perform quantum hashing when they have used different numbers of MES blocks. Suppose that $\ellQVCA>\ell_\mathrm{QVC}^\mathrm{B}$, i.e., Alice has used more MES blocks and the random subset of MESs they choose for quantum hashing contains an MES which has been only used on Alice's side. As explained in the previous section, when QVC operations are performed only on one side, the MES pair used as the key becomes entangled with the message register and the quantum information in the message register leaks to these MESs. In the above scenario, once quantum hashing is done the information leaks into the additional MES used in hashing. Since the same MES register is used as the control system when applying the $\control{{\mathrm X}}$ operations on all the MESs in the random subset, the information may leak even further into those registers as well. Surprisingly, the simple solution we provided to recover from out-of-sync QVC resolves this issue as well. The first measure we need to take is to ensure quantum hashing is performed in a sequential way on the MESs in the random subset, starting from the MES used earliest to the latest one. This ensures that the states in the MES registers which have been used both by Alice and Bob do not get disturbed. However, the remaining MESs in the random subset become entangled with the additional MES used in hashing and potentially each other. We need to ensure that these MES registers are not reused in future iterations. 
Once the two parties synchronize their metadata and realize that they are out of sync on the number of MESs they have used, Bob completes QVC on his side as described in the previous section and immediately measures his marginal of the MES block. This ensures that he will never reuse this block in future iterations. The algorithm is designed so that by the time Alice needs to decide whether to recycle this MES block or not, she will have measured her marginal of the MES registers. We prove that except with probability $2^{-\Theta\br{n\epsilon}}$ recycling is successful in all iterations and such a block of MES registers is never recycled. Despite the fact that quantum hashing is performed before Bob completes the QVC operations on his side, this procedure has the same effect as if Bob had completed QVC \emph{before\/} the quantum hashing was performed. To understand this phenomenon, consider the following simpler scenario. Suppose that Alice and Bob share $3$ copies of the MES $\ket{\phi^{0,0}}$ in registers $A_1B_1$, $A_2B_2$ and $V_\mathrm{A}V_\mathrm{B}$. Alice uses the MES pair in registers $A_1B_1$ and $A_2B_2$ as the key to encrypt a message in register $C$ using QVC and sends the message register to Bob. Suppose that the adversary applies the Pauli error ${\mathrm X}^a {\mathrm Z}^b$ on the message register for some $a,b\in\{0,1,\ldots,d-1\}$. Now suppose that before Bob applies his QVC decoding operations, Alice applies $\control{{\mathrm X}}$ on $V_\mathrm{A}A_1$ with $V_\mathrm{A}$ being the control system. Then their joint state is \begin{align*} \br{\control{{\mathrm X}^{-1}}}_{B_1C}\br{\control{{\mathrm Z}^{-1}}}_{B_2C} \br{\control{{\mathrm X}}}_{V_\mathrm{A}A_1} \br{{\mathrm X}^a{\mathrm Z}^b}_C &\br{\control{{\mathrm Z}}}_{A_2C}\br{\control{{\mathrm X}}}_{A_1C} \\ &\ket{\phi^{0,0}}_{V_\mathrm{A}V_\mathrm{B}} \ket{\phi^{0,0}}_{A_1B_1} \ket{\phi^{0,0}}_{A_2B_2} \ket{\psi}_C. 
\end{align*} Note that $\br{\control{{\mathrm X}}}_{V_\mathrm{A}A_1}$ commutes with Bob's QVC decoding operation $\br{\control{{\mathrm X}^{-1}}}_{B_1C}\br{\control{{\mathrm Z}^{-1}}}_{B_2C}$, as the two act on disjoint sets of registers. Therefore, by Equation~\eqref{QVC with error} their joint state is given by \[ \br{\control{{\mathrm X}}}_{V_\mathrm{A}A_1} \br{{\mathrm X}^a {\mathrm Z}^b}_C \ket{\phi^{0,0}}_{V_\mathrm{A}V_\mathrm{B}} \ket{\phi^{0,b}}_{A_1B_1} \ket{\phi^{0,-a}}_{A_2B_2} \ket{\psi}_C. \] Note that $V_\mathrm{A}V_\mathrm{B}$ and $A_1B_1$ are entangled as a result of the controlled-${\mathrm X}$ operation. Nevertheless, Alice and Bob still extract the correct error syndrome when they measure $A_1,A_2$ and $B_1,B_2$, respectively, and compare their measurement outcomes, as if Bob's QVC decoding operations were performed before Alice's $\br{\control{{\mathrm X}}}_{V_\mathrm{A}A_1}$ operation. This is due to the fact that the error on the message register is reflected in the MES pair as phase errors and the phase error in each MES can still be detected correctly by local measurements in the Fourier basis even after the controlled-${\mathrm X}$ operation is applied. In the out-of-sync quantum hashing scenario described above, a similar effect occurs. Finally, note that when Alice and Bob do not agree on the number of MES blocks they have used, they do not compare their quantum hash values unless a transmission error or a metadata hash collision occurs. \subsubsection{First representation of the joint quantum state} \label{subsec:Q-1st-rep-sync} As in Section \ref{sec:general-descrpition-largeclasscial}, we start by introducing a first representation of the joint state, denoted $\mathit{JS}1$, which in turn is simplified into a more informative representation. This latter representation, denoted $\mathit{JS}2$, is the representation which Alice and Bob need to compute correctly in order to make progress in the simulation of the input protocol $\Pi$ and decide their next action in $\Pi'$.
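Before turning to the representations, the commutation phenomenon from the worked example above can be checked numerically in the qubit case ($d=2$). The following self-contained NumPy sketch is ours (register ordering, helper names, and the specialization to qubits are illustrative assumptions); it verifies that after Alice's extra controlled-${\mathrm X}$, the MES pair still carries the syndrome $\ket{\phi^{0,b}}\ket{\phi^{0,-a}}$, using that $\mathrm{CX}^{-1}=\mathrm{CX}$ and $\mathrm{CZ}^{-1}=\mathrm{CZ}$ when $d=2$.

```python
import numpy as np

# Qubit (d = 2) sanity check: Alice's extra controlled-X on V_A A_1
# commutes with Bob's decoding (disjoint registers), so the syndrome left
# in the MES pair is unchanged.
I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
VA, VB, A1, B1, A2, B2, C = range(7)           # register order, 7 qubits

def kron_ops(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def on(op, t):                                  # op acting on qubit t
    ops = [I2] * 7; ops[t] = op
    return kron_ops(ops)

def ctrl(op, c, t):                             # controlled-op, control c
    ops0 = [I2] * 7; ops0[c] = P0
    ops1 = [I2] * 7; ops1[c] = P1; ops1[t] = op
    return kron_ops(ops0) + kron_ops(ops1)

def kron_vecs(vs):
    out = np.array([1.0 + 0j])
    for v in vs:
        out = np.kron(out, v)
    return out

def phi(b):                                     # |phi^{0,b}> for d = 2
    return np.array([1., 0., 0., (-1.) ** b]) / np.sqrt(2)

def check(a, b, psi):
    err = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
    state = kron_vecs([phi(0), phi(0), phi(0), psi])
    state = ctrl(X, A1, C) @ state              # Alice's QVC encoding
    state = ctrl(Z, A2, C) @ state
    state = on(err, C) @ state                  # adversary's X^a Z^b
    state = ctrl(X, VA, A1) @ state             # Alice's controlled-X
    state = ctrl(Z, B2, C) @ state              # Bob's QVC decoding
    state = ctrl(X, B1, C) @ state
    want = ctrl(X, VA, A1) @ kron_vecs(
        [phi(0), phi(b), phi((-a) % 2), err @ psi])
    return np.allclose(state, want)

psi = np.array([0.6, 0.8j])                     # arbitrary message state
assert all(check(a, b, psi) for a in (0, 1) for b in (0, 1))
```

The final assertion passes for all four Pauli errors ${\mathrm X}^a{\mathrm Z}^b$ with $a,b\in\{0,1\}$, matching the displayed state.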
Recall that due to the recycling of MESs, each block of MES pairs may be used multiple times to encrypt messages using QVC. The representations $\mathit{JS}1$ and $\mathit{JS}2$ defined below are valid only if the recycling has been successful so far in $\Pi'$, namely that Alice and Bob have recycled the same blocks of MES registers and that these registers were indeed in the $\ket{\phi^{0,0}}$ state when recycled. We prove that except with probability $2^{-\Theta\br{n\epsilon}}$ the recycling is successful throughout the algorithm. \suppress{ In addition to the registers $ABCER$, the representations $\mathit{JS}1$ and $\mathit{JS}2$ contain registers corresponding to MESs recycled for communication using QVC. To simplify the representations, in every iteration, the MES block being recycled is represented by a new block of MES registers all in the ideal $\ket{\phi^{0,0}}$ state, as if no recycling were done and a fresh block of MES pairs were available each time.} Recall that in the adversarial noise model, the adversary Eve can introduce arbitrary errors on the quantum communication register $C^\prime$ that passes through her hands, subject to the constraints given by Eq.~\eqref{eqn:noise-model-1} and Eq.~\eqref{eqn:noise-model-2}. \suppress{the global action of the adversary Eve on the $n'$ communication registers in $\Pi'$ has a representation with Kraus operators of weight at most $\epsilon n'$, i.e., each Kraus operator can be written as a linear combination of Pauli errors of weight at most $\epsilon n'$.} Furthermore, as explained in Section~\ref{subsec:out-of-sync QVC}, the algorithm is designed so that once the two parties agree on the number of blocks of MES pairs they have used, the error in the joint state due to out-of-sync QVC in any iteration is translated to a transmission error on the message registers, as if the MES blocks were used in sync and transmission errors were introduced by the adversary.
We emphasize that in both cases, the error on the message register is a mixture of linear combinations of Pauli errors and once Alice and Bob measure a block of MES pairs to extract (part of) the syndrome, the error on the corresponding message register collapses to a Pauli error. Then the joint state can be written in terms of a new mixture of linear combinations of Pauli errors conditioned on the measurement outcomes, which are recorded in the Pauli data by the two parties. To simplify the joint state representation and the analysis of the algorithm, without loss of generality, we focus on a fixed but arbitrary error syndrome in any such linear combination of Pauli errors arising in the simulation protocol $\Pi'$. We prove the correctness of the algorithm for any such error syndrome which by linearity implies the correctness of the algorithm against any adversary defined in Section~\ref{sec:noisy_comm_model}. Let~$E \in \CMcal{P}_{d,n^\prime}$ be a Pauli error with~$\mathrm{wt}\br{E}\leq \epsilon n^\prime$. In the remainder of this section, we assume $E$ is the error introduced by the adversary into the~$n^\prime$ communicated qudits in $\Pi'$. We first define the representations $\mathit{JS}1$ and $\mathit{JS}2$ after $i$ iterations of the algorithm in the case when the two parties have used the same number of blocks of MES pairs, i.e., $\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}$. In Subsection~\ref{subsec:Q-2nd-rep-out-of-sync}, we explain how these representations are modified when Alice and Bob are out of sync in the number of MES blocks they have used. We sketch how to obtain $\mathit{JS}1$ from $\mathit{FullMA}$, $\mathit{FullMB}$, $\mathit{FullPA}$ and $\mathit{FullPB}$, when the error syndrome is given by $W\in \br{\Sigma^2}^*$. Recall that when~$\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}$, there is a one-to-one correspondence between the state of each MES register and the Pauli error on the message register in the corresponding communication round. 
Therefore, in order to simplify the representations, without introducing any ambiguity, we omit the MES registers from the representations $\mathit{JS}1$ and $\mathit{JS}2$. The first representation $\mathit{JS}1$ of the joint state after $i$ iterations (when $\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}$) is given by \begin{align} \label{eqn:Q-JS1_1} \mathit{JS}1 = [*\ellQVCA] \cdots [*2][*1] \ket{\psi_{\mathrm{init}}}^{A B C E R}, \end{align} where $\ket{\psi_{\mathrm{init}}}^{A B C E R}$ is the initial state of the original input protocol $\Pi$ and the content of each bracket is described below. The $j$-th bracket corresponds to the $j$-th block of MES pairs which have been used by both Alice and Bob and contains, from right to left, $r$ iterations of the following: \begin{quote} Alice's unitary operation - \\ Pauli error on Alice's message - \\ Bob's Pauli correction - Bob's unitary operation - \\ Pauli error on Bob's message - \\ Alice's Pauli correction. \end{quote} Similar to the teleportation-based protocol, we allow for an additional unitary operation by Alice on the far left when she implements a block of type $-1$. Using the same rules described in Section~\ref{sec:large-classical-second-rep}, in each bracket, the block of unitary operations of the input protocol $\Pi$ applied by Alice (if any) and her block type ($\pm1$ or $0$) can be computed from $\mathit{FullMA}$. Moreover, her Pauli corrections are recorded in $\mathit{FullPA}$ and the block of $\mathit{FullPA}$ containing these Pauli corrections can be located using $\mathit{FullMA}$.
Recall that each block of $\mathit{FullPA}$ may correspond to two different types of iterations: when Alice measures a block of MES pairs to extract the error syndrome she concatenates $\mathit{FullPA}$ with a new block containing her measurement outcomes (with no Pauli corrections), whereas in iterations in which she communicates using QVC, she may apply Pauli corrections in between and records the Pauli corrections in $\mathit{FullPA}$. Therefore, $\mathit{FullMA}$ may be used to distinguish these two different types of blocks and locate the corresponding Pauli corrections in $\mathit{FullPA}$. Similarly, in each bracket, the block of unitary operations of $\Pi$ applied by Bob, his block type and his Pauli corrections are obtained from $\mathit{FullMB}$ and $\mathit{FullPB}$. Finally, in $\mathit{JS}1$, the Pauli errors on the messages in each bracket are specified in terms of the error syndrome $W=W_1W_2\ldots W_{\ellQVCA} \in {\br{\Sigma^2}}^{2r\times \ellQVCA}$ defined below. The communication in each iteration of the algorithm has two parts. In the first part, the parties use a constant number of rounds to communicate the pointers and hash values. Any transmission error introduced by the adversary on these messages only affects the actions of the two parties in the current and future iterations, which will be recorded in the metadata and Pauli data and reflected in the joint state representation. The second part involves $2r$ rounds of communication, in which either classical information is communicated (e.g., to reconcile inconsistencies in the data structures) or QVC is used to communicate quantum information (on one side or both). Transmission errors on these messages can directly modify the joint state and need to be separately taken into account in the joint state representation. 
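The correspondence between metadata symbols and Pauli-data blocks described above can be sketched in Python. The symbol spellings and the return format below are our own illustrative choices; the protocol's actual alphabet and block layout are the ones defined in the data-structure section.

```python
QVC_SYMBOLS = {"+1", "-1", "0", "OED"}   # iterations in which QVC is performed

def locate_pauli_blocks(full_meta):
    """Map each block appended to FullPA back to the iteration that
    produced it.  A block is appended when the party measures an MES block
    ('M': measurement outcomes, no corrections) or performs QVC (Pauli
    corrections); 'C' and "C'" iterations append nothing.  Metadata
    symbols are written as strings here purely for readability."""
    blocks = []
    for i, sym in enumerate(full_meta):
        if sym == "M":
            blocks.append((i, "outcomes"))
        elif sym in QVC_SYMBOLS:
            blocks.append((i, "corrections"))
    return blocks

def ell_qvc(full_meta):
    """Number of MES blocks used in QVC so far: the count of
    +1 / -1 / 0 / OED symbols in the metadata."""
    return sum(sym in QVC_SYMBOLS for sym in full_meta)
```

For example, for the metadata `["+1", "C", "M", "0", "C'", "OED", "M"]` we get `ell_qvc = 3`, and the five Pauli-data blocks are attributed, in order, to iterations 0, 2, 3, 5 and 6, alternating between correction blocks and measurement-outcome blocks.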
Let $W'=W'_1W'_2\ldots W'_{R_\mathrm{total}}$ denote the error syndrome corresponding to the restriction of~$E$ to these messages over the $R_\mathrm{total}$ iterations of the algorithm, where each $W'_j$ is a string in $\br{\Sigma^2}^{2r}$ representing a Pauli error on $2r$ qudits. For every $j\in \Br{\ellQVCA}$, if the $j$-th block of MES pairs has been used in sync on both sides, say in iteration $j'$, we let $W_j=W'_{j'}$. Otherwise, the $j$-th block of MES pairs has been used out of sync and we define $W_j$ to be the error syndrome arising on the message registers in the corresponding communication rounds due to the remedial actions the parties take to recover from the out-of-sync QVC; see Section~\ref{subsec:out-of-sync QVC} for more details.\suppress{We set $W_j$ to be $\br{0^2}^{2r}$, for all $j\in \Br{\ellQVCA+1,i}$.} Each $W_j\in \br{\Sigma^2}^{2r}$ specifies the $2r$ Pauli errors in the $j$-th bracket from the right. Note that in order to compute the representation $\mathit{JS}1$, one needs to know $\mathit{FullMA}$, $\mathit{FullMB}$, $\mathit{FullPA}$, $\mathit{FullPB}$ and $W$. This information is not necessarily available to Alice and Bob at any point during the simulation. In fact, we use the representation in order to analyze the progress of the simulation. Alice and Bob compute their best guesses for $\mathit{JS}1$ based on their estimates of each other's classical data. They only compute their estimates of $\mathit{JS}1$ when they believe that they have full knowledge of each other's metadata and Pauli data, fully agree on the recycling data, have used the same number of MES blocks and have measured all blocks of MES pairs which were corrupted in communication using QVC or due to out-of-sync QVC. Alice's estimate $\JSoneA$ of $\mathit{JS}1$ is of the same form as in Equation \eqref{eqn:Q-JS1_1}, except she uses her best guess of $\mathit{FullMB}$ and $\mathit{FullPB}$ in the above procedure.
Moreover, the string $W$ in $\mathit{JS}1$ is replaced by $W^\mathrm{A}=W^\mathrm{A}_1 \ldots W^\mathrm{A}_{\ellQVCA} \in {\br{\Sigma^2}}^{2r\times \ellQVCA}$ computed by Alice as follows. For every $j\in\Br{\ellQVCA}$, \begin{itemize} \item if $\mathit{RA}\Br{j}=\sS$, then she sets $W^\mathrm{A}_j=\br{0^2}^{2r}$. Recall that when Alice computes $\JSoneA$, she believes that they have used the same number of MES blocks and have both already measured all blocks of MES pairs which were corrupted in communication using QVC. Therefore, in Alice's view, the remaining rounds of communication using QVC have not been corrupted. \item Otherwise, $\mathit{RA}\Br{j}=\widetilde{\RB}\Br{j}=\sM$, i.e., Alice has measured the corresponding block of MES pairs and believes Bob has measured them as well. Using $\mathit{FullMA}$ and $\widetilde{\MB}$, Alice locates the corresponding measurement outcomes in $\mathit{FullPA}$ and $\widetilde{\PB}$ and sets $W^\mathrm{A}_j$ to be the error syndrome obtained from the measurement outcomes. \end{itemize} Note that if Alice computes $\JSoneA$ in an iteration with no transmission errors or hash collisions, then the computed representation $\JSoneA$ is indeed equal to $\mathit{JS}1$. Bob computes $\mathit{JS}1^\mathrm{B}$ similarly based on his best estimate of $\mathit{FullMA}$ and $\mathit{FullPA}$. \subsubsection{Second representation of the joint quantum state} \label{subsec:Q-2nd-rep-sync} The representation $\mathit{JS}2$ is obtained from $\mathit{JS}1$ as follows. In $\mathit{JS}1$, starting from the rightmost bracket, we recursively try to cancel consecutive brackets if their contents correspond to inverses of one another.
Once no further such cancellation is possible, what we are left with is the $\mathit{JS}2$ representation, which is of the following form (when $\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}$): \begin{align} \label{eqn:Q-JS2_1} \mathit{JS}2 = [\#b] \cdots [\#1] [U_{gr} \cdots U_{(g-1)r + 2} U_{(g-1)r + 1} ] \cdots [U_r \cdots U_2 U_1] \ket{\psi_{\mathrm{init}}}^{A B C E R}, \end{align} where $g$ is the largest integer such that the concatenation of the first $g$ brackets starting from the right equals the sequence $U_{gr},\ldots,U_2,U_1$ of unitary operations of $\Pi$. As in Section~\ref{sec:general-descrpition-largeclasscial}, we refer to these brackets as the ``good'' blocks, and the remaining $b$ brackets which need to be actively rewound are called the ``bad'' blocks. Once the parties have synchronized their classical data (to the best of their knowledge) as described earlier, they have all the information they need to compute their best guesses $\JSoneA$ and $\mathit{JS}1^\mathrm{B}$ of $\mathit{JS}1$. Using the same procedure described above, from $\JSoneA$ Alice computes her estimate $\JStwoA$ of $\mathit{JS}2$. Similarly, Bob computes $\mathit{JS}2^\mathrm{B}$ from $\mathit{JS}1^\mathrm{B}$. Note that the two parties may have different views of their joint state, based on which they decide how to further evolve the state in $\Pi'$. The rules by which Alice and Bob decide their respective types ($\pm1$ or $0$) for the next block in $\Pi'$, and which blocks of unitary operations of $\Pi$ (if any) are involved, are the same as in the teleportation-based protocol; see Section~\ref{sec:large-classical-second-rep}. \subsubsection{Representation of the joint state while out-of-sync} \label{subsec:Q-2nd-rep-out-of-sync} We now define the representations $\mathit{JS}1$ and $\mathit{JS}2$ in the case when $\ellQVCA \neq \ell_\mathrm{QVC}^\mathrm{B}$.
Note that in this case, similar to the teleportation-based protocol, conditioned on the classical data held by Alice and Bob and a fixed error syndrome $W'$ chosen by the adversary, $\mathit{JS}1$ and $\mathit{JS}2$ represent a pure state. However, in addition to the $ABCER$ registers we need to include the MES blocks which have been used by only one party. Let $u \defeq |\ellQVCA -\ell_\mathrm{QVC}^\mathrm{B} |$. For concreteness, suppose that $\ellQVCA > \ell_\mathrm{QVC}^\mathrm{B}$. Then the $\mathit{JS}1$ representation is of the following form: \begin{align}\label{eqn:Q-JS1-OoS} \mathit{JS}1 = [*\ellQVCA] \cdots [*\ell_\mathrm{QVC}^\mathrm{B}] \cdots [*2] [*1] \ket{\psi_{\mathrm{init}}}^{A B C E R}\enspace. \end{align} The content of the first $\ell_\mathrm{QVC}^\mathrm{B}$ brackets from the right, corresponding to the MES blocks which have been used by both parties, is obtained as described in Subsection~\ref{subsec:Q-1st-rep-sync}. The leftmost $u$ brackets correspond to the MES blocks which have been used only by Alice. We refer to these blocks as the \emph{ugly\/} blocks. These brackets contain Alice's unitary operations from the input protocol $\Pi$, her Pauli correction operations and QVC encoding and decoding operations, as well as all the MES registers involved in these iterations which remain untouched on Bob's side. The representation $\mathit{JS}2$ is obtained from $\mathit{JS}1$ as follows: We denote by $[@u]\cdots [@1]$ the leftmost $u$ brackets corresponding to the ugly blocks.
We use the procedure described in Subsection~\ref{subsec:Q-2nd-rep-sync} on the rightmost $\ell_\mathrm{QVC}^\mathrm{B}$ brackets in $\mathit{JS}1$ to obtain $\mathit{JS}2$ of the following form: \begin{align} \label{eqn:Q-JS2_OoS} \mathit{JS}2 = [@u] \cdots [@1] [\#b] \cdots [\#1] [U_{gr} \cdots U_{(g-1)r + 2} U_{(g-1)r + 1} ] \cdots [U_r \cdots U_2 U_1] \ket{\psi_{\mathrm{init}}}^{A B C E R}\enspace, \end{align} for some non-negative integers $g$ and $b$, which we refer to as the number of \emph{good\/} blocks and the number of \emph{bad\/} blocks in the $\mathit{JS}2$ representation, respectively. We point out that Alice and Bob do not compute their estimates of $\mathit{JS}1$ and $\mathit{JS}2$ unless, based on their view of the simulation so far, they believe that they have used the same number of MES blocks. Therefore, whenever computed, $\JSoneA,\mathit{JS}1^\mathrm{B}$ and $\JStwoA,\mathit{JS}2^\mathrm{B}$ are always of the forms described in Subsections~\ref{subsec:Q-1st-rep-sync} and~\ref{subsec:Q-2nd-rep-sync}, respectively. Note that Alice and Bob can realize that $\ellQVCA \neq \ell_\mathrm{QVC}^\mathrm{B}$ by learning each other's metadata. If they then proceed as described in Subsection~\ref{subsec:out-of-sync QVC} and no error or collision occurs, they reduce the number of ugly blocks in $\mathit{JS}2$ by one. \subsubsection{Constant collision probability for classical hashing suffices} As in the teleportation-based algorithm, in our algorithm in the plain model the hash function of Lemma~\ref{lem:hashes} is used to check for inconsistencies in the classical data maintained by Alice and Bob. Recall that in Section~\ref{sec:BKlarge}, the collision probability $p$ for the hash function $h$ of Lemma~\ref{lem:hashes} is chosen to be $1/\mathrm{poly}(n)$. The output of the hash function is of length $o=\Theta\br{\log \frac{1}{p}}=\Theta(\log n)$ bits.
Therefore, in the large-alphabet case the hash values corresponding to the classical data can be communicated using only a constant number of rounds. However, in the small-alphabet case, using a logarithmic number of rounds to communicate the hash values leads to a vanishing simulation rate. In our algorithm in this section, we use the hash family of Lemma~\ref{lem:hashes} with a constant collision probability and show that $p=\Theta\br{1}$ suffices to keep the number of hash collisions low. We address this issue already in the simpler large-alphabet setting in order to simplify, at the conceptual level, the proof in the small-alphabet case. Following Haeupler \cite{Haeupler:2014}, we circumvent the barrier explained above using the observation that hashing only makes one-sided errors. In other words, a collision occurs only when the data being compared are not equal, which in turn is a result of corruptions by the adversary. As the error budget of the adversary is bounded by $2n\epsilon$, one would expect the total number of rounds in which the classical data being compared are not equal to be bounded by $\Theta\br{n\epsilon}$. In fact, this allows us to have a constant collision probability while keeping the number of hash collisions in the same order as the number of transmission errors. \subsubsection{Summary of main steps} In Algorithm~\ref{algo:Main steps-large-alphabet-Yao}, we outline the steps followed by Alice and Bob in the simulation. Note that since synchronizing recycling data creates new Pauli data, we choose to do this step before synchronizing the Pauli data. Similar to the teleportation-based case, the algorithm is designed so that unless there is a transmission error or a hash collision in comparing a given type of data, Alice and Bob will simultaneously go down these steps while never returning to a previous step. This is in fact a crucial property used in the analysis of the algorithm.
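The one-sided nature of hash comparison discussed above is easy to capture in a toy model. The sketch below uses our own naming and is purely illustrative; the actual hash family is the one from Lemma~\ref{lem:hashes}.

```python
import random

def hashes_match(x, y, p, rng):
    """Toy model of comparing data via a hash with collision probability p:
    equal data always produce matching hashes (no false mismatches), while
    unequal data spuriously match -- a collision -- with probability ~ p.
    With constant p the hash is O(log(1/p)) = O(1) symbols long, yet the
    expected number of collisions is at most p times the number of rounds
    in which the compared data actually differ."""
    if x == y:
        return True                  # never errs on equal data
    return rng.random() < p          # one-sided error: a collision

rng = random.Random(0)
assert hashes_match("abc", "abc", p=0.9, rng=rng)       # equal: always match
assert not hashes_match("abc", "abd", p=0.0, rng=rng)   # p = 0: never collide
```

Since only rounds with unequal data can collide, and the adversary's budget bounds the number of such rounds, a constant $p$ keeps the expected number of collisions of the same order as the number of transmission errors.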
\RestyleAlgo{boxruled} \begin{algorithm} Agree on the history of the simulation contained in metadata, i.e., ensure $\mathit{FullMA}= \widetilde{\MA}$ and $\mathit{FullMB}=\widetilde{\MB}$. This involves Algorithm~\ref{algo:rewindMD}---\textbf{\textsf{rewindMD}} and Algorithm~\ref{algo:extendMD}---\textbf{\textsf{extendMD}}.\\ Synchronize the number of MES blocks used in QVC, in particular, ensure $\ellQVCA=\widetilde{\ellQVCB}$ and $\ell_\mathrm{QVC}^\mathrm{B}=\widetilde{\ellQVCA}$. This is done via Algorithm~\ref{algo:Q-syncMES}---\textbf{\textsf{Q-syncMES}}.\\ Agree on the measurement pointers and the recycling data up to the pointers, in particular, ensure $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}}=\br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ and $\br{\ell_\RB,\mathit{RB}\Br{1:\ell_\RB}}=\br{\widetilde{\ellRA},\widetilde{\RA}\Br{1:\widetilde{\ellRA}}}$. This involves Algorithm~\ref{algo:rewindRD}---\textbf{\textsf{rewindRD}}.\\ Ensure no undetected quantum error from earlier rounds exists. This is done by ensuring $\QHA=\mathit{QHB}$ and involves Algorithm~\ref{algo:QmeasureEPR}---\textbf{\textsf{measuresyndrome}}.\\ Ensure $\ell_\RA=\ellQVCA$ and $\ell_\RB=\ell_\mathrm{QVC}^\mathrm{B}$. This is achieved via Algorithm~\ref{algo:extendRD}---\textbf{\textsf{extendRD}}.\\ Agree on Pauli data, in particular, ensure $\mathit{FullPA}=\widetilde{\PA}$ and $\mathit{FullPB}=\widetilde{\PB}$. This is done via Algorithm~\ref{algo:Q-rewindPD}---\textbf{\textsf{Q-rewindPD}} and Algorithm~\ref{algo:Q-extendPD}---\textbf{\textsf{Q-extendPD}}.\\ Compute the best guess for $\mathit{JS}1$ and $\mathit{JS}2$. If there are any ``bad'' blocks in the guess for $\mathit{JS}2$, reverse the last bad block of unitary operations. I.e., implement quantum rewinding so that~$b = 0$ in $\mathit{JS}2$. This is done in Algorithm~\ref{algo:Q-simulate}---\textbf{\textsf{Q-simulate}}.\\ If no ``bad'' blocks remain, implement the next block of rounds of the original protocol. 
This results in an increase in $g$ in $\mathit{JS}2$, and is also done through Algorithm~\ref{algo:Q-simulate}---\textbf{\textsf{Q-simulate}}.\\ \caption{Main steps in one iteration of the simulation for the large-alphabet recycling-based model} \label{algo:Main steps-large-alphabet-Yao} \end{algorithm} \RestyleAlgo{ruled} Note that although in step $1$ we use the same algorithms for synchronizing the metadata as in the teleportation-based case (\textsf{rewindMD} and \textsf{extendMD}), the alphabet over which the metadata strings are defined is now different (see Subsection~\ref{sec:data-structure-large-alphabet-Yao}). The algorithms mentioned in the remaining steps are presented in the next section. Figure~\ref{fig:flow-qvc} summarizes the main steps in flowchart form. \begin{figure}[!t] \centering \includegraphics[width=475pt]{Flowchart-Sec4.jpg} \caption{Flowchart of the recycling-based scheme for high rate noisy interactive quantum communication.} \label{fig:flow-qvc} \end{figure} \subsection{Algorithm} For each subroutine, we list all the global variables accessed by the subroutine as the \textbf{Input} at the beginning of the subroutine. Whenever applicable, the relation between the variables when the subroutine is called is stated as the \textbf{Promise} and the global variables which are modified by the subroutine are listed as the \textbf{Output}. \subsubsection{Data structure}\label{sec:data-structure-large-alphabet-Yao} Alice and Bob maintain a data structure obtained by modifying the one introduced in Section~\ref{sec:datastructurelargealphabetcleveburhman}. \begin{itemize} \item \textbf{Metadata:} The metadata now contain new symbols corresponding to recycling operations and error detection measurements. In every iteration, $\mathit{NewMetaA} \in \{\mathsf{\pm1},\mathsf{0},\sC,\OED,\sM,\sC'\}$ specifies Alice's action in the current iteration.
Similar to the teleportation-based algorithm, in an iteration with $\mathit{NewMetaA}=\sC$, Alice does not access the quantum registers and if $\mathit{NewMetaA}\in \{\mathsf{\pm1},\mathsf{0}\}$, then the unitary operators of the original protocol applied locally by Alice in the current iteration have exponent $\mathit{NewMetaA}$. If $\mathit{NewMetaA}=\sM$ Alice measures the block of MES registers specified by her measurement pointer and moves the pointer back by $1$. $\mathit{NewMetaA}=\sC'$ corresponds to a classical iteration for Alice in which she just moves her measurement pointer forward by $1$. Finally, in an iteration with $\mathit{NewMetaA}=\OED$, Alice completes QVC on her side as explained in Section~\ref{subsec:out-of-sync QVC}. The notation $\OED$ is used to emphasize that similar to an iteration with $\mathit{NewMetaA}=\mathsf{0}$ no unitary operator of the original protocol is applied by Alice, but she measures the block of MES registers at the end of the iteration for error detection. Alice records her metadata in $\mathit{FullMA}$ which is concatenated with $\mathit{NewMetaA}$ in each iteration and has length $i$ at the end of the $i$-th iteration. Similar to the teleportation-based algorithm, Alice maintains a string $\widetilde{\MB}$ as an estimate of Bob's metadata, which is not necessarily full-length. The length of $\widetilde{\MB}$ is denoted by $\ell_{\MBtilde}$. Alice also maintains $\ell_{\MA}$, her estimate for the length of $\widetilde{\MA}$, which is with Bob. $\mathit{MA}$ is defined as the prefix of $\mathit{FullMA}$ of length $\ell_{\MA}$, i.e., $\mathit{MA}\defeq \mathit{FullMA}\Br{1:\ell_{\MA}}$. When $\mathit{MA}$ appears in any of the algorithms in this section, it is implicitly computed by Alice from $\mathit{FullMA}$ and $\ell_{\MA}$. We denote by $\ellQVCA$ the number of iterations in which Alice has performed QVC so far. 
Note that $\ellQVCA$ can be computed from $\mathit{FullMA}$ and is equal to the number of $\mathsf{\pm1}$, $\mathsf{0}$, $\OED$ symbols in $\mathit{FullMA}$. Bob's local metadata variables are defined similarly. \item \textbf{Recycling data:} Quantum recycling data are used to decide whether a block of MES pairs can be reused for communication using QVC. The recycling data alphabet consists of the symbol $\sM$, corresponding to a measured block of MES pairs, and the symbol $\sS$, used for a block of MES pairs still in superposition. Alice records her recycling data in a string $\mathit{RA}$, which is also computable from $\mathit{FullMA}$. Alice maintains a queue, $\mathit{IndexA}$, of indices corresponding to recycled MES blocks to be reused in QVC. In every iteration, $\mathit{NextMESIndexA}$ holds the index of the MES block recycled by Alice and the string $\mathit{IndexA}$ is concatenated with the index. If no such index exists in an iteration, it is assigned the value $\perp$ and the protocol aborts. $\ell_\mathit{NextMESA}$ is a pointer on $\mathit{RA}$ and $\mathit{IndexA}$ used by Alice in the recycling process. In every iteration, Alice moves this pointer to the next $\sS$ symbol in $\mathit{RA}$ (if it exists) and records the value of $\mathit{IndexA}$ in this coordinate into $\mathit{NextMESIndexA}$. Alice also maintains a measurement pointer, $\ell_\RA$, which specifies the block of MES pairs to be measured in iterations with $\mathit{NewMetaA}=\sM$. The measurement pointers serve as reference points up to which Alice and Bob compare their recycling data in each iteration. The pointer $\ell_\RA$ can stay the same or move backward or forward by $1$ and can be computed from $\mathit{FullMA}$. In every iteration, if the protocol does not abort, $\mathit{RA}$ is concatenated with an $\sS$ symbol corresponding to the recycled MES block, and if an MES block is measured, the corresponding element of $\mathit{RA}$ is changed from $\sS$ to $\sM$.
Bob's recycling data variables are defined similarly. Alice computes $\widetilde{\RB}$ and $\widetilde{\ellRB}$ as her estimate of Bob's $\mathit{RB}$ and $\ell_\RB$, respectively, based on her full-length estimate $\widetilde{\MB}$ of $\mathit{FullMB}$. Note that if $\widetilde{\MB}=\mathit{FullMB}$, then Alice's estimates of Bob's recycling data are correct. \item \textbf{Pauli data:} In any iteration, new Pauli data is generated on Alice's side if and only if she measures a block of MES pairs or performs QVC locally. $\mathit{NewPauliA}$ has three parts: If a block of $r$ MES pairs is measured locally by Alice in the current iteration, then the measurement outcome, $(m_1,m_2)\in \Sigma^{4r}$, is recorded in the first two parts of $\mathit{NewPauliA}$, and the third part contains $\perp^{2r}$, corresponding to no Pauli corrections. Otherwise, if Alice performs QVC, then $\perp^{2r}$ is recorded in each of the first two parts and the third part, similar to the teleportation-based protocol, specifies the Pauli corrections. Alice records her Pauli data in $\mathit{FullPA}$. Starting from the empty string, $\mathit{FullPA}$ is concatenated with $\mathit{NewPauliA}$ whenever Alice measures an MES block or performs QVC. She maintains a string $\widetilde{\PB}$ as an estimate of Bob's Pauli data. The length of $\widetilde{\PB}$ is denoted by $\ell_{\PBtilde}$. Alice also maintains $\ell_{\PA}$, her estimate for the length of $\widetilde{\PA}$, which is with Bob. $\mathit{PA}$ denotes the prefix of $\mathit{FullPA}$ of length $\ell_{\PA}$, i.e., $\mathit{PA}\defeq \mathit{FullPA}\Br{1:\ell_{\PA}}$. When $\mathit{PA}$ appears in any of the algorithms in this section, it is implicitly computed by Alice from $\mathit{FullPA}$ and $\ell_{\PA}$. We define $q_{\mathit{MA}}\defeq\left| \mathit{FullPA}\right| / 6r$. Note that $q_{\mathit{MA}}$ can be computed from $\mathit{FullMA}$.
Alice computes her estimate $q_{\widetilde{\MB}}$ of $q_{\mathit{MB}}$ using her full-length estimate $\widetilde{\MB}$ of $\mathit{FullMB}$. Bob's Pauli data variables are defined similarly. \item As in Section~\ref{sec:BKlarge}, we use $H$ with different variables as subscript to represent hash values, e.g., $H_{\MA}$ represents a hash value corresponding to $\mathit{MA}$. We use $\QHA$ and $\mathit{QHB}$ to represent quantum hash values. The data variables with a superscript $'$ denote the received data after transmission over the noisy channel, e.g., $\ell_{\MB}'$ denotes what Alice receives when Bob sends $\ell_{\MB}$. \item The variable $\mathit{Itertype}$ takes two new values: $\mathrm{RD}$ corresponding to recycling data synchronization and $\mathrm{QH}$ corresponding to an iteration in which the received quantum hash value does not match the locally computed quantum hash value. \end{itemize} Finally, $\mathit{L}_{\mathrm{QVC}}$ denotes the total number of MES blocks to be used as the keys in QVC. \begin{remark} We point out an additional subtlety in interpreting the Pauli data by the two parties. Recall that when Alice and Bob compute their estimates of the joint state, they use the metadata to locate in the Pauli data, the measurement outcomes for each block of measured MES pairs. However, due to transmission errors or collisions, it is possible to have an inconsistency between the metadata and the Pauli data. For instance, $\widetilde{\MA}$ may indicate that a specific block of $\widetilde{\PA}$ contains the measurement outcomes on an MES block, whereas it actually has $\perp^{2r}$ in the first two parts and corresponds to an iteration in which Alice has performed QVC. In any such scenario, Alice and Bob interpret the $\perp$ symbols as $0$ and compute the joint state. This most likely introduces new errors on the joint state from which Alice and Bob can recover once they obtain a correct view of the simulation. 
\end{remark} \subsubsection{Pseudo-codes} This section contains the pseudo-codes for the main algorithm and the subroutines that each party runs locally in the simulation protocol. \begin{algorithm} \Input{$n$-round protocol $\Pi$ in the plain quantum model over a polynomial-size alphabet $\Sigma$} \BlankLine \textbf{\textsf{Q-Initialization}}\; \BlankLine \SetKwProg{ForLoop}{For}{}{} \SetAlgoLined \ForLoop{$i = 1 \to R_\mathrm{total}$} { \SetAlgoVlined \textbf{\textsf{Recycle}}\; \If {$\mathit{NextMESIndexA}=\perp$} {\text{Abort}\;} $\mathit{IndexA} \leftarrow \br{\mathit{IndexA},\mathit{NextMESIndexA}}$\; $\mathit{RA} \leftarrow \br{\mathit{RA},\sS}$\; \Comment*[f] {\textbf{Computing hash values}}\\ $H_{\MA} \leftarrow h_{S_{4i-3}}\br{\mathit{MA}}$\; $H_{\MBtilde} \leftarrow h_{S_{4i-2}}\br{\widetilde{\MB}}$\; $H_{\PA} \leftarrow h_{S_{4i-1}}\br{\mathit{PA}}$\; $H_{\PBtilde} \leftarrow h_{S_{4i}}\br{\widetilde{\PB}}$\; \textbf{\textsf{Quantum-hash}}\; \BlankLine Send $$\br{H_{\MA},\ell_{\MA},H_{\MBtilde},\ell_{\MBtilde},H_{\PA},\ell_{\PA},H_{\PBtilde},\ell_{\PBtilde},\QHA};$$\\ Receive $$\br{H_{\MAtilde}',\ell_{\MAtilde}',H_{\MB}',\ell_{\MB}',H_{\PAtilde}',\ell_{\PAtilde}',H_{\PB}',\ell_{\PB}',\mathit{QHB}'};$$\\ \BlankLine \textbf{\textsf{Q-Preprocess}}; \Comment*[f] {\textbf{Determining iteration type}}\\ \BlankLine \If {$\mathit{Itertype} \neq \mathrm{SIM}$} { Send $\mathit{msg}$\; Receive $\mathit{msg}^\prime$\; } \tcp*[f]{\textbf{messages are communicated alternately}}\\ \BlankLine \Comment*[f] {\textbf{Case i.A}}\\ \If {$\mathit{Itertype} = \mathrm{MD} \;\mathrm{and}\; \RewindExtend = \sR$} { \textbf{\textsf{rewindMD}}\; } \Comment*[f] {\textbf{Case i.B}}\\ \ElseIf {$\mathit{Itertype} = \mathrm{MD} \;\mathrm{and}\; \RewindExtend = \sE$} { \textbf{\textsf{extendMD}}\; } \Comment*[f] {\textbf{Case ii.A}}\\ \ElseIf{$\mathit{Itertype} = \mathrm{MES} \;\mathrm{and}\; \mathit{NewMetaA}=\sC$} { return\; } \Comment*[f] {\textbf{Case ii.B}}\\ \ElseIf{$\mathit{Itertype} =
\mathrm{MES} \;\mathrm{and}\; \mathit{NewMetaA}=\OED$} { \textbf{\textsf{Q-syncMES}}\; } } \caption{{\textbf{\textsf{Q-Main}} (Alice's side) }} \end{algorithm} \setcounter{algocf}{12} \begin{algorithm} \setcounter{AlgoLine}{26} \SetKwBlock{Begin}{}{} \Begin{ \Comment*[f] {\textbf{Case iii}}\\ \ElseIf{$\mathit{Itertype}=\mathrm{RD} \;\mathrm{and}\; \RewindExtend=\sR$} { \textbf{\textsf{rewindRD}}\; } \Comment*[f] {\textbf{Case iv}}\\ \ElseIf{$\mathit{Itertype}=\mathrm{QH}$} { \textbf{\textsf{measuresyndrome}}\; } \Comment*[f] {\textbf{Case v}}\\ \ElseIf{$\mathit{Itertype} =\mathrm{RD} \;\mathrm{and}\; \RewindExtend=\sE$} { \textbf{\textsf{extendRD}}\; } \Comment*[f] {\textbf{Case vi.A}}\\ \ElseIf {$\mathit{Itertype} = \mathrm{PD} \;\mathrm{and}\; \RewindExtend = \sR$} { \textbf{\textsf{Q-rewindPD}}\; } \Comment*[f] {\textbf{Case vi.B}}\\ \ElseIf{$\mathit{Itertype} = \mathrm{PD} \;\mathrm{and}\; \RewindExtend = \sE$} { \textbf{\textsf{Q-extendPD}}\; } \Comment*[f] {\textbf{Case vii}}\\ \Else{\textbf{\textsf{Q-simulate}}\;} } \Return{\textup{\textbf{\textsf{Q-Main}}}}\; \caption{\textbf{\textsf{Q-Main}} (Alice's side, cont.
from previous page) } \label{algo:MainalgorithmQMessage} \end{algorithm} \begin{algorithm} \Input{ $$\br{ \begin{array}{c} H_{\MA},\ell_{\MA},H_{\MBtilde},\ell_{\MBtilde},H_{\PA},\ell_{\PA},H_{\PBtilde},\ell_{\PBtilde},\QHA \\ H_{\MAtilde}',\ell_{\MAtilde}',H_{\MB}',\ell_{\MB}',H_{\PAtilde}',\ell_{\PAtilde}',H_{\PB}',\ell_{\PB}',\mathit{QHB}' \\ \mathit{FullMA},\ellQVCA,\widetilde{\MB},\mathit{RA},\ell_\RA,\mathit{FullPA},\widetilde{\PB} \end{array} }$$ } \Output{$\br{\mathit{Itertype}, \RewindExtend, \mathit{NewMetaA}, \mathit{FullMA}, \ell_{\MA}, \widetilde{\NewMetaB}, \widetilde{\MB}, \ell_{\MBtilde}, \mathit{msg}}$} \BlankLine \If { $\br{H_{\MA},H_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}'} \;\mathrm{and}\; \ell_{\MA}=\ell_{\MAtilde}'=\ell_{\MBtilde}=\ell_{\MB}'=i-1$ } { Compute $\widetilde{\ellQVCB}$, $\widetilde{\RB}$, $\widetilde{\ellRB}$, $q_{\widetilde{\MB}}$\; } \BlankLine \Comment*[f] {\textbf{Processing Metadata}}\\ \Comment*[f] {\textbf{Case i.A}}\\ \If { $\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}\neq \br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}',\ell_{\MB}'}$ } { $\mathit{Itertype} \leftarrow \mathrm{MD}$\; $\RewindExtend \leftarrow \sR$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\mathit{msg} \leftarrow \text{dummy message of length } r$\; } \caption{\textbf{\textsf{Q-Preprocess}} (Alice's side) } \label{algo:QPreprocess} \end{algorithm} \setcounter{algocf}{13} \begin{algorithm} \setcounter{AlgoLine}{8} \Comment*[f] {\textbf{Case i.B}}\\ \ElseIf { $\br{\ell_{\MA} < i-1} \;\mathrm{or}\; \br{\ell_{\MBtilde} < i-1}$ } { $\mathit{Itertype} \leftarrow \mathrm{MD}$\; $\RewindExtend \leftarrow \sE$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; \If {$\ell_{\MA} < i-1$} { $\mathit{msg} \leftarrow \mathrm{encodeMD}\br{\mathit{FullMA}\Br{\ell_{\MA}+1,\ell_{\MA}+2}}\!;$ \tcp*[f]{\textbf{Encode MD in $\Sigma^r$}}\\ } \Else { 
$\mathit{msg} \leftarrow \text{dummy message of length } r$\; } } \Comment*[f] {\textbf{Comparing number of used MES blocks}}\\ \Comment*[f] {\textbf{Case ii.A}}\\ \ElseIf {$\ellQVCA > \widetilde{\ellQVCB}$} { $\mathit{Itertype} \leftarrow \mathrm{MES}$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \OED$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length } r$\; } \Comment*[f] {\textbf{Case ii.B}}\\ \ElseIf {$\ellQVCA < \widetilde{\ellQVCB}$} { $\mathit{Itertype} \leftarrow \mathrm{MES}$\; $\mathit{NewMetaA} \leftarrow \OED$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sC$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length }r$\; } \caption{\textbf{\textsf{Q-Preprocess}} (Alice's side, cont. 
from previous page)} \end{algorithm} \setcounter{algocf}{13} \begin{algorithm} \setcounter{AlgoLine}{35} \Comment*[f] {\textbf{Processing recycling data}}\\ \Comment*[f] {\textbf{Case iii}}\\ \ElseIf { $\br{\ell_\RA, \mathit{RA}\Br{1:\ell_\RA}} \neq \br{\widetilde{\ellRB}, \widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$} { $\mathit{Itertype} \leftarrow \mathrm{RD}$\; $\RewindExtend\leftarrow \sR$\; \If {$\ell_\RA > \widetilde{\ellRB}$} { $\mathit{NewMetaA} \leftarrow \sM$\; $\widetilde{\NewMetaB} \leftarrow \sC$\; } \ElseIf {$\ell_\RA < \widetilde{\ellRB}$} { $\mathit{NewMetaA} \leftarrow \sC$\; $\widetilde{\NewMetaB} \leftarrow \sM$\; } \Else { $\mathit{NewMetaA} \leftarrow \sM$\; $\widetilde{\NewMetaB} \leftarrow \sM$\; } $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length }r$\; } \Comment*[f] {\textbf{Case iv}}\\ \ElseIf{$\QHA\neq \mathit{QHB}'$} { $\mathit{Itertype} \leftarrow \mathrm{QH}$\; $\mathit{NewMetaA} \leftarrow \sM$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sM$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length }r$\; } \Comment*[f] {\textbf{Case v}}\\ \ElseIf{$\ell_\RA < \ellQVCA $} { $\mathit{Itertype} \leftarrow \mathrm{RD}$\; $\RewindExtend\leftarrow \sE$\; $\mathit{NewMetaA} \leftarrow \sC'$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sC'$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow$ dummy 
message of length $r$\; } } \Comment*[f] {\textbf{Processing Pauli data}}\\ \Comment*[f] {\textbf{Case vi.A}}\\ \ElseIf {$\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}\neq \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$ } { $\mathit{Itertype} \leftarrow \mathrm{PD}$\; $\RewindExtend \leftarrow \sR$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sC$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; $\mathit{msg} \leftarrow \text{dummy message of length }r$\; } \Comment*[f] {\textbf{Case vi.B}}\\ \ElseIf {$\br{\ell_{\PA} < 6q_{\mathit{MA}} \cdot r} \;\mathrm{or}\; \br{\ell_{\PBtilde} < 6q_{\widetilde{\MB}} \cdot r}$} { $\mathit{Itertype} \leftarrow \mathrm{PD}$\; $\RewindExtend \leftarrow \sE$\; $\mathit{NewMetaA} \leftarrow \sC$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\NewMetaB} \leftarrow \sC$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; \If {$\ell_{\PA} < 6q_{\mathit{MA}} \cdot r$} { $\mathit{msg} \leftarrow {\mathit{FullPA}}\Br{\ell_{\PA}+1,\ell_{\PA}+r}$\; } } \Comment*[f] {\textbf{Case vii}}\\ \Else { \textbf{\textsf{Q-computejointstate}}\; $\mathit{Itertype} \leftarrow \mathrm{SIM}$\; $\mathit{FullMA} \leftarrow \br{\mathit{FullMA},\mathit{NewMetaA}}$\; $\ell_{\MA} \leftarrow \ell_{\MA}+1$\; $\widetilde{\MB} \leftarrow \br{\widetilde{\MB},\widetilde{\NewMetaB}}$\; $\ell_{\MBtilde} \leftarrow \ell_{\MBtilde}+1$\; } \Return{\textup{\textbf{\textsf{Q-Preprocess}}}}\; \caption{\textbf{\textsf{Q-Preprocess}} (Alice's side, cont.
from previous page)} \end{algorithm} \begin{algorithm} Initialize \\ \nonl $\qquad \mathit{L}_{\mathrm{QVC}} \leftarrow\Theta\br{n\epsilon}$ \; \nonl $\qquad r \leftarrow\Theta\br{1/\sqrt{\epsilon}}$ \; \nonl $\qquad R_\mathrm{total} \leftarrow \lceil n/{2r}+\Theta(n\epsilon)\rceil$ \; \nonl $\qquad t \leftarrow \Theta\br{n\epsilon}$ \; \nonl $\qquad \ell_{\MA},\ell_{\MBtilde},\ell_{\PA},\ell_{\PBtilde},\ellQVCA,\ell_\RA,\ell_\mathit{NextMESA} \leftarrow 0$ \; \nonl $\qquad \mathit{FullMA},\widetilde{\MB},\mathit{FullPA},\widetilde{\PB}\leftarrow \emptyset$ \; \nonl $\qquad \mathit{RA} \leftarrow (\sS,\ldots,\sS) \textrm{ of length } t$ \; \nonl $\qquad \mathit{IndexA} \leftarrow \br{\mathit{L}_{\mathrm{QVC}}-t+1,\ldots,\mathit{L}_{\mathrm{QVC}}}$ \; \tcp*[f]{\textbf{The indexing of the strings $\mathit{RA}$ and $\mathit{IndexA}$ starts from $-t+1$}}\\ \BlankLine $h\leftarrow$ hash function of Lemma~\ref{lem:hashes} with $p=\Theta(1), o=\Theta(1), s=\Theta\br{\log n}$ \; \textbf{\textsf{Robust Entanglement Distribution}($\Theta\br{n\sqrt{\epsilon}}$)} \; Reserve $\mathit{L}_{\mathrm{QVC}}\cdot 4r$ MES pairs to be used as the keys in QVC \; Reserve $10R_\mathrm{total}$ MESs to be used in quantum hashing \; Measure $\Theta\br{R_\mathrm{total}}$ MESs in the computational basis and record the binary representation of the outcomes in $S_1,\ldots,S_{4R_\mathrm{total}}$ \; \tcp*[f]{\textbf{$4R_\mathrm{total}$ seeds of length $s$ for the hash function $h$}}\\ Measure the remaining $\Theta\br{n\sqrt{\epsilon}}$ MESs in the computational basis and record the binary representation of the outcomes in $R'$ \; Extend $R'$ to a $\delta$-biased pseudo-random string $R=R_1,\ldots,R_{10R_\mathrm{total}}$ using the deterministic algorithm of Lemma~\ref{lem:stretch} where $\delta= 2^{-\Theta\br{\frac{n}{r}}}$ \; \tcp*[f]{\textbf{$10R_\mathrm{total}$ seeds of length $4rt$ used in quantum hashing}}\\ \Return {\textup{\textbf{\textsf{Q-Initialization}}}}\; 
\caption{\textbf{\textsf{Q-Initialization}} (Alice's side) } \label{Qalgo:Initialization} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{RA},\mathit{IndexA},\ell_\mathit{NextMESA},i}$} \Output{$\br{\ell_\mathit{NextMESA},\mathit{NextMESIndexA}}$} \BlankLine \If (\tcp*[f]{\textbf{No recycling in first $\mathit{L}_{\mathrm{QVC}}$ iterations}}) {$i \leq \mathit{L}_{\mathrm{QVC}}$} { $\mathit{NextMESIndexA} \leftarrow i$ \; } \Else { $\ell_\mathit{NextMESA} \leftarrow \ell_\mathit{NextMESA}+1$ \; \While { $\br{\ell_\mathit{NextMESA} < i} \; \mathrm{and} \; \br{\mathit{RA}\Br{\ell_\mathit{NextMESA}} = \sM}$ } {$\ell_\mathit{NextMESA} \leftarrow \ell_\mathit{NextMESA}+1$ \;} \If {$\ell_\mathit{NextMESA}=i$} { $\mathit{NextMESIndexA} \leftarrow \perp$ \; } \Else { $\mathit{NextMESIndexA} \leftarrow \mathit{IndexA}\Br{\ell_\mathit{NextMESA}}$ \; } } \Return{\textup{\textbf{\textsf{Recycle}}}}\; \caption{\textbf{\textsf{Recycle}} (Alice's side) } \label{algo:Recycle} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{RA},\mathit{IndexA},i,R}$} \Output{$\QHA$} \BlankLine $\QHA \leftarrow \emptyset$ \; \For {$k=1 \to 10$} { Choose a fresh MES from the ``Quantum Hash'' category, and let $F$ denote the register containing Alice's half of the state\; \For (\tcp*[f]{\textbf{Hashing between blocks $i-t+1$ and $\ellQVCA$}}) {$j=1 \to 4r \br{t+\ellQVCA-i}$} { \If {$R_{10i+k}\Br{j}=1 \;\mathrm{and}\; \mathit{RA}\Br{i-t+\lceil\frac{j}{4r}\rceil}\neq \sM$} { Apply $\br{\control{{\mathrm X}}}_{FA_b}$, where $b=4r \cdot \mathit{IndexA} \Br{ i-t+\lfloor \frac{j}{4r} \rfloor }+\br{ j\,\mod 4r }$\; } } Apply the Fourier transform operator on $F$, measure it in the computational basis and record the outcome in $qh$\; $\QHA \leftarrow \br{\QHA,qh}$\; } \Return{\textup{\textbf{\textsf{Quantum-hash}}}}\; \caption{\textbf{\textsf{Quantum-hash}} (Alice's side) } \label{algo:Quantum-hash} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{IndexA},\ellQVCA,\mathit{msg}',\mathit{FullPA}}$}
\Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde}}=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1}$, $\ell_{\MA}=\ell_{\MBtilde}=i \;,\; \ellQVCA < \widetilde{\ellQVCB}$} \Output{$\br{\ellQVCA,\mathit{NewPauliA},\mathit{FullPA}}$} \BlankLine Let~$C_0$ be the communication register at the beginning of the current iteration (which is in Alice's possession) and for every~$j\in[r]$, let~$C_j$ denote the communication register containing $\mathit{msg}^\prime(j)$, Bob's $j$-th message in this iteration\; $\ellQVCA \leftarrow \ellQVCA+1$\; Let~$E_1 E_2 \dotsb E_{4r}$ be the registers with Alice containing halves of the~$4r$ MESs in the block indexed by $\mathit{IndexA} \Br{\ellQVCA}$ \; Apply $$\br{\control{{\mathrm Z}}}_{E_2C_0}\br{\control{{\mathrm X}}}_{E_1C_0}$$ For every~$j\in[r-1]$, upon receiving $C_j$ apply $$\br{\control{{\mathrm Z}}}_{E_{4j+2}C_j}\br{\control{{\mathrm X}}}_{E_{4j+1}C_j}\br{\control{{\mathrm X}}^{-1}}_{E_{4j-1}C_j}\br{\control{{\mathrm Z}}^{-1}}_{E_{4j}C_j}$$ Upon receiving $C_r$ apply $$\br{\control{{\mathrm X}}^{-1}}_{E_{4r-1}C_{r}}\br{\control{{\mathrm Z}}^{-1}}_{E_{4r}C_{r}}$$ \tcp*[f]{\textbf{See Section~\ref{subsec:out-of-sync QVC} for the rationale and Bob's analogue of above steps}} \\ Apply the Fourier transform operator to~$E_1,E_2, \dotsb, E_{4r}$ and measure them in the computational basis. 
Store the measurement outcomes in $\br{m_1,m_2}\in{\Sigma}^{4r}$\; $\mathit{RA}\Br{\ellQVCA}\leftarrow \sM$\; $\mathit{NewPauliA} \leftarrow \br{m_1,m_2,\perp^{2r}}$\; $\mathit{FullPA} \leftarrow \br{\mathit{FullPA},\mathit{NewPauliA}}$\; \Return{\textup{\textbf{\textsf{Q-syncMES}}}}\; \caption{\textbf{\textsf{Q-syncMES}} (Alice's side)} \label{algo:Q-syncMES} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{NewMetaA},\mathit{RA},\ell_\RA,\mathit{IndexA},\mathit{FullPA}}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},\ellQVCA}= \br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1,\widetilde{\ellQVCB}}$ , $\ell_{\MA}=\ell_{\MBtilde}=i$ , $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}} \neq \br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ } \Output{$\br{\mathit{NewPauliA},\mathit{FullPA},\mathit{RA},\ell_\RA}$} \BlankLine \If {$\mathit{NewMetaA}=\sM \; \mathrm{and} \; \mathit{RA}\Br{\ell_\RA} = \sS$} { Sequentially apply the Fourier transform operator to all the MESs in the block indexed by $\mathit{IndexA} \Br{\ell_\RA}$ and measure them in the computational basis \; Store the measurement outcomes in $\br{m_1,m_2}\in{\Sigma}^{4r}$\; $\mathit{NewPauliA} \leftarrow \br{m_1,m_2,\perp^{2r}}$\; $\mathit{FullPA} \leftarrow \br{\mathit{FullPA},\mathit{NewPauliA}}$\; $\mathit{RA} \Br{\ell_\RA} \leftarrow \sM$\; } $\ell_\RA \leftarrow \ell_\RA-1$\; \Return{\textup{\textbf{\textsf{rewindRD}}}}\; \caption{\textbf{\textsf{rewindRD}} (Alice's side) } \label{algo:rewindRD} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{RA},\ell_\RA,\mathit{IndexA},\mathit{FullPA}}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},\ellQVCA} =\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1,\widetilde{\ellQVCB}}$ , $\ell_{\MA}=\ell_{\MBtilde}=i$ , $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}}=\br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ , $\QHA \neq \mathit{QHB}'$ } 
\Output{$\br{\mathit{NewPauliA},\mathit{FullPA},\mathit{RA},\ell_\RA}$} \BlankLine \If {$\mathit{RA}\Br{\ell_\RA} = \sS$} { Sequentially apply the Fourier transform operator to all the MESs in the block indexed by $\mathit{IndexA} \Br{\ell_\RA}$ and measure them in the computational basis \; Store the measurement outcomes in $\br{m_1,m_2}\in{\Sigma}^{4r}$\; $\mathit{NewPauliA} \leftarrow \br{m_1,m_2,\perp^{2r}}$\; $\mathit{FullPA} \leftarrow \br{\mathit{FullPA},\mathit{NewPauliA}}$\; $\mathit{RA} \Br{\ell_\RA} \leftarrow \sM$\; } $\ell_\RA \leftarrow \ell_\RA-1$\; \Return{\textup{\textbf{\textsf{measuresyndrome}}}}\; \caption{\textbf{\textsf{measuresyndrome}} (Alice's side) } \label{algo:QmeasureEPR} \end{algorithm} \begin{algorithm} \Input{$\ell_\RA$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},\ellQVCA} =\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1,\widetilde{\ellQVCB}}$ , $\ell_{\MA}=\ell_{\MBtilde}=i$ , $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}}=\br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ , $\QHA=\mathit{QHB}'$ , $\ell_\RA < \ellQVCA$ } \Output{$\ell_\RA$} \BlankLine $\ell_\RA \leftarrow \ell_\RA+1$\; \Return{\textup{\textbf{\textsf{extendRD}}}}\; \caption{\textbf{\textsf{extendRD}} (Alice's side)} \label{algo:extendRD} \end{algorithm} \begin{algorithm} \Input{$\br{H_{\PA},\ell_{\PA},H_{\PBtilde},\ell_{\PBtilde},H_{\PAtilde}',\ell_{\PAtilde}',H_{\PB}',\ell_{\PB}'}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},\ellQVCA} =\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1,\widetilde{\ellQVCB}}$ , $\ell_{\MA}=\ell_{\MBtilde}=i$ , $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}}=\br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ , $\QHA=\mathit{QHB}'$ , $\ell_\RA = \ellQVCA$ , $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}\neq \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$.
} \Output{$\br{\ell_{\PA},\ell_{\PBtilde}}$} \If {$\ell_{\PA} \neq \ell_{\PAtilde}' \;\mathrm{or}\; \ell_{\PBtilde} \neq \ell_{\PB}'$} { \If {$\ell_{\PA} > \ell_{\PAtilde}'$} {$\ell_{\PA} \leftarrow \ell_{\PA}-r$\;} \If {$\ell_{\PBtilde} > \ell_{\PB}'$} {$\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}-r$\;} } \Else { \If {$H_{\PA} \neq H_{\PAtilde}'$} {$\ell_{\PA} \leftarrow \ell_{\PA}-r$\;} \If {$H_{\PBtilde} \neq H_{\PB}'$} {$\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}-r$\;} } \Return{\textup{\textbf{\textsf{Q-rewindPD}}}}\; \caption{\textbf{\textsf{Q-rewindPD}} (Alice's side)} \label{algo:Q-rewindPD} \end{algorithm} \begin{algorithm} \Input{$\br{\ell_{\PA},\ell_{\PBtilde},\widetilde{\PB},q_{\mathit{MA}},q_{\widetilde{\MB}},\mathit{msg}'}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},\ellQVCA} =\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1,\widetilde{\ellQVCB}}$ , $\ell_{\MA}=\ell_{\MBtilde}=i$ , $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}}=\br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ , $\QHA=\mathit{QHB}'$ , $\ell_\RA = \ellQVCA$ , $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}= \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$ , $\ell_{\PA} < 6q_{\mathit{MA}} \cdot r \quad \mathrm{or} \quad \ell_{\PBtilde} < 6q_{\widetilde{\MB}} \cdot r$.} \Output{$\br{\ell_{\PA},\widetilde{\PB},\ell_{\PBtilde}}$} \If {$\ell_{\PA} < 6q_{\mathit{MA}} \cdot r$} {$\ell_{\PA} \leftarrow \ell_{\PA}+r$\;} \If {$\ell_{\PBtilde} < 6q_{\widetilde{\MB}} \cdot r$} { $\widetilde{\PB}\Br{\ell_{\PBtilde}+1:\ell_{\PBtilde}+r} \leftarrow \mathit{msg}'$\; $\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}+r$\; } \Return{\textup{\textbf{\textsf{Q-extendPD}}}}\; \caption{\textbf{\textsf{Q-extendPD}} (Alice's side) } \label{algo:Q-extendPD} \end{algorithm} \begin{algorithm} \Input{$\br{\mathit{FullMA},\widetilde{\MB},\mathit{RA},\mathit{FullPA},\widetilde{\PB}}$} \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},\ellQVCA} 
=\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}',\ell_{\MB}',\widetilde{\ellQVCB}}$ , $\ell_{\MA}=\ell_{\MBtilde}=i-1$ , $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}}=\br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ , $\QHA=\mathit{QHB}'$ , $\ell_\RA=\ellQVCA$ , $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}= \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$, $\ell_{\PA}=6q_{\mathit{MA}} \cdot r\;, \ell_{\PBtilde} =6q_{\widetilde{\MB}} \cdot r$, } \Output{$\br{\JSoneA, \JStwoA,\mathit{NewMetaA},\widetilde{\NewMetaB}, \mathit{Block},\RewindExtend,\mathit{P}_\mathrm{Corr},\widetilde{\mathit{P}_\mathrm{Corr}}}$} \BlankLine Compute $\JSoneA$\; Compute $\JStwoA$\; Compute $\mathit{NewMetaA}$\; Compute $\widetilde{\NewMetaB}$\; Compute $\RewindExtend$\; Compute $\mathit{Block}$\; Compute $\mathit{P}_\mathrm{Corr}$\; Compute $\widetilde{\mathit{P}_\mathrm{Corr}}$\; \tcp*[f]{\textbf{Refer to Sections~\ref{subsec:Q-1st-rep-sync},~\ref{subsec:Q-2nd-rep-sync} to see how these variables are computed}} \Return{\textup{\textbf{\textsf{Q-computejointstate}}}}\; \caption{\textbf{\textsf{Q-computejointstate}} (Alice's side)} \label{algo:Q-computejointstate} \end{algorithm} \begin{algorithm} \Input{$\br{\ellQVCA,\mathit{IndexA},\mathit{FullPA},\ell_{\PA},\widetilde{\PB},\ell_{\PBtilde}, \mathit{NewMetaA},\RewindExtend,\mathit{Block},\mathit{P}_\mathrm{Corr},\widetilde{\mathit{P}_\mathrm{Corr}}}$ } \Promise{$\br{H_{\MA},H_{\MBtilde},\ell_{\MA},\ell_{\MBtilde},\ellQVCA} =\br{H_{\MAtilde}',H_{\MB}',\ell_{\MAtilde}'+1,\ell_{\MB}'+1,\widetilde{\ellQVCB}}$ , $\ell_{\MA}=\ell_{\MBtilde}=i$ , $\br{\ell_\RA,\mathit{RA}\Br{1:\ell_\RA}}=\br{\widetilde{\ellRB},\widetilde{\RB}\Br{1:\widetilde{\ellRB}}}$ , $\QHA=\mathit{QHB}'$ , $\ell_\RA=\ellQVCA$ , $\br{H_{\PA},H_{\PBtilde},\ell_{\PA},\ell_{\PBtilde}}= \br{H_{\PAtilde}',H_{\PB}',\ell_{\PAtilde}',\ell_{\PB}'}$, $\ell_{\PA}=6q_{\mathit{MA}} \cdot r\;, \ell_{\PBtilde} =6q_{\widetilde{\MB}} \cdot r$, } 
\Output{$\br{\ellQVCA,\mathit{NewPauliA},\mathit{FullPA},\ell_{\PA}, \widetilde{\NewPauliB},\widetilde{\PB},\ell_{\PBtilde},\ell_\RA}$} \BlankLine $\ellQVCA \leftarrow \ellQVCA+1$\; Continue the simulation of the noiseless protocol according to the output of \textbf{\textsf{Q-computejointstate}} using the block of MESs indexed by $\mathit{IndexA}\Br{\ellQVCA}$\; $\mathit{NewPauliA} \leftarrow \br{\bot^{2r},\bot^{2r},\mathit{P}_\mathrm{Corr}}$\; $\mathit{FullPA} \leftarrow \br{\mathit{FullPA},\mathit{NewPauliA}}$\; $\ell_{\PA} \leftarrow \ell_{\PA}+6r$\; $\widetilde{\NewPauliB}\leftarrow \br{\bot^{2r},\bot^{2r},\widetilde{\mathit{P}_\mathrm{Corr}}}$\; $\widetilde{\PB} \leftarrow \br{\widetilde{\PB},\widetilde{\NewPauliB}}$\; $\ell_{\PBtilde} \leftarrow \ell_{\PBtilde}+6r$\; $\ell_\RA \leftarrow \ell_\RA+1$\; \Return{\textup{\textbf{\textsf{Q-simulate}}}}\; \caption{\textbf{\textsf{Q-simulate}} (Alice's side)} \label{algo:Q-simulate} \end{algorithm} \subsection{Analysis} To simplify the analysis of the algorithm, without loss of generality, we assume that the error introduced by the adversary on the $n^\prime$ message registers in~$\Pi'$ is a Pauli error of weight at most~$\epsilon n^\prime$. We prove the correctness of the algorithm for any such error syndrome, which by linearity implies the correctness of the algorithm against any adversary defined in Section~\ref{sec:noisy_comm_model}. In order to track the simulation progress and show the correctness of the algorithm, we condition on some view of the local classical data recorded by Alice and Bob. Similar to Section~\ref{subsec:polysizeclassicalanalysis}, the analysis of Algorithm~\ref{algo:MainalgorithmQMessage} is in terms of potential functions which measure the correctness of the two players' views of what has happened so far in the simulation and quantify the progress in reproducing the joint state of the input protocol. 
We recall the following definitions from Section~\ref{subsec:polysizeclassicalanalysis}: \begin{align} &md_\mathrm{A}^+ \defeq~\text{the length of the longest prefix where $\mathit{MA}$ and $\widetilde{\MA}$ agree;}\label{eqn:Q-mda+}\\ &md_\mathrm{B}^+ \defeq~\text{the length of the longest prefix where $\mathit{MB}$ and $\widetilde{\MB}$ agree;}\label{eqn:Q-mdb+}\\ &md_\mathrm{A}^-\defeq \max\{\ell_{\MA},\ell_{\MAtilde}\}-md_\mathrm{A}^+;\label{eqn:Q-mda-}\\ &md_\mathrm{B}^-\defeq \max\{\ell_{\MB},\ell_{\MBtilde}\}-md_\mathrm{B}^+;\label{eqn:Q-mdb-}\\ &pd_\mathrm{A}^+\defeq \lfloor\frac{1}{r} \times~\text{the length of the longest prefix where $\mathit{PA}$ and $\widetilde{\PA}$ agree}\rfloor;\label{eqn:Q-pda+}\\ &pd_\mathrm{B}^+\defeq \lfloor\frac{1}{r} \times~\text{the length of the longest prefix where $\mathit{PB}$ and $\widetilde{\PB}$ agree}\rfloor;\label{eqn:Q-pdb+}\\ &pd_\mathrm{A}^-\defeq \frac{1}{r} \max\{\ell_{\PA},\ell_{\PAtilde}\}-pd_\mathrm{A}^+;\label{eqn:Q-pda-}\\ &pd_\mathrm{B}^-\defeq \frac{1}{r} \max\{\ell_{\PB},\ell_{\PBtilde}\}-pd_\mathrm{B}^+.\label{eqn:Q-pdb-} \end{align} We recall the definition of $g,b,u$ from Subsection~\ref{subsec:Q-2nd-rep-out-of-sync}: \begin{align} &g \defeq \text{the number of good blocks in $\mathit{JS}2$,}\label{eqn:Q-g}\\ &b \defeq \text{the number of bad blocks in $\mathit{JS}2$, and}\label{eqn:Q-b}\\ &u\defeq |\ellQVCA-\ell_\mathrm{QVC}^\mathrm{B}|,\label{eqn:Q-u} \end{align} We define \begin{align} rd^+ \defeq \max &\{j:\; j\leq \min\{\ell_\RA,\ell_\RB\} \;,\; \mathit{RA}\Br{1:j}=\mathit{RB}\Br{1:j} \nonumber \\ &\;, W_{k}=0^{4r}\text{ for all $k\leq j$ with $\mathit{RA}\Br{k}=\sS$}\} ;\label{eqn:rd+}\\ rd^- \defeq \max &\{\ell_\RA,\ell_\RB\}-rd^+,\label{eqn:rd-} \end{align} where $W$ in Equation~\ref{eqn:rd+} is the string corresponding to the error syndrome defined in Subsection~\ref{subsec:Q-1st-rep-sync}. 
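As a minimal illustration (not part of the protocol), the `$+$' and `$-$' quantities above can be computed for any pair of strings as follows; the single-character metadata symbols in the example are our own abbreviation, and the Pauli-data versions additionally scale by $1/r$:

```python
def agree_disagree(x, y):
    """Length of the longest agreeing prefix of x and y (the '+' quantity),
    and max(|x|, |y|) minus that prefix length (the '-' quantity)."""
    plus = 0
    for a, b in zip(x, y):
        if a != b:
            break
        plus += 1
    return plus, max(len(x), len(y)) - plus

# e.g. Alice's metadata MA vs Bob's estimate of it (hypothetical symbols)
ma, ma_tilde = "CCEMC", "CCEC"
md_plus, md_minus = agree_disagree(ma, ma_tilde)
print(md_plus, md_minus)  # 3 2
```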
At the end of the $i$-th iteration, we let \begin{align} &\Phi_{\mathrm{MD}}\defeq 2i-md_\mathrm{A}^++3md_\mathrm{A}^--md_\mathrm{B}^++3md_\mathrm{B}^-,\label{eqn:Q-phimd}\\ &\Phi_{\mathrm{RD}}\defeq \ellQVCA+\ell_\mathrm{QVC}^\mathrm{B}+ 13rd^- - 2rd^+, \label{eqn:Q-phird}\\ &\Phi_{\mathrm{PD}}\defeq 6q_{\mathit{MA}}+6q_{\mathit{MB}}-pd_\mathrm{A}^++pd_\mathrm{A}^--pd_\mathrm{B}^++pd_\mathrm{B}^-,\label{eqn:Q-phipd}\\ &\Phi_{\mathrm{Q}}\defeq g-b-9u, \label{eqn:Q-phiQ}\\ &\Phi\defeq\Phi_{\mathrm{Q}}-\Phi_{\mathrm{MD}}-\Phi_{\mathrm{RD}}-\Phi_{\mathrm{PD}}.\label{eqn:Q-phi} \end{align} The following lemma states an important property of potential functions $\Phi_{\mathrm{MD}}$, $\Phi_{\mathrm{RD}}$ and $\Phi_{\mathrm{PD}}$ defined above which we use in the analysis of the algorithm. \begin{lemma} \label{lem:potential-value-vs-data} Throughout the algorithm, it holds that \begin{itemize} \item $\Phi_{\mathrm{MD}}\geq 0$ with equality if and only if Alice and Bob have full knowledge of each other's metadata, i.e., $md_\mathrm{A}^+=md_\mathrm{B}^+=i$ and $md_\mathrm{A}^-=md_\mathrm{B}^-=0$. \item $\Phi_{\mathrm{RD}}\geq 0$ with equality if and only if Alice and Bob have used the same number of MES blocks $(\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B})$, their measurement pointers $\ell_\RA$ and $\ell_\RB$ agree and are equal to $\ellQVCA$, they fully agree on the recycling data $(\mathit{RA}=\mathit{RB})$ and $W_{k}=0^{4r}$ for all $k\leq \ellQVCA$ with $\mathit{RA}\Br{k}=\sS$, i.e., $\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}=rd^+$ and $rd^-=0$. \item $\Phi_{\mathrm{PD}}\geq 0$ with equality if and only if Alice and Bob have full knowledge of each other's Pauli data, i.e., $pd_\mathrm{A}^+=6q_\mathit{MA}$, $pd_\mathrm{B}^+=6q_\mathit{MB}$ and $pd_\mathrm{A}^-=pd_\mathrm{B}^-=0$. \end{itemize} \end{lemma} \begin{proof} The first statement follows from the property that $md_\mathrm{A}^-,md_\mathrm{B}^- \geq 0$ and $md_\mathrm{A}^+,md_\mathrm{B}^+\leq i$. 
The second statement holds since $rd^- \geq 0$, $rd^+ \leq \min\{\ell_\RA,\ell_\RB\}$ and the property that $\ell_\RA\leq\ellQVCA$ and $\ell_\RB\leq\ell_\mathrm{QVC}^\mathrm{B}$. The third statement follows since $pd_\mathrm{A}^-,pd_\mathrm{B}^- \geq 0$, $pd_\mathrm{A}^+\leq 6q_\mathit{MA}$, and $pd_\mathrm{B}^+\leq 6q_\mathit{MB}$. \end{proof} In order to avoid ambiguity, whenever necessary we use a superscript $i$ to indicate the value of the variables of the algorithm at the end of the $i$-th iteration. For instance, we denote Alice's recycling data at the end of the $i$-th iteration by $\mathit{RA}^i$. Before presenting the analysis of Algorithm~\ref{algo:MainalgorithmQMessage}, we formally define successful recycling. \begin{definition} \label{def:successful recycling} We say recycling is successful in the $i$-th iteration of Algorithm~\ref{algo:MainalgorithmQMessage} if the following hold: \begin{itemize} \item The algorithm does not abort in the $i$-th iteration, i.e., $\mathit{NextMESIndexA}^i,\mathit{NextMESIndexB}^i \neq \perp$, \item $\mathit{NextMESIndexA}^i=\mathit{NextMESIndexB}^i$, \item The block of MES registers indexed by $\mathit{NextMESIndexA}^i$ is in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state at the beginning of the $i$-th iteration. \end{itemize} \end{definition} Note that the conditions of Definition~\ref{def:successful recycling} are all satisfied in the first $\mathit{L}_{\mathrm{QVC}}$ iterations of the algorithm, since in fact no recycling is done in those iterations. Moreover, we have $\mathit{IndexA}\Br{1:\mathit{L}_{\mathrm{QVC}}}=\mathit{IndexB}\Br{1:\mathit{L}_{\mathrm{QVC}}}=1:\mathit{L}_{\mathrm{QVC}}$. {\bf Proof Outline of Theorem~\ref{thm:Qmessagelargealphabet}.} In order to prove successful simulation of an $n$-round protocol, it suffices to show that $\Phi \geq n/{2r}$ at the end of the simulation.
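For concreteness, the bookkeeping behind the potential $\Phi$ of Equation~\eqref{eqn:Q-phi} can be sketched as a toy classical computation (all quantities are supplied as plain integers; this is an illustration of the definitions, not part of the protocol):

```python
def potential(i, md_a_p, md_a_m, md_b_p, md_b_m,
              l_qvc_a, l_qvc_b, rd_p, rd_m,
              q_ma, q_mb, pd_a_p, pd_a_m, pd_b_p, pd_b_m,
              g, b, u):
    """Toy evaluation of Phi from its components (Eqs. Q-phimd..Q-phi)."""
    phi_md = 2 * i - md_a_p + 3 * md_a_m - md_b_p + 3 * md_b_m
    phi_rd = l_qvc_a + l_qvc_b + 13 * rd_m - 2 * rd_p
    phi_pd = 6 * q_ma + 6 * q_mb - pd_a_p + pd_a_m - pd_b_p + pd_b_m
    phi_q = g - b - 9 * u
    return phi_q - phi_md - phi_rd - phi_pd

# Fully synchronized, error-free snapshot after i = 4 iterations with
# 3 QVC blocks used: the classical potentials all vanish and Phi = g.
print(potential(4, 4, 0, 4, 0, 3, 3, 3, 0, 3, 3, 18, 0, 18, 0, 3, 0, 0))
```

In the fully synchronized, error-free case all three classical potentials vanish and $\Phi$ reduces to $g$, the number of good blocks.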
In Section~\ref{subsec:polysizeclassicalanalysis} we showed that except with exponentially small probability, the total number of hash collisions is~$O\br{n\epsilon}$. Then, for a sufficiently large number of iterations, it was sufficient for correctness to show that in any iteration with no error or hash collision the potential function increases by at least one, while any iteration with errors or hash collisions decreases the potential by at most some fixed constant. However, this statement is not necessarily true for Algorithm~\ref{algo:MainalgorithmQMessage} if the recycling of MESs has not been successful in an earlier iteration. In fact, the potential function is defined in terms of~$\mathit{JS}2$, which is a valid representation of the joint state at any stage in the simulation only if recycling has been successful so far. Therefore, to use such an argument, one needs to prove successful recycling first. On the other hand, to prove successful recycling in an iteration, we need to bound the number of iterations with a hash collision, as well as the number of iterations dedicated to ``recovery'' from hash collisions and transmission errors. Therefore, the analysis of the recycling-based protocol involves an inductive argument. The analysis in this section involves constants \begin{align*} c_1 < c_2 < c_3 < c_4 < c_5 < c_6 < c_7 < c_8 < c_9 \enspace, \end{align*} chosen such that each $c_i$ is sufficiently large, depending only on the $c_j$ with $j<i$. \begin{definition} We say an iteration of Algorithm~\ref{algo:MainalgorithmQMessage} suffers from a \emph{metadata hash collision\/} when $H_{\MA}=H_{\MAtilde}$ despite the fact that $\mathit{MA} \neq \widetilde{\MA}$, or $H_{\MB}=H_{\MBtilde}$ despite the fact that $\mathit{MB} \neq \widetilde{\MB}$. Note that we distinguish between the above scenario and when, for instance,~$H_{\MA}=H_{\MAtilde}'$ due to a transmission error on $H_{\MAtilde}$, even though the two might have similar effects.
\end{definition} The following lemma bounds the number of iterations with a metadata hash collision up to any point in the simulation, assuming successful recycling in the earlier iterations. \begin{lemma}\label{lem:MD collisions} Suppose that recycling is successful in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}. Then the number of iterations of the algorithm suffering from a metadata hash collision in the first $i$ iterations is at most $c_1n\epsilon$ with probability at least $1-2^{-\Theta(n\epsilon)}$. \end{lemma} \begin{proof} Note that a metadata hash collision occurs in an iteration only if $md_\mathrm{A}^- + md_\mathrm{B}^- \neq 0$ at the beginning of the iteration. Let $\alpha_\mathrm{MD}$ denote the number of such iterations in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}. It suffices to prove that \[\Pr\br{\alpha_\mathrm{MD} > c_1n\epsilon} \leq 2^{-\Theta(n\epsilon)} \enspace. \] Note that in any iteration $md_\mathrm{A}^- + md_\mathrm{B}^-$ increases by at most $6$. Moreover, in an iteration with $md_\mathrm{A}^- + md_\mathrm{B}^- \neq 0$, if $md_\mathrm{A}^- + md_\mathrm{B}^-$ decreases, it decreases by at least $1$. Therefore, in at least $\alpha_\mathrm{MD}/7$ iterations, $md_\mathrm{A}^- + md_\mathrm{B}^-$ increases or remains unchanged at a nonzero value. Note that $md_\mathrm{A}^- + md_\mathrm{B}^- > 0$ increases or remains unchanged only if a transmission error or a metadata hash collision occurs. Moreover, when $md_\mathrm{A}^- + md_\mathrm{B}^-$ increases from zero in an iteration, it is due to a transmission error. The number of iterations is less than $2n$, so the total number of iterations with transmission errors is at most $2n\epsilon$. This implies that in each of the remaining iterations, i.e., in at least $\alpha_\mathrm{MD}/7-2n\epsilon$ iterations, a metadata hash collision occurs.
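The amortized counting step above (a non-negative quantity gains at most $6$ per iteration and, whenever it drops, drops by at least $1$) can be checked numerically. The following Python sketch is purely illustrative and is not part of the protocol: the randomized update rule is a hypothetical stand-in for the evolution of $md_\mathrm{A}^- + md_\mathrm{B}^-$, and the assertion verifies that among the iterations starting at a nonzero value, at least a $1/7$ fraction see the quantity increase or remain unchanged.

```python
import random

def simulate(steps, seed):
    """Toy model of the counting argument: a non-negative counter that
    gains at most 6 per iteration and, when it drops, drops by at least 1.
    Returns (alpha, good): alpha counts iterations starting at a nonzero
    value; good counts iterations where the counter increases or remains
    unchanged at a nonzero value."""
    rng = random.Random(seed)
    c = 0
    alpha = 0
    good = 0
    for _ in range(steps):
        start = c
        if start > 0:
            alpha += 1
        if rng.random() < 0.4:
            c = start + rng.randint(0, 6)      # gain at most 6
        elif start > 0:
            c = start - rng.randint(1, start)  # drop by at least 1
        if c > start or (c == start and start > 0):
            good += 1
    return alpha, good

# The bound good >= alpha/7 is deterministic, whatever the randomness:
for seed in range(20):
    alpha, good = simulate(2000, seed)
    assert 7 * good >= alpha, (seed, alpha, good)
```

The assertion mirrors the inequality used in the proof: each drop is at least $1$, each gain at most $6$, so the drops are paid for by at most six times as many non-decreasing iterations. The constants $6$ and $7$ match the lemma, while the step probabilities are arbitrary.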
Since the algorithm uses independent seeds in each iteration and the probability of collision is chosen to be $0.1$, the expected number of collisions is at most $\alpha_\mathrm{MD}/10$. If $\alpha_\mathrm{MD} > c_1n\epsilon$ for a sufficiently large $c_1$, then the Chernoff bound implies that the probability of having so many collisions is at most $2^{-\Theta\br{n\epsilon}}$. \end{proof} \begin{definition} We refer to an iteration of Algorithm~\ref{algo:MainalgorithmQMessage} as a \emph{recovery iteration of type I\/} if at least one of Alice or Bob conducts one of the cases i.A, i.B, ii.A, or ii.B. \end{definition} We use the following lemma to bound the number of type I recovery iterations. \begin{lemma} \label{lem:type I recovery} Suppose that in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}, recycling is successful and the number of iterations suffering from a metadata hash collision is at most $c_1n\epsilon$. Then the number of type I recovery iterations in the first $i$ iterations is at most $c_2n\epsilon$. \end{lemma} \begin{proof} Let \[ \Phi_\mathrm{I} \defeq u+\Phi_{\mathrm{MD}}\enspace. \] By Lemma~\ref{lem:potential-value-vs-data} and the definition of $u$ in Eq.~\eqref{eqn:Q-u}, $\Phi_\mathrm{I}$ is always non-negative and is equal to zero if and only if Alice and Bob know each other's full metadata and have used the same number of MES blocks for QVC. Note that if $\Phi_\mathrm{I}=0$ at the beginning of an iteration, then the iteration is a type I recovery iteration only if a transmission error in communication of metadata messages occurs. The total number of such iterations is at most $2n\epsilon$. Let $\beta_\mathrm{I}$ denote the number of iterations in the first $i$ iterations starting with $\Phi_\mathrm{I}> 0$. Note that in any iteration, $\Phi_\mathrm{I}$ increases or remains unchanged at a nonzero value only if a metadata hash collision or a transmission error occurs. 
In each iteration, regardless of the number of errors and collisions, $\Phi_\mathrm{I}$ increases by at most $23$. Moreover, if $\Phi_\mathrm{I}$ decreases, it decreases by at least $1$. Assuming the number of metadata hash collisions is at most $c_1n\epsilon$, this implies that the number of iterations in which $\Phi_\mathrm{I}$ decreases is at most $23\br{c_1+2} n\epsilon$. So we have $\beta_\mathrm{I} \leq 24\br{c_1+2} n\epsilon$. Therefore, the total number of type I recovery iterations is at most $c_2n\epsilon$, where~$c_2 \defeq 24\br{c_1+2}+2$. \end{proof} \begin{definition} We say an iteration of Algorithm~\ref{algo:MainalgorithmQMessage} suffers from a \emph{quantum hash collision\/} when recycling has been successful so far, Alice and Bob know each other's metadata, have used the same number of MES blocks ($\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}$) and agree on their measurement pointers and their recycling data up to the measurement pointers ($\ell_\RA=\ell_\RB$ and ~$\mathit{RA}\Br{1:\ell_\RA}=\mathit{RB}\Br{1:\ell_\RB}$) but despite the fact that there is an undetected quantum error from earlier iterations, their quantum hash values match, i.e.,~$\QHA=\mathit{QHB}$. Note that we distinguish between the above scenario and when~$\QHA=\mathit{QHB}'$ due to a transmission error on $\mathit{QHB}$. \end{definition} We use the following lemma to bound the number of iterations suffering from a quantum hash collision. \begin{lemma} \label{lem:QH collisions} Suppose that in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage} recycling is successful and the number of iterations suffering from a metadata hash collision is at most $c_1n\epsilon$. Then the number of iterations suffering from a quantum hash collision in the first $i$ iterations is at most $c_3n\epsilon$ with probability at least $1-2^{-\Theta(n\epsilon)}$. 
\end{lemma} \begin{proof} Note that a quantum hash collision occurs in an iteration only if $rd^- \neq 0$ at the beginning of the iteration. Let $\alpha_\mathrm{RD}$ denote the number of such iterations in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}. It suffices to prove that \[\Pr\br{\alpha_\mathrm{RD} > c_3n\epsilon} \leq 2^{-\Theta(n\epsilon)} \enspace. \] Note that in any iteration $rd^-$ increases by at most $2$. Moreover, in an iteration with $rd^- \neq 0$, if $rd^-$ decreases, it decreases by at least $1$. Therefore, in at least $\alpha_\mathrm{RD}/3$ iterations, $rd^-$ increases or remains unchanged at a nonzero value. Note that $rd^->0$ increases or remains unchanged only if \begin{itemize} \item A metadata hash collision or a transmission error on metadata messages (i.e., $H_{\MA}$, $\ell_{\MA}$, $H_{\MB}$, $\ell_{\MB}$, $H_{\MBtilde}$, $\ell_{\MBtilde}$, $H_{\MAtilde}$, $\ell_{\MAtilde}$) occurs, or else, \item The iteration is a type I recovery iteration. Alice and Bob are still reconciling an earlier inconsistency in their metadata and they are both in case i.A or case i.B, or one of them is in case ii.A and the other one in case ii.B. Else, \item A transmission error on quantum hash values or a quantum hash collision occurs. At least one party does not realize that $rd^->0$ and conducts one of the cases v, vi.A, vi.B, or vii. 
\end{itemize} Moreover, the value of $rd^-$ increases from zero in an iteration only if \begin{itemize} \item A metadata hash collision occurs and Alice and Bob act based on incorrect estimates of each other's recycling data, or else, \item A transmission error on metadata messages occurs, or else, \item A transmission error on quantum hash values occurs and only one party conducts case iv, or else, \item A transmission error on the Pauli data messages (i.e., $H_{\PA}$, $\ell_{\PA}$, $H_{\PB}$, $\ell_{\PB}$, $H_{\PBtilde}$, $\ell_{\PBtilde}$, $H_{\PAtilde}$, $\ell_{\PAtilde}$) occurs and one party conducts case vi.A or vi.B while the other is in case vii. Else, \item A transmission error occurs on the communicated QVC messages when both parties conduct case vii. \end{itemize} Assuming the number of metadata hash collisions is at most $c_1n\epsilon$, by Lemma~\ref{lem:type I recovery}, the total number of type I recovery iterations is at most $c_2n\epsilon$. The total number of transmission errors is at most $2n\epsilon$. Therefore, in at least $\alpha_\mathrm{RD}/3-\br{c_1+c_2+2}n\epsilon$ iterations a quantum hash collision occurs. The shared random string used as the classical seed for quantum hashing is $\delta$-biased with $\delta=2^{-\Theta\br{n\sqrt{\epsilon}}}$. By Lemma~\ref{lem:stretch}, the seeds are also $\delta^{\Theta\br{1}}$-statistically close to being $\Theta\br{R_\mathrm{total}}$-wise independent. Therefore, all hashing steps are statistically close to being fully independent. Combined with Lemma~\ref{lem:quantum hash-delta biased}, this implies that the expected number of quantum hash collisions is at most $10^{-3}\alpha_\mathrm{RD}$. For sufficiently large $c_3$, if $\alpha_\mathrm{RD} > c_3n\epsilon$, the Chernoff bound implies that the probability of having at least $\alpha_\mathrm{RD}/3-\br{c_1+c_2+2}n\epsilon$ quantum hash collisions is at most $2^{-\Theta(n\epsilon)}$. 
\end{proof} \begin{definition} We refer to an iteration of Algorithm~\ref{algo:MainalgorithmQMessage} as a recovery iteration of type II if it is not a type I recovery iteration and at least one of Alice or Bob conducts one of the cases iii, iv, or v. \end{definition} We use the following lemma to bound the number of type II recovery iterations. \begin{lemma} \label{lem:type II recovery} Suppose that in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}, recycling is successful, the number of iterations suffering from a metadata hash collision is at most $c_1n\epsilon$, and the number of iterations suffering from a quantum hash collision is at most $c_3n\epsilon$. Then the total number of type II recovery iterations in the first $i$ iterations is at most $c_4n\epsilon$. \end{lemma} \begin{proof} Note that by Lemma~\ref{lem:potential-value-vs-data}, $\Phi_{\mathrm{RD}}$ is always non-negative. If at the beginning of an iteration $\Phi_\mathrm{RD}=0$, then the iteration is a type II recovery iteration only if \begin{itemize} \item $\Phi_\mathrm{MD}>0$ but, due to a metadata hash collision or a transmission error on metadata messages, neither Alice nor Bob realizes this. In this case, they compute their estimates of each other's recycling data based on incorrect estimates of each other's metadata. \item $\Phi_\mathrm{MD}=0$ but a transmission error in communication of quantum hashes occurs. \end{itemize} Therefore, in the first $i$ iterations the total number of type II recovery iterations starting with $\Phi_\mathrm{RD}=0$ is at most~$\br{c_1+2}n\epsilon$. \\ Let $\beta_\mathrm{RD}$ denote the number of iterations starting with $\Phi_\mathrm{RD} > 0$ in the first $i$ iterations.
Assuming successful recycling in the preceding iterations, $\Phi_\mathrm{RD}>0$ increases or remains unchanged in an iteration only if \begin{itemize} \item The iteration is a type I recovery iteration, or else, \item $\Phi_\mathrm{MD}>0$ but, due to a metadata hash collision or transmission errors on metadata messages, Alice and Bob do not realize this and act based on their incorrect estimates of each other's recycling data, or else, \item $\Phi_\mathrm{MD}=0$, i.e., Alice and Bob have correct estimates of each other's recycling data, but a quantum hash collision or a transmission error on quantum hash values occurs. \end{itemize} Moreover, $\Phi_\mathrm{RD}$ increases from zero in an iteration only if \begin{itemize} \item A transmission error occurs, or else, \item A metadata hash collision occurs and the two parties act based on incorrect estimates of each other's recycling data. \end{itemize} Therefore, the number of iterations in the first $i$ iterations with $\Phi_\mathrm{RD}$ increasing or remaining unchanged at a nonzero value is at most~$\br{c_1+c_2+c_3+2}n\epsilon$. Note that in each iteration, regardless of the number of errors and collisions, $\Phi_\mathrm{RD}$ increases by at most $30$. Moreover, if $\Phi_\mathrm{RD}$ decreases, it decreases by at least $1$. This implies that the number of iterations in which $\Phi_\mathrm{RD}$ decreases is at most $30\br{c_1+c_2+c_3+2}n\epsilon$. So, we have $\beta_\mathrm{RD} \leq 31\br{c_1+c_2+c_3+2}n\epsilon$. \\ Therefore, the total number of recovery iterations of type II in the first $i$ iterations is at most $c_4n\epsilon$, where $c_4 \defeq 31\br{c_1+c_2+c_3+2}+\br{c_1+2}$.
\end{proof} \begin{definition} We say an iteration of Algorithm~\ref{algo:MainalgorithmQMessage} suffers from a \emph{Pauli data hash collision\/} when recycling has been successful so far, Alice and Bob know each other's metadata, agree on the number of MES blocks they have used ($\ellQVCA=\ell_\mathrm{QVC}^\mathrm{B}$), agree on their recycling data, their measurement pointers satisfy $\ell_\RA=\ell_\RB=\ellQVCA$, all the non-measured MES blocks are in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state and $H_{\PA}=H_{\PAtilde}$ despite the fact that $\mathit{PA} \neq \widetilde{\PA}$ or $H_{\PB}=H_{\PBtilde}$ despite the fact that $\mathit{PB} \neq \widetilde{\PB}$. Note that we distinguish between the above scenario and when for instance~$H_{\PA}=H_{\PAtilde}'$ due to a transmission error on $H_{\PAtilde}$. \end{definition} We use the following lemma to bound the number of iterations suffering from a Pauli data hash collision. \begin{lemma} \label{lem:PD collisions} Suppose that in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}, recycling is successful, the number of iterations suffering from a metadata hash collision is at most $c_1n\epsilon$ and the number of iterations suffering from a quantum hash collision is at most $c_3n\epsilon$. Then the number of iterations suffering from a Pauli data hash collision in the first $i$ iterations is at most $c_5n\epsilon$ with probability at least $1-2^{-\Theta(n\epsilon)}$. \end{lemma} \begin{proof} Note that a Pauli data hash collision occurs in an iteration only if $pd_\mathrm{A}^- + pd_\mathrm{B}^- \neq 0$ at the beginning of the iteration. Let $\alpha_\mathrm{PD}$ denote the number of such iterations in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}. We prove that \[\Pr\br{\alpha_\mathrm{PD} > c_5n\epsilon} \leq 2^{-\Theta(n\epsilon)} \enspace. \] Note that in any iteration $pd_\mathrm{A}^- + pd_\mathrm{B}^-$ increases by at most $8$. 
Moreover, in an iteration with $pd_\mathrm{A}^- + pd_\mathrm{B}^- \neq 0$, if $pd_\mathrm{A}^- + pd_\mathrm{B}^-$ decreases, it decreases by at least $1$. Therefore, in at least $\alpha_\mathrm{PD}/9$ iterations, $pd_\mathrm{A}^- + pd_\mathrm{B}^-$ increases or remains unchanged at a nonzero value. Note that when $pd_\mathrm{A}^- + pd_\mathrm{B}^- >0$ increases or remains unchanged in an iteration, it is due to one of the following reasons: \begin{itemize} \item The iteration is a type I recovery iteration, or else, \item The iteration is a type II recovery iteration, or else, \item A Pauli data hash collision or a transmission error on Pauli data messages (i.e., $H_{\PA}$, $\ell_{\PA}$, $H_{\PB}$, $\ell_{\PB}$, $H_{\PBtilde}$, $\ell_{\PBtilde}$, $H_{\PAtilde}$, $\ell_{\PAtilde}$) occurs. \end{itemize} Moreover, $pd_\mathrm{A}^- + pd_\mathrm{B}^-$ increases from zero in an iteration only if \begin{itemize} \item A metadata hash collision or transmission error on the metadata messages occurs, or else, \item A transmission error on the quantum hash values occurs, or else, \item A transmission error on the Pauli data messages occurs, or else, \item Both parties conduct case vi.B and, due to a transmission error, at least one of Alice or Bob extends her/his estimate of the other party's Pauli data incorrectly. \end{itemize} Assuming the number of iterations of Algorithm~\ref{algo:MainalgorithmQMessage} suffering from a metadata hash collision is at most $c_1n\epsilon$ and the number of iterations suffering from a quantum hash collision is at most $c_3n\epsilon$, the total number of type I and type II recovery iterations is at most~$\br{c_2+c_4}n\epsilon$. The number of transmission errors is at most $2n\epsilon$. Therefore, in at least $\alpha_\mathrm{PD}/9-\br{c_1+c_2+c_4+2}n\epsilon$ iterations, a Pauli data hash collision occurs.
Since the algorithm uses independent seeds in each iteration and the probability of a collision is chosen to be $0.1$, the expected number of Pauli data hash collisions is at most $\alpha_\mathrm{PD}/10$. For sufficiently large $c_5$, if $\alpha_\mathrm{PD} > c_5n\epsilon$, the Chernoff bound implies that the probability of having so many Pauli data hash collisions in the first $i$ iterations is at most $2^{-\Theta\br{n\epsilon}}$. \end{proof} \begin{definition} We refer to an iteration of Algorithm~\ref{algo:MainalgorithmQMessage} as a recovery iteration of type III if it is not a type I or type II recovery iteration and at least one of Alice or Bob conducts one of the cases vi.A or vi.B. \end{definition} We use the following lemma to bound the number of type III recovery iterations. \begin{lemma} \label{lem:type III recovery} Suppose that in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}, recycling is successful and the numbers of iterations suffering from metadata, quantum, and Pauli data hash collisions are at most $c_1n\epsilon$, $c_3n\epsilon$, and $c_5n\epsilon$, respectively. Then the total number of type III recovery iterations in the first $i$ iterations is at most $c_6n\epsilon$. \end{lemma} \begin{proof} Note that by Lemma~\ref{lem:potential-value-vs-data}, $\Phi_{\mathrm{PD}}$ is always non-negative and it is equal to zero if and only if Alice and Bob have full knowledge of each other's Pauli data. If at the beginning of an iteration $\Phi_\mathrm{PD}=0$, then the iteration is a type III recovery iteration only if \begin{itemize} \item A transmission error occurs on the Pauli data messages, or else, \item A metadata hash collision or a transmission error on metadata messages occurs. In this case, at least one party incorrectly believes that his/her estimate of the other party's Pauli data is not full-length.
\end{itemize} Therefore, in the first $i$ iterations the total number of type III recovery iterations starting with $\Phi_\mathrm{PD}=0$ is at most~$\br{c_1+2}n\epsilon$. Let $\beta_\mathrm{PD}$ denote the number of iterations starting with $\Phi_\mathrm{PD} > 0$ in the first $i$ iterations. Note that in any iteration $\Phi_\mathrm{PD}>0$ increases or remains unchanged only if \begin{itemize} \item The iteration is a type I recovery iteration, or else, \item The iteration is a type II recovery iteration, or else, \item A Pauli data hash collision or a transmission error on Pauli data messages occurs. \end{itemize} Moreover, $\Phi_\mathrm{PD}$ increases from zero in an iteration only if \begin{itemize} \item A metadata hash collision or transmission error on the metadata messages occurs, or else, \item The iteration is a type I recovery iteration in which one party conducts case ii.A and the other case ii.B, or else, \item A transmission error on the quantum hash values occurs, or else, \item The iteration is a type II recovery iteration in which both parties conduct case iii or both conduct case iv, or else, \item A transmission error on the Pauli data messages occurs. \end{itemize} Therefore, the number of iterations in the first $i$ iterations with $\Phi_\mathrm{PD}$ increasing or remaining unchanged at a nonzero value is at most~$\br{c_1+c_2+c_4+c_5+2}n\epsilon$. Note that in each iteration, regardless of the number of errors and collisions, $\Phi_\mathrm{PD}$ increases by at most $22$. Moreover, if $\Phi_\mathrm{PD}$ decreases, it decreases by at least $1$. This implies that the number of iterations in which $\Phi_\mathrm{PD}$ decreases is at most $22\br{c_1+c_2+c_4+c_5+2}n\epsilon$. So, we have $\beta_\mathrm{PD} \leq 23\br{c_1+c_2+c_4+c_5+2}n\epsilon$. Therefore, the total number of recovery iterations of type III in the first $i$ iterations is at most $c_6n\epsilon$, where $c_6 \defeq 23\br{c_1+c_2+c_4+c_5+2}+\br{c_1+2}$.
\end{proof} Let $\omega_i$ denote the number of iterations suffering from a transmission error or a hash collision in the first $i$ iterations, plus the number of recovery iterations of type I, II, or III, in the first $i$ iterations. Note that the bounds and probabilities in Lemmas~\ref{lem:MD collisions}--\ref{lem:type III recovery} are all independent of the iteration number $i$. As a corollary we have: \begin{cor} \label{cor:errors-collisions} There exist ~$q=2^{-\Theta(n\epsilon)}$ and a constant $c_7$ such that, for every $i\in \Br{R_\mathrm{total}}$, assuming successful recycling in the first $i$ iterations of Algorithm~\ref{algo:MainalgorithmQMessage}, except with probability at most~$q$, we have $\omega_i \leq c_7 n\epsilon$. \end{cor} The following lemma is the last ingredient we need to prove successful recycling in every iteration of the simulation. Recall that $\mathit{RA}^i$ and $\mathit{RB}^i$ denote the recycling data of Alice and Bob, respectively, at the end of the $i$-th iteration. \begin{lemma} \label{lem:recycling_requirements} Let $t=c_8n\epsilon$ where $c_8>3c_7$. Then for every $i\in \Br{R_\mathrm{total}}$ where $i\geq t$, if recycling is successful in the first $i-1$ iterations and $\omega_{i-1} \leq c_7 n\epsilon$, then: \begin{enumerate} \item $\mathit{RA}^i\Br{1:i-t}=\mathit{RB}^i\Br{1:i-t}$, i.e., at the end of iteration $i$, the recycling data of Alice and Bob agree in a prefix of length at least $i-t$. \item $\mathit{RA}^i\Br{1:i-t}=\mathit{RA}^{i+1}\Br{1:i-t}$, i.e., the prefix of $\mathit{RA}$ of length $i-t$ does not get modified in the next iteration. The same statement holds for $\mathit{RB}$. \item For every $k\in \Br{i-t}$ such that $\mathit{RA}^i\Br{k}=\mathit{RB}^i\Br{k}=\sS$, we have $W_{k}=0^{4r}$. \suppress{ ... in $\mathit{JS}2$ representation at the end of the $i$-th iteration the $k$-th block of MES registers is in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state. 
} \end{enumerate} \end{lemma} \begin{proof} Part $1$: Toward contradiction, suppose that there exists $t'\in\Br{t,i-1}$ such that $\mathit{RA}^i\Br{i-t'}\neq \mathit{RB}^i\Br{i-t'}$. Without loss of generality, assume that $\mathit{RA}^i\Br{i-t'}=\sM$ and $\mathit{RB}^i\Br{i-t'}=\sS$. Suppose that the last time Alice's measurement pointer $\ell_\RA$ was equal to $i-t'$ was $t_2$ iterations earlier, i.e., iteration $i-t_2$. In that iteration, $\ell_\RA$ has distance $t_1\defeq t'-t_2$ from $i-t_2$, the iteration number. Note that the distance between the iteration number and $\ell_\RA$ increases only in (some) recovery iterations, and it increases by at most $2$: the distance remains the same if Alice is in case v or case vii. Otherwise, it increases by $1$ if $\ell_\RA$ does not move and increases by $2$ when it moves back. This implies that in the first $i-t_2$ iterations, there have been at least $t_1/2$ recovery iterations. In the $t_2$ iterations after that, there is an inconsistency in the recycling data which does not get resolved. In any of these iterations, one of the following holds: \begin{itemize} \item A metadata hash collision or a transmission error on metadata messages occurs, or else, \item The iteration is a type I recovery iteration and Alice and Bob are still trying to reconcile an inconsistency in their metadata, or else, \item The iteration is a type II recovery iteration. In this case, no metadata transmission error or collision occurs and the iteration is not a type I recovery iteration. So Alice and Bob know each other's recycling data and, aware of the inconsistency, are trying to resolve it. \end{itemize} Therefore, in the first $i-1$ iterations, the number of recovery iterations plus the number of iterations suffering from a transmission error or a collision is at least \[ t_1/2+t_2-1 \geq t'/2-1 \geq t/2-1 \geq \frac{c_8}{2}n\epsilon-1\enspace, \] contradicting~$\omega_{i-1} \leq c_7 n\epsilon$.
Note that here we implicitly use the reasonable assumption that $n\epsilon$ is at least a constant. Part $2$: Suppose that recycling is successful in the first $i-1$ iterations and $\omega_{i-1}\leq c_7n\epsilon$. Note that by the same argument as in part $1$, at the end of iteration $i+1$, the difference between the measurement pointers and the iteration number is at most $2\omega_{i-1}+4 \leq 2c_7n\epsilon+4 \leq t$. Therefore, the prefixes of both $\mathit{RA}$ and $\mathit{RB}$ of length $i-t$ do not get modified in the next iteration. Part $3$: Note that the difference between the iteration number and $\ellQVCA$ increases only in (some) recovery iterations and it increases by at most $1$: The difference remains the same if Alice is in case ii.B or case vii. Otherwise, $\ellQVCA$ remains unchanged and the distance increases by $1$. The number of recovery iterations in the first $i$ iterations is at most $\omega_{i-1}+1 < t$. Therefore, at the end of the $i$-th iteration we have~$\min\{\ellQVCA,\ell_\mathrm{QVC}^\mathrm{B}\} > i-t$. Toward contradiction, suppose that there exists $t'\in\Br{t,i-1}$ such that~$\mathit{RA}^i\Br{i-t'}=\mathit{RB}^i\Br{i-t'}=\sS$ and $W_{i-t'} \neq 0^{4r}$. \suppress{...in $\mathit{JS}2$ representation at the end of the $i$-th iteration the block $i-t'$ of MES registers is not in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state.} This is due to one of the following scenarios: \begin{itemize} \suppress{ \item The corresponding block of MES registers has been used only by one party for communication using QVC. } \item The corresponding block of MES registers has been used by both parties for communication using QVC, but in different iterations (out-of-sync QVC). \item It has been used in the same iteration by both parties for communication using QVC but transmission errors have occurred on the messages. 
\end{itemize} In any case, suppose that the last time one of the pointers $\ellQVCA$ or $\ell_\mathrm{QVC}^\mathrm{B}$ was equal to $i-t'$ was $t_2$ iterations earlier, i.e., iteration $i-t_2$, and without loss of generality, suppose that $\ellQVCA$ is that pointer. In iteration $i-t_2$, the pointer $\ellQVCA$ has distance $t_1\defeq t'-t_2$ from $i-t_2$, the iteration number. This implies that in the first $i-t_2$ iterations, there have been at least $t_1$ recovery iterations. In the $t_2$ iterations after that, the block of MES registers indexed by~$\mathit{IndexA}\Br{i-t'}$ is not measured by either party. In any of these iterations, one of the following holds: \begin{itemize} \item A metadata hash collision or a transmission error on metadata messages occurs, or else, \item The iteration is a type I recovery iteration and Alice and Bob are still trying to reconcile an inconsistency in their metadata, or else, \item A quantum hash collision or a transmission error on quantum hash values occurs, or else, \item The iteration is a type II recovery iteration. In this case, no metadata transmission error or collision occurs and the iteration is not a type I recovery iteration. So Alice and Bob know each other's recycling data, and they are both in case iii or both in case iv. \end{itemize} \suppress{Note that, if at the $i$-th iteration only one party has used the block of MES registers, then in any of the previous $t_2$ iterations exactly one of the first two scenarios occurs.} The above argument implies that in the first $i-1$ iterations, the number of recovery iterations plus the number of iterations suffering from a transmission error or a collision is at least \[ t_1+t_2-1 = t'-1 \geq t-1 \geq c_8n\epsilon-1\enspace, \] contradicting~$\omega_{i-1} \leq c_7 n\epsilon$. \end{proof} We are now ready to prove that, except with exponentially small probability, recycling is successful in every iteration of the algorithm.
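The Chernoff-bound steps in Lemmas~\ref{lem:MD collisions}, \ref{lem:QH collisions} and \ref{lem:PD collisions} all rest on the same fact: with per-iteration collision probability $0.1$, observing collisions in a constant fraction (e.g., $1/7$) of $m$ independent iterations has probability $2^{-\Theta(m)}$. The Python sketch below is an illustration only; it computes the exact binomial upper tail for a few hypothetical values of $m$ (chosen for the example, not taken from the protocol) and checks that the tail shrinks as $m$ grows.

```python
from math import comb

def binom_upper_tail(m, k, p=0.1):
    """Exact P[Bin(m, p) >= k]: the probability of at least k hash
    collisions among m independent iterations, each colliding with
    probability p."""
    return sum(comb(m, j) * p**j * (1 - p) ** (m - j) for j in range(k, m + 1))

# Collisions in a 1/7 fraction of the iterations, versus expectation m/10:
tails = [binom_upper_tail(m, m // 7) for m in (70, 140, 280)]
assert tails[0] > tails[1] > tails[2]  # the tail probability decays with m
```

Since $1/7$ exceeds the mean rate $1/10$, the tail decays exponentially in $m$; doubling $m$ visibly shrinks it, which is exactly the behavior the lemmas exploit with $m = \Theta(n\epsilon)$.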
Recall that we denote Alice's recycling pointer at the end of iteration $i$ by $\ell_\mathit{NextMESA}^{i}$. We use $m_i^\mathrm{A}$ to denote the number of $\sM$ symbols in~$\mathit{RA}^i\Br{1:\ell_\mathit{NextMESA}^{i}}$. Similarly, $\ell_\mathit{NextMESB}^{i}$ denotes Bob's recycling pointer at the end of iteration $i$ and the number of $\sM$ symbols in~$\mathit{RB}^i\Br{1:\ell_\mathit{NextMESB}^{i}}$ is denoted by $m_i^\mathrm{B}$. \begin{lemma} \label{lem:successful-recycling} Let $\mathit{L}_{\mathrm{QVC}}=c_9n\epsilon$, where $c_9 > c_7+c_8$. Then with probability at least $1-2^{-\Theta(n\epsilon)}$, recycling is successful throughout the execution of Algorithm~\ref{algo:MainalgorithmQMessage}. \end{lemma} \begin{proof} The proof is based on induction on the iteration number. Note that recycling starts from iteration $\mathit{L}_{\mathrm{QVC}}+1$. \textbf{Base case} ($i=\mathit{L}_{\mathrm{QVC}}+1$): Note that the conditions of Definition~\ref{def:successful recycling} are satisfied in the first $\mathit{L}_{\mathrm{QVC}}$ iterations of the algorithm and we have $\mathit{IndexA}\Br{1:\mathit{L}_{\mathrm{QVC}}}=\mathit{IndexB}\Br{1:\mathit{L}_{\mathrm{QVC}}}=1:\mathit{L}_{\mathrm{QVC}}$. Therefore, by Corollary~\ref{cor:errors-collisions}, except with probability at most~$q=2^{-\Theta(n\epsilon)}$, we have~$\omega_{\mathit{L}_{\mathrm{QVC}}} \leq c_7 n\epsilon$. Assuming~$\omega_{\mathit{L}_{\mathrm{QVC}}} \leq c_7 n\epsilon$, by Lemma~\ref{lem:recycling_requirements},~$\mathit{RA}^{\mathit{L}_{\mathrm{QVC}}+1}\Br{1:\mathit{L}_{\mathrm{QVC}}+1-t}=\mathit{RB}^{\mathit{L}_{\mathrm{QVC}}+1}\Br{1:\mathit{L}_{\mathrm{QVC}}+1-t}$ and for every $k\in \Br{\mathit{L}_{\mathrm{QVC}}+1-t}$ such that $\mathit{RA}^{\mathit{L}_{\mathrm{QVC}}+1}\Br{k}=\mathit{RB}^{\mathit{L}_{\mathrm{QVC}}+1}\Br{k}=\sS$, we have $W_{k}=0^{4r}$. 
\suppress{in $\mathit{JS}2$ representation at the end of iteration $\mathit{L}_{\mathrm{QVC}}+1$, the $k$-th block of MES registers is in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state.} Note that the number of $\sM$ symbols in $\mathit{RA}^{\mathit{L}_{\mathrm{QVC}}+1}\Br{1:\mathit{L}_{\mathrm{QVC}}+1-t}$ and $\mathit{RB}^{\mathit{L}_{\mathrm{QVC}}+1}\Br{1:\mathit{L}_{\mathrm{QVC}}+1-t}$ is at most the number of type I and type II recovery iterations so far, hence at most $\omega_{\mathit{L}_{\mathrm{QVC}}+1} \leq \omega_{\mathit{L}_{\mathrm{QVC}}}+1 \leq c_7 n\epsilon+1 < \mathit{L}_{\mathrm{QVC}}+1-t$. Therefore, after running the \textsf{Recycle} subroutine, the algorithm does not abort and at the end of iteration $\mathit{L}_{\mathrm{QVC}}+1$, the recycling pointers are equal, i.e., $\ell_\mathit{NextMESA}^{\mathit{L}_{\mathrm{QVC}}+1}=\ell_\mathit{NextMESB}^{\mathit{L}_{\mathrm{QVC}}+1}$. Together with the fact that $\mathit{IndexA}\Br{1:\mathit{L}_{\mathrm{QVC}}}=\mathit{IndexB}\Br{1:\mathit{L}_{\mathrm{QVC}}}$, this implies that the conditions of Definition~\ref{def:successful recycling} are satisfied. Therefore, there exists an event $\mathcal{E}$ of probability at most~$q=2^{-\Theta(n\epsilon)}$ such that if $\lnot\mathcal{E}$ then, \begin{itemize} \item recycling is successful in iteration $\mathit{L}_{\mathrm{QVC}}+1$, \item $\ell_\mathit{NextMESA}^{\mathit{L}_{\mathrm{QVC}}+1}=\ell_\mathit{NextMESB}^{\mathit{L}_{\mathrm{QVC}}+1}$, and \item $\ell_\mathit{NextMESA}^{\mathit{L}_{\mathrm{QVC}}+1}=m_{\mathit{L}_{\mathrm{QVC}}+1}^\mathrm{A}+1$ and $\ell_\mathit{NextMESB}^{\mathit{L}_{\mathrm{QVC}}+1}=m_{\mathit{L}_{\mathrm{QVC}}+1}^\mathrm{B}+1$. 
\end{itemize} For $\mathit{L}_{\mathrm{QVC}} < i \leq R_\mathrm{total}$, let $\mathcal{T}_i$ be the following statement in terms of the iteration number $i$: \begin{itemize} \item Recycling is successful in the first $i$ iterations of the algorithm, \item $\ell_\mathit{NextMESA}^{i} = \ell_\mathit{NextMESB}^{i}$, i.e., the recycling pointers are equal at the end of iteration $i$, and \item $\ell_\mathit{NextMESA}^{i}=m_{i}^\mathrm{A}+i-\mathit{L}_{\mathrm{QVC}}$ and $\ell_\mathit{NextMESB}^{i}=m_{i}^\mathrm{B}+i-\mathit{L}_{\mathrm{QVC}}$. \end{itemize} \textbf{Induction hypothesis:} For $\mathit{L}_{\mathrm{QVC}} < i \leq R_\mathrm{total}$, there exists an event $\mathcal{E}_i$ of probability at most~$\br{i-\mathit{L}_{\mathrm{QVC}}}\cdot q$ such that if $\lnot\mathcal{E}_i$ then $\mathcal{T}_i$ holds. \textbf{Inductive step:} Assuming $\lnot\mathcal{E}_i$, by Corollary~\ref{cor:errors-collisions}, except with probability at most~$q=2^{-\Theta(n\epsilon)}$, we have~$\omega_{i} \leq c_7 n\epsilon$. Let $\mathcal{E}'$ be the event that~$\omega_{i} > c_7 n\epsilon$. Note that $\Pr \left( \mathcal{E}'| \lnot \mathcal{E}_i\right) \leq q$. Suppose further that $\lnot \mathcal{E}'$. Since $\ell_\mathit{NextMESA}^{i}=m_{i}^\mathrm{A}+i-\mathit{L}_{\mathrm{QVC}}$, we have \[ i-t-\ell_\mathit{NextMESA}^{i} = \mathit{L}_{\mathrm{QVC}}-t-m_{i}^\mathrm{A} \geq \mathit{L}_{\mathrm{QVC}}-t-\omega_{i} \geq \Omega(n\epsilon). \] Therefore, by induction hypothesis, we also have $i-t-\ell_\mathit{NextMESB}^{i} \geq \Omega(n\epsilon)$. By part $1$ of Lemma~\ref{lem:recycling_requirements}, we have $\mathit{RA}^{i+1}\Br{1:i+1-t}=\mathit{RB}^{i+1}\Br{1:i+1-t}$. Since $\omega_{i-1} \leq \omega_{i} \leq c_7 n\epsilon$, by part $2$ of Lemma~\ref{lem:recycling_requirements}, we have $\mathit{RA}^{i+1}\Br{1:i-t}=\mathit{RA}^i\Br{1:i-t}$ and $\mathit{RB}^{i+1}\Br{1:i-t}=\mathit{RB}^i\Br{1:i-t}$. 
Therefore, the algorithm does not abort in iteration $i+1$ and we have $\ell_\mathit{NextMESA}^{i+1} = \ell_\mathit{NextMESB}^{i+1}$. Moreover, $\ell_\mathit{NextMESA}^{i+1}=m_{i+1}^\mathrm{A}+(i+1)-\mathit{L}_{\mathrm{QVC}}$ and $\ell_\mathit{NextMESB}^{i+1}=m_{i+1}^\mathrm{B}+(i+1)-\mathit{L}_{\mathrm{QVC}}$. Note that since recycling is successful in the first $i$ iterations, at the beginning of iteration $i+1$, we have $\mathit{IndexA}=\mathit{IndexB}$. So $\mathit{NextMESIndexA}^{i+1}=\mathit{NextMESIndexB}^{i+1}\neq \perp$, i.e., the first and second conditions of Definition~\ref{def:successful recycling} are satisfied for iteration $i+1$. By part $3$ of Lemma~\ref{lem:recycling_requirements}, for every $k\in \Br{i+1-t}$ such that $\mathit{RA}^{i+1}\Br{k}=\mathit{RB}^{i+1}\Br{k}=\sS$, we have $W_{k}=0^{4r}$.\suppress{in $\mathit{JS}2$ representation at the end of iteration $i+1$, the $k$-th block of MES registers is in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state.} Note that in the strings $\mathit{IndexA}$ and $\mathit{IndexB}$, while each index in $\Br{\mathit{L}_{\mathrm{QVC}}}$ may appear several times before the recycling pointers, it can only appear at most once after these pointers. Therefore, the block of MES registers indexed by $\mathit{NextMESIndexA}^{i+1}$ is indeed in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state when it is recycled in iteration $i+1$ and the third condition of Definition~\ref{def:successful recycling} is also satisfied. For $\mathcal{E}_{i+1} \defeq \mathcal{E}_i \lor \mathcal{E}'$, we have \[ \Pr \br{\mathcal{E}_{i+1}} \leq \Pr \br{\mathcal{E}_{i}}+ \Pr \br{\mathcal{E}'|\lnot \mathcal{E}_i} \leq \br{i-\mathit{L}_{\mathrm{QVC}}}\cdot q+q=\br{i+1-\mathit{L}_{\mathrm{QVC}}}\cdot q \enspace. \] By the above argument, if $\lnot\mathcal{E}_{i+1}$ then $\mathcal{T}_{i+1}$ holds. Note that for $\mathit{L}_{\mathrm{QVC}} < i \leq R_\mathrm{total}$, we have \[ \br{i-\mathit{L}_{\mathrm{QVC}}}\cdot q = 2^{-\Theta(n\epsilon)} \enspace. 
\] \end{proof} \begin{lemma} \label{lem:Q-potential-increase} Assuming successful recycling throughout the execution of Algorithm~\ref{algo:MainalgorithmQMessage}, each iteration with no transmission error or hash collision increases the potential function $\Phi$ defined in Equation~\eqref{eqn:Q-phi} by at least $1$. \end{lemma} \begin{proof} Note that in an iteration with no error or hash collision Alice and Bob agree on the iteration type. Moreover, if $\mathit{Itertype}=\mathrm{MD}, \mathrm{RD}$ or $\mathrm{PD}$, they also agree on whether they extend or rewind the data and if $\mathit{Itertype}=\mathrm{MES}$ (Case ii), then exactly one of them is in sub-case A and the other one in sub-case B. We analyze the potential function in each case keeping in mind the hierarchy of the cases; e.g., Case ii or later cases are encountered only if Alice and Bob have full knowledge of each other's metadata. Lemma~\ref{lem:potential-value-vs-data} guarantees that $\Phi_{\mathrm{MD}}=0$ on entering Case ii, $\Phi_{\mathrm{MD}}=\Phi_{\mathrm{RD}}=0$ on entering Case vi and $\Phi_{\mathrm{MD}}=\Phi_{\mathrm{RD}}=\Phi_{\mathrm{PD}}=0$ on entering Case vii. \begin{itemize} \item Alice and Bob are in Case i.A: \begin{itemize} \item $\Phi_{\mathrm{RD}}$, $\Phi_{\mathrm{PD}}$ and $\Phi_{\mathrm{Q}}$ stay the same. \item $i$ increases by $1$. \item $md_\mathrm{A}^+$ and $md_\mathrm{B}^+$ stay the same. \item None of $md_\mathrm{A}^-$ and $md_\mathrm{B}^-$ increases and at least one decreases by $1$. \end{itemize} Therefore, $\Phi_{\mathrm{MD}}$ decreases by at least $3-2=1$ and $\Phi$ increases by at least $1$. \item Alice and Bob are in Case i.B: \begin{itemize} \item $\Phi_{\mathrm{RD}}$, $\Phi_{\mathrm{PD}}$ and $\Phi_{\mathrm{Q}}$ stay the same. \item $i$ increases by $1$. \item $md_\mathrm{A}^-$ and $md_\mathrm{B}^-$ stay at $0$. 
\item At least one of $\ell_{\MA}$ or $\ell_{\MB}$ is smaller than $i-1$; If only $\ell_{\MA} < i-1$, then $md_\mathrm{A}^+$ increases by $2$, and $md_\mathrm{B}^+$ by $1$. The case where only $\ell_{\MB} < i-1$ is similar. If both are smaller than $i-1$, then $md_\mathrm{A}^+$ and $md_\mathrm{B}^+$ both increase by $2$. \end{itemize} Therefore, $\Phi_{\mathrm{MD}}$ decreases by at least $3-2=1$ and $\Phi$ increases by at least $1$. \item Alice is in Case ii.A, Bob is in Case ii.B: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ stays at $0$. \item $rd^+$ and $rd^-$ stay the same. \item $\ellQVCA$ stays the same and $\ell_\mathrm{QVC}^\mathrm{B}$ increases by $1$. \item $q_\mathit{MA}$ stays the same and $q_\mathit{MB}$ increases by $1$. \item $pd_\mathrm{A}^+,pd_\mathrm{A}^-,pd_\mathrm{B}^+,pd_\mathrm{B}^-$ stay the same. \item $g$ stays the same, $b$ increases by at most $1$ and $u$ decreases by $1$. \end{itemize} Therefore, $\Phi_{\mathrm{RD}}$, $\Phi_{\mathrm{PD}}$ and $\Phi_\mathrm{Q}$ increase by $1$, $6$ and at least $8$, respectively. So $\Phi$ increases by at least $1$. \item Alice is in Case ii.B, Bob is in Case ii.A: This case is similar to the one above. \item Alice and Bob are in Case iii: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ stays at $0$. \item $rd^+$, $\ellQVCA$ and $\ell_\mathrm{QVC}^\mathrm{B}$ stay the same. \item $rd^-$ decreases by $1$. \item $pd_\mathrm{A}^+,pd_\mathrm{A}^-,pd_\mathrm{B}^+,pd_\mathrm{B}^-$ stay the same. \item None of $q_\mathit{MA}$ and $q_\mathit{MB}$ decreases. $q_\mathit{MA}$ increases by $1$ if $\mathit{RA}\Br{\ell_\RA}=\sS$. Similarly, $q_\mathit{MB}$ increases by $1$ if $\mathit{RB}\Br{\ell_\RB}=\sS$. \item $\Phi_{\mathrm{Q}}$ stays the same. \end{itemize} Therefore, $\Phi_{\mathrm{RD}}$ decreases by $13$ and $\Phi_{\mathrm{PD}}$ increases by at most $12$. So $\Phi$ increases by at least $1$. \item Alice and Bob are in Case iv: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ stays at $0$. 
\item $rd^+$, $\ellQVCA$ and $\ell_\mathrm{QVC}^\mathrm{B}$ stay the same. \item $rd^-$ decreases by $1$. \item $pd_\mathrm{A}^+,pd_\mathrm{A}^-,pd_\mathrm{B}^+,pd_\mathrm{B}^-$ stay the same. \item None of $q_\mathit{MA}$ and $q_\mathit{MB}$ decreases. $q_\mathit{MA}$ increases by $1$ if $\mathit{RA}\Br{\ell_\RA}=\sS$. Similarly, $q_\mathit{MB}$ increases by $1$ if $\mathit{RB}\Br{\ell_\RB}=\sS$. \item $\Phi_{\mathrm{Q}}$ stays the same. \end{itemize} Therefore, exactly as in Case iii, $\Phi_{\mathrm{RD}}$ decreases by $13$ and $\Phi_{\mathrm{PD}}$ increases by at most $12$. So $\Phi$ increases by at least $1$. \item Alice and Bob are in Case v: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ stays at $0$. \item $rd^-$ stays at $0$ and $rd^+$ increases by $1$. \item $\ellQVCA$ and $\ell_\mathrm{QVC}^\mathrm{B}$ stay the same. \item $\Phi_{\mathrm{PD}}$ and $\Phi_{\mathrm{Q}}$ stay the same. \end{itemize} Therefore, $\Phi_{\mathrm{RD}}$ decreases by $2$ and $\Phi$ increases by $2$. \item Alice and Bob are in Case vi.A: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ and $\Phi_{\mathrm{RD}}$ stay at $0$. \item $pd_\mathrm{A}^+,pd_\mathrm{B}^+,q_\mathit{MA},q_\mathit{MB}$ stay the same. \item None of $pd_\mathrm{A}^-$ and $pd_\mathrm{B}^-$ increases and at least one decreases by $1$. \item $\Phi_{\mathrm{Q}}$ stays the same. \end{itemize} Therefore, $\Phi_{\mathrm{PD}}$ decreases by at least $1$. So $\Phi$ increases by at least $1$. \item Alice and Bob are in Case vi.B: \begin{itemize} \item $\Phi_{\mathrm{MD}}$ and $\Phi_{\mathrm{RD}}$ stay at $0$. \item $q_\mathit{MA}$, $q_\mathit{MB}$ stay the same and $pd_\mathrm{A}^-$, $pd_\mathrm{B}^-$ stay at $0$. \item At least one of the following holds: $\ell_{\PA} < 6q_{\mathit{MA}}\cdot r$, in which case $pd_\mathrm{A}^+$ increases by $1$ (otherwise it remains unchanged), or $\ell_{\PB} < 6q_{\mathit{MB}}\cdot r$, and then $pd_\mathrm{B}^+$ increases by $1$ (otherwise it remains unchanged). \item $\Phi_{\mathrm{Q}}$ stays the same. \end{itemize} Therefore, $\Phi_{\mathrm{PD}}$ decreases by at least $1$. So $\Phi$ increases by at least $1$. 
\item Alice and Bob are in Case vii: \begin{itemize} \item $\Phi_{\mathrm{MD}}$, $\Phi_{\mathrm{RD}}$ and $\Phi_{\mathrm{PD}}$ stay at $0$. \item $u$ stays at $0$. \item If $b\neq0$ then $g$ stays the same and $b$ decreases by $1$, otherwise, $b$ stays at $0$ and $g$ increases by $1$. \end{itemize} Therefore, $\Phi_{\mathrm{Q}}$ increases by $1$ and so does $\Phi$. \end{itemize} So assuming successful recycling throughout the execution of the algorithm, the potential function $\Phi$ increases by at least $1$ in every iteration with no transmission error or hash collision. \end{proof} \begin{lemma} \label{lem:Q-potential-decrease} Assuming successful recycling throughout the execution of Algorithm~\ref{algo:MainalgorithmQMessage}, each iteration of the algorithm, regardless of the number of hash collisions and transmission errors, decreases the potential function $\Phi$ by at most $85$. \end{lemma} \begin{proof} In any iteration, $i$ increases by $1$, while $g$, $md_\mathrm{A}^+$, $md_\mathrm{B}^+$, $rd^+$, $pd_\mathrm{A}^+$ and $pd_\mathrm{B}^+$ decrease by at most $1$; $b$, $u$, $\ellQVCA$, $\ell_\mathrm{QVC}^\mathrm{B}$, $q_\mathit{MA}$ and $q_\mathit{MB}$ increase by at most $1$; $md_\mathrm{A}^-$ and $md_\mathrm{B}^-$ increase by at most $3$; $rd^-$ increases by at most $2$; and $pd_\mathrm{A}^-$ and $pd_\mathrm{B}^-$ increase by at most $4$. Hence, $\Phi_{\mathrm{MD}}$, $\Phi_{\mathrm{RD}}$, $\Phi_{\mathrm{PD}}$ increase by at most $22$, $30$ and $22$, respectively, and $\Phi_{\mathrm{Q}}$ decreases by at most $11$. So in total, $\Phi$ decreases by at most $85$. \end{proof} Finally, we are ready to prove the main result of this section. \setcounter{theorem}{0} \begin{theorem}[\textbf{Restated}] Consider any $n$-round alternating communication protocol $\Pi$ in the plain quantum model, communicating messages over a noiseless channel with an alphabet $\Sigma$ of bit-size $\Theta\br{\log n}$. 
Algorithm~\ref{algo:MainalgorithmQMessage} is a quantum coding scheme which, given $\Pi$, simulates it with probability at least $1-2^{-\Theta\br{n\epsilon}}$, over any fully adversarial error channel with alphabet $\Sigma$ and error rate $\epsilon$. The simulation uses $n\br{1+\Theta\br{\sqrt{\epsilon}}}$ rounds of communication, and therefore achieves a communication rate of $1-\Theta\br{\sqrt{\epsilon}}$. \end{theorem} \begin{proof} Let $R_\mathrm{total}=\ceil{\frac{n}{2r}}+86\br{c_1+c_3+c_5+2}n\epsilon$. By Lemma~\ref{lem:successful-recycling}, recycling is successful throughout the execution of the algorithm with probability at least $1-2^{-\Theta\br{n\epsilon}}$. Assuming successful recycling, by Lemmas~\ref{lem:MD collisions},~\ref{lem:QH collisions} and~\ref{lem:PD collisions}, the total number of iterations with a hash collision is at most $\br{c_1+c_3+c_5}n\epsilon$ except with probability $2^{-\Theta\br{n\epsilon}}$. Since the number of iterations is less than $2n$, the total number of iterations with a transmission error is at most $2n\epsilon$. Therefore, by Lemma~\ref{lem:Q-potential-increase}, in the remaining $R_\mathrm{total}-\br{c_1+c_3+c_5+2}n\epsilon$ iterations, the potential function $\Phi$ increases by at least $1$. The potential function decreases in an iteration only if a hash collision or a transmission error occurs and, by Lemma~\ref{lem:Q-potential-decrease}, it decreases by at most $85$. So at the end of the simulation, we have \[ g-b-u \geq \Phi_{\mathrm{Q}} \geq \Phi \geq R_\mathrm{total}-\br{c_1+c_3+c_5+2}n\epsilon-85\br{c_1+c_3+c_5+2}n\epsilon \geq \frac{n}{2r}\enspace. \] Therefore the simulation is successful. The cost of entanglement distribution is~$\Theta\br{n\sqrt{\epsilon}}$. 
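To make the counting behind the displayed inequality explicit, recall that $R_\mathrm{total}=\ceil{\frac{n}{2r}}+86\br{c_1+c_3+c_5+2}n\epsilon$, so that \begin{align*} R_\mathrm{total}-\br{c_1+c_3+c_5+2}n\epsilon-85\br{c_1+c_3+c_5+2}n\epsilon &= R_\mathrm{total}-86\br{c_1+c_3+c_5+2}n\epsilon\\ &= \ceil{\frac{n}{2r}} \geq \frac{n}{2r}\enspace. \end{align*} In other words, the slack of $86\br{c_1+c_3+c_5+2}n\epsilon$ iterations built into $R_\mathrm{total}$ exactly absorbs the at most $\br{c_1+c_3+c_5+2}n\epsilon$ iterations affected by hash collisions or transmission errors, together with the decrease of at most $85$ in $\Phi$ that each such iteration may cause.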
Moreover, the amount of communication in each iteration is independent of the iteration type and is always $2r+\Theta(1)$: in every iteration each party sends $\Theta(1)$ symbols to communicate the hash values and the value of the pointers in line $13$ of Algorithm~\ref{algo:MainalgorithmQMessage}; each party sends another $r$ symbols either in line $17$ of Algorithm~\ref{algo:MainalgorithmQMessage}, if $\mathit{Itertype}\neq\mathrm{SIM}$, or in Algorithm~\ref{algo:Q-simulate}. Hence, we have \begin{align*} \text{Total number of communicated qudits} &= \Theta\br{n\sqrt{\epsilon}} + R_\mathrm{total}\cdot \br{2r+\Theta(1)} \\ &= \Theta\br{n\sqrt{\epsilon}} + \br{\ceil{\frac{n}{2r}} + \Theta\br{n\epsilon}}\br{2r+\Theta(1)} \\ &= n\br{1+\Theta\br{\sqrt{\epsilon}}}\enspace. \end{align*} \end{proof} \suppress{ \begin{lemma} Let $t=c_2n\epsilon$, then for every $i$, $\mathit{L}_{\mathrm{QVC}} < i \leq R_\mathrm{total}$, with probability at least~$1-2^{-\Omega(n\epsilon)}$, at the end of the $i$-th iteration of Algorithm~\ref{algo:MainalgorithmQMessage} we have: \begin{enumerate} \item $\mathit{RA}\Br{1:i-t}=\mathit{RB}\Br{1:i-t}$, \item for all $j\in[i-t]$ such that $\mathit{RA}\Br{j}=\mathit{RB}\Br{j}=\sS$, the block of MES registers indexed by~$\mathit{IndexA}\Br{j}$ is in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state, \item $\ell_\mathit{NextMESA} = \ell_\mathit{NextMESB} < i-t$. \end{enumerate} \end{lemma} \begin{proof} The proof is based on a double induction argument. Note that the conditions of Definition~\ref{def:successful recycling} are satisfied in the first $\mathit{L}_{\mathrm{QVC}}$ iterations of the algorithm and we have $\mathit{IndexA}\Br{1:\mathit{L}_{\mathrm{QVC}}}=\mathit{IndexB}\Br{1:\mathit{L}_{\mathrm{QVC}}}=1:\mathit{L}_{\mathrm{QVC}}$. Therefore, by Lemmas~\ref{lem:MD collisions}--\ref{lem:type III recovery}, with probability at least~$1-2^{-\Omega(n\epsilon)}$, we have~$\omega_{\mathit{L}_{\mathrm{QVC}}} \leq c_7 n\epsilon$. 
Let $\mathit{L}_{\mathrm{QVC}} < i \leq R_\mathrm{total}$ and suppose that recycling is successful in the first $i-1$ iterations of the algorithm. This implies that $\mathit{IndexA}\Br{1:i-1}=\mathit{IndexB}\Br{1:i-1}$ and $\ell_\mathit{NextMESA} = \ell_\mathit{NextMESB}$ at the end of iteration $i-1$. Moreover, with probability at least~$1-2^{-\Omega(n\epsilon)}$, we have~$\omega_{i-1} \leq c_7 n\epsilon$. Assuming~$\omega_{i-1} \leq c_7 n\epsilon$, at the end of the $i$-th iteration we have: \begin{enumerate} \item $\mathit{RA}\Br{1:i-t}=\mathit{RB}\Br{1:i-t}$: Toward contradiction, suppose that there exists $t'\in\Br{t,i-1}$ such that $\mathit{RA}\Br{i-t'}\neq \mathit{RB}\Br{i-t'}$, at the end of the $i$-th iteration. Without loss of generality, assume that $\mathit{RA}\Br{i-t'}=\sM$ and $\mathit{RB}\Br{i-t'}=\sS$. Suppose that the last time Alice's measurement pointer $\ell_\RA$ reached $\mathit{RA}\Br{i-t'}$ was $t_2$ iterations earlier, i.e., iteration $i-t_2$. In that iteration, $\ell_\RA$ has distance $t_1\defeq t'-t_2$ from $i-t_2$, the iteration number. Note that the distance between the iteration number and $\ell_{\MA}$ increases only in (some) recovery iterations and it increases by at most $2$: The distance remains the same if Alice is in case v or case vii. Otherwise, it increases by $1$ if $\ell_{\MA}$ does not move and increases by $2$ when it moves back. This implies that in the first $i-t_2$ iterations, there have been at least $t_1/2$ recovery iterations. In the $t_2$ iterations after that, there is an inconsistency in the recycling data which does not get resolved. In any of these iterations one of the following holds: \begin{itemize} \item A metadata hash collision or a transmission error on metadata messages occurs, or else, \item The iteration is a type I recovery iteration and Alice and Bob are still trying to reconcile an inconsistency in their metadata, or else, \item The iteration is a type II recovery iteration. 
In this case, no metadata transmission error or collision occurs and the iteration is not a type I recovery iteration. So Alice and Bob know each other's recycling data and, aware of the inconsistency, they are trying to resolve it. \end{itemize} Therefore, in the first $i-1$ iterations, the number of recovery iterations plus the number of iterations suffering from a transmission error or a collision is at least \[ t_1/2+t_2-1 \geq t'/2-1 \geq t/2-1 \geq \frac{c_2}{2}n\epsilon-1\enspace, \] contradicting~$\omega_{i-1} \leq c_7 n\epsilon$. \item For all $j\in[i-t]$ such that $\mathit{RA}\Br{j}=\mathit{RB}\Br{j}=\sS$, the block of MES registers indexed by~$\mathit{IndexA}\Br{j}$ is in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state: Toward contradiction, suppose that at the end of the $i$-th iteration, there exists $t'\in\Br{t,i-1}$ such that~$\mathit{RA}\Br{i-t'}=\mathit{RB}\Br{i-t'}$ and the block of MES registers indexed by~$\mathit{IndexA}\Br{i-t'}$ is not in the $\ket{\phi^{0,0}}^{\otimes 4r}$ state. This is due to one of the following: \begin{itemize} \item The block of MES registers has been used only by one party for communication using QVC. \item It has been used by both parties for communication using QVC, but in different iterations (out-of-sync QVC). \item It has been used in the same iteration by both parties for communication using QVC but a transmission error has occurred. \end{itemize} In any case, suppose that the last time one of the pointers $\ellQVCA$ or $\ell_\mathrm{QVC}^\mathrm{B}$ was equal to $i-t'$ was $t_2$ iterations earlier, i.e., iteration $i-t_2$, and without loss of generality, suppose that $\ellQVCA$ is that pointer. In iteration $i-t_2$, the pointer $\ellQVCA$ has distance $t_1\defeq t'-t_2$ from $i-t_2$, the iteration number. Note that the distance between the iteration number and $\ellQVCA$ increases only in (some) recovery iterations and it increases by at most $1$: The distance remains the same if Alice is in case ii.B or case vii. 
Otherwise, $\ellQVCA$ remains the same and the distance increases by $1$. This implies that in the first $i-t_2$ iterations, there have been at least $t_1$ recovery iterations. In the $t_2$ iterations after that, the block of MES registers indexed by~$\mathit{IndexA}\Br{j}$ is not measured by either of the parties. In any of these iterations one of the following holds: \begin{itemize} \item A metadata hash collision or a transmission error on metadata messages occurs, or else, \item The iteration is a type I recovery iteration and Alice and Bob are still trying to reconcile an inconsistency in their metadata, or else, \item A quantum hash collision or a transmission error on quantum hash values occurs, or else, \item The iteration is a type II recovery iteration. In this case, no metadata transmission error or collision occurs and the iteration is not a type I recovery iteration. So Alice and Bob know each other's recycling data, and they are both in case iii or both in case iv. \end{itemize} Note that, if at the $i$-th iteration only one party has used the block of MES registers indexed by~$\mathit{IndexA}\Br{j}$, then in any of the previous $t_2$ iterations exactly one of the first two scenarios occurs. The above argument implies that in the first $i-1$ iterations, the number of recovery iterations plus the number of iterations suffering from a transmission error or a collision is at least \[ t_1+t_2-1 = t'-1 \geq t-1 \geq c_2n\epsilon-1\enspace, \] contradicting~$\omega_{i-1} \leq c_7 n\epsilon$. \item $\ell_\mathit{NextMESA} = \ell_\mathit{NextMESB} < i-t$. \end{enumerate} \end{proof} We still use $md^+_A,md^+_B,pd^+_A,pd^+_B,md^-_A,md^-_B,pd^-_A,pd^-_B$ defined in Section~\ref{subsec:polysizeclassicalanalysis}. We also use $\Phi_{\mathrm{MD}}$ and $\Phi_{\mathrm{PD}}$ defined in~\eqref{eqn:phimd} and~\eqref{eqn:phipd}. 
\begin{lemma}\label{lem:quantumqmdqpddecrease} Throughout the algorithm, it holds that \begin{itemize} \item $\Phi_{\mathrm{MD}} \leq 0$ with equality if and only if Alice and Bob have full knowledge of each other's metadata, i.e., $md_{A}^+ = md_{B}^+ = i$ and $md_{A}^- = md_{B}^- = 0$. \item $\Phi_{\mathrm{PD}} \leq 0$ with equality if and only if Alice and Bob have full knowledge of each other's Pauli data, i.e., $pd_{A}^+ = pd_{B}^+ = q_{\mathrm{MA}} = q_{\mathrm{MB}}$ and $pd_{A}^- = pd_{B}^- = 0$. \end{itemize} \end{lemma} \begin{proof} It follows from the proof of Lemma~\ref{lem:phimdpdnegativelargeclassical}. \end{proof} \begin{lemma}\label{lem:quantumqmdqpdincrease} Without a hash collision or error, it holds that \begin{itemize} \item $\Phi_{\mathrm{MD}}$ increases by at least $1$ in \textsf{Case i.A,i.B} and remains unchanged in the remaining cases; \item $\Phi_{\mathrm{PD}}$ increases by at least $1$ in \textsf{Case iii.A,iii.B}; decreases by at most $1$ in \textsf{Case ii.A (ii.B)} and remains unchanged in the remaining cases. \end{itemize} \end{lemma} \begin{proof} We split the proof into the following cases. \begin{itemize} \item \textsf{Cases i.A,i.B,ii.A,ii.B,iii.A,iii.B} follow exactly from the proof of Lemma~\ref{lem:potential increase}. \item In the remaining cases, the metadata is fully synchronized. Thus, $\Phi_{\mathrm{MD}}$ remains $0$. \item In \textsf{Case iv}, the Pauli data is unchanged as the players do not process it. \item In \textsf{Case v, vi}, the pointers $\ell_{\mathrm{PA}}$ and $\ell_{\mathrm{PB}}$ are not updated even though the Pauli data is changed. Hence, $pd_A^+$ and $pd_B^+$ do not decrease; $pd_A^-$ and $pd_B^-$ do not increase; $q_{\mathrm{MA}}$ and $q_{\mathrm{MB}}$ are unchanged. \item In \textsf{Case vii}, the Pauli data is fully synchronized. \end{itemize} \end{proof} Define $q_M\defeq\max\set{q_{\mathrm{MA}},q_{\mathrm{MB}}}$. Note that $q_{\mathrm{MA}}\geq \ell_{\mathrm{QRD}_A}$ and $q_{\mathrm{MB}}\geq \ell_{\mathrm{QRD}_B}$. 
The potential function of $\mathrm{QRD}$ is defined similarly to the ones for metadata and Pauli data. Set \begin{align} &qrd^+\defeq\text{the length of the longest prefix where $\mathrm{QRD}_A[1,\ell_{\mathrm{ QRD}_A}]$ and $\mathrm{QRD}_B[1,\ell_{\mathrm{ QRD}_B}]$ agree;}\label{eqn:qrd+}\\ &qrd^-\defeq \max\set{\ell_{\mathrm{QRD}_A},\ell_{\mathrm{QRD}_B}}-qrd^+;\label{eqn:qrd-}\\ &\Phi_{\mathrm{QRD}}\defeq qrd^+-qrd^--q_M.\label{eqn:qrdpotential} \end{align} \begin{lemma}\label{lem:phiqrdnegative} $\Phi_{\mathrm{QRD}}\leq 0$. \end{lemma} \begin{proof} Notice that $qrd^-\geq 0$, then $$\Phi_{\mathrm{QRD}}\leq qrd^+-q_M\leq \ell_{\mathrm{QRD}_A}-q_M\leq \ell_{\mathrm{QRD}_A}-q_{\mathrm{MA}}\leq 0.$$ \end{proof} As each term occurring in $\Phi_{\mathrm{QRD}}$ changes by at most a constant, we have the following lemma. \begin{lemma}\label{lem:qrdquantumdecrease} Each iteration of the algorithm decreases $\Phi_{\mathrm{QRD}}$ by at most a constant, regardless of errors or hash collisions. \end{lemma} \begin{lemma}\label{lem:potentialqrd} Without a hash collision or error, each iteration of the algorithm does not decrease $\Phi_{\mathrm{QRD}}$ in \textsf{case i.A,i.B,iii.A,iii.B,vi,vii}; decreases it by at most $1$ in~\textsf{case ii.A,ii.B} and increases it by at least $1$ in \textsf{case iv, v}. \end{lemma} \begin{proof} \begin{itemize} \item In \textsf{Case i.A,i.B, iii.A,iii.B}, the players do not process QRD and $q_M$ is unchanged. \item In \textsf{Case ii.A (ii.B)}, $qrd^+$ does not decrease; $qrd^-$ increases by at most $1$; $q_M$ is unchanged. \item In \textsf{case iv}, $qrd^+$ and $q_M$ are unchanged. $qrd^-$ decreases by $1$. Thus $\Phi_{\mathrm{QRD}}$ increases by at least $1$. \item In \textsf{case v}, $qrd^+$ increases by at least $1$; $qrd^-=0$ and $q_M$ are both unchanged. Thus $\Phi_{\mathrm{QRD}}$ increases by at least $1$. \item In \textsf{case vi,vii}, $\mathrm{QRD}$ is fully synchronized and $q_{\mathrm{MA}}=q_{\mathrm{MB}}=q_M$. 
Thus $\Phi_{\mathrm{QRD}}$ remains $0$. \end{itemize} \end{proof} The following lemma upper bounds the number of hash collisions occurring in Algorithm~\ref{algo:MainalgorithmQMessage}. We may adapt the proof of Corollary 4.6 in~\cite{Haeupler:2014}. Here we provide a simpler proof. \begin{lemma} Choosing $r=\Theta\br{1/\sqrt{\epsilon}}$, the number of iterations of Algorithm~\ref{algo:MainalgorithmQMessage} suffering from a hash collision is at most $6n\epsilon$ with probability at least $1-2^{-\Theta\br{n\epsilon}}$. \end{lemma} \begin{proof} By Algorithm~\ref{algo:Robust Entanglement Distribution}, the hash seeds are uniform, independent, and unknown to the adversary. From the choice of the parameters in line 3 of Algorithm~\ref{Qalgo:Initialization} and Fact~\ref{fac:stretch}, all $R_{\text{total}}$ hashing steps are $\delta$-close to being independent. Overall, there are at most $R_{\text{total}}$ iterations, each with hash collision probability $1-\br{1-p}^5=\Theta\br{\frac{1}{n^5}}$. The conclusion follows by the Chernoff bound. \end{proof} \begin{lemma}\label{lem:qm+welldefined} Choosing $r=\Theta\br{1/\sqrt{\epsilon}}$, it holds with probability at least $1-2^{-\Theta\br{\epsilon n}}$ that $\mathrm{SQRD}_A[q_M\mod L_{\mathrm{QVC}}+1]=\mathrm{SQRD}_B[q_M\mod L_{\mathrm{QVC}}+1]$ throughout the execution of Algorithm~\ref{algo:MainalgorithmQMessage}. \end{lemma} \begin{proof} It is trivial when $q_M\leq L_{\mathrm{QVC}}$. Assume that $q_M> L_{\mathrm{QVC}}$; between two consecutive passes of the $\br{q_M\mod L_{\mathrm{QVC}}+1}$-th block, $\mathrm{SQRD}_A[q_M\mod L_{\mathrm{QVC}}+1]=\mathrm{SQRD}_B[q_M\mod L_{\mathrm{QVC}}+1]$ is checked $\Theta\br{\sqrt{\epsilon}n}$ times via exchanging $H_{\mathrm{QRD}_A}$ and $H_{\mathrm{QRD}_B}$, among which at most $\Theta\br{\epsilon n}$ checks are corrupted. \end{proof} \noindent \textbf{Assumption T}. 
$\mathrm{SQRD}_A[q_M\mod L_{\mathrm{QVC}}+1]=\mathrm{SQRD}_B[q_M\mod L_{\mathrm{QVC}}+1]$ throughout Algorithm~\ref{algo:MainalgorithmQMessage}. \begin{lemma}\label{lem:invariancesubspace} Assuming \textbf{T}, any pair of shared states for the quantum Vernam cipher in a block marked by $F$ in both $\mathrm{QRD}_A$ and $\mathrm{QRD}_B$ lies in $\mathrm{span}\set{\ket{\phi^{0,k}}:0\leq k\leq d-1}$. \end{lemma} \begin{proof} As all the shared states are initially $\ket{\phi}$, the random subsets Alice and Bob sample in \textbf{Q-Quantum-hash} are the same. Note that in the quantum Vernam cipher the algorithm only uses the MESs as control-registers, which only introduces relative phases. Therefore, any pair of MESs for the quantum Vernam cipher not being measured lies in $\mathrm{span}\set{\ket{\phi^{0,k}}:0\leq k\leq d-1}$. By Lemma~\ref{lem:hashorder}, the quantum hash operators commute with each other. We may assume the players hash those blocks marked by $F$ first. In \textbf{Q-Quantum-hash}, the algorithm applies a double control-X controlled by a fresh hash state $\ket{\phi}$. The shared states for the quantum Vernam cipher still lie in $\mathrm{span}\set{\ket{\phi^{0,k}}:0\leq k\leq d-1}$ by Lemma~\ref{lem:cnotbell}. \end{proof} We define \begin{equation}\label{eqn:tau} \tau\defeq\min\set{j-1:~q_M< j\leq L_{\mathrm{QVC}}+\min\set{q_{\mathrm{MA}},q_{\mathrm{MB}}}~\mbox{and}~\atop~\mathrm{SQRD}_A[j\mod L_{\mathrm{QVC}}+1]\neq \mathrm{SQRD}_B[j\mod L_{\mathrm{QVC}}+1]}. \end{equation} Recall the definition in Eq.~\eqref{eqn:u}, \[u=\abs{q_{\mathrm{MA}}-q_{\mathrm{MB}}}.\] \begin{lemma}\label{lem:nextgoodeprexists} Assuming \textbf{T} and choosing $r=\Theta\br{1/\sqrt{\epsilon}}$, it holds with probability $1-2^{-\Theta\br{n\epsilon}}$ that $u\leq 10n\epsilon$ throughout Algorithm~\ref{algo:MainalgorithmQMessage}. 
Moreover, there exist $\Theta\br{n\epsilon}$ indices $j\in[q_M+1,\tau]$ such that $\mathrm{QRD}_A[j]=\mathrm{QRD}_B[j]=F$ and the pairs of MESs for quantum Vernam cipher in the $j$-th block are $\ket{\phi}^{\otimes 2r}$. \end{lemma} \begin{proof} First we notice that each iteration increases $u$ by at most $1$. Thus, without error or hash collisions, $u$ increases by at most $2n\epsilon$. We then study the cases with error or hash collision by induction on the number $i$ of iterations that have been executed. The claim holds trivially at the beginning of the algorithm. Without loss of generality, we consider Alice's side by analyzing how Alice's operations affect $\tau,q_M$ and the shared states. We consider the following three cases. \begin{itemize} \item In \textsf{case i.A,i.B, iii.A,iii.B, iv, v}, in each iteration, $q_{\mathrm{MA}}$ and $q_{\mathrm{MB}}$ are unchanged as the players are only processing the classical data and the symbols in $\mathrm{QRD}$ are either unchanged or changed to M from F. \item In \textsf{case ii.A (ii.B), vi, vii}, $q_{\mathrm{MA}}$ increases by at most $1$ and thus $\tau$ and $u$ increase by at most $1$. At most one symbol F in $\mathrm{QRD}$ is changed to M. All the quantum operations are only applied to the blocks outside $[q_M+1,\tau]$ (where $q_M$ denotes its value after the iteration). Thus the shared blocks of states in $[q_M+1,\tau]$ (where $q_M$ denotes its value after the iteration) are unchanged. \item By Lemma~\ref{lem:hashorder}, the hash operations over different MESs commute. Thus those shared $\ket{\phi}$ are not changed while the players are implementing \textbf{Q-Quantum-hash}. \end{itemize} From the analysis above, $u$ increases by at most $1$ if there is an error or hash collision. The total number of errors is $2n\epsilon$ and the total number of hash collisions is at most $6n\epsilon$ with probability at least $1-2^{-\Theta\br{\epsilon n}}$. Thus $u\leq 10n\epsilon$ with probability at least $1-2^{-\Theta\br{\epsilon n}}$. 
Note that one error or hash collision leads to at most one block of MESs being measured (the measurement may happen much later than the error itself). There are still at least $L_{\mathrm{QVC}}-10n\epsilon=\Theta\br{n\epsilon}$ blocks of MESs that survive throughout the algorithm. \end{proof} Suppose Alice and Bob perform the measurements introduced in \textsf{Qmeasuresyndrome} Alg.~\ref{algo:QmeasureEPR} line 1 to those blocks with index in $[\tau+1,q_M]$ if the block in his or her $\mathrm{QRD}$ is marked by F. Fixing the outcome of the measurements, the joint message state is a pure state by Lemma~\ref{lem:nextgoodeprexists}. Moreover, fixing the outcome, the joint state collapses to a state of the form JS1 defined in~\eqref{eqn:JS1}. Hence, the players are able to compute the syndrome after exchanging the outcome of the measurement. Therefore, we have the following lemma. \begin{lemma}\label{lem:Qinvariance} Assuming \textbf{T}, suppose Alice and Bob perform the measurements introduced in \textbf{Qmeasuresyndrome} Alg.~\ref{algo:QmeasureEPR} line 1 to all the blocks of pairs of MESs with index in $[\tau+1,q_M]$ if the block in his or her $\mathrm{QRD}$ is marked by F. Fixing the outcome of the measurement, the joint message state is of the form JS1 defined in~\eqref{eqn:JS1} with simplification JS2 defined in~\eqref{eqn:JS2}. Moreover, the number of good blocks and bad blocks defined in Section~\ref{sec:general-discription-large-alphabet-cleveburhman} are independent of the outcome of the measurement. \end{lemma} With the previous lemma, we are able to define the potential function for the quantum phase as Eq.~\eqref{eqn:phiQ}. As in Section 3.4, we use $g,b,u$ defined in Eqs.~\eqref{eqn:g},~\eqref{eqn:b} and~\eqref{eqn:u}. Set \begin{equation}\label{eqn:qphiq} \Phi_\mathrm{Q}\defeq g-b-10u. \end{equation} From Lemma~\ref{lem:Qinvariance}, $\Phi_\mathrm{Q}$ is well defined. 
\begin{lemma}\label{lem:phiq} Assuming \textbf{T}, in Algorithm~\ref{algo:MainalgorithmQMessage}, each iteration of the algorithm without a hash collision or error increases $\Phi_\mathrm{Q}$ by at least $5$ in \textsf{Case ii.A (ii.B)}; increases it by at least $1$ in \textsf{Case vii} and does not decrease it in the remaining cases. \end{lemma} \begin{proof} The proof is split into the following cases. \begin{itemize} \item In \textsf{Case i.A,i.B,iii.A,iii.B,iv}, $\Phi_\mathrm{Q}$ is unchanged because the players are only processing the classical data. \item In \textsf{Case ii.A} (\textsf{ii.B}), $g$ remains the same; $b$ increases by at most $1$; and $u$ decreases by $1$. Thus $\Phi_\mathrm{Q}$ increases by at least $9$. \item In \textsf{Case v}, $g, u$ are unchanged; $b$ does not increase. \item In \textsf{Case vi}, $q_{\mathrm{MA}}$ and $q_{\mathrm{MB}}$ are synchronized. Thus $u=0$ remains unchanged. Moreover, $g$ is unchanged and $b$ decreases by $1$. \item In \textsf{Case vii}, $g-b$ increases by $1$ and $u$ is unchanged. \end{itemize} \end{proof} \begin{lemma}\label{lem:phiqdecrease} Assuming \textbf{T}, with probability at least $1-2^{-\Theta\br{n\epsilon}}$, each iteration of the algorithm decreases $\Phi_\mathrm{Q}$ by at most a constant. \end{lemma} \begin{proof} The only difference from the proofs for the Cleve-Buhrman model is that when a block of MESs is reused, it must be $\ket{\phi}^{\otimes 2r}$ with high probability. Otherwise, the error in the previous use would be passed to the current use, which could corrupt many good unitary blocks in JS2 within one iteration. The reason is that between two consecutive uses, at least $\Theta\br{\sqrt{\epsilon}n}$ \textbf{Quantum-hash} operations are implemented. Among them only at most $\Theta\br{\epsilon n}$ are corrupted. This means that the block has survived $\Theta\br{\sqrt{\epsilon}n}$ quantum hashes. 
By Lemma~\ref{lem:quantumhash} and our choice of the parameters, it is $\ket{\phi}^{\otimes 2r}$ with probability $1-2^{-\Theta\br{\epsilon n}}$. \end{proof} We define the potential function \begin{equation}\label{eqn:qphi} \Phi\defeq\Phi_\mathrm{Q}+\Phi_{\mathrm{MD}}+\Phi_{\mathrm{PD}}+\Phi_{\mathrm{QRD}}. \end{equation} \begin{lemma}\label{lem:qmessagephi} With probability at least $1-2^{-\Theta\br{\epsilon n}}$, each iteration of the algorithm decreases $\Phi$ by at most a constant. Without a hash collision or error, each iteration of the algorithm increases $\Phi$ by at least $1$ with probability at least $1-2^{-\Theta\br{\epsilon n}}$. \end{lemma} \begin{proof} Assuming \textbf{T}, we can prove this lemma via the following arguments. Note that each term appearing in $\Phi_{\mathrm{MD}}, \Phi_{\mathrm{PD}}$ and $\Phi_{\mathrm{QRD}}$ changes by at most a constant, regardless of errors or hash collisions. Combining this with Lemma~\ref{lem:phiqdecrease}, we conclude the first part of the lemma. The second part of the lemma follows from Lemma~\ref{lem:quantumqmdqpdincrease}, Lemma~\ref{lem:potentialqrd} and Lemma~\ref{lem:phiq}. It remains to verify that \[ \br{1-2^{-\Theta(n\sqrt{\epsilon})}}\br{1-2^{-\Theta(n{\epsilon})}}=1-2^{-\Theta(n{\epsilon})}. \] \end{proof} \setcounter{theorem}{0} \begin{theorem}[Restated] With probability at least $1-2^{-\Theta\br{n\epsilon}}$, one can simulate an $n$-message noiseless communication protocol in the Yao model using $n(1+\Theta(\sqrt{\epsilon}))$ rounds over any fully adversarial error quantum channel, albeit while assuming that the original protocol and the channel operate on qudits with $d=\Theta(\log n)$. In other words, the simulation rate can achieve $1-\Theta(\sqrt{\epsilon})$. \end{theorem} \begin{proof} With probability at least $1-2^{-\Theta\br{n\epsilon}}$, there are at most $6n\epsilon$ errors and hash collisions, and \textbf{T} is valid throughout the execution of Algorithm~\ref{algo:MainalgorithmQMessage}.
As $\Phi_{\mathrm{MD}},\Phi_{\mathrm{PD}}$ and $\Phi_{\mathrm{QRD}}$ are at most $0$ by Lemma~\ref{lem:quantumqmdqpdincrease} and Lemma~\ref{lem:phiqrdnegative}, the number of good blocks in JS2 satisfies \[g\geq \Phi_{\mathrm{Q}}\geq\Phi\geq R_{\mathrm{total}}-\Theta(n\epsilon)=\frac{n}{r}+\Theta\br{n\epsilon}\geq \frac{n}{r}\] after $R_{\mathrm{total}}$ iterations, with probability at least $1-2^{-\Theta\br{n\epsilon}}$, which implies that the players have correctly simulated all the blocks in the original protocol. $g$ can only increase in \textsf{Case vii}, and increases by at most $1$ in one iteration of \textsf{Case vii}. Then there are at most $R_{\mathrm{total}}-n/r=\Theta(n\epsilon)$ iterations which execute Line 13 of Algorithm~\ref{algo:MainalgorithmQMessage} (Line 10 and Line 11 of Algorithm~\ref{algo:Mainalgorithm}). Notice that each $\mathit{msg}$ contains $\Theta(r)$ symbols. The number of total communication rounds is at most \begin{equation} R_{\mathrm{total}}(r+\Theta(1))+\Theta(n\epsilon)\cdot \Theta(r)=n(1+\Theta(\sqrt{\epsilon})), \end{equation} where the final equality uses the choice $r=\Theta(1/\sqrt{\epsilon})$, for which both $n/r$ and $n\epsilon r$ are $\Theta(n\sqrt{\epsilon})$. Here $R_{\mathrm{total}}$ denotes the number of iterations, $r$ is the number of rounds to transmit $\mathit{msg}$, mostly Pauli data, and $\Theta(1)$ is the number of rounds to transmit hash values of the Metadata and Pauli data and their lengths. \end{proof} \setcounter{theorem}{15} } \section{Conclusion} In this paper, we study the efficient simulation of noiseless two-party interactive quantum communication via low-noise channels. For noise parameter $\epsilon$, a lower bound of $1-\Theta(\sqrt{\epsilon})$ on the communication rate is proved in the plain quantum model with large communication alphabets. To achieve this goal, we first study the teleportation-based model, in which the parties have access to free entanglement and the communication is over a noisy classical channel. In this model, we show the same lower bound of $1-\Theta(\sqrt{\epsilon})$ in the large alphabet case.
We adapt the framework developed for the teleportation-based model to the plain quantum model, in which the parties do not have access to pre-shared entanglement and communicate over a noisy quantum channel. We show how the quantum Vernam cipher can be used in the interactive communication setting to efficiently recycle and reuse entanglement, allowing us to simulate any input protocol with an overhead of only $1+\Theta(\sqrt{\epsilon})$. In an upcoming paper, we will show how the same communication rate can be achieved when the communication alphabet is of constant size. \suppress{ This beats the currently best known rate of $1 - O(\sqrt{\epsilon \log \log 1/\epsilon})$ in the corresponding plain \emph{classical} model, which is also conjectured to be optimal in \cite{Haeupler:2014}. To achieve this goal, we actually show $1-\Theta(\sqrt{\epsilon})$ is a lower bound for four communication models: \{Large alphabet, Small alphabet\} $\times$ \{teleportation-based model, plain quantum model\}. \paragraph{Implications of our results} In this work, we have studied the capacity of noisy quantum channels to implement two-way communication. In particular, we studied the ability of memoryless quantum channels to simulate interactive two-party communication, with the channel available in both directions, but without any assistance from side resources, e.g. classical side channels. As discussed in Section~\ref{sec:problem}, this can be seen as a generalization of channel coding (discussed in Section~\ref{sec:channel-coding}), which is then the special case in which all communication flows in one direction only. As discussed in Section~\ref{sec:ecc-inapp}, coding seems much harder in the interactive setting than in the one-way setting. Not much is known about the two-way quantum capacity. Nevertheless, it is not the case for all channels that the unassisted one-way capacity is at least as large as the unassisted two-way capacity.
For example, the qubit erasure channel with erasure probability $\frac{1}{2}$ has no $1$-way quantum capacity~\cite{BenettDS:97}. When the channel can be used in either direction, noisy classical back-communication becomes possible, and one can lower bound the capacity by $\frac{1}{10}$~\cite{BenettDS:97,LeungLS:2009}. A similar effect occurs for the qubit depolarizing channel \cite{BDSW96,BNTTU14}. Thus, comparing memoryless channels in the classical and the quantum settings, the one-way capacity of a classical channel is always an upper bound on its two-way capacity, while we see that this does not hold for all quantum channels. For general memoryless quantum channels, the $2$-way capacity is only known to be upper bounded by the entanglement-assisted quantum capacity $Q_E$~\cite{BSST99, BSST02}, which is equal to the quantum feedback capacity~\cite{Bow04}. This bound is not tight (for example, for the very noisy qubit depolarizing channel, the $2$-way capacity vanishes but $Q_E>0$). Moreover, for the qubit depolarizing channel with noise rate $\epsilon$, in the low noise regime, $Q_1 = 1 - H\br{\epsilon} + \epsilon \log 3 + O\br{\epsilon^2}$~\cite{LLS17}. We have established an achievable rate for the interactive setting of $1-\Theta\br{\sqrt{\epsilon}}$. If our conjectured optimality holds, the interactive capacity will be lower than $Q_1$ in the dependence on $\epsilon$. Other potential quantum advantages due to interaction include secret key expansion. These effects enrich the subject but also add to the challenge of determining the interactive capacity, and our work presents important progress in the low-noise regime. A further implication of our result is that quantum communication complexity is very robust against transmission noise at low error rates.
In particular, for alternating protocols like those considered in this paper and in most known protocols for quantum communication complexity, the overhead goes to one as the noise goes to zero, allowing one to get the full quantum advantage whenever such an advantage can be obtained. \paragraph{Open questions.} Two questions stem directly from our work. First, we conjecture that a rate of $1-O\br{\sqrt{\epsilon}}$ is optimal. Is this conjecture true, and if so, what is the constant hidden in the $O$ notation (up to leading order in $\epsilon$)? Second, what is the optimal rate of communication in the high noise regime, for large $\epsilon$? Another important direction concerns the fact that our coding scheme assumes that the protocol to be simulated is alternating, i.e., Alice and Bob alternate in sending qudits to each other. We believe that much of the machinery that we have developed should transpose well to the study of the more general setting where the protocol to be simulated has a more general structure, potentially with messages constructed from different numbers of qudits in different rounds. Once this is better understood, it would be important to perform a deeper investigation of the relationship between the different flavors of capacities for noisy quantum channels. In the current work, we already have to deal with many types of synchronization errors, for example at the teleportation, quantum Vernam cipher and quantum hashing levels. An interesting question from this point of view is: what about synchronization errors over the channel itself? There has been much recent interest in the classical interactive coding literature in such types of errors~\cite{braverman2017coding, haeupler2017synchronization, sherstov2017optimal}. How useful would the data structures that we develop here be in studying the generalization of such errors to the quantum setting?
Many other interesting directions of research in the quantum setting stem from the exciting directions that have been pursued recently in the classical setting, for example~\cite{BK12,BN13,BrakerskiKN:2014,GMS12,GellesMS:2014,GH14, BravermanK:2017,GhaffariHS:2014,EfremenkoGH:2015,FranklinGOS:2015, HaeuplerV:2017,BenYishaiSK:2017}. We believe that our framework should be extendable to the study of many of these problems in the quantum setting. Two other important questions arise specifically in the quantum setting. First, considering a larger fault-tolerant setting, motivated by the inherently fragile nature of quantum data: can we also perform high-rate interactive quantum communication when the local quantum computation is noisy as well? Second, does quantum communication allow one to evade the classical no-go results obtained for interactive communication in a cryptographic setting~\cite{chung2013knowledge, gelles2015private}? As we have seen in this work, the unique properties of quantum information can be helpful in the interactive communication setting, since we were able to achieve a higher communication rate over fully adversarial binary channels in the plain model than the conjectured upper bound in the corresponding plain classical setting.
} \section{Teleportation-based protocols via classical channel with large alphabet}\label{sec:BKlarge} \input{3_0_Main} \input{3_1_Results} \input{3_2_Description} \input{3_3_Algorithm} \newpage \input{3_4_Analysis} \newpage \section{Recycling-based protocol via quantum channel with large alphabet} \input{4_0_Overview} \input{4_1_Results} \input{4_2_Description} \input{4_3_Algorithm} \newpage \input{4_4_Analysis} \input{8_Conclusion.tex} \section*{Acknowledgments} D.~Leung's research supported in part by an NSERC Discovery grant and a CIFAR research grant via the Quantum Information Science program; A.~Nayak's research supported in part by NSERC Canada; A.~Shayeghi's research supported in part by NSERC Canada and OGS; D.~Touchette's research supported in part by NSERC, partly via PDF program, CIFAR and by Industry Canada; P.~Yao's research is supported by the National Key R\&D Program of China 2018YFB1003202, National Natural Science Foundation of China (Grant No. 61972191), a China Youth 1000-Talent grant and Anhui Initiative in Quantum Information Technologies Grant No. AHY150100; N.~Yu's research supported in part by the Australian Research Council (Grant No: DE180100156). Most of this project was done while D.~Touchette was a postdoctoral fellow at Institute for Quantum Computing (IQC) and Perimeter Institute for Theoretical Physics (PI). Part of the work was done while P.~Yao visited PI, and P.~Yao thanks PI for its hospitality. Part of the work was done while N. Yu visited IQC, and N.~Yu thanks IQC for its hospitality. IQC and PI are supported in part by the Government of Canada and the Province of Ontario. \bibliographystyle{alpha}
The calibre that powers the LUC 11CF automatic chronograph, fully designed, developed and produced at Chopard Manufacture, offers one of the most sophisticated complications in high-precision watchmaking: the flyback chronograph, also known as fly-back. The 18-carat rose-gold case, 44 mm in diameter, features softer, voluptuous curves. The lugs, polished and satin-brushed, have been completely redesigned.
import re

from shellstreaming import api
from shellstreaming.istream import RandInt
from shellstreaming.operator import ShellCmd
from shellstreaming.ostream import LocalFile

OUTPUT_FILE = '/tmp/06_ShellCmd_daemon.txt'
NUM_RECORDS = 100000


def main():
    randint_stream = api.IStream(RandInt, 0, 100, sleep_sec=1e-7,
                                 max_records=NUM_RECORDS)
    cat_stream = api.Operator(
        [randint_stream], ShellCmd,
        'cat < IN_STREAM > OUT_STREAM', daemon=True,
        out_record_def=api.RecordDef([{'name': 'num', 'type': 'INT'}]),
        out_col_patterns={'num': re.compile(r'^.+$', re.MULTILINE)},
        msg_to_cmd='(*^o^*)\n',
        reply_from_cmd='(*^o^*)\n')
    api.OStream(cat_stream, LocalFile, OUTPUT_FILE, output_format='json',
                fixed_to=['localhost'])


def test():
    import json
    with open(OUTPUT_FILE) as f:
        for i, line in enumerate(f):
            record = json.loads(line)
            assert 0 <= int(record['num']) <= 100
    assert i + 1 == NUM_RECORDS
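The `test()` hook above validates the operator's JSON-lines output: one JSON object per line, every value in range, and the line count equal to the number of records. The same check can be sketched with only the standard library (the file contents here are made up for illustration and do not depend on shellstreaming):

```python
import json
import tempfile

# Write a small JSON-lines file in the shape the LocalFile ostream produces.
records = [{'num': n % 101} for n in range(100)]
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    for rec in records:
        f.write(json.dumps(rec) + '\n')
    path = f.name

# Validate it the way test() does: every value in range, line count matches.
count = 0
with open(path) as f:
    for line in f:
        record = json.loads(line)
        assert 0 <= int(record['num']) <= 100
        count += 1
assert count == len(records)
```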
Q: How to set the scroll so that a certain part of the text is visible (JavaFX)

Good evening, everybody! There is an object called TextArea, and I use it to display text — the big text of a big book — making it not editable. If it's not able to display all the text at the same time, it gains a scroll bar and displays only a part of itself. So that's the point: can we set which part of the text shall be displayed? Just a small hint, I don't ask for more.

P.S. Also, first I asked myself what to use to display the required text, and didn't find anything better than TextArea — probably because of poor searching. Maybe someone of you, dear programmers, came across the same problem and found a better solution?

A: You do not have to show everything: you can display a number of lines (from the big string), and on events (like the mouse wheel) push the next line of the string to your TextArea.

Simple example:

    public class Controller {
        public TextArea text;
        int current = 0;
        int d = 1;
        int rowSize = 20;
        int rowsToSee = 10;
        String[] strings = null;

        @FXML
        public void initialize() {
            String string = "";
            for (int i = 0; i < 3000; i++) {
                string = string + " " + i;
                if (i % rowSize == 0) {
                    string = string + "\n";
                }
            }
            strings = string.split("\n");
            text.setText(returnLines(current, current + rowsToSee, strings));
            text.setOnScroll(new EventHandler<ScrollEvent>() {
                @Override
                public void handle(ScrollEvent event) {
                    if (event.getDeltaY() < 0) {
                        text.setText(returnLines(current, current + rowsToSee, strings));
                        current = current + d;
                    } else {
                        if (current != 0) {
                            current = current - d;
                            text.setText(returnLines(current, current + rowsToSee, strings));
                        }
                    }
                }
            });
        }

        String returnLines(int from, int to, String[] strArry) {
            String s = "";
            for (int i = from; i < to; i++) {
                if (strArry.length > i) {
                    s = s + strArry[i] + "\n";
                }
            }
            return s;
        }
    }
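The windowing idea in the answer — keep the full text split into lines and re-render only the slice [from, from+rowsToSee) on each scroll event — is language-agnostic. A minimal sketch of the same logic in Python (names are illustrative, not part of any JavaFX API):

```python
def return_lines(lines, start, count):
    """Return a window of `count` lines beginning at `start`, clamped to the text."""
    return "\n".join(lines[start:start + count])

lines = [f"line {i}" for i in range(100)]

# Initial render shows the first window.
assert return_lines(lines, 0, 3) == "line 0\nline 1\nline 2"

# Scrolling down by one just shifts the window start, as in the handle() method.
assert return_lines(lines, 1, 3) == "line 1\nline 2\nline 3"

# Near the end, slicing clamps automatically, like the length check in returnLines.
assert return_lines(lines, 98, 3) == "line 98\nline 99"
```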
The Grandes halles de Rákóczi tér (Hungarian: Rákóczi téri nagycsarnok) are a covered market located in the Csarnok quarter of Budapest.

Covered markets
Cultural heritage of Hungary
Market halls
Buildings completed in 1897
Q: Difference between $\sum_{x=1}^n \frac{1}{x}$ and $\int_1^n \frac{1}{x} dx$ I don't remember much calculus, so how can one prove this with a minimal amount of calculus: for any integer $n\ge 1$ $$ \left|\sum_{x=1}^n \frac{1}{x} - \ln (n)\right| \le 1 $$ I only see that $\sum_{x=1}^n \frac{1}{x} $ is the left Riemann sum approximation of $\ln (n) = \int_1^n \frac{1}{x} dx$ with rectangle width equal to $1$. But is there a less calculus-heavy way, without the general error function? A: Since $1/x$ is decreasing, $\ln(n) = \int_1^n 1/x \; dx$ is less than its left Riemann sum approximation with rectangle width equal to 1, but greater than its right Riemann sum approximation with rectangle width equal to 1. This produces two inequalities, which can be rewritten to give the statement in the original post.
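Writing $H_n=\sum_{x=1}^n \frac{1}{x}$, the right-sum inequality gives $H_n - 1 \le \ln n$ and the left-sum inequality gives $\ln n \le H_{n-1} = H_n - \frac{1}{n}$, so $\frac{1}{n} \le H_n - \ln n \le 1$. A quick numerical sanity check of this squeeze (for illustration only — the proof above does not depend on it):

```python
import math

def harmonic(n):
    """Partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / x for x in range(1, n + 1))

for n in range(1, 2001):
    diff = harmonic(n) - math.log(n)
    # The Riemann-sum squeeze: 1/n <= H_n - ln(n) <= 1.
    assert 1.0 / n <= diff <= 1.0
```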
set -eux

SYMLINKED_HOME=$1
FLAVOR=$2
BOOST=${3:-1.63.0}
COMPILER=${4:-clang}

BOND_ROOT=/root/bond
BUILD_ROOT=/root/build
BUILD_SCRIPTS=$BOND_ROOT/tools/ci-scripts/linux

export PATH=/opt/ghc/bin:$PATH

# We set our cflags in non-standard variables because Stack chokes on some
# otherwise acceptable configurations.
case "$COMPILER" in
    clang)
        export CXX=clang++
        export CC=clang
        BOND_CXX_FLAGS="-Qunused-arguments --system-header-prefix=boost/"
        BOND_CC_FLAGS="-Qunused-arguments --system-header-prefix=boost/"
        ;;
    gcc)
        export CXX=g++
        export CC=gcc
        BOND_CXX_FLAGS=
        BOND_CC_FLAGS=
        ;;
    *) echo "Unknown compiler $COMPILER"; exit 1;;
esac

BOND_CMAKE_FLAGS="-DBOND_USE_CCACHE=TRUE"

# All of the CMake-based builds need BOOST_ROOT set, even if they don't
# build any C++ code
export BOOST_ROOT=/opt/boosts/boost_`echo $BOOST | tr . _`

mkdir -p $SYMLINKED_HOME $BUILD_ROOT
ln -s /root/.ccache $SYMLINKED_HOME/.ccache
ln -s /root/.stack $SYMLINKED_HOME/.stack

cd $BUILD_ROOT

# `local` is only valid inside a function, so use a plain variable here.
SCRIPT_PATH="$BUILD_SCRIPTS/build_$FLAVOR.zsh"
if [[ -e "$SCRIPT_PATH" ]]; then
    source "$SCRIPT_PATH"
else
    echo "Unknown FLAVOR $FLAVOR"
    exit 1
fi
News Q&A: A mindful mentor. Amy Maxmen, Nature (2012). Gary Gibbons, the next director of the US National Heart, Lung, and Blood Institute, hopes to diversify the biomedical workforce. Cardiologist Gary Gibbons will take up the reins at the US National Heart, Lung, and Blood Institute this month. Credit: NIH. Diversity in the US scientific workforce remains a problem even though it is well documented. African Americans constitute almost 13% of the US population, but earned only 3.6% of the country's doctoral degrees in science in 2007, up from 2.5% a decade before [1]. And according to a study published last year [2], African American investigators are 10% less likely than white applicants to win research grants from the US National Institutes of Health (NIH) — even when they have the same number of publications, citations and previous grants. Addressing the diversity of the biomedical workforce has long been a priority for clinician and physiologist Gary Gibbons, and he will soon be in a position with hefty influence. On 13 August, Gibbons will become the first black director of the National Heart, Lung, and Blood Institute (NHLBI) at the NIH in Bethesda, Maryland, which has a US$3-billion annual budget for research into heart disease, asthma, blood and sleep disorders, and more. Gibbons is leaving his post as director of the Cardiovascular Research Institute at the Morehouse School of Medicine in Atlanta, Georgia, to take up his new job. He has also served on the faculties of Stanford University in California and Harvard Medical School in Boston, Massachusetts. He talks to Nature about his hopes for his new post. Were you surprised by the study that found racial bias in the NIH grant-reviewing process? It certainly was a troubling finding, but I must admit that it didn't surprise me.
There are a number of factors associated with a higher likelihood of getting research funding that are not related to an investigator's hypothesis, such as the rank of their institution. Better mentorship might help minorities who are applying for NIH grants. I think that institutions should try harder to link minorities to mentors who can encourage their success. One motivation for me to leave Harvard and go to Morehouse was that many of the students at Morehouse are from under-represented minorities. I thought that I could be a helpful mentor. What did you do at Morehouse? My challenge was to conduct leading-edge research that addressed cardiovascular health disparities, which are complex, in a setting that doesn't have the legacy or the kind of resources that I could draw on at my previous institutions. In one ongoing project, we're using whole-exome sequencing to discover novel DNA variants that are associated with severe hypertension in African Americans. During the study, we have to remain aware that molecular mechanisms alone won't account for why nearly 1.5 times the proportion of African Americans compared to white Americans have been diagnosed with hypertension [3]. So, we have to integrate behavioural, social and neighbourhood components into health outcomes. Will you support such multifactorial studies at the NHLBI? Yes, because the mission of the NIH is to enable discovery science that ultimately improves public health. I'll be building on a legacy of an institute that has long appreciated multidimensional data sets. More generally, I think that health researchers can now begin to connect the dots by taking a systems approach that integrates different levels of scale — from DNA to cells and organs — and by realizing that individuals are immersed in a social, ecological system. An ecosystem includes, for example, how safe it is to walk a neighbourhood's streets.
I'll also be curious to see how investigators [supported by the NHLBI] will incorporate newly available technologies. There are extraordinary opportunities to get richer data sets that weren't available when the NHLBI began the Framingham Heart Study in 1948. For example, participants in a typical cardiovascular study go through a battery of tests to assess how active they are, but now we have smart phones that can deliver this information and devices to monitor heart rate and sleep habits. Why are you so committed to public health? Because of my mother. She was an orphan who made it through college and received a master's degree in education, with help from a stranger who paid most of the expenses after hearing her speak at her high-school graduation. That gave her an appreciation for what caring people could do for someone in need. She founded a church, a nursery school and a house for unmarried teenage mothers. She took kids who were in trouble into our house while I was growing up in the predominantly African American neighbourhood of Germantown, Philadelphia. My mother instilled in me a sense of social responsibility that has stuck. When I was speaking to people about applying for the position at the NHLBI, I realized how much the NIH's mission of giving back to the community resonated with me. What do you hope one of your legacies will be at the NHLBI? I have a particular passion for ensuring gender, racial and ethnic diversity in the workforce. The NIH already has training programmes that could contribute along those lines, so the mechanisms are there. Frankly it's largely a matter of leveraging them. The small number of PhDs granted to African Americans right now is pathetic. It's an egregious, systemic failure that extends beyond the NIH. That said, I'm going to do everything I can to get the best and brightest students out there to engage in science. National Science Foundation. 
Science and Engineering Indicators 2010: Appendix Tables 108–116 (2010); available at http://go.nature.com/ayln6n

Ginther, D. K. et al. Science 333, 1015–1019 (2011).

Centers for Disease Control and Prevention. Morbidity and Mortality Weekly Report 60(Suppl.), 94–97 (2011); available at http://go.nature.com/z1jw2p

Maxmen, A. A mindful mentor. Nature (2012). https://doi.org/10.1038/nature.2012.11134
Q: How do the joint angles of a 4-legged robot impact the body's position with respect to the world frame?

For a four-legged robot (like Big Dog or the one shown here), how are the joint angles and "feet" positions related to the body's frame in the world/inertial frame? For example, if I know the body's position and orientation in the world frame, and the joint angles, how do I derive the relationship that tells me where the robot's "feet" are?

For simplification, if I assume the legs can be represented as a planar 3R manipulator (where the end effector is the foot), it's easy enough to derive the relationship between the end effector and the angles. But the "base" is the robot's body, which will change position and orientation when the joint angles change. So do I have to find the matrix which relates the body to the world frame, then find the position of the foot with respect to the world? Or am I thinking of this the wrong way?

A: If you know all the joint angles, then to fill in all the world-frame information, all you need to know is the pose of something in the world frame.

You first ask how to find the feet positions given the pose of the body and the joint angles. I'll explain that, and then I'll explain how to find the body pose from a foot pose (and therefore the poses of all the other feet).

I'm going to assume you understand homogeneous transformation matrices for computing forward kinematics.

I'll be using the notation $^AT_B$ to denote the transform from frame B to frame A, or alternatively, the pose of B expressed in frame A.

Body pose known

So we know the joint angles $\theta$ and the pose of the body in the world frame $^WT_B$.

By representing the four legs as 3-DOF arms where the feet are end-effectors and the body is the "base", we can calculate the position and orientation of each foot relative to the body. This is the same as saying that we have the poses of the feet expressed in the body frame as a function of the joint angles: $^BT_{f_i}(\theta)$, where $f_i$ represents each of the four feet.

We therefore can easily calculate the pose of each foot in the world frame by transforming the pose of the foot in the body frame into the world frame: $$^WT_{f_i}(\theta) = {}^WT_B \cdot {}^BT_{f_i}(\theta)$$

A foot pose known

Now let's say we know the pose of a single foot in the world frame $^WT_{f_i}$ and all the joint angles.

Since you know the pose and not just the position, you can completely calculate the pose of the body based on that one foot. If this is not intuitive, think of the fact that for any set of joint angles, if you move the body in any way, the position and/or orientation of every foot will change, i.e. every combination of body pose and joint angles corresponds to a unique foot pose.

\begin{align} ^WT_B(\theta) &= {}^WT_{f_i} \cdot {}^BT_{f_i}^{-1}(\theta) \\ ^WT_B(\theta) &= {}^WT_{f_i} \cdot {}^{f_i}T_B(\theta) \end{align}
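The two transform equations in the answer are easy to check numerically with 4×4 homogeneous matrices. A minimal sketch in Python/NumPy (the specific poses are made up for illustration; in practice $^BT_{f_i}(\theta)$ would come from the leg's forward kinematics):

```python
import numpy as np

def make_pose(yaw, tx, ty, tz):
    """Homogeneous transform: rotation about z by `yaw`, then translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0, tx],
                     [s,  c, 0.0, ty],
                     [0.0, 0.0, 1.0, tz],
                     [0.0, 0.0, 0.0, 1.0]])

# Pose of the body in the world frame, and of one foot in the body frame.
W_T_B = make_pose(0.3, 1.0, 2.0, 0.5)
B_T_f = make_pose(-0.1, 0.2, -0.15, -0.4)

# Foot pose in the world frame: ^W T_f = ^W T_B . ^B T_f
W_T_f = W_T_B @ B_T_f

# Going back from a known foot pose: ^W T_B = ^W T_f . (^B T_f)^{-1}
W_T_B_recovered = W_T_f @ np.linalg.inv(B_T_f)
assert np.allclose(W_T_B_recovered, W_T_B)
```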
Gordon Mitchell (born c. 1933) was a Canadian football player who played for the BC Lions.

Living people
1933 births
Players of Canadian football from Alberta
Canadian football tackles
BC Lions players
Canadian football people from Edmonton
Numbers

What are the different types of numbers? [image removed, moderators could not see how it was pertinent to the topic/question]

Note by Nageswari Mca, 3 years, 11 months ago

I can't understand you · 3 years, 11 months ago

What do you mean? · 3 years, 11 months ago

i want difficult question for my carrier with improve the knowledge sir please post it · 1 year, 7 months ago

What does the picture have to do about numbers? · 3 years, 11 months ago
Any idea that makes you money needs to be protected. If it weren't for intellectual property protection, NIKE® would still be just two guys selling shoes out of the back of a car at local running tracks. We provide personal, cost-effective and strategic attention to detail in conducting clearance searches, pursuing registration and helping to enforce & maintain rights in your ideas. Our goal is to give your ideas the broadest protection so you can make money from them. Pillar IP, Inc. is a unique intellectual property agency specializing in developing business relationships and portfolio management. With a primary focus on IP clearance, prosecution, protection and enforcement, Pillar IP, Inc. also offers value-added services. Pillar IP, Inc. stays up to date on current IP issues to ensure that the knowledge gained is used to provide our clients with up-to-date, seamless prosecution and portfolio management. We offer our clients big firm/agency experience and expertise in a more personal and cost-effective manner. Pillar IP is proud to sponsor excellence; particularly when excellence resides locally! We believe in giving back to our community and helping amateur athletes reach their desired goals; be they Olympic dreams or otherwise. We have a number of local/national charities that we support and we're so proud to be supporting Michael Tayler in his decision to continue training for the Canadian 2020 Olympic Team! © 2013 - 2019 Pillar IP, Inc. Learning about IP doesn't have to be dry. Sign up for our monthly email and get helpful IP news & tips.
{ "redpajama_set_name": "RedPajamaC4" }
3,122
At DF Markets we help you understand the world of trading via our powerful spread betting platform and assist you on your journey using our extensive range of resources. We comply with the strict FCA Rules in the conduct of our business. This includes our complete commitment to treating customers fairly and with full transparency. All money held on behalf of clients is kept in top-tier banks and in a segregated bank account. We do not pass client money to any part of the business as working capital and we are debt-free with no exposure to corporate or sovereign debt. We have secured substantial liquidity and capital reserves significantly in excess of regulatory requirements. Our trading conditions are amongst the most competitive in the market: no minimum deposit, no commissions, and instant order execution. Combined with the highly competitive spreads (0.6 pips on EUR/USD and 0.8 pips on USD/JPY) and the wide range of markets, this makes our offering suitable for both experienced and novice spread bettors. We also provide extended hours trading on more than 50 popular markets, including US stocks and German and UK indices. Our Beginner and Intermediate trading courses will guide you step-by-step, and help you cover the basics in a way you feel confident to trade. Each section ends with a short quiz and timely feedback on your progress. Master the building blocks of financial spread betting through our extensive collection of video tutorials. Reading charts, placing conditional orders and making the best of using logical mode are just a few of the topics covered. Short and helpful guides to the most important concepts in financial spread betting. Fully indexed and searchable knowledge base of the main topics you need to master - perfect for quick learning reference and overview. When we talk about DF Trader, I cannot miss its ease of use and the ability to fully customize it for your own needs. 
DF Markets also has web and mobile platforms, which can be comfortably used on the move and yet, allow you to trade with an enormous number of instruments. I'd recommend DF Markets to both new and experienced traders. The platform they offer is really fast and secure. The Economic calendar integrated into it makes it easier to keep up with the latest news in the financial markets. Moreover, they are regulated by the FCA, which makes them reliable and trustworthy. Even though DF Trader's interface differs from the MT4 that I used to use, I got accustomed to the platform very quickly. Also, I appreciated the professionalism of their Customer Service. I had no issues when it comes to withdrawals as well. I would definitely recommend this broker. DF Market's platform works in a natural and intuitive way. The interface is simple and clean and has well-built functions. I found it pleasing that amongst its standard features, there are some perks like conditional orders, which I use for building my strategies. The broker's execution is flawless and slippage occurs on very rare occasions. DF Markets platform allowed me to adjust the look and feel of it according to my preferences as a trader, which I really enjoyed. I highly valued the opportunity to chat with a professional trader, directly through the platform. Furthermore, I was fascinated by the speed of execution of my orders and that at any given time I can see what's happening in the markets within the platform. We have extensive experience in the financial sector across different verticals and a variety of geographical regions. 
Our combined industry knowledge spans decades of providing financial services to clients in a retail and professional environment. All of our team members are required to know and follow the internal company code of conduct as well as the relevant state and EU regulations. We take pride in the processes we have established to ensure client priorities closely match company priorities. From the very beginning of your journey with DF Markets, you will be treated as a valued customer. Our dedicated account management team can take you step by step from your first demo account, through the learning curve of financial spread betting, to your first real-money transactions, deposits and withdrawals. We have created a proprietary spread betting platform which we constantly upgrade with new features and useful trading tools. It is backed by a team of skilled developers who make sure your spread betting experience is glitch-free and the markets are always just a click away. The technical team consists of web, mobile and desktop developers with the most appropriate arsenal of supported programming languages.
{ "redpajama_set_name": "RedPajamaC4" }
2,523
Lard is a product made from the fatty tissue of pigs, obtained by dry or wet rendering. It was once widely used as a cooking fat or as a spread (similar to butter), but its use has declined drastically in recent times because of the harmful health effects caused by its high content of saturated fatty acids. Lard has a higher melting point than oil, so the body dissolves and digests it with greater difficulty. It contains about 50% saturated fatty acids. Spreads Food
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,277
As a Jew, I'm grateful that Joe Biden used the word 'Shylocks' Published on September 17, 2014 by khalper US Vice President Joe Biden shakes hands with Israeli Prime Minister Benjamin Netanyahu during the state memorial service for former prime minister Ariel Sharon at the Knesset in Jerusalem on Jan. 13, 2014 [AFP] Joe Biden, once again, pulled a Joe Biden (made a gaffe) when he referred to predatory bankers as shylocks. During a Tuesday speech at the Legal Services Corporation's 40th anniversary conference, Biden said that his son, Delaware Attorney General Beau Biden, heard stories about banks preying on fellow servicemen and women when he served in Iraq: People would come up to him and talk about what was happening to them at home in terms of foreclosures in terms of bad loans that were being – I mean, these Shylocks who took advantage of these women and men while overseas. Of course, Shylock is a fairly charged reference, as it refers to the Jewish, interest-charging moneylender in Shakespeare's masterpiece The Merchant of Venice. There is much debate over whether the character is an anti-semitic stereotype, or a sympathetic character through whom Shakespeare critiques anti-semitism. But, to be sure, when people use the word, they mean it pejoratively. They are describing someone who takes advantage of others and not someone who highlights the inhumanity of anti-Semitism. You will not, for instance, hear anyone ever describe Anne Frank as a Shylock, though she certainly humanized Jews and indicted anti-semitism. 
Predictably, Abraham Foxman, who is still the president of the Anti-Defamation League despite announcing his retirement last year, had something to say about this: When someone as friendly to the Jewish community and open and tolerant an individual as is Vice President Joe Biden, uses the term 'Shylocked' to describe unscrupulous moneylenders dealing with service men and women, we see once again how deeply embedded this stereotype about Jews is in society. Biden issued an apology and a "Jews love me" #HumbleBrag: Abe Foxman has been a friend and adviser of mine for a long time. He's correct, it was a poor choice of words, particularly as he said coming from 'someone as friendly to the Jewish community and open and tolerant an individual as is Vice President Joe Biden.' But the truth is, Biden's Jewish-stereotype-based gaffe marked a historic moment. We are invisible no more! For years, Biden has delivered countless gaffes at the expense of several marginalized and disenfranchised groups like African Americans, the disabled, Indian Americans and even himself! In 2007, for instance, Biden "praised" Barack Obama, against whom he was running for the Democratic nomination: "I mean, you got the first mainstream African-American who is articulate and bright and clean and a nice-looking guy. I mean, that's a storybook, man." At a 2008 campaign rally, way before Kanye had made it cool, Biden urged Missouri state senator Chuck Graham, "Stand up Chuck, let 'em see you." Realizing Graham is in a wheelchair, Biden recovered thusly: "Oh, God love you. What am I talking about. I'll tell you what, you're making everybody else stand up, though, pal." Back in 2006, Biden reached out to Indian Americans by observing, "In Delaware, the largest growth of population is Indian Americans, moving from India. You cannot go to a 7-11 or a Dunkin' Donuts unless you have a slight Indian accent. I'm not joking." Biden is so gaffe-prone, he even insulted himself. 
At a 2008 rally in Nashua, New Hampshire, Biden had this to say: Hillary Clinton is as qualified or more qualified than I am to be Vice President of the United States of America. Let's get that straight. She's a truly close personal friend. She is qualified to be President of the United States of America. She's easily qualified to be Vice President of the United States of America. Quite frankly, it might have been a better pick than me. But she's first rate. A Jewish-based gaffe was past due. I felt like my community had been snubbed by Biden. But Tuesday, the Vice President fixed that. Originally posted on RawStory
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,088
\section{Introduction} Many systems respond to external perturbations by avalanches which behave intermittently with a power-law distribution of sizes. The paradigm of such self-organized critical (SOC) behavior is the so-called sandpile model \cite{btw}. Under an infinitely slow drive it maintains a critical steady-state, where the internal dissipation balances the external drive. Candidates for such phenomena include granular piles \cite{held-etal:1990,frette-etal:1996}, microfracturing processes \cite{petri-etal:1994}, and earthquakes \cite{gutenberg-richter:1956}. Despite many theoretical and numerical investigations, a thorough understanding of self-organized criticality is still lacking \cite{tang:1988,zhang:1989,kadanoff-etal:1989,hwa-kardar:1992,paczuski-boettcher:1996,dickman-etal:1998,dhar:1998,vespignani-zapperi:1997}. Fundamental problems which need to be solved involve deriving a continuum theory which would, for instance, determine the upper critical dimension, above which mean-field theory applies \cite{tang:1988,ZLS:1995}. Similar behavior can be found in elastic interfaces driven through random media \cite{narayan-fisher:1993,nattermann-etal:1992,leschhorn:1993,laszlo}. They undergo a continuous (critical) depinning transition as the external driving force is varied. With increasing force one passes from a phase where the interface is pinned to a depinned phase where the interface moves with a constant velocity. Close to the critical point, the motion of the interface takes place in ``bursts'' with no characteristic size and the interface develops scaling described by critical exponents. These phenomena can be found in fluids driven through porous media \cite{rubio-etal:1989}, in domain walls in magnets (the Barkhausen effect) \cite{zapperi-etal:1998}, in flux lines in type II superconductors \cite{field-etal:1995}, and in charge-density waves \cite{middleton-fisher:1993}. 
In this paper we investigate the connections between self-organized criticality and depinning transitions \cite{tang:1988,paczuski-boettcher:1996,narayan-fisher:1993,zapperi-etal:1998,narayan:1994,paczuski-etal:1996,cule:1998,allau:1999}. We first establish a generic, exact relation \cite{allau:1999} between sandpile models and driven interfaces which builds upon previous investigations of e.g.\ a charge-density wave model \cite{narayan:1994} and a rice-pile model \cite{paczuski-boettcher:1996}. Specifically, we discuss the Bak-Tang-Wiesenfeld (BTW) \cite{btw} model and, as an example, a stochastic sandpile model \cite{christensen-etal:1996,AL:1996} through a mapping to a model for interface depinning with slightly different noise terms. The mapping enables one to understand the slow-drive criticality used in sandpile simulations in terms of standard concepts for driven interfaces. Using the continuum theory for interface depinning it follows for these sandpile models that the upper critical dimension $d_c$ is 4, and the relevant noise is of quenched type. The connection with interfaces allows us to establish a scaling relation for the correlation length exponent for sandpile models. In addition, we discuss in the interface representation sandpiles driven at fixed density, driven at boundaries, and extremal drive criticality. \section{Sandpiles} The sandpile models are here defined as follows: to each site of a $d$-dimensional lattice (square in $d=2$) of size $L^d$ is associated a variable $z_x$ which counts the number of grains on that site. When the number of grains on a site exceeds a critical threshold $z_x > z_c$, the site is active and it topples. This means that $2d$ grains are removed from that site and given to the $2d$ nearest neighbors (nn): $z_x \to z_x - 2d$, $z_{nn} \to z_{nn} + 1, ~ \forall nn$. Sandpiles are usually open such that grains which topple out of the system are lost (in one dimension: $z_0 \equiv z_{L+1} \equiv 0$). 
It is also possible, as discussed later, to use periodic boundary conditions. When there are no more active sites in the system, one grain is added to a randomly chosen site, $z_x \to z_x + 1$. The time and number of topplings till the system again contains no active sites define an avalanche and its internal lifetime and size. For the BTW model one has $z_c=2d-1$, whereas for stochastic sandpile models \cite{christensen-etal:1996,AL:1996} the threshold $z_c$ is not constant. Below we will focus on 1) the BTW model and 2) a stochastic model where the threshold $z_c$ is randomly chosen to be for example $2d-1$ or $2d$ after each toppling, i.e.\ $P(z_c)$, the probability distribution of the $z_c$'s, is any reasonable choice (i.e.\ decaying sufficiently fast). In terms of the internal avalanche time, the external drive is infinitely slow \cite{finite-drive}. After a transient, the system reaches a steady-state in which the slow drive and the dissipation of grains balance each other. The boundary conditions (BCs) are essential to obtain criticality and they are usually of the Dirichlet type, $z\equiv 0$, such that particles are dissipated to the outside \cite{ZLS:1995}. Alternatively, the SOC steady state can be reached by using bulk dissipation and, e.g., periodic BCs \cite{vespignani-zapperi:1997}. In the SOC steady-state the probability to have avalanches of lifetime $t$ and size $s$ follow power-law distributions: $ p(t) = t^{-\tau_t} f_t(t/L^z) $ and $ p(s) = s^{-\tau} f(s/L^D) , $ with $s\sim t^{D/z}$ and $z(\tau_t-1)=D(\tau-1)$ \cite{Teb99}. Here the size scales as $s\sim \ell^D$ and the (spatial) area as $\ell^{d}$ (for compact avalanches) with $\ell$ the linear dimension. The fact that each added grain will perform of the order of $L^2$ topplings before leaving the system leads to the fundamental result \begin{equation} \left< s \right> \sim L^2 \label{eq:<s>} \end{equation} independent of dimension \cite{tang:1988,kadanoff-etal:1989}. 
Thus, $\tau=2-2/D$ and $\tau_t = 1 + (D-2)/z$. Equation~(\ref{eq:<s>}) yields $\gamma/\nu=2$, where $\gamma$ describes the divergence of the susceptibility (bulk response to a bulk field) near a critical point, $\chi = \left< s \right> \sim |\Delta|^{-\gamma}$, and $\nu$ is the (spatial) correlation length exponent, $\xi \sim|\Delta|^{-\nu}$ \cite{tang:1988}. Here $\Delta=\zeta-\zeta_c$ is the control parameter, $\zeta = \left< z_x \right>$, and the critical value $\zeta_c = \left< z_x \right>_{\rm SOC}$, where this average is taken in the slowly driven SOC steady-state with $\Delta=0$ \cite{tang:1988,dickman-etal:1998}. \section{Interface depinning} For driven interfaces in random media critical scaling is obtained with a force $F$ close to a critical value $F_c$. Depinned interfaces move with a velocity $v\sim f^\theta$, with $f=F-F_c \geq 0$. Pinned interfaces are blocked by pinning paths/manifolds which arise from the quenched disorder environment. Close to criticality, correlations scale as $x^{2\chi}$, with $\chi$ the roughness exponent, up to a correlation length $\xi \sim |f|^{-\nu}$. The characteristic time scale is $\xi^z$ with $z$ the dynamic exponent and it follows that $\theta=\nu(z-\chi)$ \cite{narayan-fisher:1993,nattermann-etal:1992}. Near the depinning transition, the simplest choice to describe the dynamics of the interface is the following continuum equation ('quenched Edwards-Wilkinson', or linear interface model, LIM) \cite{narayan-fisher:1993,nattermann-etal:1992,leschhorn:1993}: \begin{equation} \frac{\partial H}{\partial t} = \nabla^2 H + \eta(x,H) + F. \label{eq:qEW} \end{equation} Here, $H(x,t)$ measures the height of a given site $x$ at time $t$. The quenched noise $\eta(x,H)$ has correlations given by $ \langle \eta(x,H) \eta(x',H')\rangle = \delta^d(x-x') \, G(H-H'), $ where $G(H-H')$ decays rapidly, approximated by a delta function for random-field disorder. 
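To make the depinning scenario concrete, here is a minimal cellular-automaton discretization of the quenched Edwards-Wilkinson dynamics in the spirit of \cite{leschhorn:1993} (a sketch of our own, not code from the paper; the uniform noise amplitude, lattice size, and the forces $F=\pm 2$ below are arbitrary illustrative choices): a site advances by one unit whenever the local force $\nabla^2 H + \eta(x,H) + F$ is positive, with the noise quenched in $(x,H)$.

```python
import random

def velocity(F, L=64, steps=400, seed=1):
    """Average interface velocity of a discretized qEW equation:
    H(x) advances by 1 wherever lap(H) + eta(x, H) + F > 0.
    eta is quenched disorder: it depends only on (x, H), so a site
    revisiting a height sees the same random force again."""
    rng = random.Random(seed)
    H = [0] * L
    eta = {}                          # lazily drawn quenched noise

    def noise(x, h):
        if (x, h) not in eta:
            eta[(x, h)] = rng.uniform(-1.0, 1.0)
        return eta[(x, h)]

    moved = 0
    for _ in range(steps):
        # parallel update with periodic boundary conditions
        force = [H[(x - 1) % L] + H[(x + 1) % L] - 2 * H[x]
                 + noise(x, H[x]) + F for x in range(L)]
        advancing = [x for x in range(L) if force[x] > 0]
        for x in advancing:
            H[x] += 1
        moved += len(advancing)
    return moved / (L * steps)

# Far above threshold the interface moves; far below it is pinned.
assert velocity(2.0) > 0.5
assert velocity(-2.0) == 0.0
```

Tuning $F$ between these two extremes locates the depinning threshold $F_c$, below which the interface is blocked by a pinning path in the quenched random medium.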
The critical exponents at the depinning transition have been calculated by $\epsilon$-expansions \cite{narayan-fisher:1993,nattermann-etal:1992} and simulations \cite{leschhorn:1993,laszlo,qEW-exponents}. The upper critical dimension is $d_c=4$, above which mean-field theory applies \cite{qEW-mft}. Below we will also discuss so-called columnar noise with $G(H)\equiv 1$ \cite{parisietpietronero}. The interface equation~(\ref{eq:qEW}) obeys an invariant so that the static response scales as \cite{narayan-fisher:1993} $\chi(q,\omega=0) \sim q^{-2}$, i.e., \begin{equation} \gamma/\nu = 2 . \label{eq:gamma/nu} \end{equation} For forces below $F_c$, the (bulk) response of the interface triggered by a small increase in $F$ scales as $ \chi_{\rm bulk} \equiv {d \left< H \right>} / {dF} \sim (F_c-F)^{-\gamma}$. Right at the critical point one can argue as follows \cite{narayan-fisher:1993,nattermann-etal:1992}: the roughness of the interface scales as $\ell^\chi$ and assuming that $\Delta \left< H \right>$ will scale in the same way it follows \begin{equation} \gamma = 1 + \chi \nu . \label{eq:gamma} \end{equation} This yields $\chi + 1/\nu = 2$, i.e., there are only two independent exponents for depinning described by (\ref{eq:qEW}). The standard scaling relations are valid for interfaces with parallel dynamics: all sites with $\partial H / \partial t > 0$ are updated in parallel. Note that interfaces with extremal (i.e., one unstable site at a time) and parallel drive have the {\it same\/} pinning paths. This manifests the Abelian character of the LIM in that the order in which active sites are advanced does not matter \cite{paczuski-etal:1996}. \section{Mapping of sandpile dynamics} Next we will show that the SOC critical behavior can be related exactly to the slowly driven depinning transition in an interface model. 
Thus, Eqs.~(\ref{eq:<s>}) and (\ref{eq:gamma/nu}) are equivalent and Eq.~(\ref{eq:gamma}) yields an expression for the correlation length exponent $\nu$ for sandpiles. The first step is to formulate the stopping of an avalanche in a SOC system as being due to a pinning path for an interface $H(x,t)$. This field is given in the continuum limit by \begin{equation} H(x,t) = \int_{0}^{t} dt' \, \rho(x,t') , \label{eq:H=int-rho} \end{equation} where the order parameter $\rho(x,t)$ is the activity (topplings) at site $x$ at time $t$, i.e., $\rho=\dot{H}=v\sim f^\theta$. In words: $H(x,t)$ counts the number of topplings at site $x$ up to time $t$. At the microscopic level this is an exact correspondence between a toppling and the interface advance. A toppling takes place when $z_x > z_c$, which by the relation \begin{equation} z_x = z_c + \frac{\partial H}{\partial t} , \label{eq:z-def} \end{equation} yields the dynamics $\partial H/\partial t > 0 ~ \Rightarrow ~ H \to H + 1$, whereas $H$ is unchanged at the sites where no toppling takes place. The dynamics of sandpile models thus map to discrete interface equations where an avalanche takes the interface $H(x,t)$ from one pinning path to the next in the quenched random medium \cite{paczuski-boettcher:1996,narayan-fisher:1993,zapperi-etal:1998,narayan:1994,cule:1998}. Since the interface counts topplings it does not move backwards and thus Eq.~(\ref{eq:z-def}) effectively reads ${\partial H}/{\partial t}=\theta(z_x-z_c)$, which is the standard discretization for depinning models \cite{leschhorn:1993}. We are currently investigating the applicability of such discretization procedures to various models \cite{allau:1999,unpublished}. Next, we express $z_x$ in terms of $H(x,t)$ for the specific models introduced above. 
The number of grains $z_x$ on site $x$ is $z_{x} = N_{in} - N_{out} + F(x,t)$, where $N_{in}$ is the number of grains added to this site from its $2d$ nearest neighbors (nn) and $N_{out}$ is the number of grains removed from this site due to topplings. The (external) driving force $F(x,t)$ counts the number of grains added from the outside. Since $N_{in}=\Sigma_{nn} H(x_{nn},t)$ and $N_{out}=2dH(x,t)$ (for details and extensions to other models see \cite{unpublished}) we arrive at \begin{equation} \frac{\partial H}{\partial t} = \nabla^2 H - z_c(x,H) + F(x,t) , \label{eq:H-eq} \end{equation} where $\nabla^2 H$ is the discrete Laplacian. The Dirichlet boundary conditions for $z_x$ become $H\equiv 0$ and the dynamics is parallel. Similar connections have been previously discussed for a charge-density wave model \cite{narayan:1994} and for a boundary driven rice-pile model \cite{paczuski-boettcher:1996} (see below). In the stochastic model, $z_c(x,H)$ is a random variable which changes after each toppling. Thus $z_c(x,H)$ acts like quenched random point-disorder similar to $\eta(x,H)$ in Eq.~(\ref{eq:qEW}). The BTW model has $z_c$ equal to a constant. The dissipation needed to reach the SOC state (loss of grains $z_x$) takes place through the BC of $H\equiv 0$. Using strong boundary pinning may thus give rise to the possibility of observing SOC experimentally in systems displaying a depinning transition. We emphasize that the mapping prescription can in principle be applied to any sandpile model. For other, more complicated, toppling rules \cite{kadanoff-etal:1989,AL:1997} additional terms like the ``Kardar-Parisi-Zhang'' nonlinearity $|\nabla H|^2$ may appear. On the internal (fast) time scale the driving force $F(x,t)$ does not act as a time-dependent noise but as columnar-type disorder. It counts all the grains added to the system by the slow drive, i.e.\ $F(x,t) \to F(x,t) +1$, and thus increases as a function of time in an uncorrelated fashion. 
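As a direct check of this bookkeeping (a toy sketch of our own; the lattice size, grain count, and $z_c=1$, i.e.\ the one-dimensional BTW value $2d-1$, are illustrative choices), one can drive a one-dimensional open pile while recording the toppling count $H(x)$ and the drive $F(x)$; the relation $z_x = F(x) + \nabla^2 H(x)$, with $H\equiv 0$ outside the boundaries, then holds exactly at every site:

```python
import random

def run_btw_1d(L=20, grains=200, z_c=1, seed=0):
    """Drive a 1-D BTW sandpile (z_c = 2d - 1 = 1) and record the
    interface field H(x) = topplings at x and the drive field
    F(x) = grains dropped on x by the slow external drive."""
    rng = random.Random(seed)
    z = [0] * L          # grains per site
    H = [0] * L          # toppling counts (interface heights)
    F = [0] * L          # external drive
    for _ in range(grains):
        x = rng.randrange(L)
        z[x] += 1
        F[x] += 1
        active = [x]
        while active:            # relax until no site is active
            s = active.pop()
            while z[s] > z_c:
                z[s] -= 2        # topple: 2d = 2 grains leave site s
                H[s] += 1
                for nb in (s - 1, s + 1):
                    if 0 <= nb < L:          # open boundaries: grains
                        z[nb] += 1           # falling off are lost
                        if z[nb] > z_c:
                            active.append(nb)
    return z, H, F

def lap(H, x):
    """Discrete Laplacian with Dirichlet BC, H = 0 outside."""
    left = H[x - 1] if x > 0 else 0
    right = H[x + 1] if x + 1 < len(H) else 0
    return left + right - 2 * H[x]

z, H, F = run_btw_1d()
# z_x = N_in - N_out + F(x) = lap(H)(x) + F(x), exactly:
assert all(z[x] == F[x] + lap(H, x) for x in range(len(z)))
```

The grains dissipated at the two open ends are exactly $H(0)+H(L-1)$, which is how the Dirichlet condition $H\equiv 0$ implements the loss term.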
In the opposite limit, when a grain is added, e.g., each time step (``fast drive''), $F(x,t)$ would correspond to a time-dependent noise \cite{narayan-fisher:1993}. Since $H\equiv 0$ at the boundary and $F$ increases as a function of time, the steady-state profile of $H$ will be close to a paraboloid or, in one dimension, a parabola (see also \cite{dickman-etal:1998}). In the steady-state, just after an avalanche, the slowly increasing force $F$ is balanced by the negative curvature $\nabla^2 H$ of the paraboloid such that all sites are pinned ($\partial H/\partial t \le 0$). This illustrates that the interface is effectively driven by a force equal to the critical force $F_c\equiv \zeta_c-\overline{z_c}$, where $\overline{z_c}$ is the average of $z_c(x,H)$ in the steady state (for the BTW model trivially $z_c =2d-1$). Accordingly, the slow drive reaches the depinning critical point by adjusting the dissipation to the driving force such that the velocity (order parameter) is infinitesimal. The steady-state of the different sandpile models is described by an equation similar to Eq.~(\ref{eq:qEW}). Thus the exponent relation (\ref{eq:gamma/nu}) holds and it is equivalent to Eq.~(\ref{eq:<s>}) which describes the scaling of the average avalanche size (``susceptibility''). Assuming that a roughness exponent $\chi$ can be defined for sandpile models, one can argue that Eq.~(\ref{eq:gamma}) is valid also for sandpiles. Furthermore, the upper critical dimension is $d_c=4$. Note that the ensuing noise will contain a columnar component \cite{narayan:1994,cule:1998,parisietpietronero} due to the random drive $F(x,t)$. The one-dimensional BTW model has a critical force $F_c=1-1=0$, which corresponds to the critical point of the columnar-disorder interface model \cite{parisietpietronero}. 
In $d>1$, one has $F_c<0$ which in combination with the fact that the interface by definition cannot move backwards implies that the BTW model displays a more complicated behavior than the columnar models investigated in \cite{narayan:1994,parisietpietronero}. Note also that avalanches in stochastic models will have a random structure due to the explicit point disorder whereas avalanches in the BTW model show a more regular behavior \cite{allau:1999,avalanches}. For the case of the boundary driven one-dimensional rice-pile models \cite{christensen-etal:1996,AL:1996} a similar mapping of the dynamics can be done with an auxiliary field $H(0,t)$ and a drive implemented as $H(0,t) \to H(0,t) +1$ \cite{paczuski-boettcher:1996}. The rice-pile models have Dirichlet BC at $x=0$ and Neumann BC (reflective) at $x=L$ which yields $\left<s\right>\sim L$. In our picture the boundary drive is $F(1,t) \to F(1,t) +1$ and $F(x>1,t)=0$. Because of the Neumann BC [$H(L,t)=H(L+1,t)$] the steady state develops a parabolic profile with the left branch pointing up \cite{paczuski-boettcher:1996}. \section{Various ensembles} We next consider the more straightforward cases in which sandpiles are studied with periodic boundary conditions (amounting to $H(1) = H(L)$ in one dimension). In such cases the SOC steady state can be tuned into by various approaches. It can be reached by using a carefully tuned bulk dissipation $\epsilon \sim L^{-2}$ \cite{vespignani-zapperi:1997}. In this case, periodic BCs are also the best since the scaling of the system is not a mixture of boundary and bulk scaling \cite{LFH:1998}. As above, we arrive at \begin{equation} \frac{\partial H}{\partial t} = \nabla^2 H - z_c(x,H) -\epsilon(x,H) + F(x,t) \label{eq:H-eq-eps} \end{equation} with $H$ periodic. As in Eq.~(\ref{eq:H-eq}), the force $F(x,t)$ is columnar and increases on the slow time scale. The dissipation $\epsilon(x,H)$ takes now into account all the grains removed before the site at $x$ topples. 
It increases with a (small) probability only when a site topples and this means that $\epsilon$ explicitly depends on $H$. Therefore, a dissipation event effectively corresponds to a shift in the $z_c$ value. Thus, one obtains that the BTW model with bulk dissipation contains a very weak point-disorder component (since the increases in $\overline{F}$ equal in the statistical sense the increases in $\epsilon$). Though point-disorder is in general expected to be a relevant perturbation, in the infinite system size limit the Larkin length \cite{narayan-fisher:1993,nattermann-etal:1992} associated with the cross-over from columnar behavior diverges and thus the avalanche behavior is not governed by the weak point disorder. By this argument the BTW models with or without bulk dissipation are equivalent to the same interface depinning equation (\ref{eq:qEW}) in accordance with simulations of the BTW and bulk dissipation models \cite{chessa-etal:1998}. Note that the boundary critical behavior of the BTW model depends on the specific boundary condition: Dirichlet BCs display a different behavior \cite{D=2}, whereas Neumann BCs (reflective) are similar to the bulk. In the case of periodic BCs and bulk dissipation, the $H$-field fluctuates around an average flat profile. The terms $F(x,t)$ and $\epsilon(x,H)$ will balance each other in the steady state with an average difference such that $F_c=\zeta_c-\overline{z_c}<0$. For larger dissipation rates the system moves away from the critical point and, in analogy to Eq.~(\ref{eq:gamma/nu}), the bulk susceptibility scales as $\chi_{\rm bulk} \sim 1/\epsilon \sim \xi_{\epsilon}^{1/\nu_\epsilon}$, with $\nu_\epsilon=1/2$ \cite{vespignani-zapperi:1997}. The fixed density (or energy) drive previously used in simulations \cite{tang:1988,dickman-etal:1998} corresponds to a normal driven interface. 
Thus, the situation is such that $H(x,t=0)=0$, $\zeta = L^{-d} \sum_{x} F(x,0)$ with $F(x,t)=F(x,0)$, and periodic BCs and $\epsilon(x,H)\equiv 0$ such that no 'grains' are lost. The control parameter $\Delta=\zeta-\zeta_c$ ($=F-F_c\equiv f$) is varied and criticality is only obtained when $\Delta=0$; note that choosing $\zeta$ corresponds to using a spatially dependent force $F(x,0)$ with $\zeta=\left< F(x,0) \right>$. Here, the system is not generally in the SOC steady-state but by letting the control parameter $\Delta\to 0$ one reaches the critical point \cite{tang:1988,dickman-etal:1998}. The noise is set at the beginning of an avalanche at the columnar values $F(x,0)$. Depending on the exact nature of the initial configuration one may observe a different dynamic behavior but the steady-state behavior should correspond to the slowly driven case \cite{dickman-etal:1998}. In ``microcanonical'' simulations \cite{chessa-etal:1998prl} one has dissipation operating on the slow time scale with exactly the same rate as $F(x,t)$. Thus microcanonical simulations correspond to fixed density simulations with a specific initial configuration: after each avalanche, the time is reset to zero, the force is replaced with $F \to F +\nabla^2 H$, and the forces at $x'$ ($x''$) are increased (decreased) by one unit where $x'$ and $x''$ are randomly chosen sites. Finally the interface is initialized, $H\equiv 0$. Since the $\nabla^2 H$ term does not introduce correlations this new starting condition is equivalent to the fixed density case but with the initial configuration chosen to be in the SOC steady state. Combining the scaling relations (\ref{eq:gamma/nu}) and (\ref{eq:gamma}) it follows that \begin{equation} 2+d = D + 1/\nu , \label{eq:scal-rel} \end{equation} where $D = d + \chi$. In addition, the average area scales as $\left< \ell^d \right> \sim L^{1/\nu}$. 
These relations are also valid for sandpiles and Eq.~(\ref{eq:scal-rel}) provides estimates for $\nu$: in $d=1$, $\nu\approx 1.30$, and in $d=2$, $\nu\approx 0.78$. Numerical results yield $\nu=1.25(5)$ ($d=1$, stochastic model) \cite{veje} and $\nu=0.79(4)$ ($d=2$, BTW model) \cite{dickman-etal:1998}. Note, however, that the estimates quoted for $\nu$ for sandpile models depend on the relation $D = d+\chi$, which means that the underlying assumption is that the roughness exponent $\chi$ can be defined for slowly driven sandpile models. \section{Conclusions} In summary, we have started from the depinning equation (\ref{eq:qEW}) to discuss the continuum description of self-organized critical sandpile models. Thus, their upper critical dimension is $d_c=4$ and a scaling relation for the correlation length exponent $\nu$ is obtained. We find that the BTW model has columnar disorder $F$ on the avalanche time scale whereas the stochastic models have explicitly point disorder included. Other models with slightly modified toppling rules (e.g., the Manna model \cite{manna:1992}) may or may not belong to the same classes depending on the noise terms arising from the mapping (this we are currently investigating further in \cite{allau:1999,unpublished}). The present approach shows that the relevant noise for sandpiles is 'quenched'. The physics of sandpiles is such that the random decisions or events (grain deposition, choices for thresholds) are frozen into the dynamics of a site as long as it is stable, and their memory decays only slowly as the activity goes on. A recent field theory for $\rho(x,t)$ used analogies from systems with absorbing states and assumed that the noise was Reggeon field-theory like (i.e., time-dependent and not quenched) \cite{dickman-etal:1998}. Physically, the effect which is not incorporated in such Gaussian correlations is that the pinning forces along the interface selects a pinning path in the random media which stops the avalanche. 
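As a quick numerical check (our own sketch; the roughness values $\chi\approx 1.25$ in $d=1$ and $\chi\approx 0.75$ in $d=2$ are assumed LIM inputs, close to but not identical to those behind the quoted numbers), Eq.~(\ref{eq:scal-rel}) with $D=d+\chi$ reduces to $1/\nu = 2-\chi$, independently of $d$:

```python
def nu(chi):
    """Correlation length exponent from 2 + d = D + 1/nu with
    D = d + chi, i.e. 1/nu = 2 - chi (the dimension drops out)."""
    return 1.0 / (2.0 - chi)

assert abs(nu(1.25) - 4.0 / 3.0) < 1e-9  # d = 1: nu ~ 1.33 (text quotes 1.30)
assert abs(nu(0.75) - 0.8) < 1e-9        # d = 2: nu = 0.80 (text quotes 0.78)
```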
The mapping between interface and sandpile dynamics allows one to characterize the sandpile universality classes by the quenched noise in the interface equations. It also allows one to gain novel insight into the previously introduced ways of reaching the depinning critical point: balancing the force with dissipation (slow drive, or self-organized criticality), tuning the average force (as for fixed-density sandpiles), tuning the interface velocity (extremal-drive criticality), and finally tuning the driving force. This becomes possible because of the diffusive character of interface or sandpile dynamics and because of the Abelian character of the linear interface equation. K. B. L. is supported by the Carlsberg Foundation. \vspace*{-4mm}
External links
List of medalists at sports123.com
European Artistic Gymnastics Championships | Lists of sportspeople | Men's artistic gymnastics
The city of Fontana Community Services Department will host the Fontana Arts Festival from 5-10 p.m. in Downtown Fontana, Saturday, July 30, on Sierra Avenue between Seville Avenue and Orange Way, in conjunction with the Sunset on Sierra event.

The combined Arts Festival and Sunset on Sierra will feature art, music, local eateries and family-friendly activities. The event also will provide a look at the city's potential street improvements, which are among the proposals in the Fontana Active Transportation Plan and General Plan update.

Together, the Fontana Arts Festival and the Sunset on Sierra event will showcase a variety of artists and entertainment in an open-streets, pedestrian- and bicycle-friendly atmosphere. The programming will feature a three-dimensional chalk art display; a three-dimensional glow-in-the-dark booth; live on-stage musical performances by the Soulville Band (the opening band for The Jay Leno Show), saxophonist Jason Klunk and Afro-Latin band Quitapenas; muralist Armani Carter; children's activities; free interactive art activities; caricature and balloon artists; and more.

Sunset on Sierra will showcase temporary complete-street installations on Arrow Boulevard that make it safer and easier for pedestrians and bicyclists. City officials hope to gain valuable feedback from the community to develop the Fontana Active Transportation Plan, which will guide future plans for street improvements. The installations include:

Curb extensions – a traffic-calming measure that extends the sidewalk and reduces street-crossing distances for pedestrians.

Parklets – extensions of the sidewalk that convert one or more on-street parking lanes into more usable public space.

For more information about the Fontana Arts Festival, contact the Fontana Art Depot Gallery at (909) 349-6975 or visit www.Arts.Fontana.org. For more information about Sunset on Sierra, visit this website, email mreza@fontana.org or call (909) 350-7607.
\section{Introduction} \label{sec:introduction} Cavity quantum electrodynamics \cite{Berman94,Raimond01,Mabuchi02,Vahala03,Khitrova06,Carmichael07a} has as its central objective the realization of strong dipole coupling between a discrete transition in matter (e.g.,~an atom or quantum dot) and a mode of an electromagnetic cavity. Most often strong coupling is demonstrated through the realization of vacuum Rabi splitting \cite{Mondragon83,Agarwal84}. First realized for Rydberg atoms in superconducting microwave cavities \cite{Bernardot92,Brune96} and for transitions at optical wavelengths in high-finesse Fabry Perots \cite{Thompson92,Childs96,Boca04,Maunz05}, vacuum Rabi splitting was recently observed in monolithic structures where the discrete transition is provided by a semiconductor quantum dot \cite{Yoshie04,Reithmaier04,Peter05}, and in a coupled system of qubit and resonant circuit engineered from superconducting electronics \cite{Wallraff04}. More generally, vacuum Rabi spectra can be observed for any pair of coupled harmonic oscillators \cite{Carmichael94} without the need for strong coupling of the one-atom kind. Prior to observations for single atoms and quantum dots, similar spectra were observed in many-atom \cite{Raizen89,Zhu90,Gripp96} and -exciton \cite{Weisbuch92,Khitrova99} systems where the radiative coupling is collectively enhanced. The definitive signature of single-atom strong coupling is the large effect a single photon in the cavity has on the reflection, side-scattering, or transmission of another photon. Strong coupling has a dramatic effect, for example, on the delayed photon coincidence rate in forwards scattering when a cavity QED system is coherently driven on axis \cite{Carmichael85,Rice88,Carmichael91,Brecha99}. 
Photon antibunching is seen at a level proportional to the parameter $2C_1=2g^2/\gamma\kappa$ \cite{Carmichael91}, where $g$ is the atomic dipole coupling constant, $\gamma$ is the atomic spontaneous emission rate, and $2\kappa$ is the photon loss rate from the cavity; the collective parameter $2C=N2C_1$, with $N$ the number of atoms, does not enter into the magnitude of the effect when $N\gg1$. In the one-atom case, and for $2\kappa\gg\gamma$, the size of the effect is raised to $(2C_1)^2$ \cite{Carmichael85,Rice88} [see Eq.~(\ref{eqn:g2_ideal})]. The first demonstration of photon antibunching was made \cite{Rempe91} for moderately strong coupling ($2C_1\approx4.6$) and $N=18$, $45$, and $110$ (effective) atoms. The measurement has subsequently been repeated for somewhat higher values of $2C_1$ and slightly fewer atoms \cite{Mielke98,Foster00a}, and a measurement for one trapped atom \cite{Birnbaum05}, in a slightly altered configuration, has demonstrated the so-called photon blockade effect \cite{Imamoglu97,Werner99,Rebic99,Rebic02,Kim99,Smolyaninov02}---i.e., the antibunching of forwards-scattered photons for coherent driving of a vacuum-Rabi resonance, in which case a two-state approximation may be made \cite{Tian92}, assuming the coupling is sufficiently strong. The early experiments of Rempe {\it et al.\/} \cite{Rempe91} and those of Mielke {\it et al.\/} \cite{Mielke98} and Foster {\it et al.\/} \cite{Foster00a} employ systems designed around a Fabry-Perot cavity mode traversed by a thermal atomic beam. Their theoretical modeling therefore presents a significant challenge, since for the numbers of {\it effective\/} atoms used, the atomic beam carries hundreds of atoms---typically an order of magnitude larger than the effective number \cite{Carmichael99}---into the interaction volume. 
The Hilbert space required for exact calculations is enormous ($2^{100}\sim10^{30}$); it grows and shrinks with the number of atoms, which inevitably fluctuates over time; and the atoms move through a spatially varying cavity mode, so their coupling strengths are changing in time. Ideally, all of these features should be taken into account, although certain approximations might be made. For weak excitation, as in the experiments, the lowest permissible truncation of the Hilbert space---when calculating two-photon correlations---is at the two-quanta level. Within a two-quanta truncation, relatively simple formulas can be derived so long as the atomic motion is overlooked \cite{Carmichael91,Brecha99}. It is even possible to account for the unequal coupling strengths of different atoms, and, through a Monte-Carlo average, fluctuations in their spatial distribution \cite{Rempe91}. A significant discrepancy between theory and experiment nevertheless remains: Rempe {\it et al.\/} \cite{Rempe91} describe how the amplitude of the Rabi oscillation (magnitude of the antibunching effect) was scaled down by a factor of 4 and a slight shift of the theoretical curve was made in order to bring their data into agreement with this model; the discrepancy persists in the experiments of Foster {\it et al.\/} \cite{Foster00a}, except that the required adjustment is by a scale factor closer to 2 than to 4. Attempts to account for these discrepancies have been made but are unconvincing. Martini and Schenzle \cite{Martini01} report good agreement with one of the data sets from Ref.~\cite{Rempe91}; they numerically solve a many-atom master equation, but under the unreasonable assumption of stationary atoms and equal coupling strengths. The unlikely agreement results from using parameters that are very far from those of the experiment---most importantly, the dipole coupling constant is smaller by a factor of approximately 3. 
Foster {\it et al.\/} \cite{Foster00a} report a rather good theoretical fit to one of their data sets. It is obtained by using the mentioned approximations and adding a detuning in the calculation to account for the Doppler broadening of a misaligned atomic beam. They state that ``Imperfect alignment $\ldots$ can lead to a tilt from perpendicular of as much as $1^\circ$''. They suggest that the mean Doppler shift is offset in the experiment by adjusting the driving laser frequency and account for the distribution about the mean in the model. There does appear to be a difficulty with this procedure, however, since while such an offset should work for a ring cavity, it is unlikely to do so in the presence of the counter-propagating fields of a Fabry-Perot. Indeed, we are able to successfully simulate the procedure only for the ring-cavity case (Sec.~\ref{sec:detuning}). The likely candidates to explain the disagreement between theory and experiment have always been evident. For example, Rempe {\it et al.\/} \cite{Rempe91} state: \begin{itemize} \item[]{ ``Apparently the transient nature of the atomic motion through the cavity mode (which is not included here or in Ref.~[7]) has a profound effect in decorrelating the otherwise coherent response of the sample to the escape of a photon.''} \end{itemize} \noindent and also: \begin{itemize} \item[]{ ``Empirically, we also know that $|g^{(2)}(0)-1|$ is reduced somewhat because the weak-field limit is not strictly satisfied in our measurements.''} \end{itemize} \noindent To these two observations we should add---picking up on the comment in \cite{Foster00a}---that in a standing-wave cavity an atomic beam misalignment would make the decorrelation from atomic motion a great deal worse. 
Thus, the required improvements in the modeling are: (i) a serious accounting for atomic motion in a thermal atomic beam, allowing for up to a few hundred interacting atoms and a velocity component along the cavity axis, and (ii) extension of the Hilbert space to include 3, 4, etc.\ quanta of excitation, thus extending the model beyond the weak-field limit. The first requirement is entirely achievable in a quantum trajectory simulation \cite{Carmichael93,Dalibard92,Dum92,Gardiner04,Carmichael07b}, while the second, even with recent improvements in computing power, remains a formidable challenge. In this paper we offer an explanation of the discrepancies between theory and experiment in the measurements of Refs.~\cite{Rempe91} and \cite{Foster00a}. We perform {\it ab initio\/} quantum trajectory simulations in parallel with a Monte-Carlo simulation of a tilted atomic beam. The parameters used are listed in Table \ref{tab:parameters}: Set 1 corresponds to the data displayed in Fig.~4(a) of Ref.~\cite{Rempe91}, and Set 2 to the data displayed in Fig.~4 of Ref.~\cite{Foster00a}. All parameters are measured quantities--- or are inferred from measured quantities---and the atomic beam tilt alone is varied to optimize the data fit. Excellent agreement is demonstrated for atomic beam misalignments of approximately $10\mkern2mu{\rm mrad}$ (a little over $1/2^\circ$). These simulations are performed using a two-quanta truncation of the Hilbert space. Simulations based upon a three-quanta truncation are also carried out, which, although not adequate for the experimental conditions, can begin to address physics beyond the weak-field limit. From these, an inconsistency with the intracavity photon number reported by Foster {\it et al.\/} \cite{Foster00a} is found. 
\begin{table}[t] \begin{tabular}{|c||c|c|} \hline Parameter & Set~$1$ & Set~$2$\\ \hline \hline \vbox{\vskip3pt\hbox{cavity halfwidth}\vskip3pt\hbox{$\mkern45mu\kappa/2\pi$}\vskip1pt} & \vbox{\hbox{$0.9\mkern1mu{\rm MHz}$}\vskip6pt} & \vbox{\hbox{$7.9\mkern1mu{\rm MHz}$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{dipole coupling constant}\vskip3pt\hbox{$\mkern70mug_{\rm max}/\kappa$}\vskip1pt} & \vbox{\hbox{$3.56$}\vskip6pt} & \vbox{\hbox{$1.47$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{atomic linewidth}\vskip3pt\hbox{$\mkern50mu\gamma/\kappa$}\vskip1pt} & \vbox{\hbox{$5.56$}\vskip6pt} & \vbox{\hbox{$0.77$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{mode waist}\vskip4pt\hbox{$\mkern33muw_{\rm 0}$}\vskip1pt} & \vbox{\hbox{$50\mkern1mu\mu{\rm m}$}\vskip3pt} & \vbox{\hbox{$21.5\mkern1mu\mu{\rm m}$}\vskip3pt}\\ \hline \vbox{\vskip3pt\hbox{wavelength}\vskip3pt\hbox{$\mkern35mu\lambda$}\vskip1pt} & \vbox{\hbox{$\mkern5mu852{\rm nm}$ (Cs)}\vskip3pt} & \vbox{\hbox{$\mkern5mu780{\rm nm}$ (Rb)}\vskip3pt}\\ \hline \hline \vbox{\vskip3pt\hbox{effective atom number}\vskip4pt\hbox{$\mkern75mu\bar N_{\rm eff}$}\vskip1pt} & \vbox{\hbox{18}\vskip6pt} & \vbox{\hbox{13}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{oven temperature}\vskip3pt\hbox{$\mkern60muT$}\vskip1pt} & \vbox{\hbox{$473\mkern1mu{\rm K}$}\vskip6pt} & \vbox{\hbox{$430\mkern1mu{\rm K}$}\vskip6pt}\\ \hline \vbox{\vskip3pt\hbox{mean speed in oven}\vskip3pt\hbox{$\mkern65mu\overline{v}_{\rm oven}$}\vskip1pt} & \vbox{\hbox{$ 274.5\mkern1mu{\rm m\!/s}$}\vskip4pt} & \vbox{\hbox{$326.4\mkern1mu {\rm m\!/s}$}\vskip4pt}\\ \hline \vbox{\vskip3pt\hbox{mean speed in beam}\vskip3pt\hbox{$\mkern65mu\overline{v}_{\rm beam}$}\vskip1pt} & \vbox{\hbox{$323.4\mkern1mu{\rm m\!/s}$}\vskip4pt} & \vbox{\hbox{$384.5\mkern1mu {\rm m\!/s}$}\vskip4pt}\\ \hline \end{tabular} \caption{ Parameters used in the simulations. 
Set 1 is taken from Ref.~\cite{Rempe91} and Set 2 from Ref.~\cite{Foster00a}.} \label{tab:parameters} \end{table} Our model is described in Sec.~\ref{sec:cavityQED_atomic_beams}, where we formulate the stochastic master equation used to describe the atomic beam, its quantum trajectory unraveling, and the two-quanta truncation of the Hilbert space. The previous modeling on the basis of a stationary-atom approximation is reviewed in Sec.~\ref{sec:stationary_atoms} and compared with the data of Rempe {\it et al.\/} \cite{Rempe91} and Foster {\it et al.\/} \cite{Foster00a}. The effects of atomic beam misalignment are discussed in Sec.~\ref{sec:atomic_beam}; here the results of simulations with a two-quanta truncation are presented. Results obtained with a three-quanta truncation are presented in Sec.~\ref{sec:photon_number}, where the issue of intracavity photon number is discussed. Our conclusions are stated in Sec.~\ref{sec:conclusions}. \section{Cavity QED with Atomic Beams} \label{sec:cavityQED_atomic_beams} \subsection{Stochastic Master Equation: Atomic Beam Simulation} \label{sec:beam_simulation} Thermal atomic beams have been used extensively for experiments in cavity QED \cite{Bernardot92,Brune96,Thompson92,Childs96,Raizen89,Zhu90,Gripp96,Rempe91,Mielke98,Foster00a}. The experimental setups under consideration are described in detail in Refs.~\cite{Brecha90} and \cite{Foster99}. As is typical, the beam is formed from an atomic vapor created inside an oven, from which atoms escape through a collimated opening.
We work from the standard theory of an effusive source from a thin-walled orifice \cite{Ramsey56}, for which, for an effective number $\bar N_{\rm eff}$ of intracavity atoms \cite{Thompson92,Carmichael99} and cavity mode waist $w_0$ ($\bar N_{\rm eff}$ is the average number of atoms within a cylinder of radius $w_0/2$), the average escape rate is \begin{equation} R=64\bar N_{\rm eff}\bar v_{\rm beam}/3\pi^2w_0, \end{equation} with mean speed in the beam \begin{equation} \bar v_{\rm beam}=\sqrt{9\pi k_BT/8M}, \end{equation} where $k_B$ is Boltzmann's constant, $T$ is the oven temperature, and $M$ is the mass of an atom; the beam has atomic density \begin{equation} \varrho=4\bar N_{\rm eff}/\pi w_0^2l, \label{eqn:density} \end{equation} where $l$ is the beam width, and distribution of atomic speeds \begin{equation} P(v)dv=2u^3(v)e^{-u^2(v)}du(v), \label{eqn:speed_dist} \end{equation} $u(v)\equiv 2v/\sqrt\pi\mkern2mu\bar v_{\rm oven}$, where \begin{equation} \bar v_{\rm oven}=\sqrt{8k_BT/\pi M}=(8/3\pi)\bar v_{\rm beam} \end{equation} is the mean speed of an atom inside the oven, as calculated from the Maxwell-Boltzmann distribution. Note that $\bar v_{\rm beam}$ is larger than $\bar v_{\rm oven}$ because those atoms that move faster inside the oven have a higher probability of escape. In an open-sided cavity, neither the interaction volume nor the number of interacting atoms is well-defined; the cavity mode function and atomic density are the well-defined quantities. Clearly, though, as the atomic dipole coupling strength decreases with the distance of the atom from the cavity axis, those atoms located far away from the axis may be neglected, introducing, in effect, a finite interaction volume. How far from the cavity axis, however, is far enough?
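The thermal-beam expressions above are easily checked numerically. The sketch below is our own illustration (not the authors' code); the atomic masses, 132.905 u for Cs and the natural-abundance mean 85.468 u for Rb, are our assumptions, chosen to match the quoted speeds.

```python
import math

KB = 1.380649e-23        # Boltzmann constant (J/K)
AMU = 1.66053906660e-27  # atomic mass unit (kg)

def v_oven(T, mass_amu):
    """Mean speed inside the oven: sqrt(8 kB T / pi M)."""
    return math.sqrt(8*KB*T/(math.pi*mass_amu*AMU))

def v_beam(T, mass_amu):
    """Mean speed in the effusive beam: sqrt(9 pi kB T / 8 M) = (3 pi/8) v_oven."""
    return (3*math.pi/8)*v_oven(T, mass_amu)

def escape_rate(N_eff, T, mass_amu, w0):
    """Average atom injection rate R = 64 N_eff v_beam / (3 pi^2 w0)."""
    return 64*N_eff*v_beam(T, mass_amu)/(3*math.pi**2*w0)
```

With $T=473\,$K and the Cs mass this returns $\bar v_{\rm oven}\approx274.5\,$m/s and $\bar v_{\rm beam}\approx323.4\,$m/s; $T=430\,$K with the mean Rb mass gives $326.4\,$m/s and $384.5\,$m/s, in agreement with the table above.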
One possible criterion is to require that the interaction volume taken be large enough to give an accurate result for the collective coupling strength, or, considering its dependence on atomic locations (at fixed average density), the {\it probability distribution\/} over collective coupling strengths. According to this criterion, the actual number of interacting atoms is typically an order of magnitude larger than $\bar N_{\rm eff}$ \cite{Carmichael99}. If, for example, one introduces a cut-off parameter $F<1$, and defines the interaction volume by \cite{Carmichael99,Carmichael96,Sanders97} \begin{equation} V_F\equiv\{(x,y,z):g(x,y,z)\ge F g_{\rm max}\}, \label{eqn:interaction_volume} \end{equation} with \begin{equation} \label{eqn:coupling_strength} g(x,y,z)=g_{\rm max}\cos(kz)\exp\!\left[-(x^2+y^2)/w_0^2\right] \end{equation} the spatially varying coupling constant for a standing-wave TEM$_{00}$ cavity mode \cite{note_cavity_mode}---wavelength $\lambda=2\pi/k$---the computed collective coupling constant is \cite{Carmichael99} \begin{equation} \sqrt{\bar N_{\rm eff}}\mkern5mug_{\rm max}\to\sqrt{\bar N_{\rm eff}^F}\mkern5mug_{\rm max},\nonumber \end{equation} with \begin{equation} \bar N_{\rm eff}^F=(2\bar N_{\rm eff}/\pi)\mkern-5mu\left[(1-2F^2)\cos^{-1}F+F\sqrt{1-F^2}\right]. \label{eqn:effective_atom_number} \end{equation} For the choice $F=0.1$, one obtains $\bar N_{\rm eff}^F=0.98\bar N_{\rm eff}$, a reduction of the collective coupling strength by 1\%, and the interaction volume---radius $r\approx3(w_0/2)$---contains approximately $9\bar N_{\rm eff}$ atoms on average. This is the choice made for the simulations with a three-quanta truncation reported in Sec.~\ref{sec:photon_number}. 
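The geometry of the cut-off can be made concrete in a few lines; the helper names below are ours, but the formulas are simply the expression for $\bar N_{\rm eff}^F$ above and the radius at which $g$ drops to $Fg_{\rm max}$ on the antinode plane, $r=w_0\sqrt{\ln(1/F)}$, quoted in units of the half-waist $w_0/2$.

```python
import math

def neff_ratio(F):
    """N_eff^F / N_eff = (2/pi)[(1 - 2F^2) arccos(F) + F sqrt(1 - F^2)]."""
    return (2/math.pi)*((1 - 2*F**2)*math.acos(F) + F*math.sqrt(1 - F**2))

def radius_in_half_waists(F):
    """Radius of V_F on the antinode plane, where g = F g_max, in units of w0/2."""
    return 2*math.sqrt(math.log(1/F))
```

For $F=0.1$ this reproduces the quoted $\bar N_{\rm eff}^F=0.98\bar N_{\rm eff}$ and $r\approx3(w_0/2)$; for $F=0.01$, $\bar N_{\rm eff}^F=0.9998\bar N_{\rm eff}$ and $r\approx4.3(w_0/2)$.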
When adopting a two-quanta truncation, with its smaller Hilbert space for a given number of atoms, we choose $F=0.01$, which yields $\bar N_{\rm eff}^F=0.9998\bar N_{\rm eff}$ and $r\approx4.3(w_0/2)$, and approximately $18\bar N_{\rm eff}$ atoms in the interaction volume on average. In fact, the volume used in practice is a little larger than $V_F$. In the course of a Monte-Carlo simulation of the atomic beam, atoms are created randomly at rate $R$ on the plane $x=-w_0\sqrt{|\ln F|}$. At the time, $t_0^j$, of its creation, each atom is assigned a random position and velocity ($j$ labels a particular atom), \begin{equation} {\bm r}_j(t_0^j)=\mkern-3mu\left(\begin{matrix}-w_0\sqrt{|\ln F|}\\\noalign{\vskip2pt}y_j(t_0^j)\\ \noalign{\vskip3pt} z_j(t_0^j)\end{matrix}\right), \qquad {\bm v_j}=v_j\mkern-3mu\left(\begin{matrix}\cos\theta\\0\\\sin\theta\end{matrix}\right), \end{equation} where $y_j(t_0^j)$ and $z_j(t_0^j)$ are random variables, uniformly distributed on the intervals $|y_j(t_0^j)| \leq w_0\sqrt{|\ln F|}\mkern2mu$ and $|z_j(t_0^j)|\leq \lambda/4$, respectively, and $v_j$ is sampled from the distribution of atomic speeds [Eq.~(\ref{eqn:speed_dist})]; $\theta$ is the tilt of the atomic beam away from perpendicular to the cavity axis. The atom moves freely across the cavity after its creation, passing out of the interaction volume on the plane $x=w_0\sqrt{|\ln F|}$. Thus the interaction volume has a square rather than circular cross section and measures $2\sqrt{|\ln F|}w_0$ on a side. It is larger than $V_F$ by approximately $30\%$. Atoms are created in the ground state and returned to the ground state when they leave the interaction volume. 
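A minimal Monte-Carlo sketch of the atom-injection step (our own illustration, not the authors' code): speeds are drawn from the distribution $P(v)$ above by noting that $u^2$ is then Gamma(2,1)-distributed, i.e. a sum of two unit-mean exponentials, and initial positions and velocities are assigned on the entry plane $x=-w_0\sqrt{|\ln F|}$ with tilt angle $\theta$.

```python
import math
import random

def sample_beam_speed(v_oven, rng=random):
    """Draw v from P(v)dv = 2 u^3 exp(-u^2) du with u = 2v/(sqrt(pi) v_oven).

    Since u^2 then follows a Gamma(2,1) law (sum of two unit exponentials),
    u = sqrt(-ln r1 - ln r2) with r1, r2 uniform on (0, 1).
    """
    u = math.sqrt(-math.log(1 - rng.random()) - math.log(1 - rng.random()))
    return 0.5*math.sqrt(math.pi)*v_oven*u

def create_atom(w0, lam, F, theta, v_oven, rng=random):
    """Random initial position on the entry plane x = -w0 sqrt(|ln F|) and
    velocity tilted from perpendicular by the misalignment angle theta."""
    half = w0*math.sqrt(abs(math.log(F)))
    pos = (-half, rng.uniform(-half, half), rng.uniform(-lam/4, lam/4))
    v = sample_beam_speed(v_oven, rng)
    return pos, (v*math.cos(theta), 0.0, v*math.sin(theta))
```

As a consistency check, the sample mean of `sample_beam_speed` converges to $\bar v_{\rm beam}=(3\pi/8)\bar v_{\rm oven}$.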
On leaving, an atom is disentangled from the system by comparing its probability of excitation with a uniformly distributed random number $r$, $0\leq r\leq1$, and deciding whether or not it will---anytime in the future---spontaneously emit; thus, the system state is projected onto the excited state of the leaving atom (the atom will emit) or its ground state (it will not emit) and propagated forwards in time. Note that the effects of light forces and radiative heating are neglected. At the thermal velocities considered, typically the ratio of kinetic energy to recoil energy is of order $10^8$, while the maximum light shift $\hbar g_{\rm max}$ (assuming one photon in the cavity) is smaller than the kinetic energy by a factor of $10^7$; even if only the axial component of velocity is considered, these ratios are as high as $10^4$ and $10^3$ with $\theta\sim10\mkern2mu{\rm mrad}$, as in Figs.~\ref{fig:fig10} and \ref{fig:fig11}. In fact, the mean intracavity photon number is considerably less than one (Sec.~\ref{sec:photon_number}); thus, for example, the majority of atoms traverse the cavity without making a single spontaneous emission. Under the atomic beam simulation, the atom number, $N(t)$, and locations ${\bm r_j(t)}$, $j=1,\ldots,N(t)$, are changing in time; therefore, the atomic state basis is dynamic, growing and shrinking with $N(t)$. We assume all atoms couple resonantly to the cavity mode, which is coherently driven on resonance with driving field amplitude $\cal{E}$.
Then, including spontaneous emission and cavity loss, the system is described by the stochastic master equation in the interaction picture \begin{eqnarray} \dot{\rho}&=&{\cal E}[\hat a^{\dag}-\hat a,\rho]+\sum_{j=1}^{N(t)}g({\bm r}_j(t)) [\hat a^{\dag}\hat\sigma_{j-}-\hat a\hat \sigma_{j+},\rho]\nonumber\\ \noalign{\vskip-4pt} &&+\frac{\gamma}{2}\sum_{j=1}^{N(t)} \left(2\hat\sigma_{j-}\rho\hat\sigma_{j+}-\hat\sigma_{j+}\hat\sigma_{j-}\rho -\rho\hat\sigma_{j+}\hat\sigma_{j-}\right)\nonumber\\ \noalign{\vskip6pt} &&+\kappa\left(2\hat a\rho\hat a^{\dag}-\hat a^{\dag}\hat a\rho -\rho\hat a^{\dag}\hat a\right), \label{eqn:master_equation} \end{eqnarray} with dipole coupling constants \begin{equation} g({\bm r}_j(t))=g_{\rm max}\cos(kz_j(t))\exp\!\left[-\frac{x_j^2(t)+y_j^2(t)}{w_0^2}\right], \label{eqn:coupling_constant} \end{equation} where $\hat a^\dagger$ and $\hat a$ are creation and annihilation operators for the cavity mode, and $\hat\sigma_{j+}$ and $\hat\sigma_{j-}$, $j=1\ldots N(t)$, are raising and lowering operators for two-state atoms. \subsection{Quantum Trajectory Unraveling} In principle, the stochastic master equation might be simulated directly, but it is impossible to do so in practice. Table \ref{tab:parameters} lists effective numbers of atoms $\bar N_{\rm eff}=18$ and $\bar N_{\rm eff}=13$. For cut-off parameter $F=0.01$ and an interaction volume of approximately $1.3\times V_F$ [see the discussion below Eq.~(\ref{eqn:effective_atom_number})], an estimate of the number of interacting atoms gives $N(t)\sim1.3\times18\bar N_{\rm eff}\approx420$ and $300$, respectively, which means that even in a two-quanta truncation the size of the atomic state basis ($\sim10^5$ states) is far too large to work with density matrix elements. 
We therefore make a quantum trajectory unraveling of Eq.~(\ref{eqn:master_equation}) \cite{Carmichael93,Dalibard92,Dum92,Gardiner04,Carmichael07b}, where, given our interest in delayed photon coincidence measurements, conditioning of the evolution upon direct photoelectron counting records is appropriate: the (unnormalized) conditional state satisfies the nonunitary Schr\"odinger equation \begin{equation} \frac{d|\bar\psi_{\rm REC}\rangle}{dt}=\frac1{i\hbar}\hat H_B(t)|\bar\psi_{\rm REC}\rangle, \label{eqn:continuous} \end{equation} with non-Hermitian Hamiltonian \begin{eqnarray} \hat H_B(t)/i\hbar&=&{\cal E}(\hat a^{\dag}-\hat a)+\sum_{j=1}^{N(t)} g({\bm r}_j(t)) (\hat a^{\dag}\hat\sigma_{j-}-\hat a\hat\sigma_{j+})\nonumber \\ &&-\mkern3mu\kappa\hat a^{\dag}\hat a-\frac{\gamma}{2} \sum_{j=1}^{N(t)}\hat\sigma_{j+}\hat\sigma_{j-}, \label{eqn:Hamiltonian1} \end{eqnarray} and this continuous evolution is interrupted by quantum jumps that account for photon scattering. There are $N(t)+1$ scattering channels and correspondingly $N(t)+1$ possible jumps: \begin{subequations} \begin{eqnarray} |\bar\psi_{\rm REC}\rangle\to\hat a|\bar\psi_{\rm REC}\rangle, \label{eqn:cavity_jump}\\\nonumber \end{eqnarray} for forwards scattering---i.e., the transmission of a photon by the cavity---and \begin{equation} |\bar\psi_{\rm REC}\rangle\to\hat\sigma_{j-}|\bar\psi_{\rm REC}\rangle,\qquad j=1,\ldots,N(t), \label{eqn:atom_jump} \end{equation} \end{subequations} for scattering to the side (spontaneous emission). 
These jumps occur, in time step $\Delta t$, with probabilities \begin{subequations} \begin{equation} P_{\rm forwards}=2\kappa\langle\hat a^\dag\hat a\rangle_{\rm REC}\Delta t, \label{eqn:forwards_prob} \end{equation} and \begin{equation} P_{\rm side}^{(j)}=\gamma\langle\hat\sigma_{j+}\hat\sigma_{j-}\rangle_{\rm REC}\Delta t,\qquad j=1,\ldots,N(t); \label{eqn:side_prob} \end{equation} otherwise, with probability \end{subequations} \begin{equation} 1-P_{\rm forwards}-\sum_{j=1}^{N(t)}P_{side}^{(j)},\nonumber \end{equation} the evolution under Eq.~(\ref{eqn:continuous}) continues. For simplicity, and without loss of generality, we assume a negligible loss rate at the cavity input mirror compared with that at the output mirror. Under this assumption, backwards scattering quantum jumps need not be considered. Note that non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian1}) is explicitly time dependent and stochastic, due to the Monte-Carlo simulation of the atomic beam, and the normalized conditional state is \begin{equation} |\psi_{\rm REC}\rangle=\frac{|\bar\psi_{\rm REC}\rangle}{\sqrt{\langle\bar\psi_{\rm REC}|\bar\psi_{\rm REC}\rangle}}. \end{equation} \subsection{Two-Quanta Truncation} Even as a quantum trajectory simulation, a full implementation of our model faces difficulties. The Hilbert space is enormous if we are to consider a few hundred two-state atoms, and a smaller collective-state basis is inappropriate, due to spontaneous emission and the coupling of atoms to the cavity mode at unequal strengths. If, on the other hand, the coherent excitation is sufficiently weak, the Hilbert space may be truncated at the two-quanta level. 
The conditional state is expanded as \begin{widetext} \begin{equation} |\psi_{\rm REC}(t)\rangle=|00\rangle+\alpha(t)|10\rangle+\sum_{j=1}^{N(t)}\beta_j(t)|0j\rangle+\eta(t) |20\rangle+\sum_{j=1}^{N(t)}\zeta_j(t)|1j\rangle+\!\!\sum_{j>k=1}^{N(t)}\vartheta_{jk}(t)|0jk\rangle, \label{eqn:two_quanta_state} \end{equation} \end{widetext} where the state $|n0\rangle$ has $n=0,1,2$ photons inside the cavity and no atoms excited, $|0j\rangle$ has no photon inside the cavity and the $j\mkern1mu^{\rm th}$ atom excited, $|1j\rangle$ has one photon inside the cavity and the $j\mkern1mu^{\rm th}$ atom excited, and $|0jk\rangle$ is the two-quanta state with no photons inside the cavity and the $j\mkern1mu^{\rm th}$ and $k^{\rm th}$ atoms excited. The truncation is carried out at the minimum level permitted in a treatment of two-photon correlations. Since each expansion coefficient need be calculated to dominant order in ${\cal E}/\kappa$ only, the non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian1}) may be simplified as \begin{eqnarray} \hat H_B(t)/i\hbar&=&{\cal E}\hat a^{\dag}+\sum_{j=1}^{N(t)} g({\bm r}_j(t)) (\hat a^{\dag}\hat\sigma_{j-}-\hat a\hat\sigma_{j+})\nonumber \\ &&-\mkern3mu\kappa\hat a^{\dag}\hat a-\frac{\gamma}{2} \sum_{j=1}^{N(t)}\hat\sigma_{j+}\hat\sigma_{j-}, \label{eqn:Hamiltonian2} \end{eqnarray} dropping the term $-{\cal E}\hat a$ from the right-hand side. While this self-consistent approximation is helpful in the analytical calculations reviewed in Sec.~\ref{sec:stationary_atoms}, we do not bother with it in the numerical simulations. Truncation at the two-quanta level may be justified by expanding the density operator, along with the master equation, in powers of ${\cal E}/\kappa$ \cite{Carmichael85,Rice88,Carmichael07c}. One finds that, to dominant order, the density operator factorizes as a pure state, thus motivating the simplification used in all previous treatments of photon correlations in many-atom cavity QED \cite{Carmichael91,Brecha99}. 
The quantum trajectory formulation provides a clear statement of the physical conditions under which this approximation holds. Consider first that there is a fixed number of atoms $N$ and their locations are also fixed. Under weak excitation, the jump probabilities (\ref{eqn:forwards_prob}) and (\ref{eqn:side_prob}) are very small, and quantum jumps are extremely rare. Then, in a time of order $2(\kappa+\gamma/2)^{-1}$, the continuous evolution (\ref{eqn:continuous}) takes the conditional state to a stationary state, satisfying \begin{equation} \hat H_B|\psi_{\rm ss}\rangle=0, \label{eqn:stationary_state} \end{equation} without being interrupted by quantum jumps. In view of the overall rarity of these jumps, to a good approximation the density operator is \begin{equation} \rho_{\rm ss}=|\psi_{\rm ss}\rangle\langle\psi_{\rm ss}|, \end{equation} or, if we recognize now the role of the atomic beam, the continuous evolution reaches a quasi-stationary state, with density operator \begin{equation} \rho_{\rm ss}=\overline{\vphantom{\vbox{\vskip8pt}}|\psi_{\rm qs}(t)\rangle\langle\psi_{\rm qs}(t)|\mkern-2mu} \mkern2mu, \end{equation} where $|\psi_{\rm qs}(t)\rangle$ satisfies Eq.~(\ref{eqn:continuous}) (uninterrupted by quantum jumps) and the overbar indicates an average over the fluctuations of the atomic beam. This picture of a quasi-stationary pure-state evolution requires the time between quantum jumps to be much larger than $2(\kappa+\gamma/2)^{-1}$, the time to recover the quasi-stationary state after a quantum jump has occurred. In terms of photon scattering rates, we require \begin{equation} R_{\rm forwards}+R_{\rm side}\ll{\textstyle\frac12}(\kappa+\gamma/2), \label{eqn:weak_field_limit1} \end{equation} where \begin{subequations} \begin{eqnarray} R_{\rm forwards}&=&2\kappa\langle\hat a^\dagger\hat a\rangle_{\rm REC},\label{eqn:forwards_rate}\\ R_{\rm side}&=&\gamma\sum_{j=1}^{N(t)}\langle\hat\sigma_{j+}\hat\sigma_{j-}\rangle_{\rm REC}. 
\label{eqn:side_rate} \end{eqnarray} \end{subequations} When considering delayed photon coincidences, after a first forwards-scattered photon is detected, let us say at time $t_k$, the two-quanta truncation [Eq.~(\ref{eqn:two_quanta_state})] is temporarily reduced by the associated quantum jump to a one-quanta truncation: \begin{equation} |\psi_{\rm REC}(t_k)\rangle\to|\psi_{\rm REC}(t_k^+)\rangle,\nonumber \end{equation} where \begin{equation} |\psi_{\rm REC}(t_k^+)\rangle=|00\rangle+\alpha(t_k^+)|10\rangle+\sum_{j=1}^{N(t_k)}\beta_j(t_k^+)|0j\rangle, \label{eqn:one_quanta_state} \end{equation} with \begin{equation} \alpha(t_k^+)=\frac{\sqrt2\eta(t_k)}{|\alpha(t_k)|},\qquad\beta_j(t_k^+)=\frac{\zeta_j(t_k)}{|\alpha(t_k)|}. \end{equation} Then the probability for a subsequent photon detection at $t_k+\tau$ is \begin{equation} P_{\rm forwards}=2\kappa|\alpha(t_k+\tau)|^2\Delta t. \label{eqn:prob_second} \end{equation} Clearly, if this probability is to be computed accurately (to dominant order) no more quantum jumps of any kind should occur before the full two-quanta truncation has been recovered in its quasi-stationary form; in the experiment a forwards-scattered ``start'' photon should be followed by a ``stop'' photon without any other scattering events in between. We discuss how well this condition is met by Rempe {\it et al.\/}~\cite{Rempe91} and Foster {\it et al.\/}~\cite{Foster00a} in Sec.~\ref{sec:photon_number}. Its presumed validity is the basis for comparing their measurements with formulas derived for the weak-field limit. \section{Delayed Photon Coincidences for Stationary Atoms} \label{sec:stationary_atoms} Before we move on to full quantum trajectory simulations, including the Monte-Carlo simulation of the atomic beam, we review previous calculations of the delayed photon coincidence rate for forwards scattering with the atomic motion neglected.
Beginning with the original calculation of Carmichael {\it et al.\/}~\cite{Carmichael91}, which assumes a fixed number of atoms, denoted here by $\bar N_{\rm eff}$, all coupled to the cavity mode at strength $g_{\rm max}$, we then relax the requirement for equal coupling strengths \cite{Rempe91}; finally a Monte-Carlo average over the spatial configuration of atoms, at fixed density $\varrho$, is taken. The inadequacy of modeling at this level is shown by comparing the computed correlation functions with the reported data sets. \subsection{Ideal Collective Coupling} For an ensemble of $\bar N_{\rm eff}$ atoms located on the cavity axis and at antinodes of the standing wave, the non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian2}) is taken over in the form \begin{eqnarray} \hat H_B/i\hbar&=&{\cal E}\hat a^\dagger+g_{\rm max}(\hat a^{\dag}\hat J_--\hat a\hat J_+)\nonumber\\ &&-\mkern3mu\kappa\hat a^\dagger\hat a-\frac\gamma4(\hat J_z+\bar N_{\rm eff}), \end{eqnarray} where \begin{equation} \hat J_{\pm}\equiv\sum_{j=1}^{\bar N_{\rm eff}}\hat\sigma_{j\pm},\qquad\hat J_z\equiv\sum_{j=1}^{\bar N_{\rm eff}}\hat\sigma_{jz} \end{equation} are collective atomic operators, and we have written $2\hat\sigma_{j+}\hat\sigma_{j-}=\hat\sigma_{jz}+1$. The conditional state in the two-quanta truncation is now written more simply as \begin{widetext} \begin{equation} |\psi_{\rm REC}(t)\rangle=|00\rangle+\alpha(t)|10\rangle +\beta(t)|01\rangle+\eta(t)|20\rangle+\zeta(t) |11\rangle+\vartheta(t)|02\rangle, \end{equation} \end{widetext} where $|nm\rangle$ is the state with $n$ photons in the cavity and $m$ atoms excited, the $m$-atom state being a collective state. Note that, in principle, side-scattering denies the possibility of using a collective atomic state basis.
While spontaneous emission from a particular atom results in the transition $|n1\rangle\to \hat\sigma_{j-}|n1\rangle\to|n0\rangle$, which remains within the collective atomic basis, the state $\hat\sigma_{j-} |n2\rangle$ lies outside it; thus, side-scattering works to degrade the atomic coherence induced by the interaction with the cavity mode. Nevertheless, its rate is assumed negligible in the weak-field limit [Eq.~(\ref{eqn:weak_field_limit1})], and therefore a calculation carried out entirely within the collective atomic basis is permitted. The delayed photon coincidence rate obtained from $|\psi_{\rm REC}(t_k)\rangle=|\psi_{\rm ss}\rangle$ and Eqs.~(\ref{eqn:one_quanta_state}) and (\ref{eqn:prob_second}) yields the second-order correlation function \cite{Carmichael91,Brecha99,Carmichael07d} \begin{widetext} \begin{equation} g^{(2)}(\tau)=\left\{ 1-2C_1\frac{\xi}{1+\xi}\frac{2C}{1+2C-2C_1\xi/(1+\xi)} \,e^{-\frac{1}{2}(\kappa+\gamma/2)\tau}\! \left[\cos\left(\Omega\tau\right) \!+\!\frac{\frac{1}{2}(\kappa+\gamma/2)}{\Omega} \sin\left(\Omega\tau\right)\right]\right\}^2, \label{eqn:g2_ideal} \end{equation} \end{widetext} with vacuum Rabi frequency \begin{equation} \Omega=\sqrt{\bar N_{\rm eff}g_{\rm max}^2-{\textstyle\frac14}(\kappa-\gamma/2)^2}, \label{eqn:vacuum_Rabi_frequency} \end{equation} where \begin{equation} \xi\equiv2\kappa/\gamma, \end{equation} and \begin{equation} C\equiv\bar N_{\rm eff}C_1,\qquad C_1\equiv g_{\rm max}^2/\kappa\gamma. \end{equation} For $\bar N_{\rm eff}\gg1$, as in Parameter Sets 1 and 2 (Table \ref{tab:parameters}), the deviation from second-order coherence---i.e., $g^{(2)}(\tau)=1$---is set by $2C_1\xi/(1+\xi)$ and provides a measure of the single-atom coupling strength. For small time delays the deviation is in the negative direction, signifying a photon antibunching effect. 
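For numerical work it is convenient to have Eq.~(\ref{eqn:g2_ideal}) in executable form. The sketch below is our illustration, not code from the original analysis: the function names are ours, and all rates ($\kappa$, $\gamma$, $g_{\rm max}$) are assumed to be expressed in the same angular-frequency units.

```python
import math

def g2_ideal(tau, kappa, gamma, g_max, n_eff):
    """Second-order correlation function for ideal collective coupling,
    Eq. (g2_ideal), with the vacuum Rabi frequency of
    Eq. (vacuum_Rabi_frequency)."""
    xi = 2.0 * kappa / gamma
    c1 = g_max**2 / (kappa * gamma)        # single-atom cooperativity C_1
    c = n_eff * c1                         # collective cooperativity C
    omega = math.sqrt(n_eff * g_max**2 - 0.25 * (kappa - gamma / 2.0)**2)
    half = 0.5 * (kappa + gamma / 2.0)     # decay rate of the oscillation
    amp = (2.0 * c1 * xi / (1.0 + xi)) * 2.0 * c \
        / (1.0 + 2.0 * c - 2.0 * c1 * xi / (1.0 + xi))
    osc = math.exp(-half * tau) * (math.cos(omega * tau)
                                   + (half / omega) * math.sin(omega * tau))
    return (1.0 - amp * osc)**2
```

The squared form makes the nonclassical features explicit: $g^{(2)}(0)=(1-{\rm amp})^2$ dips below unity whenever $0<{\rm amp}<2$, and the correlation function relaxes to unity at large delay.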
It should be emphasized that while second-order coherence serves as an unambiguous indicator of strong coupling in the single-atom sense, vacuum Rabi splitting---the frequency $\Omega$---depends on the collective coupling strength alone. Both experiments of interest are firmly within the strong coupling regime, with $2C_1\xi/(1+\xi)=1.2$ for that of Rempe {\it et al.\/} \cite{Rempe91} ($2C_1=4.6$), and $2C_1\xi/(1+\xi)=4.0$ for that of Foster {\it et al.\/} \cite{Foster00a} ($2C_1=5.6$). Figure~\ref{fig:fig1} plots the correlation function obtained from Eq.~(\ref{eqn:g2_ideal}) for Parameter Sets 1 and 2. Note that since the expression is a perfect square, the apparent photon bunching of curve (b) is, in fact, an extrapolation of the antibunching effect of curve (a); the continued nonclassicality of the correlation function is expressed through the first two side peaks, which, being taller than the central peak, are classically disallowed \cite{Rice88,Mielke98}. A measurement of the intracavity electric field perturbation following a photon detection [the square root of Eq.~(\ref{eqn:g2_ideal})] presents a more unified picture of the development of the quantum fluctuations with increasing $2C_1\xi/(1+\xi)$. Such a measurement may be accomplished through conditional homodyne detection \cite{Carmichael00,Foster00b,Foster02}. \begin{figure}[t] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig1.eps} \caption{Second-order correlation function for ideal coupling [Eq.~(\ref{eqn:g2_ideal})]: (a) Parameter Set 1, (b) Parameter Set 2.} \label{fig:fig1} \end{figure} In Fig.~\ref{fig:fig1} the magnitude of the antibunching effect---the amplitude of the vacuum Rabi oscillation--- is larger than observed in the experiments by approximately an order of magnitude (see Fig.~\ref{fig:fig3}). Significant improvement is obtained by taking into account the unequal coupling strengths of atoms randomly distributed throughout the cavity mode. 
\subsection{Fixed Atomic Configuration} \label{sec:fixed_configuration} Rempe {\it et al.\/}~\cite{Rempe91} extended the above treatment to the case of unequal coupling strengths, adopting the non-Hermitian Hamiltonian (\ref{eqn:Hamiltonian2}) while keeping the number of atoms and the atom locations fixed. For $N$ atoms in a spatial configuration $\{{\bm r}_j\}$, the second-order correlation function takes the same form as in Eq.~(\ref{eqn:g2_ideal})---still a perfect square---but with a modified amplitude of oscillation \cite{Rempe91,Carmichael07e}: \begin{widetext} \begin{equation} g^{(2)}_{\{{\bm r}_j\}}(\tau)=\left\{ 1-\frac{[1+\xi(1+C_{\{{\bm r}_j\}})] S_{\{{\bm r}_j\}}-2C_{\{{\bm r}_j\}}}{1+(1+\xi/2)S_{\{{\bm r}_j\}}}\, e^{-\frac12(\kappa+\gamma/2)\tau}\!\left[\cos\left(\Omega\tau\right) +\frac{\frac12(\kappa+\gamma/2)}{\Omega}\sin\left(\Omega\tau\right)\right]\right\}^2, \label{eqn:g2_fixed_configuration} \end{equation} \end{widetext} with \begin{equation} C_{\{{\bm r}_j\}}\equiv\sum_{j=1}^N C_{1j}, \qquad C_{1j}\equiv g^2({\bm r}_j)/\kappa\gamma, \end{equation} \begin{equation} S_{\{{\bm r}_j\}}\equiv\sum_{j=1}^N \frac{2 C_{1j}}{1+\xi(1+C_{\{{\bm r}_j\}}) -2\xi C_{1j}}, \end{equation} where the vacuum Rabi frequency is given by Eq.~(\ref{eqn:vacuum_Rabi_frequency}) with effective number of interacting atoms \begin{equation} \bar N_{\rm eff}\to N^{\{{\bm r}_j\}}_{\rm eff}\equiv\sum_{j=1}^Ng^2({\bm r}_j)/g_{\rm max}^2. \end{equation} \subsection{Monte-Carlo Average and Comparison with Experimental Results} \label{sec:Monte_Carlo_average} In reality the number of atoms and their configuration both fluctuate in time. 
These fluctuations are readily taken into account if the typical atomic motion is sufficiently slow; one takes a stationary-atom Monte-Carlo average over configurations, adopting a finite interaction volume $V_F$ and combining a Poisson average over the number of atoms $N$ with an average over their uniformly distributed positions ${\bm r}_j$, $j=1,\ldots,N$. In particular, the effective number of interacting atoms becomes \begin{equation} \bar N_{\rm eff}=\overline{N^{\{{\bm r}_j\}}_{\rm eff}}, \end{equation} where the overbar denotes the Monte-Carlo average. Although it is not justified by the velocities listed in Table \ref{tab:parameters}, a stationary-atom approximation was adopted when modeling the experimental results in Refs.~\cite{Rempe91} and \cite{Foster00a}. The correlation function was computed as the Monte-Carlo average \begin{equation} g^{(2)}(\tau)=\overline{g^{(2)}_{\{{\bm r}_j\}}(\tau)}, \label{eqn:g2_average1} \end{equation} with $g^{(2)}_{\{{\bm r}_j\}}(\tau)$ given by Eq.~(\ref{eqn:g2_fixed_configuration}). In fact, taking a Monte-Carlo average over {\it normalized\/} correlation functions in this way is not, strictly, correct. In practice, the delayed photon coincidence rate is first measured, as a separate average, and subsequently normalized by the average photon counting rate.
The more appropriate averaging procedure is therefore \begin{equation} g^{(2)}(\tau)=\frac{\overline{\langle\hat a^\dag(0)\hat a^\dag(\tau) \hat a(\tau)\hat a(0)\rangle_{\{{\bm r}_j\}}}} {\left(\overline{\langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}}\mkern2mu\right)^2}, \end{equation} or, in a form revealing more directly the relationship to Eq.~(\ref{eqn:g2_fixed_configuration}), the average is to be weighted by the square of the photon number: \begin{equation} g^{(2)}(\tau)=\frac{\overline{\left(\left\langle \hat a^\dag \hat a\right\rangle_{\{{\bm r}_j\}}\right)^2g^{(2)}_{\{{\bm r}_j\}}(\tau)}} {\left(\overline{\langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}}\mkern2mu\right)^2}, \label{eqn:g2_average2} \end{equation} where \begin{equation} \langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}=\left(\frac{{\cal E}/\kappa} {1+2C_{\{{\bm r}_j\}}}\right)^2 \label{eqn:photon_number} \end{equation} is the intracavity photon number expectation---in stationary state $|\psi_{\rm ss}\rangle$ [Eq.~(\ref{eqn:stationary_state})]---for the configuration of atoms $\{{\bm r}_j\}$. Note that the statistical independence of forwards-scattering events that are widely separated in time yields the limit \begin{equation} \lim_{\tau\to\infty}g^{(2)}_{\{{\bm r}_j\}}(\tau)\to1, \end{equation} which clearly holds for the average (\ref{eqn:g2_average1}) as well. Equation~(\ref{eqn:g2_average2}), on the other hand, yields \begin{equation} \lim_{\tau\to\infty}g^{(2)}(\tau)\to\overline{\left(\left\langle \hat a^\dag \hat a\right\rangle_{\{{\bm r}_j\}}\right)^2}\bigg{/}\left(\overline{\langle\hat a^\dag\hat a\rangle_{\{{\bm r}_j\}}} \mkern2mu\right)^2\ge1. \end{equation} A value greater than unity arises because while there are fluctuations in $N$ and $\{{\bm r}_j\}$, their correlation time is infinite under the stationary-atom approximation; the expected decay of the correlation function to unity is therefore not observed. 
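The difference between the two averaging schemes, Eqs.~(\ref{eqn:g2_average1}) and (\ref{eqn:g2_average2}), is easily explored numerically. The following sketch is ours, not the simulation code of Refs.~\cite{Rempe91,Foster00a}: it assumes a standing-wave Gaussian mode $g({\bm r})=g_{\rm max}\cos(kz)\exp[-(x^2+y^2)/w_0^2]$ for the configuration sums and takes the per-configuration correlation values as inputs.

```python
import math

def config_sums(positions, g_max, kappa, gamma, w0, wavelength):
    """Configuration sums C_{r_j} and S_{r_j} entering
    Eq. (g2_fixed_configuration); positions is a list of (x, y, z)."""
    k = 2.0 * math.pi / wavelength
    xi = 2.0 * kappa / gamma
    c1j = [(g_max * math.cos(k * z) * math.exp(-(x * x + y * y) / w0**2))**2
           / (kappa * gamma) for (x, y, z) in positions]
    big_c = sum(c1j)
    big_s = sum(2.0 * c / (1.0 + xi * (1.0 + big_c) - 2.0 * xi * c)
                for c in c1j)
    return big_c, big_s

def averaged_g2(samples, e_over_kappa=1.0):
    """samples: list of (C_config, g2_config) pairs, one per sampled
    configuration, at a common delay tau. Returns the naive average,
    Eq. (g2_average1), and the photon-number-weighted average,
    Eq. (g2_average2), with weights from Eq. (photon_number)."""
    n = [(e_over_kappa / (1.0 + 2.0 * c))**2 for c, _ in samples]
    g2 = [g for _, g in samples]
    naive = sum(g2) / len(g2)
    mean_n = sum(n) / len(n)
    weighted = sum(ni * ni * gi for ni, gi in zip(n, g2)) / len(n) / mean_n**2
    return naive, weighted
```

Setting all per-configuration values to unity (the $\tau\to\infty$ limit) returns ${\rm weighted}=\overline{n^2}/\,\overline{n}^{\,2}\ge1$, the large-delay offset noted above.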
\begin{figure}[t] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig2.eps} \caption{Second-order correlation function with Monte-Carlo average over number of atoms $N$ and configuration $\{{\bm r_j}\}$. The average is taken according to Eq.~(\ref{eqn:g2_average1}) (thin line) and Eq.~(\ref{eqn:g2_average2}) (thick line) for (a) Parameter Set 1, (b) Parameter Set 2.} \label{fig:fig2} \end{figure} \begin{figure}[b] \begin{center} \begin{tabular}{c} \includegraphics[height=6cm]{fig3.eps} \end{tabular} \end{center} \caption{ \label{fig:fig3} Second-order correlation function with Monte-Carlo average, Eq.~(\ref{eqn:g2_average2}), over number of atoms $N$ and configuration $\{{\bm r_j}\}$ compared with the experimental data from (a) Fig.~4(a) of Ref.~\cite{Rempe91} (Parameter Set 1) and (b) Fig.~4 of Ref.~\cite{Foster00a} (Parameter Set 2).} \end{figure} The two averaging schemes are compared in the plots of Fig.~\ref{fig:fig2}, which suggest that atomic beam fluctuations should have at least a small effect in the experiments; although, just how important they turn out to be is not captured at all by the figure. The actual disagreement between the model and the data is displayed in Fig.~\ref{fig:fig3}. The measured photon antibunching effect is significantly smaller than predicted in both experiments: smaller by a factor of 4 in Fig.~\ref{fig:fig3}(a), as the authors of Ref.~\cite{Rempe91} explicitly state, and by a factor of a little more than 2 in Fig.~\ref{fig:fig3}(b). The rest of the paper is devoted to a resolution of this disagreement. It certainly arises from a breakdown of the stationary-atom approximation as suggested by Rempe {\it et al.\/} \cite{Rempe91}. Physics beyond the addition of a finite correlation time for fluctuations of $N(t)$ and $\{{\bm r}_j(t)\}$ is needed, however. We aim to show that the single most important factor is the alignment of the atomic beam. 
\section{Delayed Photon Coincidences for an Atomic Beam} \label{sec:atomic_beam} We return now to the full atomic beam simulation outlined in Sec.~\ref{sec:cavityQED_atomic_beams}. With the beam perpendicular to the cavity axis, the rate of change of the dipole coupling constants might be characterized by the cavity-mode transit time, determined from the mean atomic speed and the cavity-mode waist. Taking the values of these quantities from Table \ref{tab:parameters}, the experiment of Rempe {\it et al.\/} has $w_0/\bar v_{\rm oven}=182\mkern2mu{\rm nsec}$, which should be compared with a vacuum-Rabi-oscillation decay time $2(\kappa+\gamma/2)^{-1}=94\mkern2mu{\rm nsec}$, while Foster {\it et al.\/} have $w_0/\bar v_{\rm oven}=66\mkern2mu{\rm nsec}$ and a decay time $2(\kappa+\gamma/2)^{-1}=29\mkern2mu{\rm nsec}$. In both cases, the ratio between the transit time and decay time is $\sim2$; thus, we might expect the internal state dynamics to follow the atomic beam fluctuations adiabatically, to a good approximation at least, providing a justification for the stationary-atom approximation. Figure~\ref{fig:fig3} suggests that this is not so. Our first task, then, is to see how well in practice the adiabatic following assertion holds. \subsection{Monte-Carlo Simulation of the Atomic Beam: Effect of Beam Misalignment} \label{sec:correlation_function} Atomic beam fluctuations induce fluctuations of the intracavity photon number expectation, as illustrated by the examples in Figs.~\ref{fig:fig4} and \ref{fig:fig5}. Consider the two curves (a) in these figures first, where the atomic beam is aligned perpendicular to the cavity axis. The ringing at regular intervals along these curves is the transient response to {\it enforced\/} cavity-mode quantum jumps---jumps {\it enforced\/} to sample the quantum fluctuations efficiently (see Sec.~\ref{sec:simulation_results}).
Ignoring these perturbations for the present, we see that with the atomic beam aligned perpendicular to the cavity axis the fluctuations evolve more slowly than the vacuum Rabi oscillation---at a similar rate, in fact, to the vacuum Rabi oscillation decay. As anticipated, an approximate adiabatic following is plausible. Consider now the two curves (b); these introduce a $9.6\mkern2mu{\rm mrad}$ misalignment of the atomic beam, following up on the comment of Foster {\it et al.\/}~\cite{Foster00a} that misalignments as large as~$1^\circ$ ($17.45\mkern2mu{\rm mrad}$) might occur. The changes in the fluctuations are dramatic. First, their size increases, though by less on average than it might appear. The altered distributions of intracavity photon numbers are shown in Fig.~\ref{fig:fig6}. The means are not so greatly changed, but the variances (measured relative to the square of the mean) increase by a factor of 2.25 in Fig.~\ref{fig:fig4} and 1.45 in Fig.~\ref{fig:fig5}. Notably, the distribution is asymmetric, so the most probable photon number lies below the mean. The asymmetry is accentuated by the tilt, especially for Parameter Set 1 [Fig.~\ref{fig:fig6}(a)]. More important than the change in amplitude of the fluctuations, though, is the increase in their frequency. Again, the most significant effect occurs for Parameter Set 1 (Fig.~\ref{fig:fig4}), where the frequency with a $9.6\mkern2mu{\rm mrad}$ tilt approaches that of the vacuum Rabi oscillation itself; clearly, there can be no adiabatic following under these conditions. Indeed, the net result of the changes from Fig.~\ref{fig:fig4}(a) to Fig.~\ref{fig:fig4}(b) is that the {\it quantum\/} fluctuations, initiated in the simulation by quantum jumps, are completely lost in a background of classical noise generated by the atomic beam. It is clear that an atomic beam misalignment of sufficient size will drastically reduce the photon antibunching effect observed. 
\begin{figure}[b] \vskip0.15in \hskip-0.2in \includegraphics[width=2.8in,keepaspectratio=true]{fig4.eps} \caption{Typical trajectory of the intracavity photon number expectation for Parameter Set 1: (a) atomic beam aligned perpendicular to the cavity axis, (b) with a $9.6\mkern2mu{\rm mrad}$ tilt of the atomic beam. The driving field strength is ${\mathcal E}/\kappa=2.5\times10^{-2}$.} \label{fig:fig4} \vskip0.4in \hskip-0.2in \includegraphics[width=2.8in,keepaspectratio=true]{fig5.eps} \caption{As in Fig.~\ref{fig:fig4} but for Parameter Set 2.} \label{fig:fig5} \end{figure} \begin{figure}[t] \hskip-0.2in \includegraphics[width=2.8in,keepaspectratio=true]{fig6.eps} \caption{Distribution of intracavity photon number expectation with the atom beam perpendicular to the cavity axis (thin line) and a $9.6\mkern2mu{\rm mrad}$ tilt of the atomic beam (thick line): (a) Parameter Set 1, (b) Parameter Set 2.} \label{fig:fig6} \end{figure} For a more quantitative characterization of its effect, we carried out quantum trajectory simulations in a one-quantum truncation (without quantum jumps) and computed the semiclassical photon number correlation function \begin{eqnarray} g^{(2)}_{\rm sc}(\tau)=\frac{\overline{\langle(\hat a^\dag\hat a)(t)\rangle_{\rm REC} \langle(\hat a^\dag\hat a)(t+\tau)\rangle_{\rm REC}}} {\left(\overline{\langle(\hat a^\dag\hat a)(t)\rangle_{\rm REC}}\mkern2mu\right)^2}, \label{eqn:g2_semiclassical} \end{eqnarray} where the overbar denotes a time average (in practice an average over an ensemble of sampling times $t_k$). 
The photon number expectation was calculated in two ways: first, by assuming that the conditional state adiabatically follows the fluctuations of the atomic beam, in which case, from Eq.~(\ref{eqn:photon_number}), we may write \begin{equation} \langle(\hat a^\dag\hat a)(t)\rangle_{\rm REC}=\left(\frac{{\cal E}/\kappa} {1+2C_{\{{\bm r_j}(t)\}}}\right)^2, \label{eqn:ABC:ad} \end{equation} and second, without the adiabatic assumption, in which case the photon number expectation was calculated from the state vector in the normal way. Correlation functions computed for different atomic beam tilts according to this scheme are plotted in Figs.~\ref{fig:fig7} and \ref{fig:fig8}. In each case the curves shown in the left column assume adiabatic following while those in the right column do not. The upper-most curves [frames (a) and (e)] hold for a beam aligned perpendicular to the cavity axis and those below [frames (b)--(d) and (f)--(h)] show the effects of increasing misalignment of the atomic beam. A number of comments are in order. Consider first the aligned atomic beam. Correlation times read from the figures are in approximate agreement with the cavity-mode transit times computed above: the numbers are $191\mkern2mu{\rm nsec}$ and $167\mkern2mu{\rm nsec}$ from frames (a) and (e), respectively, of Fig.~\ref{fig:fig7}, compared with $w_0/\bar v_{\rm oven}=182\mkern2mu{\rm nsec}$; and $68\mkern2mu{\rm nsec}$ and $53\mkern2mu{\rm nsec}$ from frames (a) and (e) of Fig.~\ref{fig:fig8}, respectively, compared with $w_0/\bar v_{\rm oven}=66\mkern2mu{\rm nsec}$. The numbers show a small decrease in the correlation time when the adiabatic following assumption is lifted (by 10--20\%) but no dramatic change; and there is a corresponding small increase in the fluctuation amplitude.
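As a sketch (ours, not the simulation code) of how Eq.~(\ref{eqn:g2_semiclassical}) is estimated from a sampled record, with the adiabatic-following photon number of Eq.~(\ref{eqn:ABC:ad}) as one possible input:

```python
def adiabatic_photon_number(c_series, e_over_kappa=1.0):
    """Adiabatic-following photon number, Eq. (ABC:ad), along a sampled
    record of the configuration sum C_{r_j(t)}."""
    return [(e_over_kappa / (1.0 + 2.0 * c))**2 for c in c_series]

def semiclassical_g2(n_series, lag):
    """Time-average estimator of Eq. (g2_semiclassical) at delay
    tau = lag * (sampling interval)."""
    m = len(n_series) - lag
    num = sum(n_series[i] * n_series[i + lag] for i in range(m)) / m
    mean_n = sum(n_series) / len(n_series)
    return num / mean_n**2
```

For a constant record the estimator returns unity, while any classical fluctuation of the photon number gives $g^{(2)}_{\rm sc}(0)>1$; these classical signals are necessarily photon bunched.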
\begin{figure}[h] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig7.eps} \caption{Semiclassical correlation function for Parameter Set 1, with adiabatic following of the photon number (left column) and without adiabatic following (right column); for atomic beam tilts of (a,e) $0\mkern2mu{\rm mrad}$, (b,f) $4\mkern2mu{\rm mrad}$, (c,g) $9\mkern2mu{\rm mrad}$, (d,h) $13\mkern2mu{\rm mrad}$.} \label{fig:fig7} \end{figure} \begin{figure}[h] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig8.eps} \caption{As in Fig.~\ref{fig:fig7} but for Parameter Set 2 and atomic beam tilts of (a,e) $0\mkern2mu{\rm mrad}$, (b,f) $10\mkern2mu{\rm mrad}$, (c,g) $17\mkern2mu{\rm mrad}$, (d,h) $34\mkern2mu{\rm mrad}$.} \label{fig:fig8} \end{figure} Consider now the effect of an atomic beam tilt. Here the changes are significant. They are most evident in frames (d) and (h) of each figure, but clear already in frames (c) and (g) of Fig.~\ref{fig:fig7}, and frames (b) and (f) of Fig.~\ref{fig:fig8}, where the tilts are close to the tilt used to generate Figs.~\ref{fig:fig4}(b) and \ref{fig:fig5}(b) (also to those used for the data fits in Sec.~\ref{sec:simulation_results}). There is first an increase in the magnitude of the fluctuations---the factors 2.25 and 1.45 noted above---but, more significant, a separation of the decay into two pieces: a central component, with short correlation time, and a much broader component with correlation time larger than $w_0/\bar v_{\rm oven}$. Thus, for a misaligned atomic beam, the dynamics become notably nonadiabatic. 
Our explanation of the nonadiabaticity begins with the observation that any tilt introduces a velocity component along the standing wave, with transit times through a quarter wavelength of $\lambda/(4\bar v_{\rm oven}\sin\theta) =86\mkern2mu{\rm nsec}$ in the Rempe {\it et al.\/} \cite{Rempe91} experiment and $\lambda/(4\bar v_{\rm oven} \sin\theta)=60\mkern2mu{\rm nsec}$ in the Foster {\it et al.\/} \cite{Foster00a} experiment. Compared with the transit time $w_0/\bar v_{\rm oven}$, these numbers have moved closer to the decay times of the vacuum Rabi oscillation---$94\mkern2mu{\rm nsec}$ and $29\mkern2mu {\rm nsec}$, respectively. Note that the distances traveled through the standing wave during the cavity-mode transit, in time $w_0/\bar v_{\rm oven}$, are $w_0\sin\theta=0.53\lambda$ (Parameter Set 1) and $w_0\sin\theta=0.28\lambda$ (Parameter Set 2). It is difficult to explain the detailed shape of the correlation function under these conditions. Speaking broadly, though, {\it fast atoms\/} produce the central component, the short correlation time associated with nonadiabatic dynamics, while {\it slow atoms\/} produce the background component with its long correlation time, which follows from an adiabatic response. Increased tilt brings greater separation between the responses to fast and slow atoms. Simple functional fits to the curves in frame (g) of Fig.~\ref{fig:fig7} and frame (f) of Fig.~\ref{fig:fig8} yield short correlation times of 40--50$\mkern2mu{\rm nsec}$ and $20\mkern2mu{\rm nsec}$, respectively. Consistent numbers are recovered by adding the decay rate of the vacuum Rabi oscillation to the inverse travel time through a quarter wavelength; thus, $(1/94+1/86)^{-1}\mkern2mu{\rm nsec}=45\mkern2mu{\rm nsec}$ and $(1/29+1/60)^{-1} \mkern2mu{\rm nsec}=20\mkern2mu{\rm nsec}$, respectively, in good agreement with the correlation times deduced from the figures.
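The rate-addition estimate is simple enough to check directly. The snippet below (ours; all times in nsec, using the numbers quoted in the text) reproduces the $45\mkern2mu{\rm nsec}$ and $20\mkern2mu{\rm nsec}$ correlation times:

```python
import math

def quarter_wave_transit(wavelength, v, theta):
    """Travel time through a quarter wavelength along the standing wave,
    lambda / (4 v sin(theta))."""
    return wavelength / (4.0 * v * math.sin(theta))

def combined_correlation_time(rabi_decay_time, quarter_wave_time):
    """Short correlation time for a tilted beam: the vacuum-Rabi decay
    rate and the inverse quarter-wavelength travel time added as rates."""
    return 1.0 / (1.0 / rabi_decay_time + 1.0 / quarter_wave_time)

t1 = combined_correlation_time(94.0, 86.0)  # Parameter Set 1: ~45 nsec
t2 = combined_correlation_time(29.0, 60.0)  # Parameter Set 2: ~20 nsec
```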
The last and possibly most important thing to note is the oscillation in frames (g) and (h) of Fig.~\ref{fig:fig7} and frame (h) of Fig.~\ref{fig:fig8}. Its frequency is the vacuum Rabi frequency, which shows unambiguously that the oscillation is caused by a nonadiabatic response of the intracavity photon number to the fluctuations of the atomic beam. For the tilt used in frame (g) of Fig.~\ref{fig:fig7}, the transit time through a quarter wavelength is approximately equal to the vacuum-Rabi-oscillation decay time, while it is twice that in frame (f) of Fig.~\ref{fig:fig8}. As the tilts used are close to those giving the best data fits in Sec.~\ref{sec:simulation_results}, this would suggest that atomic beam misalignment places the experiment of Rempe {\it et al.\/}~\cite{Rempe91} further into the nonadiabatic regime than that of Foster {\it et al.\/}~\cite{Foster00a}, though the tilt is similar in the two cases. The observation is consistent with the greater contamination by classical noise in Fig.~\ref{fig:fig4}(b) than in Fig.~\ref{fig:fig5}(b) and with the larger departure of the Rempe {\it et al.\/} data from the stationary-atom model in Fig.~\ref{fig:fig3}. \subsection{Simulation Results and Data Fits} \label{sec:simulation_results} The correlation functions in the right-hand column of Figs.~\ref{fig:fig7} and \ref{fig:fig8} account for atomic-beam-induced classical fluctuations of the intracavity photon number. While some exhibit a vacuum Rabi oscillation, the signals are, of course, photon bunched; a correlation function like that of Fig.~\ref{fig:fig7}(g) provides evidence of {\it collective\/} strong coupling, but not of strong coupling of the {\it one-atom\/} kind, for which a photon antibunching effect is needed. We now carry out full quantum trajectory simulations in a two-quanta truncation to recover the photon antibunching effect---i.e., we bring back the quantum jumps. 
In the weak-field limit the {\it normalized\/} photon correlation function is independent of the amplitude of the driving field ${\cal E}$ [Eqs.~(\ref{eqn:g2_ideal}) and (\ref{eqn:g2_fixed_configuration})]. The forwards photon scattering rate itself is proportional to $({\cal E}/\kappa)^2$ [Eq.~(\ref{eqn:photon_number})], and must be set in the simulations to a value very much smaller than the inverse vacuum-Rabi-oscillation decay time [Eq.~(\ref{eqn:weak_field_limit1})]. Typical values of the intracavity photon number were $\sim10^{-7}$--$10^{-6}$. It is impractical, under these conditions, to wait for the natural occurrence of forwards-scattering quantum jumps. Instead, cavity-mode quantum jumps are enforced at regular sample times $t_k$ [see Figs.~\ref{fig:fig4}(a) and \ref{fig:fig5}(a)]. Denoting the record with enforced cavity-mode jumps by $\overline{\vbox{\vskip7.5pt}{\rm REC} \mkern-2mu}\mkern2mu$, the second-order correlation function is then computed as the ratio of ensemble averages \begin{equation} \label{eqn:g2} g^{(2)}(\tau)=\frac{\overline{\langle(\hat a^\dag\hat a)(t_k)\rangle_{\overline{{\rm REC}\mkern-4mu}} \mkern4mu \langle(\hat a^\dag\hat a)(t_k+\tau)\rangle_{\overline{{\rm REC}\mkern-4mu}}\mkern4mu}} {\left(\overline{\langle(\hat a^\dag\hat a)(t_l)\rangle_{\overline{{\rm REC}\mkern-4mu}}\mkern4mu} \mkern2mu\right)^{\mkern-2mu 2}}\mkern2mu, \end{equation} where the sample times in the denominator, $t_l$, are chosen to avoid the intervals---of duration a few correlation times---immediately after the jump times $t_k$; this ensures that both ensemble averages are taken in the steady state.
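In code, the estimator of Eq.~(\ref{eqn:g2}) is a ratio of ensemble averages over the simulation record. The sketch below is our illustration; the post-jump records and steady-state samples are taken as inputs from a simulation and are not reproduced here:

```python
def g2_from_enforced_jumps(post_jump_records, steady_samples, lag):
    """Estimator of Eq. (g2). post_jump_records[k][i] is the conditional
    photon number <a^dag a>(t_k + i*dt) following the k-th enforced
    cavity-mode jump (i = 0 at the jump time); steady_samples holds
    photon numbers at times t_l chosen away from any jump, so the
    denominator is a steady-state average."""
    num = sum(rec[0] * rec[lag] for rec in post_jump_records)
    num /= len(post_jump_records)
    mean_ss = sum(steady_samples) / len(steady_samples)
    return num / mean_ss**2
```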
With the cut-off parameter [Eq.~(\ref{eqn:interaction_volume})] set to $F=0.01$, the number of atoms within the interaction volume typically fluctuates around $N(t)\sim400$-$450$ atoms for Parameter Set 1 and $N(t)\sim280$-$320$ atoms for Parameter Set 2; in a two-quanta truncation, the corresponding numbers of state amplitudes are $\sim90,000$ (Parameter Set 1) and $\sim45,000$ (Parameter Set 2). \begin{figure}[b] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig9.eps} \caption{Second-order correlation function from full quantum trajectory simulations with a two-quanta truncation: (a) Parameter Set 1 and $\theta=0\mkern2mu{\rm mrad}$ (thick line), $7\mkern2mu{\rm mrad}$ (medium line), $12\mkern2mu{\rm mrad}$ (thin line); (b) Parameter Set 2 and $\theta=0\mkern2mu{\rm mrad}$ (thick line), $10\mkern2mu{\rm mrad}$ (medium line), $17\mkern2mu{\rm mrad}$ (thin line).} \label{fig:fig9} \end{figure} \begin{figure}[t] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig10.eps} \caption{Best fits to experimental results: (a) data from Fig.~4(a) of Ref.~\cite{Rempe91} are fitted with Parameter Set 1 and $\theta=9.7\mkern2mu{\rm mrad}$ (thick line) and $10\mkern2mu{\rm mrad}$ (thin line); (b) data from Fig.~4 of Ref.~\cite{Foster00a} are fitted with Parameter Set~2 and $\theta=9.55\mkern2mu{\rm mrad}$. Averages of (a) 200,000 and (b) 50,000 samples were taken with a cavity-mode cut-off $F=0.01$.} \label{fig:fig10} \end{figure} Figure~\ref{fig:fig9} shows the computed correlation functions for various atomic beam tilts. We select from a series of such results the one that fits the measured correlation function most closely. Optimum tilts are found to be $9.7\mkern2mu{\rm mrad}$ for the Rempe {\it et al.\/}~\cite{Rempe91} experiment and $9.55\mkern2mu{\rm mrad}$ for the experiment of Foster {\it et al.\/}~\cite{Foster00a}. The best fits are displayed in Fig.~\ref{fig:fig10}. 
In the case of the Foster {\it et al.\/} data the fit is extremely good. The only obvious disagreement is that the fitted frequency of the vacuum Rabi oscillation is possibly a little low. This could be corrected by a small increase in atomic beam density---the parameter $\bar N_{\rm eff}$---which is only known approximately from the experiment, in fact by fitting the formula (\ref{eqn:vacuum_Rabi_frequency}) to the data. The fit to the data of Rempe {\it et al.\/}~\cite{Rempe91} is not quite so good, but still convincing with some qualifications. Note, in particular, that the tilt used for the fit might be judged a little too large, since the three central minima in Fig.~\ref{fig:fig10}(a) are almost flat, while the data suggest they should more closely follow the curve of a damped oscillation. As the thin line in the figure shows, increasing the tilt raises the central minimum relative to the two on the side; thus, although a better fit around $\kappa\tau=0$ is obtained, the overall fit becomes worse. This trend results from the sharp maximum in the semiclassical correlation function of Fig.~\ref{fig:fig7}(g), which becomes more and more prominent as the atomic beam tilt is increased. In summary, the fit of Fig.~\ref{fig:fig10}(b) is extremely good, while the thick line in Fig.~\ref{fig:fig10}(a), with a $9.7\mkern2mu{\rm mrad}$ tilt, though not perfect, agrees moderately well with the data once the uncertainty set by shot noise is included, i.e., adding error bars of a few percent (see Fig.~\ref{fig:fig13}). Thus, leaving aside possible adjustments due to omitted noise sources, such as spontaneous emission---to which we return in Sec.~\ref{sec:photon_number}---and atomic and cavity detunings, the results of this and the last section provide strong support for the proposal that the disagreement between theory and experiment presented in Fig.~\ref{fig:fig3} arises from an atomic beam misalignment of approximately $0.5^\circ$.
One final observation should be made regarding the fit to the Rempe {\it et al.\/}~\cite{Rempe91} data. Figure \ref{fig:fig11} replots the comparison made in Fig.~\ref{fig:fig10}(a) for a larger range of time delays. Frame (a) plots the result of our simulation for a perfectly aligned atomic beam, and frames (b) and (c) show the results, plotted in Fig.~\ref{fig:fig10}(a), corresponding to atomic beam tilts of $\theta=9.7 \mkern2mu{\rm mrad}$ and $10\mkern2mu{\rm mrad}$, respectively. The latter two plots are overlaid by the experimental data. Aside from the reduced amplitude of the vacuum Rabi oscillation, in the presence of the tilt the correlation function exhibits a broad background arising from atomic beam fluctuations. Notably, the background is entirely absent when the atomic beam is aligned. The experimental data exhibit just such a background (Fig.~3(a) of Ref.~\cite{Rempe91}); moreover, an estimate, from Fig.~\ref{fig:fig11}, of the background correlation time yields approximately $400\mkern2mu{\rm nsec}$, consistent with the experimental measurement. It is significant that this number is more than twice the transit time, $w_0/\bar v_{\rm oven} =182\mkern2mu{\rm nsec}$, and therefore not explained by a perpendicular transit across the cavity mode. In fact the background mimics the feature noted for larger tilts in Figs.~\ref{fig:fig7} and \ref{fig:fig8}; as mentioned there, it appears to find its origin in the separation of an adiabatic (slowest atoms) from a nonadiabatic (fastest atoms) response to the density fluctuations of the atomic beam.
Note, however, that a correlation time of $400\mkern2mu{\rm nsec}$ appears to be consistent with a perpendicular transit across the cavity when the cavity-mode transit time is defined as $2w_0/\bar v_{\rm oven}=364\mkern2mu{\rm nsec}$, or, using the peak rather than average velocity, as $4w_0/\sqrt\pi\bar v_{\rm oven}=411\mkern2mu{\rm nsec}$; the latter definition was used to arrive at the $400\mkern2mu{\rm nsec}$ quoted in Ref.~\cite{Rempe91}. There is, of course, some ambiguity in how a transit time should be defined. We are assuming that the time to replace an ensemble of interacting atoms with a statistically independent one---which ultimately is what determines the correlation time---is closer to $w_0/\bar v_{\rm oven}$ than $2w_0/\bar v_{\rm oven}$. In support of the assumption we recall that the number obtained in this way agrees with the semiclassical correlation function for an aligned atomic beam [Figs.~\ref{fig:fig7} and~\ref{fig:fig8}, frame (a)]. \begin{figure}[ht] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig11.eps} \caption{Second-order correlation function from full quantum trajectory simulations with a two-quanta basis for Parameter Set 1 and (a) $\theta=0\mkern2mu{\rm mrad}$, (b) $\theta=9.7\mkern2mu{\rm mrad}$, (c) $\theta=10\mkern2mu{\rm mrad}$. Averages of (a) 15,000, and (b) and (c) 200,000 samples were taken with a cavity-mode cut-off $F=0.01$.} \label{fig:fig11} \end{figure} \subsection{Mean-Doppler-Shift Compensation} \label{sec:detuning} Foster {\it et al.\/}~\cite{Foster00a}, in an attempt to account for the disagreement of their measurements and the stationary-atom model, extended the results of Sec.~\ref{sec:fixed_configuration} to include an atomic detuning. 
They then fitted the data using the following procedure: (i) the component of atomic velocity along the cavity axis is viewed as a Doppler shift from the stationary-atom resonance, (ii) the mean shift is assumed to be offset by an adjustment of the driving field frequency (tuning to moving atoms) at the time the data are taken, and (iii) an average over residual detunings---deviations from the mean---is taken in the model, i.e., the detuning-dependent generalization of Eq.~(\ref{eqn:g2_fixed_configuration}). The approach yields a reasonable fit to the data (Fig.~6 of Ref.~\cite{Foster00a}). The principal difficulty with this approach is that a standing-wave cavity presents an atom with {\it two\/} Doppler shifts, not one. It seems unlikely, then, that adjusting the driving field frequency to offset one shift and not the other could compensate for even the average effect of the atomic beam tilt. This difficulty is absent in a ring cavity, though, so we first assess the performance of the outlined prescription in the ring-cavity case. In a ring cavity, the spatial dependence of the coupling constant [Eq.~(\ref{eqn:coupling_constant})] is replaced by \begin{equation} g({\bm r}_j(t))=\frac{g_{\rm max}}{\sqrt2}\exp(ikz_j(t))\exp\!\left[-\frac{x_j^2(t)+y_j^2(t)}{w_0^2}\right], \end{equation} where the factor $\sqrt2$ ensures that the collective coupling strength and vacuum Rabi frequency remain the same. Figure \ref{fig:fig12}(a) shows the result of a numerical implementation of the proposed mean-Doppler-shift compensation for an atomic beam tilt of $17.3\mkern2mu{\rm mrad}$, as used in Fig.~6 of Ref.~\cite{Foster00a}. It works rather well. The compensated curve (thick line) almost recovers the full photon antibunching effect that would be seen with an aligned atomic beam (thin line). The degradation that remains is due to the uncompensated dispersion of velocities (Doppler shifts) in the atomic beam.
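The origin of the one shift versus two can be seen directly from the mode functions: for an atom whose axial coordinate advances as $z_j(t)=v_zt$, the ring-cavity factor $\exp(ikz_j(t))$ modulates the coupling at the single frequency $+kv_z$, whereas the standing-wave factor $\cos(kz_j(t))=[\exp(ikv_zt)+\exp(-ikv_zt)]/2$ contains the pair $\pm kv_z$. A short sketch puts in illustrative numbers; the wavelength and mean velocity below are assumptions for illustration, not values from the parameter tables:

```python
import math

# Illustrative magnitude of the Doppler shift(s) seen by a tilted-beam atom.
# The wavelength and mean velocity are assumed for illustration only.
wavelength = 852e-9                 # optical wavelength (m), assumed
v_oven = 270.0                      # mean thermal velocity (m/s), assumed
theta = 17.3e-3                     # atomic beam tilt (rad), as in Fig. 12

v_z = v_oven * math.sin(theta)      # velocity component along the cavity axis
doppler = v_z / wavelength          # shift k*v_z/(2*pi), in Hz

print(f"ring cavity:   one shift at  +{doppler/1e6:.2f} MHz")
print(f"standing wave: two shifts at +/-{doppler/1e6:.2f} MHz")
```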
For the case of a standing-wave cavity, on the other hand, the outcome is entirely different. This is shown by Fig.~\ref{fig:fig12}(b). There, offsetting one of the two Doppler shifts only makes the degradation of the photon antibunching effect worse. In fact, we find that any significant detuning of the driving field from the stationary atom resonance is highly detrimental to the photon antibunching effect and inconsistent with the Foster {\it et al.\/} data. \begin{figure}[t] \hskip-0.2in \includegraphics[width=3.0in,keepaspectratio=true]{fig12.eps} \caption{Doppler-shift compensation for a misaligned atomic beam in (a) ring and (b) standing-wave cavities (Parameter Set 2). The second-order correlation function is computed with the atomic beam perpendicular to the cavity axis (thin line), a $17.3\mkern2mu{\rm mrad}$ tilt of the atomic beam (medium line), and a $17.3\mkern2mu{\rm mrad}$ tilt plus compensating detuning of the cavity and stationary atom resonances $\Delta\omega/\kappa=k\bar v_{\rm oven}\sin\theta/\kappa=0.916$ (thick line).} \label{fig:fig12} \end{figure} \section{Intracavity Photon Number} \label{sec:photon_number} The best fits displayed in Fig.~\ref{fig:fig10} were obtained from simulations with a two-quanta truncation and premised upon the measurements being made in the weak-field limit. The strict requirement of the limit sets a severe constraint on the intracavity photon number. We consider now whether the requirement is met in the experiments. 
Working from Eqs.~(\ref{eqn:forwards_rate}) and (\ref{eqn:side_rate}), and the solution to Eq.~(\ref{eqn:stationary_state}), a fixed configuration $\{{\bm r}_j\}$ of $N$ atoms (Sec.~\ref{sec:fixed_configuration}) yields photon scattering rates \cite{Carmichael91,Brecha99,Carmichael07c} \begin{subequations} \begin{equation} R_{\rm forwards}=2\kappa\langle\hat a^\dag\hat a\rangle_{\rm REC}=2\kappa\mkern-3mu \left(\frac{{\cal E/\kappa}}{1+2C_{\{{\bm r_j}\}}}\right)^{\mkern-2mu 2}, \label{eqn:scattering_rate_forwards} \end{equation} and \begin{eqnarray} R_{\rm side}&=&\gamma\sum_{k=1}^{N}\langle\hat\sigma_{k+}\hat\sigma_{k-}\rangle\nonumber\\ \noalign{\vskip2pt} &=&\gamma\sum_{k=1}^{N}\left(\frac{g({\bm r}_k)}{\gamma/2}\frac{{\cal E/\kappa}} {1+2C_{\{{\bm r_j}\}}}\right)^{\mkern-2mu 2}\nonumber\\ \noalign{\vskip2pt} &=&2C_{\{{\bm r}_j\}}2\kappa\langle\hat a^\dag\hat a\rangle_{\rm REC}, \end{eqnarray} \end{subequations} with ratio \begin{eqnarray} \frac{R_{\rm side}}{R_{\rm forwards}}=2C_{\{{\bm r}_j\}}=\frac{2N_{\rm eff}^{\{{\bm r}_j\}} g_{\rm max}^2}{\kappa\gamma}\sim\frac{2\bar N_{\rm eff}g_{\rm max}^2}{\kappa\gamma}. \label{eqn:scattering_rate_ratio} \end{eqnarray} The weak-field limit [Eq.~(\ref{eqn:weak_field_limit1})] requires that the {\it greater\/} of the two rates be much smaller than $\frac12(\kappa+\gamma/2)$; it is not necessarily sufficient that the forwards scattering rate be low. The side scattering (spontaneous emission) rate is larger than the forwards scattering rate in both of the experiments being considered---larger by a factor of $70$--$80$.
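The algebraic cancellation behind the ratio $R_{\rm side}/R_{\rm forwards}=2C_{\{{\bm r}_j\}}$ can be verified numerically for any fixed configuration. In the sketch below the couplings, rates, and drive are arbitrary placeholders (in units of $\kappa$), not the experimental parameters:

```python
import random

# Numerical check of R_side / R_forwards = 2C for one fixed configuration.
# Couplings, rates, and drive are arbitrary placeholders (units of kappa).
random.seed(0)
kappa, gamma, E = 1.0, 2.0, 1e-3
g = [0.5 * random.random() for _ in range(50)]   # couplings g(r_k)

C = sum(gk**2 for gk in g) / (kappa * gamma)     # cooperativity C_{r_j}
n_cav = (E / kappa / (1 + 2 * C))**2             # weak-field <a^dag a>
R_forwards = 2 * kappa * n_cav
R_side = gamma * sum((gk / (gamma / 2) * (E / kappa) / (1 + 2 * C))**2
                     for gk in g)

print(f"R_side/R_forwards = {R_side / R_forwards:.6f},  2C = {2 * C:.6f}")
```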
Thus, from Eqs.~(\ref{eqn:scattering_rate_forwards}) and (\ref{eqn:scattering_rate_ratio}), the constraint on intracavity photon number may be written as \begin{equation} \langle\hat a^\dag\hat a\rangle\ll\frac{1+\gamma/2\kappa}{8\bar N_{\rm eff} g^2_{\rm max}/\kappa\gamma}, \label{eqn:weak_field_limit2} \end{equation} where, from Table \ref{tab:parameters}, the right-hand side evaluates as $1.2\times10^{-2}$ for Parameter Set 1 and $4.7\times10^{-3}$ for Parameter Set 2, while the intracavity photon numbers inferred from the experimental count rates are $3.8\times10^{-2}$ \cite{Rempe91} and $7.6\times10^{-3}$ \cite{Foster00a}. It seems that neither experiment satisfies condition (\ref{eqn:weak_field_limit2}). As an important final step we should therefore relax the weak-driving-field assumption (photon number $\sim10^{-7}$--$10^{-6}$ in the simulations) and assess what effect this has on the data fits; can the simulations fit the inferred intracavity photon numbers as well? To address this question we extended our simulations to a three-quanta truncation of the Hilbert space with cavity-mode cut-off changed from $F=0.01$ to $F=0.1$. With the changed cut-off the typical number of atoms in the interaction volume is halved: $N(t)\sim180$--$220$ atoms for Parameter Set 1 and $N(t)\sim150$--$170$ atoms for Parameter Set 2, from which the numbers of state amplitudes (including three-quanta states) increase to $1,300,000$ and $700,000$, respectively. The new cut-off introduces a small error in $\bar N_{\rm eff}$, hence in the vacuum Rabi frequency, but the error is no larger than one or two percent. At this point an additional approximation must be made. At the excitation levels of the experiments, even a three-quanta truncation is not entirely adequate. Clumps of three or more side-scattering quantum jumps can occur, and these are inaccurately described in a three-quanta basis. 
In an attempt to minimize the error, we artificially restrict (through a veto) the number of quantum jumps permitted within some prescribed interval of time. The accepted number was set at two and the time interval to $1\kappa^{-1}$ for Parameter Set~1 and $3\kappa^{-1}$ for Parameter Set 2 (the correlation time measured in cavity lifetimes is longer for Parameter Set 2). With these settings approximately 10\% of the side-scattering jumps were neglected at the highest excitation levels considered. The results of our three-quanta simulations appear in Fig.~\ref{fig:fig13}; they use the optimal atomic beam tilts of Fig.~\ref{fig:fig10}. Figure \ref{fig:fig13}(a) compares the simulation with the data of Rempe {\it et al.}~\cite{Rempe91} at an intracavity photon number that is approximately six times smaller than what we estimate for the experiment (a more realistic simulation requires a higher level of truncation and is impossible for us to handle numerically). The overall fit in Fig.~\ref{fig:fig13} is as good as that in Fig.~\ref{fig:fig10}, with a slight improvement in the relative depths of the three central minima. A small systematic disagreement does remain, however. We suspect that the atomic beam tilt used is actually a little large, while the contribution to the decoherence of the vacuum Rabi oscillation from spontaneous emission should be somewhat more. We are satisfied, nevertheless, that the data of Rempe {\it et al.\/}~\cite{Rempe91} are adequately explained by our model. 
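The jump veto introduced above amounts to a few lines of bookkeeping. The sketch below is our own paraphrase of the rule (accept at most two jumps within a trailing window), not the production simulation code; times are in units of $\kappa^{-1}$:

```python
from collections import deque

# Sketch of the jump-veto rule described in the text (a paraphrase, not the
# production code): a side-scattering jump at time t is accepted only if
# fewer than `max_jumps` have already been accepted within the trailing
# `window`; otherwise it is vetoed (neglected).
def make_jump_veto(max_jumps=2, window=1.0):
    recent = deque()                 # times of recently accepted jumps
    def accept(t):
        while recent and t - recent[0] > window:
            recent.popleft()         # forget jumps outside the window
        if len(recent) >= max_jumps:
            return False             # veto: too many jumps in the window
        recent.append(t)
        return True
    return accept

accept = make_jump_veto(max_jumps=2, window=1.0)
print([accept(t) for t in (0.0, 0.3, 0.6, 1.5, 1.6, 1.7)])
# → [True, True, False, True, True, False]
```

The fraction of vetoed jumps is then a direct diagnostic of the truncation error, which is how the approximately 10\% figure quoted above would be monitored.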
\begin{figure}[t] \vskip0.25cm \begin{center} \begin{tabular}{c} \includegraphics[height=6cm]{fig13.eps} \end{tabular} \end{center} \vskip-0.5cm \caption{ Second-order correlation function from full quantum trajectory simulations with a three-quanta truncation and atomic beam tilts as in Fig.~\ref{fig:fig10}: (a) Parameter Set 1, mean intracavity photon number $\langle a^{\dag} a\rangle=6.7\times 10^{-3}$; (b) Parameter Set 2, mean intracavity photon numbers $\langle a^{\dag} a\rangle=2.2\times 10^{-4}$, $5.7\times 10^{-4}$, $1.1\times 10^{-3}$, and $1.7\times 10^{-3}$ (thickest curve to thinnest curve). Averages of 20,000 samples were taken with a cavity-mode cut-off $F=0.1$. Shot noise error bars are added to the data taken from Ref.~\cite{Rempe91}. \label{fig:fig13} } \end{figure} Results for the experiment of Foster {\it et al.\/}~\cite{Foster00a} lead in a rather different direction. They are displayed in Fig.~\ref{fig:fig13}(b), where four different intracavity photon numbers are considered. The lowest, $\langle\hat a^\dagger\hat a\rangle=2.2\times10^{-4}$, reproduces the weak-field result of Fig.~\ref{fig:fig10}(b). As the photon number is increased, the fit becomes progressively worse. Even at the very low value of $5.7\times10^{-4}$ intracavity photons, spontaneous emission raises the correlation function for zero delay by a noticeable amount. Then we obtain $g^{(2)}(0)>1$ at the largest photon number considered. Somewhat surprisingly, even this photon number, $\langle\hat a^\dagger\hat a\rangle=1.7\times10^{-3}$, is smaller than that estimated for the experiment---smaller by a factor of five. Our simulations therefore disagree significantly with the measurements, despite the near-perfect fit of Fig.~\ref{fig:fig10}(b). The simplest resolution would be for the estimated photon number to be too high.
A reduction by more than an order of magnitude is needed, however, implying an unlikely error, considering the relatively straightforward method of inference from photon counting rates. This anomaly, for the present, remains unresolved. \section{Conclusions} \label{sec:conclusions} Spatial variation of the dipole coupling strength has for many years been a particular difficulty for cavity QED at optical frequencies. The small spatial scale set by the optical wavelength makes any approach to a resolution a formidable challenge. There has nevertheless been progress made with cooled and trapped atoms \cite{Hood00,Pinkse00,Boca04,Maunz05,Birnbaum05,Hennrich05}, and in semiconductor systems \cite{Yoshie04,Reithmaier04,Peter05} where the participating `atoms' are fixed. The earliest demonstrations of strong coupling at optical frequencies employed standing-wave cavities and thermal atomic beams, where control over spatial degrees of freedom is limited to the alignment of the atomic beam. Of particular note are the measurements of photon antibunching in forwards scattering \cite{Rempe91,Mielke98,Foster00a}. They provide a definitive demonstration of strong coupling at the one-atom level; although many atoms might couple to the cavity mode at any time, a significant photon antibunching effect occurs only when individual atoms are strongly coupled. Spatial effects pose difficulties of a theoretical nature as well. Models that ignore them can point the direction for experiments, but fail, ultimately, to account for experimental results. In this paper we have addressed a long-standing disagreement of this kind---disagreement between the theory of photon antibunching in forwards scattering for stationary atoms in a cavity \cite{Carmichael85,Rice88,Carmichael91,Brecha99,Rempe91} and the aforementioned experiments \cite{Rempe91,Mielke98,Foster00a}. 
{\it Ab initio\/} quantum trajectory simulations of the experiments have been carried out, including a Monte-Carlo simulation of the atomic beam. Importantly, we allow for a misalignment of the atomic beam, since this was recognized as a critical issue in Ref.~\cite{Foster00a}. We conclude that atomic beam misalignment is, indeed, the most likely reason for the degradation of the measured photon antibunching effect from predicted results. Working first with a two-quanta truncation, suitable for the weak-field limit, data sets measured by Rempe {\it et al.\/}~\cite{Rempe91} and Foster {\it et al.\/}~\cite{Foster00a} were fitted best by atomic beam tilts from perpendicular to the cavity axis of $9.7\mkern2mu{\rm mrad}$ and $9.55\mkern2mu{\rm mrad}$, respectively. Atomic motion is recognized as a source of decorrelation omitted from the model used to fit the measurements in Ref.~\cite{Rempe91}. We found that the mechanism is more complex than suggested there, however. An atomic beam tilt of sufficient size results in a nonadiabatic response of the intracavity photon number to the inevitable density fluctuations of the beam. Thus classical noise is written onto the forwards-scattered photon flux, obscuring the antibunched quantum fluctuations. The parameters of Ref.~\cite{Rempe91} are particularly unfortunate in this regard, since the nonadiabatic response excites a {\it bunched\/} vacuum Rabi oscillation, which all but cancels out the antibunched oscillation one aims to measure. Although both of the experiments modeled operate at relatively low forwards scattering rates, neither is strictly in the weak-field limit. We have therefore extended our simulations---subject to some numerical constraints---to assess the effects of spontaneous emission. The fit to the Rempe {\it et al.} data~\cite{Rempe91} was slightly improved. 
We noted that the optimum fit might plausibly be obtained by adopting a marginally smaller atomic beam tilt and allowing for greater decorrelation from spontaneous emission, though a more efficient numerical method would be required to verify this possibility. The fit to the Foster {\it et al.} data~\cite{Foster00a} was highly sensitive to spontaneous emission. Even for an intracavity photon number five times smaller than the estimate for the experiment, a large disagreement with the measurement appeared. No explanation of the anomaly has been found. We have shown that cavity QED experiments can call for elaborate and numerically intensive modeling before a full understanding, at the quantitative level, is reached. Using quantum trajectory methods, we have significantly increased the scope for realistic modeling of cavity QED with atomic beams. While we have shown that atomic beam misalignment has significantly degraded the measurements in an important set of experiments in the field, this observation leads equally to a positive conclusion: potentially, nonclassical photon correlations in cavity QED can be observed at a level at least ten times higher than so far achieved. \section*{Acknowledgements} This work was supported by the NSF under Grant No.\ PHY-0099576 and by the Marsden Fund of the RSNZ.