\section{Introduction} Let $G$ be a simple undirected graph with the \textit{vertex set} $V(G)$ and the \textit{edge set} $E(G)$. A vertex of degree one is called a \textit{pendant vertex}. The distance between the vertices $u$ and $v$ in a graph $G$ is denoted by $d_G(u,v)$. A cycle $C$ is called \textit{chordless} if $C$ has no \textit{cycle chord} (that is, an edge not in $E(C)$ whose endpoints both lie on $C$). The \textit{induced subgraph} on a vertex set $S$ is denoted by $\langle S\rangle$. A path that starts at $v$ and ends at $u$ is denoted by $\stackrel\frown{v u}$. A \textit{traceable} graph is a graph that possesses a Hamiltonian path. In a graph $G$, we say that a cycle $C$ is \textit{formed by the path} $Q$ if $ | E(C) \setminus E(Q) | = 1 $; in particular, every vertex of $C$ belongs to $V(Q)$. In 2011 the following conjecture was proposed: \begin{conjecture}(Hoffmann-Ostenhof \cite{hoffman}) Let $G$ be a connected cubic graph. Then $G$ has a decomposition into a spanning tree, a matching and a family of cycles. \end{conjecture} Conjecture \theconjecture$\,$ also appears as Problem 516 in \cite{cameron}. A few partial results are known for Conjecture \theconjecture. Kostochka \cite{kostocha} noticed that the Petersen graph, the prisms over cycles, and many other graphs admit a decomposition of the kind desired in Conjecture \theconjecture. Ozeki and Ye \cite{ozeki} proved that the conjecture holds for 3-connected cubic plane graphs. Furthermore, it was proved by Bachstein \cite{bachstein} that Conjecture \theconjecture$\,$ is true for every 3-connected cubic graph embedded in the torus or the Klein bottle. Akbari, Jensen and Siggers \cite[Theorem 9]{akbari} showed that Conjecture \theconjecture$\,$ is true for Hamiltonian cubic graphs. In this paper, we show that Conjecture \theconjecture$\,$ holds for traceable cubic graphs. \section{Results} Before proving the main result, we need the following lemma. \begin{lemma} \label{lemma:1} Let $G$ be a cubic graph.
Suppose that $V(G)$ can be partitioned into a tree $T$ and finitely many cycles such that there is no edge between any pair of cycles (not necessarily distinct cycles), and every pendant vertex of $T$ is adjacent to at least one vertex of a cycle. Then, Conjecture \theconjecture$\,$ holds for $G$. \end{lemma} \begin{proof} By assumption, every vertex of each cycle in the partition is adjacent to exactly one vertex of $T$. Denote by $Q$ the set of all edges with one endpoint in a cycle and the other endpoint in $T$. Clearly, the subgraph with edge set $E(T) \cup Q$ is a spanning tree of $G$; call it $T'$. Note that every edge between a pendant vertex of $T$ and the union of cycles in the partition is contained in $T'$. Thus, every pendant vertex of $T'$ is contained in a cycle of the partition. Now, consider the graph $H = G \setminus E(T')$. For every $v \in V(T)$, $d_H(v) \leq 1$, so the edges of $H$ not lying on the cycles of the partition form a matching. Hence the spanning tree $T'$, this matching and the cycles of the partition form the desired decomposition, and Conjecture \theconjecture$\,$ holds for $G$. \vspace{1em} \end{proof} \noindent\textbf{Remark 1.} \label{remark:1} Let $C$ be a cycle formed by the path $Q$. Then clearly there exists a chordless cycle formed by $Q$. Now, we are in a position to prove the main result. \begin{theorem} Conjecture \theconjecture$\,$ holds for traceable cubic graphs. \end{theorem} \begin{proof} Let $G$ be a traceable cubic graph and $P : v_1, \dots, v_n$ be a Hamiltonian path in $G$. By \cite[Theorem 9]{akbari}, Conjecture \theconjecture$\,$ holds if $v_1 v_n \in E(G)$. Thus we can assume that $v_1 v_n \notin E(G)$. Let $v_1 v_j, v_1 v_{j'}, v_i v_n, v_{i'} v_n \in E(G)\setminus E(P)$, where $j' < j < n$ and $1 < i < i'$. Two cases can occur: \begin{enumerate}[leftmargin=0pt,label=] \item \textbf{Case 1.} Assume that $i < j$. Consider the graph in Figure \ref{fig:overlapping}, in which the thick edges denote the path $P$. Call the three paths between $v_j$ and $v_i$, from left to right, $P_1$, $P_2$ and $P_3$, respectively (note that $P_1$ contains the edge $e'$ and $P_3$ contains the edge $e$).
\begin{figure}[H] \begin{center} \includegraphics[width=40mm]{engImages/overlapping.pdf} \caption{Paths $P_1$, $P_2$ and $P_3$} \label{fig:overlapping} \end{center} \end{figure} If $P_2$ has order $2$, then $G$ is Hamiltonian and so by \cite[Theorem 9]{akbari} Conjecture \theconjecture$\,$ holds. Thus we can assume that $P_1$, $P_2$ and $P_3$ have order at least $3$. Now, consider the following subcases:\\ \begin{enumerate}[leftmargin=0pt,label=] \label{case:1} \item \textbf{Subcase 1.} There is no edge between $V(P_r)$ and $V(P_s)$ for $1 \leq r < s \leq 3$. Since every vertex of $P_i$ has degree 3 for every $i$, by \hyperref[remark:1]{Remark 1}$\,$ there are two chordless cycles $C_1$ and $C_2$ formed by $P_1$ and $P_2$, respectively. Define a tree $T$ with the edge set $$ E\Big(\langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big) \rangle\Big) \bigcap \big(\bigcup_{i=1}^3 E(P_i)\big).$$ Now, apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition $\{T, C_1, C_2\}$.\\ \item \textbf{Subcase 2.} \label{case:edge} There exists at least one edge between some $P_r$ and $P_s$, $r<s$. Without loss of generality, assume that $r=1$ and $s=2$. Suppose that $ab \in E(G)$, where $a \in V(P_1)$, $b \in V(P_2)$ and $d_{P_1}(v_j, a) + d_{P_2}(v_j, b)$ is minimum. \begin{figure}[H] \begin{center} \includegraphics[width=40mm]{engImages/ab.pdf} \caption{The edge $ab$ between $P_1$ and $P_2$} \label{fig:ab} \end{center} \end{figure} Three cases occur: \\ (a) There is no chordless cycle formed by either of the paths $\stackrel\frown{v_j a}$ or $\stackrel\frown{v_j b}$. Let $C$ be the chordless cycle $\stackrel\frown{v_j a}\stackrel\frown{ b v_j}$. Define $T$ with the edge set $$ E\Big(\langle V(G) \setminus V(C)\rangle\Big) \bigcap \big(\bigcup_{i=1}^3 E(P_i)\big).$$ Now, apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition $\{T,C\}$. \\ (b) There are two chordless cycles, say $C_1$ and $C_2$, respectively formed by the paths $\stackrel\frown{v_j a}$ and $\stackrel\frown{v_j b}$.
Now, consider the partition $C_1$, $C_2$ and the tree induced on the following edges, $$E\Big(\langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big) \rangle\Big) \; \bigcap \; E\Big(\bigcup_{i=1}^3 P_i\Big),$$ and apply \hyperref[lemma:1]{Lemma 1}.\\ (c) Without loss of generality, there exists a chordless cycle formed by the path $\stackrel\frown{v_j a}$ and there is no chordless cycle formed by the path $\stackrel\frown{v_j b}$. First, suppose that for every chordless cycle $C_t$ formed by $\stackrel\frown{v_j a}$, at least one of the vertices of $C_t$ is adjacent to a vertex in $V(G) \setminus V(P_1)$. Denote by $e_t$ one of the edges with one endpoint in $C_t$ and the other endpoint in $V(G) \setminus V(P_1)$. Let $v_j=w_0, w_1, \dots, w_l=a$ be all vertices of the path $\stackrel\frown{v_j a}$ in $P_1$. Choose the shortest path $w_0 w_{i_1} w_{i_2} \dots w_l$ such that $0 < i_1 < i_2 < \dots < l$. Define a tree $T$ whose edge set is the set of thin edges in Figure \ref{fig:deltaCycle}.\\ Call the cycle $w_0 w_{i_1} \dots w_l \stackrel\frown{b w_0}$ by $C'$. Now, by removing $C'$, $q$ vertex-disjoint paths $Q_1, \dots, Q_q$ contained in $\stackrel\frown{v_j a}$ remain. Note that for each $i$ there exists a path of order $2$ in $C'$ which, added to $Q_i$, forms a cycle $C_{t_i}$. Hence there exists an edge $e_{t_i}$ connecting $Q_i$ to $V(G) \setminus V(P_1)$.
Now, we define a tree $T$ whose edge set is, $$\quad\quad\quad \bigg( E\Big(\langle V(G) \setminus V(C') \rangle \Big)\; \bigcap \; \Big(\bigcup_{i=1}^3 E(P_i)\Big) \bigg) \bigcup \Big(\big\{e_{t_i} \mid 1 \leq i \leq q \big\} \Big).$$ Apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition $\{T,C'\}$.\\ \begin{figure}[H] \begin{center} \includegraphics[width=40mm]{engImages/deltaCycle.pdf} \caption{The cycle $C'$ and the tree $T$} \label{fig:deltaCycle} \end{center} \end{figure} Next, assume that there exists a cycle $C_1$ formed by $\stackrel\frown{v_j a}$ such that none of the vertices of $C_1$ is adjacent to $V(G) \setminus V(P_1)$. Choose the smallest cycle with this property; obviously, this cycle is chordless. Now, three cases can be considered:\\ \begin{enumerate}[leftmargin=5pt,label=(\roman*)] \item There exists a cycle $C_2$ formed by $P_2$ or $P_3$. Define the partition $C_1$, $C_2$ and a tree with the following edge set, $$E\Big(\langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big)\rangle \Big) \bigcap \Big( \bigcup_{i=1}^3 E(P_i) \Big),$$ and apply \hyperref[lemma:1]{Lemma 1}.\\ \item There is no chordless cycle formed by $P_2$ or by $P_3$, and there is at least one edge between $V(P_2)$ and $V(P_3)$. Let $ab \in E(G)$ with $a \in V(P_2)$ and $b \in V(P_3)$ such that $d_{P_2}(v_j, a) + d_{P_3}(v_j,b)$ is minimum. Notice that the cycle $\stackrel\frown{v_j a} \stackrel\frown{b v_j}$ is chordless; call this cycle $C_2$. Now, define the partition $C_2$ and a tree with the following edge set, $$E\Big(\langle V(G) \setminus V(C_2)\rangle \Big) \bigcap \Big( \bigcup_{i=1}^3 E(P_i) \Big),$$ and apply \hyperref[lemma:1]{Lemma 1}.\\ \item There is no chordless cycle formed by $P_2$ or by $P_3$, and there is no edge between $V(P_2)$ and $V(P_3)$. Let $C_2$ be the cycle consisting of the two paths $P_2$ and $P_3$.
Define the partition $C_2$ and a tree with the following edge set, $$E\Big(\langle V(G) \setminus V(C_2)\rangle \Big) \bigcap \Big( \bigcup_{i=1}^3 E(P_i) \Big),$$ and apply \hyperref[lemma:1]{Lemma 1}. \end{enumerate} \end{enumerate} \vspace{5mm} \item \textbf{Case 2.} \label{case:2} Assume that $j < i$ for all Hamiltonian paths. Among all Hamiltonian paths, consider one for which $i'-j'$ is maximum. Now, three cases can be considered:\\ \begin{enumerate}[leftmargin=0pt,label=] \item \textbf{Subcase 1.} There are no indices $s < j'$ and $t > i'$ such that $v_s v_t \in E(G)$. By \hyperref[remark:1]{Remark 1} $\,$ there are two chordless cycles $C_1$ and $C_2$, respectively formed by the paths $\stackrel\frown{v_1 v_{j'}}$ and $\stackrel\frown{v_{i'} v_n}$. By assumption there is no edge $xy$, where $x \in V(C_1)$ and $y \in V(C_2)$. Define a tree $T$ with the edge set: $$ E\Big(\langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big) \rangle \Big) \bigcap \Big( E(P) \cup \{v_{i'}v_n, v_{j'}v_1\} \Big).$$ Now, apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition $\{T, C_1, C_2\}$.\\ \item \textbf{Subcase 2.} \label{subcase:22} There are at least four indices $s, s' < j$ and $t, t' > i$ such that $v_s v_t, v_{s'} v_{t'} \in E(G)$. Choose four indices $g, h < j$ and $e, f > i$ such that $v_h v_e, v_g v_f \in E(G)$ and $|g-h| + |e-f|$ is minimum. \begin{figure}[H] \begin{center} \includegraphics[width=90mm]{engImages/case2-subcase2.pdf} \caption{Two edges $v_h v_e$ and $v_g v_f$} \label{fig:non-overlapping} \end{center} \end{figure} Three cases can be considered:\\ \begin{enumerate}[leftmargin=0pt,label=(\alph*)] \item There is no chordless cycle formed by $\stackrel\frown{v_g v_h}$ or by $\stackrel\frown{v_e v_f}$. Consider the cycle $\stackrel\frown{v_g v_h} \stackrel\frown{v_e v_f}v_g$ and call it $C$.
Now, define a tree $T$ with the edge set, $$\,\,\,E\Big(\langle V(G) \setminus V(C)\rangle \Big) \bigcap \Big( E(P) \cup \{v_1v_{j}, v_{i}v_n\} \Big),$$ and apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition $\{T, C\}$.\\ \item Without loss of generality, there exists a chordless cycle formed by $\stackrel\frown{v_e v_f}$ and there is no chordless cycle formed by the path $\stackrel\frown{v_g v_h}$. First suppose that there is a chordless cycle $C_1$ formed by $\stackrel\frown{v_e v_f}$ such that there is no edge between $V(C_1)$ and $\{v_1, \dots, v_j\}$. By \hyperref[remark:1]{Remark 1} $,$ there exists a chordless cycle $C_2$ formed by $\stackrel\frown{v_1 v_j}$. By assumption there is no edge between $V(C_1)$ and $V(C_2)$. Now, define a tree $T$ with the edge set, $$\quad\quad\quad\quad E\Big(\langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big)\rangle \Big) \bigcap \Big( E(P) \cup \{v_1v_{j}, v_{i}v_n\} \Big),$$ and apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition $\{T, C_1, C_2\}$. $\;$ Next assume that for every cycle $C_r$ formed by $\stackrel\frown{v_e v_f}$, there are two vertices $x_r \in V(C_r)$ and $y_r \in \{v_1, \dots, v_j\}$ such that $x_r y_r \in E(G)$. Let $v_e=w_0, w_1, \dots, w_l=v_f$ be all vertices of the path $\stackrel\frown{v_e v_f}$ in $P$. Choose the shortest path $w_0 w_{i_1} w_{i_2} \dots w_l$ such that $0 < i_1 < i_2 < \dots < l$. Consider the cycle $w_0 w_{i_1} \dots w_l \stackrel\frown{v_g v_h} w_0$ and call it $C$. Now, by removing $C$, $q$ vertex-disjoint paths $Q_1, \dots, Q_q$ contained in $\stackrel\frown{v_e v_f}$ remain. Note that for each $i$ there exists a path of order $2$ in $C$ which, added to $Q_i$, forms a cycle $C_{r_i}$. Hence there exists an edge $x_{r_i} y_{r_i}$ connecting $Q_i$ to $V(G) \setminus V(\stackrel\frown{v_e v_f})$.
We define a tree $T$ whose edge set is, $$\quad\quad\quad\quad\quad\quad E\Big(\langle V(G) \setminus V(C)\rangle \Big) \bigcap \Big( E(P) \cup \{v_1v_{j}, v_{i}v_n\} \cup \big\{x_{r_i} y_{r_i} \mid 1 \leq i \leq q\big\} \Big),$$ then apply \hyperref[lemma:1]{Lemma 1} $\,$ on the partition $\{T, C\}$.\\ \begin{figure}[H] \begin{center} \includegraphics[width=90mm]{engImages/deltaNonOverlapping.pdf} \caption{The tree $T$ and the shortest path $w_0 w_{i_1}\dots w_l$} \label{fig:delta-non-overlapping} \end{center} \end{figure} \item There are at least two chordless cycles, say $C_1$ and $C_2$, formed by the paths $\stackrel\frown{v_g v_h}$ and $\stackrel\frown{v_e v_f}$, respectively. Since $|g-h| + |e-f|$ is minimum, there is no edge $xy \in E(G)$ with $x \in V(C_1)$ and $y \in V(C_2)$. Now, define a tree $T$ with the edge set, $$\quad\quad\quad\quad E\Big( \langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big) \rangle \Big) \bigcap \Big( E(P) \cup \{v_1 v_{j}, v_{i}v_n\} \Big),$$ and apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition $\{T, C_1, C_2\}$.\\ \end{enumerate} \item \textbf{Subcase 3.} There exist exactly two indices $s,t$ with $s < j' < i' < t$ such that $v_s v_t \in E(G)$, and there are no two other indices $s', t'$ such that $s' < j < i < t'$ and $v_{s'} v_{t'} \in E(G)$. We can assume that there is no cycle formed by $\stackrel\frown{v_{s+1} v_j}$ or $\stackrel\frown{v_i v_{t-1}}$. To see this, by symmetry, suppose that there is a cycle $C$ formed by $\stackrel\frown{v_{s+1} v_j}$. By \hyperref[remark:1]{Remark 1} $\,$ there exist chordless cycles $C_1$ formed by $\stackrel\frown{v_{s+1} v_j}$ and $C_2$ formed by $\stackrel\frown{v_{i} v_n}$. By assumption, $v_s v_t$ is the only edge with $s < j$ and $t > i$. Therefore, there is no edge between $V(C_1)$ and $V(C_2)$.
Now, let $T$ be a tree defined by the edge set, $$ E\Big(\langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big)\rangle \Big) \bigcap \Big( E(P) \cup \{v_1v_{j}, v_{i}v_n\} \Big),$$ and apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition \{$T$, $C_1$, $C_2$\}.\\ $\quad$Furthermore, we can also assume that either $s \neq j'-1$ or $t \neq i'+1$; otherwise we have the Hamiltonian cycle $\stackrel\frown{v_1 v_s} \stackrel\frown{v_t v_n} \stackrel\frown{v_{i'} v_{j'}} v_1$ and by \cite[Theorem 9]{akbari} Conjecture \theconjecture$\,$ holds. $\quad$By symmetry, suppose that $s \neq j'-1$. Let $v_k$ be the vertex adjacent to $v_{j'-1}$ with $k \notin \{j'-2, j'\}$. It can be shown that $k > j'-1$, since otherwise, by considering the Hamiltonian path $P': \; \stackrel\frown{ v_{k+1} v_{j'-1}}\stackrel\frown{v_k v_1} \stackrel\frown{v_{j'} v_n}$, the new $i'-j'$ is greater than the old one, contradicting our assumption about $P$ in \hyperref[case:2]{Case 2}. $\quad$We know that $j' < k < i$. Moreover, since $\stackrel\frown{v_{s+1} v_j}$ forms no cycle, the case $j' < k \le j$ cannot occur; so $j < k < i$. Consider two cycles $C_1$ and $C_2$, respectively with the vertices $v_1 \stackrel\frown{v_{j'} v_{j}} v_1$ and $v_n \stackrel\frown{v_{i'} v_{i}} v_n$. The cycles $C_1$ and $C_2$ are chordless, since otherwise there would exist cycles formed by the paths $\stackrel\frown{v_{s+1} v_j}$ or $\stackrel\frown{v_i v_{t-1}}$. Now, define a tree $T$ with the edge set $$ E\Big(\langle V(G) \setminus \big(V(C_1) \cup V(C_2)\big)\rangle \Big) \bigcap \Big( E(P) \cup \{v_s v_t, v_k v_{j'-1}\} \Big),$$ and apply \hyperref[lemma:1]{Lemma 1} $\,$for the partition \{$T$, $C_1$, $C_2$\}. \end{enumerate} \end{enumerate} \end{proof} \noindent\textbf{Remark 2.} \label{remark:2} Indeed, in the proof of the previous theorem we showed a stronger result: for every traceable cubic graph there is a decomposition with at most two cycles.
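The decomposition in the conjecture is easy to verify mechanically on small instances. Below is a minimal Python sketch (not part of the paper; the example graph $K_4$ and all function names are illustrative) that checks whether a proposed partition of the edge set of a cubic graph into a spanning tree, a matching and a family of cycles is valid.

```python
from collections import Counter
from itertools import chain

def is_spanning_tree(n, edges):
    # A spanning tree on n vertices has n - 1 edges and no cycle.
    if len(edges) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # adding this edge would close a cycle
        parent[ru] = rv
    return True

def is_matching(edges):
    # No two edges of a matching share an endpoint.
    ends = list(chain.from_iterable(edges))
    return len(ends) == len(set(ends))

def is_cycle(edges):
    # Every vertex has degree exactly 2 and the edges form one component.
    deg, adj = Counter(), {}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    if any(d != 2 for d in deg.values()):
        return False
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

def check_decomposition(n, graph_edges, tree, matching, cycles):
    parts = [tree, matching] + list(cycles)
    used = Counter(frozenset(e) for p in parts for e in p)
    if used != Counter(frozenset(e) for e in graph_edges):
        return False  # the parts must partition E(G) exactly
    return (is_spanning_tree(n, tree) and is_matching(matching)
            and all(is_cycle(c) for c in cycles))

# K4 is cubic and Hamiltonian: the star at vertex 0 is a spanning tree,
# the remaining triangle 1-2-3 is a cycle, and the matching is empty.
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(check_decomposition(4, K4, tree=[(0, 1), (0, 2), (0, 3)],
                          matching=[], cycles=[[(1, 2), (1, 3), (2, 3)]]))
```

The checker also illustrates Remark 2: for this example a single cycle suffices.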
\section{Principle of nano strain-amplifier} \begin{figure*}[t!] \centering \includegraphics[width=5.4in]{Fig1} \vspace{-0.5em} \caption{Schematic sketches of nanowire strain sensors. (a)(b) Conventional non-released and released NW structures; (c)(d) The proposed nano strain-amplifier and its simplified physical model.} \label{fig:fig1} \vspace{-1em} \end{figure*} Figures \ref{fig:fig1}(a) and \ref{fig:fig1}(b) show the concept of the conventional structures of piezoresistive sensors. The piezoresistive elements are either released from, or kept on, the substrate. The sensitivity ($S$) of the sensors is defined as the ratio of the relative resistance change ($\Delta R/R$) of the sensing element to the strain applied to the substrate ($\varepsilon_{sub}$): \begin{equation} S = (\Delta R/R)/\varepsilon_{sub} \label{eq:sensitivity} \end{equation} In addition, the relative resistance change $\Delta R/R$ can be calculated from the gauge factor ($GF$) of the material used to make the piezoresistive elements: $\Delta R/R = GF \varepsilon_{ind}$, where $\varepsilon_{ind}$ is the strain induced into the piezoresistor. In most conventional strain gauges, as shown in Fig. \ref{fig:fig1}(a,b), the thickness of the sensing layer is typically below a few hundred nanometers, which is much smaller than that of the substrate. Therefore, the strain induced into the piezoresistive elements is approximately the same as that of the substrate ($\varepsilon_{ind} \approx \varepsilon_{sub}$). Consequently, to improve the sensitivity of strain sensors (i.e. to enlarge $\Delta R/R$), electrical approaches which can enlarge the gauge factor ($GF$) are required. Nevertheless, as aforementioned, the existence of a large gauge factor in nanowires, attributed to quantum confinement or surface states, is still considered controversial. It is also evident from Eq.
\ref{eq:sensitivity} that the sensitivity of strain sensors can also be improved using a mechanical approach, which enlarges the strain induced into the piezoresistive element. Figure \ref{fig:fig1}(c) shows our proposed nano strain-amplifier structure, in which the piezoresistive nanowires are locally fabricated at the centre of a released bridge. The key idea of this structure is that, under a certain strain applied to the substrate, a large strain will be concentrated at the locally fabricated SiC nanowires. The working principle of the nano strain-amplifier is similar to that of the well-known dogbone structure, which is widely used to characterize the tensile strength of materials \cite{dogbone1,dogbone2}. That is, when a stress is applied to a dogbone-shaped specimen of a certain material, a crack, if generated, will occur at the middle part of the dogbone. The crack is caused by the large strain concentrated at the narrow central area relative to the wider outer regions. Qualitative and quantitative explanations of the nano strain-amplifier are presented as follows. For the sake of simplicity, the released micro frame and nanowire (single wire or array) of the nano strain-amplifier can be considered as solid springs, Fig. \ref{fig:fig1}(d). The stiffness of these springs is proportional to their width ($w$) and inversely proportional to their length ($l$): $K \propto w/l$. Consequently, the model of the released nanowire and micro frames can be simplified as a series of springs, where the springs with higher stiffness correspond to the micro frame, and the single spring with lower stiffness corresponds to the nanowire. It is well known in classical physics that, for serially connected springs, a larger strain will be concentrated in the low-stiffness spring, while a smaller strain will be induced in the high-stiffness spring \cite{Springbook}. The following analysis quantitatively explains the amplification of the strain.
\begin{figure}[b!] \centering \includegraphics[width=3in]{Fig2} \vspace{-1em} \caption{Finite element analysis of the strain induced into the nanowire array utilizing the nano strain-amplifier.} \label{fig:fig2} \end{figure} When a tensile mechanical strain ($\varepsilon_{sub}$) is applied to the substrate, the released structure will also be elongated. Since the stiffness of the released frame is much smaller than that of the substrate, it is safe to assume that the released structure will follow the elongation of the substrate. The displacement of the released structure $\Delta L$ is: \begin{equation} \Delta L = \Delta L_m + \Delta L_n = L_m \varepsilon_m + L_n \varepsilon_n \label{eq:displacement} \end{equation} where $L_m$, $L_n$ are the lengths; $\Delta L_m$, $\Delta L_n$ are the displacements; and $\varepsilon_m$, $\varepsilon_n$ are the strains induced into the micro spring and nano spring, respectively. The subscripts m and n stand for the micro frames and nanowires, respectively. Furthermore, due to the equilibrium of the stressing force ($F$) along the series of springs, the following relationship is established: $F= K_m\Delta L_m = K_n \Delta L_n$, where $K_m$, $K_n$ are the stiffnesses of the released micro frames and nanowires, respectively. Consequently, the relationship between the displacements of the micro frame (higher stiffness) and the nanowires (lower stiffness) is: \begin{equation} \frac{\Delta L_m}{\Delta L_n}=\frac{K_n}{K_m}=\frac{L_mw_n}{L_nw_m} \label{eq:euili} \end{equation} Substituting Eqn. \ref{eq:euili} into Eqn. \ref{eq:displacement}, the strain induced into the locally fabricated nanowires is: \begin{equation} \varepsilon_n = \frac{\Delta L_n}{L_n} = \frac{1}{1-\frac{w_m-w_n}{w_m}\frac{L_m}{L}}\varepsilon_{sub} \label{eq:strainamp} \end{equation} where $L = L_m + L_n$ is the total length of the released structure. Equation \ref{eq:strainamp} indicates that increasing the ratios $w_m/w_n$ and $L_m/L_n$ significantly amplifies the strain induced into the nanowire relative to the strain applied to the substrate.
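As a quick numerical check of Eqn. \ref{eq:strainamp}, the sketch below (illustrative, not part of the paper) evaluates the amplification gain $\varepsilon_n/\varepsilon_{sub}$ for the dimensions used in the FEA section: an 8 $\mu$m wide frame, three 370 nm wide wires ($w_n$ being their total width), and a 20:1 bridge-to-nanowire length ratio.

```python
def strain_gain(w_m, w_n, L_m, L_n):
    """Gain eps_n / eps_sub from the lumped spring model:
    1 / (1 - ((w_m - w_n) / w_m) * (L_m / L)), with L = L_m + L_n."""
    L = L_m + L_n
    return 1.0 / (1.0 - ((w_m - w_n) / w_m) * (L_m / L))

# Frame width 8 um; array of 3 wires, 370 nm each; length ratio 20:1.
# Consistent units cancel, so widths are in um and lengths are relative.
gain = strain_gain(w_m=8.0, w_n=3 * 0.37, L_m=20.0, L_n=1.0)
print(round(gain, 2))  # about 5.6, close to the reported ~6x (FEA: 5.9x)
```

Setting $w_n = w_m$ (no narrowing) gives a gain of exactly 1, recovering the conventional released structure.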
This model is also applicable to the case of nanowire arrays, in which $w_n$ is the total width of all nanowires in the array. The theoretical model was then verified using finite element analysis (FEA). In the FEA simulation, we compared the strain induced into (i) non-released nanowires, (ii) conventionally released nanowires, and (iii) our nano strain-amplifier structure, using COMSOL Multiphysics \texttrademark. In our nano strain-amplifier structure, the width of the released frame was set to 8 $\mu$m, while the width of each nanowire in the array (3 wires) was set to 370 nm. The nanowire array structure was selected as it can enhance the electrical conductance of the SiC nanowire resistor, which makes the subsequent experimental demonstration easier. The ratio between the length of the nanowires and that of the micro bridge was set to 1:20. With these geometrical dimensions, the strain induced into the nanowire array $\varepsilon_n$ was numerically calculated to be approximately 6 times larger than $\varepsilon_{sub}$, Eqn. \ref{eq:strainamp}. The simulation results show that, for all structures, the elongation of non-released and released nanowires follows that of the substrate. In addition, strain was almost completely transferred into the conventional released and non-released structures. Furthermore, the strain induced into the locally fabricated nanowires was estimated to be 5.9 times larger than that of the substrate, Fig. \ref{fig:fig2}. These results are in good agreement with the theoretical analysis presented above. For a nanowire array with an average width of 470 nm, the amplified gain of strain was found to be 4.5. Based on the theoretical analysis, we conducted the following experiments to demonstrate the high sensitivity of SiC nanowire strain sensors using the nano strain-amplifier.
A thin 3C-SiC film with a thickness of 300 nm was epitaxially grown on a 150 mm diameter Si wafer using low pressure chemical vapour deposition \cite{SiC_growth}. The film was \emph{in situ} doped using Al dopants. The carrier concentration of the p-type 3C-SiC was found to be $5 \times 10^{18}$ cm$^{-3}$, using a hot probe technique \cite{philip}. The details of the characteristics of the grown film can be found elsewhere \cite{Phan_JMC}. Subsequently, I-shaped p-type SiC resistors with aluminum electrodes deposited on the surface were patterned using inductively coupled plasma (ICP) etching. As the piezoresistance of p-type 3C-SiC depends on crystallographic orientation, all SiC resistors of the present work were aligned along the [110] direction to maximize the piezoresistive effect. Next, the micro scale SiC resistors were released from the Si substrate using dry etching (XeF$_2$). Finally, SiC nanowire arrays were formed at the centre of the released bridge using focused ion beam (FIB). Two types of nanowire array were fabricated, with three nanowires in each array. The average widths of the nanowires in the two types were 380 nm and 470 nm, respectively. Figure \ref{fig:fig3} shows the SEM images of the fabricated samples, including the conventional released structure, non-released nanowires, and the nano strain-amplifier. \begin{figure}[t!] \centering \includegraphics[width=3in]{Fig3} \caption{SEM images of SiC strain sensors. (a) Released SiC micro bridge used for the subsequent fabrication of the nano strain-amplifier; (b) SEM of a micro SiC resistor where the SiC nanowire array was formed using FIB; (c) SEM of non-released SiC nanowires; (d) SEM of locally fabricated SiC nanowires released from the Si substrate (nano strain-amplifier).} \label{fig:fig3} \vspace{-1em} \end{figure} The current-voltage (I-V) curves of all fabricated samples were characterized using a HP 4145 \texttrademark ~parameter analyzer.
The linear relationship between the applied voltage and the measured current indicated that Al made good Ohmic contact with the highly doped SiC resistors, Fig. \ref{fig:IV}. Additionally, the electrical conductivities of the nanowires and the micro frame, estimated from the I-V curves and the dimensions of the resistors, show almost the same value. This indicates that the FIB process did not cause significant surface damage to the fabricated nanowires. \begin{figure}[b!] \centering \includegraphics[width=3in]{Fig4} \vspace{-1.5em} \caption{Current-voltage curves of the fabricated SiC resistors.} \label{fig:IV} \end{figure} A bending experiment was used to characterize the piezoresistive effect in the micro-sized SiC resistors and the locally fabricated SiC nanowire array. In this experiment, one end of the Si cantilever (with a thickness of 625 $\mu$m and a width of 7 mm) was fixed while the other end was deflected by applying different forces. The distance from the fabricated nanowires to the free end of the Si cantilever was approximately 45 mm. The strain induced into the Si substrate is $\varepsilon_{sub} = Mt/(2EI)$, where $M$ is the applied bending moment; and $t$, $E$ and $I$ are the thickness, Young's modulus and the moment of inertia of the Si cantilever, respectively. The response of the SiC resistance to the applied strain was then measured using a multimeter (Agilent \texttrademark 34401A). \begin{figure}[h!] \centering \includegraphics[width=3in]{Fig5.eps} \vspace{-1.5em} \caption{Experimental results. (a) A comparison between the relative resistance change in the nano strain-amplifiers, non-released nanowires and released micro frames; (b) The repeatability of the SiC nanowire strain sensors utilizing the proposed structure.} \label{fig:DRR} \vspace{-1em} \end{figure} The relative resistance change ($\Delta R/R$) of the micro and nano SiC resistors was plotted against the strain induced into the Si substrate $\varepsilon_{sub}$, Fig. \ref{fig:DRR}(a).
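The substrate strain in the bending experiment follows from the quoted beam formula $\varepsilon_{sub} = Mt/(2EI)$, with $M = Fa$ for a point load $F$ at a distance $a$ from the sensor, and $I = wt^3/12$ for a rectangular cross-section. A small sketch (illustrative; the load value and the Young's modulus of Si, taken here as about 169 GPa, are assumptions not stated in the text):

```python
def substrate_strain(F, a, w, t, E):
    """eps_sub = M*t / (2*E*I), with M = F*a (bending moment at the sensor)
    and I = w*t**3 / 12; this reduces to 6*F*a / (E*w*t**2)."""
    M = F * a             # bending moment at the sensor location, N*m
    I = w * t**3 / 12.0   # second moment of area, m^4
    return M * t / (2.0 * E * I)

# Cantilever from the text: t = 625 um, w = 7 mm, sensor ~45 mm from the
# loaded free end; E of Si assumed ~169 GPa, load F = 0.1 N assumed.
eps = substrate_strain(F=0.1, a=45e-3, w=7e-3, t=625e-6, E=169e9)
print(round(eps * 1e6))  # dimensionless strain expressed in ppm
```

With these assumed values the strain falls within the 0-180 ppm range explored in the experiment.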
For all fabricated samples, the relative resistance change shows a good linear relationship with the applied strain ($\varepsilon_{sub}$). In addition, for the same strain applied to the Si substrate, the resistance change of the SiC nanowires using the nano strain-amplifier was much larger than that of the SiC micro resistor and the conventional non-released SiC nanowires. Furthermore, reducing the width of the SiC nanowires also resulted in an increase in sensitivity. The magnitude of the piezoresistive effect in the nano strain-amplifier, as well as in the conventional structures, was then quantitatively evaluated based on the effective gauge factor ($GF_{eff}$), defined as the ratio of the relative resistance change to the strain applied to the substrate: $GF_{eff} = (\Delta R/R)/\varepsilon_{sub}$. Accordingly, the effective gauge factor of the released micro SiC was found to be 28, while that of the non-released SiC nanowires was 35. From the data shown in Fig. \ref{fig:DRR}, the effective gauge factors of the 380 nm and 470 nm SiC nanowires in the nano strain-amplifier were calculated to be 150 and 124, respectively. Thus, for nanowire arrays with average widths of 380 nm and 470 nm, the sensitivity of the nano strain-amplifier was 5.4 times and 4.6 times larger than that of the bulk SiC, respectively. These results are consistent with the analytical and numerical models presented above. The relative resistance change of the nano strain-amplifier also showed excellent linearity with the applied strain, with a linearity above 99\%. The resistance change of the nano strain-amplifier can also be converted into voltage signals using a Wheatstone bridge, Fig. \ref{fig:DRR}(b). The output voltage of the nano strain-amplifier increased with increasing tensile strain from 0 ppm to 180 ppm, and returned to its initial value when the strain was completely removed, confirming good repeatability over several strain cycles.
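The amplification figures above are simply ratios of effective gauge factors; a minimal sketch of the arithmetic (illustrative; the $\Delta R/R$ reading is a hypothetical example, while the gauge factors are those reported in the text):

```python
def effective_gauge_factor(dRR, eps_sub):
    """GF_eff = (dR/R) / eps_sub, i.e. the slope of the response curve."""
    return dRR / eps_sub

# Hypothetical reading: dR/R of 1.5% at 100 ppm substrate strain,
# corresponding to the ~150 reported for the 380 nm array.
gf_amplifier = effective_gauge_factor(0.015, 100e-6)

# Amplification relative to the released micro resistor (GF_eff = 28):
print(round(gf_amplifier / 28, 1))  # ~5.4, matching the quoted gain
```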
The linearity of the relative resistance change and the repeatability indicate that the proposed structure is promising for strain-sensing applications. In conclusion, this work presents a novel mechanical approach to obtaining highly sensitive piezoresistance in nanowires based on a nano strain-amplifier. The key feature of the nano strain-amplifier lies in the nanowires locally fabricated on a released micro structure. Experimental studies were conducted on SiC nanowires, confirming that, by utilizing our nano strain-amplifier, the sensitivity of SiC nanowires was 5.4 times larger than that of conventional structures. This result indicates that the nano strain-amplifier is an excellent platform for ultra-sensitive strain-sensing applications.
\section{Introduction} The concept of synchronisation is based on the adjustment of rhythms of oscillating systems due to their interaction \cite{pikovsky01}. The synchronisation phenomenon was recognised by Huygens in the 17th century, when he performed experiments to understand it \cite{bennett02}. To date, several kinds of synchronisation among coupled systems have been reported, such as complete \cite{li16}, phase \cite{pereira07,batista10}, lag \cite{huang14}, and collective almost synchronisation \cite{baptista12}. Neuronal synchronous rhythms have been observed in a wide range of studies on cognitive functions \cite{wang10,hutcheon00}. Electroencephalography and magnetoencephalography studies have suggested that neuronal synchronisation in the gamma frequency band plays a functional role for memory in humans \cite{axmacher06,fell11}. Steinmetz et al. \cite{steinmetz00} investigated the synchronous behaviour of pairs of neurons in the secondary somatosensory cortex of monkeys. They found that attention modulates oscillatory neuronal synchronisation in the somatosensory cortex. Moreover, it has been proposed in the literature that there is a relationship between conscious perception and synchronisation of neuronal activity \cite{hipp11}. We study spiking and bursting synchronisation between neurons in a neuronal network model. A spike refers to the action potential generated by a neuron that rapidly rises and falls \cite{lange08}, while bursting refers to a sequence of spikes that is followed by a quiescent time \cite{wu12}. It was demonstrated that spiking synchronisation is relevant in the olfactory bulb \cite{davison01} and is involved in motor cortical functions \cite{riehle97}. The characteristics and mechanisms of bursting synchronisation were studied in cultured cortical neurons by means of a planar electrode array \cite{maeda95}. Jefferys $\&$ Haas discovered synchronised bursting of CA1 hippocampal pyramidal cells \cite{jefferys82}.
There is a wide range of mathematical models used to describe neuronal activity, such as the cellular automaton \cite{viana14}, the Rulkov map \cite{rulkov01}, and differential equations \cite{hodgkin52,hindmarsh84}. One of the simplest and most widely used mathematical models for describing neuronal behaviour is the integrate-and-fire model \cite{lapicque07}, which is governed by a linear differential equation. A more realistic version of it is the adaptive exponential integrate-and-fire (aEIF) model, which we consider in this work as the local dynamics of the neurons in the network. The aEIF is a two-dimensional integrate-and-fire model introduced by Brette $\&$ Gerstner \cite{brette05}. This model has an exponential spike mechanism with an adaptation current. Touboul $\&$ Brette \cite{touboul08} studied the bifurcation diagram of the aEIF. They showed the existence of the Andronov-Hopf bifurcation and saddle-node bifurcations. The aEIF model can generate multiple firing patterns depending on its parameters, and it can fit experimental data from cortical neurons under current stimulation \cite{naud08}. In this work, we focus on the synchronisation phenomenon in a randomly connected network. This kind of network, also called an Erd\"os-R\'enyi network \cite{erdos59}, has nodes in which each pair is connected with a given probability. The random neuronal network was utilised to study oscillations in cortico-thalamic circuits \cite{gelenbe98} and the dynamics of a network with synaptic depression \cite{senn96}. We built a random neuronal network with unidirectional connections that represent chemical synapses. We show that there are clearly separated ranges of parameters that lead to spiking or bursting synchronisation. In addition, we analyse the robustness of the synchronisation to external perturbation. We verify that bursting synchronisation is more robust than spiking synchronisation.
However, bursting synchronisation requires larger chemical synaptic strengths and larger values of the reset parameters to appear than those required for spiking synchronisation. This paper is organised as follows: in Section II we present the adaptive exponential integrate-and-fire model. In Section III, we introduce the neuronal network with random features. In Section IV, we analyse the behaviour of spiking and bursting synchronisation. In the last Section, we draw our conclusions. \section{Adaptive exponential integrate-and-fire} As the local dynamics of the neuronal network, we consider the adaptive exponential integrate-and-fire (aEIF) model that consists of a system of two differential equations \cite{brette05} given by \begin{eqnarray}\label{eqIF} C \frac{d V}{d t} & = & - g_L (V - E_L) + g_L {\Delta}_T \exp \left(\frac{V - V_T}{{\Delta}_T} \right) \nonumber \\ & & +I-w , \nonumber \\ \tau_w \frac{d w}{d t} & = & a (V - E_L) - w, \end{eqnarray} where $V(t)$ is the membrane potential when a current $I(t)$ is injected, $C$ is the membrane capacitance, $g_L$ is the leak conductance, $E_L$ is the resting potential, $\Delta_T$ is the slope factor, $V_T$ is the threshold potential, $w$ is an adaptation variable, $\tau_w$ is the time constant, and $a$ is the level of subthreshold adaptation. If $V(t)$ reaches the threshold $V_{\rm{peak}}$, a reset condition is applied: $V\rightarrow V_r$ and $w\rightarrow w_r=w+b$. In our simulations, we consider $C=200.0$pF, $g_L=12.0$nS, $E_L=-70.0$mV, ${\Delta}_T=2.0$mV, $V_T=-50.0$mV, $I=509.7$pA, $\tau_w=300.0$ms, $a=2.0$nS, and $V_{\rm{peak}}=20.0$mV \cite{naud08}. The firing pattern depends on the reset parameters $V_r$ and $b$. Table \ref{table1} exhibits some values that generate five different firing patterns (Fig. \ref{fig1}). In Fig.
\ref{fig1} we represent each firing pattern with a different colour in the parameter space $b\times V_r$: adaptation in red, tonic spiking in blue, initial bursting in green, regular bursting in yellow, and irregular in black. In Figs. \ref{fig1}a, \ref{fig1}b, and \ref{fig1}c we observe the adaptation, tonic spiking, and initial burst patterns, respectively, due to a step current stimulation. The adaptation pattern has an increasing inter-spike interval during a sustained stimulus, the tonic spiking pattern is the simplest regular discharge of the action potential, and the initial bursting pattern starts with a group of spikes at a frequency larger than the steady-state frequency. The membrane potential evolution with regular bursting is shown in Fig. \ref{fig1}d, while Fig. \ref{fig1}e displays the irregular pattern. \begin{table}[htbp] \caption{Reset parameters.} \centering \begin{tabular}{c c c c c} \hline Firing patterns & Fig. & b (pA) & $V_r$ (mV) & Colour \\ \hline adaptation &\ref{fig1}(a) & 60.0 & -68.0 & red \\ tonic spiking & \ref{fig1}(b) & 5.0 & -65.0 & blue\\ initial burst & \ref{fig1}(c) & 35.0 & -48.8 & green \\ regular bursting & \ref{fig1}(d) & 40.0 & -45.0 & yellow\\ irregular & \ref{fig1}(e) & 41.2 & -47.4 & black \\ \hline \end{tabular} \label{table1} \end{table} \begin{figure}[hbt] \centering \includegraphics[height=7cm,width=10cm]{fig1.eps} \caption{(Colour online) Parameter space for the firing patterns as a function of the reset parameters $V_r$ and $b$. (a) Adaptation in red, (b) tonic spiking in blue, (c) initial bursting in green, (d) regular bursting in yellow, and (e) irregular in black.} \label{fig1} \end{figure} As we are interested in spiking and bursting synchronisation, we separate the parameter space into a region with spike patterns and another with bursting patterns (Fig. \ref{fig2}).
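As a concrete illustration, the single-neuron dynamics above can be integrated with a simple forward-Euler scheme. This is a minimal sketch of our own, not the simulation code used for the results: the time step, the clipping of the exponential argument, and the choice of the tonic-spiking reset values $b=5.0$pA and $V_r=-65.0$mV from Table \ref{table1} are our assumptions; the remaining parameters are those quoted in the text.

```python
import math

# Forward-Euler sketch of the aEIF equations with the quoted parameters
# (units pF, nS, mV, pA, ms are mutually consistent, since
# 1 nS * 1 mV = 1 pA and 1 pA / 1 pF = 1 mV/ms).
C, g_L, E_L = 200.0, 12.0, -70.0      # pF, nS, mV
Delta_T, V_T, I = 2.0, -50.0, 509.7   # mV, mV, pA
tau_w, a, V_peak = 300.0, 2.0, 20.0   # ms, nS, mV
b, V_r = 5.0, -65.0                   # pA, mV: tonic-spiking reset values

def simulate(t_max=1000.0, dt=0.01):
    """Integrate the aEIF model; return the spike times (ms)."""
    V, w, spikes = E_L, 0.0, []
    for k in range(int(t_max / dt)):
        # exponential spike mechanism (argument clipped for numerical safety)
        exp_term = g_L * Delta_T * math.exp(min((V - V_T) / Delta_T, 30.0))
        dV = (-g_L * (V - E_L) + exp_term + I - w) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:               # reset condition
            spikes.append(k * dt)
            V, w = V_r, w + b
    return spikes

spikes = simulate()
print(len(spikes) > 10)  # tonic spiking: a sustained regular spike train
```

Swapping in the other reset values of Table \ref{table1} should reproduce the other firing patterns.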
To identify these two regions of interest, we use the coefficient of variation (CV) of the neuronal inter-spike interval (ISI), which is given by \begin{eqnarray}\label{CV} {\rm CV}=\frac{{\sigma}_{\rm{ISI}}}{\rm{\overline{ISI}}}, \end{eqnarray} where ${\sigma}_{\rm{ISI}}$ is the standard deviation of the ISI and $\rm{\overline{ISI}}$ is its mean \cite{gabbiani98}. Spiking patterns produce $\rm{CV}<0.5$. Parameter regions in which the neurons fire with a spiking pattern are denoted by gray colour in Fig. \ref{fig2}, whereas the black region represents the bursting patterns, which result in $\rm{CV} \geq 0.5$. \begin{figure}[hbt] \centering \includegraphics[height=7cm,width=9cm]{fig2.eps} \caption{Parameter space for the firing patterns as a function of the reset parameters $V_r$ and $b$. Spike pattern in region I ($\rm{CV}<0.5$) and bursting pattern in region II ($\rm{CV}\geq 0.5$) are separated by white circles.} \label{fig2} \end{figure} \section{Spiking or bursting synchronisation} In this work, we constructed a network where the neurons are randomly connected \cite{erdos59}. Our network is given by \begin{eqnarray}\label{eqIFrede} C \frac{d V_i}{d t} & = & - g_L (V_i - E_L) + g_L {\Delta}_T \exp \left(\frac{V_i - V_T}{{\Delta}_T} \right) \nonumber \\ & + & I_i - w_i + g_{\rm{ex}} (V_{\rm{ex}} - V_i) \sum_{j=1}^N A_{ij} s_j + \Gamma_i, \nonumber \\ \tau_w \frac{d w_i}{d t} & = & a_i (V_i - E_L) - w_i, \nonumber \\ \tau_{\rm{ex}} \frac{d s_i}{d t} & = & - s_i, \end{eqnarray} where $V_i$ is the membrane potential of neuron $i$, $g_{\rm{ex}}$ is the synaptic conductance, $V_{\rm{ex}}$ is the synaptic reversal potential, $\tau_{\rm{ex}}$ is the synaptic time constant, $s_i$ is the synaptic weight, $A_{ij}$ is the adjacency matrix, $\Gamma_i$ is the external perturbation, and $a_i$ is randomly distributed in the interval $[1.9,2.1]$. The schematic representation of the neuronal network that we have considered is illustrated in Fig. \ref{fig3}.
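The CV-based classification of spiking versus bursting defined above can be sketched in a few lines; the spike trains below are synthetic examples of our own, not simulation output.

```python
import statistics

# CV = sigma_ISI / mean_ISI: coefficient of variation of the
# inter-spike intervals, with CV < 0.5 read as a spiking pattern
# and CV >= 0.5 as a bursting pattern.

def coefficient_of_variation(spike_times):
    isi = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    return statistics.pstdev(isi) / statistics.mean(isi)

# A perfectly regular (tonic) train has CV = 0 ...
tonic = [10.0 * m for m in range(20)]
print(coefficient_of_variation(tonic) < 0.5)   # -> True
# ... while groups of spikes separated by long quiescent gaps push CV up.
burst = [0, 5, 10, 15, 215, 220, 225, 230, 430, 435, 440, 445]
print(coefficient_of_variation(burst) >= 0.5)  # -> True
```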
Each neuron is randomly linked to other neurons with a probability $p$ by means of directed connections. When $p$ is equal to 1, the neuronal network becomes an all-to-all network. A network with this topology was used by Borges et al. \cite{borges16} to study the effects of spike timing-dependent plasticity on the synchronisation in a Hodgkin-Huxley neuronal network. \begin{figure}[hbt] \centering \includegraphics[height=6cm,width=9cm]{fig3.eps} \caption{Schematic representation of the neuronal network where the neurons are connected according to a probability $p$.} \label{fig3} \end{figure} A useful diagnostic tool to determine synchronous behaviour is the complex phase order parameter defined as \cite{kuramoto03} \begin{equation} z(t)=R(t)\exp({\rm i}\Phi(t))\equiv\frac{1}{N}\sum_{j=1}^{N}\exp({\rm i}\psi_{j}), \end{equation} where $R$ and $\Phi$ are the amplitude and angle of a centroid phase vector, respectively, and the phase is given by \begin{equation} \psi_{j}(t)=2\pi m+2\pi\frac{t-t_{j,m}}{t_{j,m+1}-t_{j,m}}, \end{equation} where $t_{j,m}$ corresponds to the time when spike $m$ ($m=0,1,2,\dots$) of neuron $j$ happens ($t_{j,m}< t < t_{j,m+1}$). We consider a spike to begin when $V_j>-20$mV. The magnitude of the order parameter goes to 1 in a totally synchronised state. To study the neuronal synchronisation of the network, we have calculated the time-average order parameter, which is given by \begin{equation} \overline{R}=\frac{1}{t_{\rm fin}-{t_{\rm ini}}}\sum_{t_{\rm ini}}^{t_{\rm fin}}R(t), \end{equation} where $t_{\rm fin}-t_{\rm ini}$ is the time window for calculating $\bar{R}$. Figs. \ref{fig4}a, \ref{fig4}b, and \ref{fig4}c show the raster plots for $g_{\rm ex}=0.02$nS, $g_{\rm ex}=0.19$nS, and $g_{\rm ex}=0.45$nS, respectively, considering $V_r=-58$mV, $p=0.5$, and $b=70$pA, where the dots correspond to the spiking activities generated by the neurons. For $g_{\rm ex}=0.02$nS (Fig.
\ref{fig4}a) the network displays a desynchronised state, and as a result, the order parameter values are very small (black line in Fig. \ref{fig4}d). Increasing the synaptic conductance to $g_{\rm ex}=0.19$nS, the neuronal network exhibits spike synchronisation (Fig. \ref{fig4}b) and the order parameter values are near unity (red line in Fig. \ref{fig4}d). When the network presents bursting synchronisation (Fig. \ref{fig4}c), the order parameter values vary between $R\approx 1$ and $R\ll 1$ (blue line in Fig. \ref{fig4}d); the values $R\ll 1$ correspond to the times when the neurons are firing. \begin{figure}[hbt] \centering \includegraphics[height=11cm,width=10cm]{fig4.eps} \caption{(Colour online) Raster plot for (a) $g_{\rm ex}=0.02$nS, (b) $g_{\rm ex}=0.19$nS, and (c) $g_{\rm ex}=0.45$nS, considering $V_r = -58$mV, $p=0.5$, and $b=70$pA. In (d) the order parameter is computed for $g_{\rm ex}=0.02$nS (black line), $g_{\rm ex}=0.19$nS (red line), and $g_{\rm ex}=0.45$nS (blue line).} \label{fig4} \end{figure} In Fig. \ref{fig5}a we show ${\bar R}$ as a function of $g_{\rm ex}$ for $p=0.5$, $b=50$pA (black line), $b=60$pA (red line), and $b=70$pA (blue line). The three results exhibit strong synchronous behaviour (${\bar R}>0.9$) for many values of $g_{\rm ex}$ when $g_{\rm ex}\gtrsim 0.4$nS. However, for $g_{\rm ex}\lesssim 0.4$nS, it is possible to see synchronous behaviour only for $b=70$pA in the range $0.15{\rm nS}<g_{\rm ex}<0.25{\rm nS}$. In addition, we calculate the coefficient of variation (CV) to determine the range in $g_{\rm ex}$ where the neurons of the network have spiking or bursting behaviour (Fig. \ref{fig5}b). We consider that for CV$<0.5$ (black dashed line) the neurons exhibit spiking behaviour, while for CV$\geq 0.5$ the neurons present bursting behaviour. We observe that in the range $0.15{\rm nS}<g_{\rm ex}<0.25{\rm nS}$ for $b=70$pA there is spiking synchronisation, and bursting synchronisation for $g_{\rm ex}\gtrsim 0.4$nS.
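The order-parameter diagnostic defined above can be sketched directly from spike times: the phase $\psi_j(t)$ is interpolated linearly between consecutive spikes, and $R(t)$ is the magnitude of the centroid of the unit phase vectors. The two toy spike trains below are our own illustration, not simulation output.

```python
import cmath
import math

def phase(t, spike_times):
    """psi_j(t) = 2*pi*m + 2*pi*(t - t_m) / (t_{m+1} - t_m)."""
    for m in range(len(spike_times) - 1):
        t_m, t_next = spike_times[m], spike_times[m + 1]
        if t_m <= t < t_next:
            return 2 * math.pi * (m + (t - t_m) / (t_next - t_m))
    raise ValueError("t lies outside the recorded spike train")

def order_parameter(t, trains):
    """R(t) = |(1/N) * sum_j exp(i * psi_j(t))|."""
    z = sum(cmath.exp(1j * phase(t, s)) for s in trains) / len(trains)
    return abs(z)

# Two identical trains are fully synchronised: R = 1.
print(round(order_parameter(15.0, [[0, 10, 20, 30], [0, 10, 20, 30]]), 6))  # -> 1.0
# Two anti-phase trains cancel: R = 0.
print(round(order_parameter(12.0, [[0, 10, 20, 30], [5, 15, 25, 35]]), 6))  # -> 0.0
```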
\begin{figure}[hbt] \centering \includegraphics[height=7cm,width=9cm]{fig5.eps} \caption{(Colour online) (a) Time-average order parameter and (b) CV for $V_r=-58$mV, $p=0.5$, $b=50$pA (black line), $b=60$pA (red line), and $b=70$pA (blue line).} \label{fig5} \end{figure} \section{Parameter space of synchronisation} The synchronous behaviour depends on the synaptic conductance and the probability of connections. Fig. \ref{fig6} exhibits the time-averaged order parameter in colour scale as a function of $g_{\rm ex}$ and $p$. We observe a large parameter region where spiking and bursting synchronisation are strong, characterised by ${\bar R}>0.9$. The regions I and II correspond to spiking and bursting patterns, respectively, and these regions are separated by a white line with circles. We obtain the regions by means of the coefficient of variation (CV). There is a transition between region I and region II, where neurons initially synchronised in the spiking regime lose spiking synchronicity and give place to a regime of bursting synchronisation. \begin{figure}[hbt] \centering \includegraphics[height=6cm,width=9cm]{fig6.eps} \caption{(Colour online) $g_{\rm ex} \times p$ for $V_r=-58$mV and $b=70$pA, where the colour bar represents the time-average order parameter. The regions I (spike patterns) and II (bursting patterns) are separated by the white line with circles.} \label{fig6} \end{figure} We investigate the dependence of spiking and bursting synchronisation on the control parameters $b$ and $V_r$. To do that, we use the time-average order parameter and the coefficient of variation. Figure \ref{fig7} shows that the spike-pattern region (region I) decreases when $g_{\rm ex}$ increases. In this way, the region-I parameters with $b<100$pA and $V_r=-49$mV that lead to no synchronous behaviour (Fig. \ref{fig7}a) become parameters that promote synchronised bursting (Figs. \ref{fig7}b and \ref{fig7}c).
However, a large region of desynchronised bursting appears for $g_{\rm ex}=0.25$nS around $V_r=-45$mV and $b>100$pA in region II (Fig. \ref{fig7}b). For $g_{\rm ex}=0.5$nS, we see, in Fig. \ref{fig7}c, three regions of desynchronous behaviour: one in region I for $b<100$pA, another in region II for $b<200$pA, and a third located around the border (white line with circles) between regions I and II for $b>200$pA. \begin{figure}[hbt] \centering \includegraphics[height=12cm,width=7cm]{fig7.eps} \caption{(Colour online) Parameter space $b \times V_r$ for $p=0.5$, $\gamma=0$, (a) $g_{\rm ex}=0.05$nS, (b) $g_{\rm ex}=0.25$nS, and (c) $g_{\rm ex}=0.5$nS, where the colour bar represents the time-average order parameter. The regions I (spike patterns) and II (bursting patterns) are separated by white circles.} \label{fig7} \end{figure} It has been found that external perturbations on neuronal networks not only can induce synchronous behaviour \cite{baptista06,zhang15}, but also can suppress synchronisation \cite{lameu16}. Aiming to study the robustness of the synchronous behaviour to perturbations, we consider an external perturbation $\Gamma_i$ in Eq. (\ref{eqIFrede}). It is applied to each neuron $i$ with an average time interval of about $10$ms and with a constant intensity $\gamma$ during $1$ms. Figure \ref{fig8} shows the plots $g_{\rm ex} \times p$ for $\gamma>0$, where the regions I and II correspond to spiking and bursting patterns, respectively, separated by a white line with circles, and the colour bar indicates the time-average order parameter values. In this figure, we consider $V_r=-58$mV, $b=70$pA, (a) $\gamma=250$pA, (b) $\gamma=500$pA, and (c) $\gamma=1000$pA. For $\gamma=250$pA (Fig. \ref{fig8}a) the perturbation does not suppress spike synchronisation, whereas for $\gamma=500$pA the synchronisation is completely suppressed in region I (Fig. \ref{fig8}b). In Fig.
\ref{fig8}c, we see that, further increasing the constant intensity to $\gamma=1000$pA, the external perturbation also suppresses bursting synchronisation in region II. Therefore, the synchronous behaviour in region II is more robust to perturbations than that in region I, due to the fact that region II lies in a range of high $g_{\rm ex}$ and $p$ values, namely strong coupling and high connectivity. \begin{figure}[hbt] \centering \includegraphics[height=12cm,width=7cm]{fig8.eps} \caption{(Colour online) $g_{\rm ex} \times p$ for $V_r=-58$mV, $b=70$pA, (a) $\gamma=250$pA, (b) $\gamma=500$pA, and (c) $\gamma=1000$pA.} \label{fig8} \end{figure} In order to understand the perturbation effect on the spike and bursting patterns, we consider the same values of $g_{\rm ex}$ and $p$ as in Fig. \ref{fig7}a. Figure \ref{fig9} exhibits the parameter space $b\times V_r$, where $\gamma$ is equal to $500$pA. The external perturbation suppresses synchronisation in region I, whereas we observe synchronisation in region II. The synchronous behaviour in region II can be suppressed if the constant intensity $\gamma$ is increased. Therefore, bursting synchronisation is more robust to perturbations than spike synchronisation. \begin{figure}[hbt] \centering \includegraphics[height=5cm,width=7cm]{fig9.eps} \caption{(Colour online) $b \times V_r$ for $g_{\rm ex}=0.05$nS, $p=0.5$, and $\gamma=500$pA, where the colour bar represents the time-average order parameter. The regions I (spike patterns) and II (bursting patterns) are separated by a white line with circles.} \label{fig9} \end{figure} \section{Conclusion} In this paper, we studied the spiking and bursting synchronous behaviour in a random neuronal network where the local dynamics of the neurons is given by the adaptive exponential integrate-and-fire (aEIF) model. The aEIF model can exhibit different firing patterns, such as adaptation, tonic spiking, initial burst, regular bursting, and irregular bursting.
In our network, the neurons are randomly connected according to a probability. The larger the probability of connection and the strength of the synaptic coupling, the more likely bursting synchronisation is to appear. It is possible to suppress synchronous behaviour by means of an external perturbation. However, synchronous behaviour at higher values of $g_{\rm ex}$ and $p$, which typically promote bursting synchronisation, is more robust to perturbations than the spike-synchronous behaviour appearing at smaller values of these parameters. We conclude that bursting synchronisation provides a good environment to transmit information when neurons are strongly perturbed (large $\Gamma$). \section*{Acknowledgements} This study was made possible by partial financial support from the following Brazilian government agencies: CNPq, CAPES, and FAPESP (2011/19296-1 and 2015/07311-7). We also wish to thank the Newton Fund and COFAP.
\subsection{Power} \noindent PMTs that meet the required specifications in terms of pulse rise time, dark current and counting rates, and quantum efficiency require applied high voltages (HV) between $1-2.5$~kV and have maximum current ratings of $0.2-0.5$~mA. For the detector design using 12 read-out channels per module, the HV power supply (HVPS) must provide approximately 10~mA per module. In order to minimize costs, we aim to use one HV power supply to power 10 modules (120 channels), and thus we require a HVPS rated at approximately 100~mA and 500~W. For a 100 module detector, 10 HVPS are required and the total power requirement would thus be approximately 5~kW. Several commercial HVPS systems exist that meet these requirements. For example, the \href{http://theelectrostore.com/shopsite_sc/store/html/PsslashEK03R200-GK6-Glassman-New-refurb.html}{Glassman model number PS/EK03R200-GK6} provides an output of $\pm3$~kV with a maximum of 200~mA, and features controllable constant current / constant voltage operation. Regulation and monitoring of the power supplied to the detector will be required on both the module distribution boards and the front-end distribution boards. In both cases, over-current and over-voltage protection will be necessary both for safety and in order to protect the front-end electronics from damage. The monitoring may be accomplished by a measurement circuit that digitizes and transmits the measured voltages and currents over a serial bus to the slow control system of the detector via a generic, CERN-built data acquisition board called an Embedded Local Monitoring Board (ELMB)~\cite{ELMB}. Energy calibration will be done in situ using an $^{241}$Am source, which yields a 60~keV $X$-ray. Calibration runs performed at specified intervals will track the PMT+scintillator response as a function of time. In addition to energy calibration, an LED pulser that can deliver a stable light pulse into each scintillator will also be deployed.
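The power-budget arithmetic above can be checked with a short script; this is an illustrative back-of-the-envelope sketch using the per-channel and per-module figures from the text (the 10~mA per module is the text's rounded budget, above the 6~mA peak from 12 channels at 0.5~mA each).

```python
# Back-of-the-envelope check of the HV power budget quoted in the text.
channels_per_module = 12
max_ma_per_channel = 0.5        # mA, upper PMT current rating
budget_ma_per_module = 10.0     # mA, rounded per-module budget from the text
modules_per_hvps = 10
n_modules = 100
hvps_rating_w = 500.0           # W per supply

peak_ma_per_module = channels_per_module * max_ma_per_channel   # 6.0 mA peak
hvps_ma = modules_per_hvps * budget_ma_per_module               # current per supply
total_power_w = (n_modules / modules_per_hvps) * hvps_rating_w

print(hvps_ma)         # -> 100.0 (mA per supply, matching the HVPS rating)
print(total_power_w)   # -> 5000.0 (W, i.e. ~5 kW for the full detector)
```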
The LED system will be used to monitor drift in response of the PMT+scintillator as a function of time in between $^{241}$Am source calibrations as well as detect any inefficient or non-functional readout channels.
\section{Introduction \label{sec:Intro}} \input{intro} \section{Site Selection \label{sec:site}} \input{site.tex} \section{Relationship with CMS \label{sec:cms}} \input{cms.tex} \section{Detector Concept \label{sec:det}} \input{det.tex} \section{Mechanics, Cooling, and Magnetic Shielding \label{sec:infra}} \input{infra.tex} \section{Power and Calibrations \label{sec:pow}} \input{calib.tex} \section{Trigger and Readout \label{sec:daq}} \input{readout.tex} \section{Backgrounds \label{sec:bkg}} \input{bkg.tex} \section{Simulations and Sensitivity \label{sec:sens}} \input{sim.tex} \section{Timeline and Next Steps \label{sec:timeline}} \noindent We aim to have the experiment ready for physics during Run 3. To that end, we envisage the following timeline: \begin{itemize} \item Construct small fraction of detector ($\sim10\%$) in next 2 yrs \item Install partial detector in PX56 by end of Run 2 (YETS 2017 + TS in 2018) \item Commission and take data in order to evaluate beam-on backgrounds {\it in situ} \item Construction + Installation of remainder of detector during LS2 (2019--2020) \item Final commissioning by spring 2021 \item Operate detector for physics for duration of Run 3 and HL-LHC (mid 2021--) \end{itemize} \noindent The next step in the milliQan project is to seek external funding to enable at least the 10\% construction. No such funding has yet been secured for this project, but one or more proposals to one or more funding agencies are being prepared for the near future.
\section{Summary \label{sec:end}} \noindent In this LOI we have proposed a dedicated experiment that would detect ``milli-charged" particles produced by pp collisions at LHC Point 5. The experiment would be installed during LS2 in the vestigial drainage gallery above UXC and would not interfere with CMS operations. Our calculations and simulations indicate that with 300~fb$^{-1}$ of integrated luminosity, sensitivity to a particle with charge $\mathcal{O}(10^{-3})~e$ can be achieved for masses of $\mathcal{O}(1)$~GeV, and charge $\mathcal{O}(10^{-2})~e$ for masses of $\mathcal{O}(10)$~GeV. This would greatly extend the parameter space explored for particles with small charge and masses above 100 MeV. We have performed sufficient R\&D to encourage us to proceed with securing funding for the project, and with this letter of intent we express the intention to do so. \begin{acknowledgments} \noindent We wish to thank Tiziano Camporesi, Joel Butler, and the CMS collaboration for their encouragement. We would also like to thank Vladimir Ivanchenko, Andrea Dotti and Mihaly Novak for useful discussions regarding {\sc Geant4}. \end{acknowledgments}
\section{Introduction} The black hole information puzzle is the puzzle of whether black hole formation and evaporation is unitary, and debate on this issue has continued for more than 36 years \cite{Page:1993up, Giddings:2006sj, Mathur:2008wi}, since Hawking radiation was discovered \cite{Hawking:1974sw}. Hawking originally used local quantum field theory in the semiclassical spacetime background of an evaporating black hole to deduce \cite{Hawking:1976ra} that part of the information about the initial quantum state would be destroyed or leave our Universe at the singularity or quantum gravity region at or near the centre of the black hole, so that what remained outside after the black hole evaporated would not be given by unitary evolution from the initial state. However, this approach does not fully apply quantum theory to the gravitational field itself, so it was objected that the information-loss conclusion drawn from it might not apply in quantum gravity \cite{Page:1979tc}. Maldacena's AdS/CFT conjecture \cite{Maldacena:1997re} has perhaps provided the greatest impetus for the view that quantum gravity should be unitary within our Universe and give no loss of information. If one believes in local quantum field theory outside a black hole and also that one would not experience extreme harmful conditions (`drama') immediately upon falling into any black hole sufficiently large that the curvature at the surface would not be expected to be dangerous, then recent papers by Almheiri, Marolf, Polchinski, and Sully (AMPS) \cite{Almheiri:2012rt}, and by them and Stanford (AMPSS) \cite{Almheiri:2013hfa}, give a new challenge to unitarity, as they argued that unitarity, locality, and no drama are mutually inconsistent. It seems to us that locality is the most dubious of these three assumptions. 
Nevertheless, locality seems to be such a good approximation experimentally that we would like a much better understanding of how its violation in quantum gravity might be able to preserve unitarity and yet not lead to the drama of firewalls or to violations of locality so strong that they would be inconsistent with our observations. Giddings (occasionally with collaborators) has perhaps done the most to investigate unitary nonlocal models for quantum gravity \cite{Giddings:2006sj, Giddings:2006be, Giddings:2007ie, Giddings:2007pj, Giddings:2009ae, Giddings:2011ks, Giddings:2012bm, Giddings:2012dh, Giddings:2012gc, Giddings:2013kcj, Giddings:2013jra, Giddings:2013noa, Giddings:2014nla, Giddings:2014ova, Giddings:2015uzr, Donnelly:2016rvo, Giddings:2017mym, Donnelly:2017jcd}. For other black hole qubit models, see \cite{Terno:2005ff, Levay:2006pt, Levay:2007nm, Duff:2008eei, Levay:2008mi, Borsten:2008wd, Rubens:2009zz, Levay:2010ua, Duff:2010zz, Duff:2012nd, Borsten:2011is, Levay:2011bq, Avery:2011nb, Dvali:2011aa, Borsten:2012sga, Borsten:2012fx, Dvali:2012en, Duff:2013xna, Levay:2013epa, Verlinde:2013vja, Borsten:2013vea, Duff:2013rma, Borsten:2013uma, Dvali:2013lva, Prudencio:2014ypa, Pramodh:2014jha, Chatwin-Davies:2015hna, Dai:2015dqt, Belhaj:2016yyq, Belhaj:2016yfo}. Here we present a qubit toy model for how a black hole might evaporate unitarily and without firewalls, but with nonlocal gravitational degrees of freedom. We model radiation modes emitted by a black hole as localized qubits that interact locally with these nonlocal gravitational degrees of freedom. Similar models were first investigated by Giddings in his previously referred papers, particularly in \cite{Giddings:2011ks,Giddings:2012bm,Giddings:2012dh}. Nomura and his colleagues also have a model \cite{Nomura:2014woa,Nomura:2014voa,Nomura:2016qum} with some similarities to ours. 
In this way we can go from modes near the horizon that to an infalling observer appear to be close to a vacuum state (and hence without a firewall), and yet the modes that propagate outward can pick up information from the nonlocal gravitational field they pass through so that they transfer that information out from the black hole. \section{Qualitative Description of Our Qubit Model} Using Planck units in which $\hbar = c = G = k_\mathrm{Boltzmann} = 1$, a black hole that forms with area $A$ and Bekenstein-Hawking entropy $S_\mathrm{BH} = A/4$ may be considered to have $e^{S_\mathrm{BH}} = 2^{S_\mathrm{BH}/(\ln{2})}$ orthonormal states, which is the same number as the number of orthonormal states of $n = S_\mathrm{BH}/(\ln{2}) = A/(4\ln{2})$ qubits if this is an integer, which for simplicity we shall assume. We shall take the state of these $n$ qubits as being the state of the gravitational field of the black hole. We assume that this state is rapidly scrambled by highly complex unitary transformations, so that generically a black hole formed by collapse, even if it is initially in a pure state, will have these $n$ qubits highly entangled with each other. However, in our model we shall assume that there are an additional $n$ qubits of outgoing radiation modes just outside the horizon, and a third set of $n$ qubits of outgoing but infalling radiation modes just inside the horizon. We shall assume that these two sets of qubits have a unique pairing (as partner modes in the beginning of the Hawking radiation) and further that each pair is in the singlet Bell state that we shall take to represent the vacuum state as seen by an infalling observer, so that all of these $2n$ qubits of radiation modes near the black hole horizon are in the vacuum pure state and hence give no contribution to the Bekenstein-Hawking entropy $S_\mathrm{BH} = n\ln{2}$. We thus explicitly assume that the infalling observer sees only the vacuum and no firewall in crossing the event horizon. 
See \cite{Page:2013mqa} for one argument for justifying this assumption. Now we assume that the Hawking emission of one mode corresponds to one of the $n$ outgoing radiation modes from just outside the horizon propagating to radial infinity. However, the new assumption of this model is that the radiation qubit that propagates outward interacts (locally) with one of the $n$ nonlocal qubits representing the black hole gravitational field, in just such a way that when the mode gets to infinity, the quantum state of that radiation qubit is interchanged with the quantum state of the corresponding black hole gravitational field qubit. This is a purely unitary transformation, not leading to any loss of information. Assume for simplicity that the black hole forms in a pure state that becomes highly scrambled by a unitary transformation. Therefore, as an early outgoing radiation qubit propagates out to become part of the Hawking radiation, when it interchanges its state with that of the corresponding gravitational field qubit, it will become nearly maximally entangled with the black hole state and will have von Neumann entropy very nearly $\ln{2}$, the maximum for a qubit. So the early Hawking radiation qubits will each have nearly the maximum entropy allowed, and there will be very little entanglement between the early radiation qubits themselves. Meanwhile, the black hole qubit corresponding to each outgoing radiation qubit will have taken on the state that the outgoing radiation qubit had when it was just outside the horizon and hence be in the unique singlet Bell state with the infalling radiation qubit just inside the horizon that was originally paired with the outgoing qubit. This vacuum singlet Bell state can then be omitted from the analysis without any loss of information. In this way we can model the reduction in the size of the black hole as it evaporates by the reduction of the number of black hole qubits. 
We might say that each such vacuum Bell pair falls into the singularity, but what hits the singularity in this model is a unique quantum state, similar to the proposal of Horowitz and Maldacena \cite{Horowitz:2003he}. Therefore, if we start with $n$ black hole gravitational field qubits, $n$ outgoing radiation qubits just outside the horizon, and $n$ infalling radiation qubits just inside the horizon, after the emission of $n_r$ outgoing radiation qubits, $n_r$ of the infalling radiation qubits will have combined into a unique quantum state with the $n_r$ black hole qubits that were originally interacting with the $n_r$ outgoing radiation qubits that escaped, so that we can ignore them as what we might regard as merely vacuum fluctuations. This leaves $n-n_r$ pairs of outgoing radiation qubits just outside the horizon and infalling qubits just inside the horizon (each pair being in the singlet Bell state), and $n-n_r$ black hole gravitational field qubits. Eventually the number of Hawking radiation qubits, $n_r$, exceeds the number of black hole qubits remaining, $n-n_r$, when $n_r > n/2$, and the black hole becomes `old.' At this stage, the remaining black hole qubits all become nearly maximally entangled with the Hawking radiation qubits, so that the von Neumann entropy of the black hole becomes very nearly $(n-n_r)\ln{2}$, which we shall assume is very nearly $A/4$ at that time. Since the whole system is assumed to be in a pure state, and since we have assumed unitary evolution throughout, the von Neumann entropy of the Hawking radiation at this late stage is also very nearly $(n-n_r)\ln{2}$, but now this is less than the maximum value, which is $n_r\ln{2}$. Thus each of the $n_r$ Hawking radiation qubits can no longer be maximally entangled with the remaining $n-n_r$ black hole qubits, and significant entanglement begins to develop between the Hawking radiation qubits themselves. 
Nevertheless, for any collection of $n' < n/2$ qubits of the Hawking radiation, the von Neumann entropy of that collection is expected \cite{Page:1993df, Page:1993wv} to be very nearly $n'\ln{2}$, so one would still find negligible quantum correlations between any collection of $n'$ Hawking radiation qubits. Finally, when all $n$ of the original outgoing radiation qubits have left the black hole and propagated to infinity to become Hawking radiation qubits, there are no qubits left for the black hole; hence it has completely evaporated away. The $n$ Hawking radiation qubits now form a pure state, just as the original quantum state that formed the black hole was assumed to be. Of course, the unitary scrambling transformation of the black hole qubits means that the pure state of the final Hawking radiation can look quite different from the initial state that formed the black hole, but the two are related by a unitary transformation. The net effect is that the emission of one outgoing radiation qubit gives the transfer of the information in one black hole qubit to one Hawking radiation qubit. But rather than simply saying that this transfer is nonlocal, from the inside of the black hole to the outside, we are saying that the black hole qubit itself is always nonlocal, and that the outgoing radiation qubit picks up the information in the black hole qubit locally, as it travels outward through the nonlocal gravitational field of the black hole. Therefore, in this picture in which we have separated the quantum field theory qubits of the radiation from the black hole qubits of the gravitational field, we do not need to require any nonlocality for the quantum field theory modes, but only for the gravitational field. In this way the nonlocality of quantum gravity might not have much observable effect on experiments in the laboratory focussing mainly on local quantum field theory modes. 
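The resulting radiation entropy as a function of the number of emitted qubits is just the Page curve of this toy model, $S_\mathrm{rad}(n_r) \approx \min(n_r,\, n - n_r)\ln 2$. A minimal numerical sketch (the function name and the choice $n = 100$ are our own illustration, not part of the model):

```python
import numpy as np

def radiation_entropy(n, n_r):
    """Von Neumann entropy (in nats) of n_r emitted Hawking-radiation
    qubits out of n, for a generic scrambled pure state: the Page curve
    min(n_r, n - n_r) * ln 2."""
    return min(n_r, n - n_r) * np.log(2)

n = 100                                       # illustrative black hole size
curve = [radiation_entropy(n, k) for k in range(n + 1)]

# pure state both before emission and after complete evaporation
assert curve[0] == 0 and curve[-1] == 0
# entropy peaks at the Page time, n_r = n/2
assert max(curve) == radiation_entropy(n, n // 2)
```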
\section{Mathematics of Qubit Transport} Before the black hole forms, we assume that we have a Hilbert space of dimension $2^n$ in which each state collapses to form a black hole whose gravitational field can be represented by $n$ nonlocal qubits. We assume that we have a pure initial state represented by the set of $2^n$ amplitudes $A_{q_1q_2\ldots q_n}$, where for each $i$ running from 1 to $n$, the corresponding $q_i$ can be 0 or 1, representing the two basis states of the $i$th qubit. Once the black hole forms, without changing the Hilbert space dimension, we can augment this Hilbert space by taking its tensor product with a 1-dimensional Hilbert space for the vacuum state of $n$ infalling and $n$ outgoing radiation modes just inside and just outside the event horizon. We shall assume that this vacuum state is the tensor product of vacuum states for each pair of modes, with each pair being in the singlet Bell state that we shall take to represent the vacuum for that pair of modes. That is, once the black hole forms, we assume that we have $n$ nonlocal qubits for the gravitational field of the black hole, labeled by $a_i$, where $i$ runs from 1 to $n$, $n$ localized qubits for the infalling radiation modes just inside the horizon, labeled by $b_i$, and $n$ localized qubits for the outgoing radiation modes just outside the horizon, labeled by $c_i$. Suppose that each qubit has basis states $\ket{0}$ and $\ket{1}$, where subscripts (either $a_i$, $b_i$, or $c_i$) will label which of the $3n$ qubits one is considering. We assume that each pair of infalling and outgoing radiation qubits is in the vacuum singlet Bell state \begin{equation} \ket{B}_{b_i c_i} = \frac{1}{\sqrt{2}}\Bigl(\ket{0}_{b_i}\ket{1}_{c_i} -\ket{1}_{b_i}\ket{0}_{c_i}\Bigr). 
\label{Bell} \end{equation} Initially the quantum state of the black hole gravitational field and radiation modes is \begin{equation} \ket{\Psi_0}=\sum_{q_1=0}^1\sum_{q_2=0}^1\cdots\sum_{q_n=0}^1 A_{q_1q_2\ldots q_n}\prod_{i=1}^n\ket{q_i}_{a_i}\prod_{i=1}^n\ket{B}_{b_ic_i}, \label{initial state} \end{equation} where the $A_{q_1q_2\ldots q_n}$ are the amplitudes for the $2^n$ product basis states for the black hole gravitational field. Note that the entire quantum state is the product of a state of all the black hole gravitational qubits and a single pure vacuum state for the radiation modes. During the emission of the $i$th radiation mode to become a mode of Hawking radiation at radial infinity, the basis state for the subsystem of the $i$th black hole, infalling radiation, and outgoing radiation qubits changes as \begin{equation} \ket{q_i}_{a_i}\ket{B}_{b_ic_i} \mapsto -\ket{B}_{a_ib_i}\ket{q_i}_{c_i}, \label{transfer} \end{equation} where $\ket{B}_{a_ib_i}$ is the analogue of $\ket{B}_{b_ic_i}$ given by Eq.\ (\ref{Bell}) with $b_i$ replaced by $a_i$ and $c_i$ replaced by $b_i$. As is obvious from the expressions on the right hand sides, this just interchanges the state of the $i$th black hole qubit with the state of the $i$th outgoing radiation qubit. If $P_{a_ic_i} = \ket{B}_{a_ic_i}\bra{B}_{a_ic_i}$ multiplied by the identity operator in the $b_i$ subspace, then for $\theta = \pi$ the continuous sequence of unitary transformations \begin{equation} U(\theta)=\exp\Bigl(-i\theta P_{a_ic_i}\Bigr)={\rm I}+(e^{-i\theta}-1)P_{a_ic_i} \label{Unitary operator for qubit transfer} \end{equation} becomes $U(\pi) = {\rm I}-2P_{a_ic_i}$, which gives the unitary transformation \eqref{transfer}, interchanging the states of the $i$th black hole qubit with the state of the outgoing radiation qubit. 
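The interchange effected by $U(\pi) = {\rm I}-2P_{a_ic_i}$ can be checked directly in a three-qubit numerical sketch (our own illustration; the tensor ordering $(a_i, b_i, c_i)$ and the use of NumPy are assumptions of the sketch, not of the model):

```python
import numpy as np

e = np.eye(2)                                  # |0> and |1> basis vectors
singlet = (np.kron(e[0], e[1]) - np.kron(e[1], e[0])) / np.sqrt(2)

# P_{a_i c_i}: projector |B><B| on the (a, c) pair, identity on b,
# embedded in the tensor ordering (a, b, c)
P4 = np.outer(singlet, singlet)
P8 = np.zeros((8, 8))
for qa in range(2):
    for qb in range(2):
        for qc in range(2):
            for ra in range(2):
                for rc in range(2):
                    P8[4*qa + 2*qb + qc, 4*ra + 2*qb + rc] = P4[2*qa + qc, 2*ra + rc]

U = np.eye(8) - 2 * P8                         # U(pi) = I - 2 P_{a_i c_i}

for q in range(2):
    psi_in = np.kron(e[q], singlet)            # |q>_a |B>_{b c}
    psi_out = -np.kron(singlet, e[q])          # -|B>_{a b} |q>_c
    assert np.allclose(U @ psi_in, psi_out)    # Eq. (transfer) holds
assert np.allclose(U.T @ U, np.eye(8))         # and U(pi) is unitary
```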
We might suppose that as the radiation qubit moves outward, the $\theta$ parameter of the unitary transformation is a function of the radius $r$ that changes from 0 at the horizon to $\pi$ at radial infinity. For example, one could take $\theta = \pi(1 - K/K_h)$, where $K$ is some curvature invariant (such as the Kretschmann invariant, $K = R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}$) that decreases monotonically from some positive value at the horizon (where its value is $K_h$) to zero at infinity. We now assume that after the emission of the $i$th mode, the vacuum Bell state of the $i$th black hole qubit and the $i$th infalling radiation qubit can be dropped from the analysis, so that one only has the Hawking radiation qubit remaining for that $i$. Then the state of the subsystem for that $i$ goes from $-\ket{B}_{a_ib_i}\ket{q_i}_{c_i}$ given by Eq.\ (\ref{transfer}) to simply $\ket{q_i}_{c_i}$ for the qubit representing the Hawking radiation mode. Therefore, after all of the $n$ outgoing radiation modes propagate out to infinity while interacting with the black hole gravitational field, and after all the Bell vacua left inside the black hole are omitted, one is left with no black hole and the Hawking radiation in the final pure state \begin{equation} \ket{\Psi_1}=\sum_{q_1=0}^1\sum_{q_2=0}^1\cdots\sum_{q_n=0}^1 A_{q_1q_2\ldots q_n}\prod_{i=1}^n\ket{q_i}_{c_i}. \label{final state} \end{equation} We note that we require the nonlocal gravitational qubits $a_i$ not to create firewalls by themselves. That is, even though the vacuum states $b_i, c_i$ on the horizon are within the range of nonlocal effects, they remain constrained to the singlet state until the systems $c_i$ propagate away to infinity as Hawking radiation under Eq.\ \eqref{Unitary operator for qubit transfer}. This is consistent with the above assumption that the parameter $\theta$ in Eq.\ \eqref{Unitary operator for qubit transfer} is a function of the radius $r$. 
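For a Schwarzschild black hole, where the Kretschmann invariant is $K = 48M^2/r^6$ and the horizon lies at $r_h = 2M$, this illustrative choice gives $\theta(r) = \pi\bigl(1 - (2M/r)^6\bigr)$, which can be sketched as follows (our own example, in units with $G = c = 1$; the specific invariant is only one possible choice, as noted above):

```python
import numpy as np

def theta(r, M=1.0):
    """Illustrative transfer angle theta(r) = pi * (1 - K/K_h) for
    Schwarzschild, with Kretschmann invariant K = 48 M^2 / r^6, so that
    K/K_h = (2M/r)^6 with the horizon at r_h = 2M."""
    return np.pi * (1.0 - (2.0 * M / r) ** 6)

assert np.isclose(theta(2.0), 0.0)             # no interchange at the horizon
assert np.isclose(theta(1e8), np.pi)           # full interchange at infinity
r = np.linspace(2.0, 100.0, 500)
assert np.all(np.diff(theta(r)) > 0)           # theta grows monotonically in r
```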
Conversely, it seems plausible to assume that any incoming mode gradually \emph{drops off} some of its information during propagation through this nonlocal gravitational field. \subsection{Mining Issue} AMPSS \cite{Almheiri:2013hfa}, whose Eq.\ (3.3) is essentially the same as our \eqref{transfer}, raised the following issue with subsystem transfer models as resolutions of the firewall paradox. Suppose there exists ideal mining equipment that can approach arbitrarily close to the horizon without falling into it, and then the equipment interacts with one of the systems $c_i$ just outside the horizon. Note that this can be done without any exchange of energy due to the infinite redshift, and it is assumed that there is no entanglement either. For example, the mining equipment can act unitarily on the system $c_i$ as \begin{eqnarray} U_{\text{mine}}&:&\ket{0}_{c_i}\mapsto e^{i\phi}\ket{0}_{c_i},\;\;\;\ket{1}_{c_i}\mapsto e^{-i\phi}\ket{1}_{c_i}.\\ U_{\text{mine}}&:&\ket{B}_{b_ic_i}\mapsto\frac{\cos\phi}{\sqrt{2}}\Bigl(\ket{0}_{b_i}\ket{1}_{c_i} -\ket{1}_{b_i}\ket{0}_{c_i}\Bigr)-\frac{i\sin\phi}{\sqrt{2}}\Bigl(\ket{0}_{b_i}\ket{1}_{c_i} +\ket{1}_{b_i}\ket{0}_{c_i}\Bigr).\label{mine} \end{eqnarray} Thus the system on the horizon carries one bit of information after this mining process and is no longer in the vacuum state. First of all, it seems implausible that such ideal equipment is physically realistic. Since the equipment is accelerating in order to stay outside the horizon without falling into the black hole, it has an Unruh temperature that becomes very high near the horizon. Then the equipment and the modes it interacts with, $c_i$ in this case, should couple strongly and would be expected to be approximately in a thermal state. As a consequence it seems plausible that energy must be transferred between the mining equipment and the modes $c_i$. Also, notice that the AMPSS mining argument does not take nonlocality into account. 
That is, the mining equipment would interact with the nonlocal gravitational degrees of freedom even if it could avoid the objection of the previous paragraph. As discussed previously, interactions with nonlocal gravitational degrees of freedom transfer part of the quantum information of the mining system into the gravitational degrees of freedom as the equipment approaches the horizon. We can think of this transferred part as now being a part of the temporarily enlarged nonlocal gravitational degrees of freedom when the equipment is very near the horizon. Then in this picture the mining equipment can still produce the phase change of Eq.\ \eqref{mine} on the system just outside the horizon, but this excitation will eventually be absorbed into the nonlocal gravitational degrees of freedom. This absorption is possible regardless of how old the black hole is, because the nonlocal degrees of freedom are temporarily enlarged by the partially transferred degrees of freedom of the mining equipment. In summary, the AMPSS mining argument is not problematic for our model. \section{Giddings' Physical Conditions} Giddings \cite{Giddings:2012bm} has proposed a list of physical constraints on models of black hole evaporation. We shall write each constraint in italics below and then follow that with comments on how our qubit model can satisfy the proposed constraint. (i) \emph{Evolution is unitary.} Our model explicitly assumes unitary evolution. (ii) \emph{Energy is conserved.} Our model is consistent with a conserved energy given by the asymptotic behavior of the gravitational field. The unitary transformation $U(\theta(r))$ during the propagation of each radiation qubit can be written in terms of a radially dependent Hamiltonian without any explicit time dependence, so there is nothing in our model that violates energy conservation. 
(iii) \emph{The evolution should appear innocuous to an infalling observer crossing the horizon; in this sense the horizon is preserved.} We explicitly assume that the radiation modes are in their vacuum states when they are near the horizon, so there is no firewall or other drama there. (iv) \emph{Information escapes the black hole at a rate $dS/dt\sim1/R$.} Although we did not discuss the temporal rates above, if one radiation qubit propagates out through some fiducial radius, such as $r = 3M$, during a time period comparable to the black hole radius $R$, then, since each qubit of the early radiation carries an entropy very nearly $\ln{2}$, one would indeed have $dS/dt\sim1/R$. (v) \emph{The coarse-grained features of the outgoing radiation are still well-approximated as thermal.} Because the scrambling of the black hole qubits leaves each one very nearly in a maximally mixed state, when the information is transferred from the black hole qubits to the Hawking radiation qubits, each of these will also be very nearly in a maximally mixed state, which in the simplified toy model represents thermal radiation. Furthermore, one would expect any collection of $n' < n/2$ qubits of the Hawking radiation also to be nearly maximally mixed, so all the coarse-grained features of the radiation would be well-approximated as thermal. (vi) \emph{Evolution of a system ${\cal H}_A\otimes{\cal H}_B$ saturates the subadditivity inequality $S_A+S_B \geq S_{AB}$.} Here it is assumed that $A$ and $B$ are subsystems of $n_A$ and $n_B$ qubits respectively of the black hole gravitational field and of the Hawking radiation, not including any of the infalling and outgoing radiation qubits when they are near the horizon. Then for $n_A + n_B < n/2$, $A$, $B$, and $AB$ are all nearly maximally mixed, so $S_A \approx n_A\ln{2}$, $S_B \approx n_B\ln{2}$, and $S_{AB} \approx (n_A+n_B)\ln{2}$, thus approximately saturating the subadditivity inequality. 
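This entropy counting can be checked numerically with the generic-subsystem formula $S \approx \min(n',\, n - n')\ln 2$ (a sketch of our own; the values $n = 100$, $n_A$, and $n_B$ are arbitrary illustrative choices):

```python
import numpy as np

ln2 = np.log(2)
n = 100                # total qubits, assumed in a generic scrambled pure state

def S(n_q):
    """Generic-subsystem entropy min(n_q, n - n_q) * ln 2, in nats."""
    return min(n_q, n - n_q) * ln2

# n_A + n_B < n/2: subadditivity is (nearly) saturated
n_A, n_B = 20, 25
assert np.isclose(S(n_A) + S(n_B), S(n_A + n_B))

# n_A, n_B < n/2 but n_A + n_B > n/2: a positive gap (2 n_A + 2 n_B - n) ln 2
n_A, n_B = 30, 40
gap = S(n_A) + S(n_B) - S(n_A + n_B)
assert np.isclose(gap, (2*n_A + 2*n_B - n) * ln2) and gap > 0
```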
(Of course, for any model in which the total state of $n$ qubits is pure and any collection of $n' < n/2$ qubits has nearly maximal entropy $S \approx n'\ln{2}$, if $n_A < n/2$ and $n_B < n/2$ but $n_A + n_B > n/2$, then $S_A \approx n_A\ln{2}$ and $S_B \approx n_B\ln{2}$, but $S_{AB} \approx (n-n_A-n_B)\ln{2}$, so $S_A+S_B-S_{AB} \approx (2n_A+2n_B-n)\ln{2} > 0$, and the subadditivity inequality is generically not saturated in this case.) \section{Conclusions} We have given a toy qubit model for black hole evaporation that is unitary and does not have firewalls. It does have nonlocal degrees of freedom for the black hole gravitational field, but the quantum field theory radiation modes interact purely locally with the gravitational field, so in some sense the nonlocality is confined to the gravitational sector. The model has no mining issue and also satisfies all of the constraints that Giddings has proposed, though further details would need to be added to give the detailed spectrum of Hawking radiation. The model is in many ways {\it ad hoc}, such as in the details of the qubit transfer, so one would like a more realistic interaction of the radiation modes with the gravitational field than the simple model sketched here. One would also like to extend the model to include possible ingoing radiation from outside the black hole. \section*{Acknowledgments} DNP acknowledges discussions with Beatrice Bonga, Fay Dowker, Jerome Gauntlett, Daniel Harlow, Adrian Kent, Donald Marolf, Jonathan Oppenheim, Subir Sachdev, and Vasudev Shyam at the Perimeter Institute, where an early version of this paper was completed. We also benefited from emails from Steven Avery, Giorgi Dvali, Steven Giddings, Yasunori Nomura, and Douglas Stanford. Revisions were made while using Giorgi Dvali's office during the hospitality of Matthew Kleban at the Center for Cosmology and Particle Physics of New York University. 
This research was supported in part by the Natural Sciences and Engineering Research Council of Canada, and in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science. \section*{References}
\section{Introduction}\label{sintro} \section{Background: CUR and low-rank approximation}\label{sbcgr} {\em Low-rank approximation} of an $m\times n$ matrix $W$ having a small numerical rank $r$, that is, having a well-conditioned rank-$r$ matrix nearby, is one of the most fundamental problems of numerical linear algebra \cite{HMT11} with a variety of applications to highly important areas of modern computing, which range from machine learning theory and neural networks \cite{DZBLCF14}, \cite{JVZ14} to numerous problems of data mining and analysis \cite{M11}. One of the most studied approaches to the solution of this problem is given by $CUR$ {\em approximation}, where $C$ and $R$ are a pair of $m\times l$ and $k\times n$ submatrices formed by $l$ columns and $k$ rows of the matrix $W$, respectively, and $U$ is an $l\times k$ matrix such that $W\approx CUR$. Every low-rank approximation allows very fast approximate multiplication of the matrix $W$ by a vector, but CUR approximation is particularly transparent and memory efficient. The algorithms for computing it are characterized by two main parameters: (i) their complexity and (ii) bounds on the error norms of the approximation. We assume that $r\ll \min\{m,n\}$, that is, the integer $r$ is much smaller than $\min\{m,n\}$, and we seek algorithms that use $o(mn)$ flops, that is, much fewer than the information lower bound $mn$. \section{State of the art and our progress}\label{ssartpr} The algorithms of \cite{GE96} and \cite{P00} compute CUR approximations by using order of $mn\min\{m,n\}$ flops.\footnote{Here and hereafter {\em ``flop''} stands for ``floating point arithmetic operation''.} The algorithm of \cite{BW14} does this in $O(mn\log(mn))$ flops by using randomization. These are record upper bounds for computing a CUR approximation to {\em any input matrix} $W$, but the user may be quite happy with having close CUR approximations to {\em many matrices} $W$ that make up the class of his/her interest. 
The information lower bound $mn/2$ (a flop involves at most two entries) does not apply to such restricted input classes, and we go well below it in our paper \cite{PSZa} (we must refer to that paper for technical details because of the limitation on the size of this submission). We first formalize the problem of CUR approximation of an average $m\times n$ matrix of numerical rank $r\ll \min\{m,n\}$, assuming the customary Gaussian (normal) probability distribution for its $(m+n)r$ i.i.d. input parameters. Next we consider a two-stage approach: (i) first fix a pair of integers $k\le m$ and $l\le n$ and compute a CUR approximation (by using the algorithms of \cite{GE96} or \cite{P00}) to a random $k\times l$ submatrix and then (ii) extend it to computing a CUR approximation of the input matrix $W$ itself. We must keep the complexity of Stage (i) low and must extend the CUR approximation from the submatrix to the matrix $W$. We prove that for a specific class of input matrices $W$ these two tasks are in conflict (see Example 11 of \cite{PSZa}), but such a class of hard inputs is narrow, because we prove that our algorithm produces a close approximation to the average $m\times n$ input matrix $W$ having numerical rank $r$. (We define such an average matrix by assuming the standard Gaussian (normal) probability distribution.) By extending our two-stage algorithms with the technique of \cite{GOSTZ10}, which we call {\em cross-approximation}, we slightly narrow the class of hard inputs of Example 11 of \cite{PSZa} to the smaller class of Example 14 of \cite{PSZa}, and moreover we deduce sharper bounds on the approximation error by maximizing the {\em volume} of an auxiliary $k\times l$ submatrix that defines a CUR approximation. In our extensive tests with a variety of real-world input data for regularization of matrices from the Singular Matrix Database, our fast algorithms consistently produce close CUR approximations. 
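The two-stage idea can be sketched in a few lines (a simplified illustration of ours: the row and column indices are sampled uniformly at random, and $U$ is taken as the pseudoinverse of the intersection block, rather than via the strong rank-revealing algorithms of \cite{GE96} or \cite{P00}):

```python
import numpy as np

rng = np.random.default_rng(0)

def cur_random(W, k, l):
    """Toy two-stage CUR sketch: sample k rows and l columns uniformly at
    random and take U = pinv(W[I, J]) (an l x k nucleus), so that
    C @ U @ R approximates W.  Illustrative index choice only."""
    m, n = W.shape
    I = rng.choice(m, size=k, replace=False)
    J = rng.choice(n, size=l, replace=False)
    C, R = W[:, J], W[I, :]
    U = np.linalg.pinv(W[np.ix_(I, J)])
    return C, U, R

# an "average" input of numerical rank r = 5
m, n, r = 200, 150, 5
W = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
C, U, R = cur_random(W, k=2*r, l=2*r)
err = np.linalg.norm(W - C @ U @ R) / np.linalg.norm(W)
assert err < 1e-6       # close CUR approximation for this generic input
```

For a rank-$r$ input, any $k\times l$ submatrix of rank $r$ yields an exact reconstruction in exact arithmetic, which is why the random choice succeeds generically.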
Since our fast algorithms produce a reasonably accurate CUR approximation to the average input matrix, the class of hard input matrices for these algorithms must be narrow, and we studied a tentative direction towards further narrowing this input class. We prove that the algorithms are expected to output a close CUR approximation to any matrix $W$ if we pre-process it by applying Gaussian multipliers. This is a nontrivial result of independent interest (proven on more than three pages), but its formal support is only for application of Gaussian multipliers, which is quite costly. We hope, however, that we can still substantially narrow the class of hard inputs even if we replace Gaussian multipliers with the products of reasonable numbers of random bidiagonal matrices and if we partly curb the permutation of these matrices. If we achieve this, then the preprocessing would become inexpensive. This direction seems to be quite promising, but still requires further work. Finally, our algorithms can be extended to the acceleration of various computational problems that are known to have links to low-rank approximation, but in our concluding Section \ref{scncl} we describe novel and rather unexpected extensions to the acceleration of the Fast Multipole Method and Conjugate Gradient algorithms,\footnote{Hereafter we use the acronyms FMM and CG.} both being among the most celebrated achievements of the 20th century in Numerical Linear Algebra. \subsection{Some related results on matrix algorithms and our progress on other fundamental subjects of matrix computations}\label{srltwr} A huge bibliography on CUR and low-rank approximation, including the known best algorithms, which we already cited, can be accessed from the papers \cite{HMT11}, \cite{M11}, \cite{BW14} and \cite{W14}. Our main contribution is a dramatic acceleration of the known algorithms. 
Some of our techniques extend the ones of \cite{PZ16}, \cite{PZ17}, and \cite{PZa}, where we also show the duality of randomization and derandomization and apply it to fundamental matrix computations. In \cite{PZ16} we prove that preprocessing with almost any well-conditioned multiplier of full rank is on average as efficient for low-rank approximation as preprocessing with a Gaussian one, and then we propose some new highly efficient sparse and structured multipliers. Besides providing a new insight into the subject, this motivates the design of more efficient algorithms and shows a specific direction towards this goal. We obtain similar progress in \cite{PZa} and \cite{PZ17} for preprocessing Gaussian elimination with no pivoting and block Gaussian elimination. We recall that Gaussian elimination with partial pivoting is performed millions of times per day, where pivoting, required for numerical stabilization, is frequently a bottleneck because it interrupts the stream of arithmetic operations with foreign operations of comparison, involves book-keeping, compromises data locality, and increases communication overhead and data dependence. Randomized preprocessing is a natural substitute for pivoting, and in \cite{PZa} we show that Gaussian elimination with no pivoting as well as block Gaussian elimination (which is another valuable algorithm and which also requires protection against numerical problems) are efficient on the average input with preprocessing by any nonsingular and well-conditioned multipliers. \cite{PZ17} obtains similar progress for the important subject of approximating the trailing singular spaces associated with the $\nu$ smallest singular values of a matrix having numerical nullity $\nu$. Our current progress greatly supersedes these earlier results, however, in terms of the scale of the acceleration of the known algorithms.
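A toy illustration (not the algorithm of \cite{PZa} itself) of why random preprocessing can replace pivoting: Gaussian elimination with no pivoting breaks down on a perfectly well-conditioned matrix with a zero leading pivot, yet succeeds after multiplication by a random multiplier. The $2\times 2$ example and the Gaussian multiplier are illustrative choices.

```python
import numpy as np

def genp(A):
    # Gaussian elimination with no pivoting; returns L, U.
    # Raises if a pivot vanishes (the failure pivoting normally hides).
    A = A.astype(float).copy()
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        if abs(A[k, k]) < 1e-14:
            raise ZeroDivisionError("zero pivot")
        L[k + 1:, k] = A[k + 1:, k] / A[k, k]
        A[k + 1:, k:] -= np.outer(L[k + 1:, k], A[k, k:])
    return L, np.triu(A)

W = np.array([[0.0, 1.0], [1.0, 0.0]])   # nonsingular, but GENP breaks down on it

# Random (here Gaussian) preprocessing: the leading pivot of M @ W
# is nonzero with probability 1, so GENP goes through.
rng = np.random.default_rng(3)
M = rng.standard_normal((2, 2))
L, U = genp(M @ W)
err = np.linalg.norm(M @ W - L @ U)
```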
Our technique of representing random Gaussian multipliers as a product of random bidiagonal factors, our extension of CUR approximation to FMM and CG algorithms, and our analysis of CUR approximation for the average input are new and can be of independent interest. \section{Conclusions}\label{scncl} We dramatically accelerated the known algorithms for the fundamental problems of CUR and low-rank approximation in the case of the average input matrix and then pointed out a direction towards a heuristic extension of the resulting fast algorithms to a wider class of inputs by applying quasi-Gaussian preprocessing. Our extensive tests for benchmark matrices of discretized PDEs have consistently supported the results of our formal analysis. Our study can be extended to a variety of important subjects of matrix computations. Some such extensions have been developed in the papers \cite{PZ16}, \cite{PZ17} and \cite{PZa}, and there are various challenging directions for further progress. In particular, our accelerated CUR and low-rank approximation enables faster solution of some new important computational problems, thus extending the long list of the known applications. In the concluding section of \cite{PSZa}, we add two new highly important subjects to this long list.
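A sketch of the bidiagonal-product representation mentioned above; the number of factors, their alternating lower/upper structure, and the matrix size are illustrative assumptions, not the construction of \cite{PSZa}:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 50, 4   # t bidiagonal factors (a "reasonable number", per the text)

def random_bidiagonal(n, lower=True):
    # Unit-diagonal bidiagonal factor with a random off-diagonal:
    # nonsingular by construction (its determinant is exactly 1).
    B = np.eye(n)
    offdiag = rng.standard_normal(n - 1)
    if lower:
        B[np.arange(1, n), np.arange(n - 1)] = offdiag
    else:
        B[np.arange(n - 1), np.arange(1, n)] = offdiag
    return B

# Alternate lower and upper factors so the product mixes rows both ways.
M = np.eye(n)
for i in range(t):
    M = random_bidiagonal(n, lower=(i % 2 == 0)) @ M

# Kept in factored form, applying M to a vector costs O(t*n) flops,
# versus O(n^2) for a dense Gaussian multiplier.
```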
\section*{Supplemental Material} In this Supplemental Material, we provide more numerical data for the ground-state entanglement entropy and entanglement spectrum. \subsection*{Ground-state entanglement entropy} In the main text, we have discussed the ground-state entanglement entropy $S(\overline{\rho})$ obtained by averaging the density matrices of the three ground states, i.e., $\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$. Now we compute the corresponding results $S(|\Psi_i\rangle)$ and their derivatives $dS(|\Psi_i\rangle)/dW$ for the three individual states. The sample-averaged results are shown in Fig.~\ref{Spsi}. The data of the three individual states show some differences, but are qualitatively the same: for all of them, the entanglement decreases with $W$, and the derivative with respect to $W$ has a single minimum that becomes deeper for larger system sizes. For the finite systems that we have studied, the location of the minimum does depend somewhat on the individual states, but its value does not deviate much from $W=0.6$. To incorporate the effects of all three states, we compute the mean $\overline{S}=\frac{1}{3}\sum_{i=1}^3 S(|\Psi_i\rangle)$. This is an alternative averaging method to the one ($\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$) that we use in the main text. The sample-averaged results are shown in Fig.~\ref{Sbar}. The minimum of $\langle d\overline{S}/dW\rangle$ is located at $W\approx0.6$ for $N=5-9$ electrons [Fig.~\ref{Sbar}(c)], and its depth diverges as $h\propto N^{1.33}$ with the system size [Fig.~\ref{Sbar}(d)]. The scaling $d\overline{S}/dW\propto N^{\frac{1}{2}+\frac{1}{2\nu}}f'[N^{\frac{1}{2\nu}}(W-W_c)]$ suggests $\nu\approx 0.6$. $\langle\overline{S}\rangle$ agrees with an area law at all $W$'s, and the entanglement density starts to drop at $W\approx0.4$ [Fig.~\ref{Sbar}(b)]. All of these results are very similar to those shown in Figs.~1 and 2 in the main text.
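The two averaging routes can be contrasted on random states. A minimal sketch (with hypothetical small subsystem dimensions, unrelated to our actual system) illustrates that, by concavity of the von Neumann entropy, $S(\overline{\rho})$ always bounds $\overline{S}$ from above:

```python
import numpy as np

def reduced_dm(psi, d_a, d_b):
    # Trace out subsystem B of a pure state on a d_a * d_b Hilbert space.
    m = psi.reshape(d_a, d_b)
    return m @ m.conj().T

def entropy(rho):
    # Von Neumann entropy from the eigenvalues of a density matrix.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

rng = np.random.default_rng(1)
d_a = d_b = 4          # hypothetical subsystem dimensions
states = []
for _ in range(3):     # three random "ground states"
    psi = rng.standard_normal(d_a * d_b) + 1j * rng.standard_normal(d_a * d_b)
    states.append(psi / np.linalg.norm(psi))

# Main-text method: entropy of the averaged (reduced) density matrix.
rho_bar = sum(reduced_dm(s, d_a, d_b) for s in states) / 3.0
s_of_rho_bar = entropy(rho_bar)

# This supplement's method: average of the individual entanglement entropies.
s_bar = sum(entropy(reduced_dm(s, d_a, d_b)) for s in states) / 3.0
```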
This means both averaging methods, i.e., $\overline{\rho}=\frac{1}{3}\sum_{i=1}^3|\Psi_i\rangle\langle\Psi_i|$ and $\overline{S}=\frac{1}{3}\sum_{i=1}^3 S(|\Psi_i\rangle)$, can identify the ground-state phase transitions and give the same critical $W$. However, we observe larger finite-size effects of $h$ and larger error bars in $\langle d\overline{S}/dW\rangle$ (especially at small $W$). \begin{figure*} \centerline{\includegraphics[width=\linewidth]{entropy_psi.pdf}} \caption{$\langle S(|\Psi_i\rangle)\rangle$ and $\langle dS(|\Psi_i\rangle)/dW\rangle$ for (a,d) $|\Psi_1\rangle$, (b,e) $|\Psi_2\rangle$ and (c,f) $|\Psi_3\rangle$, where $|\Psi_1\rangle$, $|\Psi_2\rangle$ and $|\Psi_3\rangle$ are the three states with ascending energies in the ground-state manifold. Here we averaged $20000$ samples for $N=4-7$, $5000$ samples for $N=8$, and $800$ samples for $N=9$ electrons. The data at $W=\infty$, i.e., the noninteracting limit, are also given.} \label{Spsi} \end{figure*} \begin{figure} \centerline{\includegraphics[width=\linewidth]{entropy_psi_avg.pdf}} \caption{We measure the ground-state entanglement by $\overline{S}=\frac{1}{3}\sum_{i=1}^3 S(|\Psi_i\rangle)$. (a) $\langle\overline{S}\rangle$, (b) the entanglement density $\alpha$, and (c) $\langle d\overline{S}/dW\rangle$ versus the disorder strength $W$. (d) The depth $h$ of $\langle d\overline{S}/dW\rangle$ versus the number of electrons $N$ on a double logarithmic plot. The linear fit (dashed line) shows $h\propto N^{1.33}$. Here we averaged $20000$ samples for $N=4-7$, $5000$ samples for $N=8$, and $800$ samples for $N=9$ electrons. The data at $W=\infty$, i.e., the noninteracting limit, are also given in (a) and (b).} \label{Sbar} \end{figure} \subsection*{Ground-state entanglement spectrum (ES)} In the main text, we consider the density of states (DOS) $\overline{D}(\xi)$ and level statistics $\overline{P}(s)$ of the ES averaged over the three ground states.
We find that the results of each individual state are almost the same as those obtained by averaging over the three ground states, which justifies the averaging procedure. Here, we demonstrate the results [$D_1(\xi)$ and $P_1(s)$] of $|\Psi_1\rangle$ for completeness (Fig.~\ref{oespsi1}). The results for $|\Psi_2\rangle$ and $|\Psi_3\rangle$ are almost the same as those for $|\Psi_1\rangle$, thus we do not show them here. \begin{figure} \centerline{\includegraphics[width=\linewidth]{oes_N_9_psi1_v2.pdf}} \caption{The sample-averaged DOS $\langle D_1(\xi)\rangle$ and the level-spacing distribution $P_1(s)$ of the ground-state ES below $\xi=40$ for $|\Psi_1\rangle$ of $N=9$ electrons at (a) $W=0.4$, (b) $W=0.6$, (c) $W=1$, (d) $W=10$, (e) $W=100$ and (f) $W=\infty$. At each $W$, we choose three windows to compute $P_1(s)$, plotted versus $s$ in the insets. The blue crosses correspond to numerical data, while the red lines give the theoretical predictions for the Gaussian unitary ensemble (GUE), semi-Poisson (S.~P.) and the Poisson distribution, for which $P(s)=\frac{32}{\pi^2}s^2 e^{-\frac{4}{\pi}s^2}$, $P(s)=4se^{-2s}$ and $P(s)=e^{-s}$, respectively. Data from $800$ realizations of disorder. } \label{oespsi1} \end{figure} We should also consider the problem of numerical noise in the ES obtained by singular value decomposition of the many-body eigenstates. The machine precision for double-precision variables is $2^{-53}$. This implies that singular values $\sqrt{\xi}$ below $2^{-53}$ are in danger of being corrupted by numerical noise, which corresponds to $\xi=-\ln\left(2^{-2\times 53}\right)\approx 73.5$ in the ES. Considering that the entries of the many-body eigenstates are complex numbers (two double-precision variables) in our systems and that the many-body eigenstates themselves also contain numerical error from Lanczos iterations, the numerical noise in the ES may appear at lower $\xi$.
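The number $73.5$ quoted above follows from one line of arithmetic (the factor of two appears because an ES level $\xi$ is defined from a squared singular value):

```python
import math

# Singular values below the double-precision floor 2**-53 are unreliable.
# An ES level corresponding to a singular value s sits at xi = -ln(s**2),
# so the noise floor in the spectrum is at:
s_floor = 2.0 ** -53
xi_floor = -math.log(s_floor ** 2)   # = 106 * ln 2
```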
In order to detect the critical $\xi$ at which the machine precision problem starts to dominate, we check the DOS $\overline{D}(\xi)$ of the ES at different disorder strengths. We expect that the ES levels generated by numerical noise always assemble around the same energy. This corresponds to a peak in the DOS that does not move with the change of disorder strength. In Fig.~\ref{oesdos}, we indeed observe such a situation deep in the localized phase. There is always a peak around $\xi\approx50$ that does not move for $W=100,1000$ and $\infty$, meaning that the machine precision problem has occurred at these disorder strengths. Therefore, we only focus on those ES levels with $\xi\leq40$ for safety. \begin{figure} \centerline{\includegraphics[width=\linewidth]{oesdos_N_9_v2.pdf}} \caption{The sample-averaged DOS $\langle\overline{D}(\xi)\rangle$ of the ground-state ES of $N=9$ electrons at $W=0.1,1,10,100,1000$ and $\infty$. $\langle\overline{D}(\xi)\rangle$ is averaged over the three ground states using $800$ samples.} \label{oesdos} \end{figure} \end{document}
\section{Gibbs' Canonical Ensemble} From Gibbs' 1902 text {\it Elementary Principles in Statistical Mechanics}, page 183 : \begin{quotation} ``If a system of a great number of degrees of freedom is microcanonically distributed in phase, any very small part of it may be regarded as canonically distributed.'' \end{quotation} Thus J. Willard Gibbs pointed out that the energy states of a ``small'' system weakly coupled to a larger ``heat reservoir'' with a temperature $T$ have a ``canonical'' distribution : $$ f(q,p) \propto e^{-{\cal H}(q,p)/kT} \ , $$ with the Hamiltonian ${\cal H}(q,p)$ that of the small system. Here $(q,p)$ represents the set of coordinates and momenta of that system. `` {\it Canonical} '' means simplest or prototypical. The heat reservoir coupled to the small system and responsible for the canonical distribution of energies is best pictured as an ideal-gas thermometer characterized by an unchanging kinetic temperature $T$ . The reservoir gas consists of many small-mass classical particles engaged in a chaotic and ergodic state of thermal and mechanical equilibrium with negligible fluctuations in its temperature and pressure. Equilibrium within this thermometric reservoir is maintained by collisions, as described by Boltzmann's equation. His ``H Theorem'' establishes the Maxwell-Boltzmann velocity distribution found in the gas. See Steve Brush's 1964 translation of Boltzmann's 1896 text {\it Vorlesungen \"uber Gastheorie}. Prior to fast computers, texts in statistical mechanics were relatively formal, with very few figures and only a handful of numerical results. In its more than 700 pages, Tolman's 1938 tome {\it The Principles of Statistical Mechanics} includes only two Figures. [ The more memorable one, a disk colliding with a triangle, appears on the cover of the Dover reprint volume. ] Today the results-oriented graphics situation is entirely different, as a glance inside any recent issue of {\it Science} confirms.
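As a minimal numerical illustration of the canonical distribution quoted above, take a harmonic oscillator with ${\cal H} = p^2/2m + \kappa q^2/2$ and $m = \kappa = kT = 1$ (an arbitrary convenience choice): $f(q,p)$ then factors into two unit Gaussians, and direct sampling reproduces equipartition.

```python
import numpy as np

# Canonical sampling for a harmonic oscillator with m = kappa = kT = 1:
# f(q, p) is proportional to exp(-q**2/2) * exp(-p**2/2),
# i.e. a product of two independent unit Gaussians.
rng = np.random.default_rng(4)
q = rng.standard_normal(200_000)
p = rng.standard_normal(200_000)

mean_ke = np.mean(p ** 2 / 2.0)   # equipartition predicts kT/2 = 0.5
mean_pe = np.mean(q ** 2 / 2.0)   # likewise 0.5 for the potential energy
```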
\section{Nos\'e-Hoover Canonical Dynamics -- Lack of Ergodicity} In 1984, with fast computers and packaged computer-graphics software already widely available, Shuichi Nos\'e set himself the task of generalizing molecular dynamics to mimic Gibbs' canonical distribution\cite{b1,b2}. In the end his approach was revolutionary. It led to a new form of heat reservoir described by a single degree of freedom with a logarithmic potential, rather than the infinitely-many oscillators or gas particles discussed in textbooks. Although the theory underlying Nos\'e's approach was cumbersome, Hoover soon pointed out a useful simplification\cite{b3,b4} : Liouville's flow equation in the phase space provides a direct proof that the ``Nos\'e-Hoover'' motion equations are consistent with Gibbs' canonical distribution. Here are the motion equations for the simplest interesting system, a single one-dimensional harmonic oscillator : $$ \dot q = (p/m) \ ; \ \dot p = -\kappa q - \zeta p \ ; \ \dot \zeta = [ \ (p^2/mkT) - 1 \ ]/\tau^2 \ . $$ The ``friction coefficient'' $\zeta$ stabilizes the kinetic energy $(p^2/2m)$ through integral feedback, extracting or inserting energy as needed to ensure a time-averaged value of precisely $(kT/2)$ . The parameter $\tau$ is a relaxation time governing the rate of the thermostat's response to thermal fluctuations. In what follows we will set all the parameters and constants $(m,\kappa,k,T,\tau)$ equal to unity, purely for convenience. Then the Nos\'e-Hoover equations have the form : $$ \dot q = p \ ; \ \dot p = -q -\zeta p \ ; \ \dot \zeta = p^2 - 1 \ [ \ {\rm NH} \ ] \ .
$$ Liouville's phase-space flow equation, likewise written here for a single degree of freedom, is just the usual continuity equation for the three-dimensional flow of a probability density in the ($q,p,\zeta$) phase space : $$ \dot f = (\partial f/\partial t) + \dot q(\partial f/\partial q) + \dot p(\partial f/\partial p) + \dot \zeta(\partial f/\partial \zeta) = -f(\partial \dot q/\partial q) -f(\partial \dot p/\partial p)-f(\partial \dot \zeta/\partial \zeta) \ . $$ This approach leads directly to the simple [ NH ] dynamics described above. It is easy to verify that Gibbs' canonical distribution needs only to be multiplied by a Gaussian distribution in $\zeta$ in order to satisfy Liouville's equation. $$ e^{-q^2/2}e^{-p^2/2}e^{-\zeta^2/2} \propto f_{NH} \propto f_Ge^{-\zeta^2/2} \longrightarrow (\partial f_{NH}/\partial t) \equiv 0 \ . $$ Hoover emphasized that the simplest thermostated system, a harmonic oscillator, does {\it not} fill out the entire Gibbs' distribution in $(q,p,\zeta)$ space. It is not ``ergodic'' and fails to reach all of the oscillator phase space. In fact, with {\it all} of the parameters ( mass, force constant, Boltzmann's constant, temperature, and relaxation time $\tau$ ) set equal to unity only six percent of the Gaussian distribution is involved in the chaotic sea\cite{b5}. See {\bf Figure 1} for a cross section of the Nos\'e-Hoover sea in the $p=0$ plane. The complexity in the figure, where the ``holes'' correspond to two-dimensional tori in the three-dimensional $(q,p,\zeta)$ phase space, is due to the close relationship of the Nos\'e-Hoover thermostated equations to conventional chaotic Hamiltonian mechanics with its infinitely-many elliptic and hyperbolic points. \begin{figure} \includegraphics[width=4.5in,angle=-90.]{fig1.ps} \caption{ The $p=0$ cross section of the chaotic sea for the Nos\'e-Hoover harmonic oscillator. 502 924 crossings of the plane are shown. 
The fourth-order Runge-Kutta integration used a timestep $dt = 0.0001$. A point was plotted whenever the product $p_{old}p_{new}$ was negative. } \end{figure} \section{More General Thermostat Ideas} New varieties of thermostats, some of them Hamiltonian and some not, appeared over the ensuing 30-year period following Nos\'e's work\cite{b6,b7,b8,b9,b10,b11,b12,b13,b14,b15, b16,b17,b18}. This list is by no means complete. Though important, simplicity is not the sole motivation for abandoning purely-Hamiltonian thermostats. Relatively recently we pointed out that Hamiltonian thermostats are incapable of generating or absorbing heat flow\cite{b6,b7}. The close connection between changing phase volume and entropy production guarantees that Hamiltonian mechanics is fundamentally inconsistent with irreversible flows. At equilibrium Bra\'nka, Kowalik, and Wojciechowski\cite{b8} followed Bulgac and Kusnezov\cite{b9,b10} in emphasizing that {\it cubic} frictional forces, $-\zeta^3p$, which also follow from a novel Hamiltonian, promote a much better coverage of phase space, as shown in {\bf Figure 2} . The many small holes in the $p=0$ cross section show that this approach also lacks ergodicity. \begin{figure} \includegraphics[width=4.5in,angle=-90.]{fig2.ps} \caption{ The $p=0$ cross section of the chaotic sea for an oscillator governed by Bra\'nka, Kowalik, and Wojciechowski's choice of the motion equation, $\ddot q = \dot p = -q -\zeta^3p \ ; \ \dot \zeta = p^2 - 1$ . 20 billion timesteps, with $dt = 0.0001$, resulted in 636 590 crossings of the $p=0$ section, using the integration procedure of Figure 1. } \end{figure} \subsection{Joint Control of Two Velocity Moments} Attempts to improve upon this situation led to a large literature with the most useful contributions applying thermostating ideas with two or more thermostat variables\cite{b9,b10}. 
An example, applied to the harmonic oscillator, was tested by Hoover and Holian\cite{b11} and found to provide all of Gibbs' distribution : $$ \dot q = p \ ; \ \dot p = -q - \zeta p - \xi p^3\ ; \ \dot \zeta = p^2 - 1 \ ; \ \dot \xi = p^4 - 3p^2 \ {\rm [ \ HH \ ]} $$ The two thermostat variables $(\zeta,\xi)$ together guarantee that both the second and the fourth moments of the velocity distribution have their Maxwell-Boltzmann values [ 1 and 3 ] . Notice that two-dimensional cross sections like those in the Figures are no longer useful diagnostics for ergodicity once the phase-space dimensionality exceeds three. \subsection{Joint Control of Coordinates and Velocities} In 2014 Patra and Bhattacharya\cite{b12} suggested thermostating both the coordinates and the momenta : $$ \dot q = p - \xi q \ ; \ \dot p = -q - \zeta p \ ; \ \dot \zeta = p^2 - 1 \ ; \ \dot \xi = q^2 - 1 \ {\rm [ \ SEPB \ ]} \ , $$ an approach already tried by Sergi and Ezra in 2001\cite{b13}. A slight variation of the Sergi-Ezra-Patra-Bhattacharya thermostat takes into account Bulgac and Kusnezov's observation that cubic terms favor ergodicity : $$ \dot q = p - \xi^3 q \ ; \ \dot p = -q - \zeta p \ ; \ \dot \zeta = p^2 - 1 \ ; \ \dot \xi = q^2 - 1 \ {\rm [ \ PB_{var} \ ]} \ . $$ These last two-thermostat equations appear to be good candidates for ergodicity, reproducing the second and fourth moments of $(q,p,\zeta,\xi)$ within a fraction of a percent. We have not carried out the thorough investigation that would be required to establish their ergodicity, as the single-thermostat models are not only simpler but also much more easily diagnosed because their sections are two-dimensional rather than three-dimensional. \section{Single-Thermostat Ergodicity} Combining the ideas of ``weak control'' and the successful simultaneous thermostating of coordinates and momenta\cite{b14} led to further trials attempting the weak control of two different kinetic-energy moments\cite{b15}.
One choice out of the hundreds investigated turned out to be successful for the harmonic oscillator : $$ \dot q = p \ ; \ \dot p = - q -\zeta( 0.05p + 0.32p^3) \ ; \ \dot \zeta = 0.05(p^2 - 1) + 0.32(p^4 - 3p^2) \ [ \ {\rm ``0532 \ Model''} \ ] \ . $$ These three oscillator equations passed all of the following tests for ergodicity : \noindent [ 1 ] The moments $\langle \ p^2 \ \rangle = 1 \ ; \ \langle \ p^4\ \rangle = 3 \ ; \ \langle \ p^6 \ \rangle = 15 $ were confirmed. \noindent [ 2 ] The independence of the largest Lyapunov exponent from the initial conditions indicated the absence of toroidal solutions. \noindent [ 3 ] The separation of two nearby trajectories had an average value of 6 :\\ $\langle \ (q_1-q_2)^2 + (p_1-p_2)^2 + (\zeta_1-\zeta_2)^2 \ \rangle = 2 + 2 + 2 = 6 $ . \noindent [ 4 ] The times spent at positive and negative values of $\{ \ q,p,\zeta \ \}$ were close to equal. \noindent [ 5 ] The times spent in regions with each of the 3! orderings of the three dependent variables were equal for long times. These five criteria were useful tools for confirming ergodicity. Evidently weak control is the key to efficient ergodic thermostating of oscillator problems. \begin{figure} \includegraphics[width=4.5in,angle=-90.]{fig3.ps} \caption{ $p=0$ cross section for a singly-thermostated quartic oscillator, with motion equations $ \ddot q = \dot p = -q^3 -\zeta p^3 \ ; \ \dot \zeta = p^4 - 3p^2$ . Runge-Kutta integration as in Figures 1 and 2 with 503 709 crossings of the $p=0$ plane. Several hundred singly-thermostated attempts failed to obtain canonical ergodicity for the quartic oscillator. } \end{figure} \section{A Fly in the Ointment, the Quartic Potential} The success in thermostating the harmonic oscillator led to like results for the simple pendulum but {\it not} for the quartic potential\cite{b15}. See {\bf Figure 3}. This somewhat surprising setback motivates the need for more work and is the subject of the Ian Snook Prize for 2016.
This Prize will be awarded to the author(s) of the most interesting original work exploring the ergodicity of single-thermostated statistical-mechanical systems. The systems are not at all limited to the examples of the quartic oscillator and the Mexican Hat potential but are left to the imagination and creativity of those entering the competition. \begin{figure} \includegraphics[width=2.0in,angle=-0.]{fig4.ps} \caption{ Shuichi Nos\'e ( 1951-2005 ) and Ian Snook ( 1945-2013 ) } \end{figure} \section{Conclusions -- Ian Snook Prize for 2016} It is our intention to reward the most interesting and convincing entry submitted for publication to Computational Methods in Science and Technology ( www.cmst.eu ) prior to 31 January 2017. The 2016 Ian Snook prize of \$500 will be presented to the winner in early 2017. An additional prize of the same amount will likewise be presented by the Institute of Bioorganic Chemistry of the Polish Academy of Sciences ( Poznan Supercomputing and Networking Center ). We are grateful for your contributions. This work is dedicated to the memories of our colleagues, Ian Snook ( 1945-2013 ) and Shuichi Nos\'e ( 1951-2005 ), shown in {\bf Figure 4} . \pagebreak
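A hedged numerical sketch of a moment test of the type [ 1 ] listed earlier, applied here to the plain [ NH ] oscillator rather than the 0532 model (whose moments converge more slowly): since $\dot \zeta = p^2 - 1$, any trajectory along which $\zeta$ stays bounded must time-average $p^2$ to unity. The initial condition and run length are illustrative choices.

```python
import numpy as np

def nh_rhs(s):
    # Nose-Hoover oscillator [ NH ] with all parameters set to unity.
    q, p, z = s
    return np.array([p, -q - z * p, p * p - 1.0])

def rk4_step(s, dt):
    # Classical fourth-order Runge-Kutta step, as used for the Figures.
    k1 = nh_rhs(s)
    k2 = nh_rhs(s + 0.5 * dt * k1)
    k3 = nh_rhs(s + 0.5 * dt * k2)
    k4 = nh_rhs(s + dt * k3)
    return s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

dt, nsteps = 0.01, 100_000             # 1000 time units in all
s = np.array([0.0, 5.0, 0.0])          # assumed to lie in the chaotic sea
p2_sum = 0.0
for _ in range(nsteps):
    s = rk4_step(s, dt)
    p2_sum += s[1] ** 2
mean_p2 = p2_sum / nsteps              # integral feedback drives this toward 1
```

Passing this single test does not, of course, establish ergodicity: the same time average is reproduced on the two-dimensional tori, which is exactly why the additional criteria [ 2 ]--[ 5 ] are needed.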
\section{Introduction} In recent decades acoustic techniques in solid state physics have demonstrated serious progress, especially by moving to the previously unattainable high-frequency band, up to terahertz frequencies \cite{ps-ultrasonics}. Considerable efforts in this direction, often called picosecond ultrasonics, are stimulated by the short-wavelength character of acoustic waves in this band and, in some cases, by the efficient coupling of acoustic strain to electronic, optical, and magnetic excitations in solids. This allows the application of high-frequency acoustic signals for testing and control of nanodimensional solid-state structures. From a practical point of view, the most serious restriction of picosecond ultrasonics is the use of ultrafast (femtosecond) lasers for both excitation of acoustic signals and detection of their coupling to a solid-state nanostructure, usually with the use of the pump-probe technique. In spite of the considerable improvement of the characteristics of such lasers and the growth of their availability, the development of a robust electrically controlled picosecond acoustic technique would be an essential breakthrough in the field. Speaking about high-frequency acoustic wave excitation, terahertz sasers could be a solution of the problem \cite{saser1,saser2,saser3}. For detection purposes, several options are available. Superconductor-based detectors have been in use since the 1970s. The robust bolometers that used to be widely employed for acoustic spectroscopy are currently less popular since, in contrast to optical methods, they are hardly sensitive to the spectrum of an acoustic signal. Superconductor contacts do possess spectral selectivity \cite{super-contacts}, but their fabrication is quite sophisticated. Semiconductor-based approaches are preferable.
Photo-electric acoustic wave detection by {\it p-i-n} diodes with a quantum well embedded into the {\it i} region has demonstrated high efficiency but, although based on an electric current measurement, requires the use of a femtosecond laser for temporal signal sampling \cite{pin}. An alternative, also semiconductor-based, method using Schottky diodes has been demonstrated recently \cite{schottky}. It is purely electrical and is based on the induction of a displacement current by a propagating acoustic wave. Considering such factors as the all-electrical detection principle, the use of a robust, well-studied device technology which can be integrated with various solid-state structures, and possible room-temperature applications, this method looks like an attractive candidate for wide use as a high-frequency acoustic detector. In this paper the main physical principles of Schottky diode acoustic detection are considered theoretically in detail. The developed model allows one to address such issues as the feasible magnitude of the electrical signal, fundamental restrictions on the detectable acoustic signal frequency, and possible ways of optimizing the diode structure. The paper is organized as follows. In Section I the expression for the accumulated electrical charge due to the acoustic strain perturbation is obtained for the important cases of piezoelectric and deformation potential coupling. It is then used in Section II for the analysis of the electrical response of the Schottky diode. Then, the conclusions follow. \section{Expression for the acoustic wave induced charge in a diode} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{fig1} \caption{The schematics of the energy diagram of an $n$-type Schottky diode with the used coordinate frame. $z=z_i$ corresponds to the metal-semiconductor interface. The insert shows the model electrical circuit which is used for the electrical detection of acoustic signals.
} \label{fig:1} \end{figure} The energy diagram of the Schottky diode is shown in Fig.\ref{fig:1} for the particular case of an n-doped semiconductor. We consider the range of external biases $V$ for which the Schottky barrier is much higher than the temperature measured in energy units. In this case the electrical current is small, and it is possible to assume that the electron distributions in the semiconductor and metal regions correspond to quasi-equilibrium and can be characterized by quasi-Fermi levels shifted by the value of the electrical bias $eV$, assuming a positive sign for the direct bias of the diode. While an acoustic wave propagates through the structure, the related strain induces a potential acting on electrons. Such a potential can be described within the deformation potential model \cite{Gantmakher-Levinson}. Redistribution of the charge carriers in this potential gives rise to a perturbation of the electric field in the system. In addition, in a piezoelectric semiconductor the electric field is perturbed due to the lattice polarization induced by the acoustic wave. The perturbed potential $\delta \varphi$ satisfies the Poisson equation, which in the one-dimensional limit, corresponding to an acoustic wave propagating along the $z$-axis normal to the flat metal-semiconductor interface, is \begin{equation} \label{eq:Poisson} \frac{d^2 \delta \varphi}{dz^2} =\frac{e}{\varepsilon_s \varepsilon_0} \delta n + \frac{1}{\varepsilon_s \varepsilon_0} \frac{d P_z}{dz}, \end{equation} where $\delta n$ is the perturbation of the electron concentration, $\varepsilon_s$ and $\varepsilon_0$ are the dielectric constant of the semiconductor and the absolute permittivity, and $P_z$ is the $z$-component of the piezoelectric polarization. The important assumption we are going to use is that all perturbations caused by the acoustic wave are much slower than the electron relaxation processes in both metal and semiconductor.
The latter can be characterized by the dielectric relaxation time $\varepsilon_{s,m} \varepsilon_0/\sigma$, where $\sigma$ is the conductivity and $\varepsilon_m$ is the lattice dielectric permittivity of the metal. This time is usually within the subpicosecond band for the semiconductor and even shorter for the metal. Thus, we may use a quasi-static approach in determining $\delta n$ while dealing with sub-terahertz acoustic waves. This means that at any time instant the electron density perturbation is the same as in the case of a static nonuniform strain distribution corresponding to this particular time. Specifically, dropping the time dependence for brevity, in the linear approach for the semiconductor region $z<z_i$ we have: \begin{equation} \label{eq:dn_s} \delta n (z) = e (\delta \varphi (z) - U_{DP}(z)/e - \delta V_s) \frac{dn_s}{dE_F}, \end{equation} where $n_s (E_F)$ is the dependence of the electron concentration on the Fermi energy, $U_{DP}$ is the deformation potential energy of electrons, and we allow a perturbation of the semiconductor reference potential, $\delta V_s$, caused by the acoustic wave. Note that the value of the derivative in the right-hand side of Eq.(\ref{eq:dn_s}) depends on the coordinate. Analogously, in the metal, $z>z_i$, we have \begin{equation} \label{eq:dn_m} \delta n (z) = e (\delta \varphi (z) - U_{DP}(z)/e - \delta V_m) \frac{dn_m}{dE_F}. \end{equation} With Eqs.(\ref{eq:dn_s},\ref{eq:dn_m}), the Poisson equation becomes a linear inhomogeneous differential equation. It is convenient to solve it separately for $z<z_i$ and $z>z_i$, applying then the boundary conditions at $z=z_i$.
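The subpicosecond claim for the semiconductor side can be checked with rough, assumed numbers for $n$-GaAs (the mobility value in particular is an illustrative order-of-magnitude input, not a fitted parameter):

```python
# Rough dielectric relaxation time eps_s * eps0 / sigma for n-GaAs.
# All material numbers below are assumed, order-of-magnitude inputs.
e = 1.602e-19       # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m
eps_s = 12.9        # GaAs lattice dielectric constant
n = 1e23            # electron concentration, m^-3 (i.e. 1e17 cm^-3 doping)
mu = 0.4            # assumed electron mobility, m^2/(V s)

sigma = n * e * mu            # conductivity, S/m
tau = eps_s * eps0 / sigma    # dielectric relaxation time, s (~1e-14 s)
```

A relaxation time of order $10^{-14}$~s is some two orders of magnitude below a picosecond, which supports the quasi-static treatment of sub-terahertz strain perturbations.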
Using the standard variation-of-constants method and taking into account that $\delta\varphi(z=-\infty) = \delta V_s$, $\delta\varphi(z=\infty) = \delta V_m$ , we obtain \begin{eqnarray} \label{eq:pt-sol} \delta \varphi (z) = \delta V_s +c_s \phi_{s2} (z) +\frac{\phi_{s2} (z)}{w_s} \int_z^{z_i} dz' \phi_{s1}(z') \left(k_s^2 (z') U_{DP}(z')/e- \frac{1}{\varepsilon\varepsilon_0} \frac{d P_z}{dz'}\right) + \\ \frac{\phi_{s1} (z)}{w_s} \int_{-\infty}^{z} dz' \phi_{s2}(z') \left(k_s^2 (z') U_{DP}(z')/e- \frac{1}{\varepsilon\varepsilon_0} \frac{d P_z}{dz'}\right), \mbox{~for~} z<z_i \nonumber \\ \delta \varphi (z) = \delta V_m +c_m \phi_{m2} (z) -\frac{\phi_{m2} (z)}{w_m} \int_{z_i}^z dz' \phi_{m1}(z') k_m^2 (z') U_{DP}(z')/e - \nonumber \\ \frac{\phi_{m1} (z)}{w_m} \int_z^{\infty} dz' \phi_{m2}(z') k_m^2 (z') U_{DP}(z')/e, \mbox{~for~} z>z_i. \nonumber \end{eqnarray} Here $c_s$ and $c_m$ are constants, $k_{s,m}^2 =e^2dn_{s,m}/dE_{F} (\varepsilon_{s,m} \varepsilon_0)^{-1}$, and $\phi_{s1,2}$ and $\phi_{m1,2}$ are fundamental solutions of the homogeneous versions of the equations for $\delta \varphi$: \begin{eqnarray} \label{eq:pt-hom} \frac{d^2 \phi_{s1,2}}{dz^2} =k_s^2 (z) \phi_{s1,2} \mbox{~for~} z<z_i \\ \frac{d^2 \phi_{m1,2}}{dz^2} =k_m^2 (z) \phi_{m1,2} \mbox{~for~} z>z_i \nonumber \end{eqnarray} These functions are selected such that $\phi_{s2} (-\infty) =0$, $\phi_{m2} (\infty) =0$, and the Wronskians in Eq.(\ref{eq:pt-sol}) are $w_{s,m} = \phi_{s,m1} \phi'_{s,m2} - \phi'_{s,m1} \phi_{s,m2}$. The constants $c_s$ and $c_m$ are determined via the boundary conditions at $z=z_i$, requiring continuity of the potential and of the electrical induction. Then, it is straightforward to calculate the perturbation of the accumulated charge, $\delta Q$: \begin{equation} \label{eq:charge-def} \delta Q=\varepsilon\varepsilon_0 S \int_{z_i}^{\infty} dz k_m^2 (-\delta \varphi (z) + U_{DP}(z)/e + \delta V_m), \end{equation} where $S$ is the diode cross-section.
After some algebra, from the expressions for the potential we obtain \begin{eqnarray} \label{eq:charge-expr} \delta Q=C \left( \delta V - V_{PZ}(z_i) + \int_{-\infty}^{z_i} dz G_s (z) \left( V_{DP}(z) +V_{PZ} (z)\right) - \right. \nonumber \\ \left. \int_{z_i}^\infty dz G_m (z) V_{DP} (z) \right), \end{eqnarray} where we introduced the effective potential due to the deformation potential acousto-electric coupling, $V_{DP} \equiv - U_{DP}/e$, the potential induced by the piezoelectric action of the acoustic wave, $V_{PZ}$, such that $V'_{PZ} = P_z/(\varepsilon_s \varepsilon_0)$, the kernel functions \begin{eqnarray} \label{eq:kernel} G_s (z)= \frac{1}{\phi'_{s2}(z_i)}\phi_{s2}(z)k_s^2(z), \\ G_m (z)= \frac{1}{w_m} \left( \phi_{m1}(z_i) \phi'_{m2} (z_i)-\frac{\varepsilon_s}{\varepsilon_m}\phi_{m2} (z_i) \phi'_{m1}\right) \frac{1}{\phi'_{m2}(z_i)} \phi_{m2}(z)k_m^2(z) \nonumber \end{eqnarray} and the diode capacitance $C=\varepsilon_s\varepsilon_0 S/L_{eff}$ with \begin{equation} \label{eq:thickness} L_{eff}=\frac{\phi_{s2}(z_i)}{\phi'_{s2}(z_i)} - \frac{\varepsilon_s}{\varepsilon_m} \frac{\phi_{m2}(z_i)}{\phi'_{m2}(z_i)}. \end{equation} In Fig.\ref{fig:kernel} we plot the spatial dependence of the kernel function $G_s$ calculated for GaAs Schottky diodes with doping levels $10^{17}$~cm$^{-3}$ and $10^{18}$~cm$^{-3}$ at temperatures of $10$~K and $300$~K. The steady-state potential profile and the screening parameter were determined with the standard approach assuming a low diode current \cite{Sze}. As we see, the charge is controlled by the perturbation near the edge of the depletion layer. This result is expected: indeed, the boundary conditions used assume no acoustic perturbation at $z=-\infty$. In this case, although a variation of strain inside the spatially uniform portion of the semiconductor leads to charge redistribution, it does not change the total charge there.
Only if the strain changes in the inhomogeneous region near the edge of the depletion layer does the total charge experience a perturbation. For comparison, we show the kernel function for a rough model with a step-like dependence of $k_s$, where it is set to zero in the depletion region and to the bulk-semiconductor value to the left of its edge, which is assumed to be infinitely sharp. This approximation allows an analytical determination of $G_s$. As we see, for the semiconductor this model is not very good, especially at room temperature, where the depletion region edge is not well-defined. However, it is good for the metal region, since there any energetic perturbation is much less than the Fermi energy. As a result, for the metal we can use the analytical expression for $G_m$, which is $G_m=k_m \exp (-k_m (z-z_i))$ for $z>z_i$. It is important to mention useful normalization conditions, which hold for any distribution of potential in the diode: \begin{eqnarray} \label{eq:kernel-normailzation} \int_{-\infty}^{z_i} G_s (z) dz = 1, \\ \int_{z_i}^{\infty} G_m (z) dz = \xi_m \equiv \frac{1}{w_m} \left( \phi_{m1}(z_i) \phi'_{m2} (z_i)-\frac{\varepsilon_s}{\varepsilon_m}\phi_{m2} (z_i) \phi'_{m1}\right) \nonumber \end{eqnarray} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{kernels} \caption{The kernel function $G_s$ for GaAs Schottky diodes with doping levels $10^{17}$~cm$^{-3}$ and $10^{18}$~cm$^{-3}$ and different temperatures. $z=0$ corresponds to the metal-GaAs interface, {\it i.e.} $z_i=0$. For comparison, the results for the model step-like spatial dependence of $k_s^2$ are shown. } \label{fig:kernel} \end{figure} Let us now discuss in some detail the deformation potential and piezoelectric couplings. For the semiconductor contribution this is straightforward. The deformation coupling describes the shift of the bottom of the conduction band minima. Its specific form depends on the crystal symmetry and on the momentum position of the conduction band minimum \cite{Ivchenko-Pikus}.
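As a quick check, the normalization of the analytic metal kernel $G_m = k_m \exp(-k_m(z-z_i))$ quoted above can be verified numerically; the value of $k_m$ below is an arbitrary illustrative choice, and for this simplest exponential kernel the integral equals unity:

```python
import numpy as np

# Numerical check that the simple exponential metal kernel integrates to 1,
# consistent with the normalization condition (with xi_m -> 1 in this limit).
# k_m and units are arbitrary illustrative choices.
k_m = 5.0                                         # inverse screening length
z_i = 0.0
z = np.linspace(z_i, z_i + 10.0 / k_m, 100001)    # integrate far past 1/k_m
dz = z[1] - z[0]
G_m = k_m * np.exp(-k_m * (z - z_i))
integral = np.sum(0.5 * (G_m[:-1] + G_m[1:])) * dz  # trapezoid rule
print(integral)                                     # close to 1
```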
In any case, $V_{DP}$ is proportional to strain. Below, to be specific, we will provide expressions for the case of GaAs with the $z$-axis parallel to its [111] crystallographic direction and a longitudinal acoustic wave propagating along $z$. In this case \begin{equation} \label{eq:DP111} V_{DP}= \frac{E_1}{e} u_{zz}, \end{equation} where $E_1$ is the deformation potential constant and $u_{zz}$ is the only nonzero component of strain. The piezoelectric potential is determined by the strain-induced piezoelectric polarization. For the mentioned geometry and acoustic wave polarization we obtain \begin{equation} \label{eq:PZ111} V_{PZ}= \frac{2 e_{14}}{\sqrt{3}\varepsilon \varepsilon_0} u_z, \end{equation} where $e_{14}$ is the piezoelectric constant of a cubic material and $u_z$ is the only nonzero component of displacement in the considered longitudinal acoustic wave. As we see, the piezoelectric effect induces charge not only through charge redistribution, but also through direct induction of potential (the second term in the brackets of Eq.(\ref{eq:charge-expr})). It is worth mentioning that for a different crystallographic orientation, crystal symmetry or acoustic wave polarization the general structure of the expressions for the deformation and piezoelectric potentials remains the same, with the former proportional to strain and the latter proportional to displacement. Of course, in some cases some contributions vanish. For example, in GaAs there is no piezoelectric coupling for an acoustic wave of any polarization propagating along the [100] direction; the deformation potential coupling is in this case absent for transverse waves. For the metal region, consideration of the coupling of the acoustic wave to electrons is more complicated than for the semiconductor. This is because the deformation potential in a metal is considered as a perturbation of the electron spectrum at some momentum point near the Fermi surface. Therefore, this value is, strictly speaking, momentum dependent.
While considering screening, a momentum-averaged value is introduced to determine the charge perturbation \cite{Gantmakher-Levinson,Abrikosov}. Its dependence on the strain components is determined by the symmetry of the metal Fermi surface. In fact, the corresponding constants are hardly known. This is because experiments on electron transport or ultrasound attenuation in metals provide a {\it screened} value of the electron-phonon coupling, averaged in a specific way \cite{Abrikosov}. In the following, we will use in the metal \begin{equation} \label{eq:DP-metal} V_{DP}= \frac{E_m}{e} u_{zz}, \end{equation} keeping in mind that the effective constant $E_m$ has a specific value depending on the metal crystallographic orientation (for metal single crystals) and on the acoustic wave polarization. By order of magnitude, one can expect $E_m$ to be a few electronvolts. It is worth mentioning that in general we should not discard a piezoelectric-like coupling in the metal. This is usually done when considering electron scattering by phonons, since efficient screening in metals cancels any macroscopic potential. However, the magnitude of the space charge induced under the screening does not vanish. In particular, this is seen from the expression for $G_m$, which provides a finite value for the induced charge regardless of the large value of $k_m$. In the following we do not include a piezoelectric contribution in the metal, since no information is available on its presence and strength. However, one has to keep in mind that high-frequency acoustic wave detection by a Schottky diode could reveal a possible piezoelectric-like coupling in metals.
In principle, it can be distinguished from the deformation potential since, similarly to the semiconductor case, it should be proportional to the displacement rather than to the strain. \section{Detection of the acoustic wave by the diode} Naturally, the signal induced by an acoustic wave passing through the diode depends both on its intrinsic characteristics and on the properties of the electrical circuit which includes the Schottky diode. We consider a simple model circuit consisting of the diode and a series resistance $R$ (see the inset of Fig.\ref{fig:1}). Using Eq.(\ref{eq:charge-expr}) we can easily obtain an equation for $\delta V$: \begin{eqnarray} \label{eq:circuit} \frac{d \delta V}{dt} +\frac{\delta V}{RC} =\frac{dS}{dt} \\ S=\left( V_{PZ}(z_i) - \int_{-\infty}^{z_i} dz G_s (z) \left( V_{DP}(z) +V_{PZ} (z)\right) + \xi_m V_{DP} ^{(m)}(z_i) \right), \nonumber \end{eqnarray} where the right-hand side can be considered as a source caused by the acoustic wave, the smallness of the screening length in the metal is taken into account, and the superscript $(m)$ indicates the deformation potential in the metal. The particular form of the acoustic signal depends on the kind of acoustic source. In the high-frequency band the most popular one is a bipolar strain pulse generated with the use of the picosecond ultrasonics technique \cite{ps-ultrasonics}. Alternatively, quasi-monochromatic acoustic waves can be produced by semiconductor superlattices illuminated by femtosecond laser pulses \cite{ps-ultrasonics} or by sasers \cite{saser1,saser2,saser3}. Since in the linear response regime any acoustic signal can be presented as a superposition of plane waves, in Eq.(\ref{eq:circuit}) we switch to the frequency domain and obtain \begin{equation} \label{eq:circuit-freq} \delta V_\omega = \frac{1}{1+i (\omega RC)^{-1}} S_\omega. \end{equation} The intrinsic detection properties of the diode are reflected by the frequency dependence of $S_\omega$.
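The circuit factor in the last equation is a first-order high-pass response with corner frequency $(RC)^{-1}$. A short numerical sketch, with assumed illustrative values of $R$ and $C$ (not values from the text):

```python
import numpy as np

# Magnitude of the circuit transfer factor |dV_omega / S_omega| from the
# frequency-domain equation: |1 / (1 + 1/(i*omega*R*C))|, a first-order
# high-pass filter. R and C are illustrative assumptions.
R = 50.0          # ohms
C = 1e-12         # farads (1 pF), so RC = 50 ps
omega = np.logspace(8, 13, 6)                    # rad/s
H = np.abs(1.0 / (1.0 + 1.0 / (1j * omega * R * C)))
for w, h in zip(omega, H):
    print(f"omega = {w:.1e} rad/s, |dV/S| = {h:.4f}")
```

Well above the corner the diode voltage follows the source $S$ directly; well below, the response falls off linearly with frequency.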
In fact, it is determined by the spatial broadening of the kernel function $G_s$. Assuming a plane-wave strain, we obtain \begin{equation} \label{eq:S_omega} S_\omega= - i \tilde{V}_{PZ} \left(1-J_s \exp(i\theta)\right) +\xi_m \tilde{V}_{DP}^{(m)} - \tilde{V}_{DP}^{(s)}J_s \exp(i\theta), \end{equation} where $\tilde{V}_{PZ}$ and $\tilde{V}_{DP}^{(s,m)}$ are the amplitudes of the piezoelectric and deformation potentials (with the superscript labeling the semiconductor and metal contributions). For the specific case of a [111]-oriented semiconductor (Eqs.(\ref{eq:DP111},\ref{eq:PZ111},\ref{eq:DP-metal})) we have $\tilde{V}_{PZ}=2 e_{14} u_{zz}^{(0)}s \left(\sqrt{3} \varepsilon_s \varepsilon_0 \omega\right)^{-1}$ and $\tilde{V}_{DP}^{(s,m)}= E_{s,m} u_{zz}^{(0)}/e$, where $s$ is the sound velocity and $u_{zz}^{(0)}$ is the strain amplitude. In Eq.(\ref{eq:S_omega}) the overlap integral is introduced: \begin{equation} \label{eq:overlap} J_s \exp (i\theta) =\int_{-\infty} ^{z_i} dz G_s(z) \exp (i\omega z/s). \end{equation} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{freq-sens-amp} \caption{The calculated overlap $J_s$ for various doping levels and temperatures. } \label{fig:overlap} \end{figure} The calculated frequency dependence of $J_s$ is shown in Fig.\ref{fig:overlap}. As expected, $J_s$ is suppressed at frequencies for which the acoustic wavelength is smaller than the spatial localization length of the kernel $G_s$. The frequency dependence of $\theta$, which is not shown in the graph, reflects the phase shift of the acoustic signal between the edge of the depletion layer and the metal-semiconductor interface and corresponds roughly to a $2\pi$ variation for a frequency increase of about $90$ and $26$~GHz for doping levels of $10^{18}$ and $10^{17}$~cm$^{-3}$, respectively. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{freq-sens-amp-pz} \caption{The calculated value of $J_s^{(PZ)}$ for various doping levels and temperatures.
The lines' legend is the same as in Fig.\protect\ref{fig:overlap}. } \label{fig:overlap-pz} \end{figure} If a piezoelectric coupling is present in the structure, it commonly exceeds the deformation one for frequencies below a hundred gigahertz. For a separate analysis of the piezoelectric contribution it is convenient to introduce the value $J_s^{(PZ)} \equiv |1 -J_s \exp (i \theta)|$. The frequency dependence of $J_s^{(PZ)}$ is shown in Fig.\ref{fig:overlap-pz}. Naturally, it shows resonances corresponding to in-phase perturbations at the edge of the semiconductor depletion region and at the metal-semiconductor interface. The positions of these resonances can be easily predicted, since the piezoelectric contribution to the diode response is determined by the parameters of the semiconductor only, which are usually well known. It is worth mentioning a special case of piezoelectric coupling and a relatively low-frequency acoustic wave, for which the acoustic wavelength is larger than both the broadening of $G_s$ and the thickness of the depletion layer. Here, $S$ becomes proportional to strain. So, for the particular case of Eq.(\ref{eq:PZ111}) we have $S= 2 e_{14} u_{zz} L_{eff}(\sqrt{3}\varepsilon_0 \varepsilon_s)^{-1}$. If, in addition, $(RC)^{-1}$ considerably exceeds the characteristic acoustic frequency, then $\delta V = S$. In other words, the electrical signal directly measures the value of strain in the near-interface region. For doping $10^{18}$~cm$^{-3}$, this approach can be valid for frequencies up to several tens of gigahertz. In diodes where piezoelectric coupling is absent, for example those employing non-piezoelectric semiconductors, like Si or Ge, or grown along certain crystallographic directions, like [001] GaAs, the situation is different. The resonances are expected in this case as well, but their location is difficult to predict because of the unknown value of the effective deformation potential constant in the metal.
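For the rough step-model kernel discussed earlier, the overlap integral $J_s$ can be evaluated in closed form and checked numerically. The sketch below uses dimensionless units and an exponential kernel placed at a depletion edge $z_d = 0$, both illustrative assumptions rather than the full calculation of the text:

```python
import numpy as np

# Overlap integral J_s for a step-model kernel G_s(z) = k exp(k z), z < 0.
# Analytically, int_{-inf}^{0} k e^{k z} e^{i w z / s} dz = k / (k + i w/s),
# so J_s = 1 / sqrt(1 + (w/(k s))^2). k, s, w are dimensionless assumptions.
k = 1.0            # inverse localization length of the kernel
s = 1.0            # sound velocity
w = 2.0            # acoustic frequency
z = np.linspace(-40.0, 0.0, 400001)
dz = z[1] - z[0]
integrand = k * np.exp(k * z) * np.exp(1j * w * z / s)
J_numeric = abs(np.sum(0.5 * (integrand[:-1] + integrand[1:])) * dz)
J_analytic = 1.0 / np.sqrt(1.0 + (w / (k * s)) ** 2)
print(J_numeric, J_analytic)   # the two agree
```

The closed form makes the suppression explicit: $J_s$ falls off once the acoustic wavelength $2\pi s/\omega$ drops below the kernel localization length $1/k$.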
For higher frequencies the deformation potential coupling is the most efficient. In addition, as we see from Fig.\ref{fig:overlap}, the semiconductor contribution is suppressed at high frequencies. However, the metal contribution persists for any realistic frequency. This means that the actual frequency restrictions are set by the ability of high-frequency electronics to measure the high-frequency electrical signals. Summarizing the obtained results, we can conclude that acoustic wave detection by Schottky diodes can be described by a simple model where the electrical response of the diode is caused by the displacement current induced by electrons screening the strain-induced perturbation. The actual upper frequency limit is set by the parameters of the current-registering equipment rather than by internal diode properties, owing to the fast electronic response and the small screening length in the metal contact of the diode. On the other hand, the semiconductor-side signal contributions are efficient, for common diode structures, at frequencies below a few hundred gigahertz. These results will be an important guide for the interpretation of the measured electrical response of a diode to an acoustic perturbation as well as for the optimization of Schottky diode acoustic wave detectors. \begin{acknowledgments} \end{acknowledgments}
\section{Introduction} In his book ``Proximal Flows''~\cite[Section~\RNum{2}.3, p.\ 19]{glasner1976proximal} Glasner defines the notion of a {\em strongly amenable group}: A group is strongly amenable if each of its proximal actions on a compact space has a fixed point. A continuous action $G \curvearrowright X$ of a topological group on a compact Hausdorff space is proximal if for every $x, y \in X$ there exists a net $\{g_n\}$ of elements of $G$ such that $\lim_n g_n x = \lim_n g_n y$. Glasner shows that virtually nilpotent groups are strongly amenable and that non-amenable groups are not strongly amenable. He also gives examples of amenable --- in fact, solvable --- groups that are not strongly amenable. Glasner and Weiss~\cite{glasner2002minimal} construct proximal minimal actions of the group of permutations of the integers, and Glasner constructs proximal flows of Lie groups~\cite{glasner1983proximal}. To the best of our knowledge there are no other such examples known. Furthermore, there are no other known examples of minimal proximal actions that are not also {\em strongly proximal}. An action $G \curvearrowright X$ is strongly proximal if the orbit closure of every Borel probability measure on $X$ contains a point mass measure. This notion, as well as that of the related Furstenberg boundary~\cites{furstenberg1963poisson, furstenberg1973boundary, furman2003minimal}, has been the object of a much larger research effort, in particular because a group is amenable if and only if all of its strongly proximal actions on compact spaces have fixed points. Richard Thompson's group $F$ has been alternatively ``proved'' to be amenable and non-amenable (see, e.g.,~\cite{cannon2011thompson}), and the question of its amenability is currently unresolved. In this paper we pursue the less ambitious goal of showing that it is not strongly amenable, and do so by directly constructing a proximal action that has no fixed points.
This action does admit an invariant measure, and thus does not provide any information about the amenability of $F$. It is a new example of a proximal action which is not strongly proximal. \vspace{0.3in} The authors would like to thank Eli Glasner and Benjamin Weiss for enlightening and encouraging conversations. \section{Proofs} Let $F$ denote Thompson's group $F$. In the representation of $F$ as a group of piecewise linear transformations of $\mathbb{R}$ (see, e.g.,~\cite[Section 2.C]{kaimanovich2016thompson}), it is generated by $a$ and $b$, which are given by \begin{align*} a(x) &= x-1\\ b(x) &= \begin{cases} x& x \leq 0\\ x/2& 0 \leq x \leq 2\\ x-1& 2 \leq x. \end{cases} \end{align*} The set of dyadic rationals $\Gamma =\mathbb{Z}[\frac{1}{2}]$ is the orbit of $0$. The Schreier graph of the action $F \curvearrowright \Gamma$ with respect to the generating set $\{a,b\}$ is shown in Figure~\ref{fig:schreier} (see~\cite[Section 5.A, Figure 6]{kaimanovich2016thompson}). The solid lines denote the $a$ action and the dotted lines denote the $b$ action; self-loops (i.e., points stabilized by a generator) are omitted. This graph consists of a tree-like structure (the blue and white nodes) with infinite chains attached to each node (the red nodes). \begin{figure}[ht] \centering \includegraphics[scale=0.6]{schreier.pdf} \caption{\label{fig:schreier}The action of $F$ on $\Gamma$.} \end{figure} Equipped with the product topology, $\{-1,1\}^\Gamma$ is a compact space on which $F$ acts continuously by shifts: \begin{align} \label{shift-action} [f x](\gamma) = x(f^{-1}\gamma). \end{align} \begin{proposition} \label{prop:pre_proximal} Let $c_{-1}, c_{+1} \in \{-1,1\}^{\Gamma}$ be the constant functions. Then for any $x \in \{-1,1\}^{\Gamma}$ it holds that at least one of $c_{-1},c_{+1}$ is in the orbit closure $\overline{F x}$.
\end{proposition} \begin{proof} It is known that the action $F \curvearrowright \Gamma$ is highly transitive (Lemma 4.2 in~\cite{cannon1994notes}), i.e., for every finite $V, W \subset \Gamma$ of the same size there exists an $f \in F$ such that $f(V)=W$. Let $x\in \{-1,1\}^{\Gamma}$. For at least one of $-1$ and $1$, say $\alpha$, there are infinitely many $\gamma \in \Gamma$ with $x(\gamma)=\alpha$. Given a finite $W \subset \Gamma$, choose a $V \subset \Gamma$ of the same size such that $x(\gamma) = \alpha$ for all $\gamma \in V$. Then there is some $f \in F$ with $f(V) = W$, and so $f x$ takes the value $\alpha$ on $W$. Since $W$ is arbitrary, we have that $c_\alpha$ is in the orbit closure of $x$. \end{proof} Given $x_1,x_2 \in \{-1,1\}^{\Gamma}$, let $d$ be their pointwise product, given by $d(\gamma) = x_1(\gamma) \cdot x_2(\gamma)$. By Proposition~\ref{prop:pre_proximal} there exists a sequence $\{f_n\}$ of elements in $F$ such that either $\lim_n f_n d = c_{+1}$ or $\lim_n f_n d = c_{-1}$. In the first case $\lim_n f_n x_1 = \lim_n f_n x_2$, while in the second case $\lim_n f_n x_1 = -\lim_n f_n x_2$, and so this action resembles a proximal action. In fact, by identifying each $x \in \{-1,1\}^{\Gamma}$ with $-x$ one attains a proximal action, and indeed we do this below. However, this action has a fixed point --- the constant functions --- and therefore does not suffice to prove our result. We spend the remainder of this paper deriving a new action from this one. The new action retains proximality but does not have fixed points. Consider the path $(\rfrac{1}{2}, \rfrac{1}{4},\rfrac{1}{8},\ldots,\rfrac{1}{2^n},\ldots)$ in the Schreier graph of $\Gamma$ (Figure~\ref{fig:schreier}); it starts in the top blue node and follows the dotted edges through the blue nodes on the rightmost branch of the tree.
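The generators $a$ and $b$ defined above are easy to experiment with directly; a minimal sketch of their action on dyadic rationals, using exact arithmetic (illustration only, not part of the argument):

```python
from fractions import Fraction

# The generators of Thompson's group F as piecewise linear maps of the line,
# transcribed from the definitions above, together with their inverses.
def a(x):
    return x - 1

def b(x):
    if x <= 0:
        return x
    elif x <= 2:
        return x / 2
    else:
        return x - 1

def a_inv(x):
    return x + 1

def b_inv(x):
    if x <= 0:
        return x
    elif x <= 1:
        return 2 * x
    else:
        return x + 1

# The orbit of 0 under <a, b> consists of dyadic rationals, e.g.:
x = Fraction(0)
x = a_inv(x)        # 0 -> 1
x = b(x)            # 1 -> 1/2
x = b(x)            # 1/2 -> 1/4
print(x)            # 1/4
```

Iterating `b` on `a_inv(0)` walks down the path $(\rfrac{1}{2}, \rfrac{1}{4}, \rfrac{1}{8}, \ldots)$ considered above.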
The pointed Gromov-Hausdorff limit of this sequence of rooted graphs\footnote{The limit of a sequence of rooted graphs $(G_n,v_n)$ is a rooted graph $(G,v)$ if each ball of radius $r$ around $v_n$ in $G_n$ is, for $n$ large enough, isomorphic to the ball of radius $r$ around $v$ in $G$ (see, e.g.,~\cite[p.\ 1460]{aldous2007processes}).} is given in Figure~\ref{fig:schreier2}, and hence is also a Schreier graph of some transitive $F$-action $F \curvearrowright F/K$. In terms of the topology on the space $\mathrm{Sub}_F \subset \{0,1\}^F$ of the subgroups of $F$, the subgroup $K$ is the limit of the subgroups $K_n$, where $K_n$ is the stabilizer of $\rfrac{1}{2^n}$. It is easy to verify that $K$ is the subgroup of $F$ consisting of the transformations that stabilize $0$ and have right derivative $1$ at $0$ (although this fact will not be important). Let $\Lambda = F/K$. \begin{figure}[ht] \centering \includegraphics[scale=0.6]{schreier2.pdf} \caption{\label{fig:schreier2}The action of $F$ on $\Lambda$.} \end{figure} We can naturally identify with $\mathbb{Z}$ the chain of black nodes at the top of $\Lambda$ (see Figure~\ref{fig:schreier2}). Let $\Lambda'$ be the subgraph of $\Lambda$ in which the dotted edges connecting the black nodes have been removed. Given a black node $n \in \mathbb{Z}$, denote by $T_n$ the connected component of $n$ in $\Lambda'$; this includes the black node $n$, the chain that can be reached from it using solid edges, and the entire tree that hangs from it. Each graph $T_n$ is isomorphic to the Schreier graph of $\Gamma$, and so the graph $\Lambda$ is a covering graph of $\Gamma$ (in the category of Schreier graphs). Let \begin{align*} \Psi \colon \Lambda \to \Gamma \end{align*} be the covering map. That is, $\Psi$ is a graph isomorphism when restricted to each $T_n$, with the black nodes in $\Lambda$ mapped to the black node $0 \in \Gamma$. Using the map $\Psi$ we give names to the nodes in $\Lambda$.
Denote the nodes in $T_0$ by $\{(0, \gamma) \,:\, \gamma \in \Gamma\}$ so that $\Psi(0,\gamma) = \gamma$. Likewise, in each $T_n$ denote by $(n,\gamma)$ the unique node in $T_n$ that $\Psi$ maps to $\gamma$. Hence we identify $\Lambda$ with \begin{align*} \mathbb{Z} \times \Gamma = \{(n, \gamma)\,:\, n \in \mathbb{Z}, \gamma \in \Gamma\} \end{align*} and the $F$-action is given by \begin{align} \label{a-action-on-Lambda} a (n,\gamma) &= (n, a \gamma)\\ \label{b-action-on-Lambda} b (n,\gamma) &= \begin{cases} (n, b \gamma)&\mbox{if }\gamma \neq 0\\ (n+1, 0)&\mbox{if }\gamma= 0 \end{cases} \end{align} Equip $\{-1,1\}^\Lambda$ with the product topology to get a compact space. As usual, the $F$-action on $\Lambda$ (given explicitly in~(\ref{a-action-on-Lambda}) and~(\ref{b-action-on-Lambda})) defines a continuous action on $\{-1,1\}^\Lambda$. Consider $\pi:\{-1,1\}^\Gamma \to \{-1,1\}^\Lambda$, given by $\pi(x)(n, \gamma) = (-1)^n x(\gamma)$. Let $Y = \pi(\{-1,1\}^\Gamma) \subseteq \{-1,1\}^\Lambda$. \begin{claim} \label{clm:compact-and-invariant} $Y$ is compact and $F$-invariant. \end{claim} \begin{proof} $\pi$ is injective and continuous, so $Y = \pi(\{-1,1\}^\Gamma) \subseteq \{-1,1\}^\Lambda$ is compact and homeomorphic to $\{-1,1\}^\Gamma$. Moreover, $Y$ is invariant under the action of $F$, because $a^{\pm 1}\pi(x) = \pi (a^{\pm 1}x)$ and $b^{\pm 1}\pi(x) = \pi(b^{\pm 1}\bar{x})$, where $\bar{x}(\gamma) = \begin{cases} x(\gamma)&\mbox{if }\gamma \neq 0\\ -x(\gamma)&\mbox{if } \gamma = 0 \end{cases}$. \end{proof} The last $F$-space we define is $Z$, the set of pairs of mirror-image configurations in $Y$: \begin{align} \label{the-space-Z} Z = \left\{\{y, -y\}\,:\,y\in Y \right\}. \end{align} It is clear that, equipped with the quotient topology, $Z$ is a compact Hausdorff $F$-space. Furthermore, we now observe that $Z$ admits an invariant measure. Consider the i.i.d.\ Bernoulli $1/2$ measure on $\{-1,1\}^\Gamma$, i.e.
the unique Borel measure on $\{-1,1\}^\Gamma$ for which \begin{align*} X_\gamma \colon & \{-1,1\}^\Gamma \to \{0, 1\},\quad x\mapsto \frac{x(\gamma)+1}{2} \end{align*} are independent Bernoulli $1/2$ random variables for all $\gamma \in \Gamma$. Clearly, it is an invariant measure, and hence it is pushed forward to an invariant measure on $Y$, and then on $Z$. In particular, this shows that $Z$ is not strongly proximal. \begin{claim} \label{clm:no-fixed-points} The action $F \curvearrowright Z$ does not have any fixed points. \end{claim} \begin{proof} Pick $\hat{y} = \{y, -y\}\in Z$. We have $[by](0, -1) = y(0, -1) \neq -y(0, -1)$, so $by\neq -y$. Similarly, $[b y](0, 0) = y(-1, 0) = -y(0, 0) \neq y(0, 0)$, and so $by \neq y$. Hence $b\hat{y}\neq \hat{y}$. \end{proof} \begin{proposition} \label{thm:proximal} The action $F \curvearrowright Z$ is proximal. \end{proposition} \begin{proof} Let $\hat{y_1}=\{y_1, -y_1\}$ and $\hat{y_2}=\{y_2,-y_2\}$ be two points in $Z$, and let $y_i=\pi(x_i)$. Let $x_1 \cdot x_2$ denote the pointwise product of $x_1$ and $x_2$. By Proposition~\ref{prop:pre_proximal} there is a sequence of elements $\{f_n\}_n$ in $F$ such that $\{f_n (x_1 \cdot x_2)\}_n$ tends to either $c_{-1}$ or $c_{+1}$ in $\{-1,1\}^\Gamma$. Since $Y$ is compact, we may assume that $\{f_n y_1\}_n$ and $\{f_n y_2\}_n$ have limits, by passing to a subsequence if necessary. It is straightforward to check that $f_n y_1 \cdot f_n y_2 = f_n\pi(x_1)\cdot f_n\pi(x_2)=\pi(f_n x_1) \cdot \pi(f_n x_2)$. So, for any $(m,\gamma) \in \Lambda$: \begin{align*} [f_n y_1 \cdot f_n y_2](m,\gamma) &= [\pi(f_n x_1) \cdot \pi(f_n x_2)](m, \gamma)\\ &= (-1)^{2m}\;[f_n x_1](\gamma)\;[f_n x_2](\gamma)\\ &=[f_n x_1 \cdot f_n x_2](\gamma) = [f_n (x_1 \cdot x_2)](\gamma) \end{align*} So $\lim_n f_n y_1 = \pm \lim_n f_n y_2$, which implies $\lim_n f_n \hat{y_1} = \lim_n f_n \hat{y_2}$. \end{proof} \begin{theorem} Thompson's group $F$ is not strongly amenable.
\end{theorem} \begin{proof} Since the space $Z$ we constructed above is proximal (Proposition~\ref{thm:proximal}), and has no fixed points (Claim~\ref{clm:no-fixed-points}), we conclude that $F$ has a proximal action with no fixed points, so $F$ is not strongly amenable. \end{proof}
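The $F$-action on $\Lambda = \mathbb{Z} \times \Gamma$ used in the construction can also be sketched concretely; the functions below transcribe the displayed formulas for the generators (a minimal illustration, not part of the proof):

```python
from fractions import Fraction

# The F-action on Lambda = Z x Gamma: gamma is a dyadic rational (Fraction),
# n an integer indexing the tree T_n, following the displayed equations.
def a_gamma(x):            # generator a on Gamma
    return x - 1

def b_gamma(x):            # generator b on Gamma
    if x <= 0:
        return x
    elif x <= 2:
        return x / 2
    else:
        return x - 1

def a_lambda(p):
    n, g = p
    return (n, a_gamma(g))

def b_lambda(p):
    n, g = p
    if g != 0:
        return (n, b_gamma(g))
    return (n + 1, Fraction(0))    # b shifts along the chain of black nodes

print(b_lambda((0, Fraction(0))))       # moves from T_0 to the next tree T_1
print(b_lambda((0, Fraction(1, 2))))    # stays inside T_0
```

The case split in `b_lambda` is exactly what makes $\Lambda$ a covering of $\Gamma$: away from the black nodes the generators act as on $\Gamma$, while `b` moves between the trees $T_n$.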
\section*{Abstract}{\small We consider the full evolution of a spherical supernova remnant. We start by calculating the early-time ejecta-dominated stage, continue through the different phases of interaction with the circumstellar medium, and end with the dissipation and merger phase. The physical connection between the phases reveals new results. One is that the blast wave radius during the adiabatic phase is significantly smaller than it would be if one did not account for the blast wave interaction with the ejecta. \vspace{10mm} \normalsize} \end{minipage} \section{Introduction} $\,\!$\indent A supernova remnant (SNR), the aftermath of a supernova explosion, is an important phenomenon of study in astrophysics. The typical $10^{51}$ erg of energy released in the explosion is transferred primarily into the interstellar medium during the course of the evolution of a SNR. SNR are also valuable as tools to study the evolution of stars, the evolution of the Galaxy, and the evolution of the interstellar medium. A SNR emits in X-rays from its hot shocked gas, in the infrared from heated dust, and in the radio continuum. The latter is via synchrotron emission from relativistic electrons accelerated at the SNR shock. The evolution of a single SNR can be studied and calculated using a hydrodynamics code. However, to study the physical conditions of large numbers of SNR, it is desirable to have analytic methods to obtain the input parameters needed to run a detailed hydrodynamic simulation. This short paper describes the basic ideas behind the analytic methods, the creation of software to carry out the calculations, and some new results of the calculations. \section{Theory and calculation methods} $\,\!$\indent The general time sequence of events that occur after a supernova explosion, which comprise the supernova remnant, can be divided into a number of phases of evolution (Chevalier, 1977). These are summarized as follows.
The ejecta-dominated (ED) phase is the earliest phase, when the ejecta from the explosion are not yet strongly decelerated by interaction. Self-similar solutions were found for the ejecta phase for the case of a supernova with a power-law ejecta density profile occurring in a circumstellar medium with a power-law density profile (Chevalier, 1982). Solutions were given for ejecta power-law indices of 7 and 12, and circumstellar medium power-law indices of 0 and 2. The latter correspond to a uniform circumstellar medium and to one produced by a stellar wind with a constant mass-loss rate. The non-self-similar evolution from the ED phase to the Sedov-Taylor (ST) self-similar phase was treated by Truelove and McKee (1999). They found the so-called unified solution for the evolution of the forward and reverse shock waves during this phase. The Sedov-Taylor (ST) self-similar phase is that for which the shocked ISM mass dominates over the shocked ejecta mass and for which radiative energy losses from the hot interior supernova remnant gas remain negligible. These solutions are reviewed in numerous works, and are based on the original work on blast waves initiated by instantaneous point energy injection in a uniform medium (Taylor, 1946; Sedov, 1946). The next stage occurs when radiative losses from the post-shock gas become important enough to affect the post-shock pressure and the dynamics of expansion of the supernova remnant. This phase is called the pressure-driven snowplow (PDS) phase. Cooling sets in most rapidly for the interior gas closest to the outer shock front, so that a thin cold shell forms behind the shock. Interior to the thin shell, the gas remains hot and has significant pressure, so it continues to expand the shell. The shell decelerates because it continually gains mass while being acted upon by the interior pressure.
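For reference, the standard ST blast wave scaling $R = \xi\,(E t^2/\rho)^{1/5}$ mentioned above can be evaluated in a few lines; all parameter values below (explosion energy, ambient density, age, and $\xi \approx 1.15$ for $\gamma = 5/3$) are illustrative assumptions, not values from this work:

```python
# Standard Sedov-Taylor blast wave radius R = xi * (E t^2 / rho)^(1/5).
# Illustrative parameters: E = 1e51 erg, n_H = 1 cm^-3 with mean molecular
# weight 1.4, age 1e4 yr, xi ~ 1.15 for an adiabatic index gamma = 5/3.
xi = 1.15
E = 1e51                      # explosion energy, erg
rho = 1.4 * 1.67e-24          # ambient density, g/cm^3 (n_H = 1 cm^-3)
t = 1e4 * 3.156e7             # 1e4 yr in seconds

R_cm = xi * (E * t**2 / rho) ** 0.2
R_pc = R_cm / 3.086e18        # convert cm to parsecs
print(f"R = {R_pc:.1f} pc")   # on the order of 10 pc
```

This is the classical radius that, per the result reported below in this paper, overestimates the true blast wave radius once the interaction with the ejecta is accounted for.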
Here we follow the review of this phase of evolution by Cioffi, McKee and Bertschinger (1988), which also compares the analytic solutions to numerical hydrodynamic solutions for verification. When the interior pressure has dropped enough, it no longer influences the evolution of the massive cool shell. After this time, the supernova remnant is in the momentum-conserving shell (MCS) phase. The shell slows down according to the increase in swept-up mass from the interstellar medium. The final fate of a supernova remnant is merger with the interstellar medium, when the shock velocity drops low enough that the expanding shell is no longer distinguishable from random motions in the interstellar medium. To create an analytic model, or its realization in software, the different phases of evolution were joined. This problem is not simple, as pointed out in the work of Truelove and McKee (1999). The evolution of the SNR is determined by the distribution of mass, pressure and velocity within the SNR and by the shock jump conditions wherever there are shocks. We follow methods similar to those of Truelove and McKee (1999) to ensure that the SNR evolution has continuous shock velocity and radius with time and closely follows that of more detailed hydrodynamic calculations. \section{Results} $\,\!$\indent Analytic solutions have been created which cover the evolution of the SNR from the early ED phase through the ED-ST transition, the ST phase, the ST to PDS transition, and the final dissolution of the SNR. We have taken care to properly join the different phases, as noted above. These solutions allow variation of the input physical parameters, such as explosion energy, ejected mass, ejecta and circumstellar medium density profiles, and age. The numerical implementation of the solutions provides various output quantities, such as forward and reverse shock radii, and shock velocities and temperatures. These can be compared to the observed properties of a given SNR.
Adjustment of the input parameters to match the observed properties yields estimates of the physical properties of the SNR, and also allows estimates of the uncertainties in these properties. One of the new results from the analytic calculations is that the shock radius at any given time during the ST phase is significantly less than it is for the standard analytic ST solution. The reduced shock radius is a real physical effect and is understood as caused by interaction of the reverse shock wave with the (initially unshocked) ejecta. This result has not been pointed out previously, and will change SNR parameter estimates that have been made with the standard ST solution. Results of some of the calculations with the full-evolution model are shown in Figures 1 and 2. Figure 1 shows the forward and reverse shock radii and velocities for the ED phase, the ED to ST transition and the ST phase, for a SNR in a uniform circumstellar medium, with the parameters listed in the figure caption. Figure 2 shows similar plots for a SNR in a stellar wind circumstellar medium. \begin{figure} \center \includegraphics[width=\textwidth]{s0n7.JPG} \caption{Left panel: forward and reverse shock radius vs. time for a SNR with energy $E=10^{51}$~erg, ejected mass $2M_{\odot}$, in a uniform circumstellar medium ($s=0$) with density 1 cm$^{-3}$ and temperature 100 K. The ejecta density power-law index is $n=7$. Right panel: forward and reverse shock velocity vs. time.} \end{figure} \begin{figure} \center \includegraphics[width=\textwidth]{s2n7.JPG} \caption{Left panel: forward and reverse shock radius vs. time for a SNR with energy $E=10^{51}$~erg, ejected mass $2M_{\odot}$, in a stellar wind ($s=2$) with wind velocity 30 km/s and mass-loss rate $10^{-6}M_{\odot}$/yr. The ejecta density power-law index is $n=7$. Right panel: forward and reverse shock velocity vs.
time.} \end{figure} \small \section*{Acknowledgments} Support for this work was provided by the Natural Sciences and Engineering Research Council of Canada. \section*{References} \bibliographystyle{aj} \small
\section{Introduction} The motivation for this note was the observation that the basic recursion relation for the modified Bessel function $K$ [1], $$K_0(z)+\left(\frac{2}{z}\right)K_1(z)=K_2(z)$$ can be expressed as the symmetry with respect to $m=0$ and $n=1$ of the sum $$\sum_{k=0}^n K_{k-m-1}(z)\left(\frac{z}{2}\right)^{k+m}.\eqno(1)$$ The attempt to generalize this to arbitrary $m$ and $n$ led to our principal result\vskip .1in \noindent {\bf Theorem 1}\vskip .1in For positive integers $m$ and $n$ the expression $$(n+1)!\sum_{k=0}^n\frac{1}{k!}{m+k+1\choose{m}}K_{k-m-1}(z)\left(\frac{z}{2}\right)^{k+m}\eqno(2)$$ is symmetric with respect to $m$ and $n$.\vskip .1in \noindent This will be proven in the following section and some similar results presented in the concluding paragraph. \section{Calculation} Consider the sum $$F(n,p,q)=\frac{(n+q+1)!}{q!(q+1)!}\sum_{k=0}^p \frac{(q+k+1)!}{(k+1)!}\frac{(n+k)!}{k!}\eqno(3)$$ for $p,q,n\in {\cal{Z}}^+$. One finds, e.g., that $$F(1,p,q)=\frac{(p+q+2)!}{p!q!}$$ $$F(2,p,q)=\frac{(p+q+2)!}{p!q!}[6+2(p+q)+pq]$$ and by induction on $n$ one obtains \vskip .1in \newpage \noindent {\bf Lemma 1}\vskip .1in $$\frac{p!q!}{(p+q+2)!}F(n,p,q)$$ is a polynomial $P(p,q)=P(q,p)$ of degree $n-1$ in $p$ and $q$. Next, by interchanging the order of summation and invoking Lemma 1, one has \vskip .1in \noindent {\bf Lemma 2}\vskip .1in $$G(p,q,z)=\sum_{n=0}^{\infty}\frac{1}{(n!)^2}F(n,p,q)z^n=\sum_{k=0}^p {q+k+1\choose{q}}\;_2F_1(k+1,q+2;1;z)$$ is analytic for $ |z|<1$ and symmetric with respect to $p$ and $q$. \vskip .1in Finally, noting that [2] $$\int_0^{\infty}J_0(z\sqrt{x})\;_2F_1(k+1,q+2;1;-x)dx=\frac{2^{-k-q} z^{k+q+1}}{k!(q+1)!}K_{k-q-1}(z)\eqno(4)$$ (changing $q$ to $m$ and $p$ to $n$) we have Theorem 1.
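Theorem 1 lends itself to a direct numerical check. The following sketch uses the third-party mpmath library; the test values of $z$ and the $(m,n)$ pairs are arbitrary:

```python
from mpmath import besselk, binomial, factorial, mpf

def expr2(m, n, z):
    """Expression (2): (n+1)! * sum_{k=0}^n binom(m+k+1, m) K_{k-m-1}(z) (z/2)^{k+m} / k!"""
    z = mpf(z)
    return factorial(n + 1) * sum(
        binomial(m + k + 1, m) * besselk(k - m - 1, z) * (z / 2) ** (k + m) / factorial(k)
        for k in range(n + 1))

# The symmetry in (m, n) asserted by Theorem 1:
for m, n in [(0, 1), (1, 3), (2, 5)]:
    print(m, n, expr2(m, n, 1.3), expr2(n, m, 1.3))
```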
For example, with $m=0$ we get the possibly new summation $$\sum_{k=0}^n\frac{1}{k!}K_{k-1}(z)(z/2)^k=\frac{1}{n!}K_{n+1}(z)(z/2)^n.\eqno(5)$$ Setting $z=-ix$ in the relation $$K_{\nu}(z)= \frac{\pi}{2}i^{\nu+1}[J_{\nu}(iz)+iY_{\nu}(iz)]\eqno(6)$$ after a small manipulation one obtains \vskip .1in \noindent {\bf Theorem 2}\vskip .1in $$(-1)^m(n+1)!\sum_{k=0}^n\frac{1}{k!}{m+k+1\choose{m}}\, J_{k-m-1}(x)(x/2)^{k+m}\eqno(7)$$ $$(-1)^m(n+1)!\sum_{k=0}^n\frac{1}{k!} {m+k+1\choose{m}}\, Y_{k-m-1}(x)(x/2)^{k+m}\eqno(8)$$ are both symmetric with respect to $m$ and $n$.\vskip .1in \newpage \noindent {\bf Corollary }\vskip .1in $$\sum_{k=0}^{n}\frac{1}{k!}\, {\cal{C}}_{k-1}(x)(x/2)^k =-\frac{1}{n!}\, {\cal{C}}_{n+1}(x)(x/2)^n$$ where ${\cal{C}}=aJ +b Y$. \vskip .2in \section{Discussion} Analogous sum relations can be obtained by other means. For example, let us start with the hypergeometric summation formula [3] $$\;_3F_2(-n,1,a;3-a,n+3;-1)=\frac{(n+2)n!}{2(a-1)\Gamma(a-2)}\left[\frac{\Gamma(a-1)}{(n+1)!}+(-1)^n\Gamma(a-n-2)\right].\eqno(9)$$ But $$\;_3F_2(-n,1,a;3-a,n+3;-1)=\frac{n!(n+2)!}{\Gamma(a-2)\Gamma(a)}\sum_{k=1}^{n+1} (-1)^{k+1}\frac{\Gamma(a-1+k)\Gamma(a-1-k)}{\Gamma(n+k)\Gamma(n-k)}.\eqno(10)$$ With $n$ replaced by $n-1$ and $a=(s+n)/2+1$, the first term of (9) is half of what would be the $k=0$ term of the sum in (10), and one has $$\sum_{k=0}^n(-1)^k(2-\delta_{k,0})\frac{\Gamma\left(\frac{s+n}{2}-k\right)\Gamma\left(\frac{s+n}{2}+k\right)}{(n-k)!(n+k)!}=\frac{(-1)^n}{n!}\Gamma\left(\frac{s+n}{2}\right)\Gamma\left(\frac{s-n}{2}\right).\eqno(11)$$ Next we take the inverse Mellin transform of both sides, noting that $$\int_{c-i\infty}^{c+i\infty}\frac{ds}{2\pi i}(2/x)^s\Gamma\left(\frac{s+n}{2}-k\right)\Gamma\left(\frac{s+n}{2}+k\right)=4x^nK_{2k}(x)\eqno(12)$$ $$\int_{c-i\infty}^{c+i\infty}\frac{ds}{2\pi i}(2/x)^s\Gamma\left(\frac{s-n}{2}\right)\Gamma\left(\frac{s+n}{2}\right)=4K_n(x).\eqno(13)$$ Consequently,
$$K_n(x)=\left(\frac{x}{2}\right)^n\sum_{k=0}^{n}(-1)^{k+n}n!\frac{(2-\delta_{k,0})}{(n-k)!(n+k)!}K_{2k}(x).\eqno(14)$$ Since many integrals of the Gauss hypergeometric function are known, one of the most extensive tabulations being [2], Lemma 2 is the gateway to a myriad of unexpected finite sum identities involving various classes of special functions. We conclude by listing a small selection. From [2] $$\int_0^{\infty}(1-e^{-t})^{\lambda-1}e^{-xt}\;_2F_1(k+1,m+2;1;ze^{-t})dt$$ $$=B(x,\lambda)\;_3F_2(k+1,m+2,x;1,x+\lambda; z) \eqno(15)$$ and one has the symmetry of $$ \sum_{k=0}^n {m+k+1\choose{m}}\;_3F_2(k+1,m+2,x;1,x+\lambda;z)\eqno(16)$$ For example, for $m=0$, $$\sum_{k=0}^n\;_3F_2(k+1,2,x;1,x+\lambda;z)=(n+1)\;_2F_1(n+2,x;x+\lambda;z).\eqno(17)$$ Similarly, $$\frac{n!(n+1)!}{\Gamma(n+2-a)}\sum_{k=0}^n\frac{(m+k+1)!\Gamma(k+1-a)}{k!(k+1)!}$$ $$=\frac{m!(m+1)!}{\Gamma(m+2-a)}\sum_{k=0}^m\frac{(n+k+1)!\Gamma(k+1-a)}{k!(k+1)!}\eqno(18)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}=\sum_{k=0}^m{n+k+1\choose{n}}.\eqno(19)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}\;_3F_2(k+1,m+2,a;1,a+b;z)$$ $$=\sum_{k=0}^m{n+k+1\choose{n}}\;_3F_2(k+1,n+2,a;1,a+b;z).\eqno(20)$$ $$\sum_{k=0}^n\;_3F_2(k+1,2,a;1,a+b;1)=\frac{(n+1)\Gamma(b-n-2)\Gamma(a+b)}{\Gamma(a+b-n-2)\Gamma(b)}.\eqno(21)$$ $$\sum_{k=0}^n\frac{(p+k)!}{k!}=\frac{(n+p+1)!}{(p+1)n!},\quad p=0,1,2,\cdots\eqno(22)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}z^{(k+m)/2}S_{-k-m-2,k-m-1}(z)$$ $$=\sum_{k=0}^m{n+k+1\choose{n}}z^{(k+n)/2}S_{-k-n-2,k-n-1}(z).\eqno(23)$$ $$\sum_{k=0}^n{m+k+1\choose{m}}z^{(k+m)/2}W_{-k-m-2,k-m-1}(z)$$ $$=\sum_{k=0}^m{n+k+1\choose{n}}z^{(k+n)/2}W_{-k-n-2,k-n-1}(z).\eqno(24)$$ \section{References} \noindent [1] G.E. Andrews, R. Askey and R. Roy, {\it Special Functions} [Cambridge University Press, 1999] \noindent [2] A.P. Prudnikov, Yu. A. Brychkov and O.I. Marichev, {\it Integrals and Series, Vol. 3} [Gordon and Breach, NY 1986] Section 2.21.1. \noindent [3] Ibid. Section 2.4.1. \end{document}
\section{Introduction} It is well known that a gas composed of bosonic atoms with repulsive interparticle interaction, at appropriate values of density and temperature, undergoes Bose-Einstein condensation (BEC), a phase transition which shares similarities with transitions to superfluid and superconductive states. Since the first experimental demonstration of BEC \cite{Anderson-science269,Davis-prl75,Bradley-prl75}, efforts have been directed toward investigating the thermodynamic properties of such a macroscopic quantum system and finding suitable theoretical descriptions of the phase transitions \cite{Shi-physrep304}. Recently there has been a revival of experimental interest in the thermodynamics of quantum gases. On one hand, distinguished works have explored the thermodynamics of several systems: a Fermi gas with repulsive interactions \cite{Lee-pra85}, a Fermi gas in the limit of very strong interactions, i.e., near the unitary regime \cite{Nascimbene-nature463,Ku-Science335}, a Fermi gas in a three-dimensional optical lattice showing a fermionic Mott-insulator transition \cite{Duarte-prl114}, and a Bose gas in a two-dimensional optical lattice showing a bosonic Mott-insulator transition \cite{Gemelke-nature460}. On the other hand, works on weakly interacting bosonic gases have demonstrated that, even in this simpler system, the understanding and characterization of the thermodynamic behavior, especially across the phase transition, are not yet complete \cite{Hung-nature470, Donner-science315, Olivares-Quiroz-jphysbatomphys43} and that more experimental work is needed to validate the theoretical predictions \cite{Goswami-jlowtempphys172, Floerchinger-pra79, Stanley-revmodphys71, Tarasov-pra90}. New approaches to investigating these systems and new experimental results can therefore contribute, in general, to advance the understanding of the thermodynamics of quantum gases and, in particular, of their phase transitions.
In this work, we experimentally determine a global susceptibility within a global thermodynamic variables approach for a harmonically trapped Bose gas \cite{Romero-Rochin-prl94,Romero-Rochin-bjp35,Sandoval-Figueroa-pre78}. We investigate and characterize the behavior of the susceptibility when the gas undergoes BEC. In standard thermodynamics, the quantity equivalent to the global susceptibility that we define in this work is the isothermal compressibility. This parameter describes the relative variation of the volume $V$ of a system due to a change in the pressure $P$ at constant temperature $T$: $k_T=-\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{N,T}$. It is a property associated with density fluctuations, and it can also be expressed in terms of a second derivative of the free energy with respect to the pressure. At a second-order phase transition it is therefore expected to show a singularity. Here we provide experimental evidence of such a singular behavior by taking advantage of the global thermodynamic approach. \section{Thermodynamics based on global variables} Global variables have already been successfully employed to obtain the phase diagram \cite{Romero-Rochin-pra85} and to measure the heat capacity \cite{Shiozaki-pra90} of a gas in a harmonic potential. The need to revisit standard thermodynamics when dealing with quantum gases comes naturally from the fact that they are usually trapped in nonhomogeneous (normally harmonic) potentials. In this situation standard definitions of pressure and volume do not apply. In fact, $P$ and $V$ are conjugate variables of thermodynamical systems defined for homogeneous densities. In particular, $P$ is an intensive variable having the same value at every position inside the volume occupied by the gas. The local density approximation (LDA) is often used in non-homogeneous situations to define local variables.
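As a baseline for the singularity-free case, the definition of $k_T$ above applied to a homogeneous ideal gas ($V=Nk_BT/P$) gives $k_T=1/P$. A quick numerical sketch of this check, in arbitrary units:

```python
def volume_ideal(P, N=1.0, kB=1.0, T=1.0):
    # Ideal-gas equation of state, V = N kB T / P (arbitrary units)
    return N * kB * T / P

def compressibility(P, h=1e-6):
    # k_T = -(1/V) dV/dP, evaluated with a central finite difference
    dVdP = (volume_ideal(P + h) - volume_ideal(P - h)) / (2.0 * h)
    return -dVdP / volume_ideal(P)

P = 2.5
print(compressibility(P), 1.0 / P)   # both ~ 0.4
```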
A different approach, involving a set of thermodynamic variables with single values for the entire gas, allows a global description of the thermodynamics of an inhomogeneous gas and of its phase transitions. This global approach is particularly suited, compared to the LDA, to the case in which the gas is characterized by abrupt spatial variations of the density, as in the occurrence of a phase transition or in more exotic situations such as the presence of vortices or local potential impurities. The use of global variables to describe the thermodynamics of an inhomogeneous system has been extensively described elsewhere \cite{Romero-Rochin-prl94,Romero-Rochin-bjp35,Sandoval-Figueroa-pre78}. In brief, on the basis of thermodynamics and statistical mechanics one can infer a volume parameter and a pressure parameter, respectively: \begin{equation} {\cal V}=\frac{1}{\omega_x\omega_y\omega_z}, \label{eq:V} \end{equation} \begin{equation} \Pi=\frac{2}{3{\cal V}}\langle U({\bf r})\rangle=\frac{m}{3{\cal V}}\int d^3r~n({\bf r})(\omega^2_xx^2+\omega^2_yy^2+\omega^2_zz^2),\label{eq:Pi} \end{equation} where $\omega_i$ with $\left(i=x,y,z\right)$ are the harmonic trap frequencies, $ \langle U({\bf r})\rangle $ is the spatial mean of the external potential, and $n({\bf r})$ is the density of the sample. ${\cal V}$ is a natural extensive ``volume'' for the trapped gas, and the thermodynamic limit can be achieved by keeping the density parameter $n_{\cal V}=N/{\cal V}$ constant as $N$ and ${\cal V}$ grow indefinitely. $\Pi$ is its intensive conjugate variable $(\Pi=-\left(\frac{\partial F}{\partial{\cal V}}\right)_{N,T})$, where $F=F(N,{\cal V} ,T)$ is the Helmholtz free energy.
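For a classical (Boltzmann) gas in the harmonic trap, $n({\bf r})\propto e^{-U({\bf r})/k_BT}$ and Eq. (\ref{eq:Pi}) reduces by equipartition to $\Pi{\cal V}=Nk_BT$. A numerical sketch of this consistency check (the trap frequencies are assumed illustrative values):

```python
import math

kB = 1.381e-23     # Boltzmann constant, J/K
m = 1.443e-25      # mass of 87Rb, kg

def pressure_parameter(N, T, omegas, ngrid=4001, span=6.0):
    """Pi = (m N / 3V) * sum_i omega_i**2 <x_i**2> for a classical Boltzmann gas,
    with n_i(x) ~ exp(-m omega_i**2 x**2 / (2 kB T)) and <x_i**2> from quadrature."""
    V = 1.0 / (omegas[0] * omegas[1] * omegas[2])
    total = 0.0
    for w in omegas:
        sigma = math.sqrt(kB * T / (m * w * w))      # thermal width on this axis
        xs = [(-span + 2.0 * span * j / (ngrid - 1)) * sigma for j in range(ngrid)]
        dx = xs[1] - xs[0]
        wts = [math.exp(-0.5 * (x / sigma) ** 2) for x in xs]
        norm = sum(wts) * dx
        x2 = sum(wt * x * x for wt, x in zip(wts, xs)) * dx / norm
        total += w * w * x2
    return m * N * total / (3.0 * V)

omegas = (2 * math.pi * 52, 2 * math.pi * 52, 2 * math.pi * 23)  # illustrative, rad/s
N, T = 1e5, 200e-9
Pi = pressure_parameter(N, T, omegas)
V = 1.0 / (omegas[0] * omegas[1] * omegas[2])
print(Pi * V / (N * kB * T))   # ~ 1.0, recovering Pi V = N kB T
```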
A nice proof that $\Pi = \Pi\left( N,{\cal V},T \right)$ and ${\cal V}$ are a good set of variables to describe the system is obtained through the determination of the heat capacity, $C_{\cal V}$ \cite{Shiozaki-pra90}, whose behavior is close to that expected from the treatment of a harmonically trapped Bose gas \cite{Grossmann-physletta208, Giorgini-jlowtempphys109}. In this framework, the isothermal compressibility parameter can be obtained from the following relation: \begin{equation} \kappa_T=-\frac{1}{{\cal V}}\left(\frac{\partial{\cal V}}{\partial\Pi}\right)_{N,T}. \end{equation} $\kappa_T$ is a quantity with the same properties as the standard compressibility $k_T$ \cite{Yukalov-pra72, Yukalov-laserphyslett2} and indicates the thermodynamic stability defined by the second derivative of the Gibbs free energy. The convexity property of the free energy is maintained under the condition $0 \leq \kappa_T < \infty$. Therefore, with this susceptibility we characterize a system in thermodynamic equilibrium \cite{Yukalov-laserphys23}. \section{Experimental system and measurement} We performed the measurements to determine $\kappa_T$ across the transition from a thermal cloud to a BEC of $^{87}{\rm Rb}$ atoms with a new experimental setup in which the volume parameter can be easily varied. The system is built in a standard double magneto-optical trap (MOT) configuration \cite{Myatt-oplett21}. In the first vacuum cell we load a MOT of $10^8$ atoms from a dispenser and then transfer the atoms to the second cell using an on-resonance beam. Here, we recapture the atoms in a second MOT and, after performing sub-Doppler cooling, we spin-polarize the atomic sample in the hyperfine state $F=2,\, m_F=2$. Afterwards, we transfer the atoms, at temperatures of about $40~\mu{\rm K}$, into a pure quadrupole magnetic trap where a first radio-frequency evaporation is performed.
Simultaneously, we ramp up a far-detuned beam (wavelength $\lambda=1064~{\rm nm}$) focused to a waist $w_0=85~\mu{\rm m}$, displaced by $z_0=300~\mu{\rm m}$ along the gravity direction below the center of the quadrupole trap. When the temperature of the atomic cloud decreases to approximately $10~\mu {\rm K}$, atoms migrate from the quadrupole trap to the center of the beam, which serves as an optical dipole trap (ODT). At that point we reduce the vertical magnetic-field gradient to a value that no longer compensates for gravity. The atoms are thus confined in a hybrid trap given by the combination of the optical and magnetic confinements \cite{Lin-pra79}. Here we further decrease the temperature of the cloud by a second stage of radio-frequency evaporation, followed by optical evaporation obtained by exponentially ramping down the power of the laser beam. We can eventually achieve a pure BEC of $\sim10^5$ atoms at typical temperatures of $100$--$200~{\rm nK}$. The hybrid potential, including gravity, can be described by the following expression: \begin{eqnarray} U({\bf r})=\mu B'_x\sqrt{x^2+\frac{y^2}{2}+\frac{z^2}{2}}-\frac{U_0}{(1+y^2/y^2_R)}\nonumber\\ \exp\left[-\frac{2x^2+2(z-z_0)^2}{w_0^2(1+y^2/y^2_R)}\right]+mg(z-z_0)+E_0 \end{eqnarray} Here $\mu$ is the atomic magnetic moment, $B'_x$ is the gradient of the quadrupole trap along the $x$ direction, $y_R = w^2_0\pi/\lambda$ is the Rayleigh range of the beam, which propagates along the $y$ direction, and $U_0$ is the optical trap depth. $g$ is the gravitational acceleration, $m$ is the atomic mass and $E_0$ is the energy difference between the zero-field point in the absence of the dipole trap and the total trap minimum, giving the trap minimum $U({\bf r}_{min})=0$ \cite{Lin-pra79}.
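The scale of the resulting trap frequencies, and of the global volume parameter ${\cal V}=1/(\omega_x\omega_y\omega_z)$, can be sketched numerically. The trap depth $U_0$ and gradient $B'_x$ below are assumed illustrative values (they are not quoted in the text); $w_0$ and $z_0$ are the quoted beam parameters:

```python
import math

m_Rb = 1.443e-25   # mass of 87Rb, kg
mu_B = 9.274e-24   # Bohr magneton, J/T
kB = 1.381e-23     # Boltzmann constant, J/K

w0 = 85e-6                 # beam waist, m (quoted)
z0 = 300e-6                # displacement below the field zero, m (quoted)
U0 = kB * 2e-6             # optical trap depth ~ 2 uK (assumed)
Bx = 0.20                  # quadrupole gradient along x, T/m (assumed)
mu = mu_B                  # magnetic moment of |F=2, mF=2>: gF * mF * mu_B = mu_B

omega_x = math.sqrt(4.0 * U0 / (m_Rb * w0 ** 2))       # radial (ODT) frequency
omega_y = math.sqrt(mu * Bx / (2.0 * m_Rb * abs(z0)))  # axial (magnetic) frequency
omega_z = omega_x
V = 1.0 / (omega_x * omega_y * omega_z)                # global volume parameter, s^3
print(omega_x / (2 * math.pi), omega_y / (2 * math.pi), V)
```

With these assumed values the volume parameter falls in the $10^{-8}$--$10^{-7}~{\rm s}^3$ range of the measured volumes reported below.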
At low temperatures the effective potential of the hybrid trap can be safely approximated by a three-dimensional harmonic potential, whose frequencies are \begin{equation} \omega_x\simeq\omega_z=\sqrt{\frac{4U_0}{m w_0^2}},~~\omega_y=\sqrt{\frac{\mu B'_x}{2 m \left| z_0 \right|}}.\label{eq:freq} \end{equation} The trap has a cylindrical symmetry, where the radial confinement is due to the ODT and the weaker axial confinement is due to the magnetic-field gradient. We characterize the atomic cloud by using absorption imaging after a free expansion from the trapping potential with a time of flight of $30~{\rm ms}$. Each image is fitted to a two-dimensional bimodal distribution composed of a Gaussian function and a Thomas-Fermi function, which are known to properly describe the thermal and the condensed components of the gas, respectively. The number of particles and the temperature are obtained from the fitted images following the conventional procedure. The volume parameter can be easily changed by varying the radial frequencies of the hybrid confinement, which directly depend on the final laser power of the ODT. We consider measurements for seven different sets of frequencies, i.e., for seven different volume parameters. Different temperatures have been obtained by changing the radio-frequency evaporation ramp; in this way the initial conditions for the optical evaporation change, allowing us to achieve different final temperatures with the same trapping frequencies, since the final power of the ODT is the same. For each volume parameter we have performed many experimental runs for temperatures within the range $40$--$400~{\rm nK}$ and postselected atomic clouds containing $(1\pm 0.1)\times10^5$ atoms to be taken into consideration. In order to calculate the pressure parameter $\Pi$ by performing the integral in Eq.
(\ref{eq:Pi}), it is necessary to reconstruct the density profile of the atoms in the trap, $n(\mathbf{r})$, from the measured time-of-flight profiles and the trap frequencies. Toward this aim, for the thermal component we can safely assume a free expansion, whereas for the interacting condensed component we apply the Castin-Dum procedure \cite{Castin-prl77,Kagan-pra55}. In Fig. \ref{PixT} we plot the calculated $\Pi(T)$ for each volume parameter. With a decrease in the temperature the atomic gas undergoes BEC: at high temperatures we observe a linear dependence of the pressure parameter on $T$, until an abrupt change takes place at a critical temperature $T_c$ and the decrease becomes faster than linear. Above $T_c$, experimental data are well reproduced by the ideal gas law $\Pi {\cal V} = N k_B T$, plotted in Fig. \ref{PixT} for the known number of particles and the different volumes. Below $T_c$ we perform an empirical exponential fit which follows the behavior of the experimental points. These fitting functions are, in principle, not related to any theoretical model. For each volume parameter we can extract the critical pressure for condensation: lower volumes demand higher pressures to condense. The transition line from a thermal atomic cloud to a BEC in the $\Pi{\cal V}$-plane is shown in Fig. \ref{logVxPi}, marking the separation between the white (thermal) and the gray (BEC) zones. \begin{figure}[t!] \includegraphics[width=1.0\columnwidth]{Fig1} \caption{(Color online) Pressure parameter vs temperature for a constant number of atoms ($N=1\times10^5$) and different volume parameters: ${\cal V}_{1}=1.9 \times 10^{-7}~{\rm s}^3$, ${\cal V}_{2}=6.4\times 10^{-8}~{\rm s}^3$, ${\cal V}_{3}=3.2\times 10^{-8}~{\rm s}^3$, ${\cal V}_{4}=2.1\times 10^{-8}~{\rm s}^3$, ${\cal V}_{5}=1.75\times 10^{-8}~{\rm s}^3$, ${\cal V}_{6}=1.4\times 10^{-8}~{\rm s}^3$, and ${\cal V}_{7}=1\times 10^{-8}~{\rm s}^3$.
Solid lines above $T_c$ represent the ideal gas law, whereas those below $T_c$ are empirical exponential fits. The dotted black line marks the transition between the thermal and the condensed regimes. Error bars represent the statistical error on the average.} \label{PixT} \end{figure} \section{Isotherms and determination of compressibility parameter} From the measurements shown in Fig. \ref{PixT}, we extract different isotherms relating the volume and pressure parameters, ${\cal V}={\cal V}_T(\Pi)$, which we plot in Fig. \ref{logVxPi}. As the temperature decreases, the isothermal lines shift towards lower pressures. We can clearly identify two different behaviors in the two different regions of the thermal and condensed regimes. In the thermal region, experimental points are well reproduced by the ideal gas law for the known numbers of atoms and temperatures (plotted as lines on the log-log scale of the figure). When an isotherm crosses the critical line for condensation an abrupt change occurs and it departs from the ideal gas behavior. \begin{figure}[t!] \includegraphics[width=1.0\columnwidth]{Fig2} \caption{(Color online) Isothermal ${\cal V}$ vs $\Pi$. Symbols represent the measured volume parameter vs the pressure parameter for different temperatures. The diagram shows the classical phase in white and the quantum phase in gray, separated by the critical line on the ${\cal V} \Pi$ plane. In the thermal region the data obey the ideal gas law and their behavior is linear on the log-log scale. In the BEC region, on the other hand, the data exhibit a nonlinear behavior; we implement empirical fitting curves, known in the literature as the extended Langmuir adsorption isotherm equation, to follow each isotherm (these curves are not used in the analysis).
The error bars on the ${\cal V}$ axis come from error propagation of the measurement of the frequencies; the error bars on the $\Pi$ axis, on the other hand, are associated with the exponential fit in Fig. \ref{PixT}.} \label{logVxPi} \end{figure} We can now extract the isothermal compressibility $\kappa_T$ by differentiating the isotherms in Fig. \ref{logVxPi}. The differentiation is performed point by point, in correspondence with the experimental data, in order not to rely on the arbitrary fitting curves, which do not correspond to any theoretical model. The obtained $\kappa_T$ values for three isothermal curves are shown in Fig. \ref{kTxPi}. We have chosen the curves for $T=150~{\rm nK}$ [Fig. \ref{kTxPi}(a)], $T=80~{\rm nK}$ [Fig. \ref{kTxPi}(b)] and $T=40~{\rm nK}$ [Fig. \ref{kTxPi}(c)] because they demonstrate the three classes of behavior: a pure thermal gas, a gas undergoing the BEC transition, and a gas entirely in the BEC region, respectively. The isothermal curve at $150~{\rm nK}$ shows the decrease of $\kappa_T$ as $1/\Pi$, as expected for an ideal gas. Let us now consider the isotherm at $80~{\rm nK}$: at low pressures the gas is thermal and the compressibility $\kappa_T$ decreases with increasing $\Pi$; when the pressure reaches the region between $20$ and $30~(\times 10^{-19}{\rm J}\cdot{\rm s}^{-3})$, the sudden increase in $\kappa_T$ indicates the transition. The compressibility reaches a maximum value before returning close to the baseline after $40 \times 10^{-19}{\rm J}\cdot{\rm s}^{-3}$. In this pressure range the compressibility acquires values 4 to 8 times higher than the baseline. The behavior of $\kappa_T$ in Fig. \ref{kTxPi} is typical of a second-order phase transition. An investigation of $\kappa_T$ vs $\Pi$ for different isothermal curves where the transition takes place reveals that at higher temperatures the transition occurs at a higher pressure and the compressibility peak is broader.
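The point-by-point differentiation of discrete isotherm data can be sketched as follows; the specific finite-difference scheme is an assumption, since the text does not specify one:

```python
def kappa_from_isotherm(Pi, V):
    """Point-by-point kappa_T = -(1/V) dV/dPi from discrete (Pi, V) data,
    using central differences in the interior and one-sided ones at the ends.
    (The actual differentiation scheme used in the analysis is not specified;
    this choice is an assumption.)"""
    n = len(Pi)
    kappa = []
    for i in range(n):
        j, k = max(i - 1, 0), min(i + 1, n - 1)
        dVdPi = (V[k] - V[j]) / (Pi[k] - Pi[j])
        kappa.append(-dVdPi / V[i])
    return kappa

# Sanity check on a synthetic ideal-gas isotherm V = NkT / Pi (kappa_T = 1/Pi)
NkT = 1.0
Pi_grid = [1.0, 2.0, 3.0, 4.0, 5.0]
V_grid = [NkT / p for p in Pi_grid]
print(kappa_from_isotherm(Pi_grid, V_grid))   # positive, decreasing roughly as 1/Pi
```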
Contrary to the expectation that quantities involving integration of the density over the potential \cite{Ku-Science335} would be weakly sensitive to the phase transition, our data show a sudden, large variation of the compressibility at the thermal-BEC transition. \begin{figure}[h!] \includegraphics[width=1.0\columnwidth]{Fig3} \caption{Isothermal compressibility parameter vs pressure parameter for three temperatures: (a) $T=150~{\rm nK}$, (b) $T=80~{\rm nK}$, and (c) $T=40~{\rm nK}$. The inset in (b) is the $\kappa_T$ calculated from a simple toy model for the density distribution. Lines are guides for the eye. The error bar on the abscissa is not included so as not to pollute the behavior of $\kappa_T$. The error bar on the ordinate, on the other hand, comes from the extrapolation of the tangent to the isothermal curve in Fig. \ref{logVxPi}.} \label{kTxPi} \end{figure} \section{Discussion} We performed the data analysis using the Castin-Dum procedure to reconstruct the \textit{in situ} density distribution, starting with a Thomas-Fermi fit of the condensed component in the time-of-flight images. To verify that the general results we found do not depend on the specific model used in the analysis, we also tested an alternative, less constrained, model. We fitted our images with two Gaussians for the thermal and condensed components and reconstructed the \textit{in situ} profiles by applying a variational method \cite{Perez-Garcia-pra56, Teles-pra87, Teles-pra88} which has already proved valid for studying the ballistic expansion dynamics of a condensate \cite{Teles-pra87, Teles-pra88}. We checked that the $\Pi(T)$ curves, and therefore all the derived thermodynamic quantities, extracted with the two reconstruction methods are quantitatively comparable. A complete theory predicting the exact behavior of the compressibility parameter across the transition does not exist.
Nevertheless, the need to make a prediction about the behavior and the shape of the compressibility around the critical point arises naturally. We have therefore attempted a comparison between our findings and the results of a toy model. We calculate $\Pi$ for synthetic density profiles consisting of a Gaussian thermal component and a Thomas-Fermi condensed one, with a relative atom number given by the ideal BEC result. This model qualitatively captures the general experimental findings. In particular, the position and the shape of the compressibility peak are reproduced by the model, as presented in the inset in Fig. \ref{kTxPi}. Nevertheless, this simple model cannot give quantitative predictions, for example of the absolute value of the compressibility, because it is oversimplified. A fair quantitative comparison would demand a more elaborate model, beyond the scope of this experimental report. The introduction of the global variables approach has proven to be a valid complement to the LDA. Generally speaking, the LDA has strong intrinsic limitations in cases where sudden variations of the density occur, as at the thermal-condensed interface in a Bose gas. In this situation the LDA would require a very high imaging resolution, which is experimentally challenging. With the global approach we overcome this limitation by describing the system undergoing the phase transition as a whole, and we can provide evidence of the compressibility peak at the transition. On the other hand, the global variables approach needs many measurements at different volumes with the same atom number to trace a single isothermal curve, and this can be experimentally nontrivial. In this sense the LDA has the advantage of yielding a complete isothermal curve from the analysis of a single image. Due to the lack of experimental points, we cannot precisely measure the compressibility in the close vicinity of the phase transition.
Nevertheless, the expected sharp peak in $\kappa_T$ near the critical point is quite clear and shares remarkable similarities with the behavior of the isothermal compressibility of liquid helium as observed across the $\lambda$-point \cite{Boghosian-pr152,Grilly-pr149,Elwell-pr164}. \section{Conclusions} In this article, we have used the concept of global thermodynamic variables to measure the isothermal compressibility parameter of a harmonically confined Bose gas, the susceptibility most appropriate for understanding the phase transition. Once the sample had undergone BEC, we characterized this phase transition from the classical to the quantum regime, indicating a second-order transition likely related to spontaneous symmetry breaking. The concept of using global variables to determine the global compressibility is quite useful in situations where the LDA cannot be applied. Other, more complex physical systems in which there are abrupt changes in the density are of interest for superfluid physics, such as vortices, vortex lattices, solitons, \textit{inter alia}, and especially superfluid turbulence, recently demonstrated by our group \cite{Henn-prl103}. In such cases local variables do not make sense, and the global behavior of the compressibility may reveal new characteristics of the turbulent regime. Such an investigation is currently in progress. We acknowledge financial support from FAPESP (Brazil), CNPq (Brazil), CAPES (Brazil), and LENS (Italy).
\section*{Introduction} Carbon nanotubes (CNTs) are promising nanomaterials, which have been extensively studied by many researchers \cite{cnt1,cnt2,cnt3,cnt4,cnt5,cnt6,cnt7,cnt8,cnt9,cnt10,25,27,28,23,16}. Due to different combinations of structural variation, CNTs can exhibit a wide range of electronic and optical properties, which can be of great use in the design of novel techniques \cite{25}. CNTs are also polyfunctional macromolecules, where specific reactions can occur at various sites with different efficiencies \cite{28}. There are three major types of CNTs: armchair CNTs, chiral CNTs, and zigzag CNTs, which are distinguished by the geometrical vector ($n$,$m$), with $n$ and $m$ being integers. CNTs can behave as either metals or semiconductors depending on their chiral angles, diameters, and lengths. Therefore, further investigation of how these factors affect the properties of CNTs is essential for a comprehensive understanding of these materials \cite{25,23}. In particular, it is useful to study the basic repeating units of CNTs, which still need further fundamental research exploration \cite{16}. The target units of the present study, a series of $n$-cyclacenes, consisting of $n$ fused benzene rings forming a closed loop (see \Cref{fig:tube-cyclic10-geometry}), are the shortest ($n$,0) zigzag CNTs with hydrogen passivation, and they have attracted considerable interest in the research community due to their fascinating electronic properties \cite{23,16,7,19,31,1,6,11,24,17,10,9,C2015,new1}. As $n$-cyclacenes belong to the category of cata-condensed aromatics (i.e., molecules that have no carbon atoms belonging to more than two rings), each carbon atom is on the periphery of the conjugated system \cite{7}. Before $n$-cyclacenes were closely connected to zigzag CNTs, they had been studied mainly out of research curiosity about highly conjugated cyclic systems.
The studies of $n$-cyclacenes can also be important for atomic-level structural control in the synthesis of CNTs. In addition, bottom-up approaches to the synthesis of CNTs not only provide a fundamental understanding of the relationship between the design of CNTs and their electronic properties, but also greatly lower the synthetic temperatures \cite{25}. While zigzag CNTs may be synthesized from cycloarylenes by devising the cutout positions of CNTs \cite{27}, it remains important to systematically investigate the properties of $n$-cyclacenes, which can be useful for exploring the possible utility of their cylindrical cavities in host-guest chemistry \cite{31}. The structure of $n$-cyclacene has two types of components: an arenoid belt (composed of fused benzene rings) and two peripheral circuits (the top and bottom peripheral circuits) \cite{1}. The peripheral circuits are of two types, $4k$ and $4k+2$ (where $k$ is an integer), depending on the number of benzene rings in $n$-cyclacene. Previous studies have shown that $n$-cyclacenes with an even number of benzene rings ($4k$ type) are more stable than those with an odd number of benzene rings ($4k+2$ type) \cite{1,7,6,new1}. Therefore, the nature of the peripheral circuits (i.e., the cryptoannulenic effect) is expected to be responsible for the properties of $n$-cyclacene. In addition, the structure of $n$-cyclacene can also be regarded as two fused trannulenes (i.e., circular, all-trans cyclic polyene ribbons) \cite{31,11}. Bond length analysis of $n$-cyclacene reveals bond length alternation in the benzene rings, and the aromaticity is reduced by the structural strain, which can hence also affect the properties of $n$-cyclacene. Even though there has been keen interest in $n$-cyclacenes, studies of their electronic properties remain scarce.
While $n$-cyclacene may be synthesized via an intramolecular cyclization of $n$-acene (a chain-like molecule with $n$ linearly fused benzene rings, e.g., see Figure 1 of Ref.\ \cite{1}), the synthetic procedure has been very challenging, and has not succeeded in producing pure $n$-cyclacene \cite{17,1,10}, possibly due to its highly strained structure and highly reactive nature \cite{16,17}. As the stabilities of annulated polycyclic saturated hydrocarbons decrease rapidly with the number of fused benzene rings \cite{19}, the synthesis of larger $n$-cyclacenes should be even more difficult. To date, the reported properties of $n$-cyclacenes are based on theoretical calculations. Nevertheless, accurate prediction of the electronic properties of larger $n$-cyclacenes has been very challenging for traditional electronic structure methods, due to the presence of strong static correlation effects \cite{9}. Kohn-Sham density functional theory (KS-DFT) \cite{ks2} with conventional (i.e., semilocal \cite{kslda1,kslda2,PBE,M06L}, hybrid \cite{hybrid,B3LYP,LCHirao,wB97X,wB97X-D,wM05-D,LC-D3,LCAC}, and double-hybrid \cite{B2PLYP,wB97X-2,PBE0-2,SCAN0-2}) exchange-correlation (XC) density functionals can yield unreliable results for systems with strong static correlation effects \cite{Cohen2012}. High-level {\it ab initio} multi-reference methods \cite{9,CASPT2,Acene-DMRG,2-RDM,GNRs-DMRG,GNRs-PHF,GNRs-MRAQCC,multi-reference,multi-reference2} are typically required to accurately predict the properties of larger $n$-cyclacenes. However, as the number of electrons in $n$-cyclacene quickly increases with increasing $n$, there have been very few studies on the properties of larger $n$-cyclacenes using multi-reference methods, due to their prohibitively high cost. 
To circumvent the formidable computational expense of high-level {\it ab initio} multi-reference methods, we have recently developed thermally-assisted-occupation density functional theory (TAO-DFT) \cite{tao1,tao2}, a very efficient electronic structure method for studying the properties of large ground-state systems (e.g., containing up to a few thousand electrons) with strong static correlation effects \cite{z,NK,HSM}. In contrast to KS-DFT, TAO-DFT is a density functional theory with fractional orbital occupations, wherein strong static correlation is explicitly described by the entropy contribution (see Eq.\ (26) of Ref.\ \cite{tao1}), a function of the fictitious temperature and orbital occupation numbers. Note that the entropy contribution is completely missing in KS-DFT. Recently, we have studied the electronic properties of zigzag graphene nanoribbons (ZGNRs) using TAO-DFT \cite{z}. The ground states of ZGNRs are found to be singlets for all the widths and lengths studied. The longer ZGNRs should possess increasing polyradical character in their ground states, with the active orbitals being mainly localized at the zigzag edges. Our results are in good agreement with the available experimental and highly accurate {\it ab initio} data. In addition, on the basis of our TAO-DFT calculations, the active orbital occupation numbers for the ground states of ZGNRs should exhibit a curve-crossing behavior in the approach to unity (singly occupied) with increasing ribbon length. Very recently, this curve-crossing behavior has been confirmed by highly accurate {\it ab initio} multi-reference methods \cite{multi-reference2}. TAO-DFT has a computational cost similar to that of KS-DFT for single-point energy and analytical nuclear gradient calculations, and reduces to KS-DFT in the absence of strong static correlation effects. Moreover, existing XC density functionals in KS-DFT may also be adopted in TAO-DFT.
Relative to high-level {\it ab initio} multi-reference methods, TAO-DFT is computationally efficient, and hence very powerful for the study of large polyradical systems. In addition, the orbital occupation numbers from TAO-DFT, which are intended to simulate the natural orbital occupation numbers (NOONs) (i.e., the eigenvalues of the one-electron reduced density matrix) \cite{noon}, can be very useful for assessing the possible polyradical character of systems. Recent studies have demonstrated that the orbital occupation numbers from TAO-DFT are qualitatively similar to the NOONs from high-level {\it ab initio} multi-reference methods, giving promise for applying TAO-DFT to large polyradical systems \cite{tao1,z,NK,multi-reference2}. Due to its computational efficiency and reasonable accuracy for large systems with strong static correlation effects, TAO-DFT is adopted in this work to study the electronic properties of $n$-cyclacenes ($n$ = 4--100). As $n$-cyclacenes have not been successfully synthesized, no experimental data are currently available for comparison. Therefore, our results are compared with the available high-level {\it ab initio} data as well as those obtained from various XC density functionals in KS-DFT. In addition, as $n$-cyclacene can be considered as an interconnection of $n$-acene, the electronic properties of $n$-cyclacene are also compared with those of $n$-acene to assess the role of cyclic topology. \section*{Computational Details} All calculations are performed with a development version of \textsf{Q-Chem 4.0} \cite{qchem}, using the 6-31G(d) basis set with the fine grid EML(75,302), consisting of 75 Euler-Maclaurin radial grid points and 302 Lebedev angular grid points.
Results are calculated using KS-LDA (i.e., KS-DFT with the LDA XC density functional \cite{kslda1,kslda2}) and TAO-LDA (i.e., TAO-DFT with the LDA XC density functional and the LDA $\theta$-dependent density functional $E_{\theta}^{\text {LDA}}$ (see Eq.\ (41) of Ref.\ \cite{tao1}), with the fictitious temperature $\theta$ = 7 mhartree, as defined in Ref.\ \cite{tao1}). Note that KS-LDA is simply TAO-LDA with $\theta$ = 0; examining the performance of KS-LDA here therefore serves to highlight the significance of TAO-LDA. The ground state of $n$-cyclacene/$n$-acene ($n$ = 4--100) is obtained by performing spin-unrestricted KS-LDA and TAO-LDA calculations for the lowest singlet and triplet energies of $n$-cyclacene/$n$-acene on the respective geometries that were fully optimized at the same level of theory. The singlet-triplet (ST) energy gap of $n$-cyclacene/$n$-acene is calculated as $(E_{\text{T}} - E_{\text{S}})$, the energy difference between the lowest triplet (T) and lowest singlet (S) states of $n$-cyclacene/$n$-acene. \section*{Results and Discussion} \subsection*{Singlet-Triplet Energy Gap} \Cref{fig:stgap} shows the ST gap of $n$-cyclacene as a function of the number of benzene rings, calculated using spin-unrestricted KS-LDA and TAO-LDA. The results are compared with the available data \cite{9}, calculated using the complete-active-space second-order perturbation theory (CASPT2) \cite{CASPT2} (a high-level {\it ab initio} multi-reference method) as well as the M06L functional \cite{M06L} (a popular semilocal XC density functional) and the B3LYP functional \cite{hybrid,B3LYP} (a popular hybrid XC density functional) in KS-DFT. As can be seen, the anticipated even-odd oscillations in the ST gaps may be attributed to the cryptoannulenic effects of $n$-cyclacenes \cite{1,7,6,new1}.
However, the amplitudes of the even-odd oscillations are considerably larger for KS-DFT with these XC density functionals, an overestimation closely related to the degree of spin contamination (as discussed in Ref.\ \cite{9}). In general, the larger the fraction of Hartree-Fock (HF) exchange adopted in the XC functional in KS-DFT, the higher the degree of spin contamination for systems with strong static correlation effects. For example, the ST gap obtained with KS-B3LYP is unexpectedly large at $n$ = 10, due to the high degree of spin contamination \cite{9}. On the other hand, as commented in Ref.\ \cite{9}, the ST gaps obtained with CASPT2 are rather sensitive to the choice of active space. Since the complete $\pi$-valence space was not selected as the active space (due to the prohibitively high cost), the CASPT2 results here should be taken with caution. Recent studies have shown that a sufficiently large active space should be adopted in high-level {\it ab initio} multi-reference calculations \cite{Acene-DMRG,GNRs-DMRG,multi-reference2} for accurate prediction of the electronic properties of systems with strong static correlation effects, which can, however, be prohibitively expensive for large systems. Note that the ST gap obtained with CASPT2 unexpectedly increases at $n$ = 12, possibly due to the insufficiently large active space adopted in the calculations \cite{9}. To assess the role of cyclic topology, \Cref{fig:stgapcycace1,fig:stgapcycace2} show the ST gap of $n$-cyclacene/$n$-acene as a function of the number of benzene rings, calculated with spin-unrestricted TAO-LDA. Similar to $n$-acenes, the ground states of $n$-cyclacenes remain singlets for all the cases investigated. In contrast to $n$-acene, however, the ST gap of $n$-cyclacene displays oscillatory behavior for small $n$, and the oscillation vanishes gradually with increasing $n$.
For small $n$, $n$-cyclacenes with an even number of benzene rings exhibit larger ST gaps (i.e., greater stability) than those with an odd number of benzene rings. For sufficiently large $n$ ($n > 30$), the ST gap of $n$-cyclacene converges monotonically from below to the ST gap of $n$-acene (which monotonically decreases with increasing $n$). At the level of TAO-LDA, the ST gaps of the largest $n$-cyclacene and $n$-acene studied (i.e., $n$ = 100) are essentially the same (0.49 kcal/mol). On the basis of the ST gaps obtained with TAO-LDA, the cryptoannulenic effect and structural strain of $n$-cyclacene are more important for smaller $n$, and less important for larger $n$. Due to the symmetry constraint, the spin-restricted and spin-unrestricted energies for the lowest singlet state of $n$-cyclacene/$n$-acene, calculated using the exact theory, should be identical \cite{tao1,tao2,z,GNRs-PHF}. Recent studies have shown that KS-DFT with conventional XC density functionals cannot satisfy this condition for the larger $n$-cyclacenes/$n$-acenes, due to the aforementioned spin contamination \cite{9,Acene-DMRG,GNRs-DMRG,GNRs-PHF,tao1,tao2,z}. To assess the possible symmetry-breaking effects, spin-restricted TAO-LDA calculations are also performed for the lowest singlet energies on the respective optimized geometries. Within the numerical accuracy of our calculations, the spin-restricted and spin-unrestricted TAO-LDA energies for the lowest singlet state of $n$-cyclacene/$n$-acene are essentially the same (i.e., essentially no unphysical symmetry-breaking effects occur in our spin-unrestricted TAO-LDA calculations).
\subsection*{Vertical Ionization Potential, Vertical Electron Affinity, and Fundamental Gap} At the lowest singlet state (i.e., ground-state) geometry of $n$-cyclacene/$n$-acene (containing $N$ electrons), TAO-LDA is adopted to calculate the vertical ionization potential $\text{IP}_{v}={E}_{N-1}-{E}_{N}$, vertical electron affinity $\text{EA}_{v}={E}_{N}-{E}_{N+1}$, and fundamental gap $E_{g}=\text{IP}_{v}-\text{EA}_{v}={E}_{N+1}+{E}_{N-1}-2{E}_{N}$ via total-energy differences, with ${E}_{N}$ being the total energy of the $N$-electron system. With increasing number of benzene rings in $n$-cyclacene, $\text{IP}_{v}$ decreases in an oscillatory manner (see \Cref{fig:ip}), $\text{EA}_{v}$ increases in an oscillatory manner (see \Cref{fig:ea}), and hence $E_{g}$ decreases in an oscillatory manner (see \Cref{fig:fg}). However, these oscillations are damped and eventually disappear with increasing $n$. For sufficiently large $n$ ($n > 30$), the $\text{IP}_{v}$ and $E_{g}$ values of $n$-cyclacene converge monotonically from above to those of $n$-acene (which monotonically decrease with increasing $n$), while the $\text{EA}_{v}$ value of $n$-cyclacene converges monotonically from below to that of $n$-acene (which monotonically increases with increasing $n$). Note also that the $E_{g}$ value of $n$-cyclacene ($n$ = 13--54) lies within the most interesting range (1 to 3 eV), giving promise for applications of $n$-cyclacenes in nanophotonics. \subsection*{Symmetrized von Neumann Entropy} To investigate the possible polyradical character of $n$-cyclacene/$n$-acene, we calculate the symmetrized von Neumann entropy (e.g., see Eq.\ (9) of Ref.\ \cite{GNRs-PHF}) \begin{equation}\label{eq1} S_{\text{vN}} = -\frac{1}{2} \sum_{i=1}^{\infty} \bigg\lbrace f_{i}\ \text{ln}(f_{i}) + (1-f_{i})\ \text{ln}(1-f_{i}) \bigg\rbrace, \end{equation} for the lowest singlet state of $n$-cyclacene/$n$-acene as a function of the number of benzene rings, using TAO-LDA.
Here $f_{i}$, the occupation number of the $i^{\text{th}}$ orbital obtained with TAO-LDA, ranges from 0 to 1 and is approximately equal to the occupation number of the $i^{\text{th}}$ natural orbital \cite{tao1,tao2,z,NK,HSM,multi-reference2}. For a system without strong static correlation (where the $\{f_{i}\}$ are close to either 0 or 1), $S_{\text{vN}}$ provides insignificant contributions, while for a system with strong static correlation (where the $\{f_{i}\}$ are fractional for active orbitals and close to either 0 or 1 for the others), $S_{\text{vN}}$ increases with the number of active orbitals. As shown in \Cref{fig:s}, the $S_{\text{vN}}$ value of $n$-cyclacene increases in an oscillatory manner with increasing number of benzene rings. Nonetheless, the oscillation is damped and eventually disappears with increasing $n$. For sufficiently large $n$ ($n > 30$), the $S_{\text{vN}}$ value of $n$-cyclacene converges monotonically from above to that of $n$-acene (which monotonically increases with increasing $n$). Therefore, similar to $n$-acenes \cite{tao1,tao2,z,NK,HSM,Acene-DMRG,GNRs-DMRG,GNRs-PHF,GNRs-MRAQCC,multi-reference2}, the larger $n$-cyclacenes should possess increasing polyradical character. \subsection*{Active Orbital Occupation Numbers} To illustrate the causes of the increase of $S_{\text{vN}}$ with $n$, we plot the active orbital occupation numbers for the lowest singlet state of $n$-cyclacene as a function of the number of benzene rings, calculated using TAO-LDA. Here, the highest occupied molecular orbital (HOMO) is the ${(N/2)}^{\text{th}}$ orbital, and the lowest unoccupied molecular orbital (LUMO) is the ${(N/2 + 1)}^{\text{th}}$ orbital, where $N$ is the number of electrons in $n$-cyclacene. For brevity, HOMO, HOMO$-$1, ..., and HOMO$-$15 are denoted as H, H$-$1, ..., and H$-$15, respectively, while LUMO, LUMO+1, ..., and LUMO+15 are denoted as L, L+1, ..., and L+15, respectively.
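As an illustration of how Eq. (1) behaves, the following sketch evaluates $S_{\text{vN}}$ for hypothetical occupation numbers (placeholders, not our TAO-LDA data): occupations near 0 or 1 contribute negligibly, while fractional active-orbital occupations raise the entropy.

```python
import math

def von_neumann_entropy(occupations, eps=1e-12):
    """Symmetrized von Neumann entropy, Eq. (1):
    S_vN = -(1/2) * sum_i [ f_i ln f_i + (1 - f_i) ln(1 - f_i) ]."""
    s = 0.0
    for f in occupations:
        if eps < f < 1.0 - eps:  # terms vanish as f -> 0 or f -> 1
            s -= 0.5 * (f * math.log(f) + (1.0 - f) * math.log(1.0 - f))
    return s

# Hypothetical occupation numbers (illustrative only).
closed_shell = [1.0, 1.0, 1.0, 0.0, 0.0]   # no static correlation
polyradical  = [1.0, 1.0, 0.6, 0.4, 0.0]   # fractional HOMO/LUMO occupations

print(von_neumann_entropy(closed_shell))   # 0.0: S_vN is insignificant
print(von_neumann_entropy(polyradical))    # > 0: active orbitals contribute
```

The entropy grows with the number of fractionally occupied orbitals, which is why $S_{\text{vN}}$ serves as a compact measure of polyradical character.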
As presented in \Cref{fig:occupation}, the number of fractionally occupied orbitals grows with the cyclacene size, clearly indicating that the polyradical character of $n$-cyclacene increases with the cyclacene size. Similar to the previously discussed properties, the active orbital occupation numbers of $n$-cyclacene also exhibit oscillatory (wave-packet-like) behavior. \subsection*{Real-Space Representation of Active Orbitals} For the lowest singlet states of some representative $n$-cyclacenes ($n$ = 4--7), we explore the real-space representation of the active orbitals (e.g., the HOMOs and LUMOs), obtained with TAO-LDA. Similar to previous findings for $n$-acenes \cite{Acene-DMRG,GNRs-DMRG,GNRs-PHF,z}, the HOMOs and LUMOs of $n$-cyclacenes are mainly localized at the peripheral carbon atoms (see \Cref{fig:realspace}). \section*{Conclusions} In conclusion, we have studied the electronic properties of $n$-cyclacenes ($n$ = 4--100), including the ST gaps, vertical ionization potentials, vertical electron affinities, fundamental gaps, symmetrized von Neumann entropies, active orbital occupation numbers, and real-space representations of active orbitals, using our newly developed TAO-DFT, a very efficient electronic structure method for the study of large systems with strong static correlation effects. To assess the effects of the cyclic topology, the electronic properties of $n$-cyclacenes have also been compared with those of $n$-acenes. Similar to $n$-acenes, the ground states of $n$-cyclacenes are singlets for all the cases investigated. In contrast to $n$-acenes, however, the electronic properties of $n$-cyclacenes display oscillatory behavior for small $n$ ($n \le 30$) in the approach to the corresponding properties of $n$-acenes with increasing number of benzene rings, which to the best of our knowledge has never been addressed in the literature.
The oscillatory behavior may be related to the cryptoannulenic effect and structural strain of $n$-cyclacene, which have been shown to be important for small $n$, and unimportant for sufficiently large $n$. On the basis of several measures (e.g., the smaller ST gap, the smaller $E_{g}$, and the larger $S_{\text{vN}}$), for small $n$, $n$-cyclacenes with an odd number of benzene rings should possess stronger radical character than those with an even number of benzene rings. In addition, based on the calculated orbitals and their occupation numbers, the larger $n$-cyclacenes are expected to possess increasing polyradical character in their ground states, with the active orbitals mainly localized at the peripheral carbon atoms. Since TAO-DFT is computationally efficient, it appears to be a promising method for studying the electronic properties of large systems with strong static correlation effects. Nevertheless, as with all approximate electronic structure methods, a few limitations remain. Relative to the exact full configuration interaction (FCI) method \cite{FCI}, TAO-LDA (with $\theta$ = 7 mhartree) is not variationally correct (i.e., overcorrelation can occur), and hence the orbital occupation numbers from TAO-LDA may not be the same as the NOONs from the FCI method. As the computational cost of the FCI method is prohibitive, studies of the electronic properties of $n$-cyclacenes with relatively affordable {\it ab initio} multi-reference methods are called for to assess the accuracy of our TAO-LDA results. \section*{Acknowledgements} This work was supported by the Ministry of Science and Technology of Taiwan (Grant No.\ MOST104-2628-M-002-011-MY3), National Taiwan University (Grant No.\ NTU-CDP-105R7818), the Center for Quantum Science and Engineering at NTU (Subproject Nos.:\ NTU-ERP-105R891401 and NTU-ERP-105R891403), and the National Center for Theoretical Sciences of Taiwan.
We are grateful to the Computer and Information Networking Center at NTU for the partial support of high-performance computing facilities. \section*{Author Contributions} C.-S.W. and P.-Y.L. contributed equally to this work. J.-D.C. conceived and designed the project. C.-S.W. and J.-D.C. performed the calculations. P.-Y.L. and J.-D.C. wrote the paper. All authors performed the data analysis. \section*{Additional Information} {\bf Competing financial interests:} The authors declare no competing financial interests.
\section{I. \ Objective function used to correct the ZZ-error channel} Thanks to the isomorphism between the SU(2) generators, $\sigma_X,\sigma_Y, \sigma_Z$, and the subgroup of SU(4) generators, $\sigma_{ZZ},\sigma_{ZX},\sigma_{IY}$, the right-hand side of Eq. (11) can be expressed in a more tractable way as \begin{equation}\label{eq:appendix_1} \left(\prod_{j=4}^{1}\exp\left[i\frac{\psi_j}{2} \sigma_{Z}\right]\left[U\right]^{n_j}\exp\left[-i\frac{\psi_j}{2} \sigma_{Z}\right]\right)U, \end{equation} where $U=\exp\left[-i \frac{5\theta_0}{2}(1+\delta)\sigma_{X}\right]$ and $\theta_0=\arccos\left[\frac{1}{4}\left(\sqrt{13}-1\right)\right]$. The error component of Eq. \eqref{eq:appendix_1} is then isolated by expanding the sequence to first order in $\delta$. The resulting unperturbed matrix and first-order error matrix, $A +\delta B$, are expressed in terms of the SU(2) generators as $A=\Lambda_1\sigma_I + i \Lambda_2\sigma_X + i \Lambda_3 \sigma_Y + i \Lambda_4\sigma_Z$ and $B=\Delta_1\sigma_I + i \Delta_2\sigma_X + i \Delta_3 \sigma_Y + i \Delta_4\sigma_Z$, where the $\Delta_i$'s and $\Lambda_i$'s are functions of the $\psi_j$'s, $n_j$'s and $\theta_0$. Their closed forms are too long to include here, but can be easily obtained with any symbolic computation program.\\ \indent Making use again of the isomorphism between SU(2) and a subgroup of SU(4), we express the local invariants corresponding to $\mathcal{U}^{(6k)}$ in Eq. (11) in terms of the elements of the matrix $A$: \begin{equation} \begin{aligned} G_1(\mathcal{U}^{(6k)})&=(\Lambda_1^2+\Lambda_4^2 - \Lambda_2^2 - \Lambda_3^2)^2\\ G_2(\mathcal{U}^{(6k)})&=3 \Lambda_4^4 + 3 \Lambda_1^4 - 2 \Lambda_1^2 (\Lambda_2^2 + \Lambda_3^2) + 3 (\Lambda_2^2 + \Lambda_3^2)^2 + \Lambda_4^2 (6 \Lambda_1^2 - 2 (\Lambda_2^2 + \Lambda_3^2)).
\end{aligned} \end{equation} \indent With the above expressions and the terms that constitute the matrix $B$, we construct our objective function such that the error matrix $B$ is canceled and the local invariants of the sequence and of the target operation are as close as possible. Accordingly, the objective function is given by \begin{equation}\label{eq:objective_function} f=\Delta_1^2+ \Delta_2^2 + \Delta_3^2 + \Delta_4^2 +[G_1(\mathcal{U}^{(6k)})- G_1(\mathfrak{U})]^2 +[G_2(\mathcal{U}^{(6k)})- G_2(\mathfrak{U})]^2, \end{equation} where $G_i(\mathfrak{U})$ are the local invariants of the target operation.\\ \indent The values of the solutions found by numerically minimizing the objective function while targeting a {\sc cnot} operation are: \begin{equation} \begin{aligned} \psi_1=& 1.135268,\\ \psi_2=& -0.405533,\\ \psi_3=& -1.841855,\\ \psi_4=& 0.191753. \end{aligned} \end{equation} Moreover, the angles of the local operations needed to transform $\mathcal{U}^{(6k)}_{\text{\sc cnot}}$ into {\sc cnot}, Eq. (12), are \begin{equation} \begin{aligned} \phi_1=& -1.607820,\\ \phi_2=& 0.234035. \end{aligned} \end{equation} \indent Similarly, the solutions found with the numerical minimization of Eq. \eqref{eq:objective_function} that yield a corrected rotation equivalent to $(5\theta_0/k)_{ZZ}$, for $k=\{5,10,20\}$ respectively, are \begin{equation} \begin{aligned} \psi_1=\{&-0.183589,-0.103032,-0.0522225\},\\ \psi_2=\{&-3.061776,-3.129928,-3.138440\},\\ \psi_3=\{&-2.019322,-2.583841,-2.862841\},\\ \psi_4=\{&1.750803,0.844394,0.418648\}. \end{aligned} \end{equation} \indent Finally, the single-qubit rotation angles in Eq. (13), for $k=\{5,10,20\}$, are \begin{equation} \begin{aligned} \beta_5=&3.111045,\\ \beta_{10}=&2.290846,\\ \beta_{20}=&-1.216184,\\ \gamma_{5}=&-2.117345,\\ \gamma_{10}=&-1.850509,\\ \gamma_{20}=&1.430782. \end{aligned} \end{equation} \section{II. \ Effect of imperfect local gates on the infidelity of the composite pulse sequences} The contour plots in Fig.
\ref{fig:CNOT with CK1} present the resulting infidelity of the length-40 (Eq. (10) with $k=20$) and length-120 (Eq. (12) with $k=20$) composite pulse sequences when systematic error in the two-qubit gate and imperfect local gates are taken into account. We apply each sequence to a Hamiltonian formed by an Ising coupling of strength $\alpha$ and random fluctuations only on the SU(4) generators that the particular sequence targets, $H= \alpha\sigma_{ZZ}+\sum \delta_{ij}\sigma_{ij}$ (the length-40 sequence targets all SU(4) generators but $\sigma_{ZZ}$, whereas the length-120 sequence targets all 15 SU(4) generators). As stated in the main text, each local gate of the composite sequence is perturbed by a random local gate of the form $\exp\left[-i \sum \Delta_i\sigma_i\right]\otimes \exp\left[-i \sum \Delta_j \sigma_j\right]$. We analyze separately two types of error that can affect the local gates: systematic and random. To represent the effect of systematic errors, when the same local gate is invoked multiple times in the sequence, it is invoked with the same perturbation, whereas for random errors the perturbation is never the same. For each of many realizations of the perturbations, we numerically find the average infidelity of the imperfect local gates invoked as well as the infidelity of the composite pulse sequence, which is formed using those imperfect local gates and is also perturbed by systematic errors at the two-qubit level. These infidelities are averaged over noise realizations by sampling each stochastic noise variable from a normal distribution of standard deviation $\sigma$, with the average taken over 500 samples for each value of $\sigma$. \\ \indent As mentioned in the main text and shown in the figures below, the average infidelity of a composite {\sc cnot} caused by errors in the local gates increases with the length of the sequence.
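The noise-averaging procedure can be sketched as follows. The single-qubit rotation, error model, and infidelity measure below are simplified placeholders (not the actual length-40/length-120 sequences); the sketch only illustrates sampling each noise variable from a normal distribution of standard deviation $\sigma$ and averaging the resulting infidelity over realizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit Pauli X as a stand-in for the targeted generators.
X = np.array([[0, 1], [1, 0]], dtype=complex)

def gate(theta, delta=0.0):
    """Rotation exp[-i (theta/2)(1 + delta) X]; delta models the error."""
    a = 0.5 * theta * (1.0 + delta)
    return np.cos(a) * np.eye(2) - 1j * np.sin(a) * X

def infidelity(U, V):
    """Simplified gate infidelity 1 - |Tr(U^dag V)| / d."""
    return 1.0 - abs(np.trace(U.conj().T @ V)) / U.shape[0]

def averaged_infidelity(sigma, n_samples=500):
    """Average infidelity over noise realizations delta ~ N(0, sigma)."""
    target = gate(np.pi / 2)
    samples = [infidelity(target, gate(np.pi / 2, rng.normal(0.0, sigma)))
               for _ in range(n_samples)]
    return float(np.mean(samples))

for sigma in (0.01, 0.05, 0.1):
    print(sigma, averaged_infidelity(sigma))  # infidelity grows with sigma
```

In the actual calculation, the same sampling-and-averaging loop is applied to the full two-qubit composite sequences built from the perturbed local gates.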
For systematic local errors, the {\sc cnot} infidelity increases up to about 80 times the local gate infidelity for the length-120 sequence, which contains 121 local gates. Random local errors have qualitatively the same effect as systematic ones, but are quantitatively more pernicious, resulting in a {\sc cnot} infidelity of up to about 480 times the random local gate infidelity for the length-120 sequence. Similarly, the {\sc cnot} formed using the length-40 sequence contains 41 local gates, and its infidelity is about 18 times the local gate infidelity for systematic errors and about 90 times for random errors. Fortunately, random errors are typically much smaller to begin with than systematic errors. \section{III. \ Improving the gate fidelity of the cross resonance gate between transmon qubits} In a recent experimental work with the cross resonance (CR) gate between transmon qubits by Sheldon et al. \cite{Sheldon2016}, the two-qubit entangling gate was improved through a detailed Hamiltonian estimation and the application of a cancellation tone, raising the two-qubit fidelity above 99$\%$. The experimental CR Hamiltonian is stated in terms of $\sigma_{ZX}, \ \sigma_{ZY}, \ \sigma_{ZZ}, \ \sigma_{IX}, \ \sigma_{IZ}$, and $ \sigma_{IY}$, of which $\sigma_{ZZ}$ is comparatively small and $\sigma_{IZ}$ is negligible. The authors improve the CR gate by applying a cancellation tone on the target qubit such that unwanted interaction terms of the CR Hamiltonian are eliminated. They choose the cancellation tone phase at which $\sigma_{ZY}$ and $\sigma_{IY}$ are zero, and tune the amplitude of the cancellation tone such that $\sigma_{IX}$ and $\sigma_{IY}$ are zero as well.
With this technique, the authors report a $\sigma_{ZX}$ gate that is locally equivalent to {\sc cnot} with gate fidelity of 99.1$\%$, an important improvement from previously reported fidelities of 94-96$\%$.\\ \indent Nonetheless, as stated in their work, there are systematic error terms that still remain after the experimental procedure. According to their modeling, the residual error corresponds to a $\sigma_{ZZ}$ term in the Hamiltonian around an order of magnitude smaller than the interaction term, and a $\sigma_{IX}$ term an order of magnitude smaller than $\sigma_{ZZ}$. Following the method presented in the main text, we transform this Hamiltonian into a ZZ coupling by applying a Hadamard transformation to the second qubit. In this context, the dominant error term will give two error channels from the anticommuting set, which can be corrected with the length-5 sequence, Eq. (10) with $k=5$. Using the experimentally reported parameters, and considering that with the length-5 sequence the {\sc cnot} infidelity is about 8 times the average single-qubit gate infidelity, we calculate that our sequence would immediately improve the {\sc cnot} fidelity from the current value of 99.1$\%$ up to 99.6$\%$ with the presently achievable single-qubit gate fidelities of 99.95$\%$ \cite{Sheldon2016a}. All this was calculated with the $\sigma_{IX}$ error present, which, in principle, can be completely corrected from the start by a more precisely tuned cancellation tone. If one were to improve the single-qubit gate fidelities and $T_2$ time, our sequence could further boost the two-qubit fidelity up to 99.98$\%$. 
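The quoted improvement follows from simple arithmetic: with the length-5 sequence, the composite CNOT infidelity is about 8 times the average single-qubit gate infidelity, so single-qubit fidelities of 99.95\% place the composite fidelity near 99.6\%. A quick numerical check:

```python
# Estimate from the text: composite-CNOT infidelity ~ 8x the average
# single-qubit gate infidelity for the length-5 sequence.
single_qubit_fidelity = 0.9995   # presently achievable (99.95%)
overhead_factor = 8              # length-5 sequence

cnot_infidelity = overhead_factor * (1.0 - single_qubit_fidelity)
cnot_fidelity = 1.0 - cnot_infidelity
print(f"{cnot_fidelity:.1%}")    # 99.6%, matching the estimate in the text
```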
\begin{figure}[tbp] \centering \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{fig1s_a_Sup_Mat.pdf} \caption{Length-40 sequence, local systematic error }\label{fig:systematiclength40} \end{subfigure} \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{fig1s_b_Sup_Mat.pdf} \caption{Length-40 sequence, local random error }\label{fig:randomlength40} \end{subfigure} \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{fig1s_c_Sup_Mat.pdf} \caption{Length-120 sequence, local systematic error }\label{fig:systematiclength120} \end{subfigure} \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=\linewidth]{fig1s_d_Sup_Mat.pdf} \caption{Length-120 sequence, local random error }\label{fig:randomlength120} \end{subfigure} \renewcommand{\baselinestretch}{1} \small\normalsize \begin{quote} \caption{(Color online) Composite {\sc cnot} infidelity vs averaged local gate infidelity vs noise strength ($\sigma/\alpha$). The length-40 sequence is given by Eq. (10) in the main text, with $k=20$. The length-120 sequence is given by Eq. (12) in the main text, with $k=20$.} \label{fig:CNOT with CK1} \end{quote} \end{figure} \renewcommand{\baselinestretch}{2} \small\normalsize \putbib[library] \end{bibunit} \end{document}
\section{Introduction} \label{sec:introduction} Bioinformaticians define the $k$th-order de Bruijn graph for a string or set of strings to be the directed graph whose nodes are the distinct $k$-tuples in those strings and in which there is an edge from $u$ to $v$ if there is a \((k + 1)\)-tuple somewhere in those strings whose prefix of length $k$ is $u$ and whose suffix of length $k$ is $v$.\footnote{An alternative definition, which our data structure can be made to handle but which we do not consider in this paper, has an edge from $u$ to $v$ whenever both nodes are in the graph.} These graphs have many uses in bioinformatics, including {\it de novo\/} assembly~\cite{zerbino2008velvet}, read correction~\cite{DBLP:journals/bioinformatics/SalmelaR14} and pan-genomics~\cite{siren2014indexing}. The datasets in these applications are massive and the graphs can be even larger, however, so pointer-based implementations are impractical. Researchers have suggested several approaches to representing de Bruijn graphs compactly, the two most popular of which are based on Bloom filters~\cite{wabi,cascading} and the Burrows-Wheeler Transform~\cite{bowe2012succinct,boucher2015variable,belazzougui2016bidirectional}, respectively. In this paper we describe a new approach, based on minimal perfect hash functions~\cite{mehlhorn1982program}, that is similar to that using Bloom filters but has better theoretical bounds when the number of connected components in the graph is small, and is fully dynamic: i.e., we can both insert and delete nodes and edges efficiently, whereas implementations based on Bloom filters are usually semi-dynamic and support only insertions. We also show how to modify our implementation to support, e.g., jumbled pattern matching~\cite{BCFL12} with fixed-length patterns. 
Our data structure is based on a combination of Karp-Rabin hashing~\cite{KR87} and minimal perfect hashing, which we will describe in the full version of this paper and which we summarize for now with the following technical lemmas: \begin{lemma} \label{lem:static} Given a static set $N$ of $n$ $k$-tuples over an alphabet $\Sigma$ of size $\sigma$, with high probability in $O(kn)$ expected time we can build a function \(f : \Sigma^k \rightarrow \{0, \ldots, n - 1\}\) with the following properties: \begin{itemize} \item when its domain is restricted to $N$, $f$ is bijective; \item we can store $f$ in $O(n + \log k+\log\sigma)$ bits; \item given a $k$-tuple $v$, we can compute \(f (v)\) in $\Oh{k}$ time; \item given $u$ and $v$ such that the suffix of $u$ of length \(k - 1\) is the prefix of $v$ of length \(k - 1\), or vice versa, if we have already computed \(f (u)\) then we can compute \(f (v)\) in $\Oh{1}$ time. \end{itemize} \end{lemma} \begin{lemma} \label{lem:dynamic} If $N$ is dynamic then we can maintain a function $f$ as described in Lemma~\ref{lem:static} except that:\ \begin{itemize} \item the range of $f$ becomes \(\{0, \ldots, 3 n - 1\}\); \item when its domain is restricted to $N$, $f$ is injective; \item our space bound for $f$ is $\Oh{n (\log \log n + \log \log \sigma)}$ bits with high probability; \item insertions and deletions take $\Oh{k}$ amortized expected time. \item the data structure may work incorrectly with very low probability (inversely polynomial in $n$). \end{itemize} \end{lemma} Suppose $N$ is the node-set of a de Bruijn graph. In Section~\ref{sec:static} we show how we can store $\Oh{n \sigma}$ more bits than Lemma~\ref{lem:static} such that, given a pair of $k$-tuples $u$ and $v$ of which at least one is in $N$, we can check whether the edge \((u, v)\) is in the graph. This means that, if we start with a $k$-tuple in $N$, then we can explore the entire connected component containing that $k$-tuple in the underlying undirected graph. 
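The constant-time neighbour evaluation in the last item of Lemma~\ref{lem:static} is the standard Karp-Rabin rolling update. A minimal sketch, with an illustrative prime modulus and random base rather than the parameters actually used in the paper:

```python
import random

class KarpRabin:
    """Rolling hash h(x_1 .. x_k) = sum_i x_i * r^(k-i) mod p."""

    def __init__(self, k, p=(1 << 61) - 1):
        self.k, self.p = k, p
        self.r = random.randrange(2, p)      # random base
        self.rk = pow(self.r, k - 1, p)      # r^(k-1), used to drop the leading symbol

    def hash(self, s):
        # O(k) evaluation from scratch
        h = 0
        for c in s:
            h = (h * self.r + ord(c)) % self.p
        return h

    def slide(self, h, old, new):
        """O(1) hash of v from the hash of u when v = u[1:] + new."""
        h = (h - ord(old) * self.rk) % self.p
        return (h * self.r + ord(new)) % self.p
```

Sliding along a text therefore hashes each successive overlapping $k$-tuple in constant time after the first, which is exactly the property the last item of the lemma relies on.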
On the other hand, if we start with a $k$-tuple not in $N$, then we will learn that fact as soon as we try to cross an edge to a $k$-tuple that is in $N$. To deal with the possibility that we never try to cross such an edge, however --- i.e.,\@\xspace that our encoding as described so far is consistent with a graph containing a connected component disjoint from $N$ --- we cover the vertices with a forest of shallow rooted trees. We store each root as a $k$-tuple, and for each other node we store \(1 + \lg \sigma\) bits indicating which of its incident edges leads to its parent. To verify that a $k$-tuple we are considering is indeed in the graph, we ascend to the root of the tree that contains it and check that that $k$-tuple is what we expect. The main challenge for making our representation dynamic with Lemma~\ref{lem:dynamic} is updating the covering forest. In Section~\ref{sec:dynamic} we show how we can do this efficiently while maintaining our depth and size invariants. Finally, in Section~\ref{sec:jumbled} we observe that our representation can be easily modified for other applications by replacing the Karp-Rabin hash function by other kinds of hash functions. To support jumbled pattern matching with fixed-length patterns, for example, we hash the histograms indicating the characters' frequencies in the $k$-tuples. \section{Static de Bruijn Graphs} \label{sec:static} Let \(G\) be a de Bruijn graph of order \(k\), let \(N = \{v_0, \ldots, v_{n-1}\}\) be the set of its nodes, and let \(E = \{a_0, \ldots, a_{e-1}\}\) be the set of its edges. We call each \(v_i\) either a node or a \(k\)-tuple, using the two terms interchangeably since there is a one-to-one correspondence between nodes and labels. We maintain the structure of \(G\) by storing two binary matrices, \IN and \OUT, of size \(n \times \sigma\). For each node, the former represents its incoming edges whereas the latter represents its outgoing edges.
In particular, for each \(k\)-tuple \(v_x = c_1 c_2 \ldots c_{k-1} a\), the former stores a row of length \(\sigma\) such that, if there exists another \(k\)-tuple \(v_y = b c_1 c_2 \ldots c_{k-1}\) and an edge from \(v_y\) to \(v_x\), then the position indexed by \(b\) of such row is set to \TRUE. Similarly, \OUT contains a row for \(v_y\) and the position indexed by \(a\) is set to \TRUE. As previously stated, each \(k\)-tuple is uniquely mapped to a value between \(0\) and \(n-1\) by \(f\), where $f$ is as defined in Lemma~\ref{lem:static}, and therefore we can use these values as indices for the rows of the matrices \IN and \OUT, i.e.,\@\xspace in the previous example the values of \(\IN[f(v_x)][b]\) and \(\OUT[f(v_y)][a]\) are set to \TRUE. We note that, e.g., the SPAdes assembler~\cite{Ban12} also uses such matrices. Suppose we want to check whether there is an edge from \(b X\) to \(X a\). Letting \(f(b X) = i\) and \(f(X a) = j\), we first assume \(b X\) is in \(G\) and check the values of \(\OUT [i] [a] \) and \( \IN [j] [b]\). If both values are \TRUE, we report that the edge is present and we say that the edge is \emph{confirmed} by \IN and \OUT; otherwise, if any of the two values is \FALSE, we report that the edge is absent. Moreover, note that if \(b X\) is in \(G\) and \(\OUT [i] [a] = \TRUE\), then \(X a\) is in \(G\) as well. Symmetrically, if \(X a\) is in \(G\) and \(\IN [j] [b] = \TRUE\), then \(b X\) is in \(G\) as well. Therefore, if \(\OUT [i] [a] = \IN [j] [b] = \TRUE\), then \(b X\) is in \(G\) if and only if \(X a\) is. This means that, if we have a path \(P\) and if all the edges in \(P\) are confirmed by \IN and \OUT, then either all the nodes touched by \(P\) are in \(G\) or none of them is. We now focus on detecting false positives in our data structure maintaining a reasonable memory usage. Our strategy is to sample a subset of nodes for which we store the plain-text \(k\)-tuple and connect all the unsampled nodes to the sampled ones. 
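As an illustration of the \IN/\OUT edge-confirmation test just described, here is a minimal Python sketch in which a plain dictionary stands in for the minimal perfect hash function $f$ of Lemma~\ref{lem:static} (all identifiers are ours):

```python
def build_matrices(nodes, edges, alphabet):
    """Build the IN/OUT boolean matrices, rows indexed by f(node), columns by character."""
    f = {v: i for i, v in enumerate(nodes)}        # stand-in for the MPHF
    col = {c: i for i, c in enumerate(alphabet)}
    IN = [[False] * len(alphabet) for _ in nodes]
    OUT = [[False] * len(alphabet) for _ in nodes]
    for u, v in edges:                             # u = bX, v = Xa, with |X| = k - 1
        assert u[1:] == v[:-1]
        OUT[f[u]][col[v[-1]]] = True               # u has an outgoing edge labelled a
        IN[f[v]][col[u[0]]] = True                 # v has an incoming edge labelled b
    return f, col, IN, OUT

def confirmed(f, col, IN, OUT, u, v):
    """The edge (u, v) is confirmed iff both matrix entries are True."""
    return OUT[f[u]][col[v[-1]]] and IN[f[v]][col[u[0]]]
```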
More precisely, we partition nodes in the undirected graph \(G^\prime\) underlying \(G\) into a forest of rooted trees of height at least \(k \lg \sigma \) and at most \(3 k \lg \sigma\). For each node we store a pointer to its parent in the tree, which takes \(1 + \lg \sigma\) bits per node, and we sample the \(k\)-tuple at the root of each such tree. We allow a tree to have height smaller than \(k \lg \sigma\) when necessary, e.g., if it covers an entire connected component. Figure~\ref{fig:trees} shows an illustration of this idea. \begin{figure}[t!] \begin{center} \includegraphics[width=\textwidth]{trees.pdf} \caption{Given a de Bruijn graph (left), we cover the underlying undirected graph with a forest of rooted trees of height at most \(3 k \lg \sigma\) (center). The roots are shown as filled nodes, and parent pointers are shown as arrows; notice that the directions of the arrows in our forest are not related to the edges' directions in the original de Bruijn graph. We sample the $k$-tuples at the roots so that, starting at a node we think is in the graph, we can verify its presence by finding the root of its tree and checking its label in $\Oh{k \log \sigma}$ time. The most complicated kind of update (right) is adding an edge between a node $u$ in a small connected component and a node $v$ in a large one, where $v$'s depth is more than \(2 k \lg \sigma\) in its tree. We re-orient the parent pointers in $u$'s tree to make $u$ the temporary root, then make $u$ point to $v$. We ascend \(k \lg \sigma\) steps from $v$, then delete the parent pointer $e$ of the node $w$ we reach, making $w$ a new root. (To keep this figure reasonably small, some distances in this example are smaller than prescribed by our formulas.)} \label{fig:trees} \end{center} \end{figure} We can therefore check whether a given node \(v_x\) is in \(G\) by first computing \(f(v_x)\) and then checking and ascending at most \(3 k \lg \sigma\) edges, updating \(v_x\) and \(f(v_x)\) as we go.
Once we reach the root of the tree we can compare the resulting \(k\)-tuple with the one sampled to check if \(v_x\) is in the graph. This procedure requires \Oh{k \lg \sigma} time since computing the first value of \(f(v_x)\) requires \Oh{k}, ascending the tree requires constant time per edge, and comparing the \(k\)-tuples requires \Oh{k}. We now describe a Las Vegas algorithm for the construction of this data structure that requires, with high probability, \Oh{kn + n\sigma} expected time. We recall that \(N\) is the set of input nodes of size \(n\). We first select a function \(f\) and construct a bitvector \(B\) of size \(n\) initialized with all its elements set to \FALSE. For each element \(v_x\) of \(N\) we compute \(f(v_x) = i\) and check the value of \(B[i]\). If this value is \FALSE we set it to \TRUE and proceed with the next element in \(N\); if it is already set to \TRUE, we reset \(B\), select a different function \(f\), and restart the procedure from the first element in \(N\). Once we finish this procedure --- i.e.,\@\xspace once we have found that \(f\) does not produce collisions when applied to \(N\) --- we store \(f\) and proceed to initialize \IN and \OUT correctly. This procedure requires, with high probability, \Oh{kn} expected time for constructing \(f\) and \Oh{n\sigma} time for computing \IN and \OUT. Notice that if \(N\) is the set of \(k\)-tuples of a single text sorted by their starting position in the text, each \(f(v_x)\) can be computed in constant time from \(f(v_{x-1})\), except for \(f(v_0)\), which still requires \Oh{k}. More generally, if \(N\) is the set of \(k\)-tuples of \(t\) texts sorted by their initial position, we can compute \(n - t\) values of the function \(f(v_x)\) in constant time from \(f(v_{x-1})\) and the remaining ones in \Oh{k}. We will explain how to build the forest in the full version of this paper. In this case the construction requires, with high probability, \(\Oh{kt + n + n\sigma} = \Oh{kt + n\sigma}\) expected time.
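The retry loop at the heart of this Las Vegas construction can be sketched as follows. This is illustrative only: a simple random polynomial hash plays the role of the paper's function family, and the range must be comfortably larger than $|N|$ for this toy version to terminate quickly.

```python
import random

def select_injective_f(N, range_size):
    """Las Vegas selection: retry random hash functions until one is collision-free on N."""
    p = (1 << 61) - 1                                    # a large prime modulus
    while True:
        a = random.randrange(1, p)                       # pick a fresh random function
        f = lambda v, a=a: sum(ord(c) * pow(a, i, p)
                               for i, c in enumerate(v)) % p % range_size
        B = [False] * range_size                         # bitvector of used hash values
        ok = True
        for v in N:
            i = f(v)
            if B[i]:
                ok = False                               # collision: restart with a new f
                break
            B[i] = True
        if ok:
            return f                                     # f is injective on N
```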
Combining our forest with Lemma~\ref{lem:static}, we can summarize our static data structure in the following theorem: \begin{theorem} \label{thm:static} Given a static $\sigma$-ary $k$th-order de Bruijn graph $G$ with $n$ nodes, with high probability in $\Oh{k n + n \sigma}$ expected time we can store $G$ in $\Oh{\sigma n}$ bits plus $\Oh{k \log \sigma}$ bits for each connected component in the underlying undirected graph, such that checking whether a node is in $G$ takes $\Oh{k \log \sigma}$ time, listing the edges incident to a node we are visiting takes $\Oh{\sigma}$ time, and crossing an edge takes $\Oh{1}$ time. \end{theorem} In the full version we will show how to use monotone minimal perfect hashing~\cite{BBPV09} to reduce the space to $(2+\epsilon)n\sigma$ bits (for any constant $\epsilon>0$). We will also show how to reduce the time to list the edges incident to a node of degree $d$ to $O(d)$, and the time to check whether a node is in $G$ to $\Oh{k}$. We note that the obtained space and query times are both optimal up to constant factors, unlike those of previous methods, which have additional factors depending on $k$ and/or $\sigma$ in space and/or time. \section{Dynamic de Bruijn Graphs} \label{sec:dynamic} In the previous section we presented a static representation of de Bruijn graphs; we now show how to make this data structure dynamic. In particular, we will show how we can insert and remove edges and nodes and that updating the graph reduces to managing the covering forest over \(G\). In this section, when we refer to $f$ we mean the function defined in Lemma~\ref{lem:dynamic}. We first show how to add or remove an edge in the graph and will later describe how to add or remove a node in it. The updates must maintain the following invariant: any tree must have size at least $k\log\sigma$ and height at most $3k\log\sigma$ except when the tree covers (all nodes in) a connected component of size at most $k\log\sigma$.
Let \(v_x\) and \(v_y\) be two nodes in \(G\), \(e = (v_x, v_y)\) be an edge in \(G\), and let \(f(v_x) = i\) and \(f(v_y) = j\). Suppose we want to add \(e\) to \(G\). First, we set to \TRUE the values of \(\OUT[i][a]\) and \(\IN[j][b]\) in constant time. We then check, in \Oh{k \lg \sigma} time per node, whether \(v_x\) and \(v_y\) lie in connected components of size less than \(k \lg \sigma\). If both components have size greater than \(k \lg \sigma\) we do not have to proceed further since the trees will not change. If both connected components have size less than \(k \lg \sigma\) we merge their trees in \Oh{k \lg \sigma} time by traversing both trees and switching the orientation of the edges in them, discarding the samples at the roots of the old trees and sampling the new root in \Oh{k} time. If only one of the two connected components has size greater than \(k \lg \sigma\) we select it and perform a tree traversal to check whether the depth of its endpoint is less than \(2 k \lg \sigma\). If it is, we connect the two trees as in the previous case. If it is not, we traverse the tree in the bigger component upwards for \(k \lg \sigma\) steps, delete the edge pointing to the parent of the node we reached, creating a new tree, and merge it with the smaller one. This procedure requires \Oh{k \lg \sigma} time since deleting the edge pointing to the parent in the tree requires \Oh{1} time, i.e.,\@\xspace we have to reset the pointer to the parent in only one node. Suppose now that we want to remove \(e\) from \(G\). First we set to \FALSE the values of \(\OUT[i][a]\) and \(\IN[j][b]\) in constant time. Then, we check in \Oh{k} time whether \(e\) is an edge in some tree by computing \(f(v_x)\) and \(f(v_y)\) checking for each node if that edge is the one that points to their parent. If \(e\) is not in any tree we do not have to proceed further whereas if it is we check the size of each tree in which \(v_x\) and \(v_y\) are.
If either of the two trees is small (i.e.,\@\xspace if it has fewer than \(k \lg \sigma\) elements) we search for an edge leaving the tree that connects it to some other tree. If such an edge is not found we conclude that we are in a small connected component that is covered by the current tree, and we sample a node in the tree as a root and switch the directions of some edges if necessary. If such an edge is found, we merge the small tree with the bigger one by adding the edge and switch the direction of some edges originating from the small tree if necessary. Finally, if the height of the new tree exceeds $3k\log\sigma$, we traverse the tree upwards from the deepest node in the tree (which was necessarily a node in the smaller tree before the merger) for \(2k \lg \sigma\) steps and delete the edge pointing to the parent of the reached node, creating a new tree. This procedure requires $\Oh{k \lg \sigma}$ expected time since the number of nodes traversed is at most \(O(k \lg \sigma)\) and the number of changes to the data structures is also at most \(O(k \lg \sigma)\), with each change taking expected constant time. It is clear that the insertion and deletion algorithms will maintain the invariant on the tree sizes. It is also clear that the invariant implies that the number of sampled nodes is $O(n/(k\log\sigma))$ plus the number of connected components. We now show how to add and remove a node from the graph. Adding a node is trivial since it will not have any edge connecting it to any other node. Therefore adding a node reduces to modifying the function \(f\) and requires \Oh{k} amortized expected time. When we want to remove a node, we first remove all its edges one by one and, once the node is isolated from the graph, we remove it by updating the function \(f\). Since a node will have at most \(\sigma\) edges and updating \(f\) requires \Oh{k} amortized expected time, the amortized expected time complexity of this procedure is $\Oh{\sigma k\lg \sigma+ k}$.
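Both update procedures repeatedly re-orient a tree: they make some node the root by flipping the parent pointers along its root path, then attach it below a node of another tree. A minimal sketch of that primitive, with parent pointers as a dictionary and \texttt{None} marking a root (identifiers are ours):

```python
def reroot(parent, u):
    """Reverse the parent pointers on the path from u to its root, making u the root.

    `parent` maps each node to its parent, with None at the root; O(height) time.
    """
    prev, node = None, u
    while node is not None:
        nxt = parent[node]
        parent[node] = prev
        prev, node = node, nxt

def attach(parent, u, v):
    """Merge u's tree into v's tree: re-root at u, then hang u below v."""
    reroot(parent, u)
    parent[u] = v
```

In the data structure itself each parent pointer is only \(1 + \lg \sigma\) bits naming an incident edge rather than a full machine pointer, but the pointer-flipping logic is the same.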
Combining these techniques for updating our forest with Lemma~\ref{lem:dynamic}, we can summarize our dynamic data structure in the following theorem: \begin{theorem} \label{thm:dynamic} We can maintain a $\sigma$-ary $k$th-order de Bruijn graph $G$ with $n$ nodes that is fully dynamic (i.e., supporting node and edge insertions and deletions) in $\Oh{n (\log \log n + \sigma)}$ bits (plus $\Oh{k \log \sigma}$ bits for each connected component) with high probability, such that we can add or remove an edge in expected \Oh{k\lg\sigma} time, add a node in expected \Oh{k+\sigma} time, and remove a node in expected \Oh{\sigma k\lg \sigma} time, and queries have the same time bounds as in Theorem~\ref{thm:static}. The data structure may work incorrectly with very low probability (inversely polynomial in $n$). \end{theorem} \section{Jumbled Pattern Matching} \label{sec:jumbled} Karp-Rabin hash functions implicitly divide their domain into equivalence classes --- i.e., subsets in which the elements hash to the same value. In this paper we have chosen Karp-Rabin hash functions such that each equivalence class contains only one $k$-tuple in the graph. Most of our efforts have gone into being able, given a $k$-tuple and a hash value, to determine whether that $k$-tuple is the unique element of its equivalence class in the graph. In some sense, therefore, we have treated the equivalence relation induced by our hash functions as a necessary evil, useful for space-efficiency but otherwise an obstacle to be overcome. For some applications, however --- e.g., parameterized pattern matching, circular pattern matching or jumbled pattern matching --- we are given an interesting equivalence relation on strings and asked to preprocess a text such that later, given a pattern, we can determine whether any substrings of the text are in the same equivalence class as the pattern. 
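For jumbled pattern matching, for example, the equivalence classes are sets of strings with equal character histograms, and a hash of the histogram respects the relation by construction. A minimal sketch (the concrete hash is a placeholder, not the function used in the paper):

```python
from collections import Counter

def histogram_key(ktuple):
    """Canonical, order-insensitive key: sorted (character, count) pairs."""
    return tuple(sorted(Counter(ktuple).items()))

def jumbled_hash(ktuple, m=1 << 20):
    # Any hash of the histogram works here; Python's built-in tuple
    # hash is only a stand-in for a Karp-Rabin-style function.
    return hash(histogram_key(ktuple)) % m
```

All orderings of a $k$-tuple collide by design, so the machinery built around $f$ then indexes multisets of characters rather than strings.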
We can modify our data structure for some of these applications by replacing the Karp-Rabin hash function by other kinds of hash functions. For indexed jumbled pattern matching~\cite{BCFL12,KRR13,ACLL14} we are asked to pre-process a text such that later, given a pattern, we can determine quickly whether any substring of the text consists of exactly the same multiset of characters as the pattern. Consider fixed-length jumbled pattern matching, in which the length of the patterns is fixed at pre-processing time. If we modify Lemmas~\ref{lem:static} and~\ref{lem:dynamic} so that, instead of using Karp-Rabin hashes in the definition of the function $f$, we use a hash function on the histograms of characters' frequencies in $k$-tuples, our function $f$ will map all permutations of a $k$-tuple to the same value. The rest of our implementation stays the same, but now the nodes of our graph are multisets of characters of size $k$ and there is an edge between two nodes $u$ and $v$ if it is possible to replace an element of $u$ to obtain $v$. If we build our graph for the multisets of characters in $k$-tuples in a string $S$, then our process for checking whether a node is in the graph tells us whether there is a jumbled match in $S$ for a pattern of length $k$. If we build a tree in which the root is a graph for all of $S$, the left and right children of the root are graphs for the first and second halves of $S$, etc., as described by Gagie et al.~\cite{GHLW15}, then we increase the space by a logarithmic factor but we can return the locations of all matches quickly. \begin{theorem} \label{thm:jumbled} Given a string \(S [1..n]\) over an alphabet of size $\sigma$ and a length $k \ll n$, with high probability in $\Oh{k n + n \sigma}$ expected time we can store \((2n \log \sigma)(1+o(1))\) bits such that later we can determine in $\Oh{k \log \sigma}$ time if a pattern of length $k$ has a jumbled match in $S$.
\end{theorem} \section*{Acknowledgements} Many thanks to Rayan Chikhi and the anonymous reviewers for their comments. \bibliographystyle{splncs03}
[ "Bioinformaticians", "de Bruijn graph", "Bloom filters", "Burrows-Wheeler Transform", "minimal perfect hash functions", "Karp-Rabin hashing", "de Bruijn Graphs", "G", "SPAdes assembler", "Las Vegas algorithm", "de Bruijn graph", "de Bruijn Graphs", "Lemma", "Dynamic de Bruijn Graphs", "Karp-Rabin hash functions", "Gagie et al.", "Rayan Chikhi", "anonymous reviewers" ]
[ "de Bruijn graph", "G", "Lemma", "Karp-Rabin hashing", "de Bruijn Graphs" ]
\section{Introduction} A popular version of the third law of thermodynamics is that the entropy density of a physical system tends to zero in the $T \to 0$ limit\cite{mandl}. However, there is a class of theoretical models that violate this law\cite{fowler33,pauling,nagle66,lieb67,chow87,bramwell01,castelnovo08}:\ models in this class exhibit a ground-state degeneracy which grows exponentially with the system size, leading to a non-zero entropy density even at $T=0$. Nor can these be easily dismissed as theorists' abstractions, since one also sees ample evidence in experiment\cite{giauque36,harris97,ramirez99,higashinaka03} that there are systems in which the entropy plateaus at a non-zero value over a large range of temperature. In many such cases it is suspected that it eventually falls to zero at a much lower temperature scale, though recent theoretical work on skyrmion magnets suggests that this intuition may not always be reliable \cite{moessner}. Whatever the ultimate low-temperature fate of these materials, it is clear that over a broad range of temperatures they exhibit physics which is well captured by models with a non-zero residual entropy density. One important class of these are so-called ice models, in which the ground-state manifold consists of all configurations which satisfy a certain local `ice rule' constraint\cite{siddharthan99,denhertog00,isakov05}. The first such model was Pauling's model for the residual configurational entropy of water ice\cite{pauling}. Here the local constraint is that two of the four hydrogens neighboring any given oxygen should be chemically bonded to it to form a water molecule. Similar models were subsequently discovered to apply to the orientations of spins along local Ising axes in certain rare-earth pyrochlores\cite{siddharthan99,bramwell01}, which by analogy were dubbed `spin ice' compounds. 
Such models develop power-law spin-spin correlations at low temperatures, with characteristic `pinch points' in the momentum representation of the spin-spin correlation function\cite{bramwell01a}, but they do not order. Their low-temperature state is often referred to as a `co-operative paramagnet' \cite{villain79}. One interesting feature of such co-operative paramagnets is their response to an applied magnetic field. The configurations that make up the ice-rule manifold usually have different magnetizations; thus an applied field, depending on its direction, may either reduce\cite{higashinaka03,hiroi03,tabata06} or entirely eliminate\cite{fukazawa02,jaubert08} the degeneracy. In the latter case, further interesting physics may arise when the system is heated, especially if the ice-rule constraints do not permit the thermal excitation of individual flipped spins. In such cases the lowest-free-energy excitation may be a {\it string\/} of flipped spins extending from one side of the system to the other. A demagnetization transition mediated by such excitations is known as a {\it Kasteleyn transition}\cite{kasteleyn,fennell07,jaubert08}. In spin ice research to date, insight has often been gained from the study of simplified models where the dimensionality is reduced or the geometry simplified while retaining the essential physics\cite{mengotti11,chern12,wan}. In that spirit, we present in this paper a two-dimensional ice model which exhibits a Kasteleyn transition in an applied magnetic field. The model is especially interesting since, unlike its three-dimensional counterparts, it has the same Ising quantization axis for every spin. This raises the possibility that it could be extended to include a transverse magnetic field, thereby allowing the exploration of quantum Kasteleyn physics. The remainder of this paper is structured as follows. 
In section \ref{sec:model}, we present our spin ice model, along with some analytical and numerical results on its thermodynamic properties in the absence of an applied magnetic field. In section \ref{sec:kasteleyn}, we analyse the model in the presence of a magnetic field:\ we show that it has a Kasteleyn transition, and we characterize it. In section \ref{sec:entropy}, we use an alternative representation of the ice-rule states --- the `string representation' --- to determine the model's entropy as a function of its magnetization. Finally, in section \ref{sec:summary}, we summarize our findings and discuss possible future lines of work. \section{The model} \label{sec:model} The model that we shall consider has the following Hamiltonian: \be H = \sum_{ij} J_{ij} \sigma_i \sigma_j - h \sum_i \sigma_i. \label{ham} \ee Here $i$ and $j$ label the sites of a two-dimensional square lattice, $\sigma_i = \pm 1$ is an Ising variable on lattice site $i$, and $h$ is an externally applied (longitudinal) magnetic field. The exchange interaction $J_{ij}$ is given by: \be J_{ij} = \left\{ \begin{array}{lll} \phantom{-}J & \qquad & {\bf r}_j = {\bf r}_i + {\hat {\bf x}}; \\ -J & \qquad & {\bf r}_j = {\bf r}_i + {\hat {\bf y}}; \\ -J & \qquad & {\bf r}_i = n {\hat {\bf x}} + m {\hat {\bf y}} \,\,\,\,(n+m\,\,\mbox{odd}) \\ & & \quad \mbox{and}\,{\bf r}_j = {\bf r}_i + {\hat {\bf x}} + {\hat {\bf y}}; \\ -J & \qquad & {\bf r}_i = n {\hat {\bf x}} + m {\hat {\bf y}} \,\,\,\,(n+m\,\,\mbox{even}) \\ & & \quad \mbox{and}\,{\bf r}_j = {\bf r}_i - {\hat {\bf x}} + {\hat {\bf y}}; \\ \phantom{-}0 & & \mbox{otherwise,} \end{array} \right. \label{exchanges} \ee where ${\bf r}_i$ is the position vector of site $i$, ${\hat {\bf x}}$ and ${\hat {\bf y}}$ are the unit vectors of a Cartesian system in the two-dimensional plane, and $J$ is a positive constant. In this paper, we shall always work in the limit $J \gg \vert h \vert, k_B T$. 
Furthermore, where necessary we shall take the number of sites in the lattice to be $N$, always assuming $N$ to be large enough that edge effects can be neglected. When we refer to the density of something (e.g.\ the entropy density), we shall always mean that quantity divided by the number of spins --- not, for example, by the number of plaquettes. The lattice described by (\ref{exchanges}) is shown in the upper-left inset of Fig.~\ref{defects}, with ferromagnetic bonds represented by solid lines and antiferromagnetic bonds represented by dotted lines. One may view this lattice as made of corner-sharing plaquettes, one of which is shown in the lower-right inset of Fig.~\ref{defects}. It is easy to see that the bonds on this plaquette cannot all be satisfied at once:\ the model (\ref{ham}) is therefore magnetically frustrated. The sixteen spin configurations of the elementary plaquette, together with their energies, are shown in Table \ref{plaqconf}. \begin{table} \begin{center} \begin{tabular}{c|c|c|c|c|c|c} Configuration & $\ua\ua\ua\ua$ & $\ua\da\da\ua$ & $\ua\da\ua\da$ & $\da\ua\da\ua$ & $\da\ua\ua\da$ & $\da\da\da\da$ \\ \hline Energy & $-2J-4h$ & $-2J$ & $-2J$ & $-2J$ & $-2J$ & $-2J+4h$ \end{tabular} \vspace*{3mm} \begin{tabular}{c|c|c|c|c|c|c|c|c} Configuration & $\ua\ua\ua\da$ & $\ua\ua\da\ua$ & $\ua\da\ua\ua$ & $\da\ua\ua\ua$ & $\da\da\da\ua$ & $\da\da\ua\da$ & $\da\ua\da\da$ & $\ua\da\da\da$ \\ \hline Energy & $-2h$ & $-2h$ & $-2h$ & $-2h$ & $2h$ & $2h$ & $2h$ & $2h$ \end{tabular} \vspace*{3mm} \begin{tabular}{c|c|c} Configuration & $\ua\ua\da\da$ & $\da\da\ua\ua$ \\ \hline Energy & $6J$ & $6J$ \end{tabular} \end{center} \caption{The energies of the sixteen spin configurations of the elementary plaquette. Each configuration is specified by listing the orientations of the four plaquette spins in the order corresponding to the numbering in Fig.~\ref{defects}.
The first six configurations listed are those that, in the absence of an external magnetic field, constitute the sixfold-degenerate ground-state (or `ice rule') manifold.} \label{plaqconf} \end{table} When $h=0$, i.e.\ in the absence of an external magnetic field, there are six degenerate ground-state configurations. They are shown in the left-hand inset of Fig.~\ref{C}:\ we shall call them the `ice-rule configurations,' and the manifold spanned by them the `ice-rule manifold.' Since this Ising model is magnetically frustrated, we do not expect it to show an ordering transition as the temperature is reduced. Rather, we expect a smooth crossover into a co-operative paramagnetic state in which every plaquette is in one of the ice-rule configurations. The density of defects (a measure of how many plaquettes are not in an ice-rule configuration) should vanish smoothly as the temperature tends to zero, and the specific heat will show a corresponding Schottky-like peak at temperatures $T \sim J/k_B$ but no sharp features. Because the ground-state degeneracy is exponential in the system size, the model will have a non-zero entropy density even at zero temperature. A na{\"\i}ve estimate would suggest a value of $k_B \ln 6$ per plaquette, i.e.\ $\frac{1}{2} k_B \ln 6 \approx 0.896\,k_B$ per spin, due to the six-fold ground-state degeneracy. This estimate, however, is too na{\"\i}ve, since it ignores the important constraint that the ice-rule configurations chosen for two neighboring plaquettes must agree on the orientation of the spin at their shared corner. We may easily improve our estimate of the zero-temperature entropy density by taking this constraint into account at a local level. Imagine `growing' a spin configuration of the lattice from top to bottom. Each time a new row is added, the orientations of spins 1 and 2 of each plaquette of the row being added ($j$) will be fixed by the (already chosen) configuration of the row above ($j-1$). 
The ice rules for this model do not favor any particular direction for any single site on the plaquette; hence the probabilities of the four configurations of this pair of spins are simply $P_{\ua\ua} = P_{\ua\da} = P_{\da\ua} = P_{\da\da} = 1/4$. The number of ice-rule configurations consistent with these constraints is (see Fig.~\ref{C}) $N_{\ua\ua} = N_{\da\da} = 1$; $N_{\ua\da} = N_{\da\ua} = 2$. Thus half the plaquettes in the new row have no choice of configuration, while the other half may choose between two. This gives an average entropy per plaquette of $\frac{1}{2} k_B \ln 2$, which corresponds to an entropy density of $\frac{1}{4} k_B \ln 2 \approx 0.173\,k_B$ per spin. This estimate is still rather crude, since it neglects correlations between the configurations of neighboring plaquettes in row $j-1$, which will be induced by their connections to a common plaquette in row $j-2$. However, it was shown by Lieb \cite{lieb67} that such correlation corrections may be resummed to yield an exact result for the ground-state entropy density of such `square ice' models:\ $s_0 \equiv S_0/N = \frac{3}{4} k_B \ln \left( \frac{4}{3} \right) \approx 0.216\,k_B$. We shall call this value the `Lieb entropy density,' and denote it $s_0^{\rm Lieb}$. All of the above expectations are borne out by Monte Carlo simulations of the model, the results of which are shown in Figs.~\ref{defects}--\ref{C}. First, we demonstrate the increasing predominance of ice-rule configurations as the temperature is lowered. For this it is useful to define the number of defects on a plaquette as the number of single spin-flips by which the spin configuration deviates from the closest ice-rule configuration. By this measure, the states in the top line of Table \ref{plaqconf} have zero defects, those in the second line have one, and those in the third line have two. Fig.~\ref{defects} shows the density of defects as a function of temperature. 
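The plaquette spectrum of Table \ref{plaqconf}, the defect counts, and the entropy estimates above can all be checked by brute force. The sketch below uses our own labelling of the four spins: restricted to one crossed plaquette, Eq. (\ref{exchanges}) gives two $+J$ bonds (the two $x$-direction pairs) and four $-J$ bonds (the two $y$-direction pairs plus both diagonals).

```python
import math
from itertools import product
from collections import Counter

J = 1.0  # work in units of J

def plaquette_energy(s, h=0.0):
    """Energy of one crossed plaquette with spins s = (s1, s2, s3, s4).

    Our labelling: (s1, s2) and (s3, s4) are the two +J pairs; the four
    remaining pairs carry -J bonds (two edges plus the two diagonals).
    """
    s1, s2, s3, s4 = s
    E = J * (s1 * s2 + s3 * s4)
    E -= J * (s1 * s3 + s2 * s4 + s1 * s4 + s2 * s3)
    return E - h * (s1 + s2 + s3 + s4)

# At h = 0 the sixteen configurations reproduce Table 1: six ice-rule
# states at -2J, eight one-defect states at 0, two two-defect states at +6J.
spectrum = Counter(plaquette_energy(s) for s in product([+1, -1], repeat=4))
assert spectrum == {-2.0: 6, 0.0: 8, 6.0: 2}

# The all-up state in a field has E = -2J - 4h, as in the table.
assert plaquette_energy((1, 1, 1, 1), h=0.5) == -2 * J - 4 * 0.5

# Average defects per plaquette at infinite temperature, halved because
# there are two spins per plaquette:
assert (0 * 6 + 1 * 8 + 2 * 2) / 16 / 2 == 3 / 8

# The three entropy-density estimates per spin quoted in the text:
assert round(0.5 * math.log(6), 3) == 0.896       # naive six-fold estimate
assert round(0.25 * math.log(2), 3) == 0.173      # row-by-row estimate
assert round(0.75 * math.log(4 / 3), 3) == 0.216  # Lieb's exact value
```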
\begin{figure} \centerline{\includegraphics[width=0.95\columnwidth]{Fig1.eps}} \caption{The density of defects, $\rho_{\rm defects}$, as a function of scaled temperature, $k_B T/J$, for a lattice of 8192 spins and in the absence of an applied magnetic field. The number of defects on a plaquette is defined as the number of single spin-flips by which it differs from the nearest ice-rule configuration. Thus each state in the ground-state manifold of the system has $\rho_{\rm defects}=0$. The dotted line marks the high-temperature asymptotic value of $3/8$ (see text). Inset (top left): A portion of the lattice, with ferromagnetic bonds represented by solid lines and antiferromagnetic bonds by dotted lines. Inset (bottom right): The unit cell of the lattice, including the numbering convention we use for the spins on a single plaquette.} \label{defects} \end{figure} The asymptotic high-temperature value of this quantity can be easily calculated. In the infinite-temperature limit all configurations of a plaquette are equally probable, i.e.\ each has a probability $\frac{1}{16}$. From Table \ref{plaqconf}, we see that there are six configurations with no defects, eight configurations with one, and two configurations with two. Hence the average number of defects per plaquette at infinite temperature is $0 \times \frac{6}{16} + 1 \times \frac{8}{16} + 2 \times \frac{2}{16} = \frac{3}{4}$. Since there are twice as many spins as plaquettes, the defect density is simply half of this, i.e.\ $\rho_{\rm defects} \to \frac{3}{8} = 0.375$ as $k_B T/J \to \infty$. Second, we calculate the entropy density of the system as a function of temperature, using the Wang-Landau method\cite{wang2001}. The results are shown in Fig.~\ref{Entropy}. At high temperatures the entropy density tends to $k_B \ln 2$, the Ising paramagnetic value. At low temperatures it tends to a non-zero constant value which is in good agreement with the Lieb entropy density $s_0^{\rm Lieb}$ given above. 
In between there are no sharp features, confirming that the model exhibits only a crossover from high-temperature paramagnetic to low-temperature cooperative-paramagnetic behavior. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{Fig2.eps}} \caption{The dimensionless entropy density of the system, $s/k_B$, as a function of scaled temperature, $k_B T/J$, for a lattice of 8192 spins and in the absence of an applied magnetic field, calculated using the Wang-Landau method. At high temperatures the entropy density is that of an Ising paramagnet, $k_B \ln 2$ per spin. The zero-temperature residual entropy density is consistent with Lieb's exact result for two-dimensional ice models, $s_0^{\rm Lieb} = \frac{3}{4} k_B \ln \left( \frac{4}{3} \right) \approx 0.216\,k_B$.} \label{Entropy} \end{figure} Third, we obtain the specific heat capacity as a function of temperature, also using the Wang-Landau method. The results are shown in Fig.~\ref{C}. In keeping with our results for the entropy density in Fig.~\ref{Entropy}, we see that although there is a broad Schottky-like peak at temperatures of order $J/k_B$ there are no sharp features, supporting our expectation that this model would not exhibit a phase transition. \begin{figure} \centerline{\includegraphics[width=0.85\columnwidth]{Fig3.eps}} \caption{The dimensionless heat capacity per spin, $C/k_B$, as a function of scaled temperature, $k_B T/J$, in the absence of an applied magnetic field, calculated using the Wang-Landau method. Inset (left):\ the six degenerate zero-field ground states for a single plaquette. Inset (right):\ the same states in the string representation.} \label{C} \end{figure} \section{Kasteleyn transition} \label{sec:kasteleyn} Our real interest in this model, however, is in its unusual response to an externally applied longitudinal magnetic field. We call this field $h$, and in the following we shall take it to be positive. 
As shown in the first line of Table \ref{plaqconf}, the degeneracy between the six ice-rule configurations is lifted as soon as the field $h$ is applied. Indeed, for any non-zero $h$ (and remembering that we always work in the $h \ll J$ limit) the ground state of a plaquette is the unique `all up' configuration. It follows that, at $T=0$, the entire lattice simply has $\sigma_i = +1$ for all sites $i$. Now let us consider what happens to this fully magnetized state as the temperature is increased. One might expect the appearance of a dilute set of `down' spins. However, a feature of this model is that a single spin-flip takes the system out of the ice-rule manifold, and at $h,k_B T \ll J$ this will not occur. To understand what will happen instead, let us introduce a representation of the states in the ice-rule manifold in terms of strings. We begin with a single plaquette. If we take as our reference state the one in which all the spins are up, we may represent the six ice-rule configurations in terms of lines joining the spins that are down. This is shown in the right-hand inset of Fig.~\ref{C}. Representing the `all down' configuration as two vertical lines rather than two horizontal ones is in principle arbitrary, but it has the advantage of yielding a model in which these lines of down spins can neither cross each other nor form closed loops. To make an ice-rule-obeying configuration of the entire lattice, we must put these plaquettes together in such a way that any string that leaves one plaquette enters its neighbor. Thus there is a one-to-one mapping between ice-rule-obeying configurations of the spins $\sigma_i$ and configurations of these strings. Each string must extend all the way across the lattice. To proceed further, let us suppose that the lattice consists of $L_x$ sites in the horizontal direction and $L_y$ sites in the vertical direction, so that $N = L_x L_y$. 
Each string, irrespective of its configuration, contains precisely $L_y$ spins, so that a configuration with $N_s$ strings has $N_s L_y$ down spins and thus an energy of $2 h N_s L_y$ relative to the fully magnetized state (or `string vacuum'). Such a string is the {\it minimal\/} demagnetizing excitation of the system that is consistent with the ice rule. Since a single string has an energy cost proportional to the linear size of the system, it might appear that such strings cannot be thermally excited. This is not true, however, because a single string also has two choices about which way to go every time it enters a new plaquette, meaning that its entropy of $k_B L_y \ln 2$ is also proportional to $L_y$. Thus the free-energy cost of introducing a single string into the fully magnetized state is \be F = E - TS = \left( 2h - k_B T \ln 2 \right) L_y. \ee When the temperature reaches the critical value $T_c = 2h/(k_B \ln 2)$, this free-energy cost flips sign, and the system becomes unstable to the proliferation of strings. (This is somewhat similar to what happens in a Berezinskii-Kosterlitz-Thouless transition\cite{b,kt}, except that in our model we do not have `positive' and `negative' strings, so the physics of screening plays no r{\^o}le.) In fact the increase in the string density from zero for $T > T_c$ --- which corresponds directly to the decrease in the magnetization from its saturated value --- is continuous. This is because the above argument applies strictly only to a single string introduced into the fully magnetized state. Once a finite density of strings has been created the entropy associated with new ones is reduced, and thus the temperature at which it becomes free-energetically favorable to create them goes up. This kind of transition, in which the elementary thermal excitations are system-spanning strings, is called a {\it Kasteleyn transition\/}. It was first described by Kasteleyn in the context of dimer models\cite{kasteleyn}. 
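The single-string energetics are simple enough to encode directly. The sketch below (function names are ours; units with $J = k_B = 1$) confirms that the free energy of one string changes sign at $T_c = 2h/(k_B \ln 2)$:

```python
import math

def string_free_energy(h, T, Ly, kB=1.0):
    """F = E - T*S for a single string: energy cost 2*h per down spin,
    entropy kB*ln(2) per row, both proportional to the system height Ly."""
    return (2.0 * h - kB * T * math.log(2)) * Ly

def kasteleyn_Tc(h, kB=1.0):
    """Temperature at which the single-string free energy changes sign."""
    return 2.0 * h / (kB * math.log(2))

h = 0.017             # one of the simulated field values, in units of J
Tc = kasteleyn_Tc(h)
assert string_free_energy(h, 0.5 * Tc, Ly=64) > 0    # below Tc: strings suppressed
assert abs(string_free_energy(h, Tc, Ly=64)) < 1e-9  # at Tc: marginal
assert string_free_energy(h, 2.0 * Tc, Ly=64) < 0    # above Tc: strings proliferate
```

Because both the energy and the entropy of a string scale with $L_y$, a single string is either completely suppressed or free to enter; it is this all-or-nothing balance, rather than a dilute-defect picture, that keeps the low-temperature phase strictly saturated.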
The above predictions are again borne out by our Monte Carlo simulations, the results of which are shown in Figs.~\ref{MvsHT}--\ref{TkvsH}. Fig.~\ref{MvsHT} shows a three-dimensional plot of the equilibrium value of the magnetization, $M$, as a function of the temperature and the applied magnetic field. At all temperatures below $T_c(h)$ the magnetization takes its saturated value; above $T_c(h)$ it decreases smoothly with increasing temperature, tending to zero only as $T \to \infty$. This may be understood in the string representation of the problem. As more and more strings are introduced, the entropy density of each new one decreases; in the limit where half the lattice sites are populated by strings it tends to zero, meaning that this will occur only in the infinite-temperature limit. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{Fig4.eps}} \caption{The ratio of the magnetization to its saturated value, $M/M_{\rm sat}$, as a function of scaled temperature, $k_B T/J$, and scaled longitudinal field, $h/J$. The solid black line shows the theoretical prediction for the Kasteleyn transition temperature, $T_c = 2h/(k_B \ln 2)$.} \label{MvsHT} \end{figure} Fig.~\ref{chivsT} shows the magnetic susceptibility, determined at three different values of the applied field. In each case, one sees at $T=T_c(h)$ the asymmetric peak characteristic of a Kasteleyn transition. This highlights an intriguing consequence of the physics of the Kasteleyn strings:\ below $T_c(h)$ the linear susceptibility is strictly zero, while as $T_c(h)$ is approached from above the susceptibility diverges. For a two-dimensional Kasteleyn transition one expects to find $\beta= 1/2$ on the high-temperature side\cite{nagle1975,moessner2003}, that is, \be \mu \sim t^{1/2}, \ee where $\mu \equiv (M_{\rm sat}-M)/M_{\rm sat}$ is the reduced magnetization and $t \equiv (T - T_c)/T_c$ is the reduced temperature. 
This is indeed the case in our simulations:\ the inset of Fig.~\ref{chivsT} is a logarithmic plot of $\mu$ as a function of $t$, calculated for a system of 8192 spins and with an applied field of $h/J = 0.017$ (grey filled circles), compared with the expected $t^{1/2}$ behavior (solid red line). Similar behavior is found for all simulated fields $h/J$ below $0.1$. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{Fig5.eps}} \caption{The magnetic susceptibility, $\chi$, as a function of scaled temperature, $k_B T/J$, for a lattice of 8192 spins with three different values of the scaled magnetic field $h/J$: 0.017 (black symbols, leftmost peak), 0.034 (red symbols, middle peak), and 0.051 (blue symbols, rightmost peak). The inset shows the reduced magnetization, $\mu$, as a function of the reduced temperature, $t$, for an applied field $h/J=0.017$ (grey filled circles). The solid red line corresponds to $\mu \sim t^{1/2}$.} \label{chivsT} \end{figure} In Fig.~\ref{TkvsH} we collect our data into a phase diagram. The filled red circles show the temperature of the Kasteleyn transition, determined from the data in Fig.~\ref{MvsHT} as the temperature at which the magnetization departs from its saturated value. The thick black line is the prediction $T_c(h) = 2h/(k_B \ln 2)$ derived above. The departure of the red points from this line at larger fields and temperatures is due to the violation of the condition $h, k_B T \ll J$. In the pink region the thermal excitations are not full strings, but instead string fragments extending from one ice-rule-violating plaquette to another. The physics of such string fragments, and their signatures in neutron scattering, were discussed by Wan and Tchernyshyov\cite{wan}. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{Fig6.eps}} \caption{The phase diagram of the model as a function of scaled temperature, $k_B T/J$, and scaled magnetic field, $h/J$.
The red dots show the Kasteleyn temperature as determined from the magnetization curves, i.e.\ the temperature at which the magnetization first departs from its saturated value. The black line is the theoretical prediction $T_c(h)= 2h/(k_B \ln 2)$. As expected, the simulation results depart from the theoretical prediction at temperatures where the condition that the spin configuration remain strictly in the ice-rule manifold, $h, k_B T\ll J$, is no longer fulfilled (pink area).} \label{TkvsH} \end{figure} \section{Entropy as a function of magnetization} \label{sec:entropy} Finally, let us demonstrate the usefulness of the string representation by using it to calculate the entropy density of the system, $s$, at a fixed value of the magnetization density, $m \equiv M/M_{\rm sat}$. Clearly $s(m)$ is an even function of $m$, so we may restrict our calculation to the case $m \geqslant 0$. The magnetization density may equivalently be expressed as the density of strings, $\eta_s$, via the formula $\eta_s = (1-m)/2$. To determine the entropy density corresponding to a given value of $\eta_s$, consider propagating the string configuration downwards from the top of the lattice. We shall assume that this propagation has reached a certain row $j$, and concentrate on a single string in that row. As it enters a new plaquette in row $j+1$, this string has in principle two choices:\ to continue vertically downwards, or to cross the plaquette diagonally. However, if another string is entering the same plaquette, it has only one choice, since the strings cannot cross (see Fig.~\ref{C}). The probability that a second string enters the same plaquette in row $j+1$ as the first is simply $\eta_s$. Thus the average number of choices available to the first string upon entering the new plaquette is $\eta_s \times 1 + (1-\eta_s) \times 2 = 2 - \eta_s$. 
This means that each string has a total entropy $S_s \approx k_B L_y \ln \left( 2 - \eta_s \right)$; with a total number of strings $\eta_s L_x$, it follows that the total entropy is $S \approx k_B L_x L_y \eta_s \ln \left( 2 - \eta_s \right)$. Dividing by the number of spins $N=L_x L_y$, and using $\eta_s = (1-m)/2$, we obtain \be s_0(m) \approx {\tilde s}_0(m) \equiv k_B \left( \frac{1-m}{2} \right) \ln \left( \frac{3+m}{2} \right). \label{sofm} \ee In Fig.~\ref{S0vsM} we compare this approximation with numerical results for the entropy density obtained using the Wang-Landau method. The filled black circles are the numerical results, while the dashed red curve is our analytical approximation (\ref{sofm}). It is clear that these were never going to coincide, since the $m \to 0$ limit of ${\tilde s}_0(m)$ is the Pauling entropy density, $\frac{1}{2} k_B \ln \left( \frac{3}{2} \right)$, while the $m \to 0$ limit of the actual entropy density is the Lieb entropy density, $\frac{3}{4} k_B \ln \left( \frac{4}{3} \right)$. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{Fig7.eps}} \caption{The residual dimensionless entropy per site, $s_0/k_B$, as a function of the scaled magnetization per site, $m \equiv M/M_{\rm sat}$. The black filled circles show the values obtained numerically using the Wang-Landau method for a lattice of 512 spins. The dashed red line is the free-string result ${\tilde s}_0(m)$ (see text); note that it tends to Pauling's entropy at $m=0$ (filled red square). The solid blue line is the curve obtained by multiplying the number of microstates by a constant factor, chosen to rescale ${\tilde s}_0(0)$ to match Lieb's exact result (open blue circle).} \label{S0vsM} \end{figure} The origin of the difference between Lieb's exact result and Pauling's approximation lies in positive correlation of closed loops\cite{nagle66,lieb67}, which increases by a small factor the number of possible configurations obeying the ice rule. 
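The limits of the free-string formula (\ref{sofm}) are easy to check numerically; a minimal sketch (our own naming; $k_B = 1$):

```python
import math

def s0_tilde(m, kB=1.0):
    """Free-string estimate of the residual entropy density at magnetization m."""
    eta = (1.0 - m) / 2.0                  # string density
    return kB * eta * math.log(2.0 - eta)  # = kB*(1-m)/2 * ln((3+m)/2)

pauling = 0.5 * math.log(3 / 2)  # Pauling estimate at m = 0
lieb = 0.75 * math.log(4 / 3)    # Lieb's exact value at m = 0

assert abs(s0_tilde(0.0) - pauling) < 1e-12  # free strings reproduce Pauling
assert s0_tilde(1.0) == 0.0                  # saturated state: no strings, no entropy
assert s0_tilde(0.0) < lieb                  # the free-string count is an undercount
```

The shortfall at $m=0$ is exactly the small closed-loop correction factor just mentioned.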
If one makes the crude assumption that this factor is independent of $m$, this results in a constant additive change to the logarithm in (\ref{sofm}): \be {\tilde s}_0(m) \longrightarrow k_B \left( \frac{1-m}{2} \right) \left[ \ln \left( \frac{3+m}{2} \right) + \alpha \right]. \ee If we choose the constant $\alpha$ to match the known result at $m=0$, the resulting curve (shown in blue) gives a very reasonable fit to the numerical data points over the whole range $0 \leqslant m \leqslant 1$. \vspace{5mm} \section{Summary and future work} \label{sec:summary} In this paper, we have presented a new spin-ice model defined on a two-dimensional lattice of mixed ferro- and antiferromagnetic bonds. We have analysed its thermodynamic properties in zero applied magnetic field, and we have also characterized the Kasteleyn transition that it exhibits when a magnetic field is applied. Finally, we have shown that its entropy when the magnetization is non-zero is well captured by the string representation. One appealing feature of this model is that, unlike full three-dimensional spin ices, the Ising quantization axis is the same on each lattice site. This makes it natural to consider adding to the model a spatially uniform transverse magnetic field, $\Gamma$. The results of this should be particularly interesting in the $h,\Gamma,k_B T \ll J$ regime, where the applied field is expected to stabilize the string phase at low temperatures, leading to a line of quantum Kasteleyn transitions in the zero-temperature $(h,\Gamma)$ plane. This extension of the model (\ref{ham}) is the subject of a forthcoming work \cite{dhgb}. \vspace{1mm} \begin{acknowledgments} We are pleased to acknowledge useful discussions with Rodolfo Borzi, Daniel Darroch, and Peter Holdsworth. This research was supported in part by the National Science Foundation under Grant No.\ NSF PHY11-25915, and CAH is grateful to the Kavli Institute for Theoretical Physics in Santa Barbara for their hospitality. 
CAH also gratefully acknowledges financial support from the EPSRC (UK) via grant number EP/I031014/1. SAG would like to acknowledge financial support from CONICET and ANPCYT (Argentina) via grant PICT-2013-2004. \end{acknowledgments}